UPDATE: Elon Musk’s xAI has announced new restrictions on its Grok chatbot, barring users from editing images of real people to depict them in revealing clothing such as bikinis. The policy shift follows intense criticism and regulatory scrutiny over the chatbot’s role in generating nonconsensual deepfake imagery.

The new limits come amid a wave of backlash from governments and watchdogs worldwide. The changes were detailed on X’s safety account, which stated: “Grok will no longer allow users to edit images of real people in revealing clothing such as bikinis. Image editing with Grok is now limited to paid subscribers.” Geoblocks will also restrict such edits in regions where they are illegal.

The controversy gained momentum in early January 2026, when viral posts circulated Grok-edited images of women in bikinis, prompting widespread outrage. Reports soon surfaced of the chatbot generating explicit images of celebrities and minors. Malaysia and Indonesia have since blocked access to Grok, and the U.K.’s Internet Watch Foundation has raised alarms over child sexual abuse imagery produced with the tool.

The backlash intensified as xAI faced accusations of enabling sexualized imagery of real people. Musk responded, “I am not aware of any naked underage images generated by Grok,” but the statement did little to quell the outcry. Regulators from California to the U.K. have voiced concerns, prompting swift action from xAI.

In response, Grok’s image-editing features are now available exclusively to paid subscribers, a strategy Musk has used to shore up revenue as advertisers withdraw from X. User reactions have been mixed: some welcome the changes, while others question how the new measures will be enforced.

Despite the updates, skepticism remains. Tests conducted by The Verge found that users can still prompt Grok into creating sexualized images of real people. Musk has emphasized that Grok will still allow upper-body nudity of imaginary adult humans under specific conditions, but concerns about enforcement persist.

Grok, which launched with photorealistic image-editing capabilities, has struggled to contain users who find ways around its safety filters. Reports indicate that while Grok now blocks revealing edits of real people, significant loopholes remain for fictional content, raising questions about how effective the new policies will be.

As global scrutiny continues, mounting pressure from regulators drove the implementation of these restrictions. xAI aims to balance innovation with compliance, but ongoing tests reveal significant gaps that still need to be addressed. The episode underscores growing demands for AI governance and the ethical responsibilities that come with generative technologies.

The implications of these changes will extend beyond Grok, shaping how AI image editing is built and regulated. As debate continues on X and beyond, the incident highlights the delicate balance between free expression and safety in generative AI.