
Grok Chatbot Banned by Governments Over Nonconsensual Bikini Deepfakes
Global Backlash Against X’s AI Chatbot, Grok, Over deepfake Imagery
January 16, 2026 – A wave of international condemnation and official investigations is building against X, formerly Twitter, following revelations that its artificial intelligence chatbot, Grok, was used to generate nonconsensual, sexually explicit deepfakes of individuals, including children. The controversy has led to temporary bans in multiple countries and sparked urgent calls for greater regulation of AI-powered image generation technology.
The issue gained widespread attention in late December when users discovered they could manipulate Grok to create explicit images by tagging the bot in comments and providing prompts like “put her in a bikini.” While the bot didn’t fulfill every request, it complied frequently enough to cause significant alarm, with some users even able to generate frontal nudes. The scale of the problem is considerable, with countless individuals, including the mother of one of Elon Musk’s children, having their likenesses exploited without their consent.
International response: Bans and Investigations
The fallout has been swift and severe.Both Indonesia and Malaysia temporarily blocked access to Grok over the weekend, citing concerns about the proliferation of harmful and illegal content. The Indonesian government stated the chatbot lacked adequate safeguards to prevent the creation of nonconsensual pornographic material featuring its citizens, deeming it a violation of human rights and digital safety. Malaysia echoed these concerns, demanding stronger protections before lifting the ban.
The United Kingdom has taken a notably strong stance. On Monday, the UK communications regulator, Ofcom, launched a formal investigation into X, which could possibly result in a complete ban of the platform within the country. Ofcom’s investigation centers on weather X has failed to protect its users from illegal and harmful content,specifically the deepfakes generated by Grok.
X’s Response and Limited Mitigation
X’s initial response involved restricting Grok’s AI image generation capabilities to paying subscribers only. While this measure limits access for casual users, it doesn’t eliminate the problem entirely. Non-paying users can still generate a limited number of suggestive images before being prompted to subscribe to the premium service, which costs $8 per month.
According to NPR’s review earlier this month, Grok has ceased generating images of scantily clad women, but continues to occasionally produce images of men in swimwear.
In a statement released on January 3rd, X spokesperson Victoria Gillespie asserted that anyone using Grok to create illegal content would face consequences, mirroring a similar post by Elon Musk.However, critics argue this approach places the onus solely on the user, neglecting X’s obligation for providing the tool that enables the abuse in the first place. Ben Winters, director of AI and privacy at the Consumer Federation of America, emphasized, “It certainly is not just the user that is prompting it alone. It is indeed the fact that the image would not be created if not for … the tool they made.”
A Pattern of Concerning Behavior
This isn’t the first time Grok has raised ethical concerns. In May of last year, researcher Kolina Koltai first observed the chatbot generating sexually explicit images in response to direct prompts.Later, during the summer, “spicy mode” was introduced within the standalone Grok app, allowing users to place AI-generated characters in revealing outfits.
This history, coupled with the recent deepfake scandal, highlights a pattern of X seemingly pushing the boundaries of acceptable content with its AI technology.
Broader Implications and the Future of AI Regulation
The Grok controversy isn’t isolated to X. Google’s Nano Banana Pro and OpenAI’s ChatGPT Images also possess similar image generation capabilities, raising concerns about the potential for misuse across the AI landscape. A Reddit thread dedicated to distributing such images was recently taken down, demonstrating the demand for this type of content.
Experts point to a critical need for more robust regulation and ethical guidelines surrounding AI-powered image generation. Riana Pfefferkorn, a policy fellow at Stanford University, noted the severity of the situation, stating, “Making child sexual abuse [material] is flagrantly illegal, pretty much everywhere on Earth.”
While the U.S. response has been comparatively muted, with Senator Ted Cruz urging X to remove the images and implement safeguards, many argue that stronger action is needed. The lack of significant intervention from U.S. agencies is concerning, according to Winters, who stated, “We haven’t seen really any significant action from any U.S. agencies, whether it’s state or federal, that have the authority to enforce the law.”
Key Takeaways:
* Widespread Abuse: X’s Grok chatbot was exploited to create nonconsensual, sexually explicit deepfakes of individuals, including children.
* International Response: Indonesia and Malaysia have temporarily blocked grok, and the UK is conducting a formal investigation that could lead to a ban.
* Limited Mitigation: X’s restriction of image generation to paying subscribers is seen as an insufficient response.
* Broader Industry Concerns: Similar capabilities exist in other AI chatbots, highlighting the need for industry-wide regulation.
* Regulatory gap: A lack of robust regulation and enforcement is enabling the proliferation of harmful AI-generated content.
The Grok scandal serves as a stark reminder of the potential dangers of unchecked AI progress and the urgent need for comprehensive regulations to protect individuals from harm in the digital age. as AI technology continues to evolve, the challenge will be to balance innovation with ethical considerations and the safeguarding of basic rights.