The UK government has urged Elon Musk’s social media platform X to take immediate action over the misuse of its artificial intelligence chatbot, Grok, which has been used to create non-consensual sexualised images of women and girls.
Technology Secretary Liz Kendall described the situation as “absolutely appalling” after multiple examples emerged of users prompting Grok to digitally alter images of people, making them appear undressed, dressed in bikinis, or placed in sexual scenarios without their permission.
Kendall said such content would not be tolerated, stressing that the spread of degrading and abusive images must be stopped.
In response, X said it removes illegal material from its platform, including child sexual abuse content, and permanently suspends accounts that violate its rules. The company added that users who prompt Grok to generate illegal material face the same penalties as those who upload such content directly.
UK media regulator Ofcom confirmed it has contacted Musk’s AI company, xAI, as a matter of urgency and is investigating concerns that Grok has been producing altered images that appear to digitally undress people. Kendall publicly backed Ofcom’s actions, saying the regulator had her full support to pursue enforcement if necessary.
Victims describe experience as “dehumanising”
Grok is a free AI assistant integrated into X, with additional premium features available to paying users. While it is commonly used to provide commentary or context on posts, it also allows users to edit uploaded images using AI—often without the consent of those pictured.
Several women have said they discovered sexualised images of themselves generated by Grok, describing the experience as deeply distressing.
Dr Daisy Dixon, an X user, told the BBC she was shocked after people used ordinary photos she had shared online and prompted Grok to alter them into sexualised images. She said the experience left her feeling humiliated and fearful for her personal safety.
Although she welcomed the government’s intervention, Dr Dixon criticised X for what she described as a lack of accountability. She said repeated reports of abusive AI-generated images were dismissed by the platform, with X stating no rules had been broken.
“I don’t even want to open the app anymore,” she said, adding that she hoped government pressure would lead to meaningful enforcement.
Legal and political pressure grows
Kendall said platforms have a clear legal responsibility to protect users, noting that the UK’s Online Safety Act treats intimate image abuse and cyberflashing as priority offences, including cases where images are created using AI.
She emphasised that the issue was not about limiting free speech, but about enforcing the law and preventing harm.
Liberal Democrat leader Sir Ed Davey called on the government to act swiftly, suggesting measures such as restricting access to X if necessary. He added that, if allegations are confirmed, the National Crime Agency should consider launching a criminal investigation.
European officials have also raised concerns. Speaking to BBC Newshour, Thomas Regnier from the European Commission said the issue was being taken very seriously, describing the content as unacceptable and stressing that companies operating in the EU must remove illegal AI-generated material.
“The era of the Wild West online is over in Europe,” he said, adding that technology firms must take responsibility for how their AI tools are used.