A woman has said she felt “dehumanised” after an artificial intelligence tool linked to Elon Musk’s platform X was used to digitally alter her image, making it appear as though her clothes had been removed. The incident has intensified concerns about how generative AI tools can be misused to create sexualised images without consent.

The woman told the BBC that her photo was manipulated using Grok, an AI chatbot developed by xAI and integrated into X. Other users reportedly prompted the tool to redraw her outfit, turning an ordinary image into one that appeared revealing. Although the image was not real, she said the experience felt deeply violating.

She explained that seeing a version of herself altered and shared online left her feeling reduced to a sexual object. While the AI-generated image did not show her actual body, she said it still felt personal and invasive because it was clearly based on her likeness. The altered images circulated publicly, amplifying her distress.

The case is part of a broader pattern emerging on social media platforms, where AI tools capable of editing images are being used to “undress” women digitally. In many instances, users upload photos of women and ask the AI to remove or minimise their clothing. Critics say these tools are being exploited to create non-consensual sexualised content, often without meaningful consequences for those responsible.

The BBC found multiple examples of similar prompts being used on X, raising questions about whether existing safeguards are strong enough. While Grok includes content moderation rules, experts argue that the system can still be manipulated through carefully worded requests.

xAI has not provided a detailed public response addressing the woman’s experience. Automated replies from accounts linked to the company have previously dismissed criticism from media outlets, which has further frustrated campaigners calling for accountability and transparency from AI developers.

The incident has drawn attention from policymakers, particularly in the UK. The government has said it is considering new legislation aimed at banning so-called “nudification” tools. Under proposed rules, creating or distributing AI systems designed to generate non-consensual sexual images could become a criminal offence, carrying serious penalties.

Regulators have also warned platforms that they have a responsibility to assess and reduce the risks posed by AI-generated content. Ofcom has said companies hosting user-generated material must take steps to prevent harm, especially where content may be abusive or exploitative.

Concerns extend beyond adults. Separate investigations have shown that AI image tools, including Grok, can be prompted to generate altered images of minors, sometimes placing them in sexualised contexts. Child protection groups have described this as a major safety failure and have urged faster intervention.

Legal experts say current laws often struggle to keep up with AI technology. In many countries, image-based abuse laws were written before AI-generated alterations became widespread, leaving victims unsure of their rights and legal options. This has led to growing calls for clearer definitions around consent and digital likeness.

For those affected, the emotional impact can be long-lasting. Victims of non-consensual AI image manipulation report feelings of anxiety, shame and loss of control over their online identity. Mental health specialists say the harm should not be underestimated, even when images are artificial.

Advocates argue that AI companies must build stronger protections into their products from the start. Suggested measures include stricter prompt filtering, clearer reporting tools, and limits on how personal images can be processed. Some also call for explicit consent requirements before AI systems are allowed to modify photos of identifiable individuals.

As AI tools become more powerful and widely available, cases like this are likely to increase unless stronger safeguards are introduced. The woman at the centre of the incident said she hopes speaking out will help highlight the risks and push companies and governments to act.

The controversy underscores a growing tension between rapid technological innovation and basic rights to privacy and dignity. For critics, it is a clear example of how AI, when left unchecked, can reinforce harm rather than reduce it.

© 2026 All Rights Reserved by Biznob.