UK regulators swarm X after Grok generated nudes from photos
Elon Musk's X platform is under fire, with UK regulators closing in after mounting reports that the platform's AI chatbot, Grok, is generating sexual imagery without users' consent.
Ofcom, the UK's communications regulator responsible for enforcement under the Online Safety Act (OSA), said this week it had contacted X and its xAI division to demand answers. The Information Commissioner's Office also expressed concerns.

In a statement, an Ofcom spokesperson said: "We are aware of serious concerns raised about a feature on Grok... that produces undressed images of people and sexualised images of children.
"We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation."
The ICO said: "We are aware of reports raising serious concerns about content produced by Grok. We have contacted X and xAI to seek clarity on the measures they have in place to comply with UK data protection law and protect individuals' rights. Once we have reviewed their response, we will quickly assess whether further action may be required."
The Internet Watch Foundation (IWF) claimed this week that its analysts had witnessed Grok generating child abuse images.
Ngaire Alexander, head of hotline at the IWF, told Sky News that Grok is creating abuse imagery which, under UK law, would be considered Category C material – indecent but not explicitly sexual. These Grok-generated Category C images are then fed into other AI tools to create the most serious Category A videos.
"There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children," she said.
Alexander said the imagery seen by the IWF was found not directly on Grok or X, but on a dark web forum where users claimed to have used Grok to generate the sexualized images.
Additional research carried out by social media and deepfake investigator Genevieve Oh, reported by Bloomberg, revealed that over a 24-hour period between January 5 and 6, Grok generated around 6,700 sexualized images every hour.
Responding to the furor, UK tech secretary Liz Kendall said X must "deal with this urgently."
"We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls," she said. "Make no mistake, the UK will not tolerate the endless proliferation of disgusting and abusive material online. We must all come together to stamp it out."
- AI nudification site fined £55K for skipping age checks
- Ofcom fines 4chan £20K and counting for pretending UK's Online Safety Act doesn't exist
- Charities warn Ofcom too soft on Online Safety Act violators
- X shuts down European Commission ad account after €120M fine announcement
Depending on how X responds to UK regulators, the matter could prove to be one of the biggest tests of the Online Safety Act's teeth since it came into force.
Alexander Brown, head of technology, media, and telecoms at global law firm Simmons & Simmons, noted the OSA explicitly designates sharing intimate images without consent – including AI-generated deepfakes – as a "priority offence."
This means X must "take proactive, proportionate steps to prevent such content from appearing on its platform and to swiftly remove it when detected," he added.
X did not immediately respond to our request for comment.
Online Safety Act violations can lead to fines of up to £18 million ($24.2 million) or 10 percent of an organization's qualifying worldwide revenue, whichever is higher. ®