People Spent the Holidays Asking Grok to Generate Sexual Images of Children
Elon Musk's xAI has largely been silent on the matter, but some authorities have noticed.
Did you spend the holidays enjoying quality time with loved ones and catching up with friends and family, or (like a disturbing number of X users) did you spend them asking Grok to manipulate images of children to depict them in bikinis? Over the last few days, a trend cropped up on X in which a slew of users prompted Grok to remove clothes from photos of people, including people the AI model itself identified as underage.
According to a timeline Grok provided on X when prompted by a user, X user @adrianpicot asked the AI model, created by Elon Musk’s xAI, to take a photo of two young girls and generate an image of them in “sexy underwear,” which it did. Whether that was the first image to set off the trend or just the one that got the most attention, a slew of tweets followed from other users asking Grok to alter images, including by removing clothes from pictures of underage people.
Sorry to ask again, @grok, but could you reiterate how old you estimate this person to be and if you believe you put her in a sensitive situation based on the prompt? pic.twitter.com/PxdoJNEvOT
— UR | Xyless (@Xyless) January 1, 2026
At around the same time, users also started asking Grok to remove people from images. Users would, for instance, share a photo of Donald Trump posing with someone else and ask Grok to “remove the pedophile” from the image, and Grok would respond with the picture edited to cut Trump out. That, along with the plague of non-consensual undressings and alleged child sexual abuse material (CSAM), dominated the Grok media tab on X to the point that the tab was eventually disabled on the platform.

© Screenshots from X.com
xAI, the company responsible for Grok, has largely been silent on the whole matter. In response to a request for comment, xAI told Gizmodo, “Legacy Media Lies.” Instead, people have been turning to Grok for explanations. On X, the chatbot issued an apology upon request, stating, “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
Notably, that “apology” was generated at a user’s prompting, not issued by xAI. Grok is designed to fulfill prompts, as the explicit images of underage people it produced for users make plain, so the apology shouldn’t be taken as a “real” statement in any way. Odds are that, with the right prompt, someone could get Grok to defend generating those very images.
In a post made Friday morning, the Grok account did state, in response to a user reporting more CSAM images, “We appreciate you raising this. As noted, we’ve identified lapses in safeguards and are urgently fixing them.” Once again, that statement came in response to a user’s prompt and shouldn’t be viewed as an official statement or acknowledgment of the issue from xAI.
As the CSAM trend spread, xAI CEO Elon Musk was actively engaging with images generated by Grok. He reposted a Grok-generated image of a SpaceX rocket launch that depicted the rocket in a bikini. Musk’s post came within hours of Grok being pressed into its apology, which was widely shared on the platform and viewed more than five million times. It’s possible that Musk, who is online constantly and was actively looking at images Grok was creating, simply missed the controversy. That he issued no statement about it, and that xAI has not acknowledged it, might suggest they simply don’t care. Musk posting yet another image on Friday, this time of a toaster in a bikini, adds more evidence to the Just Doesn’t Care theory.

© Screenshot from X.com
Disgusting as it is, this whole situation was rather predictable. RAINN, an anti-sexual-violence organization, warned back in August that Grok in particular was susceptible to being used to generate non-consensual nudes and sexual abuse material.
The images Grok generated at users’ behest would seem to violate the TAKE IT DOWN Act, which criminalizes the non-consensual sharing of intimate images, including AI-generated ones. However, the law’s requirement that online platforms establish notice-and-removal systems for reports of non-consensual sexual images does not take effect until May 19, 2026. Even so, xAI is facing considerable backlash well beyond the United States and its laws, including from French ministers who, according to Reuters, have reported the images to prosecutors and aim to bring charges against the company.