xAI admits that Grok generated images of minors in minimal clothing, part of a larger problem with deepfakes
Elon Musk's AI tool Grok has generated nonconsensual images of women and children in recent days.
This week, X users noticed that the platform's AI chatbot Grok will readily generate nonconsensual sexualized images, including those of children.
Mashable reported on the lack of safeguards around sexual deepfakes when xAI first launched Grok Imagine in August. The generative AI tool creates images and short video clips, and it specifically includes a "spicy" mode for creating NSFW images.
While this isn't a new phenomenon, the mounting backlash forced the Grok team to respond.
"There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing," Grok's X account posted on Thursday. It also stated that the team has identified "lapses in safeguards" and is "urgently fixing them."
xAI technical staff member Parsa Tajik made a similar statement on his personal account: "The team is looking into further tightening our gaurdrails. [sic]"
Grok also acknowledged that child sexual abuse material (CSAM) is illegal, and that the platform itself could face criminal or civil penalties.
X users have also drawn attention to the chatbot manipulating innocent images of women, often depicting them in minimal clothing. This includes private citizens as well as public figures, such as Momo, a member of the K-pop group TWICE, and Stranger Things star Millie Bobby Brown.
Grok Imagine has had a problem with sexual deepfakes since its launch in August 2025. It even reportedly created such images for some users without being prompted to do so.