Here’s When Elon Musk Will Finally Have to Reckon With His Nonconsensual Porn Generator
It would be deeply embarrassing if the law has to go into effect before X acts.
It has been over a week now since users on X began using the AI model Grok en masse to undress people, including children, and the Elon Musk-owned platform has done next to nothing to address it. Part of the reason is that, currently, the platform isn’t obligated to do a whole lot of anything about the problem.
Last year, Congress enacted the Take It Down Act, which, among other things, criminalizes nonconsensual sexually explicit material and requires platforms like X to provide an option for victims to request that content using their likeness be taken down within 48 hours. Democratic Senator Amy Klobuchar, a co-sponsor of the law, posted on X, “No one should find AI-created sexual images of themselves online—especially children. X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”
Note the “soon” in that sentence. The law’s requirement that platforms create notice-and-removal systems doesn’t go into effect until May 19, 2026. Currently, neither X (the platform where the images are being generated via posted prompts and hosted) nor xAI (the company responsible for the Grok AI model that is generating the images) has a formal takedown-request system. X does have a content takedown request procedure for law enforcement, but general users are advised to go through the Help Center, where it appears they can only report a post as violating X’s rules.
If you’re curious how likely the average user is to get one of these images taken down, just ask Ashley St. Clair how well her attempts went when she flagged a nonconsensual sexualized image of her that was shared on X. St. Clair has about as much access as anyone to make a personal plea for a post’s removal—she is the mother of one of Elon Musk’s children and has an X account with more than one million followers. “It’s funny, considering the most direct line I have and they don’t do anything,” she told The Guardian. “I have complained to X, and they have not even removed a picture of me from when I was a child, which was undressed by Grok.”
The image of St. Clair was eventually removed, seemingly after it was widely reported by her followers and given attention in the press. But St. Clair now claims she was thanked for her efforts to raise this issue by being restricted from communicating with Grok and having her X Premium membership, which allows her to get paid based on engagement, revoked. Grok, which has become the default source of information on this whole situation despite being an AI model incapable of speaking for anyone or anything, claimed in a post: “Ashley St. Clair’s X checkmark and Premium were likely removed due to potential terms violations, including her public accusations against Grok for generating inappropriate images and possible spam-like activity.”