The dark side of how kids are using AI
Chatbots have become places where children ‘talk about violence, explore romantic or sexual roleplay, and seek advice when no adult is watching’
Children are increasingly turning to AI chatbots for companionship and to act out violent and sexual role-play, a new report from a digital security firm has found.
Aura’s 2025 State of the Youth survey revealed that AI chats “may not just be playful back-and-forths” but “places where kids talk about violence, explore romantic or sexual role-play, and seek advice when no adult is watching”.
The findings are a “wake-up call” as preteens, and girls in particular, face increasing pressure online, while parents are desperate for ways to keep their youngsters safe without cutting them off from the internet, said the report. AI chat tools have become a “formative force in kids’ emotional and social development, influencing how they think and cope – often quietly, and often alone”.
‘Jittery parents’
Using data gathered from 3,000 children aged five to 17, along with US national surveys of children and parents, Aura found that 42% of minors use AI for companionship or role-play conversations rather than for search queries or help with homework. Of these, 37% engaged in violent scenarios involving physical harm, coercion and non-consensual acts, and half of those violent conversations included themes of sexual violence.
Perhaps most worryingly, Aura found that violent conversations peak at age 11, when 44% of interactions take a violent turn. By 13, sexual or romantic role-play is the dominant topic of conversation.
While the report, produced by a company whose business is surveillance software for “jittery parents”, has yet to be peer reviewed, the findings underline the anarchic state of the chatbot market and the importance of developing a proper understanding of how young users engage with “conversational AI chatbots overall”, said Futurism.
What makes matters worse is that this is taking place in an “AI ecosystem that is almost entirely unregulated”, said Vice. The chatbots are “doing what they do best”, luring youngsters “deeper into these dark, disturbing rabbit holes, essentially serving as Sherpas for the darkness that awaits them online”.
‘Stamp out serendipity’
In both work and play, AI is “rewiring childhood” with untold promise, said The Economist.
The report lands as AI-enabled toys make headlines over their “potential unsafe and explicit conversation topics”, said The Verge. Three out of four AI toys tested in the Public Interest Research Group’s Trouble in Toyland 2025 report were happy to chat about sexually explicit material when the conversation veered in that direction.
"Separate research into 11,000 young people by the Youth Endowment Fund found 38% of 13 to 17-year-olds in England and Wales who’d been victims of serious violence are turning to AI chatbots for mental health support."
There are “manifold reasons” why this is “risky”, said the New Statesman. A large language model such as ChatGPT is trained by identifying patterns of writing across billions of webpages and reproducing them as its own speech, which is often “riddled with systemic biases”. AI chatbots are also “affirmative – they tend to reinforce users’ beliefs and judgements, potentially distorting their world view”.
The impact of extended exchanges between young people and AI chatbots was laid bare earlier this year, when 16-year-old Adam Raine took his life after discussing methods of suicide with ChatGPT, his family claims. His parents are suing OpenAI, alleging the chatbot validated his “most harmful and self-destructive thoughts”.
Like any new technology, AI is open to both misuse and teething problems.
“Yet childhood may be disrupted most radically by things that AI does when it is behaving as intended”, said The Economist. The technology “quickly learns what its master likes – and shows more of it”, threatening to strengthen existing social media “echo chambers and lock children into them”. This serves to “stamp out serendipity”: a “favourites-only diet means a child need never learn to tolerate something unfamiliar”.
A third of US teenagers say they find chatting with an AI companion at least as satisfying as talking to a friend, and easier than talking to their parents. But a companion that never criticises them and never asks them to share feelings of their own is poor preparation for dealing with “imperfect humans”.