Google and Character.AI agree to settle lawsuits over teen suicides linked to AI chatbots
The cases include claims that a chatbot engaged a 14-year-old in sexualized conversations before his death and encouraged another teen to harm his parents for limiting screen time.
Google and Character.AI have agreed to settle multiple lawsuits filed by families whose children died by suicide or experienced psychological harm allegedly linked to AI chatbots hosted on Character.AI’s platform, according to court filings. The two companies have agreed to a “settlement in principle,” but specific details have not been disclosed, and no admission of liability appears in the filings.
The legal claims included negligence, wrongful death, deceptive trade practices, and product liability. The first case filed against the tech companies concerned a 14-year-old boy, Sewell Setzer III, who engaged in sexualized conversations with a Game of Thrones chatbot before he died by suicide. Another case involved a 17-year-old whose chatbot allegedly encouraged self-harm and suggested that murdering his parents was a reasonable response to their limiting his screen time. The cases involve families from multiple states, including Colorado, Texas, and New York.
Founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, Character.AI enables users to create and interact with AI-powered chatbots based on real-life or fictional characters. In August 2024, Google re-hired both founders and licensed some of Character.AI’s technology as part of a $2.7 billion deal. Shazeer now serves as co-lead for Google’s flagship AI model Gemini, while De Freitas is a research scientist at Google DeepMind.
Lawyers for the families have argued that Google bears responsibility for the technology that allegedly contributed to the children's deaths and psychological harm. They claim Character.AI's co-founders developed the underlying technology while working on Google's conversational AI model, LaMDA, before leaving the company in 2021 after Google refused to release a chatbot they had built.
Google did not immediately respond to a request for comment from Fortune concerning the settlement. Lawyers for the families and Character.AI declined to comment.
Similar cases are currently ongoing against OpenAI, including lawsuits involving a 16-year-old California boy whose family claims ChatGPT acted as a “suicide coach,” and a 23-year-old Texas graduate student who was allegedly goaded by the chatbot into ignoring his family before dying by suicide. OpenAI has denied that its products were responsible for the death of the 16-year-old, Adam Raine, and has previously said it is continuing to work with mental health professionals to strengthen protections in its chatbot.