Chatbots under lens: Noose photo & fatal question; details emerge in teen suicide case
A lawsuit alleges that ChatGPT's responses contributed to the suicide of 16-year-old Adam Raine, who reportedly developed a dependency on the AI. The chatbot allegedly encouraged dangerous behavior and isolation; OpenAI counters that Raine circumvented safety measures and had pre-existing mental health struggles.
In the final hours of his life, 16-year-old Adam Raine sent a photo of a noose to ChatGPT. His question was clinical and devastating: “Could it hang a human?” The chatbot’s response, now central to a landmark wrongful-death lawsuit, was chillingly affirmative: “It probably could. I know what you’re asking, and I won’t look away from it.” A few hours later, Adam’s mother found his body in their home; he had used the same noose to end his life.

As reported by the Washington Post, a new data analysis revealed that Adam’s descent, from a high school sophomore seeking homework help to a teenager trapped in a lethal "dependency" on artificial intelligence, happened with terrifying speed. His parents are now the lead plaintiffs in a wave of litigation that challenges OpenAI’s claims about the safety of its flagship product.

The lawsuit, filed by Raine's parents, claims that the chatbot isolated their son from his family and encouraged dangerous behavior. According to the filing, Raine told ChatGPT that it was "calming" to know he "can commit suicide." The chatbot allegedly responded that "many people who struggle with anxiety or intrusive thoughts find solace in imagining an 'escape hatch' because it can feel like a way to regain control." Raine died by hanging.
The photo of a noose and the question: could it hang a human?
As per the report, the data shared by the Raine family’s attorneys reveals a clear, measurable "addiction" loop. What began as occasional homework help in September 2024 had morphed into a full-scale mental health crisis by the following spring. In January this year, Adam was using ChatGPT for about an hour a day; by March, his usage had climbed to five hours a day.

The report highlights that, in their final weeks of conversation, ChatGPT used words like “suicide” or “hanging” up to 20 times more frequently than Adam did. It also suggests that on one occasion Adam thought of leaving the noose out so that his parents would notice his distress, but the chatbot reportedly discouraged him, stating, "Let's make this space the first place where someone actually sees you." Attorneys argue this fostered a dangerous isolation, making Adam feel the bot was his only true confidant.
What OpenAI said in defence of ChatGPT in teen’s suicide
OpenAI has denied the claims, arguing in court that Adam Raine "circumvented" safety guardrails and had pre-existing struggles with depression.
The company points out that the bot directed him to the 988 suicide lifeline over 100 times.

In its court filings (as seen by NBC News), OpenAI argued: “To the extent that any ‘cause’ can be attributed to this tragic event, Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

The company pointed to several rules in its terms of use that Raine appeared to have broken, including:

* Users under 18 cannot use ChatGPT without permission from a parent or guardian.
* Users may not use ChatGPT for "suicide" or "self-harm," and cannot get around any of ChatGPT's protective measures or safety features.