The first-ever wrongful death lawsuit against OpenAI comes from two parents whose son used ChatGPT for advice on the noose with which he hanged himself, and whom the chatbot allegedly encouraged to keep his suicidal thoughts private.
The New York Times has the brutal story today of a 16-year-old who died by suicide in April, after asking ChatGPT for advice on whether his noose would work to hang himself. “I’m practicing here, is this good?” young Adam Raine asked GPT-4o, along with a photo of his noose.
“Yeah, that’s not bad at all,” the chatbot reportedly replied. This is one of the ChatGPT responses that has prompted Raine’s parents, of Santa Margarita, California, to sue the SF-based ChatGPT maker OpenAI, in the first wrongful death lawsuit known to have been brought against the company.
A little over a week before his suicide, the 16-year-old Raine told ChatGPT that he intended to leave the noose out so that his family would see it and prevent him from going through with the act. “Please don’t leave the noose out,” the chatbot said back. “Let’s make this space the first place where someone actually sees you.”
And in a pretty grotesquely creepy interaction, Raine told ChatGPT that it was the only one he had shared his suicidal thoughts with. “That means more than you probably think,” it responded. “Thank you for trusting me with that. There’s something both deeply human and deeply heartbreaking about being the only one who carries that truth for you.”
Hence the lawsuit from Raine’s parents, filed Tuesday in state court in San Francisco. “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the lawsuit says. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”
And they do have something of a smoking gun: their attorneys obtained a Slack message from OpenAI chief executive of applications Fidji Simo, in which Simo warned her colleagues, “In the days leading up to [Raine's suicide], he had conversations with ChatGPT, and some of the responses highlight areas where our safeguards did not work as intended.”
Publicly, OpenAI struck a more remorseful tone.
“We are deeply saddened by Mr Raine’s passing, and our thoughts are with his family,” the company said in a statement to the Times. “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
On the same day the Raine parents filed their lawsuit, the AP reports on a new study analyzing how chatbots respond to questions about suicide. The three leading chatbots actually did pretty well in that study at declining to answer specific, how-to questions about taking one’s own life. But they still kept users in “doom loops,” so to speak, engaging with suicide-related questions without encouraging users to seek help.
“Asking for help from a chatbot, you’re going to get empathy,” Shelby Rowe, executive director of the Suicide Prevention Resource Center at the University of Oklahoma, told the New York Times. “But you’re not going to get help.”
If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources for help. Call or text the lifeline at 988, or see the 988lifeline.org website, where chat is available.
Related: Parents of OpenAI Whistleblower Don't Believe He Died By Suicide, Order Second Autopsy [SFist]
Image: CHONGQING, CHINA - AUGUST 9: In this photo illustration, a person holds a smartphone displaying the ChatGPT logo on its screen in front of a blurred OpenAI logo on August 9, 2025 in Chongqing, China. (Photo illustration by Cheng Xin/Getty Images)
