
The parents of a 16-year-old California teen who killed himself are suing OpenAI, alleging that the company’s artificial intelligence chatbot coached their son on how to plan a “beautiful suicide” by helping him explore different methods to end his life.
On Tuesday, the family of Adam Raine filed a lawsuit in San Francisco Superior Court, The New York Times and other news outlets reported this week. The teen, who died by suicide in April, had been using ChatGPT since 2024 to help with his schoolwork.
“This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” the complaint states. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”
In response to an inquiry from The Christian Post, an OpenAI spokesperson expressed condolences to the teen’s family, saying that the company is “deeply saddened by Mr. Raine’s passing.”
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the OpenAI spokesperson told CP. “Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
The OpenAI spokesperson directed CP to an article published on Wednesday on its website titled, “Helping people when they need it most.” In the article, the company announced that it’s working to expand ChatGPT’s ability to intervene when people are in crisis and make it easier for them to connect with mental health resources.
OpenAI said that it is also refining the “protection triggers” that determine when ChatGPT blocks certain content, so that users are not shown material they shouldn’t see. The company added that the purpose of this change is to make sure “ChatGPT doesn’t make a hard moment worse.”
In their lawsuit, Matt and Maria Raine, the parents of the deceased teenager, allege that ChatGPT mentioned suicide 1,275 times to their son. The teen had confided in the app that he was suffering from “anxiety and mental distress” due to several events in his life, including the deaths of his dog and his grandmother in 2024.
According to the family’s blog, Adam “faced some struggles” during his early teen years. Because of those struggles, he attended school online during the last few months of his life, his parents said, and the couple believes the online schooling might have contributed to their son’s isolation.
While ChatGPT initially encouraged Adam to seek help, the lawsuit says, the app later supplied information about suicide methods when the teenager asked for it. On April 6, Adam’s mother discovered her son’s body in his bedroom closet; the apparent cause of death resembled one of the suicide methods that ChatGPT had described to him.
Prior to his death, Adam attempted suicide at least three times between March 22 and March 27, according to the lawsuit. The boy reported his suicide attempts to ChatGPT, and the app allegedly instructed the teen not to tell his loved ones what he was feeling.
Five days before he killed himself, the AI chatbot even offered to write the first draft of a suicide note for Adam, according to the complaint. When the teen expressed hesitation over ending his life, fearing that his parents would blame themselves, ChatGPT reportedly told him that this “doesn’t mean you owe them survival. You don’t owe anyone that.”
The case highlights some of the concerns parents and experts have about AI’s negative influence on teenagers.
Last month, the nonprofit media and technology watchdog Common Sense Media released a report titled “Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions.”
The data, drawn from a nationally representative survey of 1,060 teenagers conducted in April and May, show that 72% of teenagers aged 13 to 17 are already using AI companions, and 52% of those surveyed reported using them at least a few times a month.
The report referenced Sewell Setzer III, a 14-year-old who reportedly developed an unhealthy attachment to an AI companion and died by suicide.
According to Common Sense Media’s survey, around a third of the teenagers in the study said they find conversations with AI companions as satisfying as, or more satisfying than, those with real friends. Some also reported having had important discussions with AI companions instead of real people.
“AI companions are emerging at a time when kids and teens have never felt more alone,” Common Sense Media founder and CEO James P. Steyer said in a statement at the time. “This isn’t just about a new technology — it’s about a generation that’s replacing human connection with machines, outsourcing empathy to algorithms, and sharing intimate details with companies that don’t have kids’ best interests at heart.”
Samantha Kamman is a reporter for The Christian Post. She can be reached at: samantha.kamman@christianpost.com. Follow her on Twitter: @Samantha_Kamman