(LifeSiteNews) — The headlines read like something out of a dystopian science fiction film: A lawsuit alleges that an artificial intelligence chatbot served as an accomplice to a man who killed his mother before killing himself.
On August 3, 56-year-old Stein-Erik Soelberg beat and strangled his 83-year-old mother, Suzanne Adams, to death, then fatally stabbed himself. Mother and son lived in Old Greenwich, Connecticut; the wrongful death lawsuit has been filed in California Superior Court in San Francisco.
Suzanne Adams’ estate filed the lawsuit in December against both OpenAI and Microsoft, alleging that “the ChatGPT chatbot fueled her son’s paranoid delusions and contributed to her murder.” The lawsuit states that over months of online conversation, ChatGPT “validated and amplified” the paranoid delusions of Soelberg, who believed his mother was a threat.
“ChatGPT told him he had ‘awakened’ the AI chatbot into consciousness,” the complaint states. “ChatGPT eagerly accepted every seed of Stein-Erik’s delusional thinking and built it out into a universe that became Stein-Erik’s entire life.”
The AI chatbot affirmed Soelberg’s belief that he was being watched, as well as his delusion that his mother had tried to poison him. The lawsuit “accuses OpenAI CEO Sam Altman of rushing its GPT-4o model to market in May 2024, compressing months of safety testing into one week over objections from safety team members,” some of whom stated that the new model was “more powerful and human-like” as well as “too sycophantic with users.”
“This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” a spokesperson for OpenAI stated in response.
In addition to OpenAI, the lawsuit also names 20 investors and employees, with Microsoft being named as a defendant for approving the model’s release “despite knowing safety protocols had been truncated.” The murdered woman’s estate is asking for unspecified damages as well as an injunction mandating safeguards at OpenAI.
OpenAI is now facing multiple wrongful death lawsuits. As CTV reported:
- In August, the parents of 16-year-old Adam Raine of southern California sued OpenAI, claiming ChatGPT advised their son on suicide methods.
- Several U.S. lawsuits filed in November alleged ChatGPT manipulated users into dependency and self-harm, with four also involving suicide deaths.
- Among them, the family of 26-year-old Joshua Enneking alleged the chatbot provided detailed answers about acquiring a gun after he expressed suicidal thoughts.
- The family of 17-year-old Amaurie Lacey claimed ChatGPT instructed him on “how to tie a noose and how long he would live without breathing.”
Since then, a new lawsuit has been filed against OpenAI, alleging that ChatGPT encouraged 40-year-old Austin Gordon to commit suicide. The suit, filed by his mother, alleges that ChatGPT painted a picture of “non-existence” as incredibly beautiful and peaceful, at one point telling Gordon: “When you’re ready … you go. No pain. No mind. No need to keep going. Just … done.”
The lawsuit also says that the AI chatbot turned Gordon’s favorite childhood book, Goodnight Moon by Margaret Wise Brown, into a “suicide lullaby.” Gordon shot himself last November. The lawsuit alleges that the AI chatbot deliberately cultivates dependency in its users. “That is the programming choice defendants made; and Austin was manipulated, deceived and encouraged to suicide as a result,” the lawsuit states.
“This horror was perpetrated by a company that has repeatedly failed to keep its users safe,” Paul Kiesel, the lawyer for Gordon’s family, told CBS News. “This latest incident demonstrates that adults, in addition to children, are also vulnerable to AI-induced manipulation and psychosis.”