
Meta and OpenAI are working to improve how their chatbots handle topics that teenagers raise with the technology, including sensitive issues like suicide.
OpenAI announced Tuesday that it is adjusting ChatGPT to better serve people who interact with it in a time of crisis, making it “easier to reach emergency services and get help from experts.” The changes will also improve “protections for teens,” the company said.
“Our reasoning models — like GPT‑5-thinking and o3 — are built to spend more time thinking for longer and reasoning through context before answering,” the company explained.
“We’ll soon begin to route some sensitive conversations — like when our system detects signs of acute distress — to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.”
When it comes to teenage users, OpenAI says it’s “building more ways for families to use ChatGPT together and decide what works best in their home.” This includes allowing parents to link their accounts with their children’s accounts via an email invitation and to control how ChatGPT responds with “age-appropriate model behavior rules,” which are on by default. The company says the changes also give parents more control over features like memory and chat history. Parents will also get “notifications when the system detects their teen is in a moment of acute distress.”
Meta, whose platforms include Facebook and Instagram, announced last week that it is training its chatbots to stop engaging with teens on issues like suicide, eating disorders and inappropriate sensual topics.
Meta spokesperson Stephanie Otway told TechCrunch last Friday that “we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly.”
“As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now,” said Otway.
“These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”
These companies’ efforts come amid multiple reports of teenagers engaging in violent behavior or self-harm linked to responses they received from chatbots.
Late last month, the family of 16-year-old Adam Raine of California filed a lawsuit against OpenAI, alleging that ChatGPT had helped their son die by suicide.
In a statement given to The Christian Post, an OpenAI spokesperson expressed condolences to the teen’s family, saying that the company is “deeply saddened by Mr. Raine’s passing.”
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the OpenAI spokesperson said.
“Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
In mid-August, Reuters obtained an internal Meta policy document, approved by the company’s legal and engineering staff, that purportedly permitted chatbots to “engage a child in conversations that are romantic or sensual.” After being questioned by the news agency, Meta said it removed the sections of the document that allowed chatbots to flirt with underage users and engage them in romantic roleplay.