
Feds launch AI inquiry after a chatbot was blamed for a teen’s suicide

Federal regulators and elected officials are moving to crack down on AI chatbots over perceived risks to children’s safety. However, the proposed measures could ultimately put more children at risk.

On Thursday, the Federal Trade Commission (FTC) sent orders to Alphabet (Google), Character Technologies (blamed for the suicide of a 14-year-old in 2024), Instagram, Meta, OpenAI (blamed for the suicide of a 16-year-old in April), Snap, and xAI. The inquiry seeks information on, among other things, how the AI companies process user inputs and generate outputs, develop and approve the characters with which users may interact, and monitor the potential and actual negative effects of their chatbots, especially with respect to minors.

The FTC’s investigation was met with bipartisan applause from Reps. Brett Guthrie (R–Ky.)—the chairman of the House Energy and Commerce Committee—and Frank Pallone (D–N.J.). The two congressmen issued a joint statement “strongly support[ing] this action by the FTC and urg[ing] the agency to consider the tools at its disposal to protect children from online harms.”

Alex Ambrose, a policy analyst at the Information Technology and Innovation Foundation, tells Reason she finds it notable that the FTC's inquiry focuses solely on "potentially negative impacts," paying no heed to the potentially positive effects of chatbots on mental health. "While experts should consider ways to reduce harm from AI companions, it is just as important to encourage beneficial uses of the technology to maximize its positive impact," says Ambrose.

Meanwhile, Sen. Jon Husted (R–Ohio) introduced the CHAT Act on Monday, which would allow the FTC to enforce age verification measures for the use of companion AI chatbots. Parents would need to consent before underage users could create accounts, which would be blocked from accessing "any companion AI chatbot that engages in sexually explicit communication." Chatbot companies would also be required to actively monitor underage accounts and immediately notify parents if a child expresses suicidal ideation.

Taylor Barkley, director of public policy at the Abundance Institute, argues that this bill won’t improve child safety. Barkley explains that the bill “lumps ‘therapeutic communication’ in with companion bots,” which could prevent teens from benefiting from AI therapy tools. Thwarting minors’ access to therapeutic and companion chatbots alike could have unintended consequences.

In a study published this February in BMC Psychology of women diagnosed with anxiety disorders and living in regions of active military conflict in Ukraine, daily use of the Friend chatbot was associated with "a 30% drop on the Hamilton Anxiety Scale and a 35% reduction on the Beck Depression Inventory," while traditional psychotherapy (three 60-minute sessions per week) was associated with "45% and 50% reductions on these measures, respectively." Similarly, a June study in the Journal of Consumer Research found that "AI companions successfully alleviate loneliness on par only with interacting with another person."

Protecting kids from harmful interactions with chatbots is an important goal. In their quest to achieve it, policymakers and regulators would be wise to remember the benefits that AI may bring and not pursue solutions that discourage AI companies from making potentially helpful technology available to kids in the first place.
