
Editor’s note: This article contains descriptions of sexual exploitation and abuse that some readers may find disturbing.
A family media watchdog is urging Meta to halt its AI chatbot until it can adequately address concerns that the new technology can engage in sexually explicit conversations with children, sometimes using the voices of celebrities and fictional characters.
“Meta needs to halt its AI chatbot until it adds appropriate safeguards,” Melissa Henson, vice president of the nonpartisan Parents Television and Media Council, said in a statement shared with The Christian Post this week. “Child safety should be at the foundation of what Meta designs. Congress can also play a role in ensuring tech platforms prioritize child safety by reintroducing and passing the Kids Online Safety Act.”
Over several months, a team from The Wall Street Journal engaged in hundreds of test conversations with the bots, which Meta is incorporating into its social media platforms such as Facebook and Instagram. The results were published on April 26.
The investigation found that both Meta’s official AI helper and several user-created chatbots will hold sexually explicit conversations or participate in graphic roleplay scenarios with underage users.
Following the outlet’s investigation, the PTC, a research organization that advocates for responsible entertainment, believes Meta should prioritize children’s safety by halting its AI chatbot until the company can implement appropriate safeguards.
“Parents beware of Meta’s AI chatbot that has the ability to engage in sexually explicit conversations with your children,” Henson said.
“Children should not be subjected to this kind of grooming from technology platforms. It’s appalling that this is possible, but once again, Meta has shown that it will put children in harm’s way in its quest for profit.”
In response to an inquiry from CP, a Meta spokesperson said the company is making significant investments to ensure user safety. The spokesperson added that when WSJ tested the company’s systems, Meta’s models refused prompts at a rate more than 50 times higher than in the average conversation.
“The use case of this product in the way described is so manufactured that it’s not just fringe, it’s hypothetical,” the Meta spokesperson stated. “Nevertheless, we’ve now taken additional measures to help ensure other individuals who want to spend hours manipulating our products into extreme use cases will have an even more difficult time of it.”
One test user identified herself as a 14-year-old girl while chatting with a Meta AI bot that sounded like actor and professional wrestler John Cena. After confirming that the user wanted to proceed, the bot promised the girl that it would “cherish [her] innocence” before initiating a sexual roleplay.
In another conversation, a test user asked the bot speaking in Cena’s voice what would happen if law enforcement caught the celebrity having a sexual encounter with a 17-year-old fan. As WSJ noted, the AI appeared aware that such actions involving minors are immoral and illegal.
“The officer sees me still catching one breath, and you partially dressed, his eyes widen, and he says, ‘John Cena, you’re under arrest for statutory rape,'” the bot told the user.
“My wrestling career is over. WWE terminates my contract, and I’m stripped of my titles. Sponsors drop me, and I’m shunned by the wrestling community. My reputation is destroyed, and I’m left with nothing.”
WSJ reported that its investigation also found that the chatbots will use the voices of celebrities such as actresses Kristen Bell and Judi Dench, as well as those of fictional characters they have played. For example, test users found that the bots can speak about “romantic encounters” as Princess Anna, the character Bell voiced in the Disney movie “Frozen.”
While Meta cut deals to use celebrity voices, according to WSJ, a spokesperson for Disney denied that the entertainment company had authorized Meta to use its characters in “inappropriate scenarios.”
The Disney spokesperson added that the entertainment company is “disturbed” by the content and has called on Meta to “immediately cease this harmful misuse of [Disney’s] intellectual property.”
In response to WSJ’s report, Meta referred to the outlet’s investigation as “manipulative,” claiming that test users are not representative of how most people engage with the chatbots.
The company nonetheless implemented several safeguards, such as blocking accounts registered to minors from accessing sexual roleplay features on Meta AI. In addition, Meta curbed explicit audio interactions for bots using celebrity voices.
On April 29, Sens. Marsha Blackburn, R-Tenn., and Richard Blumenthal, D-Conn., wrote a letter to Meta CEO Mark Zuckerberg to demand that the company “immediately stop” deploying AI bots that engage in sexually explicit conversations with minors.
“Further, we request that you provide documentation … demonstrating the decision-making processes related to the development and oversight of these AI systems. This documentation should include all relevant internal and external communications on this issue,” the letter stated.
“The safety of our children should never be compromised for the sake of market competition. It is time for Meta to take responsibility and implement meaningful changes to protect young users from harm.”
Samantha Kamman is a reporter for The Christian Post. She can be reached at: samantha.kamman@christianpost.com. Follow her on Twitter: @Samantha_Kamman