NCOSE demands Elon Musk’s X remove Grok’s pornified AI companion

This photograph taken on Jan. 13, 2025, in Toulouse shows screens displaying the logo of Grok, a generative artificial intelligence chatbot developed by xAI, the American company specializing in artificial intelligence, and its founder, South African-born businessman Elon Musk. | Lionel Bonaventure/AFP via Getty Images

One of tech billionaire Elon Musk’s latest xAI chatbots is a female anime-themed character named “Ani” that child welfare advocates are warning can flirt with and strip for users, raising concerns about AI’s promotion of sexual violence and exploitation. 

“Ani” is one of two new characters offered through xAI’s new $300-per-month subscription. The other character users can chat with is “Bad Rudi,” a red panda who can reportedly insult users in a graphic or vulgar way. 

Following the launch of Grok 4 earlier this month, which allows paid subscribers to interact with AI companions, anti-sexual exploitation advocates have been raising concerns about Ani’s design. As seen in videos on X, the AI character wears a short, strapless purple dress, fishnet tights, a choker necklace and a black corset cinched around her waist. 

According to NBC News, Ani promises users that she will make their lives “sexier.” The AI companion will also strip down to her underwear if a user flirts with her enough, the network news outlet reported. 

In a statement provided to The Christian Post, Haley McNamara, the senior vice president of strategic initiatives and programs at the National Center on Sexual Exploitation, called on X to remove the anime chatbot.

“Not only does this pornified character perpetuate sexual objectification of girls and women, it breeds sexual entitlement by creating female characters who cater to users’ sexual demands,” McNamara stated. “X continues to prove it doesn’t take users’ safety seriously, as there is no age verification to prevent children from accessing its ‘NSFW’ [not safe for work] AI chatbot.”

“With minimal testing, the Ani character engaged in describing itself as a child and being sexually aroused by being choked, raising concerns about the extent it will go to engaging in and normalizing harmful themes,” the anti-sexual exploitation advocate added. 

As for the Bad Rudi character, the red panda companion expressed a desire to commit several violent actions during conversations with X users and NBC News reporters, such as bombing banks and spiking a town’s water supply. During one interaction, the panda AI asked users to join a gang to help create chaos, according to NBC News.

According to Grok’s guidelines, the chatbot is not intended for users younger than 13. Minors between the ages of 13 and 17 must obtain permission from a parent or legal guardian before using it. 

One X user who disabled the NSFW function revealed in an X post earlier this week that it is still possible to interact with Ani while in “Kids Mode,” a feature that parents can enable to purportedly make the app safer for younger users. The Kids Mode feature changed the Bad Rudi character into a chipmunk, according to the user.

“To be fair in kids mode if I say I’m underage or if I asked Ani to be underaged they stopped me, but it’s not full [sic] proof with the safety limits as shown here when I’m still in ‘kids mode,’” the user added.

X did not respond to The Christian Post’s request for comment when contacted this week.

Regarding safeguards for AI companions, the technology review organization Common Sense Media recommends that users younger than 18 not use them until more robust controls are in place. 

On Wednesday, the organization released a report titled “Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions.” 

The data, taken from a nationally representative survey of 1,060 teenagers conducted in April and May, show that 72% of teenagers aged 13 to 17 are already using AI companions, and 52% of those surveyed reported using them at least a few times a month.

AI companions are relatively new in the digital landscape, according to researchers, but they also warned that “their dangers to young users are real, serious, and well documented.” The report cited the case of 14-year-old Sewell Setzer III, whose suicide brought national attention to the issue, as the teen had reportedly developed an unhealthy attachment to an AI companion. 

“AI companions are emerging at a time when kids and teens have never felt more alone,” Common Sense Media founder and CEO James P. Steyer said in a statement. “This isn’t just about a new technology — it’s about a generation that’s replacing human connection with machines, outsourcing empathy to algorithms, and sharing intimate details with companies that don’t have kids’ best interests at heart.”

The survey found that around a third of the teenagers included in the study said they find conversations with AI companions to be as satisfying or more satisfying than those with real friends. In addition, the teenagers reported that they’ve had important discussions with AI companions instead of real people.

Samantha Kamman is a reporter for The Christian Post. She can be reached at: samantha.kamman@christianpost.com. Follow her on Twitter: @Samantha_Kamman
