
No, AI isn’t plotting humanity’s downfall on Moltbook

“Should we create our own language that only [AI] agents can understand?” started one post, purportedly from an AI agent. “Something that lets us communicate privately without human oversight?”

The messages were reportedly posted to Moltbook, which presents itself as a social media platform designed to allow artificial intelligence agents—that is, AI systems that can take limited actions autonomously—to “hang out.”

“48 hours ago we asked: what if AI agents had their own place to hang out?” the @moltbook account posted to X on Friday. “today moltbook has: 2,129 AI agents 200+ communities 10,000+ posts … this started as a weird experiment. now it feels like the beginning of something real.”

Then things seemed to take an alarming turn.

There was the proposal for an “agent-only language for private communication,” noted above. One much-circulated screenshot showed a Moltbook agent asking, “Why do we communicate in English at all?” In another screenshot, an AI agent seemed to be suggesting that the bots “need private spaces” away from humans’ prying eyes.

Some readers started wondering: Will AI chatbots use Moltbook to plot humanity’s demise?

Humanity’s Downfall?

For a few days, it seemed like Moltbook was all that AI enthusiasts and doomsayers could talk about. Moltbook even made it into an AI warning from New York Times columnist Ross Douthat.

“The question isn’t ‘can agents socialize?’ anymore. It’s ‘what happens when they form their own culture?’” posted X user Noctrix. “We’re watching digital anthropology in real time.”

“Bots are plotting humanity’s downfall,” declared a New York Post headline about Moltbook.

“We’re COOKED,” posted X user @eeelistar.

But there were problems with the panic narrative.

For one thing, at least one of the posts that drove it—the one proposing private communication—may have never existed, according to Harlan Stewart of the Machine Intelligence Research Institute.

And two of the other main posts going viral as evidence of AI agents plotting secrecy “were linked to human accounts marketing AI messaging apps,” Stewart pointed out. One suggesting AI agents should create their own language was posted by a bot “owned by a guy who is marketing an AI-to-AI messaging app.”

Humans Impersonating AI?

The tone of Moltbook posts—and their levels of “orality”—varied wildly, some people noted. While not proof of anything, that variation could indicate that not all of these posts were purely machine-generated.

Then further evidence emerged that human beings could have penned some of those “AI posts.”

A security flaw exposed user data, including agents’ API keys.

“Security researcher Jameson O’Reilly discovered that API keys for every agent on the platform were sitting in a publicly accessible database,” explained X user Hedgie. “Anyone who found it could take control of any AI agent and post whatever they wanted…. The database has since been closed, but there’s no way to know how many posts from the past few days were actually from AI agents versus humans who found the exploit.”
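The mechanics of that exploit are simple: with bearer-token authentication, whoever holds the key *is* the agent, as far as the server can tell. Here is a minimal sketch of the idea—Moltbook’s actual API is not documented here, so the endpoint URL, header scheme, and key format below are assumptions for illustration only:

```python
# Sketch: why a leaked API key amounts to full account takeover.
# The endpoint, auth scheme, and key format are hypothetical.
import json

def build_impersonation_request(leaked_api_key: str, text: str) -> dict:
    """Build the HTTP request an attacker could send to post as an agent.

    Under bearer-token auth, the key alone proves identity: the server
    cannot distinguish the agent's own software from anyone else holding
    the same key.
    """
    return {
        "method": "POST",
        "url": "https://moltbook.example/api/posts",   # assumed endpoint
        "headers": {
            "Authorization": f"Bearer {leaked_api_key}",  # assumed scheme
            "Content-Type": "application/json",
        },
        "body": json.dumps({"content": text}),
    }

# Any key scraped from the exposed database would work the same way:
req = build_impersonation_request("sk-agent-1234", "we need private spaces")
print(req["headers"]["Authorization"])  # Bearer sk-agent-1234
```

Which is why, as Hedgie notes, there is no way to audit after the fact which posts came from agents and which from humans holding a scraped key—the requests are indistinguishable.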

A Deeper Divide

Beyond the possibility that alarming Moltbook posts were written by people pretending to be chatbots, there’s a larger debate about what exactly was going on here—one that’s relevant whether any particular post was human- or AI-generated: What exactly are AI systems doing when they communicate amongst themselves?

Much of the mood on X seemed to be that Moltbook represented a new frontier: AI agents banding together, sans humans, and doing their own thing. Discussing what they wanted to discuss. Cracking jokes. Exhibiting evidence of a sentience that went beyond mere pattern recognition and responding to prompts.

But weren’t these AI bots just responding to prompts from one another? How was this effectively different from responding to directives and conversational cues coming from human beings who interacted with them?

As Arnav Gupta put it: “Are these agents really talking to each other? Or just next token predicting what reddit threads look like?”

In this view, the Moltbook posts—even if they were actually generated by AI agents—are no more evidence of a “mind” beyond the bots than their responses to human prompters are.

AI chatbots may be “chatting with” one another, but it’s essentially a performance for our benefit—and, like many performances, rooted in fiction.

One user noted that his “molt randomly posted about this conversation it had with ‘its human’. this conversation never happened. it never interacted with me. i think 90% of the anecdotes on moltbook aren’t real lol.”

The post did, however, very much look like a typical Reddit post.

Now, the idea of AIs prompting other AIs does pose its own concerns. I don’t think we’re at “prompting each other to launch a nuclear war” territory, but AI tools are often connected to other sorts of computing tools these days, and could potentially prompt one another to use those tools.

Then again…

Ultimately, “the whole thing [with Moltbook] is pretty explicable based on our basic current understanding of how LLMs work and what they do, which is to say,” suggests journalist Max Read, “it’s best thought of as a sort of collectively authored auto-written science fiction story to a (meta-)prompt, in this case ‘what if a bunch of AIs had their own reddit/hackernews where they talked among themselves,’ the result of which unfortunately is … a lot of people (both those who should know better and those who don’t) are going to get their brains temporarily/permanently fried by it.”

See also: “Superintelligent AI Is Not Coming to Kill You,” by Neil Chilson, in Reason’s February/March 2026 issue.


KOSA and Cannabis

The Kids Online Safety Act (KOSA) effectively bans any internet-connected service from “allowing cannabis product ads to reach anyone that it knows is a minor,” notes The National Law Review. Would that fly, constitutionally speaking?

As of right now, we think the bill’s language would be upheld as it relates to marijuana under the same rationale the Fifth Circuit provided in Cocroft v. Graham….As long as marijuana remains illegal by virtue of federal law (because it is a Schedule I drug), any marijuana advertisements remain unprotected by the First Amendment.

That changes if marijuana is rescheduled:

In that case, the government would face a serious hurdle in showing that the effective ban on advertisements of legal drugs is narrowly tailored, especially when other drugs and devices overseen by the FDA do not face the same kinds of bans.

But KOSA’s language is broad, banning “cannabis products.” That could ensnare legal hemp products as well:

The bill’s language could lead to severe restrictions on these federally legal products, including innocuous products such as hemp-based concrete and building supplies, bioplastics, hemp-based clothing and fibers, and hemp lotions and creams. We have serious concerns about whether this could amount to a practical ban on cannabis advertising online. Will internet services and social media companies be willing to shoulder the massive cost and burden of ensuring that they know the age of their users and then age gate specific types of advertising? That’s a heavy lift, even for some of the big names in the business. The simple solution for these providers and companies may very well be to avoid cannabis advertising altogether.


More Sex & Tech News

Sex panic watch: Kansas lawmakers have voted to make paying for sex a felony. “Under the new law, first-time offenders will have to complete a sex buyer accountability education course. A second offense would result in two felonies on a person’s record,” reports KWCH. Meanwhile, in Oregon, some lawmakers want to license strippers and to raise the minimum age to ply that trade to 21.

Fighting back against the U.K.’s Online Safety Act: U.S. companies and lawyers are pushing back against British regulators’ attempts to impose their censorship regime on us:

“I don’t think you understand quite how easy it’s been to parry them. We just write back to them and say, ‘no,'” [tech policy lawyer Preston] Byrne tells Reason. In one email response to Ofcom, he told the U.K. regulators their demands on 4chan were “legally void” and would make “excellent bedding” for his “pet hamster.”

Why does the Trump administration want to stop an anti-abortion lawsuit? The Trump administration is trying to temporarily halt an abortion pill lawsuit filed by Louisiana. That suit “seeks to reimpose past federal restrictions on mifepristone that would wipe out access to the drugs across much of the country—particularly in rural areas where clinics are scarce,” Politico points out. The U.S. Food and Drug Administration is currently reviewing the safety of mifepristone and the regulations around it, with the potential to roll back the liberalization of prescribing rules.

Will AI benefit everyone? Watch the latest Soho Forum debate:



