
Government actions against Anthropic are ‘classic First Amendment retaliation’

Good news in the battle between the federal government and the AI company Anthropic: A federal judge has temporarily blocked the Department of Defense from declaring Anthropic a “supply chain risk,” which would have barred any federal agency or contractor from doing business with the company.

The government’s “conduct appears to be driven not by a desire to maintain operational control when using AI in the military but by a desire to make an example of Anthropic for its public stance on the weighty issues at stake in the contracting dispute,” wrote U.S. District Judge Rita Lin in an order granting Anthropic’s motion for preliminary injunction.

“Weighty issues” might undersell it. The supply chain risk designation—usually reserved for foreign companies—and President Donald Trump’s declaration that all federal agencies must “IMMEDIATELY CEASE all use of Anthropic’s technology” came after Anthropic refused to remove contract language preventing the Pentagon from using its AI system, Claude, for autonomous weapons or mass domestic surveillance.

Rather than simply discontinue Anthropic’s contract, the Trump administration threw a massive public tantrum over not being able to use Claude for killer robots or new frontiers in the surveillance state. (Not that it wanted to do these things, the Pentagon insisted. It just needed these restrictions removed because…reasons.)

Anthropic sued, alleging a violation of its First Amendment rights.

In a March 26 order, Lin issued a preliminary injunction that prohibits the federal government “from implementing, applying, or enforcing in any manner” the president’s directive and “any and all other agency actions taken in response to the Presidential Directive.” Lin further blocked the Department of Defense and Defense Secretary Pete Hegseth from designating Anthropic a supply chain risk.

“It is the Department of War’s prerogative to decide what AI product it uses,” notes Lin in the order.

“Everyone, including Anthropic, agrees that the Department of War may permissibly stop using Claude and look for a new AI vendor who will allow ‘all lawful uses’ of its technology. That is not what this case is about,” the order continues. “The question here is whether the government violated the law when it went further.”

For now, Lin has concluded that there is strong evidence that it did. “This appears to be classic First Amendment retaliation,” she wrote.


Following last Wednesday’s verdict against Meta in New Mexico (which this newsletter covered here), the company took another blow in court, this time alongside Google. In a landmark social media “addiction” case in Los Angeles, a jury found Google and Meta liable for negligent product design that led to psychological harm for a young woman identified as Kaley G.M.

These decisions set a dangerous precedent, treating social media more like a physical product than a platform for speech and paving the way for age verification requirements, content crackdowns, and more.

I wrote about the California case in more detail on Thursday. Below are a few more things you should read about the decision. (Or, if video is more your style, here’s Reason’s Nick Gillespie talking to journalist Taylor Lorenz about the case and the larger regulatory climate around social media.)

“Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For”: “If you care about the internet—if you care about free speech online, about small platforms, about privacy, about the ability for anyone other than a handful of tech giants to operate a website where users can post things—these two verdicts should scare the hell out of you,” writes Mike Masnick at Techdirt. “Because the legal theories that were used to nail Meta this week don’t stay neatly confined to companies you don’t like. They will be weaponized against everyone. And they will functionally destroy Section 230 as a meaningful protection, not by repealing it, but by making it irrelevant.”

“Don’t Cheer Too Hard for the Facebook Verdicts”: “A social media site isn’t a bottle of alcohol or a cigarette. It’s not delivering a drug. It’s delivering speech,” writes David French in The New York Times. “Even the algorithm is a form of constitutionally protected speech.” In the Los Angeles case, the plaintiff “didn’t claim that she was harmed by unlawful speech,” French points out:

She wasn’t threatened or slandered, for example. But she claimed that social media companies made her addicted to lawful speech, and that her compulsive consumption of this lawful speech caused body dysmorphia and triggered thoughts of self-harm.

It’s not hard to understand the risks to free speech. If a person experiences psychological distress as a result of what he or she sees online, is it now open season on the platforms that deliver that speech because they arrange it and package it in a compelling manner? But the effort to gain (and keep) a person’s attention is a key element of the entire enterprise of free expression.

Meta’s chief legal officer, C.J. Mahoney, said Saturday that the company will appeal both verdicts.

“We disagree with these verdicts, respectfully,” Mahoney told Fox News. “We think that they’re vulnerable on appeal and we’re going to pursue those appeals aggressively.”


This reminds me of content moderation issues regarding suicide. In a recent case against Amazon, plaintiffs alleged wrongdoing by Amazon when Amazon removed reviews warning that a product was being used to die by suicide (sodium nitrite). That sounds bad, but there are good reasons to remove.

Jess Miers (@jmiers230.bsky.social) 2026-03-26T13:41:09.252Z

The thread in the quoted post (about eating disorder communities) is very good, too.


• “OpenAI has shelved plans to release an erotic chatbot ‘indefinitely’ as it refocuses on its core products, following concerns from staff and investors about the effect of sexualised AI content on society,” reports the Financial Times. (The move also comes right as OpenAI has signed a deal with the federal government; make of that what you will.) OpenAI is also phasing out Sora, its AI video/social media app.

• Some sex workers are licensing their likenesses to AI companies. “We can either let the makers of AI take the lion’s share of the money in the sex-work space, or creators and businesses can get on board and start creating their own revenue sources through AI,” porn performer Cherie Deville told Wired.

• Facial recognition gone awry: “A Tennessee grandmother spent more than five months in jail after police used an AI facial recognition tool to link her to crimes committed in North Dakota—a state she says she’d never been to before,” reports CNN.

• A bill passed last week by the Ohio House would define “adult cabaret” performance to include anything involving “performers or entertainers who exhibit a gender identity that is different from the performer’s or entertainer’s gender assigned at birth.” Such events—which would encompass anything involving drag performers—would be banned in public places, or anywhere outside of an adult cabaret venue. “The bill lumps drag performers in with topless dancers, go-go dancers, strippers, and exotic dancers,” notes the Ohio Capital Journal.

• Gestational surrogacy is on the rise. From Axios: “U.S. clinics reported more than 11,500 gestational carrier cycles in 2023—nearly seven times as many as were done in 2004, when the American Society for Reproductive Medicine (ASRM) began tracking the data.”

• “The biggest MAGA dream girl online is an Army Ranger / general / sergeant who has a million followers, loves walks with President Trump — and is completely AI-made,” writes The Washington Post’s Drew Harwell.

• “The law is proactive in ‘rescuing’ women [sex workers] and putting them in women’s shelters. But what if they don’t want to be rescued or don’t want to do the stitching or embroidery that are taught as skills in these shelters? They should have their own choice in what careers they want to have,” says a social worker in the Indian documentary Working Girls.


