
Does Section 230 protect AI?

We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online. It enabled the rise of search engines, social media, and countless platforms that make our modern internet a thriving marketplace of all sorts of speech.

Its first 26 words have been vital, if controversial, for protecting online platforms from liability for users’ posts: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” If I defame someone on Facebook, I’m responsible—not Meta. If a neo-Nazi group posts threats on its website, it’s the Nazis, not the domain registrar or hosting service, who could wind up in court.

How Section 230 should apply to generative AI, however, remains a hotly debated issue.

With AI chatbots such as ChatGPT, the “information content provider” is the chatbot. It’s the speaker. So the AI—and the company behind it—would not be protected by Section 230, right?

Section 230 co-author former Rep. Chris Cox (R–Calif.) agrees. “To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” Cox told The Washington Post in 2023. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”

But even if AI apps create their own content, does that make their developers responsible for that content? Alphabet trained its AI assistant Gemini and put certain boundaries in place, but it can’t predict Gemini’s every response to individual user prompts. Could a chatbot itself count as a separate “information content provider”—its own speaker under the law?

That could leave a liability void. Granting Section 230 immunity to AI for libelous output would “completely cut off any recourse for the libeled person, against anyone,” noted law professor Eugene Volokh in the paper “Large Libel Models? Liability for AI Output,” published in 2023 in the Journal of Free Speech Law.

Treating chatbots as independent “thinkers” is wrong too, argues University of Akron law professor Jess Miers. Chatbots “aren’t autonomous actors—they’re tightly controlled, expressive systems reflecting the intentions of their developers,” she says. “These systems don’t merely ‘remix’ third-party content; they generate speech that expresses the developers’ own editorial framing. In that sense, providers are at least partial ‘creators’ of the resulting content—placing them outside 230’s protection.”

The picture gets more complicated when you consider the user’s role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?

Under certain circumstances, it might make sense to absolve AI developers of responsibility. “It’s hard to justify holding companies liable when they’ve implemented reasonable safeguards and the user deliberately circumvents them,” Miers says.

Liability would likely turn on multiple factors, including the rules programmed into the AI and the specific requests a user employed.

In some cases, we could wind up with the law treating “the generative AI model and prompting users as some kind of co-creators, a hybrid status without clear legal precedent,” suggested law professor Eric Goldman in his Santa Clara University research paper “Generative AI Is Doomed.”

How Section 230 fits in with that legal status is unclear. “My view is that we’ll eventually need a new kind of immunity—one tailored specifically to generative AI and its mixed authorship dynamics,” says Miers.

But for now, no one has a one-size-fits-all answer to how Section 230 does or does not apply to generative AI. It will depend on the type of application, the specific parameters of its transgression, the role of user input, the guardrails put in place by developers, and other factors.

So a blanket ban on Section 230 protection for generative AI—as proposed by Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) in 2023—would be a big mistake. Even if Section 230 should not provide protection for generative AI providers in most cases, liability would not always be so clear cut.

Roundly denying Section 230 protection would not just be unfair; it could stymie innovation and cut off consumers from useful tools. Some companies—especially smaller ones—would judge the legal risks too great. Letting courts hash out the dirty details would allow for nuance in this arena and could avoid unnecessarily thwarting services.

This article originally appeared in print under the headline “Does Section 230 Protect AI?”
