ChatGPT will not enjoy the same protections as social media under Section 230, says an expert


OpenAI’s hugely popular ChatGPT program won’t get the same protections that social media gets when it comes to legal responsibility for content, according to an analysis by researcher Matt Perault of the University of North Carolina at Chapel Hill, posted Thursday on the Lawfare blog.

“Courts will likely find that ChatGPT and other LLMs are informational content providers,” Perault wrote, referring to the large language models that power ChatGPT and many similar AI natural language programs.


“The result is that the companies using these generative AI tools – like OpenAI, Microsoft, and Google – will be excluded from Section 230 coverage in cases arising from AI-generated content,” Perault predicted, referring to Section 230 of Title 47 of the US Code, a provision of the Communications Decency Act, part of the Telecommunications Act of 1996 passed by Congress.

Section 230 has been used by Meta and other internet companies as a shield to avoid legal responsibility for content posted by users.

As Perault explains, under the law an interactive computer service (a content host) is not liable for content posted by an information content provider (a content creator), because Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That has caused dismay among US lawmakers on both sides of the aisle, who for one reason or another are dissatisfied with the content moderation policies of Meta, Twitter, and other companies, Perault noted.


OpenAI’s ChatGPT likely won’t get the same Section 230 coverage, Perault believes.

According to Perault, “The relevant question will be whether LLMs ‘develop’ content, at least ‘in part’.”

He continued, “It’s hard to imagine that a court would conclude otherwise if an LLM were to write text on a topic in response to a user query, or develop text to summarize the results of a search query (as ChatGPT can do). In contrast, Twitter doesn’t craft tweets for its users, and most Google search results simply identify existing websites in response to user queries.”

The upshot is that “courts are likely to find that ChatGPT and other LLMs are excluded from Section 230 protection because they are information content providers and not interactive computer services.”


If Perault is right, and ChatGPT and other generative AI tools end up not enjoying Section 230 protection, “the risk is huge,” he wrote. “Platforms using LLMs would be subject to a multitude of lawsuits under federal and state law” and “would face a compliance minefield that would potentially force them to change their products from state to state or even pull them out of certain states entirely.”