Opinion | Does Section 230 protect ChatGPT? Congress should say.


The early days of OpenAI’s ChatGPT have been something of a replay of internet history: excitement about a new invention giving way to concern about the damage it might do. ChatGPT and other large language models (artificially intelligent systems trained on huge amounts of text) can turn into liars, racists or accomplices to terrorists who want to know how to build dirty bombs. The question is: When that happens, who is responsible?

Section 230 of the Communications Decency Act states that services — from Facebook and Google to movie review aggregators and mom blogs with comment sections — should not be held liable for most third-party material. In these cases, it’s pretty easy to distinguish between the platform and the person posting. Not so with chatbots and AI assistants. Few have considered whether Section 230 offers them protection.

Consider ChatGPT. Type in a question and you get an answer. The tool doesn’t merely display existing content (a tweet, a video, a website) originally contributed by someone else; it composes its own material in real time. The law says that any person or entity that “develops” content, even “in part,” loses the shield. And doesn’t turning a list of search results into a summary, for example, count as development? What’s more, the contours of every AI contribution are determined in large part by the AI’s creators, who set the rules for their systems and shape their output by reinforcing behaviors they like and discouraging behaviors they don’t.

At the same time, however, every response from ChatGPT is, as one analyst put it, a “remix” of third-party material. The tool generates its answers by predicting which word should come next in a sentence, based on the words that come next in sentences across the internet. And just as the creators of a machine influence its results, so do the users who ask the questions and steer the conversations. All of this suggests that the protection afforded to AI models may depend on how much a given output merely regurgitates existing material versus synthesizes something new, as well as on how deliberately a user has manipulated the model into producing a particular response.
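To make that mechanism concrete, here is a deliberately crude sketch, in Python, of what “predicting the next word” means. The three-sentence corpus and the simple word-pair tallies are invented for illustration only; real systems such as ChatGPT rely on neural networks trained on vast swaths of text, not counts like these.

    # A toy next-word predictor. Hypothetical corpus; for illustration only.
    from collections import Counter, defaultdict

    corpus = (
        "the court heard the case . "
        "the court issued the ruling . "
        "the ruling shaped the law ."
    ).split()

    # Count how often each word follows each other word in the corpus.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(start, length=8):
        # Repeatedly append the most common next word, crudely mimicking
        # how a language model extends a prompt one token at a time.
        words = [start]
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))
    # Prints "the court heard the court heard the court heard" -- every word
    # came from the corpus, yet the sequence itself is newly assembled.

Even this toy captures the puzzle: every word the generator emits was contributed by someone else, yet the sequence it produces is its own.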


So far there is no legal clarity. During oral argument in a recent Section 230 case, Supreme Court Justice Neil M. Gorsuch mused that today’s AI “generates polemics,” producing content that goes beyond “picking, choosing, analyzing or digesting content,” and hypothesized that such output is “not protected.” Last week, the law’s authors agreed with the thrust of his analysis. But the companies working on this next frontier deserve a firmer answer from lawmakers. And to figure out what that answer should be, it’s worth looking back at the history of the internet.

Scholars credit Section 230 with the internet’s tremendous growth in its early years. Without it, endless lawsuits might have prevented fledgling services from developing into networks as indispensable as Google or Facebook. That’s why many call Section 230 the “26 words that created the internet.” The problem is that, in hindsight, many now think this lack of consequences not only allowed the internet to grow but also allowed it to spiral out of control. With AI, the country has a chance to put the lessons it has learned into practice.

That lesson should not be to preemptively strip large language models of Section 230’s immunity. After all, it was good that the internet could grow, even if its ills grew with it. Just as websites could not have expanded without Section 230’s protection, these products cannot hope to provide a wide variety of answers on a wide variety of topics in a wide variety of applications (which is exactly what we should expect of them) without legal protection. But neither can the United States afford to repeat its biggest mistake in internet governance, which was not to govern much at all.


Lawmakers should give the new AI models the temporary haven of Section 230 while watching what happens as the industry starts to boom. They should work through the puzzles these tools create, such as who is liable in a defamation case if the developer is not. They should track the complaints that arise, including court cases, and assess whether changes to the immunity regime could have averted them. In short, they should let the internet of the future grow, just as they let the internet of the past grow. But this time, they should pay attention.

The Post’s View | About the Editorial Board

Editorials represent the views of The Post as an institution, as determined through debate among members of the Editorial Board, based in the Opinions section and separate from the newsroom.

Members of the Editorial Board and areas of focus: Opinion Editor David Shipley; Deputy Opinion Editor Karen Tumulty; Associate Opinion Editor Stephen Stromberg (national politics and policy, legal affairs, energy, environment, public health); Lee Hockstader (European affairs, based in Paris); David E. Hoffman (global public health); James Hohmann (domestic politics and electoral politics, including the White House, Congress and governors); Charles Lane (foreign affairs, national security, international economics); Heather Long (economics); Associate Editor Ruth Marcus; and Molly Roberts (technology and society).
