“No provider or user of an interactive computer service shall be treated as a publisher or speaker of information provided by another information content provider.”
These 26 words helped create the modern internet, for better or for worse. They give platforms like Google, Facebook, Twitter and others nearly unlimited immunity to disseminate information without liability for the substance of that content. Mostly. In fact, these platforms might not exist without those 26 words.
On February 21, 2023, the U.S. Supreme Court heard oral arguments in two cases, Twitter v. Taamneh and Gonzalez v. Google, that could determine the scope and extent of the immunity Congress granted when it passed what is known as “Section 230” of the Communications Decency Act, and how to balance the interests of those harmed by harmful internet content against those of the platforms that disseminate (and often amplify) that content.
Death by publication
The terrorist organization known in the United States as ISIS has been involved in a series of attacks in Paris, France; Istanbul, Turkey; and San Bernardino, California that left victims dead. The families of those victims sought redress under a law providing civil remedies for injuries sustained “as a result of an international act of terrorism.” The families did not sue ISIS but rather Twitter, Facebook and Google – platforms that, the families alleged, disseminated, published, amplified and benefited from content posted by ISIS and ISIS supporters and advocates. The families also argued that the algorithms used by these platforms – which use viewers’ personal information to recommend new content related to their interests – served to highlight and distribute ISIS-related content (from which the platforms benefited), that this amounted to providing material support to the terrorist organization, and that the platforms should be held accountable. The liability theories, the actions of the three platforms and the claims of the multiple plaintiffs all differ somewhat, but in essence the victims’ families claimed that the platforms benefited from amplifying the terrorists’ message and should be held liable for doing so.
But then there are these 26 words.
In 1990, two competing news aggregation and commentary services, Skuttlebut and Rumorville, competed on the newly commercialized internet. When the latter’s operators posted on the online service CompuServe what the former considered defamatory information about them, Skuttlebut’s operators sued. But they didn’t sue Rumorville – they sued the platform where Rumorville posted the allegedly defamatory material: CompuServe. Skuttlebut claimed that the platform published, disseminated and amplified the defamation and should be held liable for it. CompuServe argued that even if the statements were false and defamatory, it simply acted as a distributor, and not the publisher, of the statements and could not be held liable for them because it did not know and had no reason to know of the statements. The New York court agreed, awarding summary judgment to CompuServe.
Four years later, an unidentified poster on a Prodigy online message board called “Money Talks” posted a message about the now-infamous “Wolf of Wall Street” firm Stratton Oakmont and its then-president Daniel Porush (played by Jonah Hill in the movie). The message on the moderated bulletin board called a Stratton Oakmont stock offering a “major criminal fraud” and “100% criminal fraud,” said that Porush was “soon to be proven criminal,” and described Stratton Oakmont as a “cult of brokers who either lie for a living or get fired.”
In that case, the court held that Prodigy could be liable for defamation as a publisher, citing a number of facts, including that Prodigy promulgated “content guidelines” asking posters to refrain from “offensive” notes, warning that notes that harass other members, are in bad taste, grossly violate community standards or are harmful to maintaining a harmonious online community would be removed, and reserving the right to remove objectionable material. The court also noted that content was checked by a software screening program that automatically pre-screened all bulletin board posts for offensive language, that the forums were moderated by real people, and that those moderators also had the ability to delete content. The court found the platform liable as the publisher of the comments about the “Wolf of Wall Street” firm.
In the CompuServe case, the platform was alleged to be liable for the actions of another – the Rumorville defendants. In the Prodigy case, the platform was held responsible for its own actions – for making an incorrect judgment about what to allow to be posted online and what not. At least that’s what the courts found.
In response to the CompuServe/Prodigy pair of cases, Congress did something it rarely does: it acted. Congress passed Section 230 of the Communications Decency Act. The law has repeatedly been read as protecting online platforms from liability not only for the actions of third parties (what they post) but also for the platform’s own actions in deciding what to do with what they post. While not universal, courts have been highly deferential to Section 230 immunity.
That could change with the Supreme Court’s ruling in these cases. My colleague Eric Goldman maintains a blog that has collected virtually every Section 230 case, and he has said he expects the cases to go badly for free speech and the status quo of the internet. The debate has become clearly politicized (D vs. R), with one side believing that “Big Tech” is intentionally “censoring” its speech and demanding that immunity be lifted and that platforms be required to carry content (in effect taking the position that Twitter would be required to post the screeds of ISIS), and the other side wanting to maintain immunity while railing against “Big Tech” for not doing enough to protect society from what it considers “misinformation.”
Expect a lively debate in court.
So here’s the problem. When we hold platforms liable for the content of others, we oblige them to read every single post, tweet, comment, like, etc., and determine whether it is true or false, whether it was posted with or without malicious intent and whether it is likely to cause harm. Inevitably, these platforms would need to develop “misinformation” algorithms to limit what users can post online – and then potentially face liability for denying users access to the platform (not “censorship,” since that term properly applies to government action). So when platforms are held liable for third-party content, they are forced to filter third-party content – effectively, to exercise strong editorial control.
On the other hand, when we tell platforms that they are not responsible for third-party content, we make it harder for victims of malicious online behavior to obtain redress. We make it difficult to remove defamatory or harmful materials. Platforms are not liable for revenge porn posted on their sites, deepfakes, misinformation or organized scam schemes. Nigerian princes and terrorists. Human sacrifice! Dogs and cats living together! Mass hysteria!
A middle position would be that providers are not liable for the actions of third parties (those who post, tweet or comment) but are liable for their own actions. Thus, a website operator who encourages people to defame others, or who intentionally develops an algorithm designed to amplify hate speech, could be held liable. But this pushes platforms closer to the Stratton Oakmont case and further from CompuServe.
Don’t wait for the ruling
At the same time, Congress and various states such as Texas and Florida are passing legislation restricting platforms’ ability to remove objectionable content, while states such as California and New York are taking the opposite approach, mandating the removal of certain objectionable content.
In another approach to content moderation, states have sought to require platforms to be more transparent about their takedown policies. Proposals in California, Georgia and Ohio would require platforms not only to publish their terms of service but also to provide regular reports on what they have done in response to violations – including requiring them to publish otherwise confidential internal information about their takedown policies and procedures.
It seems everyone has a “love-hate” relationship with Big Tech. The Supreme Court can affirm, eliminate or modify the immunity that platforms currently enjoy for disseminating third-party content. Whatever it does will be controversial. And it could destroy the internet as we know it.