Imagine a more responsible internet

Genaro Molina/Los Angeles Times via AP, file

Reynaldo Gonzalez cries while remembering his daughter Nohemi Gonzalez, who was killed by Islamic State gunmen in Paris, at her funeral at Calvary Chapel in Downey, California, on December 4, 2015. Her death is at the center of a closely watched Supreme Court case heard on Tuesday, February 21, 2023.

US Supreme Court Justice Elena Kagan drew laughter in court last week when she spoke candidly about her own and her colleagues’ limited grasp of the internet. “We are a court – we really don’t know anything about these things. We’re not like the nine greatest experts on the internet,” she said.

The comment drew laughs, but it is no laughing matter. The interpretation of the applicable law urgently needs updating to reflect evolving technology, and two cases argued before the court last week will shape liability for online communications for years to come. Experts or not, these nine justices will decide whether the online marketplace of ideas becomes an impregnable haven for all manner of libel, slander, and criminal communications.

At the heart of the cases is whether tech companies are legally responsible for content posted by third-party users on their websites and then distributed by algorithms designed to promote that content to the specific users most likely to engage with it. We believe they should be, provided the distinctions are carefully drawn.

After his daughter was killed in an ISIS attack in 2015, Reynaldo Gonzalez sued Google, the Alphabet subsidiary that owns YouTube, over the role of its personalized recommendation algorithms in promoting ISIS recruitment videos.


The current law, Section 230 of the Communications Decency Act, enacted as part of the Telecommunications Act of 1996, was designed to shield companies from almost any liability for content posted by third-party users. According to the law, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This interpretation was, at the time, a natural outgrowth of common carrier law, which protected phone companies from liability for calls connected to criminal activity. The logic was that phone companies were not liable for the specific content of conversations: phone lines were merely conduits through which conversations traveled, and the companies did not themselves speak, control, disseminate, or otherwise publish the words.

Gonzalez argues that technology has changed, and that the targeted, personalized algorithms used by companies like Alphabet and Meta, the parent company of Facebook and Instagram, take affirmative action to distribute specific content to specific users. In other words, by using algorithms that effectively read, understand, and deliver content to particular users, online platforms are no longer mere conduits for conversation and should not enjoy blanket immunity.
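To make that distinction concrete, here is a minimal sketch in Python, with all names invented (Post, conduit_feed, engagement_feed, and the scoring function are hypothetical, not any company’s actual system), of the difference between a passive conduit and the kind of engagement-driven recommender the plaintiffs describe:

```python
# Hypothetical sketch: a passive conduit vs. an engagement-ranked feed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str

def conduit_feed(posts: list[Post]) -> list[Post]:
    """A common-carrier-style feed: relay everything, newest first,
    with no judgment about which user should see which post."""
    return list(reversed(posts))

def engagement_feed(posts: list[Post],
                    predicted_engagement: Callable[[Post], float]) -> list[Post]:
    """An engagement-driven feed: the platform scores each post for a
    specific user and promotes whatever it predicts that user will
    engage with most -- the affirmative act the plaintiffs describe."""
    return sorted(posts, key=predicted_engagement, reverse=True)
```

The legal question, in effect, is whether the ranking step in the second function makes the platform a publisher of whatever it chooses to promote.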

In a separate but related case, the family of Nawras Alassaf, another ISIS victim, is seeking to hold Twitter, Alphabet, and Meta accountable for failing to remove ISIS content from their platforms, a failure they claim contributed to the group’s growth and success.

Both families note that online platforms are already responsible for removing illegal material, such as child pornography and copyright-infringing content, from their sites. It is not unreasonable to extend those responsibilities to the removal, rather than the distribution, of materials that solicit and organize on behalf of terrorists and other violent extremists, and to allow liability when platforms fail to do so.


The families’ arguments are persuasive, and the court should accept them. Congress has already carved out exceptions to Section 230 for speech that clearly violates federal law, such as human trafficking, child pornography, and international drug trafficking. Those exceptions should extend to active recruitment and conspiracy by criminal organizations.

However, the court should go further and recognize that, unlike platforms that provide nothing more than a conduit or “bulletin board” for user-generated content, any platform that actively promotes content to users who did not ask for it is acting as a publisher. As such, it should be fully liable for any content redistributed without a target user’s express request, consent, or “subscription.”

In this model, users could still be presented with relevant content by following or subscribing to specific content creators, publications, or topic index tags assigned by the creators themselves. The number of available tags would be limited and used only for basic indexing of content, much as public libraries index their collections with the Dewey Decimal system.
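As a rough illustration only (the tag vocabulary, types, and function names below are invented, a sketch of the proposal rather than a description of any existing platform), such a feed might look like this:

```python
# Hypothetical sketch of the subscription-and-tag model described above.
from dataclasses import dataclass, field

# A small, fixed vocabulary of index tags, akin to Dewey Decimal classes.
ALLOWED_TAGS = {"news", "sports", "science", "music"}

@dataclass
class Post:
    author: str
    text: str
    tags: set[str] = field(default_factory=set)     # chosen by the creator

@dataclass
class Subscriber:
    follows: set[str] = field(default_factory=set)  # creators followed
    topics: set[str] = field(default_factory=set)   # tags opted into

def subscription_feed(user: Subscriber, posts: list[Post]) -> list[Post]:
    """Show only content the user expressly opted into: posts from
    followed creators or posts carrying a subscribed index tag.
    Nothing is pushed to the user without consent."""
    return [p for p in posts
            if p.author in user.follows
            or (p.tags & ALLOWED_TAGS) & user.topics]
```

Note that there is no scoring or ranking step at all: the platform’s only role is to match content to explicit subscriptions.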

Imagine logging into Facebook, Twitter, or Instagram and seeing only content that comes from your friends, from people or organizations you have subscribed to, or that relates directly to topics you have registered an interest in, a list you could change at any time.

The ads wouldn’t go away, but you would have more control over them: advertisers could target ads only via tags, posts, or content creators, instead of aiming them at you based on an invasive profile of your personal activity. This would change the internet’s advertising model, but it would not “eliminate innovation,” “destroy the internet,” or bring about the other exaggerated outcomes predicted by those with a financial stake in the current model.
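Under the same assumptions (invented names, a sketch rather than a working ad system), tag-based ad matching would be equally simple:

```python
# Hypothetical sketch: ads matched to opted-into tags, not to a
# behavioral profile of the user.
from dataclasses import dataclass

@dataclass
class Ad:
    sponsor: str
    tags: frozenset[str]  # the advertiser buys topics, not people

def ads_for(subscribed_tags: set[str], ads: list[Ad]) -> list[Ad]:
    """Select ads solely by overlap with tags the user subscribed to."""
    return [ad for ad in ads if ad.tags & subscribed_tags]
```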


Most importantly, content producers would be responsible for the content they produce or distribute. Existing libel and defamation laws could be enforced. Online bullies could be identified and held accountable. And propaganda and spam farms would be far less effective, because they would no longer have algorithms helping to spread their lies.

This model would inoculate society against the worst of the internet without creating an overwhelming impediment to innovation, vigorous debate, or the marketplace of ideas, which were the primary concerns of Section 230’s authors, Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif., when they proposed the law.

Overall, this would be a huge step toward restoring accountability and responsibility to what too much of the internet has become: a cesspool. And it would make it far harder for bad actors to use the internet to achieve their ends.