About 500 hours of video are uploaded to YouTube every minute. The online video platform hosts more than 800 million videos and is the second most visited website in the world with 2.5 billion monthly active users.
With the deluge of content flooding the site every day, one would guess that YouTube must have an army of people standing up to the spread of misinformation, especially after the January 6, 2021, riot, which was fueled by lies spread on social media.
Well, guess again.
After recent cuts, only one person is responsible for misinformation policy worldwide, according to a recent report in the New York Times. That is alarming, as fact-checking organizations have identified YouTube as a major conduit of disinformation and misinformation.
YouTube is owned by Google. The layoffs were part of broader cuts by Alphabet, Google’s parent company, which shed 12,000 jobs to boost profits that totaled roughly $60 billion last year.
YouTube isn’t the only social media company to relax some of the already limited safeguards put in place in the wake of the Russian disinformation campaign that helped lead to Donald Trump’s election in 2016.
Meta, which owns Facebook, Instagram and WhatsApp, cut 11,000 jobs last fall and is reportedly preparing for more layoffs.
Those cuts came as Facebook, which made $23 billion last year, quietly scaled back its efforts to prevent foreign interference and voting misinformation ahead of last November’s midterm elections.
Facebook also dropped an investigation into how lies are amplified in political ads on the social media site and indefinitely banned a team of researchers from New York University from the site.
Twitter made even deeper cuts, laying off 50% of its employees days before the November midterm elections, including workers responsible for preventing the spread of misinformation. Further layoffs hit the company’s trust and safety team in January.
It’s not just political misinformation that misleads and divides the public. Twitter has also ended its ban on COVID-19 misinformation, a move likely to result in more needless deaths.
Hate speech has also exploded on Twitter since Elon Musk bought the company for $44 billion in October.
In the weeks after Musk took control of Twitter, antisemitic posts rose more than 61%. Slurs against Black people increased by more than 200%, while slurs against gay men rose 58%. Online hate has been linked to a rise in violence against people of color and immigrants around the world.
But Musk says he’s a free speech absolutist, except when it affects him. The billionaire temporarily suspended the accounts of several journalists and blocked others who rebuked him on Twitter. He also fired employees at SpaceX, another of his companies, who criticized him.
More to the point, Musk fails to understand that freedom of speech is not absolute. As much as this board supports and appreciates the First Amendment, there are legal limits on what can be said.
For example, you cannot harass or defame others. Just ask Alex Jones. The conspiracy theorist and Infowars founder has been ordered to pay nearly $1 billion in damages to the families of eight victims of the Sandy Hook Elementary School shooting for repeatedly and falsely claiming the massacre was a hoax.
Certainly, the First Amendment makes it harder to regulate social media companies. But doing nothing is not the answer. The use of artificial intelligence to build sophisticated chatbots like ChatGPT, along with deepfake technology, will exacerbate the spread of fake news and further threaten democracy. Policymakers will soon have to strike a balance between the First Amendment and social media regulation.
Texas and Florida have already muddied the regulatory debate by passing laws that would upend social media companies’ already limited content moderation efforts and make the internet even more of a free-for-all. The U.S. Supreme Court has delayed hearing the cases, leaving the state laws in limbo for now.
Meanwhile, the European Union is pushing ahead with its own landmark regulation, the Digital Services Act. The measure, which comes into effect next year, aims to impose significant content moderation requirements on social media companies to curb false information, hate speech and extremism.
The spread of false information and disinformation is a growing threat to civil society. Social media companies cannot ignore their responsibilities.