LONDON (AP) – The UK government has abandoned a plan to force tech firms to remove harmful but legal internet content after the proposal drew heavy criticism from lawmakers and civil rights groups.
The UK on Tuesday defended its decision to water down the Online Safety Act, an ambitious but controversial attempt to crack down on online racism, sexual abuse, bullying, fraud and other harmful material.
Similar efforts are underway in the European Union and the United States, but the U.K.'s has been among the most comprehensive. In its original form, the bill gave regulators sweeping powers to sanction digital and social media companies such as Google, Facebook, Twitter and TikTok.
Critics had expressed concerns that requiring the largest platforms to remove “legal but harmful” content could lead to censorship and undermine freedom of expression.
Prime Minister Rishi Sunak’s Conservative government, which took office last month, has now dropped that part of the bill, saying it risked “over-criminalizing” online content. The government hopes the change will be enough to get the bill, which has languished in Parliament for 18 months, passed by mid-2023.
Digital Secretary Michelle Donelan said the change removed the risk that “technology companies or future governments could use the laws as a license to censor legitimate views.”
“It was creating a quasi-legal category between illegal and legal,” she told Sky News. “That is not what a government should be doing. It is confusing. It would create a different set of rules online than offline in the legal sphere.”
Instead, the bill says companies must set clear terms of service and abide by them. Businesses are free to allow adults to post and view offensive or harmful material as long as it is not illegal. But platforms that pledge to ban racist, homophobic or other objectionable content and then fail to live up to the promise can be fined up to 10% of their annual revenue.
The legislation also requires companies to help people avoid content that is legal but potentially harmful — such as material promoting eating disorders or misogyny — through warnings, content moderation or other means.
Companies must also demonstrate how they enforce user age limits designed to prevent children from viewing harmful material.
The bill still criminalizes some online activities, including cyberflashing — sending unwanted explicit images — and epilepsy trolling — sending flashing images that can trigger seizures. It also makes it a criminal offense to assist or encourage self-harm, a move that follows a campaign by the family of Molly Russell, a 14-year-old who took her own life in 2017 after viewing self-harm and suicide content online.
Her father, Ian Russell, said he was relieved the stalled law was finally moving forward. But he said it was “very difficult to understand” why protections from harmful material had been watered down.
Donelan emphasized that “legal but harmful” material is permitted only for adults, and that children remain protected.
“The content that Molly Russell saw will not be allowed under this bill,” she said.