
Under revisions to the U.K.’s proposed online safety legislation, social media platforms such as Facebook, TikTok, and Twitter won’t be required to remove “legal but harmful” content.
The Online Safety Bill, which aims to regulate the internet, will be revised to remove the controversial but critical measure, British lawmakers announced Monday.
The government said the amendment would help preserve free speech and give people greater control over what they see online.
Critics, however, view the move as a “major weakening” of the bill, one that risks undermining tech companies’ accountability.
Previous proposals would have required tech giants to prevent people from seeing harmful but legal content online, including posts about self-harm, suicide, and abuse.
Under the revised rules, dubbed a “consumer-friendly triple shield,” the onus for selecting content will shift from tech companies to internet users. Platforms will be required to provide tools that let people filter out harmful content they do not want to see.
Crucially, though, firms will still need to protect children and remove content that is illegal or prohibited in their terms of service.
Empowering adults, preserving free speech
U.K. Culture Secretary Michelle Donelan said the new plans would ensure that no “tech firms or future government could use the laws as a license to censor legitimate views.”
In its announcement Monday, the government said it was focusing on the original objectives of the Online Safety Bill: safeguarding children and fighting criminal activity online, while protecting free speech, ensuring tech firms are accountable to their users, and empowering adults to make informed choices about which platforms they use.
The opposition Labour party said the amendment was a “major weakening” of the bill, however, with the potential to fuel misinformation and conspiracy theories.
“Replacing the prevention of harm with an emphasis on free speech undermines the very purpose of this bill, and will embolden abusers, COVID deniers, and hoaxers, who will feel encouraged to thrive online,” Shadow Culture Secretary Lucy Powell said.
Meanwhile, the suicide prevention charity Samaritans said increased user controls should not replace tech company accountability.
“The government feels it has to snatch defeat from the jaws of victory by increasing people’s controls rather than holding sites accountable by law,” said Samaritans’ chief executive Julie Bentley.
The devil in the detail
Monday’s announcement is the latest iteration of the U.K.’s expansive Online Safety Bill, which also includes guidelines on identity verification tools and new criminal offenses to tackle fraud and revenge porn.
It follows months of campaigning by free speech advocates and online protection groups. Meanwhile, Elon Musk’s acquisition of Twitter has thrown online content moderation into renewed focus.
The proposals are now set to return to the British Parliament next week, with the bill intended to become law before next summer.
However, commentators say further honing of the bill is required to ensure gaps are addressed before then.
“The devil will be in the detail. A concern is that Ofcom’s oversight of social media terms and conditions, as well as its requirements for ‘consistency,’ may encourage overzealous removals,” noted Matthew Lesh, director of public policy at the Institute of Economic Affairs.
Communications and media regulator Ofcom will be responsible for much of the enforcement of the new law and will be able to fine companies up to 10% of their worldwide revenue for non-compliance.
“There are also other issues that the government has not addressed,” Lesh continued. To comply with the requirement to remove illegal content, firms must be able to deduce that material is illegal based on a reasonable degree of probability, which he said sets a very low threshold and risks preemptive automated censorship.