Britain proposes tougher new online safety law


LONDON (AFP, BLOOMBERG) - The British government on Thursday (March 17) introduced wide-ranging proposals to improve online safety, outlining new measures to protect users including children from harmful content such as pornography and cyber-bullying.

The proposals in the Online Safety Bill, described by ministers as a “milestone”, include tightening up duty of care requirements on tech firms and penalties for breaches.

If the bill passes through Parliament, companies that fail to comply could face fines of up to 10 per cent of their annual global turnover.

Tech bosses who fail to cooperate and comply would also run the risk of criminal prosecution and jail terms of up to two years, the government said.

Although its central aims and numerous revisions have been in circulation since 2019, the full Online Safety Bill was presented to lawmakers on Thursday, after a first formal draft was published in May.

The bill is intended to make technology companies more accountable for removing illegal material from their platforms. This includes content that promotes terrorism or suicide, revenge pornography, and child sexual abuse material. Harmful and adult content is also covered by the bill.

Also part of the online harms bill:

- A requirement for age verification on all websites that host pornography.

- A measure to combat anonymous trolling, or abuse and unwanted contact on social media.

- The criminalisation of so-called cyber-flashing.

- A requirement for companies to report child sexual abuse material to the UK's National Crime Agency.

- The right given to users to appeal to platforms if they think their posts have been taken down unfairly.

Companies will be required to show how they proactively tackle the spread of such content. The communications regulator, Ofcom, will be given the power to enter offices and inspect data and equipment to gather evidence, and it will be a criminal offence to obstruct an investigator, the Department for Digital, Culture, Media and Sport said in a statement.


Digital Secretary Nadine Dorries said tech firms had been “left to mark their own homework” as the Internet developed, and “harm, abuse and criminal behaviour have run riot on their platforms”.

A failure to implement basic online protections risks “sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms”, she added.

Dorries promised that the proposals were “balanced and proportionate” and would not target freedom of expression, after concerns from rights campaigners.

News content will be “completely exempt” from regulation, and social media firms will be required to protect journalism and democratic political debate, she added.

Social media platforms, however, will be required to tackle “legal but harmful” content, such as harassment and material promoting self-harm and eating disorders.

Ian Russell, whose 14-year-old daughter Molly killed herself in 2017 after viewing graphic self-harm and suicide material on Instagram, gave his support.

He said the Online Safety Bill was “another important step towards ending the damaging era of tech self-regulation” to protect users, especially children.

Confederation of British Industry chief policy director Matthew Fell said the legislation was “necessary”, but that “in its current form raises some red flags, including extending the scope to legal but harmful content. Not only will this deter investment at a time when our country needs it most but will fail to deliver on the aims of this legislation.”

The bill is likely to undergo several months of further revisions and votes before gaining royal assent and becoming law.

Big Tech businesses like Facebook-owner Meta Platforms are already required to take down illegal content after it has been reported to them. But governments want them to act more quickly. The government is giving Ofcom the responsibility to scrutinise and challenge algorithms and systems inside big technology companies that can propagate harm online - rather than asking officials to chase and litigate on individual bad pieces of content.

The biggest platforms and their apps will be classed as "Category One" and also have to clamp down on legal-but-harmful content, the specifics of which will be added by lawmakers later.
