Britain proposes laws to regulate social media

It calls for a regulator to define a code of practice, with penalties that include hefty fines

Protesters at an opposition rally in central Moscow last month calling for Internet freedom in Russia. PHOTO: AGENCE FRANCE-PRESSE

Britain has proposed new regulations to hold technology companies accountable for harmful online content published on their platforms, in a bid to combat the spread of terrorist propaganda and child abuse.

The government released a policy paper yesterday detailing its approach, which will end the era of self-regulation.

It calls for a regulator, funded by the technology industry, to define a "code of practice". The regulator - it has not yet been decided whether a new or an existing body will take on the role - will have a suite of powers to take action against companies in breach of their statutory duty of care.

These powers will include imposing hefty fines and holding senior managers directly liable.

The watchdog will also be empowered to force Internet service providers to block websites that flout the rules.

The aim is to tackle a wide range of damaging Web content, from the encouragement of suicide to disinformation and cyber bullying.

"The Internet can be brilliant at connecting people across the world, but for too long, these companies have not done enough to protect users, especially children and young people, from harmful content," British Prime Minister Theresa May said in a statement yesterday.

She added that the government is "putting a legal duty of care on Internet companies to keep people safe".

"Online companies must start taking responsibility for their platforms and help restore public trust in this technology," she added.

A public consultation period on the proposals will be held for 12 weeks.

The new rules will apply to any company that allows users to share or discover user-generated content, or interact with one another online. This could range from social media platforms to discussion forums and messaging services.

The move mirrors steps taken by countries such as Germany and, more recently, Singapore and Australia, to counter online falsehoods and prevent the spread of harmful content.

The steps are timely, given Facebook's failure to prevent the live-streaming of terrorist attacks at two mosques in New Zealand.

Online material related to self-harm and suicide has also come into the spotlight after 14-year-old British student Molly Russell committed suicide in 2017. Her family attributed her death to the disturbing material on depression and suicide found on her Instagram account.

Digital Secretary Jeremy Wright said: "The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.

"Tech can be an incredible force for good, and we want the sector to be part of the solution in protecting their users."

But critics question the broadness and ambiguity of the proposals. For one thing, they say it will be tough to regulate misinformation that is not illegal but still harmful.

Mr Mark Bunting, partner at Communications Chambers, a consultancy specialising in telecoms, media and technology, told The Straits Times: "The question is whether the UK government can pull off a trick that has eluded most attempts to regulate online content: To incentivise platforms to address potential harms associated with online communication without encouraging undue censorship and undermining platforms' benefits to free expression and innovation.

"The platforms will be worried about the risk of heavy-handed, directive regulation that is unlikely to be future-proof nor recognise the need for different platforms to take different approaches to different issues.

"A lot of this will depend on the proposed independent regulator's mandate and how it approaches its task."

He added: "This isn't clear yet, so there is still a lot of work to be done."


The global fight against fake news and violent content online

Governments around the world are grappling with how to tackle false information and violent or extremist content online and on social media. Here is a look at what some countries are doing.

SINGAPORE

On April 1, Singapore proposed a law to combat online fake news. Under the draft law, those who spread online falsehoods with a malicious intent to harm the public interest can face jail terms of up to 10 years.

Internet platforms, including social media sites such as Facebook and Twitter, will also be required to act swiftly to limit the spread of falsehoods by displaying corrections alongside such posts or removing them. Failure to comply can result in fines of up to $1 million. Individuals can also be directed to put up similar corrections, and can be fined up to $20,000 and jailed for up to 12 months if they refuse to do so.

MALAYSIA

Malaysia, led by the previous Barisan Nasional (BN) coalition, was among the first few countries to introduce an anti-fake news law. The law, passed in April last year, makes it a crime to maliciously create or spread fake news. Anyone found guilty can be imprisoned for up to six years and fined as much as RM500,000 (S$165,250).

The new Pakatan Harapan government, which ousted the BN from power in last May's general election, has pledged to repeal the law, but the move was blocked by the opposition-led Senate last September.

AUSTRALIA

On April 4, Australia said it will fine social media and Web-hosting companies and imprison executives if violent content is not removed "expeditiously", Reuters reported. Under the new law, companies can face fines of up to 10 per cent of their annual global turnover, while executives can be jailed for up to three years if they fail to promptly remove videos or photographs showing terrorism, murder, rape or other serious crimes.

EUROPEAN UNION

European lawmakers were yesterday set to vote on new online terror content removal rules under which technology companies such as Facebook, Twitter and Google could face fines if they fail to eliminate terrorist propaganda from their sites quickly enough, Bloomberg reported.

GERMANY

The country's law requiring social media companies to quickly remove illegal content such as hate speech, child pornography, terror-related material and false information from their sites was passed in 2017 and took full effect in January last year.

Sites are given a 24-hour deadline to remove banned content or face fines of up to €50 million (S$76.2 million).

FRANCE

Last October, France passed two anti-fake news laws to rein in false information during election campaigns, following allegations of Russian meddling in the 2017 presidential vote, Agence France-Presse reported. They enable a candidate or political party to seek a court injunction preventing the publication of "false information" during the three months leading up to a national election.

They also give France's broadcast authority the power to take any network that is "controlled by, or under the influence of a foreign power" off the air if it deliberately spreads false information.

RUSSIA

Last month, President Vladimir Putin signed into law tough new fines for Russians who spread what the authorities regard as fake news or who show "blatant disrespect" for the state online, Reuters reported.

The authorities may block websites that do not meet requests to remove inaccurate information. Individuals can be fined up to 400,000 roubles (S$8,320) for circulating false information online that leads to a "mass violation of public order".


A version of this article appeared in the print edition of The Straits Times on April 09, 2019, with the headline Britain proposes laws to regulate social media.