Social media firms face fines, blocking in S'pore under new Bill to tackle online harm


SINGAPORE – Social media services with significant reach in Singapore will need to implement measures to limit local users' exposure to harmful content and be more accountable to users, as part of a slate of measures under the Online Safety (Miscellaneous Amendments) Bill tabled in Parliament on Monday.

Failure to do so may attract a fine of up to $1 million, or a direction to have their services blocked in Singapore.

Under the Bill, the Infocomm Media Development Authority (IMDA) will be empowered to issue orders to block or take down egregious content in the event that it is accessed by local users on major social media platforms. These orders will not be issued for private communications.

Egregious content includes posts advocating suicide, self-harm, child sexual exploitation, terrorism and materials that may incite racial or religious tensions or pose a risk to public health.

The Bill introduces a new part to the Broadcasting Act aimed at regulating online communication services, which include major social media platforms such as Facebook, Instagram and TikTok.

Parliament will debate the Bill at its second reading, slated for November.

The Ministry of Communications and Information (MCI) said the Bill comes amid widespread acceptance that online services have a responsibility to keep their users safe from online harms.

"While some online services have made efforts to address harmful content, the prevalence of harmful online content remains a concern, given the high level of digital penetration and pervasive usage of online services among Singapore users, including children," said MCI in a statement.

Platforms with "significant reach or impact" in Singapore may be designated as regulated online communication services, and be required to comply with a draft Code of Practice for Online Safety expected to be in force in the second half of 2023.

The proposed measures under the draft code, issued on Monday, received public support during a month-long consultation that ended in August. The code may be updated following further industry consultation.

Under the code, regulated online platforms will need to put in place measures to prevent users, especially children under 18, from accessing harmful content.

The measures include tools that allow children or their parents to manage their safety on these services. The firms will also need to provide practical guidance on what content presents a risk of harm to users and simple ways for users to report harmful content and unwanted interactions.

Social media platforms are also expected to be transparent about how they protect local users from harmful content, providing information that reflects users' experience on their services so users can make informed decisions.

To ensure the rules are followed, MCI said that the online services may need to undergo audits, report to the IMDA on the safety measures they have implemented for local users, and conduct risk assessments.

The firms will also need to work with researchers approved by the IMDA to help the authority understand the nature and severity of online harms on their platforms.

The Bill follows a public consultation with some 600 parents, young people, industry groups and other respondents between July 13 and Aug 10.

In its findings, MCI said that most respondents supported its proposals to enhance online safety for users here, especially the young, but sought assurances over privacy and freedom of speech.
