2 in 3 encountered harm online, but half of those harmed did not block offending content: Survey
SINGAPORE – Two-thirds of Singapore Internet users encountered harmful content online, but nearly half of those who experienced harm neither blocked the offending content or users nor reported them to the hosting platforms, a survey by the Ministry of Communications and Information (MCI) found.
Even among those who reported the harm, the majority faced issues with the reporting process offered by tech platforms such as Facebook, HardwareZone, Instagram, TikTok, X (formerly known as Twitter) and YouTube.
In the survey conducted online in May, 2,106 Singapore users aged 15 and above were asked whether they had encountered harm in the previous six months.
It found that the most common types of harmful content were related to cyberbullying, sexual content, illegal activities, racial or religious disharmony, violence and self-harm.
Nearly half of those who experienced harm online said they did nothing about it because it did not occur to them to do so, or because they were unconcerned about the content.
Most of the harmful content was hosted on social media platforms and online forums, followed by messaging apps, search engines and e-mails.
Among users who reported harmful online content to the platforms, more than three-quarters indicated that they faced issues with the reporting process.
Key issues highlighted by users included platforms not taking down the harmful content or disabling the accounts responsible, taking too long to act, and failing to provide updates after reports were made.
The survey found that 88 per cent of respondents were aware of at least one privacy tool available on social media services. Awareness was highest for tools that let users control access to their profile information or content, and block other users from finding or contacting them.
The survey, which included 515 parent respondents, found that half of them had used parental controls to restrict the types of content that could be accessed by their children, but usage was lower for other child safety tools.
These other tools include parent-child linked accounts to allow parents to monitor children’s online activity, children-only accounts that come with restricted content, and filtering tools offered by Internet service providers to block access to age-restricted sites.
To help parents manage the harms their children face online, MCI launched an Online Safety Digital Toolkit in March in partnership with Google, Meta, ByteDance and X. This toolkit recommends parental controls, privacy and reporting tools, and self-help resources for individuals and parents to manage their own or their children’s safety online.
An inter-ministry toolkit is being developed by MCI, the Ministry of Education, and the Ministry of Social and Family Development, and is expected to be launched in phases from early 2024.
Laws have also been tightened over the past few months to tackle harm online.
In July, the Online Criminal Harms Act was passed to tackle online activities that are criminal in nature.
A new code of practice for app stores will address risks associated with harmful content in online games, possibly through a classification system for such games. The Republic will also look at how children’s personal data is collected and how data can be used in artificial intelligence systems.
The code of practice for app stores will complement the Code of Practice for Online Safety, which took effect in July. Under the code, social media firms with significant reach, such as Instagram and Facebook, must put in place systems to limit Singapore users’ exposure to harmful content, including content related to suicide and self-harm, cyberbullying, and the promotion of terrorism.
Correction note: An earlier version of this story said the survey involved 2,107 Singapore users. It should be 2,106 users. We are sorry for the error. MCI has also clarified that under the Code of Practice for Online Safety, social media firms with significant reach must put in place systems to limit Singapore users’ exposure to harmful content, including content related to suicide and self-harm, cyberbullying, and the promotion of terrorism. This has been updated in the story.