4 in 5 in S’pore encounter harmful content online; 2 in 3 call for stronger laws: MDDI surveys


SINGAPORE – More than four in five Singapore residents encountered harmful online content such as scams and cyber bullying in the past year.

This is one of the key findings of the Perceptions of Digitalisation Survey, conducted by the Ministry of Digital Development and Information (MDDI) between November 2024 and February 2025. The results were released on Oct 10.

Content supporting illegal activities such as scams or the sale of prohibited items was the most frequently encountered, followed by sexual, violent and cyber-bullying materials as well as posts that cause racial or religious tension.

A similar survey by MDDI in 2024 found that three-quarters of respondents had experienced harmful content online, up from 65 per cent in 2023.

The most recent survey polled 2,008 Singapore citizens and permanent residents aged 15 and above, with a representative sample of the population by gender, age and race.

Respondents found harmful content mostly on Meta’s Facebook, followed by YouTube, Instagram, TikTok, WhatsApp and Telegram.

When queried, an MDDI spokesperson said the 2024 and 2025 polls differed in methodology and the questions asked. For instance, the 2024 poll asked respondents about harmful content they had encountered online in the previous six months, while the latest survey covered the past year.

MDDI’s 2025 Perceptions of Digitalisation Survey also asked about harmful behaviour experienced online, of which catfishing was the most prevalent.

Specifically, 71 per cent of those who encountered harmful behaviour were victims of catfishing, the practice of luring someone into a relationship with a fake persona. These attempts were mostly carried out over WhatsApp and Facebook.

A quarter received unwanted sexual messages, while 16 per cent were harassed online.

With the emergence of sophisticated artificial intelligence tools, it has become increasingly challenging to detect deception online, said Associate Professor Natalie Pang, head of the communications and new media department at the National University of Singapore (NUS).

She added that the public needs to be more vigilant and do basic checks, such as verifying information about someone trying to get in touch, or checking someone’s digital footprint across several platforms.

Most of those who encountered harmful online content did not report the offending users or material to the platform, MDDI said in a statement on Oct 10. Instead, many simply skipped or closed the offending content.

As for those who experienced harmful online behaviour, nearly 80 per cent blocked the user responsible, and around half reported the content or user to the platform.

Dr Chew Han Ei, senior research fellow at the Institute of Policy Studies (IPS), said that many people did not report what they saw as they were either unsure of what to do or thought that reporting would not help.

“When nothing happens after they report something, they just give up,” he said.

The numbers could point to a culture of desensitisation and differing thresholds for, say, the sharing of topless or suggestive photos, said the head of governance and economy at IPS.

“Over time, that normalisation breeds desensitisation, where harmful content no longer shocks or prompts action from some segments of the population.”

The Government has been working with industry partners and the community to build a safer online environment, and strengthening legislation to protect Singaporeans, said MDDI.

The Online Safety Commission, which is expected to start operating in 2026, aims to help victims of online harms get recourse faster, especially victims of cyber bullying, deepfakes and the non-consensual sharing of intimate images.

To empower the Online Safety Commission, the Online Safety (Relief and Accountability) Bill is expected to be tabled in Parliament in the next few weeks.

The setting up of this commission should make a difference, as people would know who to turn to and platforms would know what is expected of them, said Dr Chew.

Prior to the Bill, Codes of Practice for Online Safety for social media services and app distribution services were introduced in 2023 and 2025 respectively, to minimise users' exposure to harmful content.

The Code of Practice for Online Safety for Social Media Services, for one, requires major social media platforms, including Facebook and TikTok, to provide age-appropriate safety features, parental controls and mechanisms for users to report harmful content.

The Code of Practice for Online Safety for App Distribution Services, on the other hand, requires app stores to screen and prevent users aged below 18 from downloading apps meant for adults, such as dating apps or those with sexual content.

A separate MDDI survey, the Smart Nation Policy Perception Survey, polled 2,008 residents aged 15 and above between March and May. It found that two in three respondents supported stronger regulation to protect users from online harms, even if that resulted in less freedom for users online.

“This shows that a clear majority accept that some regulatory restraints are necessary to protect Singaporeans from online harms like scams, inappropriate content and misinformation,” said the ministry.

“In order to help individuals better recognise online risks and cultivate safer digital habits, we will improve public education and outreach to make online safety resources more accessible, practical and action-oriented.”
