Rethinking online safety in the age of deepfakes and nudifiers
Artificial intelligence can be weaponised to create sexualised images. It may be time to set up an independent safety watchdog to minimise potential harm.
AI chatbot Grok has been used to “nudify” real people without consent, with images quickly shared across online networks.
PHOTO: AFP
In 2023, Ms Mathilda Huang was horrified to discover deepfake nude images of herself on “seedy websites” and had to spend time pursuing their removal. In 2024, schoolboys from the Singapore Sports School created and circulated deepfake nude images of female students and teachers. And now Grok – a chatbot developed by xAI and available on X and on mobile apps – has been used to “nudify” real people without consent, with images quickly shared across online networks.
As Singapore deepens its commitment to developing and deploying artificial intelligence in Budget 2026 – including plans for a National AI Council, an AI park and national AI initiatives across various sectors – we must confront a parallel reality: AI can also be weaponised. The question is not whether harm will occur with any particular tool, but whether we will act before it does.


