Singapore cyber-security firm Ensign rolls out detection tool for deepfakes on platforms, apps


Mr Lee Joon Sern, vice-president of machine learning and cloud research at Ensign Labs, giving a demonstration of Aletheia, a real-time deepfake detection tool developed by Ensign, on Sept 17, 2024.

ST PHOTO: KEVIN LIM


SINGAPORE – Like antivirus software, deepfake detectors can be installed on a computer to flag videos that have likely been manipulated by artificial intelligence (AI).

Singapore cyber-security firm Ensign InfoSecurity, which supports more than 800 clients in at least 20 countries, has developed such a tool to alert users in real time to deepfakes used on video-conferencing platforms or hosted elsewhere online.

Ensign was set up in 2018 as part of a joint venture between Temasek and StarHub to offer bespoke cyber-security services to enterprises and governments globally.

Aletheia – as Ensign InfoSecurity’s new solution is called – is named after the Greek goddess of truth. It scans for signs of deepfake videos and audio on a user’s screen, such as videos on YouTube or a video-conference call.

The detector, which comes in the form of a Chrome browser plug-in or a separate app, can flag likely deepfakes within seconds.

Aimed at enterprise users, Aletheia, which was launched on Sept 17, adds to a growing range of detection services offered by tech and cyber-security firms as high-quality deepfake software becomes cheaper and more widely available, increasing the likelihood of fraud.

Notably, a widely circulated deepfake video in 2022 showed Ukrainian President Volodymyr Zelensky calling on soldiers to surrender to Russia, while another from early 2024 depicted pop star Taylor Swift endorsing US presidential candidate Donald Trump.

In 2023, executives in Hong Kong transferred some HK$200 million (S$34 million) to fraudsters who used AI to mimic the likeness of their chief financial officer to approve a transaction, sparking concerns about businesses being targeted, especially through video-conference calls.

This has been an area of concern for businesses, especially for those who make transactions online or only meet clients virtually, said Mr Lee Joon Sern, vice-president of machine learning and cloud research at Ensign Labs, Ensign InfoSecurity’s research and development branch.

Ensign’s Aletheia analyses videos for awkward facial and body movements and flickering, among other telltale signs of AI-manipulated footage, said Mr Lee.

In a demonstration for the media at Ensign’s office in Kallang on Sept 17, the detector was shown in action: It highlights likely deepfakes through onscreen notifications and automatically saves a recording of the footage as evidence.

Users can activate the software whenever they access audio or video content to detect whether it has been manipulated by AI.

As the detector works in the background, users do not need to record suspected content and upload it to another app, he said. Such a process is ineffective because it relies on users first suspecting that they are encountering a deepfake, which is increasingly difficult as the technology becomes more convincing.

Detection is processed on the user’s device instead of in the cloud, as potential customers had raised cyber-security concerns about video-conference footage being shared externally, Mr Lee said.

The detector’s accuracy rate for videos is up to 90 per cent, and Aletheia can spot deepfakes made by most AI generators, barring highly sophisticated ones produced for the likes of movies, he added.

Mr Lee cited the Star Wars films, which used computer-generated renderings of real actors, including a deepfake of actress Carrie Fisher, who played Princess Leia.

The detection software will continue to be improved to keep up with the increasing realism of deepfakes, he added.

Mr Lee did not state the price of the detection tool, saying it would be decided on a case-by-case basis.

Head of Ensign Labs Tan Ah Tuan declined to reveal which of Ensign’s customers had signed on to the service but said companies in the finance and energy sectors have indicated interest in the product.

Ensign is also in talks with the authorities on how they can work with the public sector to protect the public from deepfake scams, he said.

Ensign joins a slew of tech developers that have rushed to roll out deepfake detection tools to deal with the risks of AI-generated misinformation and fraud.

On Sept 3, Singapore’s ST Engineering launched its Einstein.AI deepfake detection tool, which is aimed at financial firms and media companies to scan for misinformation. Users can upload suspicious content on the platform for analysis or run Einstein.AI in the background to scan for AI-generated content.

The Home Team Science and Technology Agency in April showcased a similar detection tool called AlchemiX, which can compare a suspected deepfake recording with a recording of the speaker’s actual voice.
