Why you need to be alert to deepfake scams

Deepfake videos have got to a point where it is hard to distinguish between a real image and an AI-generated one.

PHOTO ILLUSTRATION: LIANHE ZAOBAO

The breakneck evolution of artificial intelligence (AI) tools that can generate convincing text, images and even live video is enabling ever smarter and more targeted scams, prompting cyber-security experts to urge internet users to raise their guard.

In recent weeks, a high-profile “romance scam” in France – in which a woman lost over US$840,000 (S$1.14 million) – and fake donation drives for Los Angeles fire victims show that “absolutely everyone, private individuals or businesses, is a target for cyber attacks”, said Mr Arnaud Lemaire, of cyber-security firm F5.

One of the best-known forms of cyber attacks is phishing, the sending of e-mails, texts or other messages under false pretences.

Most of these try to get users to take an action such as clicking a link, installing a harmful program or divulging sensitive information.

Phishing and its social engineering cousin “pretexting” together accounted for more than 20 per cent of almost 10,000 data breaches worldwide, according to the 2024 edition of US telecoms operator Verizon’s industry-staple Data Breach Investigations Report.

Mr Lemaire said AI chatbots powered by large language models save attackers time and allow for more elaborate fake messages.

They also mean that the tell-tale clues that might give away a non-native speaker of the target’s language can be erased: “if someone is writing a phishing e-mail... he can make the clues completely vanish”.

But the text generators are just the tip of the AI iceberg. For instance, AI can “take advantage of all the data that has been breached over the last few years to automate the creation of highly personalised scams”, said Mr Steve Grobman, chief technical officer at security software maker McAfee. This is “something that just a few years ago would not be possible without an army of humans”.

Safe word

Rather than going for a quick score, attackers often aim to gain the trust of select individuals at target firms over months or years.

If an employee is successfully tricked, attackers “might wait until this person becomes very influential or there’s a good chance for them to extort money” before exploiting the connection, said Dr Martin Kraemer of cyber-security training firm KnowBe4.

The stakes were on display in February 2024, when scammers swindled US$26 million out of a multinational firm in Hong Kong.

Police said a finance worker believed he was video-conferencing with the company’s CEO and other employees – when in fact, they were all AI-generated deepfakes.

“The latest generation of deepfake video has got to a point where almost no consumers are able to tell the difference between an AI-generated image and a real image,” McAfee’s Mr Grobman said.

Internet users need to start applying the same scepticism to video as many now do to still images – where “photoshop” has become a verb – he added. Faced with a purported news video online, that could be as simple as checking against a trusted source.

In personal communications, “I almost want to say it’s like BDSM, bondage, where you have a safe word”, F5’s Mr Lemaire joked.

“You say to yourself, here’s the CEO asking me to make a US$25 million bank transfer, I’ll bring something personal in to make sure it’s him.” Other tricks include asking a video caller to pan their camera around – something AI for now has difficulty recreating, Mr Lemaire said.

Horses to automobiles

Mr Grobman said the online scam industry is so lucrative that “just like other businesses... there’s supply chains and an ecosystem of tools to support it”.

Malicious programs for hire include ransomware such as LockBit, which can encrypt data on targets’ computers and threaten to release or delete it unless payment is made. One of its suspected developers was arrested in Israel in December, pending extradition to America.

AI tools include one that allowed a McAfee researcher to replace his own face with that of Hollywood star Tom Cruise in a video for as little as US$5, Mr Grobman said.

However, KnowBe4’s Dr Kraemer said that even with all the new tools, “I’m not too worried on the defence side that we will be overwhelmed by AI”.

It’s “a tool that we can use for attack as well as for defence”, he added.

Nevertheless, the final line of defence remains human for now.

Said Mr Grobman: “When we moved from walking and riding horses to driving automobiles, we needed to change the way we thought about transportation safety… that’s what consumers are going to need today, the same sort of pivot.” AFP
