Broken English no longer a sign of scams as crooks tap AI bots like ChatGPT: Experts


SINGAPORE – Bad grammar has long been a telltale sign that a message or a job offer is likely to be a scam.

But cyber-security experts said those days may be over as generative artificial intelligence (AI) chatbots like ChatGPT have helped scammers craft messages in near-perfect language.

Cyber-security experts said they have observed improvements in the language used in phishing scams in recent months – coinciding with the rise of ChatGPT – and warned that end users will need to be even more vigilant for other signs of a scam.

Risks associated with ChatGPT were categorised as an emerging threat in a special-issue report in March by security software firm Norton, which said scammers will tap large language models like it. While not new, the tools are more accessible than before, and can be used to create deepfake content, launch phishing campaigns and write malware, Norton wrote.

British cyber-security firm Darktrace reported in March that since ChatGPT’s release, e-mail attacks have contained messages with greater linguistic complexity, suggesting cyber criminals may be directing their focus to crafting more sophisticated social engineering scams.

ChatGPT is able to correct imperfect English and rewrite blocks of text for specific audiences – in a corporate tone, or for children, for instance. It now powers the revamped Microsoft Bing, which crossed 100 million active users in March and is set to challenge Google for a share of the search-engine pie.

Mr Matt Oostveen, regional vice-president and chief technology officer of data management firm Pure Storage, has noticed the text used in phishing scams becoming better written over the past six months, as the number of cyber attack cases handled by his firm rose. He could not quantify how many cases are believed to have been aided by AI, as it is still early days.

He said: “ChatGPT has a polite, bedside manner to the way it writes... It was immediately apparent that there was a change in the language used in phishing scams.”

He added: “It’s still rather recent, but in the last six months, we’ve seen more sophisticated attempts start to surface, and it is probable that fraudsters are using these tools.”

The polite and calm tone of chatbots like ChatGPT comes across as similar to how corporations might craft their messages, said Mr Oostveen. This could trick people who previously caught on to scams that featured poor and often aggressive language, he added.

Users are 10 per cent more likely to click on phishing links generated by AI, said Mr Teo Xiang Zheng, vice-president of advisory and consulting at Ensign InfoSecurity. The numbers are based on the cyber-security firm’s phishing exercises that made use of content created by AI.

He added that within Dark Web forums, there has been chatter about methods to exploit ChatGPT to enhance phishing scams or create malware.

In December, Check Point Software Technologies found evidence of cyber criminals on the Dark Web using the chatbot to create a Python script that could be used in a malware attack.

Software company BlackBerry reported in February that more than half of the tech leaders it surveyed believe there will be a successful cyber attack attributed to ChatGPT within a year. Most respondents are planning to double down on AI-driven cyber security within two years, said BlackBerry.

Ms Joanne Wong, vice-president of international markets at LogRhythm, said the rise of AI-driven scams is no surprise, as the tools come in handy for syndicates hampered by weak language skills.

“In most cases, they do not have English as a first language nor a strong command of language – and this is why scam messages have, to date, been poorly written,” she noted.

ESET senior research fellow Righard Zwienenberg said AI tools threaten to boost fraudsters’ ability to launch large-scale attacks, including bogus e-mails targeting employees in order to compromise their companies. He added that cyber criminals have already been found using AI-driven tools for malicious purposes.

His team also tested ChatGPT’s ability to create scam messages, which were mostly free of errors and convincing to the untrained eye.

“Grammatical errors may be a thing of the past, thanks to tools like ChatGPT. Still, fortunately, there are other warning signs to alert users to possible scams,” said Mr Zwienenberg.

Recipients should be cautious of e-mails that apply unexpected pressure, request personal information or offer gifts that are too good to be true, he added.

The use of AI in scams comes amid a rise in phishing scams that leverage workplace apps, with fraudsters sending bogus e-mails posing as Microsoft, Google and Adobe.

A scam template written by ChatGPT. PHOTO: ESET

Cyber-security firm Proofpoint said it recorded some 1,600 such campaigns in 2022 across its global customer base.

“While Microsoft was the most abused brand name with over 30 million messages using its branding or featuring a product such as Office or OneDrive, other companies regularly impersonated by cyber criminals included Google, Amazon, DHL, Adobe and DocuSign,” it noted.

Seven in 10 Singapore organisations surveyed by Proofpoint reported an attempted e-mail attack in 2022. The survey did not ask respondents about the amount lost to scams.

But there are still ways people can guard against AI-generated scams – by looking out for suspicious attachments, headers, senders and URLs embedded within an e-mail, it added in a separate report.

Experts urge the public to verify whether a sender’s e-mail is authentic, and encourage firms to conduct ransomware drills and work with consultants to plug gaps in their systems.

Just over half of local firms with a security awareness programme train their entire workforce on scams, and even fewer conduct phishing simulations to prepare staff for potential attacks, said Proofpoint. Familiar logos and branding were enough to convince four in 10 respondents that an e-mail was safe.

The firm’s regional director of cyber-security strategy, Ms Jennifer Cheng, said: “Since the human element continues to play a crucial role in safeguarding companies, there is clear value in building a culture of security that spans the entire organisation.”
