State-backed hackers from Russia, North Korea and Iran using ChatGPT

OpenAI and Microsoft said accounts linked to hackers have been shut down as the companies increase efforts to thwart malicious use of the popular AI chatbots by state actors.

PHOTO: REUTERS

NEW YORK - Hackers linked to the governments of Russia, North Korea and Iran have turned to ChatGPT to explore new ways to carry out online attacks, OpenAI and Microsoft said on Feb 14.

OpenAI and Microsoft said accounts linked to hackers have been shut down as the companies increase efforts to thwart malicious use of the popular artificial intelligence (AI) chatbots by state actors.

Microsoft is a major financial backer of OpenAI and uses its AI technology, known as large language models (LLMs), to power its own apps and software.

“The objective of Microsoft’s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT,” Microsoft said.

According to the company, hackers are using LLMs to “advance their objectives and attack technique”.

OpenAI’s services, which include its world-leading GPT-4 model, were used for “querying open-source information, translating, finding coding errors and running basic coding tasks”, the company behind ChatGPT said in a separate blog post.

According to Microsoft, Forest Blizzard, a group linked to Russian military intelligence, turned to LLMs for “research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine”.

The use was “representative of an adversary exploring the use cases of a new technology”, Microsoft added.

Work by Emerald Sleet, which is linked to North Korea, involved research on think-tanks and experts linked to the communist regime, as well as content likely to be used in online phishing campaigns.

Crimson Sandstorm, which is linked to Iran’s Revolutionary Guard, used ChatGPT to program and troubleshoot malware as well as find tips for hackers to avoid detection, Microsoft said.

OpenAI said the danger was “limited”, but the company sought to stay ahead of the evolving threat.

“There are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits,” OpenAI said. AFP
