Microsoft unveils OpenAI-based chat tools for fighting cyber attacks


WASHINGTON – Microsoft, extending a frenzy of artificial intelligence (AI) software releases, is introducing new chat tools that can help cyber security teams ward off hacks and clean up after an attack.

The latest of Microsoft’s AI assistant tools – the software giant likes to call them Copilots – uses OpenAI’s new GPT-4 language system and data specific to the security field, the company said on Tuesday. 

The idea is to help security workers more quickly see connections between various parts of a hack, such as a suspicious e-mail, malicious software file or the parts of the system that were compromised. 

Microsoft and other security software companies have been using machine-learning techniques to root out suspicious behaviour and spot vulnerabilities for several years.

But the newest AI technologies allow for faster analysis and let users pose questions in plain English, making the tools easier to use for employees who may not be experts in security or AI.

That is important because there is a shortage of workers with these skills, said Ms Vasu Jakkal, Microsoft’s vice-president for security, compliance, identity and privacy.

Hackers, meanwhile, have only grown faster.

“Just since the pandemic, we’ve seen an incredible proliferation,” she said.

For example, “it takes one hour and 12 minutes on average for an attacker to get full access to your inbox once a user has clicked on a phishing link. It used to be months or weeks for someone to get access”.

The software lets users pose questions such as: “How can I contain devices that are already compromised by an attack?” Or they can ask the Copilot to list anyone who sent or received an e-mail with a dangerous link in the weeks before and after the breach.

The tool can also more easily create reports and summaries of an incident and the response.

Microsoft will start by giving a few customers access to the tool and then add more later.

Ms Jakkal declined to say when it would be broadly available or who the initial customers are.

The Security Copilot uses data from government agencies and Microsoft’s researchers, who track nation states and cyber criminal groups.

To take action, the assistant works with Microsoft’s security products and will add integration with programs from other companies in the future.

As with its earlier AI releases in 2023, Microsoft is taking pains to make sure users are well aware that the new systems make errors.

In a demo of the security product, the chatbot cautioned about a flaw in Windows 9 – a product that does not exist. 

But it is also capable of learning from users.

The system lets customers choose privacy settings and determine how widely they want to share the information it gleans.

If they choose, customers can let Microsoft use the data to help other clients, said Ms Jakkal. 

“This is going to be a learning system,” she added.

“It’s also a paradigm shift: now humans become the verifiers, and AI is giving us the data.” BLOOMBERG
