Google won't use AI for arms or to cause harm

It unveils principles for the technologies amid outcry over a contract with the US military

Google CEO Sundar Pichai says the tech giant is using AI "to help people tackle urgent problems".

SAN FRANCISCO • Google has announced that it will not use artificial intelligence (AI) for weapons or to "cause or directly facilitate injury to people", as it unveiled a set of principles for the technologies.

Chief executive Sundar Pichai, in a blog post outlining the company's artificial intelligence policies, noted that even though Google won't use AI for weapons, "we will continue our work with governments and the military in many other areas", such as cyber security, training, or search and rescue.

The news comes as Google faces an uproar from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

Mr Pichai set out seven principles for Google's application of artificial intelligence, or advanced computing that can simulate intelligent human behaviour.

He said Google is using AI "to help people tackle urgent problems" such as predicting wildfires, helping farmers, diagnosing disease and preventing blindness.

"We recognise that such powerful technology raises equally powerful questions about its use," Mr Pichai said in the blog.

"How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."

The chief executive said Google's AI programmes would be designed for applications that are "socially beneficial" and "avoid creating or reinforcing unfair bias".

He said the principles also called for AI applications to be "built and tested for safety", to be "accountable to people" and to "incorporate privacy design principles".

Google will avoid the use of any technologies "that cause or are likely to cause overall harm", Mr Pichai wrote.

That means steering clear of "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", and systems "that gather or use information for surveillance violating internationally accepted norms".

Google also will ban the use of any technologies "whose purpose contravenes widely accepted principles of international law and human rights", Mr Pichai said.

Some initial reaction to the announcement was positive.

The Electronic Frontier Foundation, which had led opposition to Google's Project Maven contract with the Pentagon, called the news "a big win for ethical AI principles".

Mr Ryan Calo, a law professor at the University of Washington and a fellow at the Stanford Centre for Internet and Society, tweeted: "The clear statement that they won't facilitate violence or totalitarian surveillance is meaningful."

The move comes amid growing concerns that automated or robotic systems could be misused and spin out of control, leading to chaos.

The company had faced criticism over the contract with the Pentagon on Project Maven, which uses machine learning and engineering talent to distinguish people and objects in drone videos.

Faced with a petition signed by thousands of employees and criticism outside the company, Google indicated that the US$10 million (S$13.4 million) contract would not be renewed, according to media reports.

But it is believed to be competing against other tech giants such as Amazon and Microsoft for lucrative "cloud computing" contracts with the US government, including for military and intelligence agencies.

AGENCE FRANCE-PRESSE


A version of this article appeared in the print edition of The Straits Times on June 09, 2018, with the headline "Google won't use AI for arms or to cause harm".