Tech giants seek to create AI ethics code

Insiders say the aim is to ensure AI research benefits people, even as a report points out the difficulty of regulating the sector

The US Marine Corps' Modular Advanced Armed Robotic System is an example of AI being used in warfare. As machines become more capable, five leading tech companies have come together to discuss the impact of AI on our lives. PHOTO: HANDOUT BY MARINE LANCE CPL JULIEN RODARTE

SAN FRANCISCO • For years, science fiction moviemakers have been making us fear the bad things that artificially intelligent machines might do to their human creators. But for the next decade or two, our biggest concern is more likely to be that robots will take away our jobs or bump into us on the highway.

Now, five of the world's largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence (AI).

While science fiction has focused on the existential threat of AI to humans, researchers at Google's parent company, Alphabet, and those from Amazon, Facebook, IBM and Microsoft have been meeting to discuss more tangible issues, such as the impact of AI on jobs, transportation and even warfare.

Tech companies have long over-promised what artificially intelligent machines can do. In recent years, however, the AI field has made rapid advances in a range of areas, from self-driving cars and machines that understand speech, like Amazon's Echo device, to a new generation of weapons systems that threaten to automate combat.

The specifics of what the industry group will do or say - even its name - have yet to be hashed out. But the basic intention is clear: to ensure that AI research is focused on benefiting people, not hurting them, according to four people involved in the creation of the industry partnership who are not authorised to speak about it publicly.

A memorandum is being circulated among the five companies with a tentative plan to announce the new organisation in the middle of this month. One of the unresolved issues is that Google DeepMind, an Alphabet subsidiary, has asked to participate separately, according to a person involved in the negotiations.

The AI industry group is modelled on a similar human rights effort known as the Global Network Initiative, in which corporations and non-governmental organisations are focused on freedom of expression and privacy rights on the Internet, according to someone briefed by the industry organisers but not authorised to speak about it publicly.

The importance of the industry effort is underscored in a report issued on Thursday by a Stanford University group funded by Dr Eric Horvitz, a Microsoft researcher who is one of the executives in the industry discussions.

It is part of a project called the One Hundred Year Study on Artificial Intelligence, which lays out a plan to produce a detailed report on the impact of AI on society every five years for the next century.

The Stanford report attempts to define the issues that citizens of a typical North American city will face from computers and robotic systems that mimic human capabilities. The authors explore eight aspects of modern life, including healthcare, education, entertainment and employment, but specifically do not look at the issue of warfare.

They said that military AI applications were outside their current scope and expertise, but they did not rule out focusing on weapons in the future.

The authors of the Stanford report, which is titled Artificial Intelligence and Life in 2030, argue that it would be impossible to regulate AI as a whole.

"The study panel's consensus is that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn't any one thing), and the risks and considerations are very different in different domains," the report said.

One recommendation in the report is to raise the awareness of and expertise about artificial intelligence at all levels of government, said Dr Peter Stone, a computer scientist at the University of Texas at Austin and one of the authors of the Stanford report.

"We're not saying that there should be no regulation," said Dr Stone. "We're saying that there is a right way and a wrong way."

NEW YORK TIMES

A version of this article appeared in the print edition of The Straits Times on September 03, 2016, with the headline Tech giants seek to create AI ethics code.