How to regulate artificial intelligence

Mr Elon Musk, founder, CEO and lead designer at SpaceX and co-founder of Tesla, checks out the SpaceX Hyperloop Pod Competition II in Hawthorne, California, US, on Aug 27, 2017. PHOTO: REUTERS

Technology entrepreneur Elon Musk recently urged United States governors to regulate artificial intelligence (AI) "before it's too late". He insists that artificial intelligence represents an "existential threat to humanity", an alarmist view that confuses AI science with science fiction. Nevertheless, even AI researchers like me recognise that there are valid concerns about its impact on weapons, jobs and privacy. It's natural to ask whether we should develop AI at all.

I believe the answer is yes. But shouldn't we take steps to at least slow down progress on AI, in the interest of caution? The problem is that if we do so, nations like China will overtake us. The AI horse has left the barn, and our best bet is to attempt to steer it. AI should not be weaponised, and any AI must have an impregnable "off switch". Beyond that, we should regulate the tangible impact of AI systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of AI.

I propose three rules for artificial intelligence systems, inspired by, yet developing further, the "three laws of robotics" that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to AI? I suggest a more concrete basis for avoiding AI harm: three rules of my own.

First, an AI system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don't want AI to engage in cyberbullying, stock manipulation or terrorist threats; we don't want the FBI to release AI systems that entrap people into committing crimes. We don't want autonomous vehicles that drive through red lights, or worse, AI weapons that violate international treaties.

Our common law should be amended so that we can't claim that our AI system did something that we couldn't understand or anticipate. Simply put, "My AI did it" should not excuse illegal behaviour.

My second rule is that an AI system must clearly disclose that it is not human. As we have seen in the case of bots - computer programs that can engage in increasingly sophisticated dialogue with real people - society needs assurances that AI systems are clearly labelled as such. Last year, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 election, according to researchers at Oxford.

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf - a bot that humorously impersonated President Donald Trump on Twitter. AI systems don't just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former president Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto a video of him talking about something entirely different.

My third rule is that an AI system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyse information, AI systems are in a prime position to acquire confidential information. Think of all the conversations that Amazon Echo - a "smart speaker" present in an increasing number of homes - is privy to, or the information that your child may inadvertently divulge to a toy such as an AI Barbie. Even seemingly innocuous housecleaning robots create maps of your home. That is information you want to make sure you control.

My three AI rules are, I believe, sound but far from complete. I introduce them here as a starting point for discussion. Whether or not you agree with Mr Musk's view about AI's rate of progress and its ultimate impact on humanity (I don't), it is clear that AI is coming. Society needs to get ready.

NYTIMES

• The writer is the chief executive of the Allen Institute for Artificial Intelligence.


A version of this article appeared in the print edition of The Straits Times on September 05, 2017, with the headline "How to regulate artificial intelligence".