Who's responsible if a robot runs amok?

S'pore's laws need to be updated to deal with liability in the age of artificial intelligence

We are seeing today novel expressions of artificial intelligence (AI) that, just a short while ago, were the stuff of sci-fi: autonomous vehicles, self-learning machines, fiction-writing programs that may win literary prizes. (An AI program wrote a novel in Japanese that passed the first round of a literary contest.)

Yet, what if the AI goes awry? What if an autonomous vehicle malfunctions and damages your property?

What if an AI robot hacks into a smart city's network and steals every citizen's personal data?

(A robot in Sweden was caught buying drugs from the Dark Web with bitcoin.)

Will our current legal liability rules give us satisfactory outcomes when applied to such scenarios? The likely answer is "no".

Generally, the law responds to societal developments in a few ways:

• Regulators pre-empt scenarios and make laws;

• The matter goes before the courts which, under our common-law system, extend principles from analogous situations and develop rules;

• People regulate the scenario by contract.


SMART CONTRACTS

If there are disputes on private contracts, the courts may find it challenging to apply existing contract-law principles to some new scenarios.

This is especially so if the use of smart contracts becomes widespread.

Smart contracts are programs recorded on a blockchain that can form and execute themselves automatically.

If there are disputes, parties would have to make arguments on the source code before the courts. Will our courts be able to handle such code? And if the courts have to decide between conflicting expert opinions, how can they judge when they are not experts themselves?
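To make the idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of logic a smart contract encodes. Real smart contracts are written in blockchain languages such as Solidity and run on a blockchain; the escrow scenario, names and amounts below are assumptions made up for illustration.

```python
# Purely illustrative sketch (not real blockchain code): a smart contract is,
# in essence, code that holds value and releases it automatically once the
# agreed conditions are met, without a human intermediary.

class EscrowContract:
    def __init__(self, buyer, seller, price):
        self.buyer = buyer
        self.seller = seller
        self.price = price
        self.funds_deposited = False
        self.goods_delivered = False

    def deposit(self, amount):
        # The buyer locks payment into the contract.
        if amount == self.price:
            self.funds_deposited = True

    def confirm_delivery(self):
        # In practice, a pre-agreed data feed would provide this confirmation.
        self.goods_delivered = True
        self.settle()

    def settle(self):
        # Self-executing step: payment is released once both conditions hold.
        if self.funds_deposited and self.goods_delivered:
            print(f"Releasing {self.price} to {self.seller}")


contract = EscrowContract(buyer="Alice", seller="Bob", price=100)
contract.deposit(100)
contract.confirm_delivery()  # triggers automatic settlement
```

A dispute over such a contract is, in effect, a dispute over what the code was meant to do, which is why parties would end up arguing over source code in court.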

LEGAL REGULATION

Governments worldwide are grappling with rapidly morphing AI technology, and have to strike a balance between regulating to protect against risks and not regulating to facilitate innovation.

Hence, regulators are increasingly using "regulatory sandboxes", special rules which allow innovators to test their services and products in a live, controlled environment without having to comply with all or some existing legal requirements.

This allows experimentation while regulators observe and make plans.

However, most, if not all, countries have yet to make rules on what happens when AI applications go wrong (although many mandate insurance).

This is because such scenarios involve complex ethical, policy and legal issues.

Take, for instance, an autonomous vehicle in an accident, which has to swerve left and kill one person or right and kill 10 (the classic philosophical "trolley problem"). Or what if it has to decide between protecting pedestrians and its passengers?

Autonomous vehicles have to be pre-programmed to make such decisions.
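A crude, purely hypothetical sketch of what such a pre-programmed rule might look like is set out below. No manufacturer publishes its actual decision logic; the function and the casualty estimates here are invented for illustration.

```python
# Hypothetical illustration only: a hard-coded "fewest casualties" rule.
# Real systems do not expose such logic; the numbers below are invented.

def choose_manoeuvre(options):
    """Pick the manoeuvre with the lowest estimated casualties."""
    return min(options, key=lambda o: o["estimated_casualties"])

options = [
    {"action": "swerve_left", "estimated_casualties": 1},
    {"action": "swerve_right", "estimated_casualties": 10},
]
print(choose_manoeuvre(options)["action"])  # prints "swerve_left"
```

The point is not the code itself, but that someone must write the rule down in advance, and the law currently says nothing about what that rule should be.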

A study last year by researchers Jean-Francois Bonnefon, Azim Shariff and Iyad Rahwan found that a significant majority of people preferred autonomous vehicles to be programmed to sacrifice the fewest people, but were disinclined to buy or use such a vehicle if they themselves were the ones likely to be sacrificed.

How should such results affect our approach to regulation, if at all? Should the legislature decide beforehand that the smaller number of people should die?

Or should this never be legislated, such that manufacturers get to determine our fate?

COMMON-LAW RULES

Consider another example. Imagine a self-learning machine goes rogue and causes damage to another party's computer system.

(Microsoft's AI Twitter chatbot Tay went rogue after learning from users' tweets and began spewing racist, misogynistic messages.)

What if this goes before the civil courts? Our current liability rules for machines causing damage, including vehicles causing accidents, are premised on a human operator being negligent (or reckless or deliberately causing harm). What if there are no human operators whose conduct we can examine to determine liability?

Should manufacturers be held liable for programming their machines to learn, just because the machines can possibly learn to commit "evil"?

Our current legal liability rules are not helpful.

Should a manufacturer be deemed negligent for not pre-empting the possibility of its machine learning to cause damage in a particular manner and accordingly programming against such a scenario?

Some lawyers suggest that the most relevant legal rule appears to be one from a 19th century case, Rylands v Fletcher.

The rule often cited to arise from that decision is thus: "The person who for his own purposes brings on his lands and collects and keeps there anything likely to do mischief if it escapes, must keep it at his peril, and, if he does not do so, is prima facie answerable for all the damage which is the natural consequence of its escape."

In that case, John Rylands engaged some contractors to build a reservoir on his land.

The contractors noticed some old coal shafts filled with debris, but chose to ignore them. When they finished the work and the reservoir was filled, the water burst through the disused shafts, flooding neighbouring Thomas Fletcher's mine.

The English House of Lords ultimately held that Rylands should pay Fletcher damages for bringing onto his land a dangerous thing which could cause, and did cause, damage when it escaped.

An example would be dangerous cattle which, if they escaped from one's land, could injure a neighbour's cattle should bovine madness ensue.

Some lawyers have suggested that this rule can be applied to autonomous vehicles or AI robots, since they are potentially dangerous if they were to go awry.

However, the courts have since confined the rule to escapes from land, and in particular to non-natural uses of land. In some jurisdictions, the rule was seen as so aberrant that it was abolished.

So even though the principle underlying the rule may be somewhat relevant, the rule is very unlikely to apply in most cases of AI robots going rogue.

CATCHING UP

Technology is developing so quickly that we and our laws are not catching up fast enough.

It is good for us as a society to have early conversations on the ethical, policy and liability issues raised by these impending developments, so that when the time comes, we will not be caught flat-footed.

Yet, historically, the law has always lagged behind seismic shifts in the societal and economic landscape.

When it is forced to respond, our intuitions of justice and fairness will perhaps get things right.

That is the case, of course, unless the AI robots have taken over first.

•The writer is a lawyer at Covenant Chambers, practising litigation and corporate law, including media and technology practice.


A version of this article appeared in the print edition of The Straits Times on March 11, 2017, with the headline "Who's responsible if a robot runs amok?".