By Invitation

Do driverless cars dream of electric sheep?

Autonomous vehicles offer the prospect of safer and cheaper transportation of people and goods. But as Singapore sees the first driverless taxis take to the roads, technology has raced ahead of our laws - and our morals.

ST ILLUSTRATION: MANNY FRANCISCO

Last week, the world's first driverless taxi service began picking up passengers on Singapore's roads. It is a pilot programme with only six cars, 12 designated stops, pre-registered passengers and limited hours that avoid peak traffic. And while the Renaults and Mitsubishis may be "driverless", they come with an engineer ready to take the wheel and a researcher taking notes on passenger experience.

Although these are baby steps, autonomous transportation is set to transform how people and goods move around the world. Before we hand over the keys to our robot cabbies, however, a host of legal and moral issues need to be worked through.

WHO'S IN THE DRIVER'S SEAT?

A key distinction is between automated and autonomous. Many cars now have automated functions, such as cruise control, which regulates speed. These functions are supervised by the driver, who remains in active control. An autonomous vehicle, by contrast, makes decisions without input from the driver - indeed, there is no need for a driver at all.

So what happens if something goes wrong?


In terms of civil liability - the obligation to compensate another person who is injured, for example - there is nothing particularly difficult here. If I negligently drive over your foot, I may be responsible for your medical expenses. If your foot is injured because my car explodes due to a defective petrol tank, then the manufacturer may be liable. Insurance helps to allocate these costs more efficiently, and many jurisdictions already require minimum levels of cover. Last October, Volvo's CEO made headlines when he said that the company would accept full liability for accidents when its cars are in autonomous mode. Since many jurisdictions that allow self-driving cars have already mandated higher levels of insurance, this statement was a bit disingenuous. But it is possible that laws may be reviewed to require vehicles to be insured rather than drivers, and to impose strict liability on manufacturers - meaning responsibility for damage without having to demonstrate negligence on their part.

Further amendments may follow if the business model of transportation changes - if vehicles come to be seen as a service to be used rather than a thing to be owned. But the fundamental legal concepts are sound.

Not so in relation to criminal law.

It is less concerned with allocating costs than with apportioning blame. If a vehicle registered in my name is caught speeding by a speed camera, then I am presumptively responsible unless I can point to the responsibility of another person. For this reason, California law allows autonomous vehicles on the road provided that a human driver is behind the wheel and alert. If the highway patrol pulls the car over, that driver is going to get the ticket.

Similarly, Singapore law presently focuses criminal responsibility on the driver, meaning the person able to control a vehicle's speed and direction. (In the new "driverless" taxis, that means the engineer.)

But what if there is no driver?

It would probably depend on why the vehicle was speeding. If I were clever enough to hack the security protocols and programme my car to go faster than the speed limit, I might be guilty of an offence. If it were purely a system malfunction, akin to a driver's brakes failing on a steep hill, there may be no fine - but compensation for any damage caused would be allocated in accordance with civil liability rules.

Between these extremes is the possibility that an autonomous vehicle might itself decide that it should speed. Imagine, for example, that your smartwatch detects you are having a heart attack and shares this data with your autonomous car. The road is clear and so the car exceeds the limit in order to save your life. Necessity might be a defence for a human driver in such circumstances; it might conceivably be extended to our robot counterparts as well.

However, such a possibility would need to be included within an autonomous vehicle's parameters, which raises larger questions of what such vehicles should do in an emergency.

IN CASE OF EMERGENCY, BRAKE?

When faced with the possibility of a collision, the law typically asks what the reasonable person would do. It may be reasonable to swerve into oncoming traffic to avoid a baby, but probably not to avoid a rat. An autonomous vehicle might respond more swiftly, but lacks the moral compass that is expected to guide a human. That must be programmed in or learnt somehow through experience.

A common illustration of the dilemmas that can arise is the trolley problem used by ethicists. A single-carriage train is heading towards five people and will kill them all. If a lever is pulled, the train will be diverted onto a siding but will kill someone else. Do you pull the lever?

Though many people would do so, there is no "right" answer to this question. When confronted with an analogous situation in which five people are going to die and the only way to stop the train is by pushing someone into its path, most people tend to hold back. The first scenario reflects a utilitarian approach that looks to the consequences of an action (one death versus five). The second feels different because we know intuitively that pushing a person to their death is wrong - even though the choice is still between one person and five people dying.

The Massachusetts Institute of Technology has set up a Moral Machine website that offers these and a dozen other scenarios that might confront driverless cars. Should two passengers be sacrificed if it would save five pedestrians? Does it matter if the pedestrians are jaywalking? If they are criminals?

(In real life, faster reaction times mean that braking would be the best choice - but for present purposes we are to assume that the brakes have failed and so the vehicle cannot stop.)

The results are diverse and fascinating. But if humans cannot agree on what to do, how are we meant to advise the machines?

Three quarters of a century ago, science-fiction writer Isaac Asimov imagined a future in which robots have become an integral part of daily life. In this fictional world, a safety device is built into all robots in the form of the three laws of robotics, said to be quoted from the Handbook of Robotics, 56th edn., 2058 AD. The first law is that a robot may not injure a human, or through inaction allow a human to come to harm. Second, orders given by humans must be obeyed, unless that would conflict with the first law. Third, robots must protect their own existence, unless that conflicts with the first or second laws.

A blanket rule not to harm humans is obviously inadequate when a vehicle is forced to choose the lesser of two evils.

Asimov himself later added a "zeroth" law, which said that a robot's highest duty was to humanity as a whole. In one of his last novels, a robot is asked how it could ever determine what is injurious to humanity as a whole. "Precisely, sir," the robot replies. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide."

GIVE WAY

In the short term, it is unlikely that autonomous vehicles will be given the discretion to speed or swerve into oncoming traffic to avoid a jaywalking pedestrian. If anything, such vehicles are likely to be more law-abiding than the average Singaporean driver.

Leaving aside the legal and moral issues, this raises a more immediate practical concern. If our new robot chauffeurs are not willing to cut into traffic, blow their horn when cut off, or assert their rights aggressively on the road - if they are, in short, better and more courteous drivers than the rest of us - how are they ever going to make it to their destination?

The writer is Dean of the National University of Singapore Faculty of Law. His latest book is the novel Raising Arcadia.


A version of this article appeared in the print edition of The Straits Times on September 03, 2016, with the headline "Do driverless cars dream of electric sheep?".