Life-and-death questions for driverless cars

Now that driverless vehicles are almost here, carmakers are deciding whether they should have power over who lives or dies in an accident

  • The robot car may have to swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger.

The gearheads in Detroit, Tokyo and Stuttgart have mostly figured out how to build driverless vehicles. Even the Google guys seem to have solved the riddle. Now comes the hard part: deciding whether these machines should have power over who lives or dies in an accident.

The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can't happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?

Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of grey. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University's Center for Automotive Research, which is programming cars to make ethical decisions to see what happens.

"This issue is definitely in the crosshairs," says Mr Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. "They're very aware of the issues and the challenges because their programmers are actively trying to make these decisions today."

Carmakers and Google are pouring billions into developing driverless cars. This week, Ford said it was moving development of self-driving cars from the research lab to its advanced engineering operations.

Google plans to put a "few" of its self-driving cars on California roads this summer, graduating from the test track. Cars can already stop and steer without help from a human driver.

Within a decade, fully automated autos could be navigating public roads, according to Boston Consulting Group. Cars will be among the first autonomous machines testing the limits of reason and reaction in real time.

"This is going to set the tone for all social robots," says philosopher Patrick Lin, who runs the Ethics and Emerging Sciences Group at California Polytechnic University and counsels carmakers. "These are the first truly social robots to move around in society."

The promise of self-driving cars is that they will anticipate and avoid collisions, dramatically reducing the 33,000 deaths on United States highways each year. But accidents will still happen. And in those moments, the robot car may have to choose the lesser of two evils - swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger.

"Those kinds of questions do have to be answered before automated driving becomes a reality," Mr Jeff Greenberg, Ford's senior technical leader for human-machine interface, said during a tour of the carmaker's new Silicon Valley research lab this week.

Right now, ethicists have more questions than answers. Should rules governing autonomous vehicles emphasise the greater good - the number of lives saved - and put no value on the individuals involved? Should they borrow from Asimov, whose first law of robotics says an autonomous machine may not injure a human being or, through inaction, allow a human being to come to harm?
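To see how differently those two frameworks can behave, consider a deliberately simplified sketch. The maneuvers, harm estimates and rule names below are invented for illustration; they reflect no carmaker's or researcher's actual software.

```python
# Illustrative only: a toy comparison of two decision rules discussed above.
# All names, numbers and outcomes are invented; this is not how any real
# autonomous-driving system is programmed.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_harm_to_others: float    # hypothetical estimate of harm to bystanders
    expected_harm_to_occupant: float  # hypothetical estimate of harm to the car's occupant
    actively_injures_human: bool      # would the car itself strike someone?

def greater_good_choice(options):
    """Utilitarian rule: minimise total expected harm, whoever bears it."""
    return min(options, key=lambda m: m.expected_harm_to_others + m.expected_harm_to_occupant)

def asimov_style_choice(options):
    """Constraint rule: never actively injure a human; among the rest, minimise harm."""
    permitted = [m for m in options if not m.actively_injures_human] or options
    return min(permitted, key=lambda m: m.expected_harm_to_others + m.expected_harm_to_occupant)

options = [
    Maneuver("swerve onto sidewalk", expected_harm_to_others=0.4,
             expected_harm_to_occupant=0.1, actively_injures_human=True),
    Maneuver("brake and stay in lane", expected_harm_to_others=0.0,
             expected_harm_to_occupant=1.5, actively_injures_human=False),
]

print(greater_good_choice(options).name)  # "swerve onto sidewalk": less total harm, but it strikes bystanders
print(asimov_style_choice(options).name)  # "brake and stay in lane": refuses to strike bystanders
```

With these invented numbers, the two rules disagree: the utilitarian rule sacrifices the bystanders' margin of safety for a lower total, while the Asimov-style constraint protects them at the occupant's expense.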

"I wouldn't want my robot car to trade my life just to save one or two others," Mr Lin says. "But it doesn't seem to follow that it should hold our life uber alles, no matter how many victims you're talking about. That seems plain wrong."

That is why we should not leave those decisions up to robots, says Mr Wendell Wallach, author of A Dangerous Master: How To Keep Technology From Slipping Beyond Our Control.

"The way forward is to create an absolute principle that machines do not make life-and-death decisions," says Mr Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University.

"There has to be a human in the loop. You end up with a pretty lawless society if people think they won't be held responsible for the actions they take."

As Mr Wallach, Mr Lin and other ethicists wrestle with the philosophical complexities, Mr Gerdes is conducting real-world experiments.

This summer, on a racetrack in northern California, he will test automated vehicles programmed to follow ethical rules when making split-second decisions, such as when it is appropriate to disobey traffic laws and cross a double yellow line to make room for bicyclists or double-parked cars.
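What such a rule might look like, reduced to a toy example, is sketched below. The thresholds and function name are hypothetical and invented for illustration, not anything the Stanford lab has published.

```python
# A purely hypothetical sketch of the kind of rule described above: when may the
# car cross a double yellow line to pass a stopped obstacle? Thresholds are invented.

def may_cross_double_yellow(obstacle_blocks_lane: bool,
                            clearance_in_lane_m: float,
                            oncoming_gap_s: float,
                            time_needed_to_pass_s: float) -> bool:
    """Permit a brief, deliberate traffic-law violation only when it is the safer option."""
    MIN_SAFE_CLEARANCE_M = 1.5  # hypothetical margin needed to pass a cyclist within the lane
    SAFETY_BUFFER_S = 2.0       # hypothetical buffer against oncoming traffic

    if not obstacle_blocks_lane:
        return False  # nothing to pass; obey the line
    if clearance_in_lane_m >= MIN_SAFE_CLEARANCE_M:
        return False  # enough room to pass without crossing the line
    return oncoming_gap_s >= time_needed_to_pass_s + SAFETY_BUFFER_S

# e.g. a double-parked car leaves 0.8 m of lane, the oncoming lane is clear for 10 s,
# and passing takes 4 s, so the rule permits crossing the line.
print(may_cross_double_yellow(True, 0.8, 10.0, 4.0))  # True
```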

He is also working with Toyota to find ways for an autonomous car to quickly hand back control to a human driver. Even such a handoff is fraught with peril, he says, especially as cars do more and driving skills degrade.

Ultimately, the problem with giving an autonomous automobile the power to make consequential decisions is that, like the robots of science fiction, a self-driving car still lacks empathy and the ability to comprehend nuance.

"There's no sensor that's yet been designed," Mr Gerdes says, "that's as good as the human eye and the human brain."

Bloomberg


A version of this article appeared in the print edition of The Straits Times on June 27, 2015, with the headline Life-and-death questions for driverless cars.