Would you take a ride in a self-driving car knowing that in the event of an accident, it would spare the life of the pedestrian over that of the passenger? As countries race to put self-driving vehicles on the road, the makers of these autonomous machines will have to confront thorny ethical dilemmas as well as legal issues. If self-driving cars are to make split-second decisions as to who should live and who should die, how are they to be guided? If the algorithms are to reflect human values and preferences, what are they? Findings from a study by the Massachusetts Institute of Technology published recently offer interesting preliminary insights.
While it may be argued that the respondents were self-selected, it is nonetheless one of the largest studies ever conducted on global moral preferences, involving 2.3 million participants from over 200 countries. The analysis of the millions of responses matters because it shows that the task of aligning artificial intelligence with moral values is more complex than it may appear at first sight. While the results show broad agreement that human lives take precedence over those of cats and dogs, and that it is better to save the many over the few, the exceptions raise questions about the universality of values. For instance, unlike respondents from the West, those from East Asian countries, including Singapore, were more likely to opt to save the elderly over the young. Japanese respondents largely chose to save pedestrians over passengers, but respondents from China were more likely to sacrifice the lives of pedestrians.