Can a machine learn morality? Here's what Delphi says

Is it ethical to kill one person to save another? An AI website's responses to this and other questions drew on more than 1.7 million ethical judgments made by real, live humans.

[Photo illustration: Pixabay] The morality of any technological creation is a product of those who have built it.

(NYTIMES) - Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks.

Anyone can visit the Delphi website and ask for an ethical decree. Dr Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the tech using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn't. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others, and Delphi said he should not.

