Robot behaviour is creeping beyond human control

If I were to approach you brandishing a cattle prod, you might at first be amused. But if I continued with a maniacal grin, you would probably retreat in shock, bewilderment and anger. As electrode met flesh, I would expect a violent recoil and a volley of expletives.

Given a particular input, one can often predict how a person will respond. That is not the case for the most intelligent machines in our midst. The creators of AlphaGo - a computer program built by Google's DeepMind that decisively beat the world's finest human player of the board game Go - admitted they could not have divined its winning moves. This unpredictability, also seen in the Facebook chatbots that were shut down after developing their own language, has stirred disquiet in the field of artificial intelligence.

As we head into the age of autonomous systems, when we abdicate more decision-making to AI, technologists are urging deeper understanding of the mysterious zone between input and output. At a conference at Surrey University last month, a team of coders from Bath University presented a paper revealing how even "designers have difficulty decoding the behaviour of their own robots simply by observing them".

The researchers champion the concept of "robot transparency" as an ethical requirement: users should be able to easily discern the intent and abilities of a machine. And when things go wrong - if, say, a driverless car mows down a pedestrian - a record of the car's decisions should be accessible so that similar errors can be coded out.

Other roboticists, notably Professor Alan Winfield of Bristol Robotics Laboratory at the University of the West of England, have similarly called for "ethical black boxes" to be installed in robots and autonomous systems, to enhance public trust and accountability. These would work in the same way as flight data recorders on aircraft: furnishing the sequence of decisions and actions that precede a failure.
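To make the flight-recorder analogy concrete, here is a minimal sketch of what such a decision log might look like. It is an illustrative assumption, not Prof Winfield's or anyone else's actual design: the class names, fields and file format are all hypothetical, and a real system would record far richer telemetry.

```python
# A minimal, hypothetical sketch of an "ethical black box": an append-only
# log of the inputs, inferences and actions that precede each decision,
# so a failure can be reconstructed after the fact.
from dataclasses import dataclass
from datetime import datetime, timezone
import json


@dataclass
class DecisionRecord:
    timestamp: str
    sensor_snapshot: dict   # raw inputs the system acted on
    inference: str          # what the system believed, e.g. "pedestrian ahead"
    action: str             # what it did, e.g. "brake"
    confidence: float       # model confidence behind the action


class EthicalBlackBox:
    """Append-only decision recorder, analogous to a flight data recorder."""

    def __init__(self, path: str):
        self.path = path

    def record(self, sensor_snapshot: dict, inference: str,
               action: str, confidence: float) -> None:
        entry = DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            sensor_snapshot=sensor_snapshot,
            inference=inference,
            action=action,
            confidence=confidence,
        )
        # Append as one JSON line per decision so the trail survives
        # a crash part-way through a run.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry.__dict__) + "\n")


# Example: a driverless car logging the moment before an emergency stop.
box = EthicalBlackBox("decisions.jsonl")
box.record({"lidar_range_m": 4.2, "speed_kmh": 38},
           inference="pedestrian ahead", action="emergency_brake",
           confidence=0.93)
```

The point of the design is the same as in aviation: the log is written continuously and never edited, so investigators can replay the sequence of decisions that led to the failure rather than guessing at it.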

Many autonomous systems, of course, are unseen: they lurk behind screens. Machine-learning algorithms, grinding through mountains of data, can affect our success at securing loans and mortgages, at landing job interviews, and even at being granted parole.

For that reason, says Sandra Wachter, a researcher in data ethics at Oxford University and the Alan Turing Institute, regulation should be discussed. While algorithms can correct for some biases, many are trained on already-skewed data. So a recruitment algorithm for management roles is likely to identify the ideal candidate as male, white and middle-aged. "I am a woman in my early 30s," she told Science, "so I would be filtered out immediately, even if I'm suitable... sometimes algorithms are used to display job ads, so I wouldn't even see the position is available."
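The mechanism Wachter describes is easy to illustrate. The sketch below uses entirely made-up hiring records and a deliberately naive "model" (it simply scores candidates by the historical hire rate of people with the same profile); it is not any real recruiter's system, but it shows how a model trained on skewed data reproduces that skew.

```python
# Hypothetical example: a naive classifier trained on skewed hiring
# history learns to prefer the profile that dominated past hires.
from collections import Counter

# Historical outcomes: (gender, age_band, hired_as_manager)
history = [
    ("male",   "40s", True),  ("male",   "50s", True),
    ("male",   "40s", True),  ("male",   "30s", True),
    ("female", "30s", False), ("female", "40s", False),
    ("male",   "50s", True),  ("female", "30s", False),
]

# "Training": count hires and totals for each profile seen in the data.
hires, totals = Counter(), Counter()
for gender, age, hired in history:
    totals[(gender, age)] += 1
    hires[(gender, age)] += hired

def score(gender: str, age: str) -> float:
    """Predicted suitability = historical hire rate for that profile."""
    key = (gender, age)
    return hires[key] / totals[key] if totals[key] else 0.0

# A woman in her early 30s is scored at zero purely because the past
# data contains no successful example that looks like her.
print(score("female", "30s"))  # 0.0
print(score("male", "40s"))    # 1.0
```

Real recruitment models are far more sophisticated, but the underlying failure mode is the same: the algorithm does not correct the bias in its training data, it encodes it.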

The EU General Data Protection Regulation, due to come into force in May next year, will offer the prospect of redress: individuals will be able to contest completely automated decisions that have legal or other serious consequences.

There is also an existential reason for grasping precisely how data input becomes machine output - "the singularity". It is the much-theorised point of runaway AI, when machine intelligence surpasses our own. Machines could conceivably acquire the ability to shape and control the future on their own terms.

There need not be any premeditated malice for such a leap - only a lack of human oversight as AI programs, equipped with an ever-greater propensity to learn and the corresponding autonomy to act, begin to do things we can no longer predict, understand or control. The development of AlphaGo suggests that machine learning has mastered unpredictability, if only at one task. The singularity, should it materialise, promises a rather more chilling version of Game Over.

FINANCIAL TIMES


A version of this article appeared in the print edition of The Straits Times on August 04, 2017, with the headline Robot behaviour is creeping beyond human control.