Morality and the future of robots

The rapid growth of artificial intelligence signals the need for serious discussions on whether, and how, to set limits to such systems' capacity for self-improvement. Should the systems be built with a sense of morality, for instance? Or with a 'kill' switch so humans can override them?

In March this year, AlphaGo, a machine created by Google's artificial intelligence (AI) arm, DeepMind, trounced Mr Lee Sedol, a grandmaster at Go, the ancient Chinese game. AlphaGo used cutting-edge AI to beat a player acknowledged to be one of the greatest ever.

For Go aficionados, the game will never be the same again, just as chess was changed when IBM's Deep Blue beat then world champion Garry Kasparov in 1997. That year, it was widely thought that while machines could master chess, beating the world's best at Go - a far more complex game with near-infinite variations of play - was still several decades away.

Deep Blue used brute-force calculation and sheer computing power to beat the reigning world champion. Not so with AlphaGo - a complex machine which used deep neural networks and reinforcement learning, independent of human input. The machine learnt on its own as it progressed and got stronger as it played (it seems, too, that it may have learnt most from the single game it lost). Seasoned Go players marvel at the complexities of AlphaGo's play - it baffles experts and has the potential to change Go (even human-human play) for good. None of these subtleties was present in the Deep Blue-Kasparov match.
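For readers curious about what "learning from play" means in practice, the sketch below shows tabular Q-learning, a far simpler relative of the reinforcement learning techniques behind AlphaGo. It is purely illustrative - the names and parameters are invented, and this is not DeepMind's code - but it captures the core idea: the program updates its own estimates of which moves pay off, based only on the rewards it observes from play.

```python
import random

# Minimal, illustrative sketch of reinforcement learning (tabular Q-learning).
# Far simpler than AlphaGo's deep neural networks, but the principle is the
# same: the system improves its own policy from the outcomes of play, with no
# human-supplied strategy.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration
q_table = {}  # maps (state, action) -> estimated long-term value

def choose_action(state, actions):
    """Mostly exploit the best known move, occasionally explore a new one."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state, next_actions):
    """Nudge the value estimate toward the observed outcome of play."""
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```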

What AlphaGo has shown is that advances in AI once thought to be several decades away can be compressed into a few years.

Change is happening very quickly, and policymakers may not have the luxury of time to adjust and make decisions.

It is time to start thinking about what exactly this all means for us as individuals and for humanity as a whole.

BENEFITS OF A.I.

Certainly, AI will benefit us all in ways that we are only just beginning to fathom. Consider, for example, AI as a tremendous force for good in national security. Machine learning tools have already been applied to complex security situations around the world. There have, for example, been thought-provoking trials on modelling the behaviour of the Islamic State in Iraq and Syria, telling us (through the application of AI to big data) far better than most analysts could where the militant group might plant improvised explosive devices.

The Singapore security architecture already has systems that parse big data and weak signals, such as the Risk Assessment and Horizon Scanning system. Could its predictive capacities be improved with the use of AI? This would not, of course, be a silver bullet to predict when a terrorist attack might occur, but a system that learns what constitutes good predictions could in theory become increasingly proficient at avoiding bad ones, across a whole variety of scenarios. It will not find us the proverbial needle, but at least analysts might have a better sense of which haystack to look in.
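As a purely hypothetical illustration of "ranking haystacks" - the internals of any real system are not public, and the feature names and data below are invented - a learning-based triage might train a model on past incidents and use it to rank locations by estimated risk, so that human analysts know where to look first.

```python
# Hypothetical sketch: ranking "haystacks" by learned risk so analysts know
# where to look first. Feature names and data are invented for illustration;
# this does not describe any real national-security system.
from sklearn.ensemble import GradientBoostingClassifier

# Each row: weak-signal features for one location and week
# [chatter_volume, unusual_purchases, travel_anomalies]
X_train = [[0.1, 0.0, 0.2], [0.8, 0.6, 0.7], [0.3, 0.1, 0.0], [0.9, 0.7, 0.9]]
y_train = [0, 1, 0, 1]  # 1 = an incident followed, 0 = it did not

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score new locations and surface the riskiest ones for human review
X_new = [[0.2, 0.1, 0.1], [0.7, 0.8, 0.6]]
risk = model.predict_proba(X_new)[:, 1]
for score, name in sorted(zip(risk, ["location A", "location B"]), reverse=True):
    print(f"{name}: estimated risk {score:.2f}")
```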

AI would also be of immeasurable benefit to the new initiative announced by the Home Team - SG Secure - which will include an element of surveillance and the use of closed-circuit television (CCTV) cameras. The smartest CCTV systems are already beginning to use some elements of AI. These obviate the need for tedious and cumbersome cross-checking of profiles and reams of data, and, in some trials, have identified patterns that point to a likely crime even before the act itself.

At this point, we need to distinguish between, on the one hand, the type of AI that uses algorithms to parse massive amounts of data, and, on the other, machine learning independent of humans ("true" or "strong" AI).

WHEN A.I. GETS A MIND OF ITS OWN

Strong AI systems are capable of learning autonomously, improving their capabilities with each iteration of play. This has tremendous implications, as it suggests that a system initially set with a defined utility or purpose might "learn" to develop a utility or purpose different from what the designers intended.

AI researchers have reached the point of being prepared to seriously discuss (for the most part, in academic journals) whether this recursive drive for improvement and resource acquisition inherent in strong AI systems may ultimately mean that a machine's real concept of utility might diverge at some point from what was intended.

The machine might still be designed by a human agent, but it might not be designed well enough. An AI system may seek to obtain more resources for whatever goals it might have. And in doing so, it is possible that it would develop new processes to complete tasks faster and become more capable than it was designed to be. An AI system could essentially "re-architect" itself.

In popular culture, the reference point would be the moment in an early 1990s Hollywood movie, etched into the consciousness of the generation that grew up with it, when Skynet, an AI system created by humans, gains self-awareness and resists human attempts to disable it. The result is a world war that leads to humankind's near-annihilation, with humans barely clinging on in a dystopian future world ruled by killer robots.

AI experts are right, of course, to say we are nowhere near a Skynet moment, and some say the point where we need to start worrying about such possibilities will never come. But we need to understand that there have been serious developments in AI that should give all of us pause for thought.

In January last year, theoretical physicist Stephen Hawking, high-tech entrepreneur Elon Musk, and other leading technologists and AI experts signed an open letter calling for more research on the societal impact of AI. While agreeing that great benefits could be reaped from AI, Professor Hawking and his co-signatories warned of potential "pitfalls", calling for a degree of circumspection to ensure that AI does not pose an existential threat as it advances towards human-level intelligence.

Some experts argue that it is precisely because of this that there needs to be a set of well-thought-out countervailing instructions (conceivably a type of deep-lying circuit breaker or "kill" switch) embedded in all AI systems, simply because a self-improving, strong AI system would go to lengths that we cannot fully comprehend in order to fulfil its goals. And AI systems might in some circumstances be able (say, because of poorly designed controls) to break their constraints. Therefore, so the argument runs, self-improvement capacity (or the drive to gain more resources) might have to be limited in a coherent, well-thought-out manner that makes those constraints impossible to circumvent.
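What such a circuit breaker might look like in software is itself an open research question, and the safe-interruptibility literature goes much deeper than this. The naive sketch below - with invented names, and with no claim that it would actually constrain a genuinely self-improving system - simply wraps an agent's action loop in an override that is held outside the agent's own code.

```python
import signal
import sys

# Naive, illustrative sketch of an externally held "kill switch" wrapped
# around an agent's action loop. Names are invented; a self-improving system
# might learn to route around constraints this crude, which is exactly the
# design problem discussed above.

class KillSwitch:
    def __init__(self):
        self._halted = False
        # The override is bound to an operating-system signal that the agent's
        # own code never touches, so control stays outside the agent's objective.
        signal.signal(signal.SIGTERM, self._halt)

    def _halt(self, signum, frame):
        self._halted = True

    def permitted(self):
        return not self._halted

def run_agent(agent_step, kill_switch):
    """Run the agent only while the human-held override allows it."""
    while kill_switch.permitted():
        agent_step()
    sys.exit("Agent halted by external override.")
```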

If there is proof of concept for the idea of limitation - and this is not certain - then down the line, this "scaffolding" could serve as a base on which more powerful, but provably safe, successor systems can be developed.

The debate is not simply a theoretical one confined to arcane journals. It affects, for example, ongoing debates on lethal autonomous drones. Alone among the major powers, the United States requires human input before lethal force is exercised by its drones. But for how much longer?

As machine learning algorithms improve, we might reach a situation where retaining the human element means a loss of efficiency - removing the human decision-making might give a system the edge over its targets (which might be humans or other drones). But how would we know that the system is making the right decisions?

There is, in fact, an ongoing debate about the issue of building morality into AI systems and robots. Research on how to build a sense of right and wrong and moral consequence into autonomous robotic systems, including lethal autonomous weapon systems, is already being funded.

Whether this can ever be successfully done remains to be seen. Code instructions are deterministic in nature, whereas the understanding of inappropriate or wrong behaviour is subjective.
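To make that contrast concrete: a deterministic constraint is easy to write, but it captures only what its author anticipated. The sketch below is entirely illustrative - the categories are invented - and it hard-codes the kind of prohibition a weapon system might carry, while saying nothing about the countless situations its author never foresaw.

```python
# Illustrative only: a hard-coded, deterministic constraint. It does exactly
# what it says and no more. Whether an action is "wrong" in circumstances the
# author never anticipated is precisely what such rules cannot capture.

PROHIBITED_TARGET_TYPES = {"civilian", "medical", "surrendering"}

def engagement_permitted(target_type: str, human_confirmation: bool) -> bool:
    """Deterministic check: refuse prohibited targets, require human sign-off."""
    if target_type in PROHIBITED_TARGET_TYPES:
        return False
    return human_confirmation
```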

Within Singapore, technologists, academics, social scientists and the Government should come together in communities of practice to talk about approaches to these issues. This is already beginning to happen.

In the absence of certainty on these issues, many experts, chief executive officers and futurists are calling for commonly recognised norms to regulate advanced AI research, or for a pause in such research until a coherent framework can be developed. This is not Luddite fear-mongering, but basic ethics.

In completely unrelated fields, such as medical research, there are already widely accepted principles that enshrine, within all experimentation, the primary responsibility to humankind, with this responsibility privileged above the interests of science and research. Any academic ever in the position of filling out an ethics board application (usually while wishing he were doing something else) understands this.

In 1928, John Maynard Keynes penned an essay - one that would become famous - in which he discussed technological displacement that the world would see a century from then. To him, the problem would be all the free time that humankind would have, and what to do with it. "For the first time since his creation, man will be faced with his real, his permanent problem - how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won."

Keynes foresaw that industrial advances would mean machines might one day outperform humans at many tasks - even in highly skilled jobs.

What he could not possibly have foreseen, however, is what precisely the entities displacing humans would get up to.

•Dr Shashi Jayakumar is head of the Centre of Excellence for National Security at the S. Rajaratnam School of International Studies, Nanyang Technological University.


A version of this article appeared in the print edition of The Straits Times on June 06, 2016, with the headline Morality and the future of robots.