Thinking Aloud

What to do about robots and artificial intelligence

Whether the future will be bleak or bright depends on human choices and values

ST ILLUSTRATION: MANNY FRANCISCO

Two weeks ago, I wrote in a column headlined "Please stop saying I'll be replaced by a robot" that fears over machines making humans redundant were often overblown and might well be self-defeating, and that it was better to focus instead on what robots could help people do better.

Some readers welcomed what I had to say, others disagreed.

One thoughtful rejoinder by Ms Ethel Tan Hui Yan was published in The Straits Times Forum under the headline "Let's not take AI lightly". Ms Tan wrote: "At present, robots have attained the level of intelligence where they help more than harm... But when intuitive AI (artificial intelligence) robots with a learning capacity that far exceeds that of man's are developed, they could very well make humans redundant in many jobs.

"It is essential to talk about the augmented economy and the necessary industrial restructuring and educational reforms that need to be made, which will equip our workforce and prepare our younger generation for the challenge ahead."

Mr Frederick Wong said in an e-mail that "AI will impact us deeply in the coming decade and not the next century as some might think. If AI isn't regulated and control is in the hands of free enterprises, the way we live, the way we work, will change fundamentally, and likely not in a positive manner. Such negative impacts can already be seen in the lives of the less educated population. We are currently poorly equipped to handle the impact robotics will bring."

Yet another reader highlighted to me that just a day after my column was published, Mr Jack Ma, chairman of e-commerce giant Alibaba Group Holding, warned in a speech to an entrepreneurship conference in Zhengzhou, China, that "in the next 30 years, the world will see much more pain than happiness" due to job disruptions caused by the Internet.


"Social conflicts in the next three decades will have an impact on all sorts of industries and walks of life," he said, later adding that "machines should only do what humans cannot...Only in this way can we have the opportunities to keep machines as working partners with humans, rather than as replacements".

On this issue, I do not think it is a case of the optimists being right and the pessimists therefore wrong, or vice versa. The rise of robots and AI will clearly enable the human race to do what was once impossible, but not without displacing some workers and forcing many others to adapt, including those who do not want to change or who fear it.

What I do not subscribe to is techno-determinism, which is the belief that given the march of technology, a specific outcome is inevitable: either automation means job loss, end of story; or augmentation leads to more and better jobs. As software programmer and activist Seth Finkelstein has pointed out, "a technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole. This is not a technological consequence; rather, it's a political choice."

This is best illustrated by Mr Bill Gates' call for a robot tax, which several business columnists have criticised, arguing that it would impede innovation and be a drag on economic growth. But what exactly did Mr Gates say, and why?

During an interview with business website Quartz in February this year, he was asked for his thoughts on a robot tax. He replied: "Certainly there will be taxes that relate to automation. Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you'd think that we'd tax the robot at a similar level.

"And what the world wants is to take this opportunity to make all the goods and services we have today, and free up labour, let us do a better job of reaching out to the elderly, having smaller class sizes, helping kids with special needs. You know, all of those are things where human empathy and understanding are still very, very unique. And we still deal with an immense shortage of people to help out there.

"So if you can take the labour that used to do the thing automation replaces, and financially and training-wise and fulfilment-wise have that person go off and do these other things, then you're net ahead. But you can't just give up that income tax, because that's part of how you've been funding that level of human workers."

Asked how he would levy such a tax without disincentivising innovation, Mr Gates replied: "At a time when people are saying that the arrival of that robot is a net loss because of displacement, you ought to be willing to raise the tax level and even slow down the speed of that adoption somewhat to figure out, 'OK, what about the communities where this has a particularly big impact? Which transition programmes have worked and what type of funding do those require?'

"People should be figuring it out. It is really bad if people overall have more fear about what innovation is going to do than they have enthusiasm. That means they won't shape it for the positive things it can do. And, you know, taxation is certainly a better way to handle it than just banning some elements of it."

I agree that people should indeed be figuring out how to manage job displacement for communities where this will have a particularly big impact. Each society will have to make choices on how it wishes to do so, and governments will have to craft social policy accordingly. These are decisions for humans to make, in their roles as political, corporate and community leaders, and as citizens and voters of the countries they live in. This is work that cannot be done by any machine, no matter how intelligent.

The real tragedy would be if human beings chose to abdicate their responsibility to help each other prosper in the new economy out of a sense of helplessness in the face of technological advances.

As for the fears about intuitive AI (that is, algorithms that can learn, much as biological systems do, to master any task from scratch using nothing more than raw data), its development and use will likewise come down to decisions made by humans.

This is what AI researcher Demis Hassabis - whose start-up DeepMind developed the AlphaGo programme which defeated South Korean Go champion Lee Sedol in a five-game match in March last year - had to say on these powerful learning machines. "As these systems become more sophisticated, we need to think about how and what they optimise," he said in an interview with The Guardian in February this year. "The technology itself is neutral, but it's a learning system, so inevitably they'll bear some imprint of the value system and culture of the designer so we have to think very carefully about values."

DeepMind, which was bought by Google in 2014 for a reported US$625 million, has set up an internal ethics committee and advisory board comprising leading figures from diverse scientific and philosophical disciplines, to govern any future use of its AI technology. It is an industry leader in encouraging conversation around the safety issues related to AI.

The super-intelligent machines that are being developed today do not set their own goals, as Dr Hassabis took pains to get across. It is thus incumbent on the human designers to "make sure the goals are correctly specified". In other words, the AI challenge is as much about ethics as it is about technology.

Whether AI develops as a tool to help the human race solve its most intractable problems - be it in healthcare, climate change or economics - or becomes a weapon of mass destruction, depends not on the machines but on humans, the values they choose to uphold and the choices they make.


A version of this article appeared in the print edition of The Sunday Times on May 07, 2017, with the headline What to do about robots and artificial intelligence.