What Stephen Hawking gets right and wrong about 'the most dangerous time for our planet'

Professor Stephen Hawking believes world leaders must acknowledge that they have failed and are failing the many, share resources and help the unemployed retrain. PHOTO: AFP

Professor Stephen Hawking made headlines last week with a bold claim: "This is the most dangerous time for our planet."

In an essay in The Guardian, the renowned theoretical physicist wrote: "Whatever we might think about the decision by the British electorate to reject membership of the European Union and by the American public to embrace Mr Donald Trump as their next president, there is no doubt in the minds of commentators that this was a cry of anger by people who felt they had been abandoned by their leaders."

Technology is the main culprit here, widening the gulf between the haves and the have-nots. As Prof Hawking explained, automation has already decimated jobs in manufacturing and is allowing Wall Street to accrue huge rewards that the rest of us underwrite.

Over the next few years, technology will take more jobs from humans. Robots will drive the taxis and trucks; drones will deliver our mail and groceries; machines will flip hamburgers and serve meals. And, if Amazon's new cashier-less stores are a success, supermarkets will replace cashiers with sensors. This is not speculation; it is imminent. (Amazon founder Jeffrey P. Bezos owns The Washington Post.)

The dissatisfaction is not uniquely American. With the developing world coming online with smartphones and tablets, billions more people are becoming aware of what they don't have. The unrest we have witnessed in the United States, Britain and, most recently, Italy will become a global phenomenon.

Prof Hawking's solution is to break down barriers within and between nations, to have world leaders acknowledge that they have failed and are failing the many, to share resources and to help the unemployed retrain.

But this is wishful thinking. It isn't going to happen.

Witness the outcome of the elections: We moved backward on almost every front. Our politicians will continue to divide and conquer, Silicon Valley will deny its culpability, and the very technologies, such as social media and the Internet, that were supposed to spread democracy and knowledge will instead be used to mislead, to suppress and to bring out the ugliest side of humanity.

That is why we can't rely on our political leaders for change. All of us must learn about advancing technologies and participate in the decision-making. We still have a voice and a choice.

Uber would be nowhere if it hadn't persuaded passengers to use its services and to lobby for its legalisation. We can choose not to purchase the artificial-intelligence chatbots that Amazon and Google are marketing. And we can certainly decide not to have our morning latte delivered by drone. We can also choose to stop using Facebook until it stops feeding us fake news, and to stop using Twitter unless it banishes the trolls that misuse its platform.

In my forthcoming book, The Driver In The Driverless Car: How Our Technology Choices Will Create The Future, I suggest a filter through which to view advancing technologies when assessing their value to society and humankind.

It boils down to three questions relating to equality, risks and autonomy: Does the technology have the potential to benefit everyone equally? What are the risks and the rewards? Does the technology more strongly promote autonomy or dependence?

Why these three questions? To start, note the anger of the electorates, and then look ahead at the jobless future that technology is creating. If the needs and wants of every human being are met, as technology will make possible, we can deal with the social and psychological issues of joblessness. This won't be easy, by any means, but at least people won't be acting out of dire need and desperation.

We can build a society with new values, perhaps one in which social gratification comes from teaching and helping others and from creative accomplishment in fields such as music and the arts.

And then there are technologies' risks. Do we want the self-driving cars and robotic assistants watching everything we do, learning our needs and doing our chores? Most of us will want the benefits these bring. But what if the makers of these products use them to spy on us and the technologies themselves begin to exceed the intelligence of their creators? We clearly need to incorporate limits into our servant machines.

And what if we become physically and emotionally dependent on our robots? We really don't want our technologies to become like recreational drugs; we want greater autonomy and the freedom to live our lives the way we wish.

No technology is all black or white. It can be used for good and for harm. We have to decide what the limits should be and where the ethical lines are.

As Prof Hawking pointed out, we are at an inflection point with all of these technologies, and we can still take them in a direction that uplifts humankind. But if we don't learn and participate, our darkest fears will become reality.

WASHINGTON POST

  • Vivek Wadhwa is a distinguished fellow and professor at Carnegie Mellon University Engineering in Silicon Valley and a director of research at the Center for Entrepreneurship and Research Commercialisation at Duke University.

A version of this article appeared in the print edition of The Straits Times on December 09, 2016, with the headline What Stephen Hawking gets right and wrong about 'the most dangerous time for our planet'.