Zuckerberg and Musk are arguing about the dangers of AI - they're both wrong

In one corner, Facebook's Mark Zuckerberg. In the other, Tesla's Elon Musk.

If Mr Zuckerberg and Mr Musk had gone to the same primary school, I can practically imagine them firing "Your algorithm is so slow…" insults back and forth on the playground. Instead, as grown-up tech emperors, they trade barbs on webcasts and on Twitter.

Last week, Mr Zuckerberg was on Facebook Live while grilling meat and saying things like "Arkansas is great", because he is not at all running for political office. He then went on to criticise artificial intelligence (AI) "naysayers" who drum up "doomsday scenarios" as "really negative, and in some ways… pretty irresponsible".

Mr Musk has been the foremost evangelist for the AI apocalypse.

At a symposium in 2014, he called AI our "biggest existential threat". He genuinely believes that AI will be able to recursively improve itself until it views humans as obsolete.

Last month, he told a gathering of United States governors: "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."

Mr Musk even co-founded a billion-dollar non-profit group whose goal is to create "safe AI".

Many, including Mr Musk himself, took Mr Zuckerberg's comment as a reply to those outspoken views. Last Tuesday, Mr Musk fired back, tweeting: "I've talked to Mark about this. His understanding of the subject is pretty limited."

It seems like faulty logic to say that someone who programmed his own home AI has a "pretty limited" understanding of AI.

But the truth is that they are both probably wrong. Mr Zuckerberg, either truthfully or performatively, is optimistically biased about AI. And there are plenty of reasons to question Mr Musk's scary beliefs.

Recently, Professor Rodney Brooks - the founding director of MIT's Computer Science and Artificial Intelligence Laboratory, and someone whose understanding of AI is unquestionably expansive - pointed out Mr Musk's mistake and hypocrisy in an interview with TechCrunch.

He explained that there is a huge difference between a human's skill at a task and a computer's skill at a task, largely stemming from their underlying "competence".

What does that mean? When people talk about human genius, the late John von Neumann often pops up. His underlying "competence" was so great he made inimitable contributions to physics, maths, computer science and statistics. He was also part of the Manhattan Project and came up with the term "mutually assured destruction".

Prof Brooks seems to be implying, and many agree, that computers lack that kind of fluid intelligence. A computer that can identify cancerous tumours cannot necessarily determine whether a picture contains a dog or the Brooklyn Bridge.

Prof Brooks is saying that Mr Musk is anthropomorphising AI and, thus, overestimating its danger.

He also expressed irritation with Mr Musk's continued general calls to "regulate" AI. "Tell me, what behaviour do you want to change, Elon?" he asked rhetorically. "By the way, let's talk about regulation on self-driving Teslas because that's a real issue."

It is this vagueness, alarm bells with no concrete behaviour to regulate, that makes Mr Zuckerberg right when he calls Mr Musk's fearmongering irresponsible, though he's right for the wrong reasons.

During the same live-stream, Mr Zuckerberg claimed: "In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives."

He was joined by his wife Priscilla, and if "our" refers to the Zuckerbergs, he's right. AI will absolutely deliver improvements in their lives. Their home AI will tailor their living environment to their every need, drones will deliver goods to their home and self-driving cars will chauffeur them to meetings.

However, for many, AI will deliver little more than unemployment cheques. Reports have put coming AI-driven unemployment as high as 50 per cent, and while that figure is almost certainly alarmist, more sober estimates are no more comforting, reaching as high as 25 per cent. It's a different hell from the one Mr Musk envisions.

Mr Musk's apocalypse is sexy.

From Blade Runner to 2001: A Space Odyssey, the American public loves the story of robot uprisings and the human cadre brave enough to save us. They'll buy tickets to watch these movies and they'll buy into the idea that a machine revolt could happen.

The real problem, the one of human inequality, is unimaginative and depressing to think about. Neo might be able to save humanity from the Matrix, but he can't deliver us from income inequality, unemployment and economic displacement.

It's in this sense that Mr Musk's comments are irresponsible. If we spend all our time worrying that the sky is falling, we have no time left to stop and think about the very real lion's den we're walking into.

One of the world's leading AI experts, Professor Andrew Ng, put it clearly at the 2015 GPU Tech Conference: "Rather than being distracted by evil killer robots, the challenge to labour caused by these machines is a conversation that academia and industry and government should have."

A large part of the disagreement between the apocalyptic crowd (led by Mr Musk) and the practical concerns crowd (like Prof Brooks and Prof Ng) comes down exactly to a depth of understanding. Prof Brooks identified this in the TechCrunch interview when he pointed out that the purveyors of existential AI fear typically aren't computer scientists.

For AI and machine-learning scientists, the day-to-day worry isn't that their creations will evolve a malicious personality; it's that they will roll into a fountain and short-circuit.

Likewise, in academia, research papers try to incrementally improve the accuracy with which computers can identify human actions in video, such as eating or playing basketball. Those are concrete goals. Did the robot fall into the fountain? Did we improve our error rate? You can meet these goals or fall short, but either way, the outcome is clear and measurable.
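To see why such goals are so tractable, consider how "did we improve our error rate?" is actually checked. Below is a minimal, purely illustrative Python sketch; the video clips, labels and model outputs are all invented for the example, but real action-recognition benchmarks work the same way at far larger scale.

```python
# Illustrative only: comparing the error rates of two hypothetical
# action classifiers on a tiny labelled test set.

def error_rate(predictions, labels):
    """Fraction of test examples the model got wrong."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(labels)

# Ground-truth actions for ten hypothetical video clips.
labels = ["eating", "basketball", "eating", "running", "basketball",
          "eating", "running", "basketball", "eating", "running"]

# Predictions from an old model and a newer one (both invented).
old_model = ["eating", "running", "eating", "running", "basketball",
             "running", "running", "eating", "eating", "running"]
new_model = ["eating", "basketball", "eating", "running", "basketball",
             "eating", "running", "eating", "eating", "running"]

old_err = error_rate(old_model, labels)  # 3 of 10 wrong -> 0.3
new_err = error_rate(new_model, labels)  # 1 of 10 wrong -> 0.1
print(f"old: {old_err:.0%}, new: {new_err:.0%}, improved: {new_err < old_err}")
```

Either the error rate went down or it didn't; there is nothing ethereal about it.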

Mr Musk and the public, however, are consumed by the problem of true machine intelligence, a problem with no real definition.

That problem is ill-defined in its bones. As famed British psychologist Richard Gregory put it in a 1998 textbook: "Innumerable tests are available for measuring intelligence.

"Yet, no one is quite certain of what intelligence is, or even just what it is that the available tests are measuring."

If we can't define intelligence for ourselves, how should we define it for our creations?

But the well-defined technical problems of AI and machine learning aren't generally interesting to the public, and the ill-defined philosophical problems of intelligence are intractable even for experts.

Instead of battling over who understands AI better, perhaps Mr Zuckerberg and Mr Musk should team up to address the AI issues really worth worrying about - such as workers displaced by Silicon Valley's creations.

NYTIMES


A version of this article appeared in the print edition of The Straits Times on August 01, 2017, with the headline "Zuckerberg and Musk are arguing about the dangers of AI - they're both wrong".