Why we are in danger of overestimating AI

An autonomous parcel delivery robot, developed by Starship Technologies, at the AI Congress in London last month. Despite advances, the technology is in danger of being overrated, and much more work is needed before machine intelligence matches the human variety, the writer says. PHOTO: BLOOMBERG

Artificial intelligence (AI) is one of the most important technological advances of the early 21st century. Already it has meant that machines can read medical images as well as a radiologist can, and it has enabled the auto industry to develop autonomous cars.

The technology is in danger of being overrated, however, and considerably more work is needed before we can reach the long-dreamt-of moment when machine intelligence matches the human variety.

When we discuss AI today we are mainly referring to just one facet of it: deep learning.

This technology has its limitations, says Dr Dave Ferrucci, formerly an AI expert at IBM. The Watson project he led there contributed to the rise of interest in cognitive systems when, seven years ago, it beat the best human players at Jeopardy, the US television quiz show.

However, Dr Ferrucci, co-founder and chief executive of Elemental Cognition, stresses that deep learning is simply a statistical technique for finding patterns in large amounts of data.

It has predictive value but no true understanding in the sense that a human does. Having a computer simply spew out an answer "is not sufficient in the long term", he says. "You want to say: 'Here's why'."

If one of the key hopes for deep learning, such as autonomous driving, turns out to be misplaced, then the whole field of AI could be in for a sharp downturn in both popularity and funding.

The case against deep learning was put forcefully at the start of this year in a paper by New York University psychology professor Gary Marcus, who is a persistent sceptic. His list of complaints extends from its heavy reliance on large data sets to its susceptibility to machine bias and its inability to handle abstract reasoning.

Professor Marcus' conclusion was that "one of the biggest risks in the current overhyping of AI is another AI winter". He was referring to the period in the 1970s when over-optimism about the technology gave way to deep disillusionment.

Some AI experts outside the deep learning mainstream agree it is important to question the prevailing orthodoxy.

"Given the excitement and investment in deep learning, it's important to analyse it and consider limitations," says Dr Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. Referring to recent warnings about the threat to humanity from an all-powerful AI, he says: "If we have Elon Musk and (Oxford university's) Nick Bostrom talking about 'superintelligence', we need Gary Marcus to provide a reality check."

Deep learning is a statistical approach using so-called "neural networks", which are based on a theory of how the human brain works. Information passes through layers of artificial neurons, connections between which are adjusted until the desired result emerges.
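
In code, the idea is compact. The sketch below is a minimal illustration in Python: the layer sizes, random weights and names such as W1 and W2 are invented for this example, not drawn from any system mentioned in the article. It shows information passing through two layers of artificial neurons; training would adjust the weight matrices.

```python
# A minimal sketch of a neural network's forward pass.
# All sizes, weights and inputs are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common activation function: a neuron fires only on positive input.
    return np.maximum(0.0, x)

# Connection strengths between layers; these are what training adjusts.
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden (8 neurons)
W2 = rng.normal(size=(8, 3))   # hidden layer -> output (3 categories)

x = rng.normal(size=(1, 4))    # one input example

hidden = relu(x @ W1)          # information flows through the first layer
scores = hidden @ W2           # and on to the output layer
print(scores)                  # raw category scores, before any training
```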

The main technique, called supervised learning, involves feeding in a series of inputs to train the system until the right output is obtained: pictures of cats, for instance, should eventually result in the word "cat".
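
A toy version of that training loop fits in a few lines. In the hedged sketch below, the "cat" versus "not cat" task is reduced to made-up feature vectors with synthetic labels, so every number is illustrative; the loop simply compares the model's output with the labels and nudges the weights until the right answers emerge.

```python
# A minimal sketch of supervised learning on a toy binary task
# ("cat" = 1, "not cat" = 0). Data and labels are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Labelled training data: inputs X and the "right output" y.
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)       # invented labels for illustration

w = np.zeros(5)                          # connection weights to be adjusted
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # current predictions
    grad = X.T @ (p - y) / len(y)        # gradient of cross-entropy loss
    w -= 0.5 * grad                      # adjust weights toward the labels

accuracy = np.mean((X @ w > 0) == y)
print(f"training accuracy: {accuracy:.2f}")   # typically near 1.0 here
```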

The approach has inherent limits. Stanford University's adjunct professor Andrew Ng, one of the founders of Google Brain, the search company's deep learning project, says the system works for problems where a clear input can be mapped on to a clear output. This means it is best suited to a class of problems involving categorisation.

The applications of this kind of system are broad. The potential of neural networks first came to wide attention in 2012, when one system came close to matching human-level perception in recognising images.

The technique has also brought big leaps in speech recognition and language translation, allowing machines to start doing jobs that were once the preserve of human workers.

However, neural networks can be fooled. Prof Marcus points to research showing how an image-recognition network was tricked into mistaking a picture of a turtle for a rifle. Skewed training data can also lead to machine bias.
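
That fragility can be demonstrated even on a toy model. The self-contained sketch below uses an invented linear classifier, standing in for a far larger network: a small, uniform nudge to each input feature flips a confident "cat" verdict, loosely analogous to the adversarial images used in such research.

```python
# A minimal sketch of an adversarial perturbation on a toy classifier.
# The weights and the input are invented for illustration only.
import numpy as np

w = np.array([2.0, -1.5, 1.0, 0.5, -2.0])   # a hypothetical trained model
x = np.array([0.4, -0.2, 0.3, 0.1, -0.3])   # an input it labels "cat"

def prob_cat(v):
    return 1.0 / (1.0 + np.exp(-(v @ w)))

print(f"before: {prob_cat(x):.2f}")          # about 0.89: confidently "cat"

# Fast-gradient-style step: nudge every feature slightly against the
# sign of the weight vector, which lowers the score most efficiently.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(f"after:  {prob_cat(x_adv):.2f}")      # about 0.49: now "not cat",
                                             # after only a small nudge
```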

The more fundamental case against deep learning is that the technology cannot deal with many of the problems humans will want computers to handle. It has no capacity for things the human mind does easily, such as the abstraction and inference that allow us to "understand" from very little information, or to instantly apply an insight to a new set of circumstances.

"A huge problem on the horizon is endowing AI programs with common sense," says Dr Etzioni. "Even little kids have it, but no deep learning program does."

Recent research offers hope that at least some of the limitations of deep learning can be overcome. These developments include transfer learning, where an algorithm trained on one set of data is applied to a different problem, and unsupervised learning, where a system learns without needing any "labelled" data to teach it.
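
Transfer learning, for instance, amounts to reusing what one network has already learnt. In the minimal sketch below, with invented shapes and synthetic data, a matrix standing in for "pre-trained" layers is kept frozen, and only a small new output layer is trained on the second task.

```python
# A minimal sketch of transfer learning. Shapes and data are illustrative.
import numpy as np

rng = np.random.default_rng(2)

# Stands in for layers already trained on a large, original data set.
W_pretrained = rng.normal(size=(10, 16))

def features(x):
    # Frozen feature extractor: W_pretrained is never updated below.
    return np.maximum(0.0, x @ W_pretrained)

# A much smaller labelled set for the new problem.
X_new = rng.normal(size=(40, 10))
y_new = (X_new[:, 0] > 0).astype(float)

head = np.zeros(16)                      # only these weights are trained
F = features(X_new)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ head)))
    head -= 0.3 * F.T @ (p - y_new) / len(y_new)

acc = np.mean(((F @ head) > 0) == (y_new == 1))
print(f"new-task accuracy with frozen features: {acc:.2f}")
```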

What we need are systems that can master a number of different forms of intelligence, says Dr Ferrucci. What humans think of as "cognition" actually encompasses several distinct techniques, each suited to a different type of problem, he says. It will take similarly hybrid machines to achieve that human kind of understanding.

Like Dr Etzioni, Dr Ferrucci suggests that this will require advances in other approaches to AI that are at risk of being sidelined by the fervour for deep learning.

"We need to shift from narrow 'AI savants' that tackle a single problem, to broader AI that can tackle multiple tasks without requiring massive data sets for each," says Dr Etzioni. "The last 50 years of AI research have yielded many insights that can help."

FINANCIAL TIMES

A version of this article appeared in the print edition of The Straits Times on February 07, 2018, with the headline Why we are in danger of overestimating AI.