Google AI software cracks complex Go game

Students in China playing Go, known as Weiqi in Chinese, during a competition. The board game invented over 2,500 years ago is typically played on a 19-by-19 grid, where players attempt to surround empty territory and capture an opponent's stones.
PHOTO: REUTERS

Mastery of ancient game a signal of machines starting to think like humans

SAN FRANCISCO • Computers have learnt to master backgammon, chess and Atari's Breakout, but one game has always eluded them.

It is Go, a Chinese board game invented over 2,500 years ago. The artificial intelligence challenge has piqued the interest of researchers at Google and Facebook, and the search giant has recently made a breakthrough.

Google has developed the first AI software that learns to play Go and is able to beat some professional human players, according to an article published on Wednesday in the science journal Nature.

Google DeepMind, the London research group behind the project, is now getting the software ready for a competition in Seoul against the world's best Go player in March. The event harks back to the highly publicised chess match in 1997 when IBM's Deep Blue computer defeated the world chess champion. However, Go is a much more complex game. It is typically played on a 19-by-19 grid, where players attempt to surround empty territory and capture an opponent's stones.

Whereas chess offers some 20 possible choices per move, Go has about 200, said Mr Demis Hassabis, co-founder of Google DeepMind. "There's still a lot of uncertainty over this match, whether we win," he said. IBM's victory demonstrated the phenomenal processing power available to modern computers; DeepMind's should highlight how these phenomenally powerful machines are beginning to think in a more human way.
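The branching-factor gap compounds with every move, which is why brute-force search works far better for chess than for Go. A minimal sketch, using only the approximate per-move figures cited above (about 20 for chess, about 200 for Go), illustrates how quickly the two game trees diverge; the depth of 10 moves is an arbitrary choice for illustration:

```python
# Illustrative arithmetic only: approximate branching factors from the article.
CHESS_BRANCHING = 20
GO_BRANCHING = 200


def tree_size(branching: int, depth: int) -> int:
    """Number of distinct move sequences of the given depth: branching ** depth."""
    return branching ** depth


depth = 10  # a modest lookahead of ten moves (hypothetical example depth)
chess_sequences = tree_size(CHESS_BRANCHING, depth)  # 20 ** 10
go_sequences = tree_size(GO_BRANCHING, depth)        # 200 ** 10

print(f"Chess, depth {depth}: about {chess_sequences:.2e} sequences")
print(f"Go,    depth {depth}: about {go_sequences:.2e} sequences")
# The ratio is (200/20) ** 10 = 10 ** 10, i.e. ten billion times larger.
print(f"Go tree is {go_sequences // chess_sequences:,}x larger")
```

Even at this shallow depth, the Go tree is ten billion times larger than the chess tree, which is why AlphaGo relies on learned evaluation rather than exhaustive search.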

The research has implications beyond an old Chinese board game. The systems used by Facebook and Google were not preprogrammed with specific if-this-then-do-that code or explicitly told the rules. Instead, they learnt to play at a very high level by themselves.

Computer scientists have been trying to crack Go for years.

Facebook is working on a similar project using the same sorts of neural-network and search technology as Google. Google's version, called AlphaGo, achieved higher scores than Facebook's, according to data from the firms.

These techniques can be adapted to any problem "where you have a large amount of data that you have to find insights in", Mr Hassabis said.

Facebook's Go research will be used to improve its Facebook M virtual assistant and accessibility services, said Mr Ari Entin, a company spokesman.

In October, Google pitted AlphaGo against Mr Fan Hui, the best player in Europe. They played five games; the computer won all of them.

Mr Hassabis said Google may follow Facebook's lead in making a version of its Go software available online for people to play against. But first, the company must worry about the match in Seoul. AlphaGo is going up against Mr Lee Sedol, the world's top player over the past decade. The winner will receive US$1 million (S$1.43 million).

BLOOMBERG

A version of this article appeared in the print edition of The Straits Times on January 29, 2016, with the headline 'Google AI software cracks complex Go game'.