Google computer beats pros at 'world's most complex board game'

It was considered one of the last great challenges between man and machine but now, for the first time, a computer program has beaten a professional player of the ancient Chinese game of Go in a defeat that many had not expected for at least another 10 years.

The machine's victory is being likened to IBM's Deep Blue computer defeating the reigning world chess champion Garry Kasparov in 1997, a result that became a milestone in the advance of artificial intelligence over the human mind.

Go, however, is more complex than chess, with a vastly greater number of potential moves, so experts were surprised that computer scientists had developed a suite of artificial intelligence (AI) algorithms that taught the computer how to win against Europe's top player.

The program, called AlphaGo, defeated European champion Fan Hui by a resounding five games to nil in a match played last October but only now revealed in a scientific study of the moves and algorithms published last night in the journal Nature. A match against the current world Go champion, Lee Sedol from South Korea, is now scheduled for March.

It was the first time a computer had won against a professional Go player on a full-sized board without any handicaps or advantages given to either side, said Demis Hassabis of Google DeepMind, the AI arm of Google in London, who helped to write the program.

"Go is the probably the most complex board game humans play. There are more configurations of the board than there are atoms in the Universe. In the end, AlphaGo won 5-nil and it was perhaps stronger than even we were expecting," Mr Hassabis said.

"AlphaGo discovered for itself many of the patterns and moves needed to play Go. Go is considered to be the pinnacle of AI research - the holy grail. For us, it was an irresistible challenge," he said.

Computer chess programs work by analysing the possible moves at each stage of the game, a task that is relatively manageable when there are only about 20 of them to consider. In Go, however, there are about 200 possible moves at each stage, making the task of writing a winning program far more difficult.
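As a rough, back-of-the-envelope illustration (not from the article), the sketch below shows how quickly the number of move sequences grows with the per-turn figures quoted above; the function and the depth of 10 turns are illustrative only.

```python
# Rough illustration of why exhaustive search becomes infeasible: the number
# of move sequences grows as (moves per turn) ** (turns looked ahead).
def tree_size(moves_per_turn: int, depth: int) -> int:
    """Number of distinct move sequences `depth` turns deep."""
    return moves_per_turn ** depth

# Figures quoted in the article: roughly 20 moves per turn in chess, 200 in Go.
for game, branching in [("chess", 20), ("Go", 200)]:
    print(f"{game}: {branching} moves/turn, 10 turns ahead -> "
          f"about {tree_size(branching, 10):.1e} sequences")
```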

"The search process itself is not based on brute force but on something akin to [human] imagination. In the game of Go we need this incredibly complex intuitive machinery that we only previously thought to be possible in the human brain," said David Silver of Google DeepMind, the lead author of the study.

AlphaGo uses two neural networks working in parallel and interacting with one another. A "value network" evaluates the positions of the black and white pieces, or "stones", on the board, while a "policy network" selects the moves, drawing on continuous learning from both past human games and games the program played against itself, Mr Silver said.
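The sketch below is a heavily simplified, hypothetical illustration of how a policy network and a value network might be combined in a single look-ahead step. `ToyPosition`, `policy_net` and `value_net` are placeholder stand-ins invented for this example, not DeepMind's code; the real networks are deep neural networks trained on board positions.

```python
import random

class ToyPosition:
    """Toy stand-in for a board position -- just enough structure to run the sketch."""
    def __init__(self, legal_moves):
        self.legal_moves = legal_moves

    def play(self, move):
        """Return the position after playing `move` (here the move is simply used up)."""
        return ToyPosition([m for m in self.legal_moves if m != move])

def policy_net(position):
    """Placeholder for the policy network: a probability for each legal move."""
    weights = [random.random() for _ in position.legal_moves]
    total = sum(weights)
    return [w / total for w in weights]

def value_net(position):
    """Placeholder for the value network: an estimated chance of winning."""
    return random.random()

def choose_move(position, top_k=3):
    """One simplified look-ahead step: the policy network narrows the candidate
    moves, then the value network scores the position each candidate leads to."""
    priors = policy_net(position)
    ranked = sorted(range(len(priors)), key=lambda i: priors[i], reverse=True)
    scores = {i: value_net(position.play(position.legal_moves[i])) for i in ranked[:top_k]}
    best = max(scores, key=scores.get)
    return position.legal_moves[best]

print(choose_move(ToyPosition(["D4", "Q16", "C3", "K10", "R4"])))
```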

"Humans can play perhaps a thousand games in a year whereas AlphaGo can play millions of games a day. It is conceivable with enough process power, training and search power that AlphaGo could reach a level that is beyond any human," he said.

In tests against other commercially available Go programs, AlphaGo won all but one of 500 games, even when the other programs were given a head start with stones already positioned on the board. Mr Silver said the neural networks were able to learn by themselves, unlike the "supervised" training of other artificial intelligence algorithms.

"It learns in a human-like manner but it still takes a lot of practice. It has to play many millions of games to do what a human player can learn in a few games," Mr Silver said.

World champion Lee Sedol said he is looking forward to the challenge match in March. "I have heard that Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time," he said.

Jon Diamond, president of the British Go Association, said: "Before this match the best computer programs were not as good as the top amateur players and I was still expecting that it would be at least 5 or 10 years before a program would be able to beat the top human players; now it looks like this may be imminent. The proposed challenge may well be that day."
