Monday, March 14, 2016

The Sadness and Beauty of Watching Google’s AI Play Go

Cade Metz (via Jason Snell, Hacker News):

Even after Lee Sedol returned to the table, he didn’t quite know what to do, spending nearly 15 minutes considering his next play. AlphaGo’s move didn’t seem to connect with what had come before. In essence, the machine was abandoning a group of stones on the lower half of the board to make a play in a different area. AlphaGo placed its black stone just beneath a single white stone played earlier by Lee Sedol, and though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time—a surprise all the more remarkable when you consider that people have been playing Go for more than 2,500 years. The commentators couldn’t even begin to evaluate the merits of the move.

[…]

Then, over the next three hours, AlphaGo went on to win the game, taking a two-games-to-none lead in this best-of-five contest. Machines have beaten the best humans at chess and checkers and Othello and Jeopardy!. But until now, no machine had beaten the very best at Go, a game that is exponentially more complex than chess.

Cade Metz:

AlphaGo had already claimed victory in the best-of-five contest, a test of artificial intelligence closely watched in Asia and across the tech world. But on Sunday evening inside Seoul’s Four Seasons hotel, Lee Sedol clawed back a degree of pride for himself and the millions of people who watched the match online.

[…]

Using what are called deep neural networks—networks of hardware and software that mimic the web of neurons in the human brain—AlphaGo first learned the game of Go by analyzing thousands upon thousands of moves made by real live human players. Thanks to another technology called reinforcement learning, it then climbed to an entirely different and higher level by playing game after game after game against itself. In essence, these games generated all sorts of new moves that the machine could use to retrain itself. By definition, these are inhuman moves.

[…]

At this point, AlphaGo started to play what Redmond and Garlock considered unimpressive or “slack” moves. The irony is that this may have indicated that the machine was confident of a win. AlphaGo makes moves that maximize its probability of winning, not its margin of victory. “This was AlphaGo saying: ‘I think I’m ahead. I’m going to wrap this stuff up,’” Garlock said.
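The two-stage training pipeline Metz describes—first learning from human games, then improving through self-play—can be illustrated with a toy sketch. Everything here is hypothetical and vastly simplified: instead of deep neural networks and Go, it uses a tabular policy on a tiny Nim-style game (take 1 or 2 stones from a pile; whoever takes the last stone wins), with a crude reinforcement rule that upweights the moves the winner made.

```python
import random

random.seed(0)
PILE = 7  # starting pile size for the toy game

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

# Stage 1 (stand-in for supervised learning from human games):
# initialize a tabular policy -- here just uniform weights per position.
policy = {p: {m: 1.0 for m in legal_moves(p)} for p in range(1, PILE + 1)}

def choose(pile):
    moves = list(policy[pile])
    weights = [policy[pile][m] for m in moves]
    return random.choices(moves, weights=weights)[0]

def self_play():
    """Play one game against itself; return the move history and the winner."""
    pile, player, history = PILE, 0, []
    while pile > 0:
        move = choose(pile)
        history.append((player, pile, move))
        pile -= move
        player = 1 - player
    return history, 1 - player  # whoever took the last stone wins

# Stage 2 (stand-in for reinforcement learning via self-play):
# replay many games against itself, upweighting the winner's moves.
for _ in range(5000):
    history, winner = self_play()
    for player, pile, move in history:
        if player == winner:
            policy[pile][move] += 0.5

best = max(policy[PILE], key=policy[PILE].get)
print(best)
```

The games the policy plays against itself generate positions and moves that never appeared in the "human" data—a miniature version of the point in the quote that self-play produces moves the machine can retrain itself on. AlphaGo's real system additionally uses Monte Carlo tree search guided by its networks, which this sketch omits entirely.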
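Garlock's point about "slack" moves follows directly from the objective: a move selector that maximizes estimated win probability will happily pick a move that wins by half a point over one that might win by a dozen. A minimal sketch, using made-up numbers for two hypothetical candidate moves:

```python
# Hypothetical evaluations for two candidate moves (illustrative numbers only).
candidates = {
    "safe endgame move":   {"win_prob": 0.93, "margin": 1.5},
    "aggressive invasion": {"win_prob": 0.78, "margin": 12.0},
}

# AlphaGo-style selection: maximize the probability of winning.
alphago_choice = max(candidates, key=lambda m: candidates[m]["win_prob"])

# A margin-maximizing player would choose differently.
margin_choice = max(candidates, key=lambda m: candidates[m]["margin"])

print(alphago_choice)  # "safe endgame move" -- looks "slack" but is more certain
print(margin_choice)   # "aggressive invasion"
```

To a human commentator scoring the board by points, the first choice looks unambitious; to an agent optimizing only for the probability of winning, it is exactly right.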

Update (2016-03-14): See also: Kirk McElhearn, Gary Robinson, John Langford, Hacker News, Sam Byford.

Update (2016-03-15): Sam Byford (Hacker News):

AlphaGo has beaten world-class player Lee Se-dol for a fourth time to win the five-game series 4-1 overall. The final game proved to be a close one, with both sides fighting hard and going deep into overtime.

Update (2016-03-16): Kieran Healey:

The Google/DeepMind team has a technical paper in Nature describing AlphaGo, the program they wrote.

Update (2016-03-17): Google:

First, this test bodes well for AI’s potential in solving other problems. AlphaGo has the ability to look “globally” across a board—and find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas. Second, while the match has been widely billed as “man vs. machine,” AlphaGo is really a human achievement. Lee Sedol and the AlphaGo team both pushed each other toward new ideas, opportunities and solutions—and in the long run that’s something we all stand to benefit from.
