Two competitors faced off in a five-game match of Go in Seoul. Representing mankind was Lee Se-Dol, one of the world’s top Go players. On the other side was AlphaGo, the artificial intelligence system created by DeepMind, a company owned by Google.
Go, an ancient game popular in China, Korea, and Japan, is played by taking turns placing stones on a 19×19 grid. Its complexity lies in its simplicity: there are few rules, so each turn can offer 200 or more possible moves, and the number of possible board arrangements exceeds the number of atoms in the observable universe. Computers have been able to beat even the best humans at chess, which has more rules and constraints and therefore fewer possible moves, since 1997. Members of both the Go and artificial intelligence communities thought it would take at least another ten years for a computer to surpass a human at Go – they were astonished when AlphaGo took the match 4-1.
AlphaGo learned to play Go using neural networks similar to those found in photo-recognition software, trained first by analyzing human games and then by playing slightly different versions of itself millions of times. Lee Se-Dol’s only win came in Game 4, when he played a move so strange and unexpected that AlphaGo didn’t know how to evaluate it – AlphaGo blundered on a subsequent move and eventually resigned.
Now that AlphaGo has topped one of the world’s best Go players, DeepMind is turning to healthcare with the launch of DeepMind Health and a partnership with the UK’s National Health Service. DeepMind Health aims to alert hospital staff promptly when critical patient information is entered or recorded.