In 2014, Google bought UK startup DeepMind, widely considered one of the premier labs working on artificial intelligence. Today, DeepMind head Demis Hassabis announced that the lab has built an AI that can beat a professional human player at the ancient Chinese game of Go, a game long regarded as a benchmark for an AI’s ability to think.
The Google division published its full findings in the research journal Nature. Go’s complexity arises from its enormous number of possible positions, over a googol times more than chess. As a result, traditional AI methods that try to compute every possible move simply do not work.
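The “over a googol” claim can be sanity-checked with the commonly cited game-tree estimates (branching factor and typical game length are assumptions for illustration, not figures from the article):

```python
import math

# Rough, commonly cited game-tree estimates (assumed for illustration):
# Go: branching factor ~250 over ~150 moves  -> ~250**150 sequences
# chess: branching factor ~35 over ~80 moves -> ~35**80 sequences
go_log10 = 150 * math.log10(250)    # ~359.7 orders of magnitude
chess_log10 = 80 * math.log10(35)   # ~123.5 orders of magnitude

# A googol is 10**100; the gap between the two estimates comfortably exceeds it.
gap = go_log10 - chess_log10
print(f"Go's game tree is roughly 10^{gap:.0f} times larger than chess's")
```

With these figures the gap is around 236 orders of magnitude, which is why exhaustive search is hopeless for Go even though it is (barely) thinkable for chess.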
DeepMind’s approach centers on an AI system called AlphaGo, built on deep neural networks 12 layers deep with millions of neuron-like connections. One network, the “policy network,” selects the next move, while another, the “value network,” predicts who is going to win the game. AlphaGo studied 30 million moves from games between human experts and eventually was able to predict a human’s next move 57% of the time.
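The division of labor between the two networks can be sketched in miniature. This is a toy with randomly initialized single-layer weights standing in for trained deep networks (the board size, weight shapes, and function names here are all assumptions for illustration, not AlphaGo’s actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature of AlphaGo's two outputs on a 9x9 board.
# Real policy/value networks are deep convolutional nets; these are
# random single-layer stand-ins, just to show the two roles.
BOARD_CELLS = 9 * 9

policy_w = rng.normal(size=(BOARD_CELLS, BOARD_CELLS))
value_w = rng.normal(size=BOARD_CELLS)

def policy(board: np.ndarray) -> np.ndarray:
    """Map a board to a probability distribution over next moves (softmax)."""
    logits = board @ policy_w
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def value(board: np.ndarray) -> float:
    """Map a board to an estimated win probability in (0, 1) via a sigmoid."""
    return float(1.0 / (1.0 + np.exp(-(board @ value_w))))

board = rng.integers(-1, 2, size=BOARD_CELLS).astype(float)  # -1/0/+1 stones
probs = policy(board)
print(probs.sum())             # move probabilities sum to 1
print(0.0 < value(board) < 1.0)  # win estimate is a probability
```

The key design idea survives even in this toy: one head narrows the search by ranking candidate moves, the other cuts it short by judging positions without playing them out.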
More importantly, AlphaGo then invented new strategies of its own by playing thousands of games against itself and adjusting its networks through a process called reinforcement learning, DeepMind’s key expertise. Last October, it beat a human professional five games to none.
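The learn-by-playing loop can be illustrated with a deliberately tiny example. This is a toy REINFORCE-style policy-gradient sketch on a two-action “game,” not AlphaGo’s actual training procedure; the reward probabilities, learning rate, and baseline are all assumptions for illustration:

```python
import random
from math import exp

random.seed(42)

# Toy reinforcement learning: the agent has one parameter (a preference
# for action 1 over action 0), plays many rounds, and nudges that
# parameter toward actions that led to wins.
theta = 0.0
LR = 0.1

def prob_action1(t: float) -> float:
    """Probability of choosing action 1 (a sigmoid of the preference)."""
    return 1.0 / (1.0 + exp(-t))

def play(action: int) -> int:
    """Toy 'game': action 1 wins 80% of the time, action 0 only 20%."""
    return 1 if random.random() < (0.8 if action == 1 else 0.2) else 0

for _ in range(5000):
    p = prob_action1(theta)
    action = 1 if random.random() < p else 0
    reward = play(action)
    # Policy-gradient update for a Bernoulli policy: (action - p) is the
    # score function; the 0.5 baseline makes losses push the policy away
    # from the action that produced them.
    theta += LR * (action - p) * (reward - 0.5)

print(prob_action1(theta))  # should end up close to 1
```

From pure trial and error the agent discovers which action wins, with no human examples at all; AlphaGo applies the same principle, at vastly larger scale, to whole games of Go played against itself.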
In March, the AI will face the top-ranked player in the world in a five-game challenge match in South Korea. DeepMind emphasizes that AlphaGo’s techniques are not specific to winning at Go: they are far more general and could ultimately be applied to real-world problems such as disease analysis and climate modeling.
(Image via Wikipedia)