A board game like Go might not look complicated to the untrained eye, which could lead the uninformed to believe that it wouldn't be all that difficult for a computer to best a human player in a head-to-head match. We've seen plenty of examples where that assumption falls apart (IBM's Watson is a good starting point), and it's because despite their simple appearance, the number of possible moves at any given point in such games can be astronomical.
Last month, we wrote about Google's DeepMind and its challenge of going up against the world's best Go player, Ke Jie. Fast-forward to now, and DeepMind's AlphaGo has beaten the Grandmaster in the very first game of the match.
What makes this defeat notable is that it was achieved through deep learning. No hand-crafted algorithm told the computer the best path to success; instead, the system learned for itself how to approach each situation. In effect, AlphaGo mimics a real learning process, backed by enough horsepower to churn through enormous amounts of data and candidate moves very quickly.
The method used to teach AlphaGo how to play Go was reinforcement learning, in which the computer receives a reward signal when it's done something good, so that it knows it's on the right track. To get really good at Go, AlphaGo played games against itself. There's nothing better than being self-sufficient, right?
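To give a feel for what "learning from a reward signal through self-play" means, here's a minimal toy sketch, far simpler than anything AlphaGo actually uses. It trains a shared value table by self-play on one-pile Nim (two players alternately remove 1-3 stones; whoever takes the last stone wins). The only feedback either player ever gets is win (+1) or lose (-1) at the end of a game, yet the table converges toward optimal play. All names and parameters here are ours, for illustration only:

```python
import random

# Toy self-play reinforcement learning on one-pile Nim.
# This is NOT AlphaGo's algorithm -- just an illustration of the idea that
# a win/loss signal alone, fed back through self-play, can teach good play.

ACTIONS = (1, 2, 3)  # a player may remove 1, 2, or 3 stones per turn

def train(episodes=30000, start=7, alpha=0.5, epsilon=0.1, seed=0):
    """Self-play training: both players share and update one value table."""
    rng = random.Random(seed)
    q = {}  # (stones_left, action) -> estimated value for the player to move
    for _ in range(episodes):
        stones = start
        history = []  # (state, action) per move, players alternating
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < epsilon:           # explore occasionally
                action = rng.choice(legal)
            else:                                # otherwise play greedily
                action = max(legal, key=lambda m: q.get((stones, m), 0.0))
            history.append((stones, action))
            stones -= action
        # Whoever made the final move took the last stone and won (+1);
        # walking backward, the reward flips sign for the opposing player.
        reward = 1.0
        for state, action in reversed(history):
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, stones):
    """Greedy move from a trained table."""
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda m: q.get((stones, m), 0.0))
```

For this game the optimal strategy is known (leave your opponent a multiple of 4 stones), so you can check that self-play rediscovers it: after training, `best_move(q, 5)` should be 1, `best_move(q, 6)` should be 2, and `best_move(q, 7)` should be 3. AlphaGo applies the same core loop at a vastly larger scale, with deep neural networks standing in for the lookup table.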
There are still more games against Ke Jie to play, as well as other matches that will allow participants to collaborate with AlphaGo. This could help us better understand how AI can complement humans, rather than replace them.