In earlier days, it was a big deal to develop a computer program that could best a human player at chess; then the roles switched, and it became a big deal if a human could beat the computer, which remains the case today. Now, however, there is a new gold standard for testing artificial intelligence: the StarCraft series of complex real-time strategy (RTS) games. For the first time, an AI program has beaten a top professional player in a series of StarCraft II matches.
Google's AlphaStar program annihilated Team Liquid's Grzegorz "MaNa" Komincz, one of the world's best StarCraft players, winning 5-0 under professional match conditions on a competitive ladder map. Unlike other AI showcases, this one was played without any game restrictions or modifications of the rules to level the playing field.
"No [previous] system has come anywhere close to rivaling the skill of professional players. In contrast, AlphaStar plays the full game of StarCraft II, using a deep neural network that is trained directly from raw game data by supervised learning and reinforcement learning," Google explains.
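The two-phase training Google describes, imitating human play first and then improving from rewards, can be illustrated with a toy sketch. Everything here (the linear policy, the invented reward, the data sizes) is an assumption for illustration and bears no resemblance to AlphaStar's actual scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Phase 1: supervised learning from "replay" data (state -> human action).
# true_w is a hidden rule standing in for human decision-making.
true_w = np.array([1.0, -1.0, 0.5, 0.0])
states = rng.normal(size=(200, 4))
human_actions = (states @ true_w > 0).astype(float)

w = np.zeros(4)
for _ in range(500):                      # gradient descent on log-loss
    p = sigmoid(states @ w)
    w -= 0.1 * states.T @ (p - human_actions) / len(states)

# Phase 2: reinforcement learning (REINFORCE) on a reward signal,
# starting from the imitation-learned policy rather than from scratch.
for _ in range(200):
    s = rng.normal(size=4)
    p = sigmoid(s @ w)
    a = float(rng.random() < p)           # sample an action from the policy
    reward = 1.0 if a == float(s @ true_w > 0) else -1.0   # toy reward
    w += 0.01 * reward * (a - p) * s      # policy-gradient update

accuracy = np.mean((sigmoid(states @ w) > 0.5) == human_actions)
print(f"agreement with demonstrations: {accuracy:.2f}")
```

The point of the two phases is practical: imitation gives the policy a sensible starting point, so the reinforcement phase refines behavior instead of flailing randomly.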
There are several different ways to play StarCraft II, though in esports the most common is a 1v1 tournament played over five games. Players start by choosing one of three different races. The player must then balance resource harvesting and economic management (macro) with low-level control of their individual units (micro).
"The need to balance short and long-term goals and adapt to unexpected situations, poses a huge challenge for systems that have often tended to be brittle and inflexible. Mastering this problem requires breakthroughs in several AI research challenges," Google says.
One of those challenges is "imperfect information." Unlike games such as chess or Go, where the entire board is in view, crucial information is hidden from a StarCraft player and must be discovered by scouting. That is just one of many challenges, though—the AI must also deal with game theory (there is no single best strategy), long-term planning, continuous real-time play (there is no taking turns in StarCraft II), and so forth.
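The imperfect-information point can be made concrete with a fog-of-war sketch: the agent's view of the map only matches reality where it has scouted. The grid, the symbols, and the scouting mechanic below are all invented for this illustration:

```python
# Minimal fog-of-war illustration: unlike chess, the agent sees only the
# parts of the map it has scouted. Everything here is a toy assumption.
UNKNOWN = "?"

full_map = [
    [".", ".", ".", "E"],   # "E" marks the hidden enemy base
    [".", ".", ".", "."],
    ["S", ".", ".", "."],   # "S" is our starting position
]

def observed(full, scouted):
    """Return the player's view: only scouted cells are revealed."""
    return [[full[r][c] if (r, c) in scouted else UNKNOWN
             for c in range(len(full[0]))] for r in range(len(full))]

scouted = {(2, 0)}                      # at first we see only our own base
view = observed(full_map, scouted)
assert view[0][3] == UNKNOWN            # the enemy base is hidden...

scouted |= {(0, 2), (0, 3)}             # ...until a scout reveals that corner
view = observed(full_map, scouted)
assert view[0][3] == "E"
```

Any decision the agent makes has to be conditioned on `view`, not `full_map`, which is what separates this setting from perfect-information board games.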
AlphaStar's behavior is produced by a deep neural network that takes input data from the raw game interface and outputs a sequence of instructions constituting an action within the game. The network is trained with what Google calls a "novel multi-agent learning algorithm."
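That observation-in, instruction-out pipeline can be sketched with a tiny stand-in network. The feature layout, layer sizes, and action set below are assumptions made up for this sketch; AlphaStar's real architecture is vastly larger and trained, whereas these weights are random:

```python
import numpy as np

rng = np.random.default_rng(1)

ACTIONS = ["no_op", "move", "attack", "build"]

# A tiny two-layer network with random (untrained) weights: one head picks
# an action type, a second head picks a target location on the map.
W1 = rng.normal(size=(16, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, len(ACTIONS))); b2 = np.zeros(len(ACTIONS))
W_xy = rng.normal(size=(32, 2))            # head for map-target coordinates

def act(observation):
    """Map a raw observation vector to one in-game instruction."""
    h = np.tanh(observation @ W1 + b1)     # shared hidden representation
    logits = h @ W2 + b2
    action = ACTIONS[int(np.argmax(logits))]
    x, y = np.clip(h @ W_xy, 0.0, 1.0)     # target coordinates in [0, 1]
    return action, (float(x), float(y))

obs = rng.normal(size=16)                  # stand-in for raw interface data
action, target = act(obs)
```

Splitting the output into an action type plus arguments mirrors the idea of emitting structured instructions rather than a single move, which is part of what makes RTS action spaces so much larger than a chessboard's.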
The source link in the Via field below goes into much more detail and is worth a read. It's pretty impressive what Google has done here, and make no mistake, the application of this technology extends beyond video games. According to Google, the underlying technology could be used to tackle real-world challenges, such as weather prediction, climate modeling, language understanding, and more.