DeepMind is a Google-owned research lab with a scientific mission: developing Artificial Intelligence (AI) systems that can learn to solve complex problems. DeepMind says that meeting that mission requires it to design agents and then test their abilities in a wide range of environments, some built specifically for testing AI and others originally built for humans. DeepMind has announced that it has teamed up with Blizzard Entertainment to open the popular video game StarCraft II to AI testing and research.
The two firms have released SC2LE, a set of tools that Blizzard and DeepMind hope will accelerate AI research using the real-time strategy game. The SC2LE toolkit has multiple components, including a machine learning API developed by Blizzard that gives researchers and developers hooks into the game for their AI agents to interact with. These tools are also available on Linux, a first for the game.
SC2LE also includes a dataset of anonymized game replays, with 65,000 replays available at launch. Blizzard says it plans to grow that dataset to more than a million replays in the coming weeks. Other tidbits in the toolkit include PySC2, an open-source version of the DeepMind toolset that lets researchers easily use Blizzard's feature-layer API with their AI agents.
The toolkit also contains a series of simple reinforcement-learning (RL) mini-games that let researchers test the performance of agents on specific tasks, along with a full 1v1 ladder game against StarCraft II's built-in AI. Blizzard and DeepMind have released a joint paper that outlines the environment and reports baseline results on the mini-games.
This is certainly not the first time games in the StarCraft franchise have been used to test AI agents; the original StarCraft is used by AI and machine learning researchers in the annual AIIDE bot competition. DeepMind says StarCraft II has many elements that make it well suited for training and testing AI agents. The main goal is to win the game, but to accomplish that, an agent must manage sub-goals such as gathering resources and building units. Agents are also forced to make decisions whose payoff may not arrive until much later in the game.
The partially observed map also requires the agent to use memory and planning to win. DeepMind notes that StarCraft offers a far larger number of possible actions than the Atari games it also uses for testing: even assuming a small screen size of 84x84, there are roughly 100 million possible actions available to an agent. DeepMind's AlphaGo defeated Go master Ke Jie in a match in May, showing that AI can beat human players at the highest levels of a game.
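The action-space figure above can be sanity-checked with a quick back-of-envelope calculation. The parameterization below is our own illustrative assumption, not DeepMind's exact accounting: we simply count actions that take one or two coordinates on an 84x84 screen.

```python
# Back-of-envelope check of the StarCraft II action-space figure.
# Assumption (ours, for illustration): actions are parameterized by
# up to two coordinates on an 84x84 screen.

points = 84 * 84          # 7,056 selectable screen points

one_point = points        # e.g. "move screen to (x, y)"
two_point = points ** 2   # e.g. drag-select from (x1, y1) to (x2, y2)

total = one_point + two_point
print(f"points={points:,} total={total:,}")
# A single two-point action type already contributes ~5e7 combinations,
# so with multiple action types the space reaches the 1e8 range.
```

Compare this with an Atari controller, which exposes only 18 discrete actions per step; the gap is what makes StarCraft II such a demanding testbed.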