Following a resounding Go victory in 2017, Alphabet’s DeepMind turned to conquering StarCraft II. The game is considered a “grand challenge” for measuring how well AI agents handle complex tasks, and DeepMind and Blizzard are live streaming a demonstration of the latest progress tomorrow.
DeepMind and other researchers have long used games to determine whether artificial intelligence can master complex tasks that are relatively simple for humans. StarCraft is considered a “grand challenge” because it requires AI agents to “carry out and balance a number of sub-goals” in order to ultimately “beat the opponent.”
For example, while the objective of the game is to beat the opponent, the player must also carry out and balance a number of sub-goals, such as gathering resources or building structures. In addition, a game can take anywhere from a few minutes to an hour to complete, meaning actions taken early in the game may not pay off for a long time. Finally, the map is only partially observed, meaning agents must use a combination of memory and planning to succeed.
In 2017, the Alphabet division and Blizzard Entertainment released the StarCraft II Learning Environment (SC2LE). It includes a machine learning API that gives researchers and developers hooks into the game, as well as half a million anonymized game replays and other research materials.
The replay dataset is useful for training and aids research into sequence prediction and long-term memory, while the game’s popularity gives AI agents a large pool of human talent to compete against.
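The hooks that an SC2LE-style API exposes boil down to an observe-and-act loop: the environment hands the agent an observation each step, and the agent replies with an action. The real interface is DeepMind’s open-source pysc2 package (which requires StarCraft II to be installed), so the sketch below uses a self-contained stub environment and scripted agent; `StubEnv`, `ScriptedAgent`, and the observation contents are illustrative stand-ins, not the actual pysc2 API.

```python
# Illustrative sketch of the observe->act loop an SC2LE-style API exposes.
# StubEnv stands in for a real game environment; a real agent would pick
# from StarCraft's large action set instead of always returning "no_op".
from dataclasses import dataclass


@dataclass
class TimeStep:
    observation: dict   # e.g. screen/minimap feature layers in the real API
    reward: float
    last: bool          # True when the episode ends


class StubEnv:
    """Minimal stand-in environment with a fixed episode length."""

    def __init__(self, episode_length=5):
        self.episode_length = episode_length
        self._t = 0

    def reset(self):
        self._t = 0
        return TimeStep({"minerals": 0}, 0.0, False)

    def step(self, action):
        self._t += 1
        obs = {"minerals": self._t * 5}   # pretend each step gathers minerals
        return TimeStep(obs, 1.0, self._t >= self.episode_length)


class ScriptedAgent:
    def act(self, timestep):
        return "no_op"


def run_episode(env, agent):
    """Drive the loop: observe, act, accumulate reward until the episode ends."""
    timestep = env.reset()
    total_reward = 0.0
    while not timestep.last:
        timestep = env.step(agent.act(timestep))
        total_reward += timestep.reward
    return total_reward


print(run_episode(StubEnv(), ScriptedAgent()))  # 5.0
```

Replay data slots into the same shape: instead of acting live, a learner consumes recorded (observation, action) sequences, which is why the dataset is useful for sequence-prediction research.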
Compared to simple games that only have up/down/left/right actions, StarCraft offers more than 300 basic actions. Early research showed AI succeeding at mini-games, like moving the camera, collecting mineral shards, or selecting units. However, agents have historically not been able to “win a single game against even the easiest built-in AI.”
As DeepMind explains: “The release also contains a series of ‘mini-games’ – an established technique for breaking down the game into manageable chunks that can be used to test agents on specific tasks, such as moving the camera, collecting mineral shards or selecting units. We hope that researchers can test their techniques on these as well as propose new mini-games for other researchers to compete and evaluate on.”
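The gap between a four-action game and StarCraft’s 300-plus basic actions is even wider than it first sounds, because each basic action is a function that also takes arguments, such as a screen coordinate or a “queued” flag. The sketch below is a simplified stand-in (the action names mirror pysc2’s style but the list, argument names, and screen size are illustrative assumptions) that counts how many concrete choices a tiny subset of actions expands into once those arguments are enumerated.

```python
# Illustrative comparison of action-space sizes. Each StarCraft-style
# action is an (action name, argument names) pair; enumerating coordinate
# and flag arguments shows how a handful of base actions explodes into
# tens of thousands of concrete choices. Names/sizes are stand-ins.
SIMPLE_GAME_ACTIONS = ["up", "down", "left", "right"]

SC2_ACTIONS = [
    ("no_op", []),
    ("select_point", ["select_type", "screen_xy"]),
    ("Move_screen", ["queued", "screen_xy"]),
    ("Build_SupplyDepot_screen", ["queued", "screen_xy"]),
]


def concrete_action_count(actions, screen=(84, 84)):
    """Count concrete choices once each argument is enumerated."""
    total = 0
    for _, args in actions:
        combos = 1
        for arg in args:
            if arg == "screen_xy":
                combos *= screen[0] * screen[1]   # every pixel is a target
            elif arg == "queued":
                combos *= 2                       # act now or queue it
            elif arg == "select_type":
                combos *= 4                       # e.g. select/toggle/add
        total += combos
    return total


print(len(SIMPLE_GAME_ACTIONS))           # 4
print(concrete_action_count(SC2_ACTIONS)) # 56449
```

Just four of the 300-plus base actions, on a modest 84×84 screen, already yield over 56,000 concrete choices per step, which is why flat action enumeration does not scale here.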
DeepMind has been hard at work training its AI (or agent) to better understand StarCraft II. Once the agent grasped the basic rules of the game, it began exhibiting amusing behavior, such as immediately worker rushing its opponent, a tactic that actually succeeded 50% of the time against the standard StarCraft II AI on ‘Insane’ difficulty!
After being fed replays from real players, the agent began executing standard macro-focused strategies, as well as defending against aggressive tactics such as cannon rushes.
Update: Both streams are now live, kicking off from DeepMind’s headquarters in London.