WASHINGTON • Earlier this year, an artificial intelligence built by researchers beat a top human professional at the dazzlingly complex board game Go. Not just once, but four times. It was a milestone in machine learning.
Now, the same Google-backed researchers who designed AlphaGo have their sights set on dominating a new game: StarCraft, the classic computer strategy game that has attracted millions of fans, some of whom duel online in professional tournaments hosted by real-life sports leagues.
Researchers from US-based DeepMind want to train a bot that can play StarCraft II in real time, making decisions about which military units to send on scouting missions, how to allocate resources and, ultimately, how to conquer other players.
Starting next year, the game will serve as a research platform for any AI researcher who wants to use it, potentially allowing myriad player algorithms to train off the same game. And joining the effort is the game's publisher, Blizzard, which is working with DeepMind to set up the platform.
StarCraft presents an entirely different challenge from Go. Whereas players of the ancient board game take turns putting down stones to control physical territory, StarCraft players have to manage a constantly shifting digital economy to achieve victory.
They have to mine minerals and gases, build defensive structures and offensive troops, survey the terrain and, finally, close with and engage the enemy.
The best players have to know not only what is going on at their home base but also what may be happening in distant corners of the battlefield. Efficiency of motion is key; commentators talk of "actions per minute" as a way of measuring a human player's productive capacity.
"StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world," DeepMind wrote in a blog post last Friday.
"The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks."