Computers that can beat chess grandmasters? Ho-hum. The new arena where artificial intelligence and humans face off is StarCraft.
A squadron of tanks sits patiently on a bridge. Smaller reconnaissance vehicles inch nervously ahead, probing for signs of the enemy. Suddenly, two allied spaceships zoom overhead, illuminating a horde of hidden alien spider-robots. Their cover blown, the aliens attack. The battlefield erupts into chaos.
Called StarCraft, this space-war strategy game is played in real time. It's normally played by humans, but this particular match is different. The commanders in charge of each side are sophisticated artificially intelligent "bots" competing in the first ever StarCraft AI tournament, the finals of which were held earlier this month at Stanford University in California. The game is emerging as the next arena to put machine intelligence to the test - and could even provide the inspiration for the next big advance in AI.
Games and AI have a history. As far back as the 1950s, computers were programmed to play chess. It wasn't until the late 1980s, however, that they started beating human grandmasters. Since then, other games, such as poker, go, and even the quiz game Jeopardy, have attracted the interest of AI researchers.
"Chess is hard because you need to look very far into the future. Poker's hard because it's a game of imperfect information. Other games are hard because you have to make decisions very quickly. StarCraft is hard in all of these ways," explains Dan Klein, an AI researcher at the University of California, Berkeley, and adviser to one of the tournament teams.
The allure of StarCraft for AI researchers lies in the game's extreme complexity. Players compete to harvest resources, build an army, and battle each other in realms filled with bottlenecks, alleys and strategic high ground. Armies can be as large as 200 independently controlled units, each with different strengths, weaknesses and special abilities, such as invisibility cloaking, flying or teleportation. Unlike chess, units aren't confined to squares, but rather are in constant motion - a couple of seconds' distraction can be the difference between victory and defeat.
"An AI bot has to interact, reason about multiple goals concurrently, act in real time, deal with imperfect information - a lot of the properties of building robust intelligence are there," says tournament organiser Ben Weber, a graduate student at the Expressive Intelligence Studio at the University of California, Santa Cruz.
What's more, while chess AIs traditionally use software that searches for all the permutations of moves and counter-moves, it is infeasible to write such a program for a game as expansive as StarCraft, says David Burkett, a member of a team entered by Berkeley.
One reason is that players don't take turns: military units are constantly being built, moving, scouting for advantageous positions and, of course, fighting. And players generally cannot see what the enemy is up to until the fighting begins.
The 28 competitors in the AI tournament coped with this complexity in a variety of ways. The most basic is scripting, where a programmer writes a set script for the bot to follow, independent of what is happening in the game. Weber describes this approach as "rock, paper, scissors", in that the bot may win if it happens to be executing the right script for what the opponent is doing, but if not, it cannot adapt and react.
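A scripted bot of the kind Weber describes can be sketched in a few lines of Python. This is an illustrative toy, not code from any tournament entry; the order names are invented. The key point is that the script never reads the game state, so it cannot react to what the opponent does.

```python
# A scripted bot replays a fixed sequence of orders,
# blind to whatever is happening in the game.
BUILD_SCRIPT = ["harvest", "harvest", "build_barracks", "train_marine", "attack"]

def scripted_bot():
    for order in BUILD_SCRIPT:
        # No game state is consulted here - the bot just issues
        # the next canned order, win or lose.
        yield order

bot = scripted_bot()
print([next(bot) for _ in range(3)])  # ['harvest', 'harvest', 'build_barracks']
```

If the opponent happens to be doing something this script beats, the bot wins; otherwise it marches on regardless, which is exactly the "rock, paper, scissors" weakness Weber describes.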
A more sophisticated approach is the finite state machine (FSM), a technique that designers of video game AI have long used to give the illusion of intelligence. In this approach, a bot has discrete behaviours from which it can choose, depending on the inputs given to it. The ghosts in Pac-Man are a classic example, toggling between "chase" and "evade", depending on whether or not the eponymous yellow gobbler has eaten a power pill. In StarCraft, FSMs can be used both to control individual unit tactics on the battlefield, and at higher strategic levels of deciding which units to produce and when.
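The Pac-Man example above can be sketched as a minimal finite state machine in Python. The class and method names are illustrative, not taken from any real game's code; positions are simplified to one dimension to keep the sketch short.

```python
class GhostFSM:
    """Toy FSM for a Pac-Man-style ghost with two behaviours."""

    def __init__(self):
        self.state = "chase"

    def update(self, power_pill_active):
        # The transition rule is hand-written by a designer - the
        # limitation Klein points out: the bot only handles
        # situations someone explicitly anticipated.
        self.state = "evade" if power_pill_active else "chase"

    def act(self, player_pos, ghost_pos):
        # Step toward the player when chasing, away when evading
        # (1D positions for simplicity).
        dx = player_pos - ghost_pos
        step = (dx > 0) - (dx < 0)  # sign of dx
        return step if self.state == "chase" else -step

ghost = GhostFSM()
ghost.update(power_pill_active=False)
print(ghost.state, ghost.act(player_pos=10, ghost_pos=4))  # chase 1
ghost.update(power_pill_active=True)
print(ghost.state, ghost.act(player_pos=10, ghost_pos=4))  # evade -1
```

A StarCraft bot would use the same pattern with many more states - for an individual unit ("attack", "retreat", "scout") or at the strategic level ("expand", "build army", "all-in").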
FSMs are limited, says Klein, in that a human usually needs to define how and when to transition between behaviours, meaning the bot can fail if it encounters a situation that it wasn't explicitly programmed to handle.
A third approach relies on machine learning. Bots are trained on thousands of hours of game replays to find which strategies and tactics are statistically most likely to be successful, given the current game conditions. This approach can be combined with learning from trial and error, much as a human player might train. The bot learns from its mistakes and from the mistakes of others. Most competitors relied on a mixture of techniques.
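In its simplest form, learning from replays amounts to tallying how often each strategy won in each game situation and picking the statistically strongest one. The sketch below assumes a toy replay format and invented strategy names; real systems condition on far richer state and use more sophisticated models.

```python
from collections import defaultdict

# Hypothetical replay data: (game situation, strategy chosen, won?).
replays = [
    ("early_game", "rush", True),
    ("early_game", "rush", True),
    ("early_game", "economy", False),
    ("late_game", "rush", False),
    ("late_game", "economy", True),
]

# Tally games played and games won for each (situation, strategy) pair.
wins = defaultdict(int)
games = defaultdict(int)
for situation, strategy, won in replays:
    games[(situation, strategy)] += 1
    wins[(situation, strategy)] += won

def best_strategy(situation, strategies=("rush", "economy")):
    # Choose the strategy with the highest observed win rate
    # in this situation (unseen pairs count as a 0.0 win rate).
    def win_rate(s):
        n = games[(situation, s)]
        return wins[(situation, s)] / n if n else 0.0
    return max(strategies, key=win_rate)

print(best_strategy("early_game"))  # rush
print(best_strategy("late_game"))   # economy
```

Trial-and-error learning extends the same idea: the bot appends its own games to the tally, so a strategy that keeps losing gradually drops out of favour.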
The tournament itself was broken up into four categories, designed to make the complexity of the game more manageable for the bots, which are still not as skilled as an expert human player. The first two categories pitted small fixed-size armies against one another on simple terrain. An FSM-based bot won both categories by choosing better attack formations than its opponents.
In the third category, bots had to harvest resources, select from a limited set of buildings and military units, and fight. But unlike the full game, they were allowed to see their opponents preparing. The winning bot used a mimicking strategy, copying its opponent's build order while throwing in a few scripted tricks to gain the upper hand.
The final category of the tournament pitted bots against each other in best-of-five rounds on different maps, with access to the full functionality of the game. The winner, the Berkeley team's "Overmind" bot, used a mix of FSMs, machine learning and a limited form of chess-style prediction to control swarms of flying units that constantly harassed the opponent.
Burkett says that tournaments like this can help advance the field of AI. Simple problems in StarCraft, like finding a path across a map, can be handled by traditional AI. But solving many problems simultaneously and quickly will require new ideas.
"There are a lot of good AI research problems involved in getting this thing to work," says Burkett. His team plans to submit details of the approach employed with Overmind for publication in a journal.
For now, however, human players remain the champions of StarCraft. In an exhibition match at the tournament, Oriol Vinyals, a former world-class player and member of the Berkeley team, took on one of the top-ranking bots. After a brief struggle, he easily defeated his AI opponent. He doubts this will always be the case.
"In two to three years, I would expect bots to be in the top 5 per cent of players," he says. "Beating the best human player doesn't seem out of the question."