Friday 10 February 2017

DeepMind is using games to test AI aggression and cooperation

As our ability to create AI grows, it's important that we assess how it behaves in different situations. DeepMind, Google's AI division in London, has been concerned with one aspect in particular: what happens when two or more AI agents have similar or conflicting goals. The team wanted a test similar to the "Prisoner's Dilemma," a classic game-theory scenario that pits two suspects against one another. Each is given a choice: testify against the other and you'll go free while they serve three years. If you both testify independently, however, you'll each serve two years in jail. And if you both stay silent, you each get off with a lighter sentence, but neither of you can be sure the other will keep quiet.
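
The dilemma is easiest to see as a payoff table. A rough sketch in Python, using the standard textbook sentences rather than figures from DeepMind's work:

    # Illustrative Prisoner's Dilemma payoffs (years in jail; lower is better).
    # These are the standard textbook values, not numbers from DeepMind.
    PAYOFFS = {
        # (A's move, B's move): (A's sentence, B's sentence)
        ("silent", "silent"):   (1, 1),  # both stay quiet: light sentences
        ("testify", "silent"):  (0, 3),  # A defects, B stays silent
        ("silent", "testify"):  (3, 0),  # A stays silent, B defects
        ("testify", "testify"): (2, 2),  # mutual defection: the trap
    }

    def outcome(a, b):
        """Return the jail terms for players A and B given their moves."""
        return PAYOFFS[(a, b)]

    # Whatever B does, A serves less time by testifying, so defection
    # dominates -- yet mutual defection (2, 2) is worse for both than
    # mutual silence (1, 1). That tension is the dilemma.
    print(outcome("testify", "testify"))  # (2, 2)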

It's a dilemma without a simple answer. To test its AI agents, DeepMind developed two new games, called Gathering and Wolfpack. In Gathering, two colored squares are tasked with picking up "apples" in the middle of the screen. Each can also fire a laser that, if it connects, temporarily removes the other agent from the game. How co-operative or combative would they be? Unsurprisingly, the pair was quite peaceful at the start, collecting apples at a steady pace. As the fruit pile dwindled, however, they learned to become more aggressive and fire their lasers at each other.
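
DeepMind's actual environment is a 2D gridworld with a directional beam; the toy sketch below strips it to the essentials under assumed rules (a shared apple pool, and a zap that benches the rival for a fixed number of steps), just to show the shape of the game:

    # Toy sketch of a Gathering-style game. The real environment is a 2D
    # gridworld with aiming and movement; here only the incentive structure
    # survives. TIMEOUT is an assumed constant, not DeepMind's value.
    TIMEOUT = 5  # steps an agent sits out after being zapped

    class Gathering:
        def __init__(self, apples=20):
            self.apples = apples
            self.benched = [0, 0]          # remaining time-out per agent

        def step(self, actions):
            rewards = [0, 0]
            for i, act in enumerate(actions):
                if self.benched[i] > 0:    # zapped agents can't act
                    self.benched[i] -= 1
                    continue
                if act == "gather" and self.apples > 0:
                    self.apples -= 1
                    rewards[i] = 1
                elif act == "zap":         # assume the beam always lands
                    self.benched[1 - i] = TIMEOUT
            return rewards

    env = Gathering()
    print(env.step(["gather", "zap"]))  # [1, 0]: agent 0 scores, then is benched

Note the trade-off the game encodes: zapping earns no apples by itself, but it buys exclusive access to the dwindling pool while the rival sits out.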

As in the Prisoner's Dilemma, each agent has to decide whether to "defect" and attack the other player. Curiously, the agents became more hostile as their computational power was increased, and at higher capacities the rate of firing went up regardless of how many apples were left on screen. The reason is simple: aiming is complicated. It involves timing and tracking the other agent's movement. Ignoring the rival and simply seeking apples is an easier, but potentially slower, route to success. As a result, agents with lower cognitive capacity tended to fall back on this basic strategy.
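
One way to see the trade-off is to treat zapping as an action that only pays off if the agent can actually track its rival. A toy decision rule, where "capacity" is an assumed scalar standing in for the network size DeepMind varied (and the payoffs are illustrative), might look like this:

    # Toy decision rule: zapping only pays if the aim is likely to land.
    # "capacity" is an assumed 0..1 scalar; the paper varies the agents'
    # neural-network capacity, not an explicit knob like this.
    def hit_chance(capacity):
        # Assumption: better tracking of the rival's motion means more hits.
        return min(1.0, 0.1 + 0.8 * capacity)

    def expected_value(action, capacity, zap_gain=2.0, gather_gain=1.0):
        if action == "gather":
            return gather_gain                  # cheap, always works
        return hit_chance(capacity) * zap_gain  # aiming must succeed first

    def choose(capacity):
        return max(["gather", "zap"],
                   key=lambda a: expected_value(a, capacity))

    print(choose(0.2))  # low capacity: aiming rarely lands, so it gathers
    print(choose(0.9))  # high capacity: zapping's expected value wins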

Wolfpack is a little different. It challenges two agents to co-operate in hunting down a third, fleeing piece of AI. Blocky obstacles are littered around the screen, so the teammates have to figure out when to flank and corner their opponent. DeepMind noted that here, higher cognitive capacity led to more co-operation between the agents. That's in contrast to Gathering, which produced more conflict as the 'smarts' were increased. The differing results highlight the importance not just of intelligence, but of the game and its underlying ruleset. AI will behave differently depending on the task at hand and what's required to be victorious.
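
The co-operative pressure comes from the reward rule: wolves near the prey when it is caught share in the reward, so a joint capture pays each wolf more than a lone one. A minimal sketch of that rule, with the radius and bonus as assumed values:

    # Sketch of Wolfpack's capture reward. Every wolf within a capture
    # radius shares the payout when the prey is caught, so teaming up pays.
    # The radius and per-wolf bonus below are assumed, not DeepMind's values.
    CAPTURE_RADIUS = 2.0
    BASE_REWARD = 1.0
    PER_WOLF_BONUS = 1.0

    def capture_reward(wolf_positions, prey):
        """Reward paid to each wolf near the prey at the moment of capture."""
        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        nearby = [i for i, w in enumerate(wolf_positions)
                  if dist(w, prey) <= CAPTURE_RADIUS]
        reward = BASE_REWARD + PER_WOLF_BONUS * (len(nearby) - 1)
        return {i: reward for i in nearby}

    # Two wolves cornering the prey together earn more each than a lone catch.
    print(capture_reward([(0, 0), (1, 1)], prey=(0, 1)))  # {0: 2.0, 1: 2.0}
    print(capture_reward([(0, 0), (9, 9)], prey=(0, 1)))  # {0: 1.0}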

The findings are important as humanity releases more and more AI systems into the world. It's likely some will clash and try to either co-operate with or sabotage one another. What happens, for instance, if one AI is managing traffic flow across a city while another is trying to reduce carbon emissions across the state? The rules of the "game" that govern their behavior then become vital. Setting those parameters, and being mindful of the other agents in play, will be crucial if we're to balance the global economy, public health and climate change.

Source: DeepMind



Read the full article at Engadget
