Google Test Of AI's Killer Instinct Shows We Should Be Very Careful

If climate change, nuclear weapons or Donald Trump don’t kill us first, there’s always artificial intelligence just waiting in the wings. It’s long been a worry that once AI gains a certain level of autonomy, it will see no use for humans or may even perceive them as a threat. A new study by Google’s DeepMind lab may or may not ease those fears.

The researchers at DeepMind have been working with two games to test whether neural networks are more likely to learn to compete or to cooperate. They hope this research could lead to AI that is better at working with other AI in situations that involve imperfect information.

In the first game, two AI agents (red and blue) were tasked with gathering the most apples (green) in a rudimentary 2D graphical environment. Each agent had the option of “tagging” the other with a laser blast that would temporarily remove them from the game.

The game was run thousands of times, and the researchers found that red and blue were content to just gather apples when they were abundant. But as the little green dots became more scarce, the dueling agents were more likely to light each other up with some ray gun blasts to get ahead. This video doesn’t really teach us much, but it’s cool to look at:

Using a smaller network, the researchers found a greater likelihood of co-existence. But with a larger, more complex network, the AI was quicker to start sabotaging the other player and hoard the apples for itself.
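To make that dynamic concrete, here’s a rough Python sketch of the Gathering game’s incentive structure. To be clear, this isn’t DeepMind’s code: the class, the respawn rate, and the tag timeout are made-up stand-ins for illustration.

```python
import random

# A toy sketch of the Gathering game's incentives, not DeepMind's actual
# environment. The respawn rate, tag timeout, and all names are assumptions.

APPLE_RESPAWN_RATE = 0.1  # lower values make apples scarcer (assumed knob)
TAG_TIMEOUT = 25          # steps a tagged agent sits out (assumed value)

class GatheringSketch:
    def __init__(self, n_apples):
        self.apples = n_apples
        self.timeout = {"red": 0, "blue": 0}
        self.score = {"red": 0, "blue": 0}

    def step(self, actions):
        """actions maps each agent ("red"/"blue") to "gather" or "tag"."""
        for agent, action in actions.items():
            if self.timeout[agent] > 0:
                self.timeout[agent] -= 1  # tagged agents sit out, earning nothing
            elif action == "gather" and self.apples > 0:
                self.apples -= 1
                self.score[agent] += 1    # one point per apple collected
            elif action == "tag":
                # Tagging earns no points; it only sidelines the rival.
                rival = "blue" if agent == "red" else "red"
                self.timeout[rival] = TAG_TIMEOUT
        if random.random() < APPLE_RESPAWN_RATE:
            self.apples += 1              # apples trickle back in over time

game = GatheringSketch(n_apples=10)
game.step({"red": "gather", "blue": "tag"})
print(game.score, game.timeout)  # red scored an apple but is now frozen
```

The detail that matters is that tagging earns no points on its own. It only pays off indirectly, by freeing up scarce apples for the shooter, which is exactly why the aggression showed up once the apples started running low.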

In the second, more optimistic game, called Wolfpack, the agents were tasked with playing “wolves” attempting to capture “prey.” Greater rewards were offered when the wolves were in close proximity to one another during a successful capture. This incentivized the agents to work together rather than heading off to the other side of the screen to pull a lone-wolf attack on the prey. The larger network was much quicker to understand that in this situation, cooperation was the optimal way to complete the task.
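That cooperative incentive is simple enough to sketch in a few lines. Again, this is a hypothetical toy, not DeepMind’s implementation; the capture radius and the payout rule are assumptions for illustration.

```python
import math

# A toy sketch of Wolfpack's proximity bonus, not DeepMind's code.
# The capture radius and payout rule are assumptions for illustration.

CAPTURE_RADIUS = 3.0  # how close a wolf must be to share in the capture

def wolfpack_reward(prey_pos, wolf_positions):
    """Pay every wolf near the prey at capture time, scaled by pack size.

    Because the payout grows with the number of nearby wolves, converging
    on the prey together beats a lone-wolf attack.
    """
    nearby = [w for w in wolf_positions
              if math.dist(w, prey_pos) <= CAPTURE_RADIUS]
    reward_each = len(nearby)  # bigger pack at the capture, bigger reward each
    return {w: reward_each for w in nearby}

# Two wolves close to the prey each earn 2; the straggler earns nothing.
print(wolfpack_reward((0, 0), [(1, 1), (2, 0), (50, 50)]))
```

With a shared, proximity-scaled payout like this, no wolf can do better alone than it can as part of a pack, so the selfish strategy and the cooperative strategy end up being the same thing.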

While all of that might seem obvious, this is vital research for the future of AI. More and more complex scenarios will be needed to understand how neural networks learn based on incentives as well as how they react when they’re missing information.

The most practical short-term application of the research is to “be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation.”

For now, DeepMind’s research is focused on games with strict rules like the ones above and Go, the strategy game at which it famously beat the world’s top champion. But it has recently partnered with Blizzard to start learning StarCraft II, a more complex game in which reading an opponent’s motivations can be quite tricky. Joel Leibo, the lead author of the paper, tells Bloomberg, “Going forward it would be interesting to equip agents with the ability to reason about other agent’s beliefs and goals.”

Let’s just be glad the DeepMind team is taking things very slowly, methodically learning what does and does not motivate AI to start blasting everyone around it.

[DeepMind Blog via Bloomberg]