
Google DeepMind’s Latest AI Agent Learned to Play ‘Goat Simulator 3’


Goat Simulator 3 is a surreal video game in which players take domesticated ungulates on a series of implausible adventures, sometimes involving jetpacks.

That might seem an unlikely venue for the next big leap in artificial intelligence, but Google DeepMind today revealed an AI program capable of learning how to complete tasks in a number of games, including Goat Simulator 3.

Most impressively, when the program encounters a game for the first time, it can reliably perform tasks by adapting what it learned from playing other games. The program is called SIMA, for Scalable Instructable Multiworld Agent, and it builds upon recent AI advances that have seen large language models produce remarkably capable chatbots like ChatGPT.

“SIMA is greater than the sum of its parts,” says Frederic Besse, a research engineer at Google DeepMind who was involved with the project. “It is able to take advantage of the shared concepts in the game, to learn better skills and to learn to be better at carrying out instructions.”

Google DeepMind’s SIMA software tries its hand at Goat Simulator 3.

Courtesy of Google DeepMind

As Google, OpenAI, and others jostle to gain an edge in building on the recent generative AI boom, broadening the kinds of data that algorithms can learn from offers a route to more powerful capabilities.

DeepMind’s latest video game project hints at how AI systems like OpenAI’s ChatGPT and Google’s Gemini could soon do more than just chat and generate images or video, by taking control of computers and performing complex commands. That’s a dream being chased by both independent AI enthusiasts and big companies including Google DeepMind, whose CEO, Demis Hassabis, recently told WIRED the company is “investing heavily in that direction.”

A New Way to Play

SIMA shows DeepMind putting a new twist on game-playing agents, an AI technology the company has pioneered in the past.

In 2013, before DeepMind was acquired by Google, the London-based startup showed how a technique called reinforcement learning, which involves training an algorithm with positive and negative feedback on its performance, could help computers play classic Atari video games. In 2016, as part of Google, DeepMind developed AlphaGo, a program that used the same approach to defeat a world champion of Go, an ancient board game that requires subtle and instinctive skill.
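The feedback-driven idea behind reinforcement learning can be sketched in a few lines of code. The toy Q-learning loop below is only an illustration of that principle, not DeepMind's Atari or AlphaGo systems; the states, rewards, and hyperparameters are invented for the example.

```python
import random

# Toy reinforcement learning: an agent on a five-cell line learns, from
# reward feedback alone, to walk right toward the goal in the last cell.
# All values here are invented for illustration.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left or step right
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def choose(state):
    # Explore occasionally (or when estimates are tied), otherwise exploit.
    if random.random() < epsilon or q[state][0] == q[state][1]:
        return random.randrange(2)
    return 0 if q[state][0] > q[state][1] else 1

for episode in range(200):
    state = 0
    while state != GOAL:
        a = choose(state)
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0   # positive feedback only at the goal
        # Nudge the value of (state, action) toward reward plus discounted future value.
        q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
        state = next_state

print([round(row[1] - row[0], 2) for row in q])  # learned preference for "right" in each state
```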

For the SIMA project, the Google DeepMind team collaborated with several game studios to collect keyboard and mouse data from humans playing 10 different games with 3D environments, including No Man’s Sky, Teardown, Hydroneer, and Satisfactory. DeepMind later added descriptive labels to that data to associate the clicks and keystrokes with the actions players took, for example, whether they were controlling a goat looking for its jetpack or a human character digging for gold.
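The announcement doesn't specify how those labeled recordings are stored, but a single training example plausibly pairs a gameplay clip and its raw inputs with the description added afterward. The sketch below is hypothetical; the field names, paths, and values are invented for illustration and are not DeepMind's actual data format.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for one labeled gameplay record: raw keyboard/mouse
# events recorded during play, plus a descriptive instruction added later.
@dataclass
class InputEvent:
    timestamp_ms: int   # when the input occurred, relative to the start of the clip
    device: str         # "keyboard" or "mouse"
    action: str         # e.g. "press:W", "click:left"

@dataclass
class GameplayRecord:
    game: str                                    # e.g. "Goat Simulator 3"
    video_path: str                              # recorded screen capture for the clip
    inputs: List[InputEvent] = field(default_factory=list)
    instruction: str = ""                        # descriptive label added after the fact

record = GameplayRecord(
    game="Goat Simulator 3",
    video_path="clips/goat_0042.mp4",
    inputs=[InputEvent(0, "keyboard", "press:W"),
            InputEvent(850, "mouse", "click:left")],
    instruction="find the jetpack and pick it up",
)
```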

