DeepMind's attempt to teach artificial intelligence to play soccer began with a virtual player writhing around on the floor, so it nailed at least one aspect of the game right from the start.

However, as WIRED reported, pinning down the mechanics of the beautiful game, from basics such as running and kicking to higher-order concepts like tackling and teamwork, proved a lot more challenging, as a new study from the Alphabet-owned AI firm shows.

The research might appear frivolous, but learning the fundamentals of soccer could someday help robots move around the world in more natural, human ways.

According to DeepMind research scientist Guy Lever, solving soccer involves tackling many of the open problems on the path to artificial general intelligence, or AGI.


Soccer balls (Photo: Bradley Kanaris/Getty Images). AI is taught to play soccer much as humans do; DeepMind demonstrates how this technique may help robots function like humans.

Humanoid Player

Describing the research, published in Science Robotics, Lever explained that it requires controlling a full humanoid body, coordination, which is certainly tough for AGI, and mastering both low-level motor control and long-term planning.

An AI needs to recreate everything human players do, even the things they don't have to consciously think about, like exactly how to move each muscle and limb to connect with a moving ball, making split-second decisions along the way.

The timing and control needed for even the most basic movements can, in fact, be surprisingly tough to nail down, as anyone who has ever played the browser game QWOP will remember.

Lever explained that humans do this without thinking about it, yet it is a really difficult problem for AI, and "we're not really sure exactly" how humans do it.

Simulated Humanoid Agents

The simulated humanoid agents were modelled on humans, with 56 points of articulation and a constrained range of motion, which means that they could not, for example, rotate a knee joint through impossible angles "à la Zlatan Ibrahimovic."

To begin with, the study authors gave the agents goals, such as kicking a ball or running, and let them try to figure out how to achieve them through trial and error and reinforcement learning, as was done in the past when researchers taught simulated humanoids to navigate obstacle courses, with humorous, quite unnatural results.

According to another research scientist, Nicholas Heess, also from DeepMind and a co-author of the paper with Lever, this approach did not work.
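
To get a sense of what that kind of trial-and-error learning looks like in practice, here is a minimal, purely illustrative Python sketch. The toy "push the ball to a target" task, the three-number linear policy, and the random-search update are invented for this example and are far simpler than anything in the study.

```python
# A minimal, purely illustrative sketch of trial-and-error learning.
# This is NOT DeepMind's training code; the toy task, the linear policy,
# and the random-search update are assumptions made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def rollout(policy, steps=50):
    """Run one episode: the agent pushes a ball toward a target at x = 1.0."""
    ball_pos, ball_vel = 0.0, 0.0
    total_reward = 0.0
    for _ in range(steps):
        obs = np.array([ball_pos, ball_vel, 1.0])        # position, velocity, bias
        force = float(np.clip(obs @ policy, -1.0, 1.0))  # linear policy picks a force
        ball_vel += 0.1 * force                          # crude physics update
        ball_pos += 0.1 * ball_vel
        total_reward -= abs(1.0 - ball_pos)              # reward: stay near the target
    return total_reward

# Trial and error: perturb the policy randomly, keep only changes that score better.
policy = np.zeros(3)
best = rollout(policy)
for trial in range(500):
    candidate = policy + 0.1 * rng.standard_normal(3)
    score = rollout(candidate)
    if score > best:
        policy, best = candidate, score

print("best return after 500 trials:", round(best, 3))
```

Real humanoid control replaces the three-number policy with a deep neural network and the one-line physics with a full-body simulator with dozens of joints, which is part of why naive trial and error struggles to get off the ground.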

A Technique for Moving Around the 'Virtual Football Pitch'

Because of the complexity of the problem, the wide range of choices available, and the agents' lack of prior knowledge about the task, they did not know where to start, hence the writhing and twitching.

Therefore, Heess and Lever, together with their colleagues, used neural probabilistic motor primitives, or NPMP, a teaching technique that nudged the AI models towards more human-like movement patterns, in the expectation that this underlying knowledge would help solve the problem of how to move around the so-called "virtual football pitch."
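
In broad strokes, an NPMP-style setup pre-trains a low-level module on human-like motion and then freezes it, so the soccer-playing policy only has to choose a small "intent" vector rather than 56 separate joint commands. The Python sketch below is a conceptual illustration under those assumptions; the dimensions, random weights, and function names are invented and are not the paper's actual architecture.

```python
# Conceptual sketch of the NPMP idea: a frozen low-level "motor primitive"
# module turns a compact latent intent into joint commands, so the
# soccer-playing policy only has to pick intents. Shapes, weights, and
# names are illustrative assumptions, not the study's real architecture.
import numpy as np

rng = np.random.default_rng(1)
N_JOINTS, STATE_DIM, LATENT_DIM = 56, 20, 8      # 56 articulation points, as in the study

# Pretend these weights were learned earlier from human-like motion data
# and are now frozen; the high-level policy cannot change them.
W_low = rng.standard_normal((N_JOINTS, STATE_DIM + LATENT_DIM)) * 0.1

def motor_primitives(body_state, intent):
    """Frozen low-level controller: (body state, latent intent) -> joint commands."""
    inputs = np.concatenate([body_state, intent])
    return np.tanh(W_low @ inputs)               # bounded command per joint

# Trainable high-level policy: game observations -> latent intent only.
W_high = rng.standard_normal((LATENT_DIM, STATE_DIM)) * 0.1

def soccer_policy(game_obs):
    """High-level policy chooses an 8-dimensional intent, not 56 joint angles."""
    return np.tanh(W_high @ game_obs)

# One control step: the policy picks an intent, the frozen primitives
# translate it into human-like joint commands.
game_obs = rng.standard_normal(STATE_DIM)
intent = soccer_policy(game_obs)
joint_commands = motor_primitives(game_obs, intent)
print(joint_commands.shape)                      # (56,) -> one command per joint
```

Because the frozen primitives already encode plausible human movement, learning on top of them no longer has to rediscover how to stand up before it can learn to kick.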

A report about the humanoids sent to soccer camp can be seen in a video on New Scientist's YouTube channel.

