After Millions Of Trials, These Simulated Humans Learned To Do Perfect Backflips And Cartwheels

Using well-established machine learning techniques, researchers from the University of California, Berkeley have taught simulated humanoids to perform more than 25 natural motions, from somersaults and cartwheels through to high leg kicks and breakdancing. The technique could lead to more realistic video game animation and more agile robots.

A simulated humanoid pulling off a perfect backflip after a month of simulated training. GIF: Berkeley Artificial Intelligence Research

Computer animation has never been better, but there’s still plenty of room for improvement. If we’re ever going to venture through to the other side of the uncanny valley – that place where viewers can’t tell the difference between what’s simulated and what’s real – it will be because we’ve finally imbued our virtual characters with natural appearances and movement.

To that end, UC Berkeley graduate student Xue Bin “Jason” Peng, along with his colleagues, has combined two techniques – motion-capture technology and deep reinforcement learning – to create something new: a system that teaches simulated humanoids how to perform complex physical tasks in a highly realistic manner. Learning from scratch, and with limited human intervention, the digital characters learned how to kick, jump and flip their way to success. What’s more, they even learned how to interact with objects in their environment, such as barriers placed in their way or objects hurled directly at them.

Bot performing a variety of highly dynamic and acrobatic skills. GIF: Berkeley Artificial Intelligence Research

Normally, computer animators have to manually create custom controllers for every skill or task. These controllers are fairly granular, covering discrete skills such as walking, running or tumbling – whatever the character needs to do. The motions created this way look decent, but each controller has to be crafted individually by hand. Another approach is to rely exclusively on reinforcement learning methods, such as generative adversarial imitation learning (GAIL). This is impressive in that simulated humanoids learn how to do things from scratch, but it often produces bizarre, unpredictable and highly unnatural results.

The new system, dubbed DeepMimic, works a bit differently. Instead of pushing the simulated character towards a specific end goal, such as walking, DeepMimic uses motion-capture clips to “show” the AI what the end goal is supposed to look like. In experiments, Peng’s team took motion-capture data covering more than 25 different physical skills, from running and throwing to jumping and backflips, to “define the desired style and appearance” of each skill, as Peng explained on the Berkeley Artificial Intelligence Research (BAIR) blog.
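
In practice, “showing” the AI works through the reward signal: at every instant, the character is scored on how closely its pose matches the corresponding frame of the mocap clip. Here’s a minimal sketch of that idea in Python – the joint values and weighting are invented for illustration, and the real DeepMimic reward also tracks joint velocities, end-effector positions and the centre of mass, not just the pose:

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, weight=2.0):
    """Toy pose-imitation reward: the closer the simulated character's
    joint angles are to the reference mocap frame, the higher the score.
    (Illustrative only; DeepMimic's full reward has several more terms.)"""
    pose_error = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    # Exponential shaping keeps the reward in (0, 1]
    return np.exp(-weight * pose_error)

# A three-joint character almost matching the mocap frame scores near 1.0
print(imitation_reward([0.10, -0.42, 1.05], [0.12, -0.40, 1.00]))  # ~0.99
```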

Results didn’t happen overnight. The virtual characters tripped, stumbled and fell flat on their faces repeatedly until they finally got the movements right. Each skill took about a month of simulated “practice” to develop, as the humanoids went through literally millions of trials trying to nail the perfect backflip or flying leg kick. But with each failure came an adjustment that brought them closer to the desired goal.
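
To get a feel for that try-fail-adjust cycle, here’s a self-contained toy in Python. It uses naive random search over a made-up vector of joint targets rather than the neural-network policy and proximal policy optimisation the paper actually uses, but the loop has the same shape: attempt the skill, score the attempt against the reference, and keep only the adjustments that help:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up four-joint reference pose standing in for a mocap frame
reference = np.array([0.3, -0.8, 1.2, 0.5])
policy = np.zeros_like(reference)   # the character starts knowing nothing
best_reward = -np.inf

for trial in range(100_000):        # many cheap attempts, most of them failures
    candidate = policy + rng.normal(scale=0.05, size=policy.shape)
    reward = -np.sum((candidate - reference) ** 2)  # closer to mocap = better
    if reward > best_reward:        # keep only the adjustments that help
        policy, best_reward = candidate, reward

print(np.round(policy, 2))          # converges toward the reference pose
```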

Bots trained across a wide variety of skills. GIF: Berkeley Artificial Intelligence Research

Using this technique, the researchers were able to produce agents that behaved in a highly realistic, natural manner. Impressively, the bots were also able to handle never-before-seen conditions, such as challenging terrain or obstacles. This robustness came free with the reinforcement learning – it wasn’t something the researchers had to engineer specifically.

“We present a conceptually simple [reinforcement learning] framework that enables simulated characters to learn highly dynamic and acrobatic skills from reference motion clips, which can be provided in the form of mocap data [that is, motion capture] recorded from human subjects,” writes Peng. “Given a single demonstration of a skill, such as a spin-kick or a backflip, our character is able to learn a robust policy to imitate the skill in simulation. Our policies produce motions that are nearly indistinguishable from mocap.” The upshot, he adds: “We’re moving toward a virtual stuntman.”

Simulated dragon. GIF: Berkeley Artificial Intelligence Research

Not to be outdone, the researchers also used DeepMimic to create realistic movements for simulated lions, dinosaurs and mythical beasts. They even created a virtual version of Atlas, the humanoid robot voted most likely to destroy humanity. The platform could conceivably be used not only to produce more realistic computer animation, but also for virtual testing of robots.

This work is set to be presented at the 2018 SIGGRAPH conference in August. A preprint of the paper has been posted to the arXiv server.

