Deep Learning Is Making Video Game Characters Move Like Real People


As video games give players more freedom to explore complex digital worlds, it becomes more challenging for a CG character to move naturally and interact with everything in them. So to prevent those awkward transitions between pre-programmed movements, researchers have turned to AI and deep learning to make video game characters move almost as realistically as humans do.

To make video game characters walk, run, jump, and perform other movements as realistically as possible, developers often rely on motion capture: human performances that are recorded and translated onto digital characters.

Motion capture produces results that are faster and better looking than animating characters by hand, but it's impossible to plan for every possible way a character will interact with a digital world, according to the researchers. Game developers plan for as many possibilities as they can, but they ultimately have to rely on software to transition between animations of, say, a character walking up to a chair and then sitting down on it, and more often than not those segues feel stilted and unnatural, diminishing the player's experience.
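For context, the simplest way that transition software works is a straight crossfade: over a handful of frames, the outgoing clip's pose is averaged with the incoming clip's. Here's a rough sketch of that idea in Python (the clip lengths and joint counts are made up for illustration):

```python
import numpy as np

def crossfade(clip_a: np.ndarray, clip_b: np.ndarray, blend_frames: int) -> np.ndarray:
    """Linearly crossfade the tail of clip_a into the head of clip_b.

    Each clip is a (frames, joints, 3) array of joint positions; a real
    engine would blend joint rotations instead, but the idea is the same.
    """
    # Blend weights ramp from 1.0 (all clip_a) down to 0.0 (all clip_b).
    w = np.linspace(1.0, 0.0, blend_frames)[:, None, None]
    blended = w * clip_a[-blend_frames:] + (1.0 - w) * clip_b[:blend_frames]
    return np.concatenate([clip_a[:-blend_frames], blended, clip_b[blend_frames:]])

# Hypothetical clips: 60 frames of walking, 40 frames of sitting,
# each tracking 24 joints in 3D.
walk = np.random.rand(60, 24, 3)
sit = np.random.rand(40, 24, 3)
transition = crossfade(walk, sit, blend_frames=10)
print(transition.shape)  # (90, 24, 3)
```

Because a blend like this just averages poses with no understanding of what the body is actually doing, it's exactly the kind of naive transition that produces those stilted segues.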

Computer scientists from the University of Edinburgh and Adobe Research have come up with a novel solution they'll be presenting at the ACM SIGGRAPH Asia conference in Brisbane, Australia, next month. Like many breakthroughs before it, it leverages deep neural networks to smooth over the animation hiccups that video games currently exhibit.

To create the convincing but unsettling deepfake videos you'll find all over the internet now, a neural network is first trained by studying a given person's face (often a celebrity) from every possible angle and with every imaginable expression, using a database of tens of thousands of headshots of the subject. It's a time-consuming process, but with that knowledge, face swaps can be generated automatically that look impossibly lifelike.
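The classic open-source face-swap setup, for what it's worth, is a pair of autoencoders that share a single encoder: the encoder learns a face-agnostic representation of pose and expression, while each decoder learns to render one specific person. Here's a minimal PyTorch sketch of that shared-encoder idea (the layer sizes and image resolution are illustrative, not taken from any particular tool):

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """Shared encoder, one decoder per identity: the layout used by the
    classic open-source face-swap tools (sizes here are illustrative)."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # One encoder learns a pose/expression code common to both faces.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim), nn.ReLU(),
        )
        # Each decoder learns to render one person's face from that code.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    @staticmethod
    def _make_decoder(latent_dim: int) -> nn.Module:
        return nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64)),
        )

    def forward(self, x: torch.Tensor, identity: str) -> torch.Tensor:
        z = self.encoder(x)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(z)

# Training reconstructs each person through their own decoder; the swap
# happens at inference time by routing person A's frames through B's decoder.
model = FaceSwapAutoencoder()
frame_of_a = torch.rand(1, 3, 64, 64)
swapped = model(frame_of_a, identity="b")
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```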

A similar approach is being taken in this research, but instead of training a neural network on a database of faces, it studies a collection of motions captured and digitised from a live performer on a soundstage. For the best results, the system does require a fairly large database of motions to analyse, with a performer going through the motions of picking up objects, climbing over things, or plopping down in a chair. But that database doesn't have to cover every scenario: the neural network can take what it's learned and adapt it to almost any situation or environment while still producing natural-looking movements, according to the researchers. The system fills in the gaps between a character walking up to a chair, slowing down, turning their body, and then sitting, intelligently linking all of those movements and animations together to hide the seams.
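The researchers' exact architecture isn't spelled out here, but a simplified version of the general recipe looks like this: train a model on motion-capture frames to predict the character's next pose from its current pose and an interaction goal (say, "sit on that chair"), then roll the model out frame by frame at runtime. A hedged PyTorch sketch, with made-up dimensions and random stand-in data where a real mocap database would go:

```python
import torch
import torch.nn as nn

POSE_DIM = 72   # e.g. 24 joints x 3 rotation values (an assumption)
GOAL_DIM = 6    # e.g. target position + facing direction (an assumption)

class MotionPredictor(nn.Module):
    """Predicts the next frame's pose from the current pose and a goal."""

    def __init__(self, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + GOAL_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, POSE_DIM),
        )

    def forward(self, pose: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([pose, goal], dim=-1))

model = MotionPredictor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in for the mocap database: consecutive (pose, next_pose) pairs
# paired with the interaction goal the performer was heading toward.
poses = torch.randn(1024, POSE_DIM)
next_poses = torch.randn(1024, POSE_DIM)
goals = torch.randn(1024, GOAL_DIM)

for step in range(100):
    prediction = model(poses, goals)
    loss = loss_fn(prediction, next_poses)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# At runtime the model is rolled out one frame at a time, so a chair it has
# never seen still gets plausible in-between motion rather than a canned clip.
```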

There are other advantages to teaching games how characters should move and interact with objects instead of pre-animating those motions, such as reducing the file sizes of games and the amount of data that has to be processed and shared, something that will become even more relevant as game streaming grows more prevalent.

This approach also paves the way for more complex interactions between video game characters. How often do you see more than two characters fighting each other? Never, except in pre-animated cutscenes. The next version of Red Dead Redemption could finally include real barroom brawls.

