A New Method Of Spotting Deepfake Videos Looks For The Subtle Movements We Don’t Realise We Make

The quality and speed at which videos can now be faked using neural networks and deep learning promise to make the upcoming US presidential election even more of a nightmare. But by exploiting something current deepfake techniques overlook, researchers have found an automated way to spot fake videos.

Deepfake videos are far from perfect right now. Created from giant libraries of images scraped from the internet, they’re often generated at low resolutions (which helps hide imperfections) and appear overly compressed.

But the technology is improving at a startling rate, and flaws in the process, such as deepfake videos that were easy to spot because the subjects never blinked, are quickly being ironed out, making the results more and more believable.

Related: Most Deepfake Videos Have One Glaring Flaw (https://gizmodo.com.au/2018/06/most-deepfake-videos-have-one-glaring-flaw/)

It’s an arms race that neither side is going to stand down from any time soon, but researchers from UC Berkeley and the University of Southern California believe they’ve developed the next weapon for battling, or at least accurately identifying, faked videos.

Using a similar process to how deepfakes are created — by studying existing footage of the current crop of presidential hopefuls — they trained an AI to look for the presence of each person’s “soft biometric” signature.

It sounds complicated, but when speaking we all move our bodies, heads, hands, eyes and even lips in subtle but unique ways. It’s all done subconsciously: you don’t realise your body is doing it, nor do you consciously notice it in other people. As a result, it’s a detail that current deepfake techniques don’t take into account when creating a fake.
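To make that idea concrete, here is a minimal sketch of how such a "soft biometric" check could work, assuming you already have per-frame head pose and facial movement measurements from a face-tracking tool. The feature names, the correlation-based signature and the one-class SVM classifier below are illustrative assumptions, not the researchers' exact pipeline.

```python
# Illustrative sketch: flag clips whose movement "signature" doesn't match
# a person's known behaviour when speaking. Assumes per-frame features
# (head pose, facial action units) are available as a DataFrame; the
# column names are hypothetical.

import numpy as np
import pandas as pd
from itertools import combinations
from sklearn.svm import OneClassSVM

FEATURES = ["head_pitch", "head_yaw", "head_roll",
            "au_brow_raise", "au_lip_corner", "au_mouth_open"]

def clip_signature(frames: pd.DataFrame) -> np.ndarray:
    """Summarise a short clip as pairwise correlations between movement
    features: a compact 'how this person moves while talking' vector."""
    sig = [np.corrcoef(frames[a], frames[b])[0, 1]
           for a, b in combinations(FEATURES, 2)]
    return np.nan_to_num(np.array(sig))

def train_person_model(real_clips: list[pd.DataFrame]) -> OneClassSVM:
    """Fit a one-class model on signatures from genuine footage of one person."""
    X = np.stack([clip_signature(c) for c in real_clips])
    return OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X)

def looks_fake(model: OneClassSVM, clip: pd.DataFrame) -> bool:
    """A clip whose signature falls outside the learned behaviour is suspect."""
    return model.predict(clip_signature(clip).reshape(1, -1))[0] == -1
```

The key design point is that the model is trained only on genuine footage of a specific person, so a convincing face-swap that copies the target's appearance but not their habitual movements should still fall outside the learned signature.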

In testing, the new AI accurately spotted deepfakes at least 92 per cent of the time, including videos created using several different techniques and videos whose image quality had been degraded by heavy compression. The researchers plan to further improve the AI’s success rate by also taking into account the unique cadence and characteristics of a person’s voice.

But the reality is that deepfake techniques are evolving and improving at such a rate that they’ll probably compensate and be able to fool this AI before 2020 even arrives. This research represents a battle won, but the war for truth online will still rage on.

