LinkedIn might be where you go to find jobs, but it’s also apparently where spies lurk. Experts are now saying they’ve found an instance where a spy used an AI-generated face to connect with targets, according to a report from the Associated Press.
If you’ve ever used LinkedIn, you know the website advises against connecting with anyone you don’t actually know. However, many users regularly ignore that advice in the hopes of making connections advantageous to their careers, a habit that experts say makes the site a prime hunting ground for spies. In one case, the AP uncovered an account belonging to someone going by the name Katie Jones.
Despite having only 52 connections, the profile was linked to some influential folks. It boasted a job at a well-known think tank and counted among its connections prominent pundits, government officials, and even Paul Winfree—a potential candidate for a Federal Reserve seat. Of course, the AP discovered that “Katie Jones” was a phantom profile.
Her place of employment at the Center for Strategic and International Studies in Washington was found to be bogus, as was her degree in Russian studies from the University of Michigan. The Katie Jones profile has since been deleted.
Another interesting tidbit: expert analysis of the profile picture found that it was almost certainly created by a generative adversarial network (GAN). A GAN pits two neural networks against each other: a generator that fabricates images and a discriminator that tries to tell the fakes from real photos. Trained together, the pair can produce hyperrealistic faces of people who don’t exist. You only have to look at sites like ThisPersonDoesNotExist.com to see how realistic GAN-generated faces can be, and why intelligence experts are concerned.
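For readers curious about the mechanics, the adversarial game can be sketched in a few dozen lines. This is a deliberately tiny, illustrative example, not any real face generator: the “images” are single numbers, the generator is a two-parameter affine map, and the discriminator is a logistic classifier. The point is only to show the tug-of-war—the discriminator learns to score real samples higher, which in turn pushes the generator toward data the discriminator rates as real.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: G(z) = a*z + b        turns random noise z into a "sample"
# Discriminator: D(x) = sigmoid(w*x + c)   estimates P(x is real)

def discriminator_grads(w, c, real, fake):
    """Gradient of  log D(real) + log(1 - D(fake))  w.r.t. (w, c)."""
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    return grad_w, grad_c

def generator_grads(w, c, a, b, z):
    """Gradient of the non-saturating objective  log D(G(z))  w.r.t. (a, b)."""
    fake = a * z + b
    upstream = (1 - sigmoid(w * fake + c)) * w   # d log D / dx at the fakes
    return np.mean(upstream * z), np.mean(upstream)

rng = np.random.default_rng(0)
w, c = 0.0, 0.0     # discriminator parameters
a, b = 1.0, 0.0     # generator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 0.5, size=64)    # "real" data: scalars near 4
    z = rng.normal(size=64)
    # Ascend the discriminator's objective...
    gw, gc = discriminator_grads(w, c, real, a * z + b)
    w, c = w + lr * gw, c + lr * gc
    # ...then ascend the generator's objective against the updated D.
    ga, gb = generator_grads(w, c, a, b, rng.normal(size=64))
    a, b = a + lr * ga, b + lr * gb

print(f"after training: G(z) = {a:.2f}*z + {b:.2f}")
```

A real face GAN plays the same game with deep convolutional networks and millions of photos instead of scalars; the loss and the alternating updates are the recognizable common core.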
That said, while this may be the first reported instance of a spy using a GAN-generated face to get close to a target, AI imposters are not new. Deepfake videos have been known to impersonate well-known figures, including Russian President Vladimir Putin and former President Barack Obama.
Most recently, a deepfake video of Facebook CEO Mark Zuckerberg went viral. It doesn’t take much imagination to extrapolate how this tech could be misused for the forces of evil.
On the other hand, in practice, the Katie Jones example is not that different from other methods of creating online imposters, like using stock photos or stealing profile pictures from real social media accounts. Yes, experts say a GAN was used in this instance, but given that the average person, and even influential ones, puts little effort into vetting every stranger who sends a connection request, an artificial photo seems like a lot of extra trouble. And with facial recognition, personal photo uploads, and search capabilities all advancing so fast, it’s conceivable that sometime soon we’ll all be more suspicious of a photo that doesn’t return a bunch of similar hits when we give it the old reverse-image search treatment.
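To give a flavor of how reverse-image tools surface those “similar hits,” here is a sketch of perceptual (average) hashing, one common technique behind near-duplicate image search. Everything here is a simplified assumption for illustration—synthetic numpy arrays stand in for photos, and no real search engine’s pipeline is being reproduced: the image is shrunk to an 8×8 grid, each cell is thresholded against the overall mean to yield a 64-bit fingerprint, and fingerprints are compared by Hamming distance.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """64-bit perceptual hash of a 2-D grayscale image (values in [0, 1]).

    Shrinks the image to hash_size x hash_size by block averaging, then
    sets one bit per cell: 1 if the cell is brighter than the mean.
    """
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    img = img[: bh * hash_size, : bw * hash_size]   # crop so blocks divide evenly
    small = img.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits; small distance = likely the same image."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(1)
photo = rng.random((64, 64))                 # stand-in for a profile photo
brightened = 0.2 + 0.7 * photo               # same photo, brightness shifted
unrelated = np.random.default_rng(2).random((64, 64))

print(hamming(average_hash(photo), average_hash(brightened)))   # 0: still a match
print(hamming(average_hash(photo), average_hash(unrelated)))    # large: no match
```

Because each bit is thresholded against the image’s own mean, simple brightness or contrast shifts leave the hash unchanged—which is why a stolen-and-recompressed profile picture still shows up in a reverse search, while a freshly generated GAN face matches nothing at all.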
So again, basic security hygiene is key here. Even if it seems harmless, maybe don’t accept LinkedIn connections from people you don’t actually know.