Nvidia’s New AI Lets You Know If Your Pet Would Look Cuter As A Different Animal

Gif: <a href="https://blogs.nvidia.com/blog/2019/10/27/ai-gans-pets-ganimals/">Nvidia</a>

Sure, you love your dog/cat/currently trendy pet animal, but is there a chance you might like it more if it were a different breed, or a different animal entirely? You can’t simply trade in your Golden Retriever for a weekend test drive with a Schnauzer, but with Nvidia’s new GANimals tool, you can at least see whether your beloved pet would look even cuter as another animal.

Earlier this year, Nvidia Research wowed the internet with its AI-powered GauGAN drawing tool, which took crude sketches that looked like they were created in a basic tool like MS Paint and turned them into near-photorealistic images. That tool required users to indicate which parts of an image were supposed to be water, trees, mountains, and other landmarks by choosing the appropriate brush colour, but GANimals is completely autonomous. You simply upload a photo of your pet, and it generates a series of other photorealistic images of animals that all appear to be sharing your best friend’s expression.

In a paper being shared at the International Conference on Computer Vision in Seoul, South Korea, this week, the researchers describe an algorithm they’ve developed called FUNIT, which stands for Few-shot, UNsupervised Image-to-image Translation. When using AI to translate the characteristics of a source image onto a target image, the artificial intelligence typically needs to be trained on a large collection of target images, with varying levels of light and camera angles, to produce results that genuinely look like the source and target have been properly merged. But putting together a large database of images like that is time-consuming, and it limits what the AI-powered translation network can do. If you’ve trained it to turn chickens into turkeys, that’s the only thing it will be good at.

By comparison, the FUNIT algorithm can be trained using just a few images of each target animal, practising on them repeatedly (in a manner of speaking) until it can generalise the translations needed to merge two images. Once sufficiently trained, the algorithm needs just a single image of the source and target animals, both of which can be completely new to it and never previously processed or analysed, to work its magic.
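The core trick described above is separating what an image *shows* (the pose and expression, taken from your pet) from what it *looks like* (the target species, averaged over a handful of example photos), then recombining the two. The sketch below is purely illustrative and is not Nvidia’s code: the real FUNIT network uses convolutional encoders and a learned decoder, while here tiny random linear maps stand in for each stage, operating on flattened 8×8 “images”, just to show the data flow. All names (`W_content`, `W_class`, `translate`, the code sizes) are hypothetical.

```python
import numpy as np

# Illustrative FUNIT-style data flow (NOT Nvidia's implementation):
# a "content" code comes from ONE source image, a "class" code is
# averaged over just a few target-animal images, and a decoder
# combines them into a new image.

rng = np.random.default_rng(0)
D = 8 * 8               # flattened toy image size
C_DIM, S_DIM = 16, 4    # hypothetical content / class code sizes

W_content = rng.normal(size=(C_DIM, D)) * 0.1        # stand-in content encoder
W_class = rng.normal(size=(S_DIM, D)) * 0.1          # stand-in class encoder
W_decode = rng.normal(size=(D, C_DIM + S_DIM)) * 0.1  # stand-in decoder

def translate(source_img, target_imgs):
    """Render the source's content in the target class's appearance."""
    content = W_content @ source_img  # pose/expression from the single source
    # Few-shot step: the class code is an average over a handful of examples.
    class_code = np.mean([W_class @ t for t in target_imgs], axis=0)
    return W_decode @ np.concatenate([content, class_code])

source = rng.normal(size=D)                        # e.g. a photo of your dog
targets = [rng.normal(size=D) for _ in range(3)]   # just three fox photos
out = translate(source, targets)
print(out.shape)  # same size as the input image: (64,)
```

The key point the sketch captures is that `translate` works with only a few `target_imgs`, whereas older translation networks baked a single source-to-target pairing into weeks of training on thousands of images.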

You can try out GANimals for yourself over at Nvidia’s AI Playground, but for the time being the results are low-res and not suitable for anything other than novelty purposes. The researchers hope to eventually improve the algorithm’s capabilities so that someday soon, face swaps could be accomplished without the need for giant databases of carefully curated images.