In case you didn’t already feel like Google was a creepy stalker, its artificial intelligence tools are rapidly crossing over into the uncanny. The latest one is PlaNet, a new deep-learning machine that specialises in figuring out where a photo was taken — using nothing but the image’s pixels.
Today, MIT Tech Review reports on a new effort led by Tobias Weyand, a computer vision specialist at Google, to create a computer that sees a photo and can instantly figure out where in the world it’s from. The system was fed over 90 million geotagged images from across the planet, and trained to spot patterns based on location. Basically, it carves the world up into a grid of thousands of cells, then trains a neural network to guess which cell a photo’s pixels came from, drawing on everything it learned from that massive image bank.
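To make that concrete, here is a minimal sketch of the idea of geolocation-as-classification. The real PlaNet uses an adaptive grid (finer cells where photos are dense) and a deep convolutional network; this toy version just shows how a geotag becomes a classification target on a fixed grid. The function name and grid resolution are illustrative, not from Google's actual code.

```python
def cell_index(lat, lon, rows=180, cols=360):
    """Map a geotag to a grid-cell label (the classification target).

    Illustrative only: divides the globe into a fixed rows x cols grid,
    whereas the real system adapts cell sizes to photo density.
    """
    # Shift lat/lon into [0, 180) and [0, 360), then bucket into cells.
    row = min(int((lat + 90.0) / 180.0 * rows), rows - 1)
    col = min(int((lon + 180.0) / 360.0 * cols), cols - 1)
    return row * cols + col

# A training photo's label is simply the cell containing its geotag;
# at inference time, the network outputs a probability per cell.
paris_label = cell_index(48.8584, 2.2945)
```

Framing the problem as classification (rather than directly regressing latitude and longitude) lets the network express uncertainty by spreading probability across several plausible cells — a beach photo can score highly in many coastal cells at once.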
In a trial run using 2.3 million geotagged images, PlaNet determined the country of origin with 28.4 per cent accuracy and the continent of origin in 48 per cent of cases. Those figures might not sound so impressive, but as MIT Tech Review points out, PlaNet already performs quite a bit better than humans do, despite our squishy organic brains having a lifetime of ecological and cultural cues to draw on. And with more image training, PlaNet has the potential to get even better.
“We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-travelled human to distinguish,” Weyand told MIT Tech Review.
If you’re a photography buff who sometimes forgets to geotag your images, tools like PlaNet could one day become your best friend. Then again, if you were already worried about Google watching your every move, it might be time to start avoiding cameras entirely.
Image via Adam Bautz/Flickr