Researchers at the University of Washington's Graphics and Imaging Laboratory, which wrote much of the code behind the original Photosynth, have devised new algorithms that scale the photo-cloud-to-3D-model concept way, way up:
"The key difference is that Photosynth was aimed at a single monument or landmark, which meant it was scaled to a couple hundred or a thousand photographs, after which it became too slow. We can now process truly huge data sets; the big breakthrough here was being able to match the images fast."
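To see why matching speed is the bottleneck, consider that a naive matcher compares every photo against every other photo, so the number of comparisons grows quadratically with the size of the collection. The figures below are illustrative, not the UW team's actual numbers or code:

```python
# Illustrative sketch: why brute-force pairwise image matching
# becomes intractable as a photo collection grows.
def naive_pair_count(n_images: int) -> int:
    """Number of image pairs a brute-force matcher must compare."""
    return n_images * (n_images - 1) // 2

# A single-landmark collection vs. a city-scale one (hypothetical sizes):
print(naive_pair_count(1_000))    # 499,500 pairs
print(naive_pair_count(150_000))  # 11,249,925,000 pairs
```

Going from a thousand photos to a hundred and fifty thousand multiplies the work by over twenty thousand, which is why a smarter matching strategy, rather than raw compute, is the breakthrough.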
To these lab-sheltered folks, fast means "about a day," in which time they were able to render all manner of scenes, from the Trevi Fountain and the Colosseum in Rome to the entire Old City of Dubrovnik, shown in the video above.
The best thing about this is that the U of W team doesn't have to worry about anything beyond its algorithms: once the software that can recognise and arrange these images is perfected, the team can slap together a 3D rendering of pretty much any location that Flickr users have taken a few thousand pictures of. [PopSci]