Before Gizmodo, I worked in the bowels of the broadcast industry for a number of years. I was either shooting video or cutting video every day, all day. And while I used Final Cut Pro and Adobe After Effects with some proficiency on a daily basis, I've never seen a post-production demo as incredible as this clip from the University of Washington.
Essentially, you shoot some crappy, low-res video of a still scene. You then reshoot the same scene with a higher-resolution digital still camera. Software can automagically combine these images to upconvert the video AND fix problems in the image, all while compensating for 3D space. Make sense? The remarkable demo will clarify things a bit:
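To get a feel for the basic idea, here's a toy sketch (not the researchers' actual pipeline) of the simplest possible version: upsample a low-res video frame, then blend in detail from a high-res photo. The function names, the blending weight `alpha`, and the assumption that the photo is already perfectly aligned to the frame are all mine; the real system does the hard work of registering the photo to each frame in 3D.

```python
import numpy as np

def upsample_nearest(frame, factor):
    """Nearest-neighbor upsampling of a 2D (grayscale) frame by an
    integer factor -- crude, but enough to illustrate the idea."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def enhance_frame(low_res_frame, photo, factor, alpha=0.7):
    """Blend an upsampled video frame with an aligned high-res photo.

    alpha weights the photo's contribution. NOTE: this assumes the
    photo is already registered to the frame; in the actual research,
    aligning the two despite camera motion is the whole trick.
    """
    up = upsample_nearest(low_res_frame.astype(float), factor)
    return alpha * photo.astype(float) + (1 - alpha) * up
```

A straight blend like this would smear badly the moment the camera moves, which is exactly why the alignment step in the demo is the impressive part.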
What's especially notable is that the software can fill in the nasty bits of the scene even as the videographer/photographer rotates their view (you can see this as they shoot around the tree), and despite any lens differences (the software compensates for different focal lengths and distortions).
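Compensating for a rotating viewpoint like this typically means warping one image into the other's frame with a projective transform (a homography). Here's a minimal sketch of how such a warp maps points; the 3x3 matrix `H` would come from the alignment step, which this snippet does not attempt.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of (x, y) points.

    Points are lifted to homogeneous coordinates, multiplied by H,
    then divided by the resulting w component (the projective step
    that lets a homography model a rotating camera).
    """
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With an identity matrix the points come back unchanged; with a matrix encoding the camera's rotation, each photo pixel lands where it belongs in the video frame.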
Also note that many details from the source video are retained (the glass reflections in the statue shot may be the best example), which means the composite isn't drawing on the photograph's information alone.
I'm not quite convinced that the entire process is as automatic as the students make it out to be, but the technology is extremely promising all the same. And at this point, it should only be a matter of time before we see the idea work its way into our favourite post-production products. [Project Page via bbGadgets]