Researchers at the University of Rochester have just devised a way of reproducing music from a file roughly 1,000 times smaller than an MP3. The way they do it (physically modelling an instrument in a computer, then feeding the model input variables like breath, tongue, and fingers to generate the output tone) seems super obvious in hindsight. People were making music with MOD files back in the '90s, recording one tone and generating different notes from it. But actually reproducing the instrument wholesale? That's amazing.
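The article doesn't describe the Rochester team's actual model, but the general idea, storing a few performance parameters instead of the waveform itself, can be sketched with a classic simple physical model: Karplus-Strong plucked-string synthesis. The function name and parameters below are illustrative, not from the research.

```python
# A hedged sketch of physical-modeling synthesis, NOT the Rochester model.
# Karplus-Strong simulates a plucked string with a short delay line and a
# crude low-pass filter. Note how little data defines the sound: a pitch,
# a duration, and a decay factor, instead of tens of thousands of samples.
import random

def pluck(freq_hz, duration_s, sample_rate=44100, decay=0.996):
    """Generate samples of a plucked-string tone via Karplus-Strong."""
    n = max(2, int(sample_rate / freq_hz))  # delay-line length sets the pitch
    # The "pluck" is a burst of noise loaded into the delay line.
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(duration_s * sample_rate)):
        j = i % n
        out.append(buf[j])
        # Average adjacent samples and damp them: the string "rings down".
        buf[j] = decay * 0.5 * (buf[j] + buf[(j + 1) % n])
    return out

random.seed(0)               # deterministic noise burst, for reproducibility
samples = pluck(440.0, 0.5)  # half a second of A4
```

The compression angle falls out of this directly: the three numbers passed to `pluck` stand in for 22,050 samples of audio.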
Instead of recording music like we do now, we could just model the instrument the performer uses and what they do with their hands/mouth/feet. This way you get a (theoretically) 1:1 reproduction of the music even years after the original recording is gone. And why stop at instruments? Why not model a guy's vocal cords, allowing Sinatra to croon on about how it's tough to find love when you're stuck in a casket in the year 2525. Putting words into his mouth, in essence. Well, not his, since he's not around to model, but you get the point.
The processing power needed to play this back is going to be pretty intimidating, but this is what Gizmodo sees happening for iPods and other playback devices in a few decades. [Eurekalert via Hypebot via Tech Digest]