When you listen to music, when its waves of sound collide with your ear, you don’t hear a wall of sound. A great deal of information travels in a sound wave, and if that wave were actually a giant wave of water rushing onto a beach, you might expect to feel it as a big shove, like any other breaker coming in from the ocean. Except that’s not what happens when this particular wave hits you.
Standing there ankle deep in the surf, you brace for it to crash against your body, but when it does arrive, it’s not a “hit” at all. Instead, you feel a hundred different things at once, all on different parts of your body. In some places it’s a cool brushing, in others a soft slap or the feeling of a light sunburn. And then the wave has passed.
Audio engineering nerds all over the world invest untold hours and millions upon millions of dollars trying to perfect the science of sound. But as our friends at Motherboard explain, no one has come close to Mother Nature’s tools of sound reproduction.
Of course, that’s not how waves of water work. But it is how sound waves work, at least when they’re introduced to a human ear. Instead of a wall of noise, your ear perceives a variety of different frequencies. Those frequencies might add up to still be “noise,” but they might also be speech, or they might be music. It’s actually a mystery how this translation happens, the exact method by which we turn the big, looming wave into differentiated little ones.
The assumption was that ears use something akin to a Fourier transform. The Fourier transform, named after the French mathematician who also identified the greenhouse effect, essentially stretches a sound wave way out until its details are revealed. In more mathy terms, you take a signal, which is a mathematical function of time — a mechanical thing of air molecules travelling through space — and turn it into a spectrum: a series of different frequencies. The Fourier transform is found all over science, not just in sound.
The transformation is done through what’s called an “integration” of the original, mechanical function of time. (If you’ve taken calculus, you should remember integration.) Basically, this is taking that function and recovering information from it by mathematically slicing it up into tiny bits. It’s pretty neat. This, it turns out, is how we get meaning (words, music, whatever) from sound (that big wave in the ocean). Or so scientists have thought.
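To make the idea concrete, here’s a minimal sketch in Python. It uses the naive, textbook discrete Fourier transform (the slow sum, not a production FFT), and the “sound” is just two sine tones I’ve made up for illustration: they go in as one jumbled wave, and the transform recovers the two ingredient frequencies.

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform: one magnitude per frequency bin."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

# A "sound": a tone at bin 5 plus a quieter tone at bin 12, over 64 samples.
n = 64
wave = [math.sin(2 * math.pi * 5 * t / n) +
        0.5 * math.sin(2 * math.pi * 12 * t / n)
        for t in range(n)]

spectrum = dft(wave)

# The big jumbled wave separates back into its ingredients:
# peaks at exactly the two frequencies we mixed in.
peaks = [k for k in range(n // 2) if spectrum[k] > 0.1]
print(peaks)  # → [5, 12]
```

That integration the paragraph above describes is the sum inside `dft`: each frequency bin `k` slices the time signal up sample by sample and asks how much of that frequency is present.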
Turns out this might not be quite the case. Researchers at Rockefeller University devised an experiment to test the limit of this kind of analysis via Fourier transformation. A limit always exists in Fourier transforms because, as we stretch those waves out to infinity and gain more information about their details, we also lose information about the sound’s duration. This is called the Gabor limit. Look at it like this: the more precisely you pin down a sound’s frequencies, the less precisely you can say when the sound happened. You just see little bits of the wave, without really knowing the story of the whole wave anymore. This is the same mathematical trade-off behind the much more widely known Heisenberg uncertainty principle.
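You can watch this trade-off happen numerically. The sketch below is my own illustration (the same naive DFT as the textbook definition, nothing from the paper itself), with two close tones I’ve picked for the example: listen for a full second and a window of T seconds gives you frequency bins only 1/T Hz apart, so the tones separate cleanly; shrink the window and the bins become too coarse to tell them apart.

```python
import cmath
import math

def dft_mags(signal):
    """Naive DFT magnitudes, one per frequency bin."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

RATE = 64  # samples per second

def two_tones(seconds):
    """A 10 Hz tone plus a 13 Hz tone, observed for a given duration."""
    return [math.sin(2 * math.pi * 10 * t / RATE) +
            math.sin(2 * math.pi * 13 * t / RATE)
            for t in range(int(RATE * seconds))]

# One-second window: 64 samples, so bins are 1 Hz apart, and
# 10 Hz and 13 Hz show up as two clean, separate peaks.
long_spec = dft_mags(two_tones(1))
print([k for k in range(RATE // 2) if long_spec[k] > 0.3])  # → [10, 13]

# Quarter-second window: only 16 samples, so bins are 4 Hz apart.
# Neither 10 Hz nor 13 Hz lands on a bin, so their energy leaks
# across neighbouring bins and the two tones can no longer be
# read off cleanly.
short_spec = dft_mags(two_tones(0.25))
```

For the record, the Gabor limit itself says the product of a signal’s spread in time and its spread in frequency can never drop below 1/4π — and that is the bound the Rockefeller listeners beat.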
So the Rockefeller researchers, Jacob Oppenheim and Marcelo Magnasco, took a group of 12 composers and musicians and tested them to see if they could analyse a sound beyond the uncertainty limit of Fourier analysis. And guess what? They busted it down. “Our subjects often exceeded the uncertainty limit, sometimes by more than tenfold, mostly through remarkable timing acuity,” the authors write in Physical Review Letters.
The upshot of this is that hearing remains a mystery at its most fundamental level. Fourier analysis is certainly part of the process, but only a part. Of course, we understand well enough the mechanical process by which ears receive sound waves and by which information is transmitted to the brain: different frequencies vibrate different hair cells within the cochlea in your inner ear, which serve both to amplify sound and to convert the vibrations into electrical signals to be sent to the brain.
But the actual maths being done in there — what the ear is doing in terms of information — is TBD. (If you don’t like the idea of ears doing maths, just think of it as signal processing.) And, of course, discovering a mystery is almost as interesting as finding out the actual answer. If there is indeed something better than Fourier out there (or in there, as it were), then it would certainly mean improvements for things like recording technology, music compression, speech recognition, and beyond. Instead of mimicking ears via the Fourier transform, we could use whatever this even better process is.
Motherboard is an online magazine and video channel dedicated to the intersection of technology, science and humans. Republished with permission.