At All Things D, my friends at Fullpower did a demo of an accelerometer-equipped headset that can pick up a call when you tap it in a different way than you'd tap a regular headset.
That's not so new in concept, but the trick is that they use math to filter out the background noise (in this case, motion from walking, jumping and so on) so it doesn't hang up on you when you move around mid-call.
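If I had to guess how that filtering works, a toy version might look like this. To be clear, this is my own sketch, not Fullpower's actual algorithm: the idea is that walking produces slow, rhythmic changes in acceleration, while a deliberate tap is a sharp spike, so a simple running average can separate the two.

```python
# Toy sketch (not Fullpower's code): tell a deliberate tap apart from
# slow, periodic walking motion in a stream of accelerometer samples.

def detect_tap(samples, alpha=0.8, threshold=2.0):
    """Return True if a tap-like spike survives high-pass filtering.

    samples: acceleration magnitudes in g at a fixed sample rate.
    alpha: smoothing factor for the slow "gravity + walking" baseline.
    threshold: spike size (in g) that counts as a deliberate tap.
    """
    baseline = samples[0]
    for s in samples[1:]:
        baseline = alpha * baseline + (1 - alpha) * s  # slow background
        if abs(s - baseline) > threshold:              # fast transient = tap
            return True
    return False

walking = [1.0, 1.2, 0.9, 1.1, 1.0, 1.2, 0.9, 1.1]   # gentle periodic sway
tap = walking[:4] + [4.5] + walking[4:]               # sharp spike mid-walk

print(detect_tap(walking))  # False: walking alone doesn't trigger
print(detect_tap(tap))      # True: the spike does
```

Real firmware would obviously be far more sophisticated, but the shape of the problem is the same: keep the slow stuff, react to the fast stuff.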
The headset also knows when you place it on a table and powers down. All by using math and a regular accelerometer.
You've seen games and GPS apps from Fullpower, but those are just apps demoing the company's tech. Fullpower's motion-detection engine is described as doing for motion what voice recognition does for voice: it interprets the raw data and figures out what a person is doing, throwing out the confusing bits. I find that interesting because, up to now, most developers have had to deal with raw accelerometer XYZ readings, which are hard to parse on their own and even harder to translate into what the person holding the device is actually doing.
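To give a sense of the gap between raw XYZ data and "what is this person doing," here's a deliberately crude illustration of my own (nothing to do with Fullpower's engine): bucket a window of readings into an activity label based on how much the acceleration magnitude wobbles. The thresholds are made up.

```python
import math

# My own toy illustration, not Fullpower's engine: turn a window of raw
# XYZ accelerometer readings into a coarse activity label.

def classify(window):
    """window: list of (x, y, z) readings in g over roughly a second."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < 0.01:
        return "still"     # flat signal: device at rest, e.g. on a table
    if var < 0.5:
        return "walking"   # moderate, rhythmic variation
    return "vigorous"      # running, shaking, etc.

still = [(0.0, 0.0, 1.0)] * 10                        # just gravity
walking = [(0.0, 0.8 * (i % 3), 1.0) for i in range(10)]  # rhythmic sway
print(classify(still))    # still
print(classify(walking))  # walking
```

A real engine classifies far finer-grained gestures than this, but even this crude version shows why having someone else do the interpretation is valuable.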
Next up was an AMAZING demo of a camera app that filters out motion using the accelerometer. Typically, software that does this relies on gyroscopes, mechanical parts, or digitally scanning the image as it moves. Here the accelerometer helped a camera mounted on a wildly shaking platform: images taken on a crappy (slow) smartphone sensor came out very sharp once stabilization was applied. I'm not sure whether it times the shutter to fire when the motion is at its slowest, but that would make sense, since there's no way to increase the shutter speed. The tech can scale to all sorts of high-end cameras using just cheap accelerometer parts, not the typically pricey stabilization hardware you see in DSLRs now. I look forward to getting this stuff in smartphones.
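Since I'm only guessing that it fires the shutter at the calmest instant, here's what that guess looks like as a sketch (again, mine, not theirs): watch the acceleration magnitude over the capture window and pick the moment closest to a pure 1 g reading, i.e. when only gravity is acting on the camera.

```python
# Speculative sketch of "fire the shutter when the shake is lowest"
# (the post only guesses Fullpower does this; this is not their code).

def best_shot_index(magnitudes, gravity=1.0):
    """Return the index of the steadiest instant in the window.

    magnitudes: acceleration magnitudes in g during the capture window.
    A reading near 1 g means only gravity, so the camera is nearly still.
    """
    return min(range(len(magnitudes)),
               key=lambda i: abs(magnitudes[i] - gravity))

shaking = [1.8, 1.4, 1.05, 1.3, 1.9, 1.6]
print(best_shot_index(shaking))  # 2: the calmest instant
```

In practice you'd trigger on a live stream rather than a recorded window, but the principle is the same: a slow sensor can still take a sharp picture if you snap during a lull in the shaking.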
The demos were just concepts, but I'm sure we'll see more of this tech in products soon. [Fullpower]
[Disclosure: these guys are my friends.]