It’s the future, and that being the case, you’re going to want to talk to your smartphone and have it make heads or tails of what you’re saying. Getting that to work can be a pretty tough job, however, unless your phone can learn like a human. Wired explains that’s exactly what Google’s Jelly Bean operating system does.
According to Google’s Vincent Vanhoucke, a researcher who was instrumental in building “neural network” technology into Android, the tech cut voice recognition errors in Jelly Bean by a full 25 per cent. As Vanhoucke explained to Wired:
“It really is changing the way that people behave.” …When you talk to Android’s voice recognition software, the spectrogram of what you’ve said is chopped up and sent to eight different computers housed in Google’s vast worldwide army of servers. It’s then processed, using the neural network models built by Vanhoucke and his team.
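To make that pipeline concrete, here is a minimal sketch of the idea Wired describes: turn audio into a spectrogram, chop it into pieces (as if fanning them out to separate servers), and score each piece with a small neural network. Everything here is illustrative, not Google's actual system: the frame sizes, the two-layer network, and the ten "phoneme" classes are all made-up stand-ins.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Short-time Fourier magnitude: one row per windowed frame of audio."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))  # (n_frames, freq_bins)

def chop(spec, n_chunks=8):
    """Split the spectrogram into chunks, echoing the 'eight computers' idea."""
    return np.array_split(spec, n_chunks, axis=0)

def tiny_net(chunk, w1, w2):
    """A toy two-layer network scoring each frame against phoneme classes."""
    hidden = np.maximum(0, chunk @ w1)           # ReLU hidden layer
    logits = hidden @ w2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # softmax probabilities

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)               # one second of fake 16 kHz audio
spec = spectrogram(audio)
chunks = chop(spec)
w1 = rng.standard_normal((spec.shape[1], 32)) * 0.1
w2 = rng.standard_normal((32, 10)) * 0.1         # 10 hypothetical phoneme classes
probs = [tiny_net(c, w1, w2) for c in chunks]
print(len(chunks), probs[0].shape[1])            # 8 chunks, 10 classes each
```

In a real recogniser the network weights are learned from huge amounts of speech data, which is exactly the "learning like a human" part the article is talking about.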
And while neural network technology is most prevalent and advanced in Jelly Bean’s voice capabilities, that’s not where the application ends. This kind of human-like learning is also hugely promising for better, more useful image search. Eventually, it could help computers recognise images as actual objects instead of just jumbles of pixels.
It’s a push towards a more intuitive style of computer-human interaction I think we can all get behind, at least until SkyNet shows up. You can hop over to Wired to read more about it, and the awesomeness to come. [Wired]