Reporting live from Sydney’s TechFest, there is one innovation in particular that is bound to interest Gizmodo readers: a search engine that swaps text queries for human actions.
The NICTA-based team, in conjunction with the University of New South Wales, is using its vast research into video imagery to create a sort of reverse video search engine.
It works like this: imagine you want to find videos of people swimming backstroke. Instead of the traditional text-based search, where you would ordinarily rely on tags and video descriptions to surface results (e.g. YouTube), one day you might be able to pick up every swimming video that actually includes that precise human action, in its original context.
The research, which is still 2-3 years away from a commercial platform, draws on a vast database of human actions, and researchers hope the search results will prove more accurate than what Google can provide today using traditional ranking algorithms.
Oh, and you’ll be able to specify exactly where the action is taking place too, as the setting constitutes a valuable part of the search criteria. That could mean a more precise video match, especially in an age of poorly tagged videos that have nothing to do with their titles or descriptions.
When asked if it would pick up certain other human actions (we’ll leave those to your imagination), Dr Jian Zhang, Principal Researcher and Project Leader, said it would prove slightly more difficult to filter out the more ‘interesting’ human actions in a real-world scenario. Call me crazy, but the bright lights in California are likely watching this technology very closely, if you know what I mean.
Would you use a video search if it meant you could narrow down the action and the setting? Let us know below.