For all the power and connectivity that modern mobile devices offer, why are we still typing on screens (or, God forbid, numerical pads) barely three fingers wide? A new wearable GUI system aims to turn any surface within arm’s reach into an input device.
The OmniTouch system was developed by Carnegie Mellon University in conjunction with Microsoft Research and allows users to interact with applications on any surface. It utilises a pico projector to display the interface on a surface — whether it’s a wall, table, arm, lap, leg or what-have-you — then employs a custom-built short-range depth sensor (similar to a Microsoft Kinect) to track your fingers as they type.
The system’s software supports multi-touch input and can track digits in 3D space, differentiating between a finger that’s hovering over a surface and one that’s actively “clicked” an area. And, since it’s shoulder-mounted, the system’s first-person perspective doesn’t require any user calibration or special training. A user can even transfer the interface from one surface to another, say, from the back of a notebook onto a nearby wall.
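The hover-versus-click distinction described above boils down to measuring how far a fingertip floats above the surface in the depth image. The sketch below illustrates that idea in Python; the function name, thresholds and single-point comparison are illustrative assumptions, not details of the actual OmniTouch implementation.

```python
# Illustrative sketch (not the real OmniTouch code): with a depth sensor,
# the fingertip's depth can be compared against the surface depth at the
# same pixel to decide whether the finger is clicking, hovering or away.
# Thresholds here are assumed values for the sake of the example.

CLICK_MM = 5.0    # tip within ~5 mm of the surface counts as a click
HOVER_MM = 40.0   # within ~40 mm, but above the click band, is a hover


def classify_finger(finger_depth_mm: float, surface_depth_mm: float) -> str:
    """Classify a fingertip as 'click', 'hover' or 'away' based on its
    height above the surface (both depths measured from the sensor)."""
    height = surface_depth_mm - finger_depth_mm  # distance above surface
    if height <= CLICK_MM:
        return "click"
    if height <= HOVER_MM:
        return "hover"
    return "away"
```

For example, a fingertip measured at 497 mm over a surface at 500 mm sits 3 mm above it and would register as a click, while one at 470 mm would merely hover.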