For three decades, most of us have interacted with computers in exactly the same way: We point with a mouse (or a finger!), click, and watch the screen. In one way, it’s the most outdated element of human-computer interaction around. But in another, it’s the thing that has shaped every operating system and device designed since its invention. We’re starting to leave it behind, though. Here’s what’s coming next.
Changing an interaction as deeply entrenched as clicking is, well, monumentally challenging. It’s also extremely exciting. It asks us to rethink the way we interact with technology altogether. The catch-all name for this field is tangible (or graspable) user interface design, and we’re hearing about it more and more often. Here’s a simplified version of what our complex future has in store.
Giving Physical Objects Digital Meaning
For the past few years, the conversation around user interface design has been peppered with words like “invisible” and “disappearing”. The thinking goes that as interfaces develop, they’ll eventually disappear — we’ll just be gesturing in an empty room. Or thinking a command to a fleet of brain-embedded sensors.
It’s easy to see how, conceptually speaking, it’s not a far jump from “intuitive” to “invisible”. Yet they’re not the same thing, as Mark Wilson pointed out a few months back. An interface that’s easy to use isn’t synonymous with one you can’t see. Invisible UIs are confusing: we can’t tell whether they’re working or failing, and they’re hard to learn. As Berg’s Timo Arnall puts it, “literal invisibility can cause confusion, even fear, and they often increase unpredictability and failure.”
But what about an interface that’s woven into the fabric of everyday life? What if future interfaces aren’t just visible, but feel-able? What if they’re linked to physical objects that control digital environments? That’s the basic foundation of tangible interface design.
It Began With An Answering Machine
In 1992, a design student named Durrell Bishop (who went on to work at IDEO and found Luckybite) created what some cite as the first tangible UI. His Marble Answering Machine symbolized each incoming message with a marble. To listen to a particular message, you’d pick up its marble. To call back, you’d drop it back on the machine.
Bishop’s design laid a foundation for other groups. In 1995, a trio of researchers introduced a UI prototype called Bricks, which let users move Lego-like boxes to manipulate objects on a computer screen. One of those researchers — Hiroshi Ishii — is the founder of MIT’s Tangible Media Group.
Since then, graspable UI has wound up in a huge array of applications, ranging from the sensible to the wildly speculative. It can be as simple as Hibou, a radio that you control by touching its palladium surface in different ways:
Or as complex as this MIT project that uses levitating magnets to manipulate, for example, the light levels in a living room:
Not a Mouse, But a Menagerie
But isn’t the computer mouse a tangible interface? Yep — and so are touch-screens. They’re not ideal for a couple of reasons, though, from a lack of precision to a limited user experience (who wants to stare at a screen all day?).
Besides the obvious health risks involved with how we use computers today (see: digital dementia), there are plenty of social reasons that tangible UI makes sense. At the design consultancy Frog, engineers are creating room-sized interfaces controlled by speaking or gesturing, letting users call out a Seamless order or watch a YouTube video on the kitchen table. “It has the potential to be more heads-up, allowing users to be present in their environment,” project leader Jared Ficklin told me a few months ago. “Eventually, room-size computing will touch everything.” Here’s his prototype in use:
In other cases, users might not actually be able to use a traditional interface. Tangible UIs could help the blind use conventional design software, or help the computer-illiterate elderly communicate with loved ones. Good Night Lamp is aimed at long-distance relationships: When one person turns on their light, an identical one goes on wherever the other person is located. It’s an ambient way to keep in touch, even when there’s not very much to say:
For kids, there’s DIRTI, an iPad app that lets young children control an audiovisual symphony by playing with a bowl of tapioca:
So tangible UIs aren’t so much a category as a theme — a way of thinking about human-computer interaction that ventures beyond the console-and-mouse archetype.
The Great Inversion
If you look closely, though, there’s a certain symmetry between the past and future of UI design. In the early days of PCs, Microsoft and Apple introduced first-time users to computers with interfaces that looked like objects with which they were already familiar. Microsoft Bob, the failed assistant from the 1990s, placed applications and actions into a “house” as books and objects. The Apple OS was symbolized as a “desk,” where objects like files and trashcans created easy-to-understand visual metaphors. Today, we know this technique as skeuomorphism.
With tangible and graspable interfaces, though, the relationship between digital and physical is reversed. Rather than trying to make digital features feel like physical ones, designers are imbuing physical objects with digital properties. If the original point of skeuomorphism was to help first-time users understand computers, the point of tangible computing is to wean digital natives from the tyranny of the console — or at the very least, to break open the computer and weave its contents into the natural world. So the future of UI might not be as “invisible” as we imagine it — in fact, it could be far more visceral and touchable than it is today.