The past decade’s march towards better gadgets traces a trend line pointing at ultra-powerful devices with UIs so seamless they make Macs look like punchcard computers. But if you think about it, we — not hardware — are the limitation.
Beyond improvements in processing power, price and battery life, our desires for gadgets point towards three things: richer displays, more seamless inputs and smaller packages — the first two being in direct conflict with the last. Looking at where we’ve been and where we are, I don’t think we can keep pursuing these goals without going gadget prosthetic.
Now here’s a trip: For the first time this decade, design choices are being made to limit resolution in screens to show mercy to the human eye. Apple’s recent iMac revision raised the desktop monitor’s pixels per inch to about 110 — laptop levels of density, but on a big 27-inch screen, and so sharp it hurt. Any desk jockey can tell you that as displays get sharper, the strain goes up. On mobiles, already the most pixel-dense of the gadget kingdom, designers keep running into the conflicting goals of packing in more pixels and keeping devices pocketable. Resolution-independent operating systems (which rely on vector-based graphics) will help, but unless we take displays inside the human body, gadgets can’t get much smaller — there’s no way for them to become as pixel-rich as desktops while continuing to shrink beyond what they already are.
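That "about 110" figure is just geometry: pixels per inch is the panel's pixel diagonal divided by its physical diagonal. A quick sketch, using the 27-inch iMac's published 2560 × 1440 resolution:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: pixel diagonal over physical diagonal."""
    diagonal_px = math.hypot(width_px, height_px)  # length of the screen diagonal in pixels
    return diagonal_px / diagonal_in

# 27-inch iMac panel, 2560 x 1440 pixels:
print(round(ppi(2560, 1440, 27)))  # -> 109
```

Run the same numbers for a 3.5-inch smartphone screen and you see why mobiles top the density charts: far fewer pixels, but a far shorter diagonal to spread them over.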
The idea of hybridised HUDs — interfaces that blend reality with computed overlays — has been around for ages. Science fiction has already dreamed up what it is we want to see, in anime like Ghost in the Shell. But the recent explosion of augmented reality apps — powered by smartphones with digital compasses, internet connections, location awareness, cameras and the power to draw data-driven overlays — is simply a set of prototypes for real HUD and in-eye/in-mind displays. It's not a conceptual problem so much as a question of how.
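The sensor recipe those AR apps follow boils down to a bit of trigonometry: take the phone's GPS fix and compass heading, compute the bearing to a point of interest, and draw its label where that bearing falls inside the camera's field of view. A minimal sketch of that logic (the function names, 60° field of view and 320-pixel screen width are my own illustrative assumptions, not any particular app's API):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def overlay_x(heading, poi_bearing, fov=60, screen_w=320):
    """Horizontal pixel position for a POI label, or None if it's off-camera."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed angle from screen centre
    if abs(offset) > fov / 2:
        return None  # outside the camera's field of view
    return round(screen_w / 2 + offset / fov * screen_w)

# A POI due east of the user, phone pointed north: bearing 90, off-screen.
print(round(bearing_deg(0, 0, 0, 1)))  # -> 90
print(overlay_x(0, 90))                # -> None
print(overlay_x(0, 15))                # -> 240 (right of centre)
```

Swap the phone's screen for a retinal display and the maths doesn't change — only where the pixels land.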
Keyboards and buttons are easier to understand as a limitation: we type on increasingly baby-finger-sized keyboards on smartphones, using appendages that look like hot dogs. Keyboards just need to go away. Along that trend, software keyboards may be error-prone, but in proficient hands the typing is way faster and the devices are way smaller. Further from traditional keyboards, Microsoft Research's projects point towards gesture and voice commands. I don't see how we could get full work days done that way, though, and there's the rub: there isn't even a good concept right now for controlling a PC to the level we need without keyboards and pointers. Mind control is a joke.
In user-interface design, we've always trended towards the invisible. Instead of seams, we want the seamless. Instead of four clicks, any given major task is better with three. Maybe one day, none — the blink of an eye. Funnily enough, the only mind-controlled gadgets these days are toys, and usually toys from the low-end QVC valley where high-end tech ends up after dripping down from the peak of military or space-program development to gadget fiends, and finally to their kids. I'd guess the sloppy capabilities of such toys, like Mattel's Mindflex, make them inappropriate, unsafe and unusable for anything but hovering a ball in midair.
It's funny looking back at attempts at strap-on computing. We always thought those clunky setups — "wearable" PCs Velcro'd to our arms or slung over our backs — were the predecessors to in-body computing. I've long assumed that getting to prosthetic gadgets was a matter of miniaturisation. "When we can fit a computer into the profile of a Bluetooth headset, people will use 'em," we thought. But it's clear to me now that it's really about the interface: the inputs and outputs.
Gadgets don't have much more room for revolutionary improvement unless we bypass our own natural limitations of fingers meant to peel bananas and eyes designed to spot prey and predators, and get these damn things we love and depend on so much routed directly into our brains.
This week, Gizmodo is exploring the enhanced human future in a segment we call This Cyborg Life. It's about what happens when we treat our body less as a sacred object and more as what it is: Nature's ultimate machine.