When you picture a pair of smart glasses, chances are you’re envisioning something slick. Perhaps a stylish pair of aviators that, with either a discreet tap or a voice command, will immediately pull up an array of information based on whatever you’re looking at. In any case, you probably imagine someone like James Bond or Tony Stark wearing them.
That’s because science fiction has played a huge role in how we, the average consumer, think this technology should work. But the truth is smart glasses aren’t some futuristic gadget that has yet to come to fruition. They’re already here—and they’ve been around for quite some time.
So if smart glasses are already a thing, why doesn’t everyone already own a pair? Sure, design and cost factor in, but the answer isn’t as straightforward as you might think. To find out what’s taking so long, we talked to a bunch of designers, engineers, historians, sci-fi authors, and futurists about smart glasses—where that tech’s been, how it was developed, and what the future actually looks like.
The most “recognisable” pair of smart glasses might be the Google Glass Explorer Edition, but you can trace the idea of an information-laden visual apparatus all the way back to World War II pilots.
It was hard for pilots to identify targets based on voice instruction alone, so naturally, militaries turned to technology to solve the problem. The result was heads-up displays (HUDs). By the end of the war, some aircraft could reflect radar information onto glass in front of a pilot’s controls.
But the first HUD with computing power came in the late 1960s in the form of a huge, hulking device called the Sword of Damocles. The name is a reference to the old tale of Damocles, an annoying courtier in King Dionysius II’s court. To demonstrate the perils of ruling, Dionysius sat Damocles on the throne—with a giant sword hanging by a single hair over his head. The tech version was the invention of Ivan Sutherland and his team of computer scientists at the University of Utah, and is widely regarded as the first virtual reality and augmented reality headset.
Except, no one really knew what to do with this thing. Its size and bulk made it more of a theoretical proof of concept than an actual piece of technology anyone could use. That didn’t stop people from tinkering around though.
The Private Eye came out in the late 1980s and was popular with the DIY community. Meanwhile, Steve Mann—oft-quoted as the father of wearables—developed the Eye Tap in the late 1990s. They were closer to what we envision smart glasses to look like, but still weren’t the sleek, discreet designs we’ve come to expect.
And that’s one of the main problems with designing a pair of smart glasses—how these devices look is incredibly important. People are incredibly vain, so much so that needing glasses can feel like a death sentence to Eternal Dorkdom when you’re a kid. Glasses are way more fashionable these days, but smart glasses operate within a smaller margin. Yes, they have to look great, but they can’t simply be fashionable. They also have to add all that AR functionality too. The challenge is creating a pair that’s comfortable enough to wear, appeals to people’s individual styles, accommodates everyone’s vision, and can be mass-produced. Oh, and it can’t cost a fortune.
That’s a tall order, and the technology hasn’t been there yet. So in the meantime, science fiction has had to fill the gaps. Writers like William Gibson and Robert A. Heinlein included descriptions of augmented reality headsets in novels like Neuromancer and Starship Troopers. George Lucas’s Star Wars: A New Hope in 1977 introduced audiences on a massive scale to HUDs thanks to Luke’s Death Star trench run.
That continued on in the 1980s in Terminator and RoboCop with computer vision. Hell, even Sailor Mercury had a form of AR headset in the popular ‘90s anime Sailor Moon.
“They create sort of an ideal vision,” Madeline Ashby, a science-fiction novelist and futurist, told Gizmodo. “They create a vision of what might be possible, and that’s sort of the science fiction writer’s privilege. We can write about what we think is possible without having to do all the leg work of actually designing it. We write the wish list. Someone else has to fulfil it.”
Given the limitations, it makes sense that it took until 2012 and Google Glass for the first viable consumer smart glasses to burst onto the scene. Lighter than your average pair of sunglasses, these babies were sleeker than what had come before—even if they did cost a whopping $US1,500 ($2,210). Plus, the concept video Google dropped looked amazing. It gave us a first-person view into what a future with smart glasses might actually look like.
Except that’s where the expectations science fiction gave us and the reality of what was possible began to split. Functionally, Glass had a handful of useful apps and some promising use cases—but nothing compelling enough that anyone but the earliest of adopters would shell out for it. There was, so to speak, no “killer app.” Plus, while sci-fi novelists, Hollywood, and futurists were laser-focused on the positive possibilities, the public had different concerns once Glass was in the wild.
“The issue Google Glass experienced when it launched was two-fold. The way it looked, you literally had a piece of tech hanging off your face,” says Chuck Yust, a designer with Frog Design. “The second part is people reacting to being filmed all the time and having cameras in their face.”
Yust told Gizmodo that he’d heard Glass described as a Segway for your face. Yeesh. But that uber nerdy design is just one reason wearers were called Glassholes. (Though you really do have a problem when even supermodels can’t make a thing look cool.) Aside from aesthetic concerns, Glass triggered some legitimate societal concerns, especially with privacy and security.
Countless think pieces were penned, but perhaps the most famous incident involved a woman who was attacked at a bar in San Francisco for wearing Glass in 2014. The whole scuffle happened because bar patrons felt upset at the idea the woman could be recording them at any moment in a public area.
“Smart glasses, to me, are the easiest way to observe people in what they think is a private environment,” Ashby added. “That’s why they always show up in spy movies.”
“Even though logically we know the smartphone has a camera, we have a very good sense if someone is filming you with a smartphone,” says Marc Weber, the Internet history program founder at the Computer History Museum. “It’s harder with a headset because there’s not as many cues. But the immediate default assumption is that you could be filmed without your knowledge.”
So here we are in 2019—four years after the Explorer Edition of Google Glass was scrapped. The design, technological, and societal challenges facing smart glasses are still the same. So is that it? Are all these challenges just too expensive and complicated? Are we going to have to rely on science fiction for another 20, 30 years before another company delivers something viable? Perhaps not.
Backlash to Google Glass was brutal, but far from a final deathblow. All the designers, engineers, and futurists we spoke to agree on one thing—smart glasses are coming, and right now, they’re finding a second life in enterprise.
This is a two-part video series exploring the challenges of making smart glasses people would actually use. Part two will air next week.