Since we’re all friends here on the internet, I’ll let you in on a little secret: I hate driving. I drive too slowly, lurch violently when I change lanes, and the thought of having to merge onto a speeding California highway makes my heart speed up a little even as I write this. I’m no good at driving and never have been.
Delphi’s autonomous car outfitted with Intel technology. Image: Intel
All of this should probably make me an enthusiastic supporter of emerging self-driving car technology. And yet, as I sat in the backseat of an autonomous Audi outfitted with the latest tech from Intel and Delphi for a test drive, I was instead overwhelmed by mistrust and suspicion. Had my driver been a human, I probably would have marvelled jealously at the fact that the car maintained perfect speed and actually signalled before every turn. Instead, when the car once changed lanes too abruptly before a right turn, I remarked to my tour guide that the ride was just a tad too bumpy. The car was definitely a much better driver than I will ever be. But for the price of putting my trust in a machine, that still didn’t seem quite good enough.
On Wednesday, at Intel’s San Jose headquarters, the company opened the doors to its new research centre for autonomous vehicles to the press, and showed off the first of 40 BMW 7 Series self-driving cars that any day now will join the fleets of autonomous vehicles on the Bay Area’s roads. Intel is building the computing platforms for cars in partnership with the car technology company Delphi and Mobileye, which makes vision systems for self-driving cars. Intel wants to be the brains inside every single autonomous vehicle out there. At the event, it showcased its latest attempts to overcome the many technical hurdles to getting autonomous vehicles on the road, including efforts to develop a bona fide 5G wireless network that can move massive amounts of sensor data between cars and the cloud, to build a power-efficient deep learning system, and to create maps that change in real time, giving cars the ability to see more than their sensors alone allow.
As complicated as all that sounds, though, people like me may present an even bigger hurdle to the widespread adoption of the technology. I’m a technology reporter; I know the numbers. More than 30,000 people die on the road every year in the US, and a majority of those deaths are attributed to human error. Unlike computers, we humans can’t help but fiddle with the radio or sneak a peek at a text. And yet, when I realised that the driver who had piloted that autonomous Audi out of the parking lot had taken her hands off the wheel, I freaked out. Every jerk of the car felt like a surefire signal that I was on my way to meet an untimely death at the hands of an invisible operator.
“The industry has a lot of technological problems to solve,” Matt Yurdana, the creative director for Intel’s Internet of Things Experiences Group, told me. “But just as important are the interactions with people. How do people feel comfortable, psychologically, in one of these cars?”
In other words, if you’re going to get into a car, you’re going to have to trust it first. Like so many of our fears, our fear of self-driving cars is probably illogical. But it’s still there.
You may have seen a self-driving car from Google or Uber before. They look weird, with all kinds of protruding sensors and junk taking up a whole lot of space in the trunk. The autonomous Audi I rode in had none of that. Its 26 sensors were all fashioned so that you probably wouldn’t notice them unless you were looking for them. Its trunk was free of computational junk. And, most importantly, in place of your everyday GPS display was a screen that showed you exactly what the car was seeing. Its trajectory was outlined, so you could see where it planned to go on the road. If the car planned to turn right, a blinker appeared on the screen’s right-hand side. If it was stopped at a light, the traffic light appeared on screen, along with others in its field of vision in a more transparent relief behind it, so that you could have confidence it was reading the right light.
The Delphi Autonomous Audi. Image: Intel
The idea here was to make the car’s actions more transparent — turning it from a mysterious, opaque machine into something consumers understand.
Uber’s pilot project has taken a similar approach, testing how much information about the car consumers need to feel safe. Other self-driving car makers seem instead to hope that the magic of the technology will dazzle you enough to win you over. The minimalist interior of Google’s prototype car doesn’t even have a steering wheel, let alone a neat user interface to let you in on what the computer is up to. Unsurprisingly, Google has since abandoned that vision and is now piloting more normal-looking vehicles, because getting its fleet of weird toy cars out of the workshop and onto the road was just too difficult.
Intel isn’t actually interested in designing consumer-facing user interfaces for vehicles. But, Yurdana told me, it does want to understand what makes the best human-machine interfaces so that it can build the technologies to support them. Yurdana’s group is focused on researching and prototyping different human-machine interfaces to help Intel’s partners address issues of consumer trust. I told him about my experience in the autonomous Audi, where I felt less safe than I do when I’m driving, even though the car was certainly a more skilled driver.
“That’s because you didn’t have control,” he said.
Making consumers feel comfortable, he said, will be rooted in part in giving them some control. He demonstrated for me a prototype of a hypothetical app for hailing an autonomous Uber or Lyft. From the app, I could choose whether I wanted the car to do things like flash its headlights so I could spot it on a crowded street. When it pulled up, it flashed my name on the window and prompted me to unlock the door from my phone. Once I was in, I had to enter a pin code and then hold down a button that said “Go” for four seconds. Once the ride started, the car kept me abreast of the route and any changes to it not just on the phone screen, but also on two other screens within the car. It was all about communication — allowing the car to communicate its actions to me, and then giving me options for communicating back to it.
The interface for a hypothetical autonomous ride-hailing app. Image: Intel
Yurdana said that redundant communication systems will be vital for getting people on board. The car should let the passenger know everything that it’s doing in multiple ways across multiple screens. The company plans to soon test the system on people to gather feedback on what makes them feel most comfortable.
Given all the thought that seemed to have been put into making these autonomous cars people-friendly, I asked Delphi CEO Glen De Vos about my bumpy ride.
The aggressiveness of the car, he said, was all part of the car’s personality.
“Two years ago, it drove like my grandma, and that was even more aggravating,” he said. “The ride quality has to feel natural, and like we expect.”
In different places, he said, cars might be “tuned” to drive differently or consumers might even be able to change their driving behaviour themselves. In California, our reputation for aggressive on-road behaviour seems to have made its way into our self-driving cars.
In the end, De Vos said, probably nothing will do more to create consumer confidence in autonomous cars than exposure. By way of explanation, he pointed to the example of the elevator. Initially, elevators were considered such complicated pieces of technology that they required operators. In 1900, a company created the first-ever driverless elevator. But for decades after, it never caught on.
“People had visions of things like the driverless elevators chopping them in half,” said De Vos.
It took an elevator operator strike that nearly shut down New York City in 1945 before people were willing to fully embrace the driverless lift. By that example, it may be an awfully long time before we’re willing to turn over the wheel to a computer.
I had taken an Uber from Oakland to Intel’s San Jose headquarters, in part because of my aforementioned hatred of the road. On the way back, my driver was awful. He got lost en route to the freeway, despite the fact that his phone had turn-by-turn navigation. He merged onto the highway at a horrifying 65km/h, inviting a chorus of angry honks. He was a worse driver than me.
Had I instead hailed one of Uber’s pilot autonomous vehicles, I’m certain it would have been a smoother, faster ride. And yet, as he swerved across two lanes of traffic while trying to answer a call, I was still more relaxed than I had been in the back of the autonomous Audi a few hours earlier.