We Probably Should Be Worried About Driverless Cars

The advent of electric vehicles and increasingly effective battery storage heralds a new age of propulsion, but interestingly, that change seems to have coincided with another major alteration to how we move ourselves around: the potential removal of human control from the operation of the vehicle. This won’t have a big impact on emissions, but it will have big impacts on safety, perception of risk and culture.

I’ve previously explored the fairly secure prediction that, in the long run, handing over driving control to computers will significantly improve safety. In that analysis, I found that humans are responsible for around 90% of all traffic incidents. The human brain is good at a lot of things, but sadly, combining the flaws of human perception with a metal box powered by explosions results in injury and death. A machine would probably serve us better at this particular task.

There’s a possibility that seems a little under-discussed, though. Commuting will still entail spending millions of collective hours on the road, in a very large variety of situations. I suspect there’s a collection of real risks that will briefly emerge during the early years of this major technological shift, along with some predictably irritating political reactions.

When we made the process of navigation autonomous, initially through GPS units and now mostly through software on our smartphones, there were plenty of instances of cars plunging into rivers. This has been largely smoothed over, but it caused real, quantifiable harm to a small subset of drivers. In the early years of automation and the rapid spread of new technology, the sheer novelty of situations is almost impossible to pre-empt, and so I suspect there will be some harm incurred.

You can minimise these initial problems through an iterative and cautious approach. A fair few cars already feature autonomous extras, such as self-parking or lane-steering, like Tesla’s Autopilot. But even that feature can hit its limits early on - are we sure every manufacturer will understand the boundaries of its own technology? Tesla seem quite responsible, but I wonder about new companies that might seek to cash in on a trend without an eye for risk management.


Recently, I was travelling through the Scottish Highlands on a day of fairly serious weather. Storm Frank had caused some insane flooding, and we had to navigate busy roads that also happened to be coated with surprisingly deep water. We were fine, but being driven through those terrifying corners, I couldn’t help thinking about how a robot brain would see this situation, and how it might react. A human being, sitting in a seat with no steering wheel or pedals, would be seriously terrified. We will almost certainly develop the computational skill to deal with these situations. But what happens during our trip up the learning curve? As an avowed, unashamedly-irrational early adopter, I can’t help but feel a twinge of hesitancy with this technology.

One of the major advantages of autonomous vehicles is the ability to network - install devices that enable telemetry and transmission, and you can create amazingly efficient intersections, and significantly decrease traffic congestion by enabling vehicles to drive very close to one another. Yet, I wonder about the privacy and security implications of this change. It’s not surprising that exciting new capabilities create new risks, but this particular change makes me consider that it might be worth mixing my excitement with caution. We’re climbing inside these things, and propelling our fleshy selves at very high speeds. The fundamental problem is that the organic matter we’re made of really does not like to go from a high speed to a low speed in a short period of time, and this problem still exists with driverless cars.

Another potential problem is the tweaking of vehicle software by users. Perhaps there will be some regulation or agreement to ensure you can’t put your car in ‘hothead’ mode - but surely, someone will crack it, and the uncracked vehicles around it won’t have been programmed to deal with that scenario.

Some incident with a driverless car will garner intense media coverage, there will be a rapid and poorly-considered political reaction, and regulation will be passed that isn’t an effective method of reducing risk, but certainly creates the impression that it is. Useful legislative changes that are related to new technology and informed by a range of experts are almost always stuck in ‘development hell’ for years, and implemented long after technology has changed again. I don’t doubt this will very much be the case with driverless cars.

I’m quite confident that handing over the controls to a collection of computers will have a net positive impact on road safety. But I do worry about that brief period, where bugs that aren’t yet ironed out will have surprisingly serious consequences. I also worry that we won’t respond to these roadblocks with a good mix of reason and haste - these events will likely feed into already-established patterns around new technology and flawed risk perception. Our initial reactions will be too hasty, and our useful reactions will take too long.

I suspect driverless cars will be one of the biggest and most exciting technological changes in my lifetime, and it’s going to be almost impossible to quell the thrill of this shift. But the more I dwell on it, the more it seems logical to include a healthy dose of caution in there, too.
