Why Self-Driving Cars Should Never Have Steering Wheels

The head of Google's self-driving car division made headlines recently for asking US federal regulators to allow a vehicle without human-facing features like a steering wheel. Now he's made a very good case for why no autonomous vehicle on the road should have these things at all. In an interview with NPR that aired this week, Google's Chris Urmson drove home the point that it's simply not a good idea to have any kind of human-oriented controls in self-driving cars:

You wouldn't imagine that in the back of a taxi, we put an extra steering wheel or brake pedal there for the passenger to grab ahold of anytime. It would just be crazy to think about doing that. But at the same time, I could imagine that there are vehicles where most of the days you don't really want to drive it, so let it take you to and from work in the morning, for example, but on the weekend when you get a chance to get out onto some open road, that you might enjoy driving in that location. But I think the idea that you want the person to jump in who hasn't been paying attention or maybe had a couple of drinks with dinner and then jump in to override is probably not the right idea.

Although Urmson gives some examples of when you might want a car to switch back and forth between semi-autonomous and fully autonomous modes, it's pretty clear from his last sentence that the steering-wheel-free car is really the only safe option here. And that's what we know Google is pursuing.

Urmson's concerns made the rounds a few weeks ago when the US National Highway Traffic Safety Administration responded to a letter from Google requesting what's called a "rule interpretation" on the design of its vehicles. Most of the buzz was about the NHTSA possibly considering Google's AI to be the "driver", at least from a regulatory standpoint, which was widely misinterpreted as the equivalent of giving a robot human rights.

But the biggest news was actually that a car company (or rather, the company that writes the software that runs the car) is requesting to eliminate elements like the steering wheel, accelerator pedal and brake pedal. NHTSA requires these features because its standards assume that a human is operating the car. But Google is right to want to get rid of them, because they're actually dangerous. As Urmson says, drunk humans could attempt to take control of the car when it's not safe to do so.

So the question is this: How can Google's self-driving car program adhere to safety standards while designing vehicles for these new conditions in which humans are not driving?

I asked this question of Mike Lukuc, program manager for Connected and Automated Transportation at Texas A&M's Transportation Institute. Before he came to Texas A&M, Lukuc was NHTSA's program manager for Connected Vehicle Research, where he handled rule-making situations like Google's letter. "One problem with the NHTSA process is that they generally develop their standards based on historical crash data, so it's reactive rather than proactive," says Lukuc. It would take a very long time for the NHTSA to collect enough crash data on Google's vehicles to change that rule.

But what's happening now is that onboard technologies are evolving so quickly that NHTSA doesn't have years of previous data to compare them to, says Lukuc. So NHTSA is adjusting: a recent rule recommending vehicle-to-vehicle (V2V) communication for all cars was the first decision based on simulations and modelling instead of historical data. For a case like Google's, where lives are at stake, the NHTSA can't afford to wait. As Urmson points out in the NPR interview, 33,000 people are killed on American roads every year, with 94 per cent of those crashes due to human error.
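To make the V2V idea concrete, here is a minimal sketch of the kind of status broadcast a V2V-equipped car sends several times per second. This is an illustration only: the field names and the broadcast function are hypothetical, loosely modelled on the SAE J2735 Basic Safety Message rather than taken from any real implementation.

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class BasicSafetyMessage:
        """Simplified, hypothetical stand-in for a V2V status broadcast."""
        vehicle_id: str     # temporary, rotating ID to limit tracking
        timestamp: float    # seconds since epoch
        latitude: float     # degrees
        longitude: float    # degrees
        speed_mps: float    # metres per second
        heading_deg: float  # compass heading, degrees clockwise from north
        brake_active: bool  # whether the brakes are currently applied

    def broadcast(msg: BasicSafetyMessage) -> bytes:
        """Serialise the message for a short-range radio broadcast (sketch only)."""
        return json.dumps(asdict(msg)).encode("utf-8")

    # A car would emit a message like this roughly ten times per second:
    packet = broadcast(BasicSafetyMessage(
        vehicle_id="temp-4f2a",
        timestamp=time.time(),
        latitude=30.615,
        longitude=-96.342,
        speed_mps=27.0,
        heading_deg=92.5,
        brake_active=False,
    ))

Nearby cars receiving such packets can warn about, or react to, hazards their own sensors can't yet see, which is why simulation and modelling can estimate the safety benefit before any crash data exists.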

So this is where the most important part of America's shiny new autonomous car policy comes into play. As part of the USDOT's new guidelines, announced a few weeks ago, NHTSA said it would grant automakers exemptions to help test potential safety innovations. BMW was able to get one, for example, to test its self-parking technology. The agency's exchange with Google makes it seem like this would be a prime candidate for one of those special exemptions.

This would also help with another issue that Google specifically faces. Remember that only a few months ago, California's Department of Motor Vehicles released its own draft rules on autonomous vehicles, which required that self-driving cars have human-focused features like steering wheels as well as a licensed driver in the vehicle at all times. (This is one of the reasons why Google might have recently started testing its cars in Kirkland, Washington.) The NHTSA's ruling would override any particular California policies, provided that the USDOT releases its set of best practices as promised within six months. It would be one federal exemption for Google in the name of safety, but it could end up making all autonomous vehicles even safer.

[NPR]

Photo credit: AP Photo/Tony Avelar, File


Comments

    As a driving enthusiast I would hate this, but I guess this isn't directed at driving enthusiasts. I love the idea of a self-driving car though. It would make taxis null and void. Features that I look forward to are the ability to "call" your car to your location from anywhere and have it come and pick you up, or chauffeur you home in the passenger seat when you're too drunk to drive. It would be excellent.

      Gizmodo should have more users like yourself... I like how you think.....

    And when the self-driving car refuses to drive (hardware malfunction, software bug, communication problem, hacking), your car will essentially BRICK itself... because you can't move it without a steering wheel (such as pushing it off the road) or limp it to a service station.

    Joe Bloggs buys a wonderful new self-driving car, but Joe is a clod and doesn't get a cover note. Joe's car sees a kitten on the road and panics, sideswiping your car and breaking your elbow while you were parked. Joe's car continues off over the horizon.

    You try to recover the cost of damages, but from whom? Joe wasn't driving, and you can't sue a car. The police are also interested in this hit-and-run, but who will they prosecute? Not Joe; he didn't drive away.

    And this isn't going anywhere near the whole 'automated cars will be hardwired to kill' debate. Just as autonomous weaponized drones are deemed an ethical mess, a heavy, fast-moving object that does what it likes is going to be difficult to deal with in a legal sense when things go wrong.

    I know people like to see the good in everything, but you must expect the worst if you're to be prepared. I cannot see who the heck can be held liable in the case of everyday accidents when an autonomous vehicle is involved.

      That's going to be the big issue that none of them seem to have covered while pushing these: liability. If the person isn't driving, then the liability is on the manufacturer. Considering all the automotive lawsuits that are still unresolved over things like faulty tires, exploding airbags, dangerous ignitions, emissions cheating, exploding fuel tanks etc., each spending a decade going up the court of appeals route, it's ridiculous that we are trusting car manufacturers to get this right.

      Scary. Volvo wants driverless trucks... multi-tonne, potentially hazardous material payloads, travelling at highway speeds. Cruise control missiles.

      These cars don't even have dead man's switches... a train that goes in a straight line, runs on a set schedule and has little chance of meeting a hazard still has a person on board holding the throttle.
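For context, a dead man's switch in software is essentially a watchdog timer: the vehicle must receive a periodic heartbeat, and it performs a controlled stop the moment that heartbeat goes quiet. Here is a minimal sketch, in which the timeout value and the apply_emergency_stop actuator call are hypothetical placeholders:

    import time
    import threading

    HEARTBEAT_TIMEOUT_S = 0.5  # assumed value: stop if silent for half a second

    class DeadMansSwitch:
        """Watchdog that triggers an emergency stop when heartbeats stop arriving."""

        def __init__(self, on_timeout):
            self._on_timeout = on_timeout
            self._last_beat = time.monotonic()
            self._lock = threading.Lock()

        def heartbeat(self):
            """Called periodically by the operator or supervising software."""
            with self._lock:
                self._last_beat = time.monotonic()

        def watch(self):
            """Run in a background thread; fires on_timeout once the beat goes quiet."""
            while True:
                time.sleep(HEARTBEAT_TIMEOUT_S / 5)
                with self._lock:
                    silent_for = time.monotonic() - self._last_beat
                if silent_for > HEARTBEAT_TIMEOUT_S:
                    self._on_timeout()
                    return

    def apply_emergency_stop():
        # Hypothetical actuator call: bring the vehicle to a controlled stop.
        print("No heartbeat: applying emergency stop")

    switch = DeadMansSwitch(on_timeout=apply_emergency_stop)
    threading.Thread(target=switch.watch, daemon=True).start()

    # Demo: beat a few times, then fall silent and let the watchdog fire.
    for _ in range(3):
        switch.heartbeat()
        time.sleep(0.1)
    time.sleep(1.0)  # no more heartbeats; the watchdog triggers the stop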

      Trust the car!! Computers never crash... right :)

    I agree - one of the most important aspects of the driverless car revolution is ensuring that our legislation is up to the job of covering the worst scenarios that we can assume will some day happen. I would imagine in the scenario you described, it would be essential that all cars are covered by a public/third-party liability policy, tied to the owner (not necessarily the operator) of the car.

    With the hardwired-to-kill issue, obviously that's all sorts of messy, but it would be even more important to ensure that there is some legal standard for what the car can and cannot do. Of course, it will be almost impossible to get all manufacturers to agree that the car "must always place the wellbeing of its passengers ahead of all other considerations, including the wellbeing of other people", or that "the car must behave in a way that minimises the harm done in the case of a likely collision" (i.e. sacrificing the driver rather than plowing into a crowd of people). But as you say, these questions MUST be addressed before driverless cars can be embraced.

