Welcome back to Giz Asks, a series where we ask experts hard questions about science, technology and humanity's future. Today, we're trying to find the latest consensus on law and ethics of self-driving cars hitting pedestrians.
Illustration: Sam Woolley/Gizmodo
The hype around autonomous vehicles is reaching a fever pitch. New plans for our autonomous future are regularly announced. Self-driving Ubers are already rolling around Pittsburgh, and Michigan recently passed legislation clearing the way for autonomous vehicles on public roads. Volvo, Nissan, Honda and Toyota all plan to have self-driving cars on the market by 2020. But as innovation marches forward, we are still figuring out how autonomous vehicles will transform culpability when there's a crash, something that's particularly important if the crash harms or kills a pedestrian.
We asked transportation experts, ethicists and lawyers where the blame will lie if someone gets hurt in this rapidly approaching new reality.
Jean-François Bonnefon
Research Faculty, Toulouse School of Economics
Author, "The social dilemma of autonomous vehicles", Science
It really depends on whether you're talking about a fully autonomous vehicle or a vehicle where control is shared between the vehicle and the driver/passenger. We're not yet at a stage where we have fully autonomous cars. At this point, we are looking into various forms of shared control, which means that any accident is going to have a complicated story about the exact sequence of decisions and interventions from the vehicle and from the human driver.
I can't speculate about what future lawyers are going to do. But as a psychologist, I think that right now people would say that if the vehicle is fully autonomous, then what happens [in a crash] is on the car manufacturer.
Of course, "car manufacturer" is an umbrella term, because there are going to be many different firms providing many different pieces that go into the autonomous vehicle, which is going to complicate the story again. And, if we're talking about fully autonomous vehicles, we're also talking about some infrastructure that allows them to operate. It's not going to just be the car, it's also going to be all the broader systems within the city that allow the car to navigate. So [we could] see accidents where the blame is really on the public infrastructure that did not allow the car to make a correct decision.
Noah Goodall
Research Scientist, Virginia Transportation Research Council
Like most things, it generally depends on the specific circumstances. Assuming that the car was following the law, a human driver is typically not liable when a person suddenly and unexpectedly moves into their path. There are some exceptions: for example, a driver is responsible for being aware of children playing nearby, and for anticipating that a child may run into the road in a way an adult would not.
An interesting distinction between human-driven and self-driving cars is in how well they must avoid a crash. Say a child darts out from behind a tree and into the road. The car does not have time to stop, but does have time to swerve. This falls under the Sudden Emergency doctrine, which generally excuses the human driver from negligence, as a person cannot be expected to use the same degree of care or accuracy of judgement in an emergency. There's some debate about whether the Sudden Emergency doctrine should apply to self-driving cars, as software won't experience panic the way humans do. If this doctrine does not apply, self-driving cars may have more of a duty to avoid a crash than a human has, even if the crash was primarily the pedestrian's fault.
Patrick Lin
Director of the Ethics + Emerging Sciences Group and Associate Professor in the Philosophy Department at California Polytechnic State University
If (and when) a self-driving car injures or kills a pedestrian for the first time, I think we can expect a few things to happen:
(1) Industry will try to point to a lower injury/fatality rate with autonomous cars vs. human-driven cars to distract from the accident. This is the PR equivalent of "Hey look, a squirrel!" Even if there's enough data to demonstrate a lower accident/fatality rate -- which there isn't yet -- that doesn't address the specific issue of the accident: why did it happen, and could it have been prevented? As an analogy, a cancer drug could save tens of thousands of lives, but if it killed a patient, we'd be right to want an investigation. People shouldn't be treated as only statistics; these are real lives and could be your family or mine.
(2) Everyone involved could be sued. The victim or his family could sue the manufacturer and technology suppliers: if it weren't for the self-driving car, the accident wouldn't have happened, or maybe the car could have been designed to handle the situation differently. The owner could be sued, since it was her car, or maybe she was operating it in an unsafe way that wasn't intended by the manufacturer (and the manufacturer should have anticipated and addressed this, too). The insurance company and regulators could be sued for underestimating the risk and allowing those vehicles to operate on public roads without more testing.
(3) The easiest case would be if the pedestrian was clearly at fault. Maybe he was trying to test the car's limits or to commit insurance fraud -- maybe he jumped in front of the car from a hiding spot that the car's sensors couldn't see, like behind a large truck. Robot cars, no matter how smart, still can't defy the laws of physics, and technology will sometimes fail. This means they can't avoid all accidents. Still, at least the manufacturer will be in the hot seat to show that it had done all it reasonably could to prevent the accident. Even if the car had the right of way, that doesn't absolve the manufacturer from responsibility, if it could have done better. We know with certainty that abuse will happen, and manufacturers should foresee this, too -- whether for a big payday or infamy, that's just what some jackasses do.
(4) Trust could erode with the broader public, and this could impact the adoption of autonomous vehicles. Certainly, there could be great benefits from the technology, especially in the potential to save lives. As Mark Rosekind, administrator of the National Highway Traffic Safety Administration (NHTSA), puts it, the 35,000 lives lost on US roads every year are the equivalent of a 747 crashing every week -- we should be outraged, and we can do better. But, given the stakes, the industry can't afford to make any missteps, since they can be fatal. This isn't like beta-testing office software, where a crash means lost data. With robot cars, the crash can be all too literal, and the payment is in lives.
All this really depends on the context; there are a lot of variables at play here. Perhaps the pedestrian was inside a research park and signed a release of liability, given the presence of self-driving test vehicles. It matters whether the accident was a result of the car's design (including cybersecurity vulnerabilities) vs. technology failure vs. operator negligence. It matters how much training or education the manufacturer or dealer gave to the car's owner when she purchased it. Did the car have state-of-the-art sensors, or did the manufacturer choose less capable technology to cut costs? Did it give enough thought to programming decisions, especially when there was no obviously right way to go? And so on.
It will be messy, especially as "robot law" is still emerging, and the courts as well as legal scholars disagree on how those future issues should be handled. And this is an opportunity for regulators and society to proactively create guidelines and clarify existing law to sort out as much of this mess as we can ahead of time, rather than during emotional trials and pressure for a quick resolution.
Bryant Walker Smith
Assistant Professor of Law, University of South Carolina
Liability depends on the specific facts of the crash (as well as the relevant law). I'd give the same answer if you asked about a pedestrian-driver crash today. And, tragically, about ten pedestrians and cyclists are currently killed every day in the United States. Automation could change the power dynamic between drivers and vulnerable road users.