Your Self-Driving Car Will Be Programmed To Kill You, Deal With It

A recent survey shows that people want self-driving cars to be programmed to minimise casualties during an accident, even if it causes the death of the rider. Trouble is, the same survey shows that people don’t actually want to ride in cars that are programmed this way. That’s obviously a problem — and we’re going to have to get over it.

Image: Hot Tub Time Machine 2

These are the kinds of thought experiments taught to Ethics 101 students in the first weeks of class — but now they’re being applied to real life. Confronted with a version of the vexing trolley problem, manufacturers are struggling to come up with rules for autonomous vehicles to follow when an accident is inevitable and the lives of people, both inside and outside the car, are at stake.

A new study published in Science shows there’s a big disconnect between the kinds of ethical programming we want these vehicles to have, and the kinds of cars we actually want to ride in. Surveys done last year demonstrate that people tend to take a utilitarian approach to safety ethics. That is, they generally agree that a car with one rider should swerve off the road and crash to avoid a crowd of 10 pedestrians. But when the survey’s respondents were asked if they’d actually ride in a vehicle programmed in this way, they said no thanks.

Decisions, decisions: What would you want your self-driving vehicle to do? Would your answer change if you’re in the vehicle? (Image: Iyad Rahwan)

“Most people want to live in a world where cars will minimise casualties,” said Iyad Rahwan, a professor at MIT who co-authored the study. “But everybody wants their own car to protect them at all costs.”

The researchers call this a “social dilemma” whereby consumer choice — and the urge to act in one’s own self-interest — could make road conditions less safe for everyone. Frustratingly, there’s no known way to design a cake-and-eat-it-too algorithm that reconciles our moral values and the understandable human desire to not die.
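To make the dilemma concrete, here’s a minimal illustrative sketch in Python. It is not drawn from the study or from any real vehicle software, and the names (Outcome, choose_utilitarian, choose_self_protective) are hypothetical; it simply shows why the two policies the survey contrasts can’t be merged into one rule, because they rank the very same crash options differently.

```python
# Purely illustrative: a toy model of the two crash policies the study
# contrasts. Nothing here comes from real vehicle software; all names
# are hypothetical.
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible manoeuvre once a crash is unavoidable."""
    label: str
    pedestrian_deaths: int
    passenger_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.pedestrian_deaths + self.passenger_deaths


def choose_utilitarian(options):
    # Minimise total casualties, even if that sacrifices the passenger.
    return min(options, key=lambda o: o.total_deaths)


def choose_self_protective(options):
    # Protect the passenger first; only then minimise other casualties.
    return min(options, key=lambda o: (o.passenger_deaths, o.total_deaths))


# Scenario C from the study's figure: plough into ten pedestrians,
# or swerve and kill the car's own passenger.
options = [
    Outcome("stay the course", pedestrian_deaths=10, passenger_deaths=0),
    Outcome("swerve off the road", pedestrian_deaths=0, passenger_deaths=1),
]

print(choose_utilitarian(options).label)      # -> swerve off the road
print(choose_self_protective(options).label)  # -> stay the course
```

The two functions disagree on exactly the cases the survey asked about, which is the social dilemma in miniature: any single objective function has to pick a side.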

Situations involving imminent unavoidable harm: the autonomous vehicle must decide between (A) killing several pedestrians or one passerby, (B) killing one pedestrian or its own passenger, and (C) killing several pedestrians or its own passenger. (Image and caption credit: J. Bonnefon et al., 2016)

Results of the survey showed that people are on board with utilitarian-minded robotic vehicles, and would be content to see others buy them. This is an easy sell; the needs of one or two individuals, we tend to agree, are greatly outweighed by the needs of the many. And the more lives saved, the more inclined people are towards this utilitarian attitude: as many as 76 per cent of respondents were cool with a vehicle being programmed to sacrifice one passenger if it meant saving the lives of 10 pedestrians.

But these same people showed considerably less enthusiasm when it came to their desire to purchase or ride in one of these autonomous vehicles. When asked to rate the morality of a car programmed to crash and kill its own passenger to save 10 pedestrians, the favourable rating dropped by an entire third when respondents had to consider the possibility that they’d be the ones riding in that car.

The study also revealed that people don’t like the idea of the government regulating the auto industry to enforce utilitarian principles. Respondents said they were only a third as likely to purchase a vehicle regulated in this way as an unregulated one, which could conceivably be programmed in any number of ways.

Rahwan and his colleagues warned that the concerns over regulations could “paradoxically increase casualties by postponing the adoption of a safer technology”.

Patrick Lin, the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, says we humans are a fickle lot, and that we don’t always know what we want or what we can live with.

“What we intellectually believe is true and what we in fact do may be two very different things,” he told Gizmodo. “Humans are often selfish even as they profess altruism. Car manufacturers, then, might not fully appreciate this human paradox as they offer up AI and robots to replace us behind the wheel.”

What’s particularly bizarre about this latest research is that many of these survey respondents, should they find themselves driving a car and suddenly confronted with such a scenario, would probably go out of their way to make a suicidally evasive manoeuvre to avoid a crowd. There appears to be a disconnect between the morality of human decision-making in these matters and having robots make these decisions on our behalf. We’re clearly uncomfortable with it, but we’re going to have to get over it if we ever want to see safe and responsible self-driving vehicles on the road.

“It shouldn’t be surprising that ordinary people haven’t thought deeply enough about ethics to be consistent,” said Lin, who wasn’t involved in the study. “Most people think that ethics is just about your ‘gut instinct’, but there’s so much more to that. It’s practically a science, complete with guiding principles or laws.”

Lin also noted that we trade safety for other priorities and conveniences all the time. “Humans are notoriously bad at risk assessments: we drink and drive, we text and drive, we go way over the speed limit, and so on,” he said. “If we really didn’t care about being killed, we wouldn’t be in a car in the first place or allow guns in society.” He added: “So, by asking for opinions from ordinary people, the study is collecting uninformed answers, and that’s not very helpful in resolving dilemmas, which may be useful in advertising and marketing, but not so much for law and ethics.”

At the same time, Lin acknowledges that public opinion can be a powerful thing. All it would take is a Hollywood movie that embeds a self-driving car in the story and shows how wonderful and safe things will be when autonomous cars hit the road. Conversely, a high-profile accident caused by a self-driving vehicle could turn opinion against the technology, “so manufacturers really need to be careful”.

Rahwan says these opinions “are not guaranteed to persist”. Indeed, as people learn more about autonomous vehicles and how they work, it’s likely that public opinion will come around. And if not, that’s when auto manufacturers, the government, the law and the insurance industry will all step in.

[Science]

