Self-Driving Cars Can’t Choose Who To Kill Yet, But People Already Have A Lot Of Opinions

Past research has found that people would generally prefer to minimise casualties in a hypothetical autonomous car crash, but what happens when people are presented with more complex scenarios? And what happens when an autonomous vehicle must choose between two outcomes in which at least one person could die?

Who might those vehicles save, and on what basis do they make those ethical judgements?

It may sound like a nightmarish spin on “would you rather”, but researchers say such thought experiments are necessary for programming autonomous vehicles and for the policies that regulate them. What’s more, responses to these difficult dilemmas vary across cultures, revealing there’s no universal agreement on what people believe to be the morally superior option.

In one of the largest studies of its kind, researchers with MIT’s Media Lab and other institutions presented variations of this ethical conundrum to millions of people in 10 languages across 233 countries and territories in an experiment called the Moral Machine, the findings of which were published in the journal Nature this week.

In a reimagined version of the trolley problem — an ethical thought experiment that asks whether you would opt for the death of one person to save several others — the researchers asked participants on the viral, game-like platform to decide between two scenarios involving an autonomous vehicle with a sudden brake failure.

In one instance, the car continues straight and hits the pedestrians in front of it, sparing those in the vehicle; in the other, it swerves into a concrete barrier, killing those in the vehicle but sparing those crossing the street.

Scenarios asked participants to choose between one group or another based on avatars of different genders, socioeconomic statuses (an executive versus a homeless person, for example), fitness levels, ages and other characteristics.

In one example, participants were asked whether they’d spare a group of five criminals or a group of four men. In another, both groups comprised one man, one woman and a boy; however, the pedestrian group came with an additional detail: they were “abiding by the law by crossing on the green signal”, implying that the group in the car may have been breaking the law.

Taken as a whole, the data showed that people tended to prefer sparing more lives, the young over the old, and humans over animals. But participants’ preferences diverged when their countries and cultures were taken into account.

For example, respondents in China and Japan were less likely to spare the young over the old, which, as the MIT Technology Review noted, may be “because of a greater emphasis on respecting the elderly” in their cultures.

Another example highlighted by the magazine was that respondents in countries or territories “with a high level of economic inequality show greater gaps between the treatment of individuals with high and low social status”.

“People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,” Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and a co-author of the study, said in a statement.

Critics have pointed out that in practice, many decisions would precede an ethical ultimatum as extreme as the ones presented in this research. But if anything, the data shows there’s much to consider about the decision-making processes of artificial intelligence. The researchers said they hope the study’s findings will serve as a springboard for a more nuanced discussion of universal machine ethics.

“We used the trolley problem because it’s a very good way to collect this data, but we hope the discussion of ethics don’t stay within that theme,” Edmond Awad, a postdoctoral associate at MIT Media Lab’s Scalable Cooperation group and a co-author of the study, told the MIT Technology Review.

“The discussion should move to risk analysis — about who is at more risk or less risk — instead of saying who’s going to die or not, and also about how bias is happening.”

[MIT Technology Review, Nature]

