Should Autonomous Cars Be Forced To Save Lives In An Emergency? 

We’ve discussed the need for a coherent set of ethics for autonomous cars before. After all, these are really 1,588kg robots that will be roaming all over our cities; we need to decide what acceptable and predictable behaviours will be for these machines. But lately I’ve been wondering something else about autonomous cars: should we force them to save lives?

What I’m asking is whether autonomous cars should be able to be employed, by a properly sanctioned law-enforcement, disaster-assistance, or peacekeeping organization, to help protect people and mitigate potential threats, whether from natural causes or from human-created dangers.

This goes beyond the passive damage-mitigation conundrums of things like the trolley problem. Let’s imagine that the next decade or so goes as planned for autonomous vehicles, and we start to see fully autonomous cars mixed in with regular, sweaty, human-driven traffic. For the sake of these thought experiments, let’s say that in any given near-future parking lot of, oh, 50 cars, there are about three to five fully autonomous vehicles.

Let’s also say that in this near future, there’s some sort of emergency situation: a terrorist attack, a shooting. If police were able to determine that there were, say, five autonomous cars in nearby parking lots, what if they could use them to help?

Since we’re making everything up, let’s also make up that there’s a law for autonomous cars that says, unless certain criteria are met for an opt-out, any self-driving car must be receptive to commands sent on a special encrypted channel from authorized law-enforcement agencies.

Normally, this could be used to stop suspected stolen vehicles, or to help keep traffic slow and safe around accidents, but it could also be used to give driving commands to the cars when they report back that no one is inside and they’re idle.

The police send commands to the nearby cars, directing them to the location of the active shooter, where one blocks an entrance to a building full of people while the other four attempt to box in the shooter and prevent him from running indoors.

During this process, all five cars sustain significant damage, with two completely disabled, but thanks to their intervention the shooter’s progress is severely impaired, and regular police forces are soon able to come in and take control of the situation.

There’s no real reason why something like this shouldn’t be technically possible. A central command center would be informed of the locations of the available cars, and would be able to send them the GPS coordinates of where they want them positioned. Theoretically, the cars could be commanded in formations, sent to block escape routes, or even made to track or follow a suspect.
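Just to show how simple the core logic of that command center could be, here’s a minimal sketch in Python. Everything here is hypothetical: the message format, the field names, and the idea that cars report an "idle" and "occupied" status are all assumptions, not any real vehicle API; the point is only that pairing available cars with target positions is a trivial bit of software.

```python
from dataclasses import dataclass

@dataclass
class DispatchCommand:
    """Hypothetical message sent over the encrypted law-enforcement channel."""
    vehicle_id: str
    latitude: float
    longitude: float
    role: str  # e.g. "block_entrance" or "box_in" -- invented labels

def dispatch_idle_vehicles(vehicles, targets):
    """Pair each idle, unoccupied vehicle with a target position.

    `vehicles`: list of dicts like {"id": ..., "idle": bool, "occupied": bool}
    `targets`: list of (latitude, longitude, role) tuples, one per position
    Returns the list of commands that would be broadcast to the cars.
    """
    # Only cars that report themselves empty and idle may be commandeered.
    available = [v for v in vehicles if v["idle"] and not v["occupied"]]
    commands = []
    for vehicle, (lat, lon, role) in zip(available, targets):
        commands.append(DispatchCommand(vehicle["id"], lat, lon, role))
    return commands
```

If there are fewer available cars than target positions, `zip` simply stops early and some positions go uncovered, which seems like the right failure mode: the system should never command an occupied or in-use car.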

Or, consider a disaster like a fire, or a flood: autonomous cars could be called upon to ferry people away from areas of danger to safe zones, or could be called upon to create instant roadblocks or barriers as needed, long before official crews can get on the scene.

Sure, there are plenty of issues here: it’s essentially police commandeering private property, and that property has a high probability of being damaged or destroyed. It’s a lot of power to give to law enforcement at a time when trust in that institution has eroded quite a bit among many people, and for good reason. Plus, having such a command channel at all leaves open the possibility that it could be hacked, putting the ability to command an army of robot cars in the hands of people you very much don’t want to have it.

Even so, I’m sort of inclined to think the possible benefits may outweigh the risks. A system where, in emergency situations, a number of car-sized robots can be deployed could be an incredibly valuable resource for helping to keep people safe and to keep dangerous situations contained.

I think that if I own an autonomous vehicle at some vague point in the future, I’d be okay with it being used for something that could potentially save lives. I mean, come on, it’s not one of my old crappy vintage cars I actually care about, after all.

There’d have to be some sort of reasonable compensation system set up for the owners of the cars that become ad-hoc Robocops, and I’m sure that system will be full of issues, but that general idea isn’t unheard of.

The general idea of police commandeering private property has been around for a while. The laws are known as posse comitatus statutes, and the idea that government officials need to pay for equipment or vehicles that have been damaged in official, commandeered use is something that’s been considered since at least the Civil War:

… what if they destroy or damage my property? That’s less clear. In United States v. Russell the Supreme Court was faced with a claim for three steamers commandeered by military authorities during the Civil War. The Russell court found it obvious that “the taking of such property under such circumstances creates an obligation on the part of the government to reimburse the owner to the full value of the service.”

The court continued, “private rights, under such extreme and imperious circumstances, must give way for the time to the public good, but the government must make full restitution for the sacrifice.”

Self-driving cars will be the first real robots released into mainstream society, and as such I think we’ll encounter more and more issues like this. That’s actually a potentially helpful development. Heroism isn’t an easy quality to come by, and while most of us would like to believe we’d be capable of it if the situation demanded, it’s comforting to know that there could be a pool of available resources out there that can do what people cannot.

Of course, since there will always be a human in the equation deciding things, there’s always the potential for trouble, too. I’m not sure what to think, really, but I am sure that this is something worth thinking about.