NVIDIA Co-Founder: Self-Driving Cars Shouldn’t Make Their Own Ethical Decisions

Your self-driving car is charging towards a pedestrian trapped on a narrow road, and there's no hope of stopping in time. Its systems could force it off the road and into a catastrophic crash to save the pedestrian's life, or it could do nothing and let the pedestrian die. What would you do? According to the co-founder of NVIDIA, self-driving cars shouldn't be making these ethical decisions for us in the first place.

Jen-Hsun Huang co-founded NVIDIA back in 1993, and at CES this year he and his company are talking about the super-fast new X1 mobile processor and its place in the cars of the future with new automotive boards like the Drive CX and Drive PX.

In a roundtable discussion this morning, conversation quickly turned to the cars of the future and the sort of things they’d be able to do.

While Jen-Hsun said that he's happy to talk about future tech like self-driving cars, he made sure to note that the technologies NVIDIA announced at CES 2015 are a "stepping stone" to the future, rather than a platform designed for running commercially viable self-driving cars.

The co-founder was frank, saying that the market doesn’t really need self-driving cars just yet:

Nobody wakes up in the morning and says, ‘I’d like my car to drive itself’, but everyone wants a less stressful driving experience. The car needs to become more aware of its environment.
I don’t think we need a car that lets you take your hands off the steering wheel and fall asleep. I don’t think that’s very safe.

What he means by “more aware of its environment” is a car running the Drive PX platform with 20 connected cameras around the exterior of the vehicle, and a cloud-connected machine learning engine that recognises obstacles, signs and even obscured pedestrians. It’s less about creating a self-driving car, and more about creating a smart car.

Jen-Hsun gave an example of a parent turning around from the driver’s seat for a moment to scold some naughty kids in the back. In those few seconds, another car stops suddenly in front of the vehicle, and there’s no way the driver can brake in time to avoid an impact. A Drive PX-enabled car could, in milliseconds, identify that the adjacent lane is empty and change lanes to perform an emergency stop, avoiding the collision entirely. It can do that because it senses and reacts faster than a person ever could.
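The decision Jen-Hsun describes can be sketched as a simple priority check. This is purely illustrative, not NVIDIA's actual Drive PX logic; every name and threshold here is a hypothetical stand-in:

```python
# Illustrative sketch of the emergency-manoeuvre decision described above.
# Not NVIDIA's Drive PX code -- all names and values are hypothetical.

def choose_manoeuvre(time_to_collision_s: float,
                     min_stop_time_s: float,
                     adjacent_lane_clear: bool) -> str:
    """Pick an evasive action when an obstacle suddenly appears ahead.

    time_to_collision_s: seconds until impact at the current speed
    min_stop_time_s:     seconds the car needs to brake to a full stop
    adjacent_lane_clear: whether perception reports the next lane empty
    """
    if time_to_collision_s >= min_stop_time_s:
        # Enough distance remains: brake in the current lane.
        return "brake"
    if adjacent_lane_clear:
        # Can't stop in time, but the next lane is free:
        # change lanes and perform the emergency stop there.
        return "change_lane_and_brake"
    # No safe automated option: fall back to warning the driver.
    return "warn_driver"

print(choose_manoeuvre(1.0, 2.5, True))   # the scenario in the anecdote
```

The point of the example isn't the branching itself, which any driver performs instinctively, but that the machine evaluates it in milliseconds rather than the second or more a distracted human needs.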

Not only does the NVIDIA co-founder seem down on self-driving cars, he’s also keen to keep moral judgments on the road within his own control if they ever do become a reality.

Ethicists have posed important questions about self-driving cars. One of those questions we discussed a while back was whether your driverless car should kill you to save two other people.

Here’s the scenario:

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you.

The NVIDIA co-founder agrees that there are important questions that need to be asked, but his solution to many of them is simple: “if my machine can’t help me, then please don’t help me.”

“If it came up against that sort of an ethical conundrum, the car should just do nothing. If it’s an ethical question, the human makes the choice. I don’t want my car to make that decision on my behalf. The technology shouldn’t make you worse. This is something we can consider,” he said.

What do you think? Should smart cars be making ethical decisions for us?