Would You Feel Safer If Your Self-Driving Car Could Explain Itself?

With each passing breakthrough in artificial intelligence, we’re asking our machines to make increasingly complex and weighty decisions. The trouble is, AIs are starting to act beyond our levels of comprehension. In high-frequency stock trading, for example, this has led to so-called flash crashes, in which algorithms make lightning-quick decisions for reasons we can’t quite grasp. In an effort to bridge the growing gap between man and machine, the Pentagon is launching a new program to create machines that can explain their actions in a way we puny humans can understand.

Image: Hot Tub Time Machine 2

The Defense Advanced Research Projects Agency (DARPA) is giving $US6.5 million ($8.7 million) to eight computer science professors at Oregon State University’s College of Engineering. The Pentagon’s advanced concepts research wing is hoping these experts can devise a new system or platform that keeps humans within the conceptual loop of AI decision-making, allowing us to weigh in on those decisions as they’re being made. The idea is to make intelligence-based systems, such as self-driving vehicles and autonomous aerial drones, more trustworthy. Importantly, the same technology could also result in safer AI.

Part of the reason humans struggle to understand AI decision-making stems from how AI works today. Instead of being programmed with specific behaviours, many of today’s smartest machines learn on their own from large numbers of examples, a process called machine learning. Unfortunately, this often leads to solutions that even the system’s developers don’t understand — think computers making chess moves that baffle the game’s top grandmasters. Worse still, the system can’t offer any explanation of how it arrived at them.
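
To make that contrast concrete, here’s a minimal, purely illustrative sketch in Python, using scikit-learn and invented toy data (nothing from the DARPA project or Oregon State’s work). The model is never told a braking rule; it infers one from examples. A tiny decision tree happens to be readable after the fact, which is exactly the kind of readout today’s deep networks lack:

```python
# Hypothetical sketch: learning a braking rule from examples instead of writing one by hand.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example: [speed_kmh, distance_to_obstacle_m]; label: 0 = keep going, 1 = brake
examples = [[30, 50], [60, 10], [40, 5], [20, 80], [80, 15], [50, 60]]
labels   = [0,        1,        1,       0,        1,        0]

# No braking threshold is programmed; the model infers one from the data.
model = DecisionTreeClassifier(max_depth=2).fit(examples, labels)

# A small tree is one of the few models whose "reasoning" can simply be printed.
# Deep networks offer no such built-in readout, which is the gap DARPA wants closed.
print(export_text(model, feature_names=["speed_kmh", "distance_m"]))
print(model.predict([[70, 12]]))  # -> [1]: brake, purely because the examples implied it
```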

Accordingly, we’re becoming increasingly wary of machines that have to make important decisions. In a recent study, most participants agreed that autonomous vehicles should be programmed to make difficult ethical decisions, such as killing the car’s occupant instead of 10 pedestrians in the absence of any other options. The trouble is, the same respondents said they wouldn’t want to ride in such a car. It seems we want our intelligent machines to behave as ethically and socially responsibly as possible, so long as we’re not the ones being harmed.

Perhaps it would help us to trust our machines more if we could peer under the hood and see how AIs reach their decisions. If we’re not happy with what we see, or with how an AI reached a decision, we could simply pull the plug, or choose not to purchase a certain car. Alternatively, programmers and computer scientists could provide the AI with new data, or different sets of rules, to help the machine come up with more palatable decisions.

Under the new four-year DARPA grant, researchers will work to develop a platform that facilitates exactly this kind of communication between humans and AI.

“Ultimately, we want these explanations to be very natural — translating these deep network decisions into sentences and visualisations,” said Alan Fern, principal investigator for the grant and associate director of the College of Engineering’s Collaborative Robotics and Intelligent Systems Institute.

During the first stage of this multi-disciplinary effort, researchers will use real-time strategy games, such as StarCraft, to train AI “players” that will have to explain their decisions to humans. Later, the researchers will adapt these findings to robotics and autonomous aerial vehicles.
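
To give a flavour of what such an experiment could involve, here is a purely hypothetical Python sketch (the game-state fields, thresholds and wording are all invented, not Oregon State’s actual setup) of an agent that hands back a plain-English rationale with every in-game action, so a human can audit the choice:

```python
# Hypothetical sketch: an RTS-style agent that explains each decision in plain English.
from dataclasses import dataclass

@dataclass
class GameState:
    own_army: int
    enemy_army: int
    minerals: int

def choose_action(state: GameState):
    """Return (action, explanation) so a human can review the decision as it happens."""
    if state.enemy_army > state.own_army * 1.5:
        return ("retreat", f"Enemy army ({state.enemy_army}) outnumbers ours "
                           f"({state.own_army}) by more than 50 per cent, so falling back.")
    if state.minerals >= 100:
        return ("build_unit", f"{state.minerals} minerals banked and no immediate threat, so reinforcing.")
    return ("scout", "No immediate threat and low resources, so gathering information instead.")

action, why = choose_action(GameState(own_army=12, enemy_army=30, minerals=40))
print(action, "->", why)  # retreat -> Enemy army (30) outnumbers ours (12) ...
```

The real challenge, of course, is extracting explanations like these from learned deep networks rather than hand-written rules, which is precisely what the DARPA program is after.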

This research may become crucial not just for improving trust between humans and self-driving cars, but for any kind of autonomous machine, including those with even greater responsibilities. Eventually, artificially intelligent war machines may be required to kill enemy combatants, and at that stage we will most certainly need to know why they are acting in a particular way. Looking even further ahead, we may one day need to peer into the mind of an AI vastly beyond human intelligence. That won’t be easy; such a machine will be able to make thousands of decisions in a split second. We may never understand everything our future AIs do, but by thinking about the problem now, we have a better shot at constraining future robots’ actions.

[Oregon State University]

