Tesla’s Autopilot Can Be Fooled With A Projector

Maybe you shouldn’t take your hands off the steering wheel of your self-driving car just yet.

Tesla’s Autopilot is generally considered the current gold standard in autonomous vehicle technology, although we learned in 2018 that it can be fooled into thinking a driver’s hands are on the wheel simply by wedging an orange into the steering wheel. That’s a human-introduced problem, though, and Gizmodo Australia’s reviews of Tesla’s Autopilot suggest that humans are often the issue.

So what happens if somebody sets out to deliberately trick the underlying AI to cause havoc?

New research reported by Ars Technica suggests that while there’s still work to be done on autonomous driving AI generally, it’s also feasible to deliberately “fool” the existing Autopilot feature on a Tesla with nothing more complex than a $US300 projector.

Ben Nassi from Ben-Gurion University in Israel has written a paper outlining his experiments on both the Tesla Model X and the Mobileye 630 PRO systems, using a range of projected images including a human figure and a road speed sign. The idea behind the tests is that an attacker could spoof a real-world situation in order to confuse, or take a degree of control over, an autonomous vehicle.

In one experiment, Nassi shows how it’s feasible to project “fake” road lines that Tesla’s Autopilot reads as an instruction to cross over to the other side of the road, which could, of course, be quite chaotic in real-world driving. He’s also experimented with briefly flashed “phantom objects” projected from drones to show how the attack could be carried out without an attacker needing to be physically present.

Nassi asserts that this isn’t a case of bugs in Tesla’s and Mobileye’s software, but rather a “fundamental flaw” in the image recognition models currently in use, which allow “phantom” objects to be recognised as real ones. He notes that:

Phantoms are definitely not bugs. They are not the result of poor code implementation in terms of security. They are not a classic exploitation (e.g., buffer overflow, SQL injections) that can be easily patched by adding an “if” statement. They reflect a fundamental flaw of models that detect objects that were not trained to distinguish between real and fake objects.

In many cases it does look like Tesla’s Autopilot takes a safest-route approach to these images, which is probably desirable in terms of correctly identifying real-world objects. While it’s alarming to see the Tesla veer towards the wrong side of the road, it seems likely the same system would also detect oncoming traffic and simply apply the brakes, which is what happens in Nassi’s safer theoretical example.

It seems likely that as the AI software underlying these autonomous driving systems improves, these kinds of attacks will become at least less effective, especially with research like this out in the public domain.

At the same time, it’s concerning that current systems can be tricked so simply, so for the time being it’s worth keeping your hands on the wheel.

[Ars Technica]

