Researchers Trick AI Into Thinking 3D-Printed Turtle Is A Rifle

Researchers have found startlingly simple exploits that trick object-recognition AI. A team at Kyushu University in Japan fooled image classifiers by changing a single pixel, while researchers at MIT pushed the idea into the physical world, getting a 3D-printed turtle classified as a rifle.

Photo: Kyushu University

Typically, object recognition is handled by deep neural networks, which learn statistical patterns from huge sets of labelled images rather than matching pixels against an internal blueprint of an object’s dimensions. The “one pixel attack”, as the Kyushu researchers describe it, exploits this: a search algorithm (the paper uses differential evolution) repeatedly probes the classifier to find the pixel whose alteration most shifts the model’s output, forcing the AI to “see” something else.
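To make the search concrete, here is a minimal sketch of the idea in Python. The toy “classifier” is just a random linear softmax standing in for a real network, and the search is plain random sampling rather than the differential evolution the Kyushu paper uses; every name and parameter here is illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed linear softmax over a
# flattened 32x32 greyscale image (1024 pixels) with two output classes.
# Real attacks target deep networks, but the black-box search logic is
# the same: the attacker only needs the model's output probabilities.
W = rng.normal(size=(1024, 2))

def predict(img):
    """Return class probabilities for a 32x32 image with values in [0, 1]."""
    logits = img.flatten() @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def one_pixel_attack(img, target, iters=2000):
    """Search for the single-pixel change that most raises the probability
    of the target class. (The paper uses differential evolution; plain
    random search keeps this sketch short.)"""
    best, best_p = None, predict(img)[target]
    for _ in range(iters):
        x, y = rng.integers(0, 32, size=2)   # candidate pixel location
        v = rng.random()                     # candidate pixel value
        trial = img.copy()
        trial[x, y] = v
        p = predict(trial)[target]
        if p > best_p:
            best, best_p = (x, y, v), p
    return best, best_p

img = rng.random((32, 32))
source = int(np.argmax(predict(img)))        # class the model sees now
target = 1 - source                          # class we want it to see
change, p_after = one_pixel_attack(img, target)
```

Against this toy model a single pixel usually only nudges the target-class probability; the Kyushu team’s finding is that against real convolutional networks, one pixel is often enough to flip the label outright.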

By altering a single pixel in a 1024-pixel image, the attack succeeded 74 per cent of the time; changing five pixels raised the success rate to 87 per cent. Researchers as far back as July were able to fool software using doctored 2D images, but the MIT turtle, a physical 3D object that is misclassified in real time and from multiple angles, gives the exploit far greater real-world ramifications.

Tricking AI into seeing a gun is particularly troubling, as object recognition is quickly becoming a key element in smart policing. In September, security start-up Knightscope unveiled a new line of “crime fighting robots”, self-driving dune buggies equipped with surveillance gear and object recognition, marketing them as supplemental security for airports and hospitals. What happens when robots report a high-level threat to authorities because of a 3D-printed turtle? Similarly, Motorola and Axon (formerly Taser) have invested in real-time object recognition for their body cameras. If this exploit can trick AI into mistaking something harmless for something dangerous, could it do the opposite, disguising weapons as turtles?

Anish Athalye, a co-author of the MIT paper on the turtle attack, says the problem isn’t as simple as patching a single vulnerability; AI needs to learn to see beyond simply recognising complex patterns:

“It shouldn’t be able to take an image, slightly tweak the pixels, and completely confuse the network,” he told Quartz. “Neural networks blow all previous techniques out of the water in terms of performance, but given the existence of these adversarial examples, it shows we really don’t understand what’s going on.”

But privacy experts may question the rush to accelerate AI-fuelled recognition. We already live in a largely unregulated, perpetual surveillance state: half of all American adults are in a federal face recognition database, and simply unlocking your phone with your face means your likeness could be matched against one. Better “sight” for AI inevitably means stronger surveillance. It’s an uneasy trade-off, but with AI poised to reshape every aspect of modern life, including health, security and transportation, we need to predict and prevent these exploits.

[Quartz via MIT Technology Review]