Thousands Of Reasons That We Shouldn’t Trust A Neural Network To Analyse Images

When it comes to image recognition tech, it’s still remarkably easy to fool the machines. And while it’s some good comedy when a neural network mistakes a butterfly for a washing machine, the consequences of this idiocy are pretty nightmarish when you think about rolling these flawed systems out into the real world.

Researchers from the University of California, Berkeley, the University of Washington, and the University of Chicago in the U.S. published a paper this month to really drive home the weaknesses of neural networks when it comes to correctly identifying an image. They specifically explored natural adversarial examples, or naturally occurring examples in the wild that fool a machine learning model into misclassifying an object.

The researchers curated 7,500 natural adversarial examples into a database called IMAGENET-A. The images selected for the dataset were pulled from millions of user-labelled animal images on the website iNaturalist, as well as objects tagged by users on Flickr, according to the paper.

The researchers first downloaded images related to a given class in the database, deleted those that a separate machine learning model classified correctly, and then manually selected high-quality images from the remaining batch.

The researchers gave an example of this process in the paper by walking through how it narrowed down images of a dragonfly. They downloaded 81,413 dragonfly images from iNaturalist and filtered those down to 8,925. An “algorithmically suggested shortlist” spit out 1,452 images, and from there they manually selected 80.
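The core of that filtering step is simple to picture: run every candidate image through an ordinary pretrained classifier and keep only the ones it gets wrong, then hand the survivors over for manual review. Here is a minimal sketch of that idea, not the authors’ actual code; the ResNet-50 backbone, the folder layout, and the class index are illustrative assumptions.

```python
# Sketch: keep only images a pretrained classifier misclassifies,
# as candidates for manual review (illustrative, not the paper's code).
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

def misclassified(image_dir: str, true_class_idx: int) -> list[Path]:
    """Return image paths the classifier gets wrong -- the candidate pool."""
    survivors = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            pred = model(img).argmax(dim=1).item()
        if pred != true_class_idx:  # correctly classified images are discarded
            survivors.append(path)
    return survivors

# ImageNet class index 319 is "dragonfly"; the folder name is a placeholder.
shortlist = misclassified("downloads/dragonfly", true_class_idx=319)
print(f"{len(shortlist)} candidates left for manual review")
```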

None of the thousands of images ultimately included in the database involved an intentional, malicious attack, yet classifiers got every one of them wrong for a number of reasons. The neural nets fucked up due to weather, variations in the framing of a photo, an object being partially covered, or a tendency to lean too heavily on texture or colour, among other reasons. The researchers also found that the classifiers can overgeneralise, over-extrapolate, and incorrectly include tangential categories.

That’s why the neural network classified a candle as a jack-o-lantern with 99.94 per cent confidence, even though there were no carved pumpkins in the image. It’s why it classified a dragonfly as a banana, which the researchers guess was because a yellow shovel sat nearby.

It’s also why, when the framing of an alligator swimming was slightly altered, the neural network classified it as a cliff, lynx, and a fox squirrel. And that’s also why the classifier overgeneralised tricycles to bicycles and circles, and digital clocks to keyboards and calculators.
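For readers wondering what a figure like 99.94 per cent “confidence” actually means: in setups like this it is typically just the softmax probability the network assigns to its top-ranked class. A minimal sketch, assuming a pretrained torchvision ResNet-50 and a placeholder image file, neither of which is the paper’s exact setup:

```python
# How a "99.94 per cent confidence" figure typically arises: the softmax
# probability assigned to the top-ranked class. Image path is a placeholder.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights)
model.eval()

img = weights.transforms()(Image.open("candle.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)

confidence, class_idx = probs.max(dim=1)
print(f"{weights.meta['categories'][class_idx.item()]}: {confidence.item():.2%}")
```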

These findings aren’t revelatory, but the robustness of the database gives a helpful sense of the scope of all of the ways in which image recognition systems can fail. As the researchers point out in the study, this is “an important research aim as computer vision systems are deployed in increasingly precarious environments.”

Most notably, these systems are being rolled out in self-driving cars and in increasingly automated warehouses. In fact, earlier this year, researchers simply rotated photos of 3D objects to fool a deep neural network, specifically pointing out how this flaw is disturbingly dangerous when it comes to autonomous vehicles leaning on image recognition tech.
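You can get a feel for the pose problem with a toy experiment of your own; this is only a rough 2D stand-in for the March study’s 3D-rendering setup, and the file name is a placeholder. Rotate an image and watch whether a pretrained classifier’s top prediction changes:

```python
# Toy sketch: rotate an input image and check whether the top prediction
# flips (a crude 2D analogue of adversarial poses, not the study's method).
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

original = Image.open("school_bus.jpg").convert("RGB")
for angle in (0, 45, 90, 135, 180):
    rotated = original.rotate(angle, expand=True)
    with torch.no_grad():
        pred = model(preprocess(rotated).unsqueeze(0)).argmax(dim=1).item()
    print(f"{angle:>3} degrees -> {weights.meta['categories'][pred]}")
```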

“You can imagine the robots in the warehouse or the mobile home robots that look around and try to pick up the stuff or find keys for you,” Anh Nguyen, an assistant professor of computer science at Auburn University and a researcher on the study from March, told Gizmodo in a phone call. “And these objects lying around can be in any pose in any orientation. They can be anywhere. You do not expect them to be in canonical poses and so they will be fooled by the adversarial poses.”

Nguyen also pointed out how slightly adjusting the angle of an object might impact image recognition for TSA at airports and other security checkpoints, or for automated target recognition on battlefields. “There are many applications where this vulnerability will be a bigger problem,” he said, and that covers only one kind of adversarial example. As the researchers behind July’s paper indicated, it’s just a drop in the bucket.

