Turtle which looks like a rifle to image recognition software

Algorithm tricks Google AI into identifying turtle as a rifle

Image credit: Labsix/MIT

Researchers at Massachusetts Institute of Technology (MIT) have developed a method for tricking neural networks into misidentifying objects and the subjects of photographs from a range of angles.

Artificial neural networks (multi-layered machine learning systems inspired by the structure and activity of real brains) are used for image recognition in a huge range of applications today: researchers use them to categorise visual scientific data, automakers train self-driving cars to recognise hazards, security staff rely on them to spot potentially dangerous behaviour and tech companies use them to sort our photographs by subject.

These networks can in some cases be fooled by an ‘adversarial image’: a picture overlaid with a pattern designed to trick the network, a sort of optical illusion that works on computers. These patterns are sometimes obvious – such as a pair of glasses overlaid on a photograph of a face that tricks facial recognition software into identifying the subject as somebody else – but are often barely perceptible to humans.
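To give a flavour of how such perturbations are produced, the sketch below uses a classic single-step ‘gradient sign’ attack in PyTorch. This is an illustrative example only, not the MIT team’s method; the model choice, step size and input names are assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative only: a one-step gradient-sign perturbation against a generic
# ImageNet classifier. 'image' is assumed to be a normalised 1x3x224x224 tensor
# and 'true_label' its correct ImageNet class index.
model = models.resnet50(pretrained=True).eval()

def adversarial_perturb(image, true_label, epsilon=0.01):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss,
    # pushing the prediction away from the correct label.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.detach()
```

The resulting change of a fraction of a per cent per pixel is typically invisible to a human viewer, yet can be enough to flip the classifier’s output.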

Cat that looks like a bowl of guacamole


Image credit: Labsix/MIT

Adversarial images are not considered a serious threat in the real world, as changing the angle of the image, zooming, altering the colour balance and other simple transformations can result in the subject of the image being correctly detected once again. For instance, the MIT team demonstrated that an adversarial image of a tabby cat could be misidentified as a bowl of guacamole, but only at a certain angle.

The students set out to develop an adversarial image that would trick image recognition software every time. They wrote an algorithm, Expectation Over Transformation (EOT), capable of fooling these neural networks across a range of transformations. EOT works at almost any angle and can be applied to 3D-printed objects as well as photographs.
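The Labsix paper gives the full details of EOT; as a rough illustration of the idea, the PyTorch sketch below optimises a perturbation against the average loss over randomly transformed copies of an image, so the attack survives changes of viewpoint rather than working at a single angle. The model, the transformation distribution (simple rotations) and all hyperparameters here are assumptions for illustration, not the team’s actual code.

```python
import random
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# Sketch of the Expectation Over Transformation idea: optimise a perturbation so
# that the *expected* prediction over random transformations is the target class.
model = models.inception_v3(pretrained=True).eval()

def eot_attack(image, target_class, steps=200, lr=0.01, epsilon=0.05, samples=10):
    delta = torch.zeros_like(image, requires_grad=True)   # adversarial perturbation
    optimiser = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples):
            angle = random.uniform(-30.0, 30.0)            # sample a random transformation
            transformed = TF.rotate(image + delta, angle)
            logits = model(transformed)
            # Push the prediction for the transformed image towards the target class.
            loss = loss + F.cross_entropy(logits, target)
        optimiser.zero_grad()
        (loss / samples).backward()                        # average over sampled transformations
        optimiser.step()
        delta.data.clamp_(-epsilon, epsilon)               # keep the change small
    return (image + delta).detach()
```

The real system averages over a much richer set of transformations (rotation, zoom, lighting, camera noise and, for the 3D-printed objects, rendering from different viewpoints), which is what makes the printed turtle fool the classifier from almost any angle.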

The team demonstrated the efficacy of EOT by fooling Google’s open-source image recognition software, Inception v3, into labelling a 3D-printed turtle as a rifle and a 3D-printed baseball as a cup of coffee, as well as misidentifying several 2D images.

“In concrete terms, this means it’s likely possible that one could construct a yard sale sign which to human drivers appears ordinary, but might appear to a self-driving car as a pedestrian which suddenly appears next to the street,” the group of students wrote.

“Adversarial examples are a practical concern that people must consider as neural networks become increasingly prevalent (and dangerous).”

The MIT students’ demonstration that adversarial images can be made to work in ‘real-world’ situations could be of concern, as it raises the possibility of misleading road signs being used to confuse self-driving cars, or of weapons being concealed from image recognition systems designed to detect them.

The students’ paper can be read here.
