Artificial intelligence: the new ghost in the machine

Image credit: Dreamstime

Technologists tout ‘artificial intelligence’ as a key component of our future lives, but as it stands we don’t fully understand how these systems work. Should we be trusting them with our autonomous vehicles and automated factories?

Throughout his career, robotics and AI researcher Professor Tetsuya Ogata of Waseda University in Japan has had to deal with a conflict in approaches. Despite seeming inextricably linked, the two disciplines have proceeded along separate lines. “There is a big barrier between AI and robotics research. Robots are mechanical systems. The engineer wants to model the devices physically and deterministically,” Ogata explained at an AI seminar at the VLSI Technology Symposium in Honolulu in June. “In AI, they use statistical, probabilistic models. The mathematics are different.”

It’s a dualism that echoes 20th-century philosopher Gilbert Ryle’s term for René Descartes’ idea that mind and body have separate existences: the ghost in the machine. Ryle coined the phrase to highlight what he saw as the absurdity of Descartes’ position – that the mind exists apart from physical reality.

Yet the dualism seems inevitable. When a robot moves its arm, it needs to be sure that the motor movements it makes, microsecond by microsecond, will bring the limb to the correct point. It can take measurements of the forces involved and use them to predict accurately where the limb will end up. Why it moves its arm is another matter. Early robots were deterministic, following preprogrammed paths. That makes them inflexible and dangerous for humans to be around: they need to be contained in safety cages for the simple reason that, unless explicitly told not to, they will crunch through anything that gets in the way of those paths.

A new generation of robots uses ideas similar to those pursued for decades by Ogata, whose university built the piano-playing android Wabot-2. Increasingly, he has used deep-learning techniques to control robots, which, as a result, behave far less deterministically. But because they can react to events in the outside world in a more flexible manner, they are potentially safer – just as long as they understand what they are seeing and hearing.

Seemingly anomalous behaviour in these and other computer systems can produce baffling, sometimes troubling results – and the causes continue to elude their developers. The systems seem to acquire a mind of their own. It is an extremely simple mind, but one that currently defies deterministic explanation, though researchers think they are getting closer to uncovering why machine-learning systems get confused.

Professor Naresh Shanbhag of the University of Illinois at Urbana-Champaign says: “There is a great fascination with deep learning, but it’s often overkill in AI applications. And it’s fragile: if you perturb its inputs, it can be fooled easily. There are also problems with the interpretability of the results. There are so many unknowns and yet we are going forward full-force. We never push back and ask ‘do you really need this sort of network?’.”

A curious feature of the deep neural network (DNN) is its apparent ability to hallucinate, sometimes in ways that resemble the confusion the human brain experiences when presented with an ambiguous image or sound. Even after extensive training, DNNs can make mistakes that seem incomprehensible to us.

Making a lot of tiny changes to an image can make a DNN change its output radically. To a human, the altered image looks a little noisier; sometimes the changes are practically invisible. Yet five years ago, Google researcher Christian Szegedy and colleagues showed that such changes can convince a trained DNN that a yellow bus is a brown ostrich. Attempts to fool the classifiers soon went further. In early 2015, Anh Nguyen and Jeff Clune of the University of Wyoming worked with Cornell University’s Jason Yosinski to find out what happens when DNNs are presented with images that look to humans like noise or psychedelic patterns. It turned out the networks will confidently classify them as recognisable objects rather than “unknown”.
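The perturbations are not random noise; they are computed from the trained network itself. As a rough illustration, here is a minimal sketch of the fast gradient sign method – a later, simpler recipe than the optimisation Szegedy’s team used – assuming a PyTorch image classifier and pixel values scaled to the range 0 to 1:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel slightly in whichever direction most increases the
    classifier's loss - often enough to flip the prediction while remaining
    almost invisible to a human viewer."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel moves by at most epsilon, in the worst direction for the model.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: `model` is any trained classifier, `image` a (1, 3, H, W)
# tensor, `label` a tensor holding the true class index, e.g. torch.tensor([3]).
# The prediction for fgsm_perturb(model, image, label) may no longer match `label`.
```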

One reassuring aspect of the early adversarial images is that the manipulations are difficult to pull off in the real world. The types of high-frequency noise added to images to force a misclassification would be filtered out by most cameras with no additional effort. And random noise has no effect on the results other than possibly reducing the network’s ability to make a confident prediction. But as work on adversarial images continued, computer scientists found it is possible to alter real-world objects to fool DNNs.

With concerns growing over the safety of self-driving vehicles, the most startling examples were put together late last year by University of Michigan PhD student Kevin Eykholt and colleagues. Road-sign recognition is a skill at which DNNs have been shown to excel. One test from 2011 showed how a network could correctly classify signs that were almost entirely bleached out and which, naturally, puzzled human test subjects. The Michigan team, however, discovered that brightly coloured stickers could change a sign’s meaning to the DNN. In one demonstration, using video shot from a moving car, the network decided that a stop sign – its one-word instruction clearly visible to any human – said ‘speed limit 45mph’. A later experiment rendered the stop sign practically invisible to the network.

Other mistakes made by DNNs seem more understandable and mirror those made by humans. One way to even the score between people and computers is to restrict the amount of time subjects get to look at an image. Neuroscientists have found that if you flash up an image for less than a tenth of a second, the brain does not get to use all the tools it possesses, such as the feedback networks that provide context, to work out what is going on. Earlier this year, Gamaleldin Elsayed and colleagues at Google Brain and Stanford University used this approach to see whether humans would make similar mistakes when presented with subtly altered images. In some situations they did, though the errors were never as drastic as mistaking a bus for a bird. Instead, subjects might tag a picture of a cat as a dog.

Although there are parallels with human cognition, no-one expects DNNs to work in the same way as the human brain. Designing an effective DNN takes some forethought: you cannot simply slap together a formless mass of simulated neurons and expect to train them to do something useful. The brain, in contrast, does self-organise, albeit after a great deal of external stimulation.

The DNN needs to be formed into layers that are tuned for the target application. Each type of layer has a reasonably specific function. Convolutional layers behave like the filters that blur or sharpen images in photo-editing software. Pooling layers compress the data from large numbers of neurons using a kind of voting system: if enough neurons vote the same way on the input to a pooling layer, its output turns positive. Then there are the fully connected layers, which hark back to the earliest work on artificial neural networks, in which each neuron in a layer takes inputs from every neuron in the preceding one.
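As a rough illustration (not a network from any of the studies mentioned here), the three layer types might be stacked like this in PyTorch, for a 32×32-pixel colour image and ten output classes:

```python
import torch.nn as nn

# Illustrative layer stack: convolution -> pooling -> fully connected.
# Channel counts, kernel sizes and the 32x32 input are arbitrary choices.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional: learned image filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling: keep only the strongest responses
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # fully connected: one score per class
)
```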

Despite the head start a DNN gets from its structure, perhaps its most surprising aspect is that it works as well as it does. The model used for each neuron is extremely simple, nowhere near as complex as the mathematical models developed by biologists – even the one created by Eugene Izhikevich, which boils the behaviour down to a pair of differential equations yet reproduces many types of neuronal firing, from regular spiking to bursting and chattering. But put lots of simple neurons together and complex patterns of behaviour emerge.
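For comparison, the whole of a typical artificial ‘neuron’ is a weighted sum passed through a simple non-linearity. A sketch in plain Python, with made-up weights purely for illustration:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """The entire model: multiply, add, then clip negatives to zero (a ReLU)."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

# Three inputs and arbitrary example weights - no differential equations required.
print(artificial_neuron(np.array([0.2, 0.7, 0.1]),
                        np.array([1.5, -0.4, 2.0]),
                        bias=0.05))
```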

Computer scientists struggle to explain why this happens, though they believe they are edging closer to an answer. In the mid-2000s, AI researchers got what may, in hindsight, turn out to be a lucky break: the standard training recipe worked far better than anyone had a clear right to expect. Backpropagation is the way most DNNs are trained, although deep-learning pioneer Geoffrey Hinton has expressed reservations about the technique and some researchers, such as Nguyen and Clune, favour an evolutionary algorithm instead. Either way, training works over a series of steps and attempts to fit each neuron in the stack with a set of coefficients that map the many, many inputs onto the expected outputs. The not-so-secret sauce of deep-learning backpropagation is stochastic gradient descent (SGD), which works so well that some computer scientists have found the safety wheels intended to keep training on track are not all that necessary.
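A toy sketch of that recipe, using random stand-in data rather than real images: backpropagation assigns a gradient to every coefficient, and SGD nudges each one a little, batch after randomly chosen batch. The network, data and settings here are placeholders, not anything used in the work described above.

```python
import torch
import torch.nn as nn

# Stand-ins: a tiny network and random 'images' with random class labels.
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

optimiser = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    batch = torch.randint(0, 64, (16,))                # a random mini-batch: the 'stochastic' part
    optimiser.zero_grad()
    loss = loss_fn(net(images[batch]), labels[batch])  # how wrong is the network right now?
    loss.backward()                                    # backpropagation: a gradient for every coefficient
    optimiser.step()                                   # gradient descent: adjust each coefficient slightly
```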

Overtraining used to be a concern. The worry was that networks can store so much information in the millions of coefficients they contain that they would simply memorise most of the training images. They would then recognise those images 100 per cent of the time but be useless with anything else. Instead, neural networks generalise extraordinarily well, just like us.
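The usual way to check whether a network is memorising rather than generalising is to hold back some labelled examples it never trains on and compare scores. A minimal sketch, again with random stand-in data:

```python
import torch
import torch.nn as nn

# Stand-ins: an untrained network and random data, purely to show the check itself.
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
train_x, train_y = torch.rand(500, 3, 32, 32), torch.randint(0, 10, (500,))
held_out_x, held_out_y = torch.rand(100, 3, 32, 32), torch.randint(0, 10, (100,))

def accuracy(x, y):
    with torch.no_grad():
        return (net(x).argmax(dim=1) == y).float().mean().item()

# After training on train_x only: a memorising network scores near-perfectly on
# train_x but close to chance on held_out_x; a generalising one scores similarly on both.
print("training set:", accuracy(train_x, train_y))
print("held-out set:", accuracy(held_out_x, held_out_y))
```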

However, where we and DNNs part company is what gets learned. Researchers such as Nguyen have been trying to reverse-engineer trained networks to see what they think they see. The results are intriguing and veer into the disturbing when DNNs are asked to ‘dream’, as Google researcher Alexander Mordvintsev found with the DeepDream project.

As with Nguyen’s earlier work, the approach uses a second neural network to recreate images from the output of one trained on real images. While it runs, subtle changes to what the neurons perceive slowly push the reconstruction away from reality, generating new patterns – and things appear where they should not. Ordinary images of the world acquire a psychedelically coloured, David Cronenberg-style body-horror makeover. Eyes ‘grow’ in the coat of a cat until the faces of dogs seem to form in the swirls overlaid on the fur. Fed with random noise, the network conjures artificial landscapes sprouting pagodas and arches.
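DeepDream is often explained in a simpler, single-network form: take a trained classifier and repeatedly nudge the input image so that one layer’s activations grow stronger. A minimal sketch of that formulation, assuming a recent torchvision with pretrained VGG16 weights available; the layer index, step size and step count are arbitrary:

```python
import torch
from torchvision import models

# Feature-extraction layers of a pretrained classifier (assumed available locally).
features = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def dream(image, layer_index=20, steps=50, lr=0.05):
    """Gradient ascent on the input: amplify whatever the chosen layer responds to."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        x = image
        for i, layer in enumerate(features):
            x = layer(x)
            if i == layer_index:
                break
        x.norm().backward()                     # how strongly is this layer firing?
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# Start from random noise (or a photo) shaped (1, 3, 224, 224); after enough
# steps, textures the layer 'recognises' begin to surface in the image.
dreamed = dream(torch.rand(1, 3, 224, 224))
```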

The swirls themselves are clues to what is going on. Each layer in a DNN tends to specialise in specific features. Low-level layers look for edges of a particular orientation, while high-level layers favour larger image chunks, versions of which appear in many images in the training set. The networks do not carve up these chunks in the way we might expect. Closer inspection of those layers revealed images of dumbbells attached to hands and arms: the network perceived them as indivisible units because it had not seen enough images in which the two were separate, and so it stored versions of these and other chimeras.

It’s possible that we do, in fact, see the world in a similar way to DNNs. But as we cannot visualise the process the way we can with a computer model, there is no way to be sure (see Hilary Lamb’s ‘Neuroimaging’ feature).

However, humans may have the advantage of being able to acquire higher-level processing that can separate dumbbells from hands and, as a consequence, reason about the state of the world around us rather than treating everything we remember as a set of templates. It is these feedback networks that were effectively disabled in work like Elsayed’s. Perhaps as a result, the seemingly LSD-inspired images of DeepDream do not look all that strange when set alongside common reports of hallucinations, or claims of faces seen in burnt toast or clouds. Using comparisons with fMRI images captured from human subjects while awake and asleep, Tomoyasu Horikawa and Yukiyasu Kamitani of Kyoto University claimed they could detect parallels with the high-level features extracted by DNNs.

Although researchers such as David Evans of the University of Virginia see a full explanation as being some way off, the massive number of parameters encoded by DNNs, together with SGD’s knack for avoiding overtraining, may hold the answer to why the networks can hallucinate – seeing things that are not there and ignoring things that are. One way to look at the processing a DNN performs is that it crunches a single value out of an input with a huge number of degrees of freedom: every pixel contributes its own intensity and colour values, and a high-resolution photograph contains millions of pixels.
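To put rough numbers on that (the image sizes here are arbitrary), every one of those values is an axis of the space the network has to carve up:

```python
import numpy as np

# How many numbers describe a single input image?
thumbnail = np.zeros((224, 224, 3))   # a small, classifier-sized colour image
photo = np.zeros((4000, 3000, 3))     # a typical 12-megapixel photograph

print(thumbnail.size)   # 150528   - already hundreds of thousands of dimensions
print(photo.size)       # 36000000 - tens of millions for a full-resolution photo
```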

The mathematics of high-dimensional spaces are difficult to comprehend. Nguyen’s work pointed to this in 2015. It’s easy to mentally picture the collection of neurons that activate when they see a cat as being distinct from those that are trained to activate on a tractor. But, without overtraining, the neural network may well incorporate inputs from a lot of neurons that were not heavily involved in cat recognition during training. As a result, the class of ‘cat’ to the neural network is much, much bigger than the set of features a human might pull together to make that decision.

Misunderstanding the mathematics of high-dimensional spaces may have led users to place false confidence in the ability of DNNs to make good decisions. Evans says: “It is very hard for humans to visualise high-dimensional spaces. Getting to four dimensions is very hard. Hundreds of thousands really doesn’t fit our intuition of the spaces that we are living in.”

He points to work by PhD student Mainuddin Jonas that shows how adversarial examples can push the output away from what we would see as the correct answer. “It could be just one layer [that makes the mistake]. But from our experience it seems more gradual. It seems many of the layers are being exploited, each one just a little bit. The biggest differences may not be apparent until the very last layer.”

The question is how best to deal with the problems that these soft boundaries between classifications present when DNNs are used in the field. One argument is that adversarial examples form a class of inputs that smarter training or architecture may exclude. On the other hand, failing to see what is present, or seeing things that are not really there, may simply be part of the learning experience: it is only by having multiple systems contribute to a decision that intelligent systems overcome attempts to deceive them.

In either case, we cannot afford to have self-driving cars convince themselves that a stop sign is not really there. Researchers such as Evans predict a lengthy arms race of attacks and countermeasures – one that may, along the way, reveal a lot more about the nature of machine learning and its relationship with reality.
