Deep learning platforms used by autonomous vehicles found to be riddled with errors

[Image: concept art of an artificial neural network. Credit: Dreamstime]

US researchers have developed the first efficient white-box testing framework, which examines the inner workings of deep neural networks. Using this approach, they found that the deep-learning platforms used in autonomous vehicles are riddled with errors.

Deep learning uses multi-layered networks loosely modelled on the human brain, with ‘neurons’ (nodes) connected in an extremely complex network. Each layer uses the output of the previous layer as its input, allowing each successive layer to recognise new, more abstract patterns. This structure allows for the rapid, sophisticated analysis of training data.
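As a rough illustration of this layered structure, the Python sketch below passes an input through two layers, each consuming the previous layer's output. It is a hypothetical toy example (the layer sizes, random weights and ReLU activation are assumptions for illustration, not details of any system described here):

```python
import numpy as np

def relu(x):
    # Non-linearity applied at each layer
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Toy network: 4 inputs -> 8 hidden 'neurons' -> 3 outputs
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

def forward(x):
    h = relu(W1 @ x + b1)   # layer 1 finds patterns in the raw input
    return W2 @ h + b2      # layer 2 finds patterns in layer 1's output

print(forward(rng.standard_normal(4)))
```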

Despite the evident usefulness of these systems, the behaviour of neural networks when processing data remains mysterious, even to the engineers who create them. This is why artificial neural networks are often described as ‘black boxes’.

When errors occur – such as when a Tesla Model S on autopilot failed to spot a white vehicle against a bright sky, causing a fatal crash in 2016 – it is frustratingly difficult to understand why they occurred and how to fix them. Errors such as these, which occur beyond standard operating parameters, are known as ‘corner cases’.

In order to give some insight into how and where these errors occur, researchers from Lehigh University and Columbia University have now presented an automated ‘white box’ technique for looking inside deep-learning systems.

Named DeepXplore, it works by cross-referencing multiple neural networks and identifying inputs that lead to inconsistent results.

“For instance, given an image captured by a self-driving car camera, if two networks think that the car should turn left and the third thinks that the car should turn right, then a corner case is likely in the third deep neural network,” said Professor Junfeng Yang, a computer scientist at Columbia University.
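In code, this cross-referencing amounts to a simple differential test: run the same input through several independently trained models and flag it whenever they disagree. The following Python sketch is a hypothetical illustration of the idea (the model stubs and labels are assumptions), not DeepXplore's actual implementation:

```python
def find_corner_cases(models, inputs):
    """Return inputs on which the models' predictions diverge."""
    corner_cases = []
    for x in inputs:
        predictions = [model(x) for model in models]
        if len(set(predictions)) > 1:   # at least one model disagrees
            corner_cases.append((x, predictions))
    return corner_cases

# e.g. three steering models: two say 'left', one says 'right'
models = [lambda x: 'left', lambda x: 'left', lambda x: 'right']
print(find_corner_cases(models, ['camera_frame_1']))
```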

The team used DeepXplore to investigate datasets collected by autonomous vehicles driving on real roads during the Udacity self-driving car challenge and were able to identify thousands of corner-case errors.

“Our DeepXplore work proposes the first test coverage metric, called ‘neuron coverage’, to empirically understand if a test input set has provided bad versus good coverage of the decision logic and behaviours of a deep neural network,” said Professor Yinzhi Cao of Lehigh University, who led the research.
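Broadly, neuron coverage measures the fraction of a network's neurons that are activated above some threshold by at least one test input. A minimal Python sketch of that idea (the 0.5 threshold and the pre-computed activation matrix are assumptions for illustration):

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons driven above `threshold` by at least one
    test input. `activations` is a (num_inputs, num_neurons) array
    of scaled neuron outputs."""
    activated = (activations > threshold).any(axis=0)
    return activated.mean()

# e.g. 3 test inputs over a 4-neuron layer; neuron 2 never fires
acts = np.array([[0.9, 0.1, 0.0, 0.2],
                 [0.1, 0.7, 0.0, 0.3],
                 [0.2, 0.1, 0.0, 0.6]])
print(neuron_coverage(acts))   # 0.75
```

A low score suggests the test set exercises only a narrow slice of the network's decision logic; DeepXplore searches for inputs that raise this figure.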

According to the researchers, the error-inducing inputs generated by DeepXplore could also be fed back as training data, allowing systems to be retrained autonomously and improving their classification accuracy.
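A hypothetical sketch of that feedback loop (the model's fit method, the label_fn helper and the data format are all assumptions; real frameworks differ):

```python
def retrain_with_corner_cases(model, train_data, corner_cases, label_fn):
    """Augment the training set with error-inducing inputs and retrain.

    `label_fn` supplies the correct output for each corner-case input
    (hypothetical helper); `model.fit` stands in for whatever training
    call the underlying framework provides.
    """
    augmented = train_data + [(x, label_fn(x)) for x in corner_cases]
    model.fit(augmented)
    return model
```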

“Our ultimate goal is to be able to test a system, like self-driving cars, and tell the creators whether it is truly safe and under what conditions,” the researchers said.
