A smartphone-based system can outperform costly radar sensors at identifying objects on the road

Smartphone-based object recognition paves way for driverless cars

Smartphone-based systems that can locate a user beyond the reach of GPS signals and recognise objects on the road have been developed, paving the way for more cost-effective technologies for driverless cars.

Costing a fraction of the price of currently available sensors, the technologies developed at the University of Cambridge can not only determine a user’s location but also identify the various components of a road scene in real time.

The first of the freely available systems, dubbed SegNet, can take an image of a street scene and classify its contents into 12 categories, such as roads, street signs, buildings, pedestrians and cyclists. Operating in real time, the system proved more reliable in trials than the extremely costly laser- and radar-based sensors currently in use.
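To make the idea concrete, the sketch below builds a toy encoder-decoder in the SegNet mould using PyTorch: the encoder records where each max-pooling maximum came from and the decoder reuses those positions to upsample, so every pixel receives one of 12 class labels. The layer sizes and names here are illustrative assumptions, not the published architecture.

import torch
import torch.nn as nn

NUM_CLASSES = 12  # road, street sign, building, pedestrian, cyclist, etc.

# Minimal SegNet-style sketch: pooling indices flow from encoder to decoder.
class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                                 nn.BatchNorm2d(64), nn.ReLU())
        self.pool = nn.MaxPool2d(2, 2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, 2)
        self.dec = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1),
                                 nn.BatchNorm2d(64), nn.ReLU(),
                                 nn.Conv2d(64, NUM_CLASSES, 3, padding=1))

    def forward(self, x):
        x = self.enc(x)
        x, idx = self.pool(x)    # remember where each maximum was
        x = self.unpool(x, idx)  # upsample using those positions
        return self.dec(x)       # per-pixel class scores

model = TinySegNet().eval()
image = torch.rand(1, 3, 360, 480)  # stand-in for a street-scene photo
with torch.no_grad():
    labels = model(image).argmax(dim=1)  # one class ID per pixel
print(labels.shape)  # torch.Size([1, 360, 480])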

These results could mean a major breakthrough for the driverless car industry, as radar-based sensors frequently cost more than the car itself.

The SegNet application was trained by Cambridge undergraduate students, who manually labelled every pixel in some 5,000 training images.
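In practice, that hand-labelling gives the network a target class for every pixel, so training can use an ordinary per-pixel cross-entropy loss. The following is a minimal, self-contained sketch of one training step with a trivial stand-in network and dummy data; the real model, dataset and hyperparameters are not reproduced here.

import torch
import torch.nn as nn

# Placeholder one-layer 'network' mapping a 3-channel image to 12
# per-pixel class scores; purely illustrative.
model = nn.Conv2d(3, 12, kernel_size=3, padding=1)
criterion = nn.CrossEntropyLoss()  # averaged over every pixel
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

image = torch.rand(8, 3, 360, 480)            # dummy image batch
target = torch.randint(0, 12, (8, 360, 480))  # hand-labelled class per pixel

optimizer.zero_grad()
loss = criterion(model(image), target)
loss.backward()
optimizer.step()
print(float(loss))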

“It’s remarkably good at recognising things in an image, because it’s had so much practice,” said Alex Kendall, a PhD student in the university's Department of Engineering. “However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better.”

The system is not yet at the point where it can be used to control a car or truck, but it could be used as a warning system, similar to the anti-collision technologies currently available on some passenger cars.

“Vision is our most powerful sense and driverless cars will also need to see,” said Professor Roberto Cipolla, who led the research. “But teaching a machine to see is far more difficult than it sounds.”

The second system, used for localisation, employs a similar architecture to SegNet and can determine the user’s location based on a single colour image in a busy urban scene. Far more accurate than GPS, the system also works in tunnels and indoor areas, which are out of reach of the satellite signal.

By analysing the geometry of a scene, the system can determine its precise location and orientation, including whether it is looking at the east or west side of a building, even if the two look almost identical.
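One way to picture this, offered purely as an assumed sketch rather than the Cambridge group’s actual network, is a small convolutional regressor that maps a single colour image directly to a 3D position and an orientation quaternion:

import torch
import torch.nn as nn

# Assumed architecture: a tiny convolutional encoder followed by two
# heads, one regressing position and one regressing orientation.
class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.xyz = nn.Linear(64, 3)   # position in metres
        self.quat = nn.Linear(64, 4)  # orientation as a quaternion

    def forward(self, img):
        h = self.features(img)
        q = self.quat(h)
        # normalise so the output is a valid rotation
        return self.xyz(h), q / q.norm(dim=1, keepdim=True)

model = PoseRegressor().eval()
with torch.no_grad():
    position, orientation = model(torch.rand(1, 3, 224, 224))
print(position, orientation)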

“Work in the field of artificial intelligence and robotics has really taken off in the past few years,” said Kendall. “But what’s cool about our group is that we’ve developed technology that uses deep learning to determine where you are and what’s around you – this is the first time this has been done using deep learning.”

The system has been tested along a kilometre-long stretch of King’s Parade in central Cambridge, successfully determining both location and orientation to within a few metres and a few degrees, a much better result than GPS provides.
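The article does not spell out how that accuracy is measured, but a common convention, assumed in the sketch below, is to report position error as the straight-line distance between the estimated and true positions, and orientation error as the angle between the corresponding rotations:

import math
import torch

pred_pos = torch.tensor([12.0, 3.1, 0.4])  # estimated position (metres)
true_pos = torch.tensor([10.5, 2.8, 0.3])  # ground-truth position (metres)
pred_q = torch.tensor([0.99, 0.0, 0.14, 0.0])  # estimated orientation
true_q = torch.tensor([1.0, 0.0, 0.0, 0.0])    # ground-truth orientation
pred_q = pred_q / pred_q.norm()  # quaternions must be unit length
true_q = true_q / true_q.norm()

pos_err = (pred_pos - true_pos).norm().item()
# angle between two unit quaternions: theta = 2 * arccos(|q1 . q2|)
ang_err = 2 * math.degrees(math.acos(min(1.0, abs(float(pred_q @ true_q)))))
print(f"position error: {pos_err:.2f} m, orientation error: {ang_err:.1f} deg")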

“In the short term, we’re more likely to see this sort of system on a domestic robot – such as a robotic vacuum cleaner, for instance,” said Professor Cipolla. “It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics.”

