Laser technique could allow autonomous cars to see around corners

Image: Cars on a road in Phuket (credit: Dreamstime)

Driverless cars could gain the superhuman ability to spot hazards around corners, thanks to a laser-based technique in development at Stanford University’s Computational Imaging Lab.

Autonomous cars use a range of techniques to sense their surroundings, including Lidar. Lidar (originally a portmanteau of 'light' and 'radar') works by firing pulses of laser light at surrounding objects and detecting the reflections that return to the instrument; the round-trip time of each pulse gives the distance to the object it bounced off. Using many bursts of light, a Lidar system can build up a picture of its surroundings.
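
As a rough illustration of that time-of-flight principle, the short Python sketch below converts a pulse's round-trip time into a distance. The function and variable names are purely illustrative and are not drawn from any particular Lidar system.

```python
# A minimal sketch of the time-of-flight principle behind Lidar.
# Names are illustrative and not tied to any particular Lidar product.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_return_time(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, given a pulse's round-trip time."""
    # The pulse travels to the object and back, so halve the total path.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a reflection arriving ~200 nanoseconds after the pulse was fired
print(distance_from_return_time(200e-9))  # roughly 30 metres
```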

A new laser-based detection technique could allow the field of view of these vehicles to stretch even further: around corners.

While some techniques already exist for directing light around corners, the Stanford team has developed an efficient algorithm that allows images of hazards – such as pedestrians, animals or cones – to be reconstructed even while they remain hidden from view. According to the researchers, a major challenge in viewing hidden objects has been efficiently reconstructing the 3D structure of the object from a noisy data set.

“It sounds like magic, but the idea of non-line-of-sight imaging is actually feasible,” said Professor Gordon Wetzstein, an electrical engineer at Stanford and senior author of the Nature report detailing the technique.

The research team placed a laser next to a sensitive light detector. Pulses of laser light were fired at a wall, bounced off objects hidden around the corner, then returned to the wall and back to the detector.
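
To give a feel for the geometry involved, the sketch below computes the total flight time of a photon along this three-bounce path. The positions and function names are assumptions chosen for illustration; this is not the Stanford team's code.

```python
# Hypothetical geometry for the three-bounce path described above:
# laser -> spot on the visible wall -> hidden object -> wall spot -> detector.
# Positions and names are assumptions for illustration only.

import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def photon_travel_time(laser, wall_spot, hidden_point, detector):
    """Total flight time of light along the indirect, around-the-corner path."""
    path_length = (math.dist(laser, wall_spot)           # laser to the wall
                   + math.dist(wall_spot, hidden_point)  # wall to the hidden object
                   + math.dist(hidden_point, wall_spot)  # back to the wall
                   + math.dist(wall_spot, detector))     # wall to the detector
    return path_length / SPEED_OF_LIGHT

# Example: laser and detector 2 m from the wall, object hidden round the corner
laser = detector = (0.0, 0.0, 2.0)
wall_spot = (0.0, 0.0, 0.0)
hidden_point = (1.0, 0.5, 1.0)
print(photon_travel_time(laser, wall_spot, hidden_point, detector))  # about 23 nanoseconds
```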

Using current technology, scanning an object with this laser-based technique takes anything from two minutes to an hour, far too slow to be useful on a busy, real-life road. However, the Stanford researchers have developed an algorithm that creates a sharp image from the scan in less than a second, and it is efficient enough to run on an ordinary laptop.
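
The team's algorithm itself is not detailed in the article, but a much simpler (and much slower) baseline conveys the reconstruction idea: every photon arrival time constrains the hidden reflector to lie on a shell around the illuminated wall spot, and summing that evidence over many spots and time bins sketches out a rough 3D volume. The backprojection snippet below is an illustration under assumed data shapes, not the published method.

```python
# Backprojection: a simple, slow baseline for non-line-of-sight reconstruction.
# Array shapes and names are assumptions; this is not the Stanford algorithm.

import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def backproject(wall_spots, histograms, bin_width_s, voxel_grid):
    """Accumulate time-resolved photon counts into a volume of hidden-scene voxels.

    wall_spots: (S, 3) points on the visible wall scanned by the laser
    histograms: (S, T) photon counts per arrival-time bin for each wall spot
    bin_width_s: duration of one time bin, in seconds
    voxel_grid: (V, 3) candidate points in the hidden scene
    """
    volume = np.zeros(len(voxel_grid))
    num_bins = histograms.shape[1]
    for spot, hist in zip(wall_spots, histograms):
        # Round-trip distance from this wall spot to every voxel and back
        # (the laser-to-wall and wall-to-detector legs are omitted for brevity).
        round_trip = 2.0 * np.linalg.norm(voxel_grid - spot, axis=1)
        bins = np.round(round_trip / (SPEED_OF_LIGHT * bin_width_s)).astype(int)
        in_range = bins < num_bins
        volume[in_range] += hist[bins[in_range]]
    return volume
```

Backprojection of this kind scales poorly with scene size, which is why a reconstruction fast enough to run on a laptop in under a second matters for real-time use.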

The researchers believe it should be possible to speed the process up until it is effectively instantaneous. First, however, the algorithm must be adapted to handle the variation of less controlled environments, for instance the ambient light of real-world conditions and objects in motion, such as cyclists.

“This is a big step forward for our field that will hopefully benefit all of us,” said Wetzstein. “In the future, we want to make it even more practical in the ‘wild’.”

So far, the researchers have found that their system works well outside – it is particularly adept at detecting road signs, for example – but it struggles in direct light.

“We believe the computation algorithm is already ready for Lidar systems,” said Dr Matthew O’Toole, co-lead author of the paper. “The key question is if the current hardware of Lidar systems supports this type of imaging.”
