
Ultrafast camera boosts reaction time for self-driving vehicles and drones

Singapore scientists have developed an ultrafast, high-contrast camera designed to help self-driving cars and drones see better in extreme road conditions and bad weather.

Unlike typical optical cameras, which can be blinded by bright light and struggle to make out details in the dark, the new smart camera from Nanyang Technological University (NTU) in Singapore can record the slightest movements and objects in real time.

The new camera records the changes in light intensity between scenes at nanosecond intervals, much faster than conventional video, and it stores the images in a data format that is many times smaller as well.

An in-built circuit allows the camera to do an instant analysis of the captured scenes, highlighting important objects and details.

Developed by Assistant Professor Chen Shoushun of NTU, the new camera, named Celex, is now in its final prototype phase.

“Our new camera can be a great safety tool for autonomous vehicles, since it can see very far ahead like optical cameras but without the time lag needed to analyse and process the video feed,” said Chen.

“With its continuous tracking feature and instant analysis of a scene, it complements existing optical and laser cameras and can help self-driving vehicles and drones avoid unexpected collisions that usually happen within seconds.”

A typical camera sensor has several million pixels, the sensor sites that record light information and together form the resulting picture.

High-speed video cameras recording up to 120 frames per second generate gigabytes of video data, which a computer must then process for a self-driving vehicle to “see” and analyse its environment.

The more complex the environment, the slower the processing of the video data, leading to lag times between “seeing” the environment and the corresponding actions that the self-driving vehicle has to take.
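For a rough sense of the data volumes involved, the short Python sketch below estimates the raw throughput of a frame-based camera; the 2-megapixel resolution and 8-bit RGB depth are illustrative assumptions, not figures from NTU or the article.

# Rough, illustrative estimate of a frame-based camera's raw data rate.
# The resolution and bit depth below are assumptions chosen for scale,
# not specifications of any particular sensor.
pixels = 2_000_000          # ~2-megapixel sensor (assumed)
bytes_per_pixel = 3         # 8 bits per colour channel, RGB (assumed)
fps = 120                   # high-speed capture rate cited in the article

bytes_per_second = pixels * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e9:.2f} GB of raw video per second")
# ~0.72 GB/s before compression -- all of which the vehicle's computer
# must move and process to "see" a single second of road.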

To enable instant processing of visual data, NTU’s camera records changes in light intensity at individual pixels on its sensor, which reduces the data output. This avoids the need to capture the whole scene as a photograph, increasing the camera’s processing speed.
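The idea can be illustrated with a minimal Python sketch that reports only the pixels whose brightness changes by more than a threshold; this is a generic illustration of change-based (event) sensing, not the actual readout logic of the Celex sensor.

import numpy as np

def change_events(prev_frame, curr_frame, threshold=15):
    """Report only the pixels whose intensity changed noticeably.

    Returns an array of (row, col, delta) entries -- a tiny fraction of
    the data in a full frame when the scene is mostly static.
    """
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return np.stack([rows, cols, diff[rows, cols]], axis=1)

# Illustrative use: two mostly identical 8-bit greyscale frames.
prev = np.full((480, 640), 100, dtype=np.uint8)
curr = prev.copy()
curr[200:210, 300:310] += 50            # a small region brightens
events = change_events(prev, curr)
print(len(events), "events vs", prev.size, "pixels per full frame")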

The camera sensor also has a built-in processor that can analyse the flow of data instantly to differentiate between foreground objects and the background, a process known as optical flow computation. This gives self-driving vehicles more time to react to oncoming vehicles or obstacles.
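As a rough sketch of what optical flow computation does, the Python example below uses OpenCV’s dense Farneback flow to flag pixels that move noticeably between two frames; it is a generic stand-in for the concept, not the on-chip algorithm in the Celex sensor, and the threshold and frame sizes are assumptions.

import cv2
import numpy as np

def foreground_mask(prev_grey, curr_grey, speed_threshold=2.0):
    """Flag pixels that move noticeably between two greyscale frames.

    Dense optical flow gives a (dx, dy) motion vector per pixel; pixels
    whose motion magnitude exceeds the threshold are treated as foreground.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_grey, curr_grey, None,
        0.5, 3, 15, 3, 5, 1.2, 0)             # standard Farneback parameters
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude
    return magnitude > speed_threshold        # True where motion is significant

# Illustrative use: a bright square shifts five pixels to the right.
prev = np.zeros((240, 320), dtype=np.uint8)
prev[100:120, 150:170] = 255
curr = np.zeros((240, 320), dtype=np.uint8)
curr[100:120, 155:175] = 255
mask = foreground_mask(prev, curr)
print(mask.sum(), "pixels flagged as moving foreground")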

The camera has been in development since 2009. Chen expects it to be commercially available by the end of this year, and his team is already in talks with global electronics manufacturers.

E&T recently looked at a number of different technologies required for self-driving vehicles to achieve true autonomy. 
