
Optical sensor mimics retina’s response to movement

Image credit: Dreamstime

A new optical sensor has been developed that is capable of mimicking the human eye’s ability to perceive changes in its visual field.

The Oregon State University researchers behind the project believe it is a “major breakthrough” for fields such as image recognition, robotics and artificial intelligence.

Previous attempts to build this kind of human-eye device, known as a retinomorphic sensor, have relied on software or complex hardware.

The new sensor’s operation is part of its fundamental design, using ultrathin layers of perovskite semiconductors that change from strong electrical insulators to strong conductors when placed in light.

“You can think of it as a single pixel doing something that would currently require a microprocessor,” said lead researcher John Labram.

The researchers believe the sensor could be a boon to technologies like self-driving cars, robotics and advanced image recognition.

Unlike traditional computers, which process information sequentially as a series of instructions, neuromorphic computers are designed to emulate the human brain’s massively parallel networks.

A spectacularly complex organ, the eye contains around 100 million photoreceptors.

However, the optic nerve only has 1 million connections to the brain. This means that a significant amount of preprocessing and dynamic compression must take place in the retina before the image can be transmitted.

Labram said that human vision is well adapted to detect moving objects but is comparatively “less interested” in static images and therefore gives priority to signals from photoreceptors detecting a change in light intensity.

In a conventional camera, images are captured by a two-dimensional array of sensors, scanned pixel by pixel at a set frequency. Each sensor generates a signal with an amplitude that varies directly with the intensity of the light it receives, so a static image results in a more or less constant output voltage from each sensor.

By contrast, the retinomorphic sensor stays relatively quiet under static conditions. It registers a short, sharp signal when it senses a change in illumination, then quickly reverts to its baseline state.

“The way we test it is, basically, we leave it in the dark for a second, then we turn the lights on and just leave them on,” he said. “As soon as the light goes on, you get this big voltage spike, then the voltage quickly decays, even though the intensity of the light is constant. And that’s what we want.”
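The article does not include any code, but the behaviour Labram describes can be illustrated with a small simulation. The sketch below is a minimal, hypothetical model: it treats the conventional sensor as producing a voltage proportional to light intensity, and approximates the retinomorphic response as a first-order high-pass filter of that intensity, so a light step produces a spike that decays even while the light stays on. The time constant, units and signal levels are illustrative assumptions, not values from the research.

```python
import numpy as np

# Illustrative light-step experiment: dark for 1 s, then the light is
# switched on and left on, as in the test Labram describes.
dt = 0.001                                  # time step in seconds (assumed)
t = np.arange(0.0, 3.0, dt)                 # 3 s of simulated time
intensity = np.where(t >= 1.0, 1.0, 0.0)    # normalised light intensity

# Conventional sensor: output tracks intensity directly, so once the
# light is on the voltage sits at a roughly constant level.
v_conventional = intensity.copy()

# Retinomorphic sensor, approximated as a first-order high-pass filter:
# a sharp spike at the light step that decays back towards baseline
# even though the illumination stays constant.
tau = 0.2                                   # assumed decay time constant (s)
state = 0.0                                 # low-pass state used to form the high-pass output
v_retino = np.zeros_like(t)
for i in range(1, len(t)):
    state += (intensity[i] - state) * dt / tau
    v_retino[i] = intensity[i] - state      # transient (change) component only

# v_retino peaks just after t = 1 s and decays towards zero, while
# v_conventional stays at 1.0 for as long as the light is on.
```

The first-order filter is only a convenient stand-in: in the real device the transient arises from the photoconductive behaviour of the perovskite layers themselves rather than from any explicit filtering stage.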

“We can convert video to a set of light intensities and then put that into our simulation.

“Regions where a higher-voltage output is predicted from the sensor light up, while the lower-voltage regions remain dark. If the camera is relatively static, you can clearly see all the things that are moving respond strongly. This stays reasonably true to the paradigm of optical sensing in mammals.”
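The simulation pipeline Labram describes, converting video to light intensities and mapping higher predicted voltages to brighter output, is not published here. As a rough, hypothetical stand-in, the sketch below applies a simple per-pixel adaptation model to a stack of frames, so that pixels whose intensity has recently changed give a large response while static regions stay near zero. The function name, decay constant and moving-square test pattern are all invented for illustration.

```python
import numpy as np

def retinomorphic_frames(frames, tau_frames=5.0):
    """Per-pixel change response for a video (hypothetical stand-in).

    frames: array of shape (num_frames, height, width), intensities in [0, 1].
    tau_frames: assumed adaptation time constant, measured in frames.
    Returns an array of the same shape: large where intensity has
    recently changed (moving objects), near zero in static regions.
    """
    baseline = frames[0].astype(float)          # slowly adapting per-pixel baseline
    out = np.zeros_like(frames, dtype=float)
    for k in range(1, len(frames)):
        baseline += (frames[k] - baseline) / tau_frames
        out[k] = np.abs(frames[k] - baseline)   # transient component per pixel
    return out

# Example: a bright square moving across an otherwise static scene.
frames = np.zeros((30, 64, 64))
for k in range(30):
    frames[k, 20:30, k:k + 10] = 1.0
response = retinomorphic_frames(frames)
# response[k] is bright around the moving square and dark elsewhere,
# mirroring the "things that are moving respond strongly" behaviour
# described above.
```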

The researchers believe the technology would be ideal for allowing robots to track moving objects, a capability that could also benefit the self-driving cars currently in development.
