Cheap 3D camera developed without a lens
An easy-to-build camera that produces 3D images from a single 2D image without using a lens has been developed by researchers.
In an initial application of the technology, the researchers plan to use the new camera, which they call DiffuserCam, to watch microscopic neuron activity in living mice without a microscope.
Ultimately, it could prove useful for a wide range of applications involving 3D capture.
The camera is compact and inexpensive to construct because it consists of only a diffuser - essentially a bumpy piece of plastic - placed on top of an image sensor. Although the hardware is simple, the software used to reconstruct high-resolution 3D images from the sensor data is complex.
“The DiffuserCam can, in a single shot, capture 3D information in a large volume with high resolution,” said research team leader Laura Waller of the University of California, Berkeley.
“We think the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects.”
The researchers show that the DiffuserCam can reconstruct 100 million voxels, or 3D pixels, from a single 1.3-megapixel image without any scanning.
The researchers used the camera to capture the 3D structure of leaves from a small plant.
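To make that claim concrete, the sketch below shows the kind of sparsity-based (compressed-sensing) reconstruction it implies: each depth plane of the scene is blurred by that depth's caustic pattern (a point spread function measured during calibration), the sensor records the sum, and an iterative solver with a sparsity penalty inverts the model to recover far more voxels than there are sensor pixels. This is a minimal NumPy illustration, not the team's published solver; the function names, the simple ISTA loop, the step sizes and the circular FFT-based convolution (which ignores the sensor cropping a real system must handle) are all simplifying assumptions.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def forward(volume, psfs):
    """Simulate the sensor image: sum over depth planes of the
    scene slice convolved with that depth's caustic PSF."""
    return sum(
        np.real(ifft2(fft2(volume[z]) * fft2(psfs[z])))
        for z in range(volume.shape[0])
    )

def adjoint(image, psfs):
    """Adjoint of the forward model: correlate the sensor image
    with each depth's PSF to spread it back into the volume."""
    return np.stack([
        np.real(ifft2(fft2(image) * np.conj(fft2(psfs[z]))))
        for z in range(psfs.shape[0])
    ])

def ista_reconstruct(measurement, psfs, n_iters=200, step=1e-3, tau=1e-4):
    """Sparsity-regularised reconstruction via ISTA:
    minimise 0.5 * ||forward(v) - b||^2 + tau * ||v||_1."""
    volume = np.zeros_like(psfs)
    for _ in range(n_iters):
        residual = forward(volume, psfs) - measurement
        volume -= step * adjoint(residual, psfs)
        # Soft-threshold: shrink small values to zero - the sparsity
        # prior that makes the under-determined problem solvable.
        volume = np.sign(volume) * np.maximum(np.abs(volume) - step * tau, 0.0)
    return volume
```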
“Our new camera is a great example of what can be accomplished with computational imaging - an approach that examines how hardware and software can be used together to design imaging systems,” said Waller.
“We made a concerted effort to keep the hardware extremely simple and inexpensive. Although the software is very complicated, it can also be easily replicated or distributed, allowing others to create this type of camera at home.”
A DiffuserCam can be created using any type of image sensor and can image objects that range from microscopic in scale all the way up to the size of a person. It offers a resolution in the tens of microns range when imaging objects close to the sensor.
Although the resolution decreases when imaging a scene farther away from the sensor, it is still high enough to distinguish that one person is standing several feet closer to the camera than another person, for example.
The DiffuserCam is a relative of the light-field camera, which captures how much light is striking a pixel on the image sensor as well as the angle from which the light hits that pixel.
In a typical light-field camera, an array of tiny lenses placed in front of the sensor is used to capture the direction of the incoming light, allowing computational approaches to refocus the image and create 3D images without the scanning steps typically required to obtain 3D information.
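For illustration, the refocusing step in a conventional microlens-based light-field camera can be sketched with the classic shift-and-add method: each sub-aperture view is translated in proportion to its position in the aperture and the views are averaged, so that points at the chosen depth align and come into focus. This is a hedged sketch of the conventional technique the paragraph above describes, not the DiffuserCam pipeline; the 4D array layout and the alpha parameter are assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(light_field, alpha):
    """Synthetic refocusing of a 4D light field L[u, v, y, x]
    by shift-and-add; alpha selects the synthetic focal plane."""
    n_u, n_v, height, width = light_field.shape
    refocused = np.zeros((height, width))
    for u in range(n_u):
        for v in range(n_v):
            # Shift each view in proportion to its aperture position;
            # points at the target depth line up across views.
            dy = alpha * (u - (n_u - 1) / 2)
            dx = alpha * (v - (n_v - 1) / 2)
            refocused += nd_shift(light_field[u, v], (dy, dx), order=1)
    return refocused / (n_u * n_v)
```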
Until now, light-field cameras have been limited in spatial resolution because some spatial information is lost while collecting the directional information. Another drawback of these cameras is that the microlens arrays are expensive and must be customised for a particular camera or optical components used for imaging.
“I wanted to see if we could achieve the same imaging capabilities using simple and cheap hardware,” said Waller. “If we have better algorithms, could the carefully designed, expensive microlens arrays be replaced with a randomly patterned surface, such as a bumpy piece of plastic?”
After experimenting with various types of diffusers and developing the reconstruction algorithms, Nick Antipa and Grace Kuo, students in Waller’s lab, showed that Waller’s idea for a simple light-field camera was feasible.
In fact, the random bumps in privacy glass stickers, Scotch tape or plastic conference badge holders allowed the researchers to improve on traditional light-field camera capabilities, using compressed sensing to avoid the loss of resolution that typically comes with microlens arrays.
Unlike the precisely designed and aligned lens arrays in other light-field cameras, the exact size and shape of the bumps in the new camera’s diffuser are unknown.
This means that a few images of a moving point of light must be acquired to calibrate the software prior to imaging. The researchers are working on a way to eliminate this calibration step by using the raw data for calibration. They also want to improve the accuracy of the software and make the 3D reconstruction faster.
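That calibration amounts to recording the sensor’s response to a point source at each depth of interest and normalising those frames into the stack of point spread functions the reconstruction relies on. A minimal sketch, with hypothetical inputs (a list of captured frames and a dark frame for offset subtraction), might look like this; the article does not describe the team’s actual calibration pipeline.

```python
import numpy as np

def build_psf_stack(calibration_frames, dark_frame):
    """Turn raw shots of a point light source, captured at a series of
    known depths, into a normalised PSF stack for reconstruction.
    Hypothetical helper; inputs and processing are assumptions."""
    psfs = []
    for frame in calibration_frames:
        psf = frame.astype(np.float64) - dark_frame  # remove sensor offset
        psf = np.clip(psf, 0.0, None)                # discard negative noise
        psfs.append(psf / psf.sum())                 # each PSF sums to one
    return np.stack(psfs)
```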