Painting with light

Digital photography has come a long way in a short time. But, as E&T finds out, there's plenty more to come.

The recent rise of digital imaging has brought photography back to the masses, and given amateurs access to picture-making tools that would have been beyond their grasp five or ten years ago. The latest Nikon professional camera packs a 21-million pixel sensor and has been hailed by some as the greatest camera yet - although Canon users might disagree. But there's still plenty of work to be done.

Most digital cameras are still some way from being able to match the image quality achievable with film. The images they produce can suffer from digital noise, the modern and generally less visually appealing equivalent of film grain. And they lack the dynamic range and sensitivity that a skilled photographer can extract from a combination of film and the printing process.

Digital cameras can have problems accurately representing colours and can create fringes around dark areas when they are placed against bright backgrounds. Too much light in one area can saturate nearby pixels - causing a bloom effect you don't see on film.

The problem is that marketeers have taught customers that more pixels are better, and that it won't cost them any more to have them. With die sizes fixed by target market prices, more pixels means smaller pixels, with less photon-gathering capability and therefore lower signal-to-noise ratios.
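As a rough illustration of that trade-off, the sketch below assumes a shot-noise-limited pixel with an arbitrary illumination level and quantum efficiency; the specific numbers are made up, and the point is only the scaling, in which the signal grows with pixel area while the noise grows with its square root.

```python
import math

def shot_noise_snr(pixel_pitch_um, photons_per_um2=1000, quantum_efficiency=0.5):
    """Illustrative shot-noise-limited SNR for a square pixel.

    Assumes photon arrival follows Poisson statistics, so the noise equals the
    square root of the collected signal. Illumination level and QE are
    arbitrary round numbers, chosen only to show how SNR scales with pitch."""
    signal = pixel_pitch_um ** 2 * photons_per_um2 * quantum_efficiency
    return math.sqrt(signal)  # SNR = signal / sqrt(signal) = sqrt(signal)

for pitch in (6.0, 2.2, 1.4):  # a large DSLR pixel vs. typical compact and phone pixels
    print(f"{pitch:4.1f} um pixel -> SNR ~ {shot_noise_snr(pitch):5.1f}")
```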

Pixel affordability

Guy Meynants is founder and CEO of CMOS imager design house CMOSIS. He also designed the 14Mpixel sensor used in the last of Kodak's range of DSLRs, which are still sought after on the second-hand market for the quality of the skin tones his sensor produces.

In the drive to create affordable 10Mpixel sensors - which means packing a lot of pixels into a space that allows hundreds or thousands of sensors to be cut from a single silicon wafer - a lot has been given up, says Meynants. "What matters most for image quality is pixel size: the modern compact SLRs and cellphone cameras have such small pixels that the sensitivity and dynamic range is much reduced. The sensors are also too small for nice portraits."

Professional portrait photographers like to use a shallow depth of field for their shots because it leaves only the face in focus while the background becomes a pleasing blur. But as the sensor shrinks, the depth of field increases, pushing more of the image into clear, yet less pleasing, focus. It does not help that amateur photographers tend to snap their subjects in front of walls.
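A back-of-the-envelope calculation shows the effect. The sketch below uses the standard thin-lens approximation for depth of field; the focal lengths, f-number and circle-of-confusion figures are assumed values chosen so that the same head-and-shoulders shot is framed on a full-frame sensor and on a small compact sensor with roughly a 5x crop.

```python
def depth_of_field_m(focal_mm, f_number, subject_m, coc_mm):
    """Approximate total depth of field (thin lens, subject well inside the
    hyperfocal distance): DOF ~ 2 * N * c * s^2 / f^2."""
    s_mm = subject_m * 1000.0
    dof_mm = 2.0 * f_number * coc_mm * s_mm ** 2 / focal_mm ** 2
    return dof_mm / 1000.0

# Hypothetical portrait at 2 m and f/2.8, framed identically on two sensors:
# a full-frame camera at 85 mm, and a small-sensor compact (about 5x crop) at
# 17 mm. The acceptable circle of confusion shrinks with the sensor.
print(depth_of_field_m(85, 2.8, 2.0, 0.030))   # ~0.09 m in focus on full frame
print(depth_of_field_m(17, 2.8, 2.0, 0.006))   # ~0.47 m in focus on the small sensor
```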

The problems caused by shrinking pixels can be countered by increasing the area dedicated to sensing. Pixels are generally partially obscured by the metal wires that connect them to the read-out circuitry. With a smaller signal from a smaller sensor, it also becomes important to look after that signal, using process tricks to suppress the dark-current noise generated by random carrier movements in the substrate.

One approach that is gaining popularity is 'backside illumination', according to Rich Turner, vice president of marketing and applications for Foveon, an image sensor company. The basic trick is to turn the sensor over, thin its substrate and then light it from behind.

"Optical efficiency means getting the light to go into the silicon where you want it, by thinning the layer above the sensor and having less metal to reflect the light away, as well as reducing the distance from the micro-lens to the sensor to reduce the spreading of the light. But in general you're looking at about a 40 per cent increase of optical efficiency using backside illumination. A factor of two may be a stretch."

Meynants says that there may be problems applying the technique to CMOS. Backside illumination has been around for years on charge-coupled devices (CCDs), which are often found in professional systems and top-end cameras, he says. But CMOS sensors face a problem.

Stray photons

A CCD could have a 10µm pixel and a 10µm-thick back layer. The CMOS sensor will have pixels as small as 1.4µm with a backside thickness of 5µm. The difference in ratio means much more of the light can wind up in the wrong pixels. "And if you thin the backside too much you lose the red sensitivity," says Meynants.
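The arithmetic behind Meynants' point, using the figures he quotes: a photon converted near the back surface of a thinned CMOS sensor has to travel several pixel widths of silicon before reaching a collection region, so it can easily diffuse sideways into a neighbour.

```python
# Thickness-to-pitch ratios from the figures above: the larger the ratio,
# the further (measured in pixel widths) a photogenerated charge can wander
# before it is collected, and the likelier it lands in the wrong pixel.
ccd_ratio = 10.0 / 10.0    # 10 um back layer over a 10 um CCD pixel
cmos_ratio = 5.0 / 1.4     # 5 um back layer over a 1.4 um CMOS pixel
print(f"CCD: {ccd_ratio:.1f} pixel widths, CMOS: {cmos_ratio:.1f} pixel widths")
```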

Nonetheless, backside illumination means that manufacturers don't have to try and reach past the micro-lens array to access the bond pads for each sensor on a wafer. And it may be easier to deposit colour filter layers and the microlens array on the back of the wafer than on the front.

"In backside illumination devices there is a coating which matches the refractive index of air on one side and silicon on the other, which increases the acceptance of photons," says Meynants. "In front-side illumination devices they're trying to implant light pipes between the metallisation."

Kodak has made another inversion, changing its CMOS sensor circuitry to count holes rather than electrons, in a bid to improve noise performance.

In standard CMOS pixels, the signal is measured by detecting electrons that are generated when a photon interacts with the silicon of the sensor to create an electron-hole pair. As more light strikes the sensor, more electrons are generated, resulting in a higher signal at each pixel.

In Kodak's Truesense pixel, the circuitry has been redesigned so it senses the presence of holes, which represent the absence of electrons. What is useful about this is that holes have lower mobility in silicon than electrons, and so are more likely to be captured by the circuitry at each pixel than the more mobile electrons would be in an equivalent design.

"When a photon hits a sensor site it creates a hole and electron pair," explains Michael de Luca, marketing manager for image sensors at Kodak, adding that the high mobility of electrons is useful in CCD designs. "But where we combine imaging and data blocks, for example in CMOS sensors, it's a problem.

"We're getting our CMOS pixels operating to CCD performance, in a way that provides us the image quality and preserves the integration," says de Luca.

Although it demands a big change to the base CMOS process, Meynants agrees that the technique can work: "I think it's quite effective. Electrons get trapped in the channel and affect the characteristics, which increases noise. This is less likely with holes."

One reason why digital cameras have difficulty handling colour is that most CMOS photodiodes are set up to act as colour-blind intensity sensors. The array of photodiodes is then overlaid with a pattern of colour filters, so that each photodiode sees only red, green or blue, with its neighbours seeing different colours. This causes the colour fringing that dismays some photographers, although the sensor manufacturers do their best to write interpolation algorithms that hide the problem.

Photodiodes

Most image sensors use a basic unit of four photodiodes, one covered with a red filter, one with a blue filter and two with green filters, since the eye is most sensitive to this colour and so it can be used as a proxy for brightness.
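The interpolation step is what reconstructs three colour values at every pixel from this mosaic. The sketch below is a minimal bilinear demosaic for an RGGB layout, far simpler than the proprietary algorithms camera makers actually ship, but it shows where the reconstructed (and occasionally fringed) colour comes from.

```python
import numpy as np

def conv3x3(img, kernel):
    """3x3 'same' filtering with zero padding (pure NumPy, no SciPy needed)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def demosaic_bilinear(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic (row 0 starts R, G, R, G...).

    Each colour channel is rebuilt by averaging whichever samples of that
    colour fall inside the 3x3 neighbourhood of every pixel."""
    h, w = mosaic.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r_mask = ((yy % 2 == 0) & (xx % 2 == 0)).astype(float)
    b_mask = ((yy % 2 == 1) & (xx % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask                    # the two green sites per quad
    box = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        known = conv3x3(mosaic * mask, box)           # sum of known samples nearby
        count = np.maximum(conv3x3(mask, box), 1e-9)  # how many samples were known
        rgb[..., channel] = known / count
    return rgb

mosaic = np.random.rand(8, 8)                # stand-in for raw sensor output
print(demosaic_bilinear(mosaic).shape)       # (8, 8, 3)
```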

Kodak, which developed the Bayer pattern, has recently introduced a new pattern and the supporting algorithms, which take the green filter off one of the four photodiodes in the basic unit so that it sees the whole visible spectrum.

The approach counters the decreasing sensitivity of ever smaller photodiodes by removing the light-absorbing filter in front of one of them. More photons make it into that photodiode, and the resultant output can be regarded as measuring overall light intensity and the proportion of each of the three main colours.
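A toy reading of that idea, assuming nothing about Kodak's actual processing: treat the unfiltered sample as the intensity estimate for its quad, and use the filtered samples only to apportion it between red, green and blue.

```python
def toy_rgbw_quad(r, g, b, clear):
    """Hypothetical illustration only (not Kodak's algorithm): the clear,
    unfiltered photodiode supplies the brightness, while the filtered
    photodiodes supply only the colour proportions."""
    total = r + g + b
    if total == 0:
        return (0.0, 0.0, 0.0)
    return tuple(clear * channel / total for channel in (r, g, b))

# The clear sample is brighter (and so less noisy) than any filtered one.
print(toy_rgbw_quad(10, 20, 5, 120))   # -> (34.3, 68.6, 17.1), summing to 120
```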

De Luca claims that combining this filter array with the Truesense hole-sensing pixel has enabled the company to build a 5Mpixel sensor with a 1.3µm pixel that performs like a 2.2µm pixel.

The downside of the approach is that it takes a completely different colour processing system to handle the output, and that the colour information is now at a lower resolution than the luminance information.

"But because of the way we work with the data, and because human eyes are less sensitive to chroma information than luminance, I would contest we do a really good job of extracting colour," says de Luca. "It all becomes less of an issue as the resolutions go up. One of the things that instigated the new colour filter was our scientists saying 'what else can we do with this glut of pixels that might provide a definite benefit?'"

Foveon's approach is different. It uses the fact that different wavelengths of light penetrate to different depths in a silicon substrate to produce a sensor in which all the colours are sensed in the same place.
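The physics it relies on can be sketched with the Beer-Lambert law. The absorption coefficients and layer depths below are assumed round numbers, not Foveon's actual stack, but they show why a shallow junction sees mostly blue while a deep one sees mostly red.

```python
import math

# Approximate absorption coefficients of silicon (assumed round values, in 1/um).
# Intensity decays as exp(-alpha * depth), so short wavelengths are absorbed
# near the surface and red light penetrates deepest.
alpha = {"blue (450nm)": 2.5, "green (550nm)": 0.7, "red (650nm)": 0.3}

def absorbed_fraction(a, top_um, bottom_um):
    """Fraction of incoming light absorbed between two depths (Beer-Lambert)."""
    return math.exp(-a * top_um) - math.exp(-a * bottom_um)

layers = [("top", 0.0, 0.5), ("middle", 0.5, 2.0), ("bottom", 2.0, 6.0)]
for colour, a in alpha.items():
    shares = [f"{name}: {absorbed_fraction(a, top, bottom):.2f}" for name, top, bottom in layers]
    print(colour, "->", ", ".join(shares))
```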

"The fundamental thing our technology can do is gather more information per unit area of silicon," says Turner. "In the mobile space that is interesting because of the pressure on size and cost."

Like Kodak's new pixel, Foveon's approach demands an entirely new approach to handling the output data, since there is more overlap between the outputs of each sensing layer in the design.

"The big misconception is to think of colour bleedover, that the red gets into the green gets into the blue," said Turner. "Yes, there is some blue light that creates current in the green channel and that is also true in traditional sensors. There's more overlap in our sensors than in the traditional ones."

"The crosstalk we face is not between pixels but in the spectrum responses, which are not as sharp [as with Bayer filtering], but we fix that with the colour matrix. And you do need some overlap. What matters is whether the colour curves form a good basis for a transformation to the end result."
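That colour matrix is just a 3x3 linear transform applied to every pixel. The coefficients below are made-up values for illustration only; the off-diagonal terms are negative, 'sharpening' the overlapping spectral responses, while each row sums to one so that white stays white.

```python
import numpy as np

# Illustrative coefficients only, not any manufacturer's calibration. Because
# the raw channel responses overlap, each output colour is reconstructed as a
# weighted combination of all three raw channels.
colour_matrix = np.array([
    [ 1.8, -0.6, -0.2],   # output red
    [-0.5,  1.9, -0.4],   # output green
    [-0.1, -0.7,  1.8],   # output blue
])

raw = np.array([0.40, 0.35, 0.20])   # raw sensor responses for one pixel
corrected = colour_matrix @ raw
print(corrected)
```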

Not everyone likes the approach. Meynants says that because each pixel carries a lot of transistors, they achieve a low fill factor - the percentage of the sensor array that is actually used for sensing. And he doubts whether the Foveon approach, which demands that different wavelengths are sensed at different depths, will be compatible with backside illumination techniques.

Handling brightness

One problem with most current digital cameras is that their sensors are unable to handle the range of brightnesses we find in nature - think of looking out of a dark cave on a summer's day, for example. Anyone who has a digital camera will know the deep frustration of taking a decent outdoor portrait only to find that all the detail in the sky has disappeared because the sensor was overloaded by its brightness. Modern sensors can only handle a small proportion of the range of brightness we live with, and their dynamic range has only recently approached that of film, for which Ansel Adams developed his nine-stop zone exposure method.

There are various approaches to this problem. The most recent is simply excellent circuit design: very competent sensors from the likes of Nikon have been measured as handling 13.7 stops, or doublings of intensity. Others have taken an approach similar to that of the Bayer filter - in a basic unit of cells, include one photodiode that is less sensitive than the others so that it can handle extreme brightness well, and so retain highlight detail.
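Dynamic range in stops is just a base-two logarithm of the ratio between the largest signal a pixel can record and its noise floor. The electron counts below are hypothetical, picked only to land near the 13.7-stop figure quoted above.

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    """Dynamic range in stops (doublings of intensity): log2 of the ratio
    between a pixel's full-well capacity and its noise floor."""
    return math.log2(full_well_electrons / read_noise_electrons)

print(dynamic_range_stops(80000, 6))   # ~13.7 stops with these assumed figures
print(2 ** 13.7)                       # ~13,300:1 brightness ratio
```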

Turner points out that you might be able to achieve something similar by giving one photodiode less time to sample the light than the rest, so that it can capture intensity without being swamped. He also argues that Foveon could add another layer to its stacked sensor architecture for a similar result.
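A toy version of that short-exposure idea (an assumed illustration, not any vendor's pipeline) might merge the two readings like this: trust the long exposure until it clips, then substitute the short exposure scaled up by the exposure ratio.

```python
def combine_exposures(long_value, short_value, ratio, clip_level=1.0):
    """Hypothetical two-exposure merge: use the long exposure where it has not
    clipped, otherwise fall back on the short exposure scaled by the ratio
    between the two exposure times."""
    if long_value < clip_level:
        return long_value
    return short_value * ratio

print(combine_exposures(0.62, 0.08, 8))  # mid-tone: long exposure used as-is
print(combine_exposures(1.00, 0.40, 8))  # clipped highlight: 0.4 * 8 = 3.2
```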

Meynants points out that with a decent CMOS sensor and a 6µm pixel "you can get 12 stops of dynamic range if you really design for that, but you have to re-optimise the pixel for that dynamic range and give up speed and other characteristics".

Recent cameras, such as the Nikon D3, massively outpace film in terms of sensitivity, as defined by ISO rating. Where film with an ISO rating of 800 was regarded as 'fast' - and generally delivered grainy results - the D3 can shoot at ISO 6400 with far less image degradation. Cameras may be able to go further.

One candidate is 'black silicon', which is treated with femtosecond laser pulses to make it much more receptive to incoming photons than conventional silicon. Others have proposed quantum confinement techniques to improve sensitivity. Although cameras built on these techniques are some way off, photographers may end up moaning about how difficult it is to get motion blur when they want it.
