Texture, colour, contrast, and sharpness can contribute to more realistic computer-generated images

Harvard researchers make computer graphics more realistic

American researchers are exploring the principles of human visual perception to create computer-generated images that are almost as realistic as the real world itself.

One of the key strengths of human perception is the ability to recognise what objects are made of. Computer vision has to date offered no equivalent, but Todd Zickler, a researcher at the Harvard School of Engineering and Applied Sciences (SEAS), says that a number of tools can be employed to address this.

At the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH), Zickler presented a paper focusing on the analysis of translucent materials.

The main tool at his disposal is the phase function, which describes the angular distribution of light scattered by a material when it is illuminated from a specific direction. This function is crucial in determining what people see and how they see it.
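
The article does not say which mathematical model the team used; a widely used single-parameter phase function in graphics is the Henyey-Greenstein model, and the Python sketch below (purely illustrative, not Zickler's formulation) shows how it weights scattered light by angle.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function.

    cos_theta: cosine of the angle between the incident and scattered rays.
    g: anisotropy in (-1, 1); g > 0 scatters mostly forward, g < 0 mostly
       backward, and g = 0 is isotropic.
    Returns the probability density of scattering per unit solid angle.
    """
    return (1.0 - g**2) / (4.0 * np.pi *
                           (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

# Strongly forward-scattering media such as milk have g close to 1.
angles = np.linspace(0.0, np.pi, 5)
print(henyey_greenstein(np.cos(angles), g=0.8))
```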

Zickler’s team has used steadily increasing computing power to reduce the space of possible images defined by the phase function to a manageable size. The team can therefore render thousands of computer-generated images of a single object, each with a different simulated phase function and hence a different degree of translucency, so that each image appears slightly different.

By comparing the pixel colours and brightness of the images, the computer can assess the subtle differences between them. Through this process, the software builds a map of the phase function space according to the relative differences between image pairs, making it easy for the researchers to identify a much smaller set of images and phase functions that represents the whole space.
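
As an illustration of this pipeline (the parameter sweep, the pairwise image comparison, and the resulting map), the Python sketch below uses a made-up toy "renderer" driven by a single parameter g; the multidimensional-scaling embedding and the greedy farthest-point selection are generic techniques standing in for whatever the team actually used.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy stand-in for the rendering step: each "image" is just a smooth
# function of the parameter g, NOT a physically based render.
def toy_render(g, size=32):
    x = np.linspace(0.0, 1.0, size)
    return np.outer(np.exp(-x / (0.1 + g)), x)

g_values = np.linspace(0.05, 0.95, 50)        # sweep of simulated settings
images = np.stack([toy_render(g).ravel() for g in g_values])

# Pairwise per-pixel differences between every pair of renders.
diffs = np.linalg.norm(images[:, None, :] - images[None, :, :], axis=-1)

# Embed the renders in 2D so map distance mirrors image difference.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(diffs)
print("first map coordinates:\n", coords[:3])

# Greedy farthest-point sampling picks a small representative subset.
chosen = [0]
for _ in range(4):
    chosen.append(int(np.argmax(diffs[chosen].min(axis=0))))
print("representative g values:", g_values[chosen])
```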

“This study, aiming to understand the appearance space of phase functions, is the tip of the iceberg for building computer vision systems that can recognise materials,” Zickler said, adding that the next step in the research will involve finding ways to accurately measure a material’s phase function instead of making one up computationally.

In another study, the team investigated a new type of screen hardware that displays different images when lit or viewed from different directions.

By etching tiny grooves of varying depths across the screen’s surface, Zickler’s team produced optical interference effects that make the thin surface look different when illuminated or viewed from different angles.

The solution takes advantage of mathematical functions, called bidirectional reflectance distribution functions (BRDFs), that describe how light arriving from a particular direction will reflect off a surface.
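
A BRDF simply maps a pair of directions, incoming light and outgoing view, to a reflectance value. The sketch below evaluates the classic Blinn-Phong model as a generic example; it is not one of the functions fabricated in the study.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong_brdf(light_dir, view_dir, normal,
                     kd=0.6, ks=0.4, shininess=40.0):
    """Illustrative BRDF: a Lambertian term plus a Blinn-Phong specular
    lobe. light_dir and view_dir point away from the surface point."""
    half = normalize(light_dir + view_dir)
    diffuse = kd / np.pi
    specular = ks * max(np.dot(normal, half), 0.0) ** shininess
    return diffuse + specular

n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.3, 0.0, 1.0]))   # light slightly off the normal
for vx in (0.0, 0.3, 0.6):                 # reflectance varies with view angle
    v = normalize(np.array([vx, 0.0, 1.0]))
    print(round(blinn_phong_brdf(l, v, n), 4))
```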

Past attempts to control surface reflection for graphics applications have only succeeded for surfaces displaying huge images, with pixels one square inch in size. Zickler’s work, however, demonstrates that interference effects can be exploited to control reflection from a screen at micron scales using well-known photolithographic techniques.
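
To give a rough feel for why groove depth controls reflection, the following sketch uses a simplified scalar wave-optics model (an assumption of this illustration, not a description of the paper's method): a groove of depth h imposes a round-trip phase delay of 2kh on normally incident light, and the far-field reflection pattern is the Fourier transform of the resulting phase profile. Quarter-wavelength grooves, for instance, cancel the mirror-like zeroth order and redirect light into off-axis diffraction orders.

```python
import numpy as np

wavelength = 550e-9                     # green light, metres
k = 2.0 * np.pi / wavelength

x = np.linspace(0.0, 20e-6, 2048)       # a 20-micron strip of surface
# Binary grooves, a quarter-wavelength deep, with a 2-micron period.
depths = (wavelength / 4) * (np.sin(2 * np.pi * x / 2e-6) > 0)

# Reflected field just above the surface: round-trip phase 2*k*h(x).
field = np.exp(1j * 2.0 * k * depths)

# Far-field pattern is the Fourier transform of the surface field;
# intensity per frequency bin corresponds to energy per outgoing angle.
far_field = np.fft.fftshift(np.fft.fft(field))
intensity = np.abs(far_field) ** 2

print("strongest diffraction orders at bins:", np.argsort(intensity)[-3:])
```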

In future, this kind of optimisation could enable multi-view, light-sensitive displays, where a viewer rotating around a flat surface could perceive a 3D object while looking at the surface from different angles, and where the virtual object would correctly respond to external lighting.

"Looking at such a display would be exactly like looking through a window," claimed Zickler.
