Disney’s ‘Makeup Lamps’ concept projects moving makeup onto actors

Engineers have developed a new technique called ‘Makeup Lamps’, which tracks the movement and facial expressions of performers during live events and projects virtual makeup onto the actors’ faces.

The hardware and software system, developed by Disney Research and Princeton University engineers, effectively paints an actor’s face with light rather than stage makeup. This can add wrinkles, facial hair, cartoonish face paint and other details, allowing for instant onstage transformations.

While stage technicians have long been able to project images onto static objects, this new technique is designed to track facial movements and adjust the projected image in real time to arbitrary, unrehearsed motion.

The technology draws heavily on motion capture, the process of tracking the movement of objects or people. In filmmaking, motion capture uses the performance of a real actor to create a virtual character, such as Gollum from Peter Jackson’s Lord of the Rings trilogy.

Motion capture was brought to the theatre last year, when a live motion-captured rendering of the sprite Ariel was projected onto sculpted net curtains during a Royal Shakespeare Company production of The Tempest. This achievement was the result of a two-year collaboration with Intel and The Imaginarium Studios, a motion capture company co-founded by Andy Serkis - the actor whose body movements were used to animate the virtual Gollum.

While the Royal Shakespeare Company's use of motion capture was warmly received, the technology typically requires an actor to wear visible markers on their body and face so that movement can be tracked. Makeup Lamps needs no such markers; instead, it uses an infrared camera to detect distinct facial features such as the eyelids and lips.
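The article does not spell out Disney's detection pipeline, but markerless facial landmark tracking of this kind is well established. The sketch below is a minimal stand-in using OpenCV and dlib's off-the-shelf 68-point landmark model - both assumptions for illustration, not Disney's actual tooling - with an ordinary webcam standing in for the infrared camera.

```python
# Minimal markerless landmark-tracking sketch (not Disney's actual pipeline).
# Assumes OpenCV and dlib are installed and the standard model file
# 'shape_predictor_68_face_landmarks.dat' is available locally.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()   # finds face bounding boxes
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)  # ordinary webcam standing in for the IR camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        landmarks = predictor(gray, face)
        # In the 68-point scheme, points 36-47 cover the eyes/eyelids and
        # 48-67 the lips - the kinds of distinct features the article says
        # the system relies on.
        for i in range(68):
            p = landmarks.part(i)
            cv2.circle(frame, (p.x, p.y), 2, (0, 255, 0), -1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```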

“Leveraging these technologies to augment the appearance of live actors is the next step and could result in amazing transformations before our eyes of stage actors in theatres or other venues,” said Markus Gross, vice president at Disney Research.

According to Dr Anselm Grundhöfer, principal research engineer at Disney Research, one of the greatest roadblocks in developing live facial augmentation is the delay between capturing the actor's pose and expression, generating a matching image, and displaying it. Because the face keeps moving during that processing time, the delay leads to visible misalignment between the actor's face and the projection.

The engineers reduced this delay by limiting the complexity of their algorithms and by processing images in two dimensions rather than three, using a setup in which the projector is optically aligned with the camera recording facial expressions. They also applied Kalman filtering, a method that predicts the next facial movements from previous ones, so the projected image can be extrapolated slightly ahead of the stale measurements.
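The article does not give the filter's state model or tuning, but the idea can be illustrated with a minimal constant-velocity Kalman filter that tracks one landmark coordinate and renders its predicted next-frame position instead of the latest (already outdated) measurement. All parameter values below are illustrative assumptions.

```python
# Minimal constant-velocity Kalman predictor for one landmark coordinate
# (a sketch; the paper's actual state model and tuning are not given here).
import numpy as np

dt = 1.0 / 60.0                          # assumed camera frame interval
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: position, velocity
H = np.array([[1.0, 0.0]])               # only position is measured
Q = np.eye(2) * 1e-4                     # process noise (tuning assumption)
R = np.array([[1e-2]])                   # measurement noise (tuning assumption)

x = np.zeros((2, 1))                     # state estimate [position, velocity]
P = np.eye(2)                            # estimate covariance

def kalman_step(z):
    """Fuse one position measurement z and return the predicted next-frame
    position - the value the projector would render to hide the latency."""
    global x, P
    # Predict the state forward to the current frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct the prediction with the new measurement.
    y = np.array([[z]]) - H @ x          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    # Extrapolate one frame ahead to compensate for the projection delay.
    return float((F @ x)[0, 0])
```

In practice one such filter would run per tracked coordinate, with the prediction horizon matched to the measured end-to-end latency of the camera-to-projector loop.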

By adjusting the illumination, stage technicians can project any colour or texture onto a performer's face with high quality, transforming characters on stage so that they appear to age, sustain injuries or grow a beard.
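As a rough illustration of that compositing step - a sketch only; the real system performs calibrated, pose-dependent rendering not shown here - the fragment below alpha-blends a pre-authored makeup texture, assumed already warped to the current facial pose, over the frame sent to the projector.

```python
# Illustrative compositing step: blend an RGBA makeup layer onto the
# camera-space face image before it is sent to the projector.
import numpy as np

def composite_makeup(frame, makeup_rgba):
    """Alpha-blend an RGBA makeup texture (assumed already warped to the
    current facial pose) over a uint8 BGR frame bound for the projector."""
    rgb = makeup_rgba[..., :3].astype(np.float32)
    alpha = makeup_rgba[..., 3:4].astype(np.float32) / 255.0
    out = frame.astype(np.float32) * (1.0 - alpha) + rgb * alpha
    return out.astype(np.uint8)
```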

“We believe that [this technique] could give rise to a wide variety of creative applications in the near future,” said Dr Grundhöfer. For example, a makeup design could be projected onto a face before it is physically applied.

The technology will be presented today at the European Association for Computer Graphics conference in Lyon, France.

At the same conference, Disney engineers will also report on a new method to simulate hair – a notorious challenge in computer graphics – by capturing parameters from videos of real hair in motion.

Disney Research - a network of research centres associated with the Walt Disney Company - works on graphics, robotics, machine learning and other areas of research to advance Disney’s technological capabilities.
