Researchers created 3D digital reconstructions of famous actors by having an algorithm analyse their online images

Digital Tom Hanks created in 3D just from online images

Researchers have created an algorithm capable of producing, from pictures available online, an animated model of a well-photographed person that speaks and looks like the real deal.

Taking Oscar-winning actor Tom Hanks as an experimental model, the researchers created a 3D reconstruction of his face that can deliver authentic-looking speeches the actor never gave.

The team even managed to transfer a digitally created expression from one person onto the face of another, for example making the current US President, Barack Obama, speak with the expressions of his predecessor, George W Bush.

"We asked, 'Can you take internet photos or your personal photo collection and animate a model without having that person interact with a camera?'" said Supasorn Suwajanakorn, a graduate student at the University of Washington, who led the study.

"Over the years we created algorithms that work with this kind of unconstrained data, which is a big deal."

To reconstruct faces of celebrities like Tom Hanks, Barack Obama and Daniel Craig, the machine learning algorithms mined a minimum of 200 internet images taken over time in various scenarios and poses.
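
The study doesn't ship reference code, but the photo-collection step can be pictured with a short sketch. The snippet below is a hypothetical Python illustration, assuming a local folder of pre-cropped face photos; the folder name, image size and plain pixel averaging are all illustrative choices and not the authors' pipeline.

```python
# Hypothetical sketch: build a naive "average face" from a folder of
# pre-cropped face photos. Everything here (paths, sizes, pixel averaging)
# is an illustrative assumption, not the researchers' actual method.
from pathlib import Path

import cv2          # opencv-python
import numpy as np

FACE_DIR = Path("hanks_faces")   # assumed folder of cropped face photos
SIZE = (256, 256)                # common resolution for all crops

stack = []
for img_path in sorted(FACE_DIR.glob("*.jpg")):
    img = cv2.imread(str(img_path))
    if img is None:
        continue                          # skip unreadable files
    img = cv2.resize(img, SIZE)           # crude alignment by resizing only
    stack.append(img.astype(np.float32))

if not stack:
    raise SystemExit("No usable photos found.")
if len(stack) < 200:
    print(f"Only {len(stack)} usable photos; the study used at least 200.")

average_face = np.mean(stack, axis=0).astype(np.uint8)
cv2.imwrite("average_face.png", average_face)
```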

To capture the minute differences in expression that occur when a person smiles or looks puzzled, the team developed a technique that exploits the varying lighting conditions across the photographs to map what distinguishes one person's features from another's.
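
The team's shape refinement is considerably more sophisticated, but the underlying idea of exploiting lighting variation can be illustrated with classical Lambertian photometric stereo: if the same aligned face pixel is observed under several known lighting directions, a per-pixel least-squares solve recovers a surface normal and albedo. The sketch below shows that principle only; the aligned images and known lighting directions are assumed inputs, not something an algorithm would be handed for uncontrolled internet photos.

```python
# Minimal Lambertian photometric-stereo sketch (an illustration of the
# "different lighting across photos" idea, not the UW pipeline).
# Assumptions: `images` is an (n, h, w) array of grayscale face photos that
# are already aligned pixel-to-pixel, and `lights` is an (n, 3) array of
# known lighting directions -- both hypothetical inputs.
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray):
    """Recover per-pixel surface normals and albedo from n aligned images
    taken under n lighting directions (Lambertian model: I = L @ (albedo * n))."""
    n, h, w = images.shape
    I = images.reshape(n, -1)                       # (n, h*w) intensities
    # Least-squares solve of lights @ G = I, where G = albedo * normal per pixel.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)              # per-pixel albedo
    normals = G / np.maximum(albedo, 1e-8)          # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Toy usage with random stand-in data (real inputs would be aligned photos).
rng = np.random.default_rng(0)
lights = rng.normal(size=(5, 3))
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
images = rng.random((5, 64, 64))
normals, albedo = photometric_stereo(images, lights)
print(normals.shape, albedo.shape)                  # (3, 64, 64) (64, 64)
```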

In addition to frequently photographed celebrities, the researchers would like to be able to create similar talking models of regular people using their family albums or personal photo collections.

The researchers envision that in the future they will be able to create realistic 3D representations of anyone, for example a relative living overseas, that could be used in Skype calls instead of a 2D video image.

"You might one day be able to put on a pair of augmented reality glasses and there is a 3D model of your mother on the couch," said senior author Kemelmacher-Shlizerman. "Such technology doesn't exist yet - the display technology is moving forward really fast - but how do you actually re-create your mother in three dimensions?"

Creating holograms currently requires a complex and elaborate process: the person to be ‘hologrammed’ usually needs to be brought into a studio and photographed from every possible angle, and the way they move must also be carefully recorded.

The technique developed by the University of Washington team would, by contrast, achieve much the same result without any of that hassle, simply by using existing images.

"Imagine being able to have a conversation with anyone you can't actually get to meet in person - LeBron James, Barack Obama, Charlie Chaplin - and interact with them," said co-author Steve Seitz, UW professor of computer science and engineering. "We're trying to get there through a series of research steps. One of the true tests is can you have them say things that they didn't say but it still feels like them? This paper is demonstrating that ability."
