Fix it in post
Simon Robinson, chief scientist at The Foundry, talks about movie special effects and the company's foray into 3D.
The world of special effects (SFX) in movies is changing. Say goodbye to obvious computer generated imagery (CGI) and hello to a new world of film where reality is carefully assembled inside a computer.
"There is a strong move away from the early days of CGI where people were pleased to show that they could do CGI. A lot of our work is going into the subtlety of SFX work," says Simon Robinson of Oscar-winning effects developer The Foundry.
"Your standards of what you are prepared to accept as an audience shift over time. What looked flawless ten years ago but looks a bit ropey now shows how far things have come."
Unlike the days of 'The Matrix', when much of the publicity around the movie focused on how its bullet-time effects were created, Robinson says so many post-production effects now go into scenes that it is almost impossible to pick out the effect of just one.
"The trend is to undetectable realism in SFX work," says Robinson. "But there is no one set of techniques that made that happen. It involves a lot of effort across all areas."
The ability to inject photorealistic imagery into a film scene has had one important effect on how directors view the process they are involved in, says Robinson. "There has been a shift towards post-production from production. Directors are more used to shooting things where they know what will be in the scene is not actually in the shot.
"In the 1960s, everything used to be in the shot. Now with the sophistication of post-production the directors are very much more used to the idea that the backdrops don't have to be done until later. Elements not in the shot will be added later and the lights will be done later."
Even the selection of camera position may eventually be deferred if the trend towards post-production continues. This is where a lot of The Foundry's effort is directed today: using 3D processing to let directors make use of "the deferral of decision-making", as Robinson puts it. "The insertion of effects can be done late, so why not?"
The move to combine effects during post-production has led The Foundry to shift its own development towards the creation of a compositing system rather than concentrating on discrete effects that have been its bread and butter in past years. "It allows us to develop new kinds of technology. Plug-ins are a nice way to deliver technology but they are limited in what you can do, because each one is a black box.
"The ability to speed up or slow down image sequences or remove objects, those are still mainstays of what we do. But we have a whole research programme that takes all of that image processing technology one stage further," Robinson explains, pointing to the way that the software is evolving from existing tools, such as the company's Frank.
"Frank extracts metadata and with that level of scene knowledge, you can do things such as speed up and slow down movement. The limit of what you can do comes down to how we extract that data.
"We see images as 2D: a bunch of pixels. But what we have come to realise is that, to go further, we need to treat the world as 3D not 2D," Robinson claims.
"The next generation is about trying to extract more 3D information. We have delayed the move to 3D long enough. It is probably not just us but a trend in the whole industry. You get to a point where the machine vision technology and computer power behind it has moved so far that this all becomes feasible."
Performing 3D processing on a series of 2D images will make it possible to edit scenes. Robinson uses the example of a series of shots taken while moving down nearby Oxford Street in London. If the director does not like a building, she can have it removed and replaced with something that looks better in the scene. Some of this can be done using 2D manipulation, but the effect works a lot better if the computer doing the edit understands how the building and its replacement fit into the scene. It can apply lighting and reflections that are much more convincing and have the foreshortening of the walls change based on the camera position.
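The foreshortening effect Robinson describes falls out of basic perspective projection: where a wall's corners land in the image depends on where the camera sits. The sketch below is purely illustrative (the camera model, focal length and coordinates are made-up assumptions, not The Foundry's actual pipeline) but shows why a 3D-aware edit can recompute this while a flat 2D paste cannot.

```python
# Hypothetical minimal pinhole-camera sketch: project the corners of a
# flat "building wall" into the image for two camera positions, to show
# how foreshortening depends on where the camera sits. All numbers are
# illustrative assumptions.

def project(point, cam, f=35.0):
    """Project a 3D point (x, y, z) into 2D for a camera at `cam`
    looking down the +z axis (no rotation, for simplicity)."""
    x, y, z = (p - c for p, c in zip(point, cam))
    return (f * x / z, f * y / z)  # perspective divide

# Two corners of a wall receding from the viewer along z.
near_corner = (2.0, 0.0, 10.0)
far_corner = (2.0, 0.0, 30.0)

for cam in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]:
    near_x = project(near_corner, cam)[0]
    far_x = project(far_corner, cam)[0]
    # The gap between projected corners shrinks or grows with the
    # camera position: that change is the foreshortening.
    print(cam, near_x, far_x)
```

Moving the camera one unit sideways changes the projected positions of the two corners by different amounts, which is exactly the cue a 3D-aware compositor has to reproduce when it swaps one building for another.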
Robinson points to the crossover between this kind of digital effects work and what is happening in the field of machine vision.
The Foundry is taking part in a couple of EU research projects. One of them is I3Dpost, which focuses on extracting 3D information from 2D video and stills. The work covers not just buildings and sets but also people.
"There is a lot of interest in capturing faces and full body motion in 3D. It can all be done today: the trick is working out newer, faster and better ways of doing them. With facial capture, it is moving towards the idea of having synthetic actors. People will be able to reuse themselves in films. I feel that maybe we should go out and capture ourselves so that we are able to use all that data later. Actors will be able to keep playing roles that are too young for their actual bodies," says Robinson.
Robinson points to work performed by I3Dpost partner BUF Compagnie in its work on Luc Besson's film 'Arthur and the Invisibles' using motion capture. "They had the actors going through moves in a minimal set and captured the action from many angles. Then they got the animators to fit animated cartoon models to the action. It was a human labour-intensive way of doing what will become more machine-driven and easier over time."
BUF applied a similar technique in 'Alexander' (main picture), this time for live action sequences. Extras involved in fight sequences were filmed from many angles. "The cameras were arranged in a ring around them," explains Robinson.
The motion-captured sequences were applied to models and then rendered into battle scenes many times over: the different angles stopped them looking like the same performers.
In an increasing number of shoots, the compositing software can get help from the filming setup, as more advanced cameras can track their position automatically and computers can use markers in the scene to gauge where the camera sits.
"Some people do laser scans. They will take lots of photographs and put a laser scan in the middle to show where the data in the shot lives," says Robinson.
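The principle behind using scene markers to gauge where the camera sits can be shown with a toy example. Real tracking systems solve a full six-degree-of-freedom pose from image features; the hypothetical sketch below (marker positions and measurements are invented for illustration) reduces it to 2D trilateration from known markers, where subtracting the distance equations pairwise leaves a simple linear system.

```python
import math

# Toy illustration, not real tracking software: three markers at known
# positions plus measured distances pin down the camera position.

def locate_camera(markers, distances):
    """2D trilateration. markers: three known (x, y) points;
    distances: measured camera-to-marker ranges. Subtracting the
    circle equations pairwise gives a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = markers
    r1, r2, r3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # non-zero when markers are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

markers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_cam = (3.0, 4.0)  # position we pretend not to know
dists = [math.dist(true_cam, m) for m in markers]
print(tuple(round(v, 6) for v in locate_camera(markers, dists)))  # → (3.0, 4.0)
```

The laser scan Robinson mentions plays the same role as the known marker positions here: it anchors the measurements to real-world coordinates.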
Moving more work to 3D will blur the distinction between 2D and 3D movie-making. "The industry wants to move into stereo filming because it can provide an environment that puts bums on seats," Robinson claims.
Among the films now going into production that will use 3D extensively is 'Avatar', directed by James Cameron. The 'Titanic' director has already worked in 3D, having shot the IMAX documentary 'Ghosts of the Abyss'.
"But 3D introduces a problem in post-processing," Robinson notes.
The film editor has to gauge how a scene will look in the cinema. Get it wrong, and the effect is lost. Judging depth correctly will become particularly important as directors move away from using 3D as an effect - no more shots of spears and sharks heading out into the audience - and towards more subtle use of 3D to build a sense of space, or lack of it, in scenes.
"In a lot of the stereo work I have seen up to now there is continually the idea that you have to wow people with it. It comes down to those so-called stereo moments where people feel they get their money's worth."
Making your mind up
Shooting in 3D today presents the director with a problem: it's one decision they cannot defer until post-production. "Once you have the shot set up with a stereo rig it is very difficult to change the geometry later," explains Robinson. "And stereo makes compositing harder. But we can do things there."
One possibility offered by 3D manipulation of the captured images is to adjust the camera geometry after shooting, or to make it easier for directors working in 2D to alter shots and get a better camera angle after the fact.
"Maybe you can create virtual views," Robinson suggests. "A shoot may be captured from multiple viewpoints. If you have a single camera fixed in space, you have no depth. But, in stereo you get parallax that lets you make approximations about 3Dness. It provides more opportunities to extract metadata than we used to have. Stereo may be a headache for compositing now but maybe it could lead to things being much easier in the future.
"It is a bit science fiction today but there is no reason it shouldn't happen in the future. I quite like the idea of multiple camera capture becoming normal. Maybe we can even carry this process of deferred decision-making further. You put in lots of cameras then, in post, create the scenes you actually wanted."
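The parallax Robinson describes is what makes depth recoverable from a stereo pair in the first place. For a rectified rig, the relation is simply depth = focal length × baseline ÷ disparity. The sketch below uses invented numbers (focal length, baseline and pixel shift are assumptions for illustration) to show how a horizontal shift between the two eyes translates into a distance.

```python
# Hedged sketch of the basic rectified-stereo relation: depth falls
# out of the parallax (disparity) between left and right images.
# All numbers below are made-up illustrations.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres of a point, given its horizontal shift in
    pixels between the two views of a rectified stereo rig."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# A feature shifting 20 px between eyes, on a rig with a 6.5 cm
# baseline and a 1000 px focal length, sits about 3.25 m away.
print(round(depth_from_disparity(1000.0, 0.065, 20.0), 4))  # → 3.25
```

Note the inverse relationship: distant points produce tiny disparities, which is why the approximations of "3Dness" Robinson mentions degrade with distance, and why extra viewpoints help.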