How the technology behind Hollywood CGI is revolutionising product design
Avatar, Inception and The Jungle Book are just some of the films that have won Oscars on the back of computer-generated imagery produced using tools created by UK-based company Foundry.
Since its creation in 1996, Foundry has been mostly associated with the entertainment industry, but is now branching out into other sectors, such as product design and visualisation.
Improving tools and more powerful computers are allowing companies to create detailed graphical representations of products or spaces and interact with these in both virtual reality (VR) and augmented reality (AR) before they are created physically.
“The total content spend in AR and VR by 2021 will be 70 per cent in the enterprise space; gaming is actually well below,” said Jon Wadelton, Foundry’s CTO since 2015.
The technology can be applied to a wide range of sectors, with Foundry working on architectural visualisations, engineering and construction projects, product design and training scenarios for employees.
“All of these guys don’t have any idea how to make any visualisations, so we thought this is our opportunity to take what Hollywood has done for years and years and bring that pipeline infrastructure to the other industries, so that they can cope with this huge amount of data that they’re going to need in the future,” he said.
Foundry demonstrated a hotel room that it had built for a customer that could be navigated in VR.
Using an HTC Vive headset and controller, its client was able to move around the room and change design elements such as the wallpaper pattern, the type of bed or the colour of the carpets on the fly.
In this way the client could experience what being in the room would feel like, years before construction of the building had even started. Foundry also used a 3D photo taken by a drone to show what the view from the hotel window would look like.
Foundry was keen to extol the benefits of its forthcoming ‘Project Bunsen’ technology, software that will allow designers to export their CAD prototypes in order to create a virtual environment or product that can be viewed in a VR (or AR) space.
“We realised that AR and VR could be used by all these other industries to create a huge amount of visualisation,” Wadelton said.
“We have done some work with [a well-known sports shoe manufacturer] to help them visualise their shoe designs. Getting them off the paper into 3D applications is the first step. When you export them into 3D software, the typical lifecycle design is hastened.
“In the past, companies would go through a few iterations [of a product] by looking at boards that are printed out. The designs are then sent to China, where samples are made. These are then sent back to the US and someone with a suitcase would take it to the concept store in New York to physically put it on the shelf; it’s a long process.
“But if you had decent VR, you could have a mock-up of the concept store there and then. The shoe can be taken directly from the designer and virtually placed on the shelf in order to get that feedback immediately. AR gives you the next level which allows you to put that shoe in the context of the store directly.”
But he notes that AR still faces significant challenges before it can be used reliably in this setting. Pokémon Go, which came out on mobile devices last year, was one of the first times AR technology hit the mainstream.
“But it’s all flatly lit CGI, it doesn’t look realistic,” Wadelton explained. “It doesn’t blend in with the surroundings. A lot of what we’re doing is working on capturing all the light in a room so that when you view it through VR it will be lit with the same shadows and even cast a shadow on the ground and any surfaces below it.”
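The light-capture idea Wadelton describes can be illustrated with a toy sketch (not Foundry's actual pipeline, and the function names here are hypothetical): estimate the ambient brightness of a captured camera frame, then tint a virtual object by that level so it no longer looks "flatly lit" against the scene.

```python
import numpy as np

def estimate_ambient_light(frame: np.ndarray) -> float:
    """Estimate scene brightness as the mean luminance of an RGB frame (0-255)."""
    # Rec. 601 luma weights for the R, G and B channels
    luma = frame @ np.array([0.299, 0.587, 0.114])
    return float(luma.mean() / 255.0)

def shade_virtual_object(base_colour: np.ndarray, ambient: float) -> np.ndarray:
    """Scale a virtual object's base colour by the estimated ambient level."""
    return np.clip(base_colour * ambient, 0, 255).astype(np.uint8)

# A dim 'room': a uniform grey camera frame
frame = np.full((480, 640, 3), 64, dtype=np.float64)
ambient = estimate_ambient_light(frame)  # roughly 0.25 for this mid-dark scene
shoe_red = shade_virtual_object(np.array([200.0, 30.0, 30.0]), ambient)
```

Real systems go much further, recovering directional light and shadow casters rather than a single brightness value, but the principle is the same: measure the room, then light the CGI to match.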
He believes that once it’s perfected, the technology could save companies millions and significantly speed up the product-design process.
But technological limitations, particularly in mobile hardware, hamper these efforts. Currently there are two options.
First, there is mobile VR and AR, which uses smartphone technology to provide a portable, lightweight experience. This allows great freedom of movement, but immersion is limited because the relatively underpowered hardware cannot display complex computer-generated scenes at an acceptable frame rate. The vast quantity of data needed also presents a challenge here.
Alternatively, there is tethered VR, such as the HTC Vive and Oculus Rift, whose headsets are physically hooked up to large, expensive computing rigs. This delivers an exceptionally realistic experience, but sacrifices freedom of movement (due to the tether) and convenience.
But Wadelton believes Foundry has found a short-term solution to this either/or problem until technology improves.
“What you can do is run a powerful server in the cloud and stream the results to the headset,” he said. “There’s latency if you move very quickly but there’s motion compensation stuff that you can do to predict which way you’re going to go in order to partially rectify this. If you do something really fast, it’s still going to fall apart but for the most part it works quite well.”
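The "motion compensation stuff" Wadelton mentions can be sketched, in its simplest form, as dead reckoning: the server renders the frame for where the user's head is *predicted* to be once the frame arrives, not where it was when the request was sent. This is a minimal illustration, not Foundry's implementation.

```python
def predict_yaw(yaw_deg: float, yaw_rate_dps: float, latency_s: float) -> float:
    """Linearly extrapolate head yaw over the streaming round-trip latency.

    The cloud server renders for the predicted pose, so by the time the
    frame reaches the headset it roughly matches the user's actual view.
    Fast, jerky movements break the linear assumption, which is why
    streamed VR still 'falls apart' under rapid motion.
    """
    return (yaw_deg + yaw_rate_dps * latency_s) % 360.0

# Head turning at 90 deg/s, 50 ms round-trip latency:
predicted = predict_yaw(30.0, 90.0, 0.050)  # renders for 34.5 degrees
```

Production systems layer further tricks on top, such as re-warping the received frame against the very latest pose on the headset itself, but simple extrapolation already hides most of the latency during smooth movement.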
In the future Foundry wants to solve the ‘parallax’ problem inherent in VR video. The problem with VR cameras at the moment is that they only capture a certain amount of data. You cannot see behind the camera, for example.
“As soon as you move up or down, or left to right or too far it doesn’t work because it doesn’t have that information.
“Everybody in the industry is clamouring to fix this problem, including us. There are lots of different solutions, such as an array of cameras that capture everything. Another way is volumetric capture.
“People really want to fix that because you will still want to be able to take photographs and video of the real world. Not everything is going to be computer generated.”