Clear objects can now be scanned and replicated in 3D
Transparent objects can now be digitally scanned and later rendered as a 3D model thanks to a new imaging technique.
Such a feat has proven difficult in the past, and even state-of-the-art 3D rendering methods have struggled with clear objects.
The ability to create detailed 3D digital versions of real-world objects and scenes is useful for movie production, virtual-reality experiences, design and quality assurance in the manufacture of clear products, and even the preservation of rare or culturally significant objects.
“By more accurately digitising transparent objects, our method helps move us closer to eliminating the barrier between the digital and physical world,” said Jonathan Stets of the Technical University of Denmark, co-leader of the research team that developed the pipeline. “For example, it could allow a designer to place a physical object into a digital reality and test how changes to the object would look.”
Transparent objects are challenging to digitise because their appearance comes almost completely from their surroundings. Although a CT scanner can acquire a clear object’s shape, this requires removing the object from its surroundings and lighting, which must also be captured to accurately recreate the object’s appearance.
A key innovation in developing the new method was the use of a robotic arm to record the precise locations of two cameras used to image scenes containing a clear object. Having this detailed spatial information allowed the researchers to photograph the scene, remove the object and scan it in a CT scanner, and then place it back into the scene – both digitally and in real life – so that the real-life scene and its virtual reconstruction could be compared accurately.
“The robotic arm allows us to obtain a photograph and a 2D computed, or rendered, image that can be compared pixel by pixel to measure how well the images match,” said Alessandro Dal Corso, co-leader of the research team. “This quantitative comparison was not possible with previous techniques and requires extremely precise alignment between the digital rendering and photograph.”
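The article does not specify which error metric the researchers used; a common choice for this kind of pixel-by-pixel comparison is the root-mean-square error (RMSE) between the photograph and the rendered image, which can be sketched as follows (images here are simplified to equal-sized 2D lists of grayscale values in [0, 1]):

```python
import math

def rmse(photo, render):
    """Root-mean-square error between two aligned grayscale images,
    each given as a 2D list of pixel values in [0, 1]. A value of
    0.0 means the rendering matches the photograph exactly."""
    if len(photo) != len(render) or len(photo[0]) != len(render[0]):
        raise ValueError("images must have the same dimensions")
    total, count = 0.0, 0
    for row_p, row_r in zip(photo, render):
        for p, r in zip(row_p, row_r):
            total += (p - r) ** 2
            count += 1
    return math.sqrt(total / count)
```

As the quote notes, such a per-pixel metric is only meaningful when the rendering and photograph are precisely aligned; any misregistration would dominate the error.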
Once the digital versions of the objects are finalised, the method provides information about each object’s material properties separately from its shape. “This allows the scanned glass objects to still look realistic when placed in a completely different digital environment,” explained Jeppe Frisvad, a member of the research team. “For example, it could be placed on a table in a digital living-room or on the counter in a virtual kitchen.”
Using an optical setup containing readily available components, the researchers tested their new workflow by digitising three scenes, each containing a different glass object on a table with a white and grey checkerboard backdrop.
They began by acquiring structured light scans of the scene, a technique that uses the deformation of a projected pattern to calculate the depth and surfaces of objects in the scene.
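The article does not detail the projected patterns; a common structured-light scheme (assumed here for illustration) projects a sequence of Gray-code stripe patterns, decodes a stripe index at each camera pixel, and then recovers depth by triangulation. A minimal sketch, assuming a rectified camera-projector pair:

```python
def decode_gray(bits):
    """Decode a Gray-code bit sequence (most significant bit first)
    into a stripe index. Each bit comes from thresholding one
    projected pattern at a given camera pixel."""
    value = 0
    for bit in bits:
        # Each decoded binary bit is the Gray bit XOR the previous binary bit.
        value = (value << 1) | (bit ^ (value & 1))
    return value

def depth_from_disparity(x_cam, x_proj, baseline, focal):
    """For a rectified camera-projector pair, similar triangles give
    depth z = baseline * focal / disparity, where the disparity is
    the offset between the camera pixel column and the decoded
    projector stripe column."""
    disparity = x_cam - x_proj
    if disparity <= 0:
        raise ValueError("invalid correspondence: point at or beyond infinity")
    return baseline * focal / disparity
```

For example, decoding the Gray sequence `[1, 1, 1]` yields stripe index 5, and a 5-pixel disparity with a 0.2 m baseline and 500-pixel focal length gives a depth of 20 m. The real pipeline would also need calibration and handling of unrectified geometry, which this sketch omits.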
They also used a chrome sphere to acquire a 360-degree image of the surroundings. The scene was illuminated with LEDs arranged in an arc to capture how light coming from different angles interacted with the opaque parts of the scene. The researchers also separately scanned the glass objects in a CT scanner, which provided information to reconstruct the object’s surface. Finally, the digital version of the scene and the rendered glass object were combined to produce a 3D representation of the whole scene.
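The chrome-sphere step works because each pixel on the mirrored sphere reflects light arriving from a known direction, so a single photograph samples the full surroundings. The article does not give the mapping the researchers used; a standard sketch, assuming an orthographic camera looking along -z at a unit sphere, is:

```python
import math

def sphere_reflection(x, y):
    """Direction of the environment light captured at normalized image
    coordinates (x, y) on a mirrored unit sphere, assuming an
    orthographic camera viewing along -z. The surface normal is
    n = (x, y, sqrt(1 - x^2 - y^2)); reflecting the viewing ray d
    about n via r = d - 2 (d . n) n gives the incoming direction."""
    z2 = 1.0 - x * x - y * y
    if z2 < 0:
        raise ValueError("point lies outside the sphere silhouette")
    nz = math.sqrt(z2)
    d = (0.0, 0.0, -1.0)     # viewing ray direction
    dot = d[2] * nz          # d . n (d has only a z component)
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, (x, y, nz)))
```

At the sphere’s centre the reflection points straight back at the camera, `(0, 0, 1)`, while at the silhouette it grazes off toward the scene behind the sphere; sampling every pixel in between yields the 360-degree environment image.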
Quantitative analysis showed that the images of the digital scene and the real-world scene matched well and that each step of the new imaging workflow contributed to the similarity between the rendered images and the photographs.
“Because the photographs are taken under controlled conditions, we can make quantitative comparisons that can be used to improve the reconstruction,” said Frisvad. “For example, it is difficult to judge by eye if the object surface reconstructed from the CT scan is accurate, but if the comparison shows errors, then we can use that information to improve the algorithms that reconstruct the surface from the CT scan.”