[Image: human shoulder joint in X-ray on a grey background. Deep learning could help visualise X-ray data in 3D. Image credit: Vadimrysev/Dreamstime]

Scientists in the US have developed a computational framework that they say can create 3D visualisations from X-ray data hundreds of times faster than traditional methods can.

According to the scientists at the US Department of Energy’s (DOE) Argonne National Laboratory, artificial intelligence (AI) has emerged as a versatile solution to the issues posed by big data processing in the medical sector.

Scientists who use the Advanced Photon Source (APS), a DOE Office of Science User Facility at Argonne, to process 3D images could turn X-ray data into visible, understandable shapes at a much faster rate. A breakthrough in this area could have implications for astronomy, electron microscopy, and other areas of science dependent on large amounts of 3D data, the scientists said.

“In order to make full use of what the upgraded APS will be capable of, we have to reinvent data analytics. Our current methods are not enough to keep up. Machine learning can make full use and go beyond what is currently possible,” said Mathew Cherukara at Argonne.

The research team, which includes scientists from three Argonne divisions, developed the computational framework, called 3D-CDI-NN. CDI stands for coherent diffraction imaging, an X-ray technique that involves bouncing ultra-bright X-ray beams off samples. The scattered beams are then collected by detectors as data, and it takes some computational effort to turn that data into images.
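To make that measurement concrete, here is a minimal toy sketch of the CDI forward model, assuming an idealised far-field geometry: the detector records the squared magnitude of the 3D Fourier transform of the sample, and the phase of the scattered beam is lost. The array sizes and the cube-shaped "crystal" are purely illustrative, not the configuration used in the study.

```python
import numpy as np

# Toy far-field CDI forward model: the detector records the squared
# magnitude of the 3D Fourier transform of the sample; the phase of
# the scattered wave is lost in the measurement.
n = 64
density = np.zeros((n, n, n))
density[24:40, 24:40, 24:40] = 1.0     # a cube-shaped "crystal"

field = np.fft.fftshift(np.fft.fftn(density))  # far-field scattered wave
intensity = np.abs(field) ** 2                 # what the detector sees

print(intensity.shape)  # (64, 64, 64): one intensity per detector voxel,
                        # with no phase information anywhere in it
```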

According to Cherukara, leader of the Computational X-ray Science group in Argonne’s X-ray Science Division (XSD), part of the challenge is that the detectors capture only some of the information from the beams: the intensity of the scattered X-rays, but not their phase.

But there is important information in the missing data, and scientists rely on computers to fill it in. Cherukara noted that, while this takes some time to do in 2D, it takes even longer with 3D images. The solution is to train an AI to recognise objects and the microscopic changes they undergo directly from the raw data, without having to fill in the missing information.
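The conventional way to fill in that missing information is iterative phase retrieval, which cycles between detector space and real space until the image converges – the slow step that 3D-CDI-NN is designed to shortcut. Below is a minimal sketch of one classic variant, error reduction, assuming the sample's support (the region where it has density) is known; it is an illustration of the technique, not Argonne's code.

```python
import numpy as np

def error_reduction(measured_amplitude, support, n_iters=200):
    """Classic error-reduction phase retrieval: alternate between
    enforcing the measured Fourier amplitudes (detector space) and a
    known support constraint (real space) until the image converges."""
    # Start from random phases attached to the measured amplitudes.
    rng = np.random.default_rng(seed=0)
    phases = rng.uniform(0.0, 2.0 * np.pi, measured_amplitude.shape)
    field = measured_amplitude * np.exp(1j * phases)
    for _ in range(n_iters):
        obj = np.fft.ifftn(field)      # detector space -> real space
        obj = obj * support            # zero out density outside the support
        field = np.fft.fftn(obj)       # real space -> detector space
        # Keep the computed phases, restore the measured amplitudes.
        field = measured_amplitude * np.exp(1j * np.angle(field))
    return np.abs(np.fft.ifftn(field))
```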

To tackle this, the team started with simulated X-ray data to train the neural network – a machine-learning model that learns to predict outcomes from the data it is given. Henry Chan, a postdoctoral researcher in the Center for Nanoscale Materials (CNM) at Argonne, led this part of the study.

“We used computer simulations to create crystals of different shapes and sizes, and we converted them into images and diffraction patterns for the neural network to learn,” Chan explained. “The ease of quickly generating many realistic crystals for training is the benefit of simulations.”
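A hypothetical version of that data pipeline might look like the sketch below: draw a random crystal shape, take the magnitude of its 3D Fourier transform as the simulated diffraction data, and keep the (diffraction, crystal) pair as one training example. The ellipsoidal shapes and grid sizes are placeholder choices, not the ones used in the study.

```python
import numpy as np

def make_training_pair(n=32, rng=None):
    """Simulate one (diffraction, crystal) training example: a random
    ellipsoidal crystal and the magnitude of its 3D Fourier transform."""
    if rng is None:
        rng = np.random.default_rng()
    # Random ellipsoid as a binary density on an n^3 grid.
    zz, yy, xx = np.mgrid[:n, :n, :n] - n / 2
    rz, ry, rx = rng.uniform(n / 8, n / 4, size=3)
    crystal = ((xx / rx) ** 2 + (yy / ry) ** 2 + (zz / rz) ** 2 <= 1.0)
    crystal = crystal.astype(np.float32)
    # The network's input: diffraction amplitudes from the forward model.
    diffraction = np.abs(np.fft.fftshift(np.fft.fftn(crystal)))
    return diffraction.astype(np.float32), crystal
```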

Once the network is trained, said Stephan Hruszkewycz, physicist and group leader with Argonne’s Materials Science Division, it can come close to the right answer, quickly. However, there is still room for refinement, he added, so the 3D-CDI-NN framework includes a process to get the network the rest of the way there.
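One plausible reading of that two-stage design, sketched below under assumed interfaces, is to use the network's fast output to seed a short run of conventional iterative refinement, so only a handful of iterations are needed instead of hundreds. Here `nn_estimate` and the thresholded support are hypothetical stand-ins, not the framework's actual refinement step.

```python
import numpy as np

def refine(nn_estimate, measured_amplitude, n_iters=20):
    """Seed a short run of iterative phase retrieval with the network's
    fast estimate, nudging it into full agreement with the measured
    diffraction amplitudes. `nn_estimate` is the network's 3D density."""
    # Estimate the support by thresholding the network's output.
    support = np.abs(nn_estimate) > 0.1 * np.abs(nn_estimate).max()
    obj = nn_estimate.astype(np.complex128)
    for _ in range(n_iters):
        field = np.fft.fftn(obj)
        # Keep the computed phases, restore the measured amplitudes.
        field = measured_amplitude * np.exp(1j * np.angle(field))
        obj = np.fft.ifftn(field) * support
    return np.abs(obj)
```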

“The Materials Science Division cares about coherent diffraction because you can see materials at few-nanometre length scales – about 100,000 times smaller than the width of a human hair – with X-rays that penetrate environments,” Hruszkewycz said.

He added: “This study is a demonstration of these advanced methods, and it facilitates the imaging process. We want to know what a material is, and how it changes over time, and this will help us make better pictures of it as we make measurements.”

As a final step, 3D-CDI-NN’s ability to fill in missing information and produce a 3D visualisation was tested on real X-ray data from tiny particles of gold, collected at beamline 34-ID-C at the APS. The researchers found that the method was hundreds of times faster than traditional approaches on simulated data, and nearly as fast on real APS data. The tests also showed that the network can reconstruct images from less data than conventional methods usually require to compensate for the information the detectors miss.

The next step for this research, according to Chan, is to integrate the network into the APS’s workflow, so that it learns from data as it is taken. If the network learns from data at the beamline, he said, it will continuously improve.
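In outline, such an online-learning loop could look like the PyTorch sketch below, where each new measurement and its refined reconstruction become a fresh training example. The tiny stand-in model and the `on_new_measurement` hook are hypothetical, not the actual 3D-CDI-NN architecture or workflow code.

```python
import torch
from torch import nn

# Tiny stand-in model; the real 3D-CDI-NN is more elaborate.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def on_new_measurement(diffraction, reconstruction):
    """Hypothetical hook: one fine-tuning step per measurement taken at
    the beamline, using the refined reconstruction as the label.
    Both tensors are assumed to be shaped (1, 1, depth, height, width)."""
    optimiser.zero_grad()
    loss = loss_fn(model(diffraction), reconstruction)
    loss.backward()
    optimiser.step()
    return loss.item()
```

In a setup like this, every reconstruction the beamline produces doubles as new training data, which is one way the network could keep improving as measurements accumulate.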
