Photonic processor reaches unprecedented computing density

[Image: concept art showing matrix multiplication with light. Credit: University of Oxford]

An international team of researchers has developed a new method and architecture for photonic processors that speeds up the complex mathematical tasks at the heart of machine learning.

The growing number of AI applications – in areas such as autonomous vehicles, smart cities and speech recognition – places a heavy burden on current computer processors trying to keep up with demand.

The team's approach to the problem combines processing and data storage on a single chip using photonic (light-based) processors, which can outperform conventional electronic chips by processing information in parallel and at higher speed.

“Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs,” said Professor Wolfram Pernice of Münster University, who led the project. “This is much faster than conventional chips which rely on electronic data transfer, such as graphics cards or specialised hardware like TPUs.”

The scientists developed a hardware accelerator for matrix-vector multiplications, the backbone of artificial neural networks: networks loosely inspired by biological brains that are often used to process image or audio data. Because light of different wavelengths does not interfere, the researchers were able to use multiple wavelengths for parallel calculations (wavelength multiplexing), opening the door to photonic processors with higher data rates and more operations per unit area.
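
As an illustration of the operation being accelerated, the minimal NumPy sketch below shows a matrix-vector multiplication of the kind that dominates neural-network layers, and how giving each input stream its own wavelength channel would, in principle, let many such multiplications share one pass through the same weights. The matrix values and the four-channel count are invented for the example and are not taken from the study.

```python
import numpy as np

# Matrix-vector multiplication: the core arithmetic of a neural-network layer.
# W holds the layer's weights, x the input signal (e.g. pixel intensities).
W = np.array([[0.2, 0.5, 0.1],
              [0.7, 0.3, 0.9]])        # 2 x 3 weight matrix (illustrative values)
x = np.array([1.0, 0.5, 2.0])          # input vector

y = W @ x                              # y[i] = sum_j W[i, j] * x[j]
print(y)                               # -> [0.65 2.65]

# Wavelength multiplexing, conceptually: several independent inputs, each on
# its own wavelength, pass through the same weights simultaneously. Here the
# "wavelength" axis is simply an extra dimension of a batch.
num_wavelengths = 4                    # assumed channel count, not from the paper
X = np.random.rand(3, num_wavelengths) # one column of inputs per wavelength
Y = W @ X                              # all channels computed in one step
print(Y.shape)                         # -> (2, 4): one output per channel
```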

Seizing this opportunity, however, required the use of another technology as a light source: the chip-based “frequency comb” developed at EPFL.

“Our study is the first to apply frequency combs in the field of artificial neural networks,” said EPFL’s Professor Tobias Kippenberg. Kippenberg has pioneered the development of frequency combs, which provide a variety of wavelengths that can be processed independently of one another within the same photonic chip.

The researchers also chose to combine the photonic structures with phase-change materials as energy-efficient storage elements. This made it possible to store and preserve matrix elements without the need for an energy supply.
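
As a loose illustration of that idea, the sketch below models each phase-change cell as holding a normalised weight as one of a fixed set of optical transmission levels that persists with no power applied. The 16-level resolution and the mapping function are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

# Hypothetical model of a phase-change memory cell used as a weight store:
# the cell keeps an optical transmission level between 0 and 1, and that
# level persists without any energy supply. The 16 programmable levels
# below are an assumption for illustration, not a figure from the study.
levels = np.linspace(0.0, 1.0, 16)

def program_cell(weight: float) -> float:
    """Map a normalised weight onto the nearest available transmission level."""
    return float(levels[np.argmin(np.abs(levels - weight))])

weights = [0.13, 0.58, 0.91]                   # matrix elements to be stored
stored = [program_cell(w) for w in weights]    # values held until re-programmed
print(stored)                                  # e.g. [0.1333..., 0.6, 0.9333...]
```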

After fabricating the photonic chips, the researchers tested them on a neural network for recognising handwritten numbers. According to the researchers, the convolution operation between the input data and one or more filters – which can identify, for instance, edges in an image – is well suited to their matrix architecture, allowing them to reach unprecedented computing densities.
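
To see why such filtering maps well onto a matrix-multiplication engine, the sketch below unrolls a toy image into patches so that sliding a small edge-detecting filter over it becomes a single matrix-vector product; the 4x4 image, the 2x2 filter and all their values are invented for the example.

```python
import numpy as np

# Toy 4x4 "image" and a 2x2 edge-detecting filter (values invented for illustration).
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

# Unroll every 2x2 patch of the image into a row (the "im2col" trick), so the
# whole sliding-filter operation collapses into one matrix-vector product.
patches = np.array([image[i:i + 2, j:j + 2].ravel()
                    for i in range(3) for j in range(3)])   # 9 patches x 4 values

feature_map = patches @ kernel.ravel()   # one multiply step computes all 9 outputs
print(feature_map.reshape(3, 3))         # the filtered 3x3 feature map
```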

The University of Oxford’s Dr Johannes Feldmann, lead author of the study, explained: “Exploiting light for signal transfer enables the processor to perform parallel data processing through wavelength multiplexing, which leads to a higher computing density and many matrix multiplications being carried out in just one timestep. In contrast to traditional electronics, which usually work in the low GHz range, optical modulation can be achieved at speeds of up to 50 to 100GHz.”

The research, which is published in Nature this week, has far-reaching potential applications. These include processing larger amounts of data simultaneously for AI applications; larger neural networks for more accurate forecasts and more precise data analysis; handling larger volumes of clinical data to assist diagnosis; more rapid evaluation of sensor data in autonomous vehicles; and an expansion of cloud-computing infrastructure.
