The world's largest artificial neural network can learn and recognise objects, characters and voices in the same way as humans

GPU-based neural network cleverer than Google's

The world’s largest artificial neural network, roughly 6.5 times the size of the one built by Google in 2012, aims at much more than just recognising cats.

The result of a collaboration between graphics-processing-unit manufacturer NVIDIA and Stanford University researchers, the GPU-based network promises a breakthrough in machine learning. Its creators believe it will be able to learn how to recognise objects, characters and voices 'almost' in the same way as humans.

The project and its achievements were presented today at the International Supercomputing Conference (ISC) in Hamburg, Germany.

Compared with Google's previous record-setting network, the NVIDIA and Stanford University team managed not only to increase the network's size but also to reduce the number of computer servers needed to build it.

While Google had used approximately 1,000 central processing unit (CPU)-based servers, or 16,000 CPU cores, NVIDIA achieved better results with only three servers based on graphics processing technology the company had developed earlier.

While Google’s network taught itself to recognise cats in a series of YouTube videos and operated with 1.7 billion parameters representing virtual connections between neurons, NVIDIA’s network can handle 6.5 times as many – 11.2 billion in total – getting another step closer to the ability to mimic human learning processes.
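To give a sense of where such parameter counts come from, the sketch below counts the connections in a fully connected network; the layer sizes are hypothetical and are not those of either the Google or the NVIDIA network, they merely show how a few wide layers add up to billions of parameters.

```python
# Illustrative only: layer sizes are hypothetical, not those of either network.
# In a fully connected network, each pair of adjacent layers contributes a
# weight matrix of size (inputs x outputs), plus a bias per output neuron.

def count_parameters(layer_sizes):
    """Count weights and biases in a fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix plus bias vector
    return total

# A few wide layers are enough to reach billions of parameters.
print(count_parameters([200_000, 20_000, 20_000, 10_000]))  # roughly 4.6 billion
```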

“Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modelling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at NVIDIA.  “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”
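The workload Gupta refers to is dominated by dense linear algebra, which is exactly what GPUs are built for. The sketch below, with arbitrary sizes and written in plain NumPy, shows the kind of matrix multiplication that a neural-network training step repeats millions of times; on a GPU-accelerated server the same operation can be run through a GPU array library instead.

```python
# A minimal sketch of the workload GPUs accelerate: one forward step of a
# fully connected layer (activations x weights). Sizes are arbitrary.
import numpy as np

batch, n_in, n_out = 256, 4096, 4096
activations = np.random.rand(batch, n_in).astype(np.float32)
weights = np.random.rand(n_in, n_out).astype(np.float32)

# Training repeats this multiply (and the matching backward pass)
# millions of times over the dataset.
outputs = activations @ weights
print(outputs.shape)  # (256, 4096)
```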

Machine learning, a fast-growing branch of artificial intelligence (AI), aims to enable computers to act without being explicitly programmed by humans. The technology already underpins web search tools and self-driving cars, and researchers are developing further machine-learning-based applications, for example in speech recognition and human genome mapping.

“GPUs significantly accelerate the training of our neural networks on very large amounts of data, allowing us to rapidly explore novel algorithms and training techniques,” said Vlad Sejnoha, chief technology officer at Nuance, a company that is currently training its neural network to understand natural speech. “The resulting models improve accuracy across all of our core technologies in healthcare, enterprise and mobile-consumer markets.”

Once trained, the models can recognise spoken words by relating their patterns to the patterns learned earlier.
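As a toy illustration of that idea (not Nuance's actual system), the sketch below matches an incoming feature vector against per-word patterns "learned" earlier; the words and vectors are invented for the example.

```python
# Toy sketch: recognise a spoken word by comparing its feature vector with
# previously learned patterns. Words and vectors are invented for illustration.
import numpy as np

learned_patterns = {            # hypothetical per-word feature templates
    "yes":  np.array([0.9, 0.1, 0.2]),
    "no":   np.array([0.1, 0.8, 0.3]),
    "stop": np.array([0.2, 0.3, 0.9]),
}

def recognise(features):
    """Return the learned word whose pattern is closest by cosine similarity."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(learned_patterns, key=lambda w: cosine(features, learned_patterns[w]))

print(recognise(np.array([0.85, 0.15, 0.25])))  # -> "yes"
```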
