
Artificial neural networks could run on cheap computers using new method
The proposed method, Sparse Evolutionary Training, accelerates the training of artificial neural networks with no decrease in accuracy and could allow complex problems to be solved on ordinary desktop computers.
Machine-learning algorithms learn to detect and predict patterns in data by processing many prior examples (training data) with or without human supervision. Through this approach, they can be used to perform specific tasks, such as image or voice recognition.
Artificial neural networks (ANNs) – a machine-learning system which loosely mimics the structure of the biological brain – are used widely in medicine, research and manufacturing. ANNs need numerous ‘layers’ and many millions of nodes to process training data, demanding a colossal amount of computational power and making them difficult to run without high-performance machines, such as gaming computers or even supercomputers.
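To give a sense of the scale involved, here is an illustrative calculation (the layer sizes are hypothetical, not figures from the article): in a fully connected layer, every node links to every node in the next layer, so the number of weights grows with the product of the layer sizes.

```python
# Illustrative only: parameter count of a single fully connected ('dense') layer.
# Every input node connects to every output node, plus one bias per output node.
def dense_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

# Two hypothetical 20,000-node layers back to back already need
# over 400 million parameters for that one connection.
print(f"{dense_params(20_000, 20_000):,}")  # 400,020,000
```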
“[ANNs] are at the very heart of the artificial intelligence revolution that is shaping every aspect of society and technology. They have led to major breakthroughs in various domains including speech recognition and computer vision,” said Professor Antonio Liotta, director of the University of Derby’s Data Science Research Centre. “However, the networks we have been able to handle so far are nowhere near the capacity of the human brain, made up of billions of neurons.”
“The very latest supercomputers would struggle with a 16 million neuron network the size of a frog’s brain, while it would take more than a dozen days for a powerful desktop computer to process a mere 100 million neuron network.”
Now, a group of data scientists from the University of Derby, Eindhoven University of Technology and the University of Texas at Austin has proposed a new method for accelerating ANNs. In theory, this could allow supercomputers to run ANNs as complex as biological brains.
This method, Sparse Evolutionary Training, accelerates the training process by replacing the fully connected layers of artificial neural networks with sparse layers inspired by the sparsity and scale-freeness found in biological brains.
“Our method replaces artificial neural networks’ fully connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy,” the researchers wrote in Nature Communications.
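The sketch below illustrates that idea in NumPy. It is not the authors' implementation (the layer size, the sparsity parameter epsilon and the zeta pruning fraction are illustrative assumptions), but it follows the general pattern the paper describes: start from a sparse random connectivity mask rather than a fully connected weight matrix, then periodically prune the weakest connections and regrow an equal number at random positions.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' code) of a sparse layer in the
# spirit of Sparse Evolutionary Training: start from a sparse random connectivity
# mask instead of a fully connected weight matrix, and after each training epoch
# prune the weakest connections and regrow the same number at random positions.
# A real implementation would use sparse data structures rather than dense arrays.

rng = np.random.default_rng(0)

def init_sparse_layer(n_in, n_out, epsilon=20):
    """Random sparse mask: the expected number of connections scales with
    (n_in + n_out) rather than n_in * n_out, hence the quadratic saving."""
    density = min(1.0, epsilon * (n_in + n_out) / (n_in * n_out))
    mask = rng.random((n_in, n_out)) < density
    weights = rng.standard_normal((n_in, n_out)) * 0.01 * mask
    return weights, mask

def evolve_connections(weights, mask, zeta=0.3):
    """Drop the zeta fraction of weakest existing links and regrow as many new ones."""
    active = np.argwhere(mask)
    magnitudes = np.abs(weights[mask])
    n_drop = int(zeta * len(active))
    # Remove the connections with the smallest weight magnitudes.
    drop = active[np.argsort(magnitudes)[:n_drop]]
    mask[drop[:, 0], drop[:, 1]] = False
    weights[drop[:, 0], drop[:, 1]] = 0.0
    # Regrow the same number of connections at randomly chosen empty positions.
    empty = np.argwhere(~mask)
    grow = empty[rng.choice(len(empty), size=n_drop, replace=False)]
    mask[grow[:, 0], grow[:, 1]] = True
    weights[grow[:, 0], grow[:, 1]] = rng.standard_normal(n_drop) * 0.01
    return weights, mask

# Example: a 1,000 x 1,000 layer keeps ~40,000 weights instead of 1,000,000.
w, m = init_sparse_layer(1_000, 1_000)
print(m.sum())                   # roughly 40,000 active connections
w, m = evolve_connections(w, m)  # would normally run after every training epoch
print(m.sum())                   # the connection count stays the same
```

Between evolution steps, the surviving connections would be trained with ordinary back-propagation; only those connections carry parameters, which is where the computational saving comes from.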
The data scientists demonstrated their method using 15 datasets from various fields such as genetics and natural language processing, and concluded that the approach could allow ANNs to scale up beyond what is currently possible.
“This work represents a major breakthrough in fundamental artificial intelligence and has immediate practical implications in industry and academia alike, enabling the analysis of vast sets of data, beyond what is currently possible,” added Liotta.
Accelerating ANNs could allow complex algorithms – such as those analysing entire genomes to understand the origins of genetic disease – to be run on less powerful and less expensive computers than those required today.