Artificial neural networks may benefit from ‘sleep’ breaks
Scientists at Los Alamos National Laboratory have found that artificial neural networks (ANNs) which closely mimic biology may benefit from sleep-like cycles. These cycles appear to quell the instability associated with uninterrupted unsupervised learning.
“We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development,” said Yijing Watkins, who led the study.
Watkins and her colleagues had been developing networks that mimic how humans and other animals learn to process images. They found that the network simulations tend to become unstable after long periods of continuous unsupervised training, in which the networks classify images of objects without any prior examples to compare them against.
Unexpectedly, stability seemed to return when they exposed the networks to sleep-like states: “It was as though we were giving the neural networks the equivalent of a good night’s rest,” Watkins said.
According to the researchers, the problem of keeping machine-learning systems from tumbling into instability during unsupervised learning arises only in certain ANNs that closely mimic biology: spiking neural networks. In these networks, a neuron fires only when its membrane potential (a quantity analogous to the electrical charge of a biological neuron) reaches a threshold value. When a neuron fires, it generates a signal that affects the membrane potential of other neurons.
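The threshold-and-fire behaviour described above is commonly modelled as a leaky integrate-and-fire neuron. The sketch below is a minimal illustration of that general mechanism; the function name, parameter values and update rule are illustrative assumptions, not the model used in the Los Alamos study.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.95, reset=0.0):
    """Leaky integrate-and-fire neuron (illustrative sketch).

    Each step, the membrane potential leaks toward zero and integrates
    the incoming current. When the potential reaches the threshold,
    the neuron fires (emits a 1) and the potential resets.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)        # neuron fires a spike
            potential = reset       # membrane potential resets
        else:
            spikes.append(0)        # below threshold: no spike
    return spikes

# A steady sub-threshold input makes the neuron fire periodically:
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because firing depends on this internal state rather than a global mathematical operation, there is no built-in mechanism regulating the network's overall gain, which is where the instability the researchers describe can creep in.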
“The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system,” said co-author Garrett Kenyon.
The researchers described the decision to expose the networks to a digital sleep analogue as a last-ditch attempt to stabilise them. They experimented with various types of noise, comparable to white noise, and found that waves of Gaussian noise (noise with a normal distribution) were the most effective. The scientists said that this noise mimics the input received by human neurons during slow-wave sleep. This could suggest that slow-wave sleep helps ensure that neurons remain stable, protecting humans from effects such as hallucination.
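The 'sleep' idea amounts to driving the network with zero-mean Gaussian noise instead of training data. The sketch below illustrates this in the simplest possible terms; the array shapes, leak factor and noise scale are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sleep_phase(potentials, steps=100, noise_std=0.1, leak=0.95):
    """Drive leaky membrane potentials with zero-mean Gaussian noise
    instead of data, mimicking the input neurons receive during
    slow-wave sleep (illustrative sketch, not the study's algorithm)."""
    for _ in range(steps):
        noise = rng.normal(loc=0.0, scale=noise_std, size=potentials.shape)
        potentials = leak * potentials + noise
    return potentials

# Potentials that have drifted far from baseline relax back toward a
# bounded, noise-driven equilibrium under the leaky dynamics:
drifted = np.full(5, 2.0)
rested = sleep_phase(drifted)
print(np.abs(rested).max() < np.abs(drifted).max())
```

The point of the sketch is only that zero-mean noise combined with leaky dynamics pulls runaway state back toward a bounded baseline, which is the intuition behind the stabilising effect the researchers report.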
Watkins and her colleagues next plan to implement their algorithm on Intel’s Loihi neuromorphic research chip, which uses a spiking neural network to implement unsupervised learning. They hope that allowing the chip to 'sleep' occasionally will enable it to process information from a silicon retina camera in real time with greater stability.
If their findings confirm the need for a sleep analogue in ANNs, the same principle may apply to other biologically inspired AI systems in the future.