Neural network learns numbers by mimicking babies
Image credit: Rice University
A neural network that mimics how infants learn about the world has outperformed earlier networks that required extensive human input.
The network, created by researchers at Rice University and Baylor College of Medicine, both based in Texas, essentially learns to interpret images on its own, with little engineering input.
While traditional neural networks require researchers to feed them massive labelled data sets, the network, presented at the Neural Information Processing Systems conference in Barcelona, Spain, needs only a small amount of labelled data.
The network’s task was to learn to distinguish handwritten digits. First, the researchers showed it ten examples of each digit from zero to nine. The algorithm then developed its ability further on its own, using several thousand unlabelled examples.
The technique is called semi-supervised learning and closely resembles how newborns learn to understand the world.
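The labelling regime described above can be illustrated with a toy self-training loop. This is a minimal sketch, not the team’s actual algorithm: two synthetic point clusters stand in for digit classes, only ten points per class carry labels, and a nearest-centroid classifier pseudo-labels the rest and refits itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "digit" classes as well-separated 2-D clusters; these
# stand in for the feature vectors of real handwritten-digit images.
n = 200
X = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(n, 2)),
    rng.normal([3.0, 3.0], 0.5, size=(n, 2)),
])
y_true = np.repeat([0, 1], n)

# Only ten labelled examples per class, mirroring the article's setup;
# every other point starts out unlabelled (-1).
labels = np.full(len(X), -1)
for cls in (0, 1):
    labels[np.where(y_true == cls)[0][:10]] = cls
seed_mask = labels != -1

def centroids(pts, lab):
    """Mean of the points currently assigned to each class."""
    return np.stack([pts[lab == c].mean(axis=0) for c in (0, 1)])

# Self-training: fit centroids, pseudo-label everything by nearest
# centroid, pin the twenty true labels back in place, and refit.
c = centroids(X[seed_mask], labels[seed_mask])
pseudo = labels.copy()
for _ in range(5):
    dists = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
    pseudo = dists.argmin(axis=1)
    pseudo[seed_mask] = labels[seed_mask]
    c = centroids(X, pseudo)

accuracy = (pseudo == y_true).mean()
print(f"accuracy with 10 labels per class: {accuracy:.2%}")
```

The key property, as in the study, is that almost all of the learning signal comes from the unlabelled data; the handful of labels only anchors which cluster gets which name.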
“When babies learn to see during their first year, they get very little input about what things are,” explained Ankit Patel, assistant professor at Baylor and Rice, who led the study.
“Parents may label a few things - 'bottle', 'chair', 'momma' - but the baby can’t even understand spoken words at that point. It’s learning mostly unsupervised via some interaction with the world.”
Traditional neural networks using so-called supervised learning would require thousands of labelled examples of each handwritten digit.
The network developed by the Rice-Baylor team essentially mimics how the human visual cortex operates. It consists of several layers of artificial neurons, each looking for patterns.
The first layer scans the image and looks for patterns that would identify edges and colour changes. The second layer examines what the first layer has found and looks for patterns in its output.
“You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you’ve got a really deep and abstract understanding of the image,” Patel explained.
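The layer-by-layer picture Patel describes can be sketched in a few lines. This is an illustrative toy, not the team’s network: the first “layer” applies two hand-picked edge filters to a tiny image (a trained network would learn its filters from data), and the second layer summarises the patterns the first layer found.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Toy 8x8 "image": a bright vertical bar on a dark background.
img = np.zeros((8, 8))
img[:, 3:5] = 1.0

# Layer 1: hand-picked edge detectors (a real network learns these).
vert_edge = np.array([[-1.0, 1.0]])    # fires on vertical edges
horz_edge = np.array([[-1.0], [1.0]])  # fires on horizontal edges

layer1 = [relu(conv2d(img, k)) for k in (vert_edge, horz_edge)]

# Layer 2: looks for patterns in layer 1's output; here just a crude
# "how much edge energy of each kind" summary of the feature maps.
layer2 = np.array([fm.sum() for fm in layer1])
print("edge energy [vertical, horizontal]:", layer2)
```

The vertical bar excites the vertical-edge map and leaves the horizontal one silent; a deeper stack would keep combining such maps into ever more abstract descriptions, as the quote suggests.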
The network starts completely blank and gradually forms its understanding of the environment as it interacts with the world. During the learning process, each layer of neurons becomes more and more specialised as it is exposed to the visual stimuli.
Such networks are being used in driverless cars.