Google has said it is experimenting with getting computers to simulate the learning process of the human brain, one of the unusual projects under way for researchers in its X Lab.
Computers programmed with algorithms intended to mimic neural connections “learned” to recognise cats after being shown a sampling of YouTube videos, Google fellow Mr Jeff Dean and visiting faculty member Mr Andrew Ng said in a blog post.
“Our hypothesis was that it would learn to recognise common objects in those videos,” the researchers said.
“Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats,” they continued.
“Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat.”
The computer, essentially, discovered for itself what a cat looked like, according to Mr Dean and Mr Ng.
The computations were spread across an “artificial neural network” of 16,000 processors and a billion connections in Google data centres.
The small-scale “newborn brain” was shown YouTube images for a week to see what it would learn.
“It ‘discovered’ what a cat looked like by itself from only unlabeled YouTube stills,” the researchers said.
“That’s what we mean by self-taught learning.”
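For readers curious what “self-taught learning” looks like in practice, the sketch below shows the general idea at toy scale: a tiny autoencoder that learns feature detectors from unlabeled image patches without ever being given a label. This is only an illustrative assumption, not the Google system described above (which ran across 16,000 processors); the patch size, layer sizes and random stand-in data here are hypothetical.

```python
# Minimal, illustrative sketch of unsupervised ("self-taught") feature
# learning: a tiny autoencoder trained on unlabeled patches. NOT the actual
# Google system; all sizes and data below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for unlabeled video stills: random 8x8 grayscale patches.
patches = rng.random((10_000, 64))          # 10,000 patches, 64 pixels each
patches -= patches.mean(axis=0)             # simple centring

n_hidden = 25                               # number of learned "features"
W1 = rng.normal(0, 0.1, (64, n_hidden))     # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))     # decoder weights
lr = 0.01

for epoch in range(20):
    # Encode each patch into hidden activations, then try to reconstruct it.
    hidden = np.tanh(patches @ W1)
    recon = hidden @ W2
    err = recon - patches                   # reconstruction error; no labels used

    # Backpropagate the squared reconstruction error.
    grad_W2 = hidden.T @ err / len(patches)
    grad_hidden = (err @ W2.T) * (1 - hidden ** 2)
    grad_W1 = patches.T @ grad_hidden / len(patches)

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# Each column of W1 is now a learned feature detector: a unit that responds
# strongly to a recurring pattern in the unlabeled data, which is the toy
# analogue of the "cat neuron" the researchers describe.
print("mean squared reconstruction error:", float((err ** 2).mean()))
```

The point of the sketch is simply that the network is never told what any patch contains; structure emerges from repeated exposure to unlabeled data, which is what the researchers mean by the system “discovering” cats on its own.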
Google researchers are building a larger model and are working on ways to apply the artificial neural network approach to improve technology for speech recognition and natural language modelling, according to Mr Dean and Mr Ng.
“Someday this could make the tools you use every day work better, faster, and smarter,” they said.
Mr Dean and Mr Ng conceded that there is a long road ahead, since an adult human brain has around 100 trillion connections.
Google's X Lab, headed by company co-founder Mr Sergey Brin, is known for its work on innovations such as a self-driving car and “Terminator” film-style glasses that overlay Internet information on what the wearer is seeing.