Deep learning is a machine learning technique for pattern representation and recognition based on multi-layered statistical neural networks. It is all the rage lately: big corporations like Google and Facebook are spending billions to set up labs and to acquire experts and companies with experience in the technology. In this article, I argue that the current approach to deep learning will not lead to human-like intelligence because it is not the way the brain works.
There is no question that the brain classifies knowledge using a hierarchical architecture. The representation of objects in memory is compositional; that is, higher-level representations are built on top of lower-level ones. For example, low-level visual representations might consist of edges and lines, which can be combined into higher-level objects such as a nose or an eye. So the one thing deep learning networks have going for them is that they use multiple layers to form a hierarchy of representations.
A deep learning network consists of multiple layers of neurons. Each pair of adjacent layers forms a restricted Boltzmann machine, or RBM, with a visible layer and a hidden layer.
[Figure: Restricted Boltzmann Machine]
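To make the stacked-RBM idea concrete, here is a minimal sketch of a single RBM trained with one-step contrastive divergence (CD-1). This is an illustrative NumPy implementation, not any particular library's API; the class name, toy data, and hyperparameters are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Minimal restricted Boltzmann machine trained with CD-1 (sketch)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        # Every visible unit connects to every hidden unit via W.
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one reconstruction step.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Move weights toward the data, away from the reconstruction.
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))  # reconstruction error

# Train on a toy set of two binary patterns.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 10, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_step(data) for _ in range(200)]
```

Note that even this stripped-down version relies on floating-point weights, sigmoids, and gradient-like updates, which is precisely the kind of machinery the criticisms below call into question.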
There are a number of problems with deep learning networks that make them unsuitable for the goal of emulating the brain. I list them below.
- A deep learning network encodes knowledge by adjusting the strengths of the connections between visible and hidden units. There is no evidence that the brain uses variable synaptic strengths to encode degrees of certainty during sensory learning.
- Every visible unit is connected to every hidden unit in an RBM. There is no evidence that sensors make connections with every downstream neuron in the brain's cortex. In fact, as the brain learns, the number of connections (synapses) between sensors and the cortex is drastically reduced. The same is true for intracortical connections.
- Deep learning networks must be fine-tuned with supervised learning via backpropagation. There is no evidence that sensory learning in the brain is supervised.
- Deep learning networks are ill-suited for invariant pattern recognition, something that the brain does with ease.
- Deep learning networks use computationally expensive learning algorithms based on complex mathematical functions that require fast processors. There is no evidence that cortical neurons solve complex functions.
- Deep learning networks use static examples whereas the brain is bombarded with a constantly changing stream of sensory signals. Timing is essential to learning in the brain.
Current approaches to deep learning assume that the brain learns visual representations by computing input statistics. As a result, one would expect a gradation in the way patterns are recognized, especially in ambiguous images. However, psychological experiments with optical illusions suggest otherwise.
It seems much more likely that the cortex uses a winner-takes-all strategy whereby all possible patterns and sequences are learned regardless of probability. The only criterion is that they must occur often enough to be considered above mere random noise. During recognition, the patterns and sequences compete for activation and the ones with the highest number of hits are the winners. This kind of pattern learning is simple (no math is needed), fast and requires no supervision.
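The winner-takes-all strategy described above can be sketched in a few lines. This is a hypothetical toy model of my own devising, not a published algorithm: patterns are simply counted, kept only if they recur often enough to rise above noise, and recognition is a competition won by the stored pattern with the most hits.

```python
from collections import Counter

class WTALearner:
    """Toy winner-takes-all pattern learner (hypothetical sketch)."""

    def __init__(self, noise_threshold=2):
        self.counts = Counter()              # occurrence count per pattern
        self.noise_threshold = noise_threshold

    def observe(self, pattern):
        # Learning is just counting -- no math, no supervision.
        self.counts[frozenset(pattern)] += 1

    def patterns(self):
        # Keep only patterns seen often enough to exceed random noise.
        return [p for p, n in self.counts.items()
                if n >= self.noise_threshold]

    def recognize(self, signals):
        # Stored patterns compete; one hit per matching signal,
        # and the highest scorer wins outright.
        signals = set(signals)
        best, best_hits = None, 0
        for p in self.patterns():
            hits = len(p & signals)
            if hits > best_hits:
                best, best_hits = p, hits
        return best

learner = WTALearner()
for _ in range(5):
    learner.observe(["edge_a", "edge_b", "corner"])  # recurring pattern
learner.observe(["spurious"])                         # one-off noise
winner = learner.recognize(["edge_a", "corner", "glitch"])
```

In this sketch, the one-off "spurious" pattern never clears the noise threshold, while the recurring pattern wins recognition even from a partial, noisy input.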
See Secrets of the Holy Grail, Part II for more on this alternative approach to pattern learning.
In view of the above, I conclude that, in spite of its initial success, deep learning is just a red herring on the road to true AI. It is not true that the brain maintains internal probabilistic models of the world. After all is said and done, deep learning will be just a footnote in the annals of AI history. The same can be said about the Bayesian brain hypothesis, by the way.
See also:
- Mainstream AI Is Still Stuck in a Rut
- The Myth of the Bayesian Brain
- Why Convolutional Neural Networks Miss the Mark