High-tech companies (e.g., Microsoft, Google, Facebook, Netflix, Intel, Baidu, and Amazon) are pouring billions of dollars into a branch of artificial intelligence called machine learning. The two main areas of interest are deep learning and the Bayesian brain. The industry's goal is to use these technologies to emulate the capabilities of the human brain. Below, I argue that, in spite of their initial successes, current approaches to machine learning will fail, primarily because this is not the way the brain works.
This Is Not the Way the Brain Works
Some in the business have argued that the goal of machine learning is not to copy biological brains but to achieve useful intelligence by whatever means. To this I say, phooey. Symbolic AI, or GOFAI, failed precisely because it ignored neuroscience and psychology. The irony is that the most impressive results in machine learning came only when researchers began to design artificial neural networks (ANNs) that were somewhat inspired by the architecture of the brain. Deep learning neural networks, especially convolutional neural nets, are attempts at copying the brain's cortical architecture, and the early results are impressive, relatively speaking. But this is unfortunate, because researchers are now under the false impression that they have struck the mother lode, so to speak. Below, I list some of the reasons why, in my opinion, they are not even close.
- Deep learning nets encode knowledge by adjusting connection strengths. There is no evidence that this is the way the brain does it.
- Deep learning nets use a fixed pre-wired architecture. The evidence is that the cortex starts out with a huge number of connections, the majority of which disappear as the brain learns.
- Convolutional neural nets are hard-wired for translational invariance. The evidence is that the brain uses a single mechanism for universal invariance.
- Unlike the visual cortex, convolutional neural nets do not depend on saccades or microsaccades. This tells us that the brain uses a different method to process visual signals.
- Deep learning nets use a single hierarchy for pattern learning and recognition. The evidence is that the brain's perceptual system uses two hierarchies, one for patterns and one for sequences of patterns.
- The Bayesian brain hypothesis assumes that the brain uses probabilities for prediction and reasoning. The evidence is that the brain is not a probability thinker but a cause-effect thinker.
- Proponents of the Bayesian brain assume that events in the world are inherently uncertain and that the job of an intelligent system is to compute the probabilities. The evidence is that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.
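To make the first bullet concrete, here is a minimal sketch of what "encoding knowledge by adjusting connection strengths" means in practice: a single linear neuron trained by gradient descent, where everything the network "knows" ends up stored in two numbers. This is an illustrative toy, not the implementation of any particular deep learning system, and the function name and parameters are my own.

```python
def train_neuron(samples, lr=0.1, epochs=100):
    """Learn weight w and bias b so that w*x + b approximates y."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            err = pred - y
            # All learned knowledge is encoded purely by nudging
            # connection strengths in proportion to the error:
            w -= lr * err * x
            b -= lr * err
    return w, b

# Fit the noiseless relationship y = 2x + 1 from three examples.
w, b = train_neuron([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
```

After training, w converges toward 2 and b toward 1. Deep learning nets do exactly this, only with millions of such weights; the bullet's objection is that there is no evidence the brain stores its knowledge this way.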
It feels like I am preaching in the wilderness, but someone has to do it. Of course, wherever a lot of money changes hands, self-preservation and politics are sure to be lurking just under the surface. My arguments will be dismissed by those who stand to profit from it all, and I will be painted as a crackpot and a lunatic (I don't deny that I'm insane), but I don't really care. My message is simple: there is no doubt that the industry is building an expensive castle in the air. Sure, they will have a few so-so successes here and there that will be heralded as proof that they know what they are doing; Google's much-ballyhooed cat-recognizing neural network comes to mind. But sooner or later, out of nowhere, and when they least expect it, someone else will come out with the real McCoy and the castle will come crashing down. The writing is on the wall.
See Also:
The Myth of the Bayesian Brain
The Second Great AI Red Herring Chase
Why Deep Learning Will Go the Way of Symbolic AI
Why Convolutional Neural Networks Miss the Mark
Secrets of the Holy Grail