Saturday, February 15, 2014

The Billion Dollar AI Castle in the Air


High-tech companies (e.g., Microsoft, Google, Facebook, Netflix, Intel, Baidu, Amazon, etc.) are pouring billions of dollars into a branch of artificial intelligence called machine learning. The two main areas of interest are deep learning and the Bayesian brain. The goal of the industry is to use these technologies to emulate the capabilities of the human brain. Below, I argue that, in spite of their initial successes, current approaches to machine learning will fail, primarily because this is not the way the brain works.

This Is Not the Way the Brain Works

Some in the business have argued that the goal of machine learning is not to copy biological brains but to achieve useful intelligence by whatever means. To this I say, phooey. Symbolic AI, or GOFAI, failed precisely because it ignored neuroscience and psychology. The irony is that the most impressive results in machine learning occurred when researchers began to design artificial neural networks (ANNs) that were somewhat inspired by the architecture of the brain. Deep learning neural networks, especially convolutional neural nets, are attempts at copying the brain's cortical architecture, and the early results are impressive, relatively speaking. But this is unfortunate because researchers are now under the false impression that they have struck the mother lode, so to speak. Below, I list some of the reasons why, in my opinion, they are not even close.
  • Deep learning nets encode knowledge by adjusting connection strengths. There is no evidence that this is the way the brain does it.
  • Deep learning nets use a fixed pre-wired architecture. The evidence is that the cortex starts out with a huge number of connections, the majority of which disappear as the brain learns.
  • Convolutional neural nets are hard-wired for translational invariance. The evidence is that the brain uses a single mechanism for universal invariance.
  • Unlike the visual cortex, convolutional neural nets do not depend on saccades or microsaccades. This tells us that the brain uses a different method to process visual signals.
  • Deep learning nets use a single hierarchy for pattern learning and recognition. The evidence is that the brain's perceptual system uses two hierarchies, one for patterns and one for sequences of patterns.
  • The Bayesian brain hypothesis assumes that the brain uses probabilities for prediction and reasoning. The evidence is that the brain is not a probability thinker but a cause-effect thinker.
  • Proponents of the Bayesian brain assume that events in the world are inherently uncertain and that the job of an intelligent system is to compute the probabilities. The evidence is that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.
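The first point above, that deep learning nets encode knowledge solely by adjusting connection strengths, can be illustrated with a minimal sketch. This is a hypothetical toy example, not any particular company's system: a single-layer net whose only memory is its weight vector, nudged by gradient descent until it reproduces the logical OR function.

```python
import numpy as np

# Toy illustration: everything the net "knows" lives in its
# connection strengths (w, b), which gradient descent adjusts
# to reduce prediction error.

rng = np.random.default_rng(0)

# Training data: the OR function on two binary inputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 1.])

w = rng.normal(size=2)   # connection strengths
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    pred = sigmoid(X @ w + b)        # forward pass
    grad = pred - y                  # gradient of cross-entropy w.r.t. logits
    w -= lr * (X.T @ grad) / len(X)  # adjust connection strengths
    b -= lr * grad.mean()            # adjust bias

print(np.round(sigmoid(X @ w + b)))  # -> [0. 1. 1. 1.]
```

Whatever one thinks of the biological plausibility, this is the entire learning mechanism: no structure changes, no connections appear or disappear; only the numeric strengths move.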
A Castle in the Air

It feels like I am preaching in the wilderness, but someone has to do it. Of course, wherever there is a lot of money changing hands, self-preservation and politics are sure to be lurking right under the surface. My arguments will be dismissed by those who stand to profit from it all, and I will be painted as a crackpot and a lunatic (I don't deny that I'm insane), but I don't really care. My message is simple. There is no doubt that the industry is building an expensive castle in the air. Sure, they will have a few so-so successes here and there that will be heralded as proof that they know what they are doing. Google's much-ballyhooed cat-recognizing neural network comes to mind. But sooner or later, out of nowhere, and when they least expect it, someone else will come out with the real McCoy and the castle will come crashing down. The writing is on the wall.

See Also:

The Myth of the Bayesian Brain
The Second Great AI Red Herring Chase
Why Deep Learning Will Go the Way of Symbolic AI
Why Convolutional Neural Networks Miss the Mark
Secrets of the Holy Grail

1 comment:

Logan Bier said...

Thank god somebody finally wrote this article. I've been trying to tell people these things for years now.

You're correct: there are major gaps between how brains actually function and the synthetic mimics that billion-dollar neuroscience and computer science efforts are building. And these gaps in understanding go even deeper than what you alluded to here, although that's all true too.

Let's consider for a moment that the Penrose-Hameroff microtubule model of consciousness is accurate. This model suggests that quantum resonance action in neural microtubules gives rise to not only extant brainwave patterns, but consciousness as such. Even if this particular model doesn't do the noumena justice, it's pretty hard to argue that biology would never utilize quantum phenomena. Well, we don't even have computers that can perform the statistics to model the physics of that situation across a whole brain. In a sense, linguistic description of the phenomenon of consciousness has exceeded the computational modeling capacity of machine languages.

This is just scratching the surface of the ignorance though. All the likely computer languages these researchers are using are bound within first-order predicate logic. So, they can't even execute higher order statements without faking it through emulation. To assume without evidence that all minds on Earth operate within first-order bounds, or even need Peano axioms at all to function, is specious.

Neurophysics doesn't even presume to account for the implications of long-range entanglement of bosons in the neural material, either. Quantum physics just doesn't "scale" that far, as the internal logic of the local situation exceeds the data space of both deterministic and non-deterministic Turing machines. How could we even know what topological order emerges from interaction with negentropic objects in deep space?

Anyway, artificial intelligence researchers wear no clothes. We don't know what we don't know about the brain.