In Part I, I wrote that the Bayesian brain hypothesis is the last great barrier to a full understanding of intelligence, and I described the difference between the Bayesian model of perception and the Rebel Science model. In this post, I explain why the latter is superior.
The Real GOFAI Lesson
History repeats itself. During the second half of the 20th century, AI experts were convinced that intelligence was just symbol manipulation. Their rationale was based on the observation that humans are very good at using and manipulating symbols. It was a classic case of equating a system with its behavior, i.e., with what it does. Sure, we can manipulate symbols but we can do much more than that. Needless to say, symbolic AI (aka "good old fashioned AI" or GOFAI) died a slow and painful death.
Did the AI cognoscenti learn their lesson from the GOFAI debacle? Not really. They are busy repeating the same mistake. The new rationale is that the brain is a Bayesian system because it is very good at handling probabilities. That is regrettable because the real lesson of GOFAI is that what we do and how we do it are two different things. Our Bayesian-like behavior is just one of the many products of our intelligence, not the basis of it.
Why is the Rebel Science model of perception superior to the Bayesian model? Here is why:
- Humans reason with absolute certainties, not probabilities. This is true even when we reason about probabilities.
- Even though our senses are plagued by noise and incomplete data, humans do not assume that the world is probabilistic.
- It is certainly true that everything is probabilistic at the quantum level, but not at the macroscopic level at which our senses operate. At that level, almost everything is deterministic; probabilistic events are few and far between.
In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own.

In a weird sort of way, we seem to be coming full circle. GOFAI scientists, especially the recently deceased John McCarthy, used to believe that AI was based on formal logic. The idea was abandoned when it became clear that formal logic could not handle common sense or the uncertainties in the sensory space. But if humans are not probability thinkers, how is it that they handle probabilistic sensory stimuli so well? Counterintuitively, the solution rests on the assumption that the world is perfect. That is the topic of my next post in this series. Stay tuned.

I don't have much time, but I think this is something that must be heard, not just because my entire approach to AI revolves around it but also because these ideas are crucial to our understanding of intelligence.