Previously, I wrote that, by embracing Bayesian statistics, the artificial intelligence community has embarked on yet another great red herring chase. I claimed that Bayesian statistics will prove to be just as wasteful of time, brainpower, and money as the symbolic AI craze of the last century. In this article, I explain why the Bayesian mindset is injurious to progress in our understanding of intelligence. I do not argue that the Bayesian approach is bad in and of itself, but that, when it comes to explaining the brain's ability to handle uncertainty, there is a competing model that is orders of magnitude better.
The Last Great Barrier to Fully Understanding Intelligence
Scientists are a conservative and taciturn lot. They will live with a myth or obvious falsehood for decades, even centuries, because the humiliation and other hardships that come from rejecting the lie are too painful for them to bear. The Bayesian brain is just such a myth. The problem is that it is now so firmly entrenched in the AI community that accommodating a different perspective would amount to career suicide for many. I will argue that the Bayesian mindset is the last great barrier to progress in AI because it cripples our understanding of the most important aspect of intelligence: perception. I believe that a correct understanding of perception will unleash a flood of insights that will quickly lead to a full understanding of intelligence, artificial or otherwise.
Two Competing Models of Perception
Below is the essence of the two competing models of perception.
- The Bayesian model assumes that events in the world are inherently uncertain and that the job of an intelligent system is to discover the probabilities.
- The Rebel Science model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.
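To make the contrast concrete, here is a toy sketch of my own (not from the article, and not a claim about how either camp actually implements perception). It assumes a hypothetical event stream in which event B always follows event A after exactly two ticks. A Bayesian-style learner estimates a probability for the relation; a perfection-seeking learner hypothesizes an exact rule and keeps it only if it holds without exception.

```python
# Toy world: a deterministic regularity, B always follows A after 2 ticks.
stream = ["A" if t % 5 == 0 else "-" for t in range(100)]
for i, e in enumerate(stream):
    if e == "A" and i + 2 < len(stream):
        stream[i + 2] = "B"

# Bayesian-style learner: treats the relation as inherently uncertain
# and estimates P(B at t+2 | A at t) from counts.
a_count = b_after_a = 0
for i, e in enumerate(stream):
    if e == "A" and i + 2 < len(stream):
        a_count += 1
        if stream[i + 2] == "B":
            b_after_a += 1
prob = (b_after_a + 1) / (a_count + 2)  # Laplace-smoothed estimate, never reaches 1

# Perfection-seeking learner: hypothesizes the exact rule
# "B follows A after 2 ticks" and accepts it only if it never fails.
rule_holds = all(
    stream[i + 2] == "B"
    for i, e in enumerate(stream)
    if e == "A" and i + 2 < len(stream)
)

print(f"estimated probability: {prob:.3f}")   # close to 1, but never exactly 1
print(f"exact rule holds without exception: {rule_holds}")
```

The point of the sketch is only that the two learners ask different questions of the same data: one asks "how likely?", the other asks "is it perfectly consistent?"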
The Second Great AI Red Herring Chase