"Vision Is More Like a Song than a Painting"
About two weeks ago, in Part II of my article A Fundamental Flaw in Numenta's HTM Design, I wrote that "visual recognition is not unlike speech or music recognition." Yesterday, I did a little search on Google and discovered that Jeff Hawkins and I are pretty much in agreement. On page 58 of his book, On Intelligence, Hawkins writes that "vision is more like a song than a painting." On page 31, he writes, "this is how people learn practically everything, as a sequence of patterns." Hawkins and I are on the same page, at least as far as this aspect of intelligence is concerned.
One would think that Hawkins would have stuck to his original principles about how the brain learns, but after co-founding Numenta Inc., he turned around and contradicted himself. In Numenta's HTM model, Hawkins abandons his gut instincts and gives in to Dileep George's erroneous ideas on how visual learning should work. George believes that the visual cortex sees an image as a hierarchy of small patterns, as opposed to a hierarchy of sequences of patterns. He thinks that the brain sees an entire image all at once and that the visual receptive field increases as one goes up the hierarchy. As I have shown previously on this blog, George is wrong, period. Hawkins is correct that we learn and see everything as a sequence of patterns. We never see a whole picture all at once as a set of small concurrent patches. Pattern learning happens only at the bottom level of the memory hierarchy, and even there it does not work the way George believes it does. Why did Hawkins change his mind?
Blinded by that Old Math Magic of Academia
I think Hawkins allowed himself to be bamboozled by George's mathematical sleights of hand. It's the old "when all you've got is a hammer, everything looks like a nail" sort of thing, all over again. George is still playing the math hand at his new AI venture, Vicarious Systems, Inc. I wish him well but he will fail, in my opinion. AI and the brain are not about math. As Hawkins claims in his book, On Intelligence, intelligence is almost entirely about prediction and sequences of patterns. When we finally unravel the workings of the brain, we will be amazed at the simplicity of its principles.
I am going to say something that will come as a surprise to many. Some may even take offense. But, you know me, I always tell it like I see it. That whole Bayesian learning crap that Dileep George introduced to Numenta's HTM is just that, crap. The brain does not use Bayesian statistics to learn which pattern may succeed another. The brain learns as many patterns and sequences as it can, regardless of their probability of occurrence. What matters is that they occur frequently enough to be more than just random noise. Probability only comes into play long after the learning phase, during decision making, e.g., when it comes time to determine whether an object is an apple or an orange. Recognition happens correctly because many branches of the tree of knowledge compete for attention and the strongest (i.e., the most appropriate) branch wins. It's that simple. No Bayesian crap is required. I'll have more to say about temporal learning in an upcoming article.
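To make the contrast concrete, here is a minimal sketch in Python of the scheme described above: store every sequence that recurs often enough to rise above random noise (no probabilities attached), then recognize by letting the stored sequences compete and taking the strongest match, winner-take-all. The function names, sequence length, and noise threshold are my own illustrative assumptions, not Numenta's code or anyone's actual model.

```python
from collections import Counter

NOISE_THRESHOLD = 2  # hypothetical cutoff: recurring more often than this = not noise

def learn_sequences(stream, length=3, threshold=NOISE_THRESHOLD):
    """Learn every fixed-length sequence that recurs above the noise threshold.

    No probability is stored; a sequence is either learned or it isn't.
    """
    counts = Counter(tuple(stream[i:i + length])
                     for i in range(len(stream) - length + 1))
    return {seq for seq, n in counts.items() if n > threshold}

def recognize(memory, observed):
    """Winner-take-all recognition: every learned sequence scores itself by
    element-wise overlap with the observation, and the strongest branch wins.
    """
    def score(seq):
        return sum(a == b for a, b in zip(seq, observed))
    return max(memory, key=score) if memory else None
```

For example, in the stream "abcabcabcabcxyz" the sequence ('a','b','c') recurs four times and gets learned, while ('x','y','z') occurs once and is discarded as noise; a partial observation like ('a','b','?') is then completed by whichever learned branch overlaps it most.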
Shooting AI in the Foot and Bragging About it
My advice to Hawkins is to be true to his original vision and stop listening to academia. Academia has been shooting AI in the foot for the last sixty years or so. That whole symbolic AI nonsense would be laughable if it weren't so pathetic. What a waste of time and brains. But guess what: they are not about to change anytime soon. Worse, they have no shame about their failure. They brag about their "accomplishments".
You've heard it here first. :-)
Jeff Hawkins Is Close to Something Big
A Fundamental Flaw in Numenta's HTM Design