Thursday, August 23, 2012

The Second Great AI Red Herring Chase

Vicarious in the News

Vicarious Systems is in the news again. They have managed to raise $15 million to continue their ongoing research on their flagship artificial intelligence technology, the Recursive Cortical Network (RCN). Nice chunk of change. Unfortunately, nobody seems to know much about RCN and Vicarious is not talking. But we all know why: it's the old "we've got something big but our IP lawyer told us not to say anything" excuse all over again. Still, it is easy to speculate that RCN is just a different take on Numenta's hierarchical temporal memory.

An Addiction to Math and Bayes

Back in October of last year, I wrote a critical article about Vicarious' approach to solving the AI problem and their chances of success. Here's an excerpt:
So what do I think of Vicarious' chances of solving the AI problem? I'll be blunt. I think they have no chance whatsoever. Zilch. Here's why. Dileep George, the brains of the company, is a PhD electrical engineer and mathematician who believes that math is essential to solving the AI puzzle. This alone tells me that he has no real understanding of the problem. Furthermore, although I think that a study of the brain can eventually lead to a major breakthrough, it is highly unlikely to do so in the foreseeable future. The brain has a bad habit of hiding its secrets in a forest of apparent complexity. The Wright brothers never had to deal with hidden knowledge. They, like everyone else, could easily observe the gliding flight of birds and derive useful principles.

As an example, let's take George's adoption of Bayesian inference for sequence prediction in hierarchical memory. Bayesian statistics is the sort of thing that a mathematician like George would find attractive just because it's math. But is it based on the known biology of the brain? Not at all. Can George search the neuroscience and biology literature to find out what method the brain uses for prediction? The answer is no because biologists have not yet discovered how the brain does it. They just know from psychology that the brain is very good at judging probabilities based on experience. That is the extent of their knowledge.
So have there been any substantial changes in George's approach to AI since then? I don't think so. Perusing their help-wanted page, I can see that, other than getting their hands on a load of cash with which to hire new engineers and buy new equipment, nothing much has changed. Notice that George takes pains not to mention the word "Bayesian" in the section on desired skills. However, he wants his engineers to have experience "with belief propagation and approximation methods", which is essentially the same thing as Bayesian statistics. Of course, George wouldn't be George if he didn't also ask for solid math skills. The man is a mathematician at heart and he is absolutely convinced there can be no brain-like machine intelligence without math. He is mistaken, in my opinion, and I'll explain why in a future article.
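For readers wondering what "Bayesian inference for sequence prediction" amounts to in practice, here is a minimal toy sketch of my own (it is not Vicarious' RCN or Numenta's HTM, and every name in it is invented for illustration): a bigram predictor whose next-symbol probabilities are Bayesian estimates under a Dirichlet prior, i.e., Laplace-smoothed counts.

```python
from collections import defaultdict

def predict_next(sequence, alpha=1.0):
    """Estimate P(next symbol | last symbol) from bigram counts,
    using a Dirichlet prior (Laplace smoothing, pseudo-count alpha)."""
    counts = defaultdict(lambda: defaultdict(float))
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1.0
    symbols = sorted(set(sequence))
    last = sequence[-1]
    total = sum(counts[last].values()) + alpha * len(symbols)
    return {s: (counts[last][s] + alpha) / total for s in symbols}

# After "abababab" the last symbol is 'b', and 'a' has always followed 'b',
# so the estimate for 'a' dominates while 'b' keeps a small prior mass.
probs = predict_next("abababab")
```

The prior is what keeps unseen transitions at a small nonzero probability instead of zero; the "belief propagation and approximation methods" in the job posting generalize exactly this kind of probabilistic bookkeeping to large graphs of variables.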

The Second Great AI Red Herring Chase

Back in the 1950s, thanks mostly to ideas advanced by Alan Turing, the scientific community embraced symbol manipulation as the correct approach to solving the AI problem. They were wrong from the start. Unfortunately, it took over half a century for them to realize that they had been chasing after a red herring all those years. What a waste! Even now, some are still not convinced but, by and large, AI researchers have moved to greener pastures.

Lately, the community has plunged headlong into yet another great red herring chase. They're convinced that Bayesian statistics are the key to AI. They are convinced because Bayesian models (e.g., the hidden Markov model) are currently being used to develop very impressive speech recognition products and other applications that deal with uncertainty and probability. Unfortunately, the technology has run into a nasty brick wall that refuses to budge: the accuracy of Bayesian systems, unlike that of the human brain, drops precipitously in the presence of noise. Try speaking commands into your smartphone or using dictation software at a crowded party and you'll see what I mean. There is no question that the brain can effectively and efficiently process probabilistic stimuli. My claim is that it does not use Bayesian statistics to do it.
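The noise claim can be made concrete with a toy experiment (my own illustration; the two-state model and all its parameters are invented, not taken from any real recognizer): a symmetric hidden Markov model whose observations get flipped at a given noise rate, decoded with the standard forward filter. Even when the filter knows the true noise rate, its decoding accuracy slides toward chance as observations get noisier.

```python
import random

def forward_filter(obs, p_stay=0.9, p_err=0.2):
    """Standard HMM forward filter for a symmetric 2-state chain.
    Returns P(state == 1 | observations so far) at each step."""
    belief, out = 0.5, []
    for o in obs:
        # Predict: the hidden state persists with p_stay, flips otherwise.
        pred = belief * p_stay + (1 - belief) * (1 - p_stay)
        # Update: each observation matches the state with prob 1 - p_err.
        like1 = 1 - p_err if o == 1 else p_err
        like0 = p_err if o == 1 else 1 - p_err
        belief = pred * like1 / (pred * like1 + (1 - pred) * like0)
        out.append(belief)
    return out

def decoding_accuracy(noise, n=4000, seed=42):
    """Simulate the chain, corrupt observations at the given noise
    rate, and score the filter's most-probable-state guesses."""
    rng = random.Random(seed)
    state, states, obs = 1, [], []
    for _ in range(n):
        if rng.random() < 0.1:           # state flips 10% of the time
            state = 1 - state
        states.append(state)
        flipped = rng.random() < noise   # observation corrupted?
        obs.append(1 - state if flipped else state)
    beliefs = forward_filter(obs, p_err=max(noise, 1e-3))
    guesses = [1 if b > 0.5 else 0 for b in beliefs]
    return sum(g == s for g, s in zip(guesses, states)) / n

clean, noisy = decoding_accuracy(0.05), decoding_accuracy(0.40)
```

Comparing `clean` and `noisy` shows the degradation directly; the post's contention is that the brain handles the noisy case far more gracefully than this kind of machinery does.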

I'll have more to say about this topic in the near future. Stay tuned.

See Also:

Vicarious Systems' Disappointing Singularity Summit Talk
The Myth of the Bayesian Brain


Mutual Disdain said...

Looks like they're claiming success now:

Louis Savain said...

I am not surprised. One can get very good pattern recognition results with the Bayesian approach.

Louis Savain said...

I should add that I am impressed with this latest announcement from Vicarious. Scott Phoenix, the CEO of the company, claims that their software requires only about 10 examples per letter compared to the thousands of examples required by other visual pattern recognition programs. If true, it's a major advance.