Wednesday, September 16, 2015

No Way Out


I am torn between two options: releasing the results of my speech learning research to the public or erasing everything that I have worked on these many years. The world is in such a precarious state that I cannot think of any scenario in which the introduction of artificial intelligence will not do great harm to humanity. The problem with AI is not that intelligent machines might rebel against us (that prediction is just stupid materialist superstition), but that the powers that be will use the technology to gain even more power and impose their twisted will on humanity. The world could quickly turn into a hell much worse than anything George Orwell could have imagined. I would rather die than live with the knowledge that I might have contributed to it.

Wednesday, June 3, 2015

Why Deep Learning Is a Hindrance to Progress Toward True AI

Supervised vs Unsupervised Learning

In a recent article titled EmTech Digital: Where Is AI Taking Us?, MIT Technology Review editor Will Knight writes:
However, [Quoc] Le said that the biggest obstacle to developing more truly intelligent computers is finding a way for them to learn without requiring labeled training data—an approach called “unsupervised learning.”
This is interesting because we hear so much buzz lately about how revolutionary and powerful deep learning is and about how truly intelligent machines are just around the corner because of it. And yet, if one digs deeper, one quickly realizes that all this success is happening thanks to a machine learning model that will soon have to be abandoned. Why? Because, as Google Brain research scientist Quoc Le says, it is based on supervised learning.
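To make the distinction concrete, here is what the two kinds of training data look like (a minimal Python sketch; the arrays and labels are fake stand-ins of my own invention):

```python
import numpy as np

# Supervised learning: every sample arrives with a human-supplied label.
# The link between pixels and the concept "cat" comes from the labeler.
images = np.random.rand(1000, 64, 64)        # stand-ins for real photos
labels = np.array(["cat", "dog"] * 500)      # attached by a human trainer

# Unsupervised learning: the same kind of samples, but no labels at all.
# The machine must discover structure (clusters, features) on its own.
raw_stream = np.random.rand(1000, 64, 64)

# Deep learning's celebrated results all come from the first setting.
# Le's point is that nobody yet knows how to make the second one work
# anywhere near as well.
```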

No True AI Is Coming from the Mainstream AI Community Anytime Soon

I have reasons to believe that true AI is right around the corner, but I don't see it coming from the mainstream AI community. Right now, they are all having a feeding frenzy over a soon-to-be-obsolete technology. There is no question that deep learning is a powerful and useful machine learning technique, but it works in a narrow domain: the classification of labeled data. The state of the art in unsupervised learning (no labels) has so far been a joke. The accuracy of current unsupervised deep neural networks, such as Google's cat recognition program, is truly abysmal (15% or less), and there is no clear path to success.

Time: The Universal Bottom-up Critic

One of the reasons that the performance of unsupervised machine learning is so pathetic, in my opinion, is that researchers continue to use what I call static data such as pictures to train their networks. Temporal information is simply ignored, which is a bummer since time is the key to the AI kingdom. And even when time is taken into consideration, such as in recurrent neural networks, it is not part of a fundamental mechanism that builds a causal understanding of the sensory space. It is merely used to classify labeled sequences.

Designing an effective unsupervised learning machine requires that we look for a natural replacement for the top-down labels. As we all know, supervised or not, every learning system must have a critic. Thus the way forward is to abandon the top-down critic (i.e., label-based backpropagation) and adopt a universal bottom-up critic. It turns out that the brain can only work with temporal correlations, of which there are two kinds: sensory signals are either concurrent or sequential. In other words, time should be the learning supervisor, the bottom-up critic. This way, memory is constructed from the bottom up rather than from the top down, which is as it should be.
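Here is a minimal sketch of what I mean, assuming binary sensory events arriving in discrete ticks (the function, threshold and toy data are my own illustration, not a finished design). Signals that repeatedly fire in the same tick are recorded as concurrent; signals that repeatedly fire in successive ticks are recorded as sequential. Time itself is the supervisor; no labels appear anywhere:

```python
from collections import defaultdict
from itertools import combinations

def temporal_critic(stream, threshold=3):
    """stream: a list of sets, each holding the sensor ids active at one
    tick. Returns the (concurrent, sequential) pairs discovered purely
    from timing; no labels are involved anywhere."""
    concurrent = defaultdict(int)   # (a, b) fired during the same tick
    sequential = defaultdict(int)   # a fired one tick before b
    prev = set()
    for active in stream:
        for a, b in combinations(sorted(active), 2):
            concurrent[(a, b)] += 1
        for a in prev:
            for b in active:
                if a != b:
                    sequential[(a, b)] += 1
        prev = active
    keep = lambda counts: {pair for pair, n in counts.items() if n >= threshold}
    return keep(concurrent), keep(sequential)

# Toy stream: sensors 3 and 4 always fire together; sensor 1 reliably
# precedes sensor 2, and 2 precedes the 3-4 pair. Timing alone is enough
# to discover all of these relations.
stream = [{1}, {2}, {3, 4}, {1}, {2}, {3, 4}, {1}, {2}, {3, 4}]
print(temporal_critic(stream))
# ({(3, 4)}, {(1, 2), (2, 3), (2, 4)})
```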

The Deep Learning Killer Nobody Is Talking About

Other than being supervised, the biggest problem with deep neural networks is that, unlike the neocortex, they are completely blind to patterns they have never seen before. The brain, by contrast, can instantly model a new pattern. It is obvious that the brain uses a knowledge representation architecture that is instantly malleable and shaped by the environment. As far as I know, nobody in mainstream AI is working on this amazing capability of the brain. I am not even sure they are aware of it.

Conclusion: Be Careful

Sensory learning is all about patterns and sequences of patterns, something that mavericks like Jeff Hawkins have been saying for years now. The trick is to know how to use patterns and sequences to design the correct (there is only one, in my opinion) knowledge representation architecture. Hawkins is a smart guy, probably the smartest guy in AI right now, but I believe a few of his fundamental assumptions are wrong, not the least of which is his continued commitment to a probabilistic approach. As Judea Pearl put it recently, we are not probability thinkers but cause-effect thinkers. And this is coming from someone who has championed the probabilistic approach to AI throughout his career.

In conclusion, I will reiterate that the future of AI is both temporal and non-probabilistic. It may be alright to invest in deep learning technologies for now but be careful. Deep learning will become an obsolete technology much sooner than most people in the business believe.

See Also

In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
No, a Deep Learning Machine Did Not Solve the Cocktail Party Problem

Wednesday, April 29, 2015

No, a Deep Learning Machine Did Not Solve the Cocktail Party Problem

Irresponsible Hype from MIT Technology Review

MIT Technology Review is running a story claiming that a group of machine learning researchers used a convolutional deep learning neural network to solve the cocktail party problem. Don't you believe it. The network that was used has to be pre-trained separately on individual vocals and musical instruments in order to separate out the vocals from the background music. In other words, it can only separate voice from music.
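For the curious, here is roughly the recipe such systems follow (a spectrogram-masking sketch of my own; none of this is the researchers' actual code): the separator is fit on mixtures whose clean vocal tracks are known in advance, which is exactly why it can only pull apart the source types it was fitted on.

```python
import numpy as np

def train_separator(mixtures, clean_vocals):
    """Fit a binary time-frequency mask that passes the cells where, on
    average, the vocal dominates the training mixtures. The real system
    learns this with a convolutional network; a mean ratio is enough to
    make the point."""
    ratio = np.mean([v / (m + 1e-9) for v, m in zip(clean_vocals, mixtures)], axis=0)
    return (ratio > 0.5).astype(float)

def separate(mask, mixture_spectrogram):
    # Keep only the cells the training data marked as "vocal".
    return mask * mixture_spectrogram

# Toy usage with fake spectrograms.
rng = np.random.default_rng(0)
mixes = [rng.random((64, 100)) for _ in range(8)]
vocals = [m * (rng.random((64, 100)) > 0.4) for m in mixes]
mask = train_separator(mixes, vocals)
estimate = separate(mask, mixes[0])

# The catch: the mask encodes vocals-versus-instruments statistics and
# nothing else. Hand it a mixture of two unfamiliar voices and it has no
# basis for deciding which cells belong to which speaker.
```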

The human brain needs no such training. We can instantly latch on to any voice or sound, even one we have never heard before, while ignoring all others. We have no trouble focusing on a strange voice speaking a foreign language in a room full of talking people, with or without music playing. This is what the true cocktail party problem is about. A deep learning network cannot pay attention to an arbitrary voice while ignoring the others; to do this, it would have to be pre-trained on all the voices individually.

Note: I posted a protest comment at the end of the article but MIT Tech Review editors chose to censor it. I guess it is easier to attract visitors with a lie than the truth.

It Is Not about Speech

Contrary to rumors, the cocktail party problem has nothing specifically to do with speech or sounds. To focus on individual sounds, the brain uses the same mechanism that it normally uses to pay attention to anything, be it a bird, the letters and words on the computer screen or grandma's voice. The attention mechanism of the brain is universal and is an inherent part of the architecture of memory and how objects are represented in it. Unlike deep learning neural networks, it does not have to be trained separately for every sound or object. The ability of the cortex to instantly model a novel visual or auditory object is a major part of the brain's attention mechanism.

It is clear that the auditory cortex can quickly model a new sound on the fly and tune its attention mechanism to it. No deep learning network can do that. And knowing what I know about how the brain's attention mechanism works, I can confidently say that no deep learning network can ever do that.

See Also:

Did OSU Researchers Solve the Cocktail Party Problem?
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is a Hindrance to Progress Toward True AI

Wednesday, April 8, 2015

200 Million Horsemen and the Corpus Callosum

In my previous post, I claimed that the books of Revelation and Zechariah contain a detailed description of the brain, intelligence and consciousness. In this post, I just want to give interested readers a small taste of things to come. Here is a little gem from the book of Revelation that blew me away when I first understood it.

Corpus Callosum

In chapter 9 of the book of Revelation, we read the following:
Then the sixth angel sounded, and I heard a voice from the four horns of the golden altar which is before God, one saying to the sixth angel who had the trumpet, “Release the four angels who are bound at the great river Euphrates.” And the four angels, who had been prepared for the hour and day and month and year, were released, so that they would kill a third of mankind. The number of the armies of the horsemen was two hundred million; I heard the number of them.
It took me a while but I finally figured out that "two hundred million horsemen" is just a metaphor for the neuronal signals riding on the corpus callosum, the bundle of nerve fibers that connects the two hemispheres of the brain. Surprisingly enough, a quick search on Google reveals that the number of axonal fibers in the corpus callosum is estimated to be about 200 million!

More to Come

The book of Revelation is a treasure trove of information about the brain. It gives precise metrics for a number of brain structures and processes. For example:
  • The "four angels" mentioned in the text above symbolize four distinct signal pathways or gateways within the corpus callosum.
  • The exact duration of short-term memory is 12.6 seconds.
  • It takes the brain exactly 35 milliseconds to switch its focus from one subject to another.
This is just the tip of the iceberg but there is a time for everything. Please be patient and stay tuned.

See Also:

Zechariah and Revelation: Bombshells in the Way

Tuesday, March 31, 2015

Zechariah and Revelation: Bombshells in the Way

This is for the record only and I will be brief. I have an extraordinary claim to make. The entire book of Revelation and the first six chapters of the book of Zechariah are a detailed metaphorical description of the structure, function and operation of the brain, memory, intelligence and consciousness. Extraordinary evidence for this claim will likely begin to surface as early as this year. That is all.

Saturday, February 28, 2015

In Spite of the Successes, Mainstream AI is Still Stuck in a Rut


It is easy to be impressed by all the buzz surrounding artificial intelligence these days, especially the hot new field of deep learning. Not a week goes by without some breakthrough announcement from some AI lab or other. A few days ago, Google's DeepMind announced the creation of an AI program that can learn to play old Atari video games from the 80s as well as or better than a professional video game player. We are left with the impression that great advances are being made. But, as I explain in this article, nothing could be further from the truth.

Same Old AI Dressed in a New Suit

The problem with all the hoopla surrounding deep learning is that it is not really a new science. It has been around for decades. As others have noted, the reason that it has not made the news before is that training deep neural networks requires access to a huge number of labeled samples. Large repositories of labeled data did not become available until the advent of social networks like Facebook and Twitter and search giants like Google and Baidu. In addition, the cheap and powerful computer hardware needed to process this enormous amount of data was not built until fairly recently. But the main reason that deep learning is old is that, in spite of claims to the contrary, it is not a new paradigm intended to replace symbolic AI, the bankrupt, "baby boomer" AI model of the last century. On the contrary, deep learning is just GOFAI with lipstick on. Here is why.

The kind of deep machine learning that has been making the news lately is called supervised learning because it requires that the neural network trainer identify each chunk of data, or sample, by attaching a label (i.e., a symbol) to it. Notice right off the bat that the intelligence is not in the neural network but in the trainer. If presented with thousands of pictures of cats, the machine automatically learns to map certain images to the cat symbol. This works even though the machine has no idea what a cat is. Of course, we humans do not need labels in order to learn to recognize patterns and objects. So if biological plausibility is a requirement for true AI (highly probable), it is a sure bet that true AI is not going to come from the mainstream anytime soon. Some in the business may want to argue that there is work being done on unsupervised deep neural networks, but rest assured that, for all intents and purposes, unsupervised learning is nonexistent.
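A toy training loop shows exactly where the labels enter (a NumPy sketch of my own with fake data and a single linear unit; the division of labor is the same in a billion-parameter network):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 64))                # stand-ins for image features
y = (X[:, 0] > 0.5).astype(float)        # the trainer's labels: 1 = "cat"

w, b, lr = np.zeros(64), 0.0, 0.1
for _ in range(500):                     # gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y                         # error measured against human labels
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# The machine ends up mapping inputs to the symbol "cat", but the symbol
# and its meaning were supplied entirely by the human who labeled the data.
```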

The point I am making here is that deep neural networks are really symbol generators. A DNN is just a huge, hierarchical collection of old-fashioned if-then rules. Hundreds or even thousands of tiny little rule processors work together to contribute to the activation of a label. What AI researchers have done is create a machine that can generate these rules automatically by looking at labeled pictures. Paradigms die hard, don't they?
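To see what I mean by rule processors, consider what a single trained unit computes (an illustrative sketch, not anyone's production code):

```python
def rule_unit(inputs, weights, threshold):
    """One ReLU-style unit, read as an if-then rule:
    IF the weighted evidence exceeds the threshold, THEN fire."""
    evidence = sum(w * x for w, x in zip(weights, inputs))
    return max(0.0, evidence - threshold)

# A "cat" detector is thousands of such rules feeding one another:
# IF the pointy-ear rule and the whisker rule both fired, THEN raise "cat".
whisker_rule, ear_rule = 0.9, 0.8
cat_score = rule_unit([whisker_rule, ear_rule], weights=[1.2, 1.1], threshold=1.5)
print(cat_score)   # a positive score means the "cat" rule fired
```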

Dumb Intelligence

In the Nature paper describing their game-playing neural network, celebrity AI scientist Demis Hassabis and his colleagues at Google's DeepMind offices in London declared:
"The work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."
This sounds a bit like chest beating, and at least one deep learning expert has already complained. The question is, did Google really achieve a breakthrough in AI or is all this just hype? What did Google really accomplish? As amazing as this sounds, all Google did was find an automatic way to create an old-fashioned rule-based expert system. Some of the positional brittleness was removed with the use of convolutional neural networks but, after training, all they have left is a purely reactive system, i.e., a dumb, one-track-minded automaton wearing blinders and executing rules of the form: if you detect X, do Y.

Is that intelligence, I hear you ask? In a sense, yes, of course. But it is a rather limited and brittle form of intelligence. The human brain also has a similar type of automaton that performs simple or routine (but important) tasks for us whenever our attention is focused on something else. It is called the cerebellum, and it handles such things as walking, maintaining posture and keeping our balance. But this is not the kind of intelligence you would trust to drive you to work every morning. It would not know what to do in a new situation for which it has received no training. In fact, it is completely blind to new situations. Even worse, it has no understanding whatsoever of what it is doing or why. Certainly, this technology can and will be useful for many applications, such as factory automation and surveillance, but, in the end, it is really a glorified expert system, a dumb intelligence.

Another Red Herring

It would have been more impressive if Google had announced that they had found a solution to the age-old credit assignment problem. Essentially, it is hard for a reinforcement learning program to determine which of its preceding actions caused it to receive a reward or a punishment. Deep neural networks do not offer a solution. Google's program gets around the problem by playing video games in which the cause is immediately followed by the reinforcement signal; it did poorly at Ms. Pac-Man for this reason. Another problem with this kind of rule-following neural network is that it has no inherent ability to change its focus or attention. All the rules are active all the time, always waiting for their chance to fire. As a result, if the system is trained to perform multiple tasks, those tasks must not have patterns in common, because shared patterns would create a conflict of attention, which could in turn cause a motor conflict.
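Back to the credit assignment point: a toy Q-learning run makes the difficulty plain (standard one-step Q-learning on a hypothetical corridor; the states, rewards and numbers are all mine). When the reward arrives many steps after the decisive action, credit has to trickle backward one update at a time:

```python
import random

# A corridor of N states; only reaching the far end pays off.
N, actions = 10, ["left", "right"]
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma = 0.5, 0.9

for episode in range(200):
    s = 0
    while s < N - 1:
        a = random.choice(actions)                 # explore blindly
        s2 = min(s + 1, N - 1) if a == "right" else max(s - 1, 0)
        r = 1.0 if s2 == N - 1 else 0.0            # reward only at the very end
        # One-step backup: credit reaches earlier states one hop per visit.
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The value signal decays toward the start of the corridor: the longer the
# gap between an action and its reward, the fainter the credit it receives.
print([round(max(Q[(s, a)] for a in actions), 2) for s in range(N)])
```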

In conclusion, let me say that I am impressed with the ability of Google's DeepMind algorithm to learn relatively complex tasks using only reinforcement signals. I am impressed because it is a useful algorithm and it is amazing that it works as well as it does. It is a sign that machines will one day be able to perform much more complex tasks as well as or better than humans. But I think that deep learning is yet another red herring on the road to true AI. It is going to be a costly success in the end because it is leading the AI community in the wrong direction. Mainstream AI has reached a point where its tricks are too good for its own good. But fortunately (or unfortunately, depending on one's perspective) for the world, mainstream AI is not the be-all of AI research.

See Also

Google's DeepMind Masters Atari Games
From Pixels to Actions: Human-level control through Deep Reinforcement Learning
No, a Deep Learning Machine Did Not Solve the Cocktail Party Problem