Tuesday, July 25, 2017

Why Google's DeepMind Is Clueless About How Best to Achieve AGI


In this article, I argue that DeepMind's stated goal of achieving artificial general intelligence (AGI) is hopelessly misguided. I further argue that the blame can be laid at the feet of its co-founder, Demis Hassabis.

Hammer and Nails

In a recent paper published in the neuroscience journal Neuron, Demis Hassabis and members of his team at Google's DeepMind argued that progress in AI will benefit from studying how the brain works. While there is nothing controversial about this, Hassabis et al. strongly defend the hypothesis that backpropagation, the mechanism of learning in supervised deep neural networks, is also used by the brain. Here is a quote from the paper, emphasis added:
A different class of local learning rule has been shown to allow hierarchical supervised networks to generate high-level invariances characteristic of biological systems, including mirror-symmetric tuning to physically symmetric stimuli, such as faces (Leibo et al., 2017). Taken together, recent AI research offers the promise of discovering mechanisms by which the brain may implement algorithms with the functionality of backpropagation. Moreover, these developments illustrate the potential for synergistic interactions between AI and neuroscience: research aimed to develop biologically plausible forms of backpropagation have also been motivated by the search for alternative learning algorithms.
Hassabis believes that sensory learning in the brain is supervised. Why would a world-renowned AI expert believe in something so absurd? The answer is twofold. First, his knowledge of neuroscience is rather lacking, since most knowledgeable neuroscientists agree that cortical learning is unsupervised. Second, supervised learning is the only effective type of learning that Hassabis is aware of. His entire perspective on AI is built on supervised learning driven by reinforcement signals. In other words, when all you have is a hammer, everything looks like a nail.

Cortical Feedback Is Not Backpropagation

The brain uses lots of feedback signals. There are feedback pathways from the top level of the sequence hierarchy in the cortex down to the first or entry level. But they do not stop there. The feedback pathways continue even further down into the thalamus, where the brain's sensory pattern hierarchy resides. Hassabis and his team are obviously confusing feedback pathways with backpropagation.

Cortical feedback is used only during the recognition process and has nothing to do with learning. It is not backpropagation. Backpropagation is something that is used in a deep neural network as a way to propagate an error signal from the output layer down to the first layer of the network during pattern learning. Backpropagation is an integral part of supervised learning. Unfortunately for Hassabis and deep learning experts, this is not the way the brain learns. Cortical learning is 100% unsupervised and is strictly based on signal (spike) timing.
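To make the contrast drawn above concrete, here is a minimal sketch of the two update rules in question. The backpropagation function is a textbook two-layer example, and the spike-timing rule is a standard spike-timing-dependent plasticity (STDP) formula from the neuroscience literature; neither is code from DeepMind or from any specific brain model, and all parameter values are illustrative.

```python
import numpy as np

# --- Backpropagation (supervised): an error signal computed at the
# output layer is propagated backward to adjust earlier weights. ---
def backprop_step(W1, W2, x, target, lr=0.05):
    h = np.tanh(W1 @ x)               # hidden-layer activity
    y = W2 @ h                        # network output
    err = y - target                  # error at the output layer
    dW2 = np.outer(err, h)            # output-layer gradient
    dh = (W2.T @ err) * (1 - h**2)    # error propagated backward
    dW1 = np.outer(dh, x)             # first-layer gradient
    return W1 - lr * dW1, W2 - lr * dW2

# --- Spike-timing-dependent plasticity (unsupervised): a synapse is
# strengthened when the presynaptic spike precedes the postsynaptic
# spike and weakened otherwise. No labels, no error signal. ---
def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    dt = t_post - t_pre               # spike-time difference (ms)
    if dt > 0:                        # pre fired before post: potentiate
        return w + a_plus * np.exp(-dt / tau)
    return w - a_minus * np.exp(dt / tau)  # otherwise: depress
```

The point of the contrast: `backprop_step` needs a `target` supplied from outside the network, while `stdp_update` needs only the relative timing of two spikes.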

Hassabis Is Clueless About Learning in the Brain and About How to Achieve AGI

Demis Hassabis pretends to know a thing or two about neuroscience but continues to insist that supervised sensory learning has a role to play in the brain and AGI. This is absurd. Obviously Hassabis has never studied the organization and operation of the human retina or the cochlea. If he had, he would know that the eye is nothing like a camera and that the ear is nothing like a microphone. More importantly, he would know that timing, not backpropagation, is the basis of learning in the brain.

Every learning mechanism is based on trial and error. As such, it must have a critic, i.e., a way to correct errors. DeepMind's roadmap to artificial general intelligence (AGI) consists of using reinforcement signals (pain and pleasure) as the only critic for learning. This is wrong in so many ways. Reinforcement signals cannot possibly teach the brain how to understand the intricacies of the world around it.
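The objection in the paragraph above can be illustrated with a toy comparison. This is my own sketch, not DeepMind's algorithm: a supervised critic returns a full error vector (one correction per parameter), while a reinforcement critic returns a single scalar reward per trial, forcing the learner to search blindly. The hidden target vector and step sizes are arbitrary illustrative values.

```python
import numpy as np

# Toy problem: recover a hidden weight vector (illustrative values).
TRUE_W = np.array([0.3, -0.7, 0.5])

def supervised_step(w):
    # A supervised critic hands back a full error vector, so every
    # parameter can be corrected in a single trial.
    error = w - TRUE_W
    return w - error

def reinforcement_search(w, trials=500, seed=1):
    # A reinforcement critic returns only one scalar per trial; the
    # learner must perturb itself at random and keep improvements.
    rng = np.random.default_rng(seed)
    reward = lambda v: -np.sum((v - TRUE_W) ** 2)
    best = reward(w)
    for _ in range(trials):
        candidate = w + rng.normal(scale=0.1, size=w.shape)
        if reward(candidate) > best:
            w, best = candidate, reward(candidate)
    return w, best
```

The supervised learner finishes in one step; the reinforcement learner needs hundreds of trials even on this three-parameter toy, which is the informational gap the paragraph is pointing at.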

Note that, even without a background in neuroscience, anybody with a modicum of common sense can tell that humans learn almost everything about their environment without supervision. We don't need labels to learn to recognize things. We can learn to recognize objects and sounds without reinforcement, directly from the sensory data. So how did Hassabis gain his fame as an AI pioneer while being so clueless? Answer: He did it by using the deep learning inventions of others in various narrow-domain applications (mostly game playing) as a way to make a name for himself. Hassabis is clueless about how to achieve AGI. He is a charlatan. Soon he will be just a footnote in the history of AI.

The Danger of the Cult of Materialism

The reader may ask, why am I so harsh on Demis Hassabis? The answer is that Hassabis and almost everyone else in the AI community are materialists. That is, they believe and teach others to believe in all sorts of pseudoscientific dogmas that support their core doctrine that God does not exist. For example, they believe that matter is all there is, that the universe created itself, that life emerged out of dirt all by itself, that they can gain immortality by transferring the contents of their brains to a computer, and that computers can achieve consciousness by some unexplainable magic called emergence.

In my opinion, materialists are not just crackpots and pseudoscientists. They are a formidable danger to humanity in this impending age of artificial general intelligence. Their ultimate goal is to eradicate traditional religions by force, if necessary. Materialism is now a full-blown machine-worshipping cult whose members preach that intelligent machines should be treated as sentient beings and be given legal rights similar to human rights. If a significant percentage of mankind begins to worship machines as conscious agents or saviors, we are doomed. What I am saying is that materialism is just as evil and dangerous as the other religions of the world, and possibly even more so.


My goal is to show that the materialist elite is not as knowledgeable or as intelligent as they think they are, or as they want others to believe. In fact, in many respects, the level of their stupidity is mind-boggling. Demis Hassabis is a case in point. Nothing about AI research brings a smile to my face quite like my conviction that materialists have as much chance of figuring out AGI as my dog. Of this, I am certain.

See Also:

The Missing Link of Artificial Intelligence
Why We Have a Supernatural Soul
Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI
