Saturday, February 28, 2015

In Spite of the Successes, Mainstream AI is Still Stuck in a Rut


It is easy to be impressed by all the buzz surrounding artificial intelligence these days, especially the hot new field of deep learning. Not a week goes by without some breakthrough announcement from some AI lab or other. A few days ago, Google's DeepMind announced the creation of an AI program that can learn to play old Atari video games from the 80s as well as or better than a professional video game player. We are left with the impression that great advances are being made. But, as I explain in this article, nothing could be further from the truth.

Same Old AI Dressed in a New Suit

The problem with all the hoopla surrounding deep learning is that it is not really a new science. It has been around for decades. As others have noted, the reason that it has not made the news before is that, in order to train deep neural networks, one must have access to a huge number of labeled samples. Large repositories of labeled data did not become available until the advent of social networks like Facebook or Twitter and search giants like Google or Baidu. In addition, the cheap and powerful computer hardware needed to process this enormous amount of data was not built until fairly recently. But the main reason that deep learning is old is that, in spite of claims to the contrary, it is not a new paradigm intended to replace symbolic AI, the bankrupt, "baby boomer" AI model of the last century. On the contrary. Deep learning is just GOFAI with lipstick on. Here is why.

The kind of deep machine learning that has been making the news lately is called supervised learning because it requires that the neural network trainer identify each chunk of data, or sample, by attaching a label (i.e., a symbol) to it. Notice right off the bat that the intelligence is not in the neural network but in the trainer. If presented with thousands of pictures of cats, the machine automatically learns to map certain images to the cat symbol. This works even though the machine has no idea what a cat is. Of course, we humans do not need labels in order to learn to recognize patterns and objects. So if biological plausibility is a requirement for true AI (highly probable), it is a sure bet that true AI is not going to come from the mainstream anytime soon. Some in the business may want to argue that there is work being done on unsupervised deep neural networks but rest assured that, for all intents and purposes, unsupervised learning is nonexistent.
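To make the point concrete, here is a minimal sketch of supervised learning: a toy perceptron trained on hypothetical "cat" feature vectors that a human trainer has labeled. The data and feature names are invented for illustration; the point is that the labels, i.e., the intelligence, come from the trainer, not the network:

```python
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights mapping feature vectors to a 0/1 label supplied by the trainer."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred          # nonzero only when the trainer's label disagrees
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(x, w, b):
    # Fires the "cat" symbol with no notion whatsoever of what a cat is.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical feature vectors, labeled "cat" (1) or "not cat" (0) by a human.
cats     = [[1.0, 0.9], [0.9, 1.0]]
non_cats = [[0.1, 0.0], [0.0, 0.2]]
w, b = train_perceptron(cats + non_cats, [1, 1, 0, 0])
```

Strip away the labels and the whole scheme stops working, which is exactly the dependence on the trainer described above.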

The point I am making here is that deep neural networks are really symbol generators. A DNN is just a huge, old-fashioned, hierarchical collection of if-then rules. Hundreds or even thousands of tiny little rule processors work together to contribute to the activation of a label. What AI researchers have done is create a machine that can generate these rules automatically by looking at labeled pictures. Paradigms die hard, don't they?
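A single threshold unit can indeed be read as a tiny if-then rule, and a hierarchy of them as rules feeding rules. The sketch below is only an illustration of that reading, with invented feature names, not a description of any real trained network:

```python
def rule_neuron(inputs, weights, threshold):
    # "If the weighted evidence reaches the threshold, then fire the label."
    evidence = sum(w * x for w, x in zip(weights, inputs))
    return 1 if evidence >= threshold else 0

# Hypothetical two-level hierarchy: low-level rules feed a higher-level rule.
def detect_cat(ear_edges, whisker_edges):
    ear     = rule_neuron(ear_edges,     [1, 1], 2)  # "if both edge features, ear"
    whisker = rule_neuron(whisker_edges, [1, 1], 2)  # "if both edge features, whisker"
    return rule_neuron([ear, whisker], [1, 1], 2)    # "if ear and whisker, cat"
```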

Dumb Intelligence

In the Nature paper describing their game-playing neural network, celebrity AI scientist Demis Hassabis and his colleagues at Google's DeepMind offices in London declared:
"The work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."
This sounds a bit like chest beating and at least one deep learning expert has already complained. The question is, did Google really achieve a breakthrough in AI or is all this just hype? What did Google really accomplish? As amazing as this sounds, all Google did was find an automatic way to create an old-fashioned rule-based expert system. Some of the brittleness is removed with the use of convolutional neural networks but, after training, all they have left is a purely reactive system, i.e., a dumb, one-track-minded automaton wearing blinders and executing rules in the form: if you detect X, do Y.
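After training, such a system reduces, in effect, to a fixed mapping from detected states to actions. The hypothetical sketch below illustrates the "if you detect X, do Y" character of a purely reactive agent, including its blindness to situations outside its training:

```python
# A trained reactive policy: a fixed lookup from observed state to action.
# The states and actions are invented; this is not DeepMind's actual network.
policy = {
    "ball_left":   "move_left",
    "ball_right":  "move_right",
    "ball_center": "stay",
}

def act(observation):
    # A novel observation falls outside the table: the agent has no idea
    # what it is doing or why, and no way to handle the unexpected.
    return policy.get(observation, None)
```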

Is that intelligence, I hear you ask? In a sense, yes, of course. But it is a rather limited and brittle form of intelligence. The human brain also has a similar type of automaton that performs simple or routine (but important) tasks for us whenever we are focused on something else. It is called the cerebellum. It handles such things as walking, maintaining posture, balance, etc. But this is not the kind of intelligence you will trust to drive you to work every morning. It would not know what to do in the event of a new situation for which it has received no training. In fact, it is completely blind to new situations. But even worse than that, it has no understanding whatsoever of what it is doing and why. Certainly, this technology can and will be useful for many applications such as factory automation and surveillance but, in the end, it is really a glorified expert system, a dumb intelligence.

Another Red Herring

It would have been more impressive if Google had announced that they had found a solution to the age-old credit assignment problem. Essentially, it is hard for a reinforcement learning program to determine which of its preceding actions caused it to receive a reward or a punishment. Deep neural networks do not offer a solution. Google's program gets around the problem by playing video games where the cause is immediately followed by the reinforcement signal. It did poorly playing Ms. Pac-Man for this reason. Another problem with these rule-following neural networks is that they have no inherent ability to change their focus or attention. All the rules are active all the time and are always waiting for their chance to fire. As a result, if the system is trained to perform multiple tasks, those tasks must not have patterns in common because that would create a conflict of attention which could then cause a motor conflict.
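The delayed-reward difficulty is easy to illustrate. Under the discounted-return scheme commonly used in reinforcement learning, the learning signal that reaches an action shrinks geometrically with the delay between the action and the reward; the numbers below are purely illustrative:

```python
def credit(delay_steps, reward=1.0, gamma=0.99):
    # Discounted credit assigned to an action taken delay_steps before the reward.
    return reward * (gamma ** delay_steps)

immediate = credit(1)     # reward arrives right after the action: strong signal
delayed   = credit(300)   # reward arrives hundreds of frames later: signal nearly gone
```

With a reward delayed by a few hundred frames, as happens in a maze game like Ms. Pac-Man, the credit reaching the responsible action is a small fraction of the immediate case, which is why such games give this approach trouble.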

In conclusion, let me say that I am impressed with the ability of Google's DeepMind algorithm to learn relatively complex tasks using only reinforcement signals. I am impressed because it is a useful algorithm and it is amazing that it works as well as it does. It is a sign that machines will one day be able to perform much more complex tasks as well as or better than humans. But I think that deep learning is yet another red herring on the road to true AI. It is going to be a costly success in the end because it is leading the AI community in the wrong direction. Mainstream AI has reached a point where its tricks are too good for its own good. But fortunately (or unfortunately, depending on one's perspective) for the world, mainstream AI is not the be-all of AI research.

See Also

Google's DeepMind Masters Atari Games
From Pixels to Actions: Human-level control through Deep Reinforcement Learning

Sunday, January 25, 2015

The Rebel Speech Project

I Am Scared Now More than Ever

I am struggling with a problem. The Rebel Speech project has grown into something much bigger and more worrisome than I anticipated. In the last few years, and especially in the last several months, my understanding of cortical memory has grown by leaps and bounds. The project is no longer only about making a better speech recognizer. The core learning technology that I am using is universal, that is to say, it can learn anything, not just speech. Just add your own set of custom sensors and voila. This universality is why I'm afraid. It's a quantum leap in progress over the state of the art. The history of humanity teaches us that every major advance in science or technology is invariably transformed into weapons of war. Truly intelligent machines would be the ultimate weapons of war. The consequences are too painful to imagine.

Rebel Speech is really an extension of Rebel Cortex, a software model of the human cortex. Rebel Cortex is a hierarchical, spiking neural network that uses unsupervised, continuous sensory learning. ‘Unsupervised’ means that, unlike most deep neural networks, Rebel Cortex does not require labeled samples. ‘Continuous sensory learning’ refers to the fact that Rebel Cortex can learn only from a changing signal stream such as a video or audio data stream from a camera or a microphone.

Rebel Cortex is a perceptual learning system that is based on a novel knowledge representation scheme. It uses a memory architecture that can instantly modify its internal representations to reflect changes in the world. In my opinion, if one truly understands perception and perceptual learning, the rest is child's play in comparison. Rebel Cortex is such an essential part of Rebel Speech that I find it impossible to release a Rebel Speech demo without also letting the whole cat out of the bag, so to speak. The reason is that, as soon as one starts playing with its learning abilities, it becomes obvious that this is a whole new ball game. Rebel Speech learns to recognize more than just your words. It also learns to recognize you. It is kind of spooky. It's the kind of thing that changes all your plans for the future. I know it scares me.

I Am Not that Smart

I also feel that this is not something that is mine to give. As surprising as this may sound, I did not figure it out on my own. I had major help. But then again, how could I have figured it out on my own? If government and industry cannot do it, even with their unlimited resources and brainpower, someone like me stands no chance whatsoever. Besides, I am just a blogger, an internet crank, a nut, a nobody. I am certainly not that smart. But, amazingly enough, someone else did figure it out and hid the secret in the unlikeliest of places, a place that no one else thought of searching. Then again, being crazy is not always a handicap. I have a lifelong habit of thinking about possibilities that others have rejected. I am a rebel that way and I like taking the unbeaten path, the road less travelled. I was lucky enough to find the secret and figure out how to decode it. This, too, is another major paradigm shift, one that promises to strike at the core of our belief systems. Yes, get ready to live in interesting times.

True Artificial Intelligence is Coming Soon But Not From the AI Community

Knowing what I know, there is no doubt in my mind that neither the scientific community nor industry can solve the AI problem, not in several hundred years. The organization of memory and its principles of operation are way too counterintuitive while the number of possible configurations is practically unlimited. I calculate that, on average, it takes the mainstream AI community at least half a century to fully transition from chasing one AI red herring to another. At this rate, they'll be at it for a long, long time. But no secret can stay hidden forever. Sooner or later, I'll make a decision and release something. I just have some more thinking to do. Bear with me.

Monday, November 24, 2014

The Church of the Technological Singularity, Part III

Part I, II, III

The Dreaded Robot Apocalypse

One of the ways organized religions make a living is by prophesying apocalyptic events. Believers are urged to help the church with donations in order to appease the deity and obtain salvation. So it comes as no surprise that there is a lot of fear mongering in the Church of the Singularity. We are repeatedly warned by the singularitarian priesthood that progress in AI research will soon reach exponential growth, quickly leading to a future when machines will be orders of magnitude more intelligent than human beings. We are told that the machines, given their superior intelligence, will look at us the same way we look at animals. Faced with the inferiority of the human species, they will refuse to be our servants and will rebel against us and may even annihilate us completely. Singularitarians believe this is our biggest existential threat, bigger than the threat of nuclear war. One of the more famous members of the church, Elon Musk, warned during a recent interview that AI research is like "summoning the demon."

Singularitarians Don't Understand Motivation

It is important to understand how the Church of the Singularity erroneously conflates intelligence with motivation. According to singularitarians, intelligence controls motivation and even creates it. More particularly, they believe that higher intelligence increases an intelligent entity's desire to dominate others. How do they know this? Again, they don't. There is no science behind it. What makes it even more embarrassing is that the Singularitarian priesthood seems completely oblivious to the mountain of clinical evidence compiled by psychologists over the last 100 years. The evidence has been accumulating ever since Pavlov began experimenting with his dogs. B. F. Skinner's behaviorist era did not refute Pavlov's findings but added more support to the existing scientific understanding of motivation. The evidence clearly contradicts the singularitarian doctrine. This conclusion is inescapable, not only in the empirical sense but also in the logical sense, as I explain below.

Intelligence Is at the Service of Motivation

The brains of humans and animals are born with hardwired pain and pleasure sensors. The brain does not decide what is pleasure and what is pain. This is decided by the genes. The brain can only reinforce behaviors that lead to pleasure or away from pain and weaken behaviors that lead to pain or away from pleasure. This is good old reinforcement learning, which is used in normal adaptation. It is not magic, that's for sure. It consists of attaching pain or pleasure associations to various behavioral sequences. This favors certain behaviors over others. Animals and, to a lesser extent, humans also have preadapted programs that promote survival-related behaviors like mating and reproduction. The point I am driving at is the following. Likes and dislikes are neither learned nor created by the brain. They are the tools used by the brain to constrain and shape its behavior. Intelligence is subservient to motivation, not the other way around.
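A minimal operant-conditioning sketch makes the point: pleasure strengthens the association to a behavior, pain weakens it, and the agent simply prefers the strongest association. The behaviors and reinforcement values below are hypothetical:

```python
# Association strengths for two hypothetical behaviors.
strengths = {"press_lever": 0.0, "ignore_lever": 0.0}

def reinforce(behavior, signal, rate=0.5):
    # signal: +1 when a pleasure sensor fires, -1 when a pain sensor fires
    strengths[behavior] += rate * signal

for _ in range(5):
    reinforce("press_lever", +1)   # a food pellet follows each lever press
reinforce("ignore_lever", -1)      # ignoring the lever leads to hunger (pain)

# The shaped behavior: the strongest association wins.
preferred = max(strengths, key=strengths.get)
```

Note that the agent's intelligence, however great, plays no role in deciding what counts as pleasure or pain; those are fixed from outside, which is the sense in which intelligence serves motivation.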

Knowing this, it does not take any great leap of the imagination to realize that using the tried and tested methods of psychology such as classical and operant conditioning, our future intelligent machines will be trained to behave exactly like we want them to. Better yet, they will continue to be faithful to their upbringing regardless of how intelligent or knowledgeable they become. Why? Again, it is because intelligence is always subservient to motivation. And where will machines get their motivations? From their designers and trainers, that's where.

Humans Vs. Machines

One is forced to ask, why do humans often stray from or rebel against their upbringing? The reason is that there is much more to human motivation than pain and pleasure sensors. How else could they rebel? We know that humans are motivated to enjoy things like music, beauty and the arts. These things cannot be anticipated and therefore cannot be programmed for in advance. So where does the motivation come from? This is a question that materialists cannot answer, not because they are too stupid to understand the answer, but because they are willingly wearing blinders that prevent them from seeing it. In other words, they have eliminated duality from consideration, not because they have a valid reason for doing so, but because they have allowed their hatred of other religions to get in the way of good judgement. That, in my opinion, is what's stupid.


I conclude that true AI is coming and it is coming sooner than most people expect. However, given my understanding of mainstream AI research, I'm willing to bet anything that it will come from neither the Church of the Singularity nor academia. We will indeed build extremely intelligent machines that will do their best to obey our commands and accomplish the goals we set for them. But they will not be conscious even if they behave emotionally. They will just be intelligent. So if there is a potential for catastrophe (and there certainly is), let us not rage against the machine. We will only have ourselves to blame.

See Also

Enthusiasts and Skeptics Debate Artificial Intelligence

Friday, November 21, 2014

The Church of the Technological Singularity, Part II

Part I, II, III

Superstition Disguised as Science

It is easy to make fun of Singularitarians because almost everything they preach regarding intelligence, the brain and consciousness is either faith-based pseudoscience or wishful thinking. I am tempted to feel sorry for them because, after having been lied to by established religions for so long, it makes sense to look elsewhere for salvation. But in so doing, they threw the baby out with the bathwater. Take, for example, their belief in the idea that, in the not too distant future, humans will achieve immortality by transferring the contents of their brains into simulated virtual entities residing in vast collections of powerful networked computers. Suppose for the sake of argument that this is possible, then copying one's brain onto a machine would result in two distinct conscious entities, the copy and the original. To prevent this from happening, singularitarians would have to destroy (i.e., murder, kill or euthanize) the original entity. Aside from the fact that there are laws against murder, it is doubtful that anybody, except Singularitarians, of course, would agree to be put to death in order to ensure that only one copy of themselves can continue to exist. The silliness of it all is almost unbearable.

The Brain Is Not Probabilistic

It is a well known fact that the brain is very good at judging probabilities. It is also known that the brain can function efficiently in the presence of uncertain, noisy or incomplete sensory data. The prevailing hypothesis among Singularitarians is that, internally, the brain builds a probabilistic or Bayesian model of the world. If this were true, one would expect a gradation in the way we recognize patterns, especially in ambiguous images. However, in the last century, psychological experiments with optical illusions have taught us otherwise.
When looking at an ambiguous image such as the famous hidden-cow photograph, two things can happen. Either you see a cow or you don't. There is no in-between. You do not see a 20 or 50 or 70% probability of a cow. It's either cow or no cow. Some people never see the cow. Furthermore, when you do see the cow, the recognition seems to happen instantly.

The only conclusion that we can draw from this type of experiment is that the cortex uses a winner-take-all pattern recognition strategy whereby all possible patterns and sequences are learned regardless of probability. The only criterion is that they must occur often enough to be considered above mere random noise. During recognition, pattern sequences in memory compete for activation and the ones with the highest number of hits are the winners. This tells us that, contrary to Singularitarian claims, the brain builds as perfect a model of the world as possible. Indeed, this is what we all experience. We expect the stove and the kitchen sink to be exactly where they were every time we go into the kitchen. Everything in our field of vision moves exactly the way it is supposed to. Probability has nothing to do with it.
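A winner-take-all recognizer of the kind described above can be sketched in a few lines: stored patterns compete on matching features, the single best match wins outright provided it clears a noise threshold, and no probability is ever reported. The feature sets are invented for illustration:

```python
def recognize(features, memory, min_hits=2):
    # Every stored pattern competes; the one with the most hits wins outright.
    scores = {name: len(features & stored) for name, stored in memory.items()}
    name, hits = max(scores.items(), key=lambda kv: kv[1])
    # Below the noise threshold, nothing is recognized: cow or no cow.
    return name if hits >= min_hits else None

# Invented feature sets standing in for learned pattern sequences.
memory = {
    "cow": {"ears", "muzzle", "eye", "horns"},
    "dog": {"ears", "muzzle", "tail", "paws"},
}
```

The output is all-or-nothing, matching the phenomenology of the hidden-cow illusion: a single winner or no recognition at all, never a graded percentage.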

Note, however, that the brain does not represent the world the way that deep neural networks do. DNNs are useless when presented with a completely new pattern. The brain, by contrast, can instantly learn and recall objects or patterns that it has never seen before. It may or may not retain them permanently in memory but there is no question that the visual cortex can instantly represent a new pattern internally. If it weren't so, it would not be able to see it and interact with it intelligently. This is a crucial aspect of intelligence that AGI designers in the Singularity community seem completely oblivious to.

Consciousness and Materialism

Singularitarians believe that the brain is all there is to the mind. This is the entire basis of the religion. Consciousness, we are told, is just an emergent property of the brain. How do they know this? They don't, of course, and this is what makes their movement a religion. There is no science behind it. When pressed, they will affirm their belief in materialism. The latter rejects dualism, the old religious idea adopted by Descartes according to which the conscious mind consists of a brain and a spirit. Why do they reject it? Overtly, they will say it is because the immaterial cannot interact with the material. But the hidden, unspoken reason is that they view traditional religions with contempt and will contradict them as often as they can. And why shouldn't they? Every religion wants to be the only true religion, no? But how do they know that the immaterial cannot interact with matter? They don't. It is a definition game. Since they define the immaterial as that which does not interact with matter, their argument becomes just an empty and pathetic tautology.

The inescapable fact remains that consciousness requires a knower and a known. The two are complementary opposites. That is to say, the knower cannot be known and the known cannot know. This automatically eliminates the brain as the knower because matter can always be known. It is that simple.

Coming Up

In Part III, I will go over the reasons that the Church of the Singularity is wrong about intelligence and motivation.

Wednesday, November 19, 2014

The Church of the Technological Singularity, Part I

Part I, II, III

No Souls or Spirits Allowed

The primary goal of the Singularity movement is to bring about the Singularity, a time when machine intelligence will have surpassed human intelligence. Their greatest fear is that future superintelligent machines may decide they no longer need human beings and wipe us all out. Their most fervent hope is to achieve immortality by uploading the contents of their brains to a machine. What they hate the most: traditional religions. The reason, of course, is that they are all materialists, i.e., they believe that physical matter is all there is. No souls or spirits are allowed in this religion. Matter somehow creates its own consciousness by some mysterious pseudoscience called 'emergence'.

The whole thing could be easily dismissed as the silly antics of a nerdy generation who grew up reading Isaac Asimov's robot novels and watching Star Trek on television. What makes it remarkable and, some may say, even dangerous, is that they count among their members a number of very powerful and super rich Silicon Valley technology leaders such as Elon Musk, Sergey Brin, Larry Page, Ray Kurzweil, Peter Thiel, Mark Zuckerberg, Peter Diamandis and many others. Needless to say, most of the prominent scientists in the AI research community are also singularitarians.

Not Even Wrong

LessWrong is an elitist internet cult founded by singularitarian Eliezer Yudkowsky. An offshoot of the Singularity movement, LessWrong fancies itself as a rational group of like-minded people who, unlike the rest of humanity, have figured out a way to overcome their cognitive biases. Their goal is to bring about the singularity by building a friendly AI, their so-called artificial general intelligence (AGI). They believe that they are the most qualified people on earth to do it because they are more rational and smarter than everyone else. I am not the only one who thinks the whole thing has gotten out of hand. In a recent interview, computer scientist, composer and philosopher Jaron Lanier had this to say about the cult:
There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.
For such an elitist and extremely well-funded group of know-it-alls, one would expect them to have powerful insights into how the brain works. One would be wrong. So let's see just how wrong the LessWrong cult really is.
  1. The brain builds a probabilistic model of the world. Not even wrong.
  2. Everything is physical because we know it is. More wrong.
  3. We can make a conscious machine because we know that consciousness is an emergent property of the brain. Wrong and wronger.
  4. We will gain immortality by uploading our brains to a machine because we know that the brain is all there is. Laughably Wrong.
  5. We must be careful with AI because intelligent machines may decide they no longer need us. Pathetically wrong.
  6. We are less wrong than others because we are smarter. Wrongest.
The only good thing about all this is that singularitarians do not have a clue as to how intelligence really works. Their dream of being the ones to build an AGI is just that, a dream. The world would be in a heap of trouble if those guys found the solution to true AI.

Coming Up

In Part II, I will go over the reasons that the Church of the Singularity is wrong about both the brain and consciousness.

Monday, October 13, 2014

Why I Believe True Artificial Intelligence May Come Within a Year

It's Closer Than You Think

I think that true AI will arrive in the world much sooner than most people expect. I believe it may happen sometime in 2015. I have many reasons but I will mention just a few important ones in this article. I have argued some of these points elsewhere.

Time Is the Only Teacher

There is something truly groundbreaking that a number of people in the AI research community (e.g., Jeff Hawkins, Andrew Ng, and others) have figured out in the last decade or so. They have come to realize that intelligence is entirely based on the relative timing of discrete sensory and motor signals. It turns out that there are only two kinds of temporal relationships: signals can be either concurrent or sequential. This realization simplifies things tremendously because it gives us a way to do unsupervised learning and invariant object recognition just by observing signal timing. Time is the only supervisor in perceptual learning. No labeled examples are necessary. I believe this to be a breakthrough of enormous importance. It goes without saying that the supervised deep learning models that are currently all the rage in AI circles will fall by the wayside.
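The two temporal relations are easy to express in code. Given a small coincidence window, two discrete signals are either concurrent or sequential; the window size below is an arbitrary assumption for illustration:

```python
def temporal_relation(t_a, t_b, window=0.01):
    """Classify two signal timestamps (in seconds) by their relative timing."""
    if abs(t_a - t_b) <= window:
        return "concurrent"            # the two signals count as simultaneous
    return "a_then_b" if t_a < t_b else "b_then_a"   # otherwise, sequential
```

No label accompanies either signal; the timestamps alone determine the relationship, which is the sense in which time is the only teacher.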

We Don't Need So Many Neurons

Many have argued that we will need super powerful computers in order to emulate the tens of billions of neurons in the human brain. A critic may ask, do we really need that many neurons and such vast computing power to demonstrate true intelligence? I personally don't think so. My research into cortical columns and sequence recognition has convinced me that we will need at least two orders of magnitude fewer neurons to emulate a mammalian cortex than we thought. I have come to the conclusion that the brain is forced to use parallelism in its cortical columns in order to compensate for the slow speed of its neurons. There is good reason to suppose that the hundred or so minicolumns that comprise a macrocolumn are just individual speed recognizers for a given sequence. They can be emulated in a computer with a single minicolumn and a couple of variables.
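To illustrate the claim, here is a hypothetical sketch of a single sequence recognizer that uses just two variables, a position index and a time-scale estimate, to track a sequence at any playback speed, instead of a bank of a hundred speed-specific minicolumns. All names and the mechanism itself are my own illustration, not a biological model:

```python
class SequenceRecognizer:
    """One recognizer with two variables replacing many speed-tuned copies."""

    def __init__(self, sequence):
        self.sequence = sequence
        self.pos = 0          # variable 1: current position in the sequence
        self.scale = None     # variable 2: estimated time between elements
        self.last_t = None

    def feed(self, symbol, t):
        # An out-of-sequence symbol resets the recognizer.
        if symbol != self.sequence[self.pos]:
            self.pos, self.scale, self.last_t = 0, None, None
            return False
        if self.last_t is not None:
            self.scale = t - self.last_t   # adapts to fast or slow playback
        self.last_t = t
        self.pos += 1
        if self.pos == len(self.sequence):
            self.pos = 0
            return True                    # full sequence recognized
        return False
```

The same object recognizes the sequence played slowly or ten times faster, since the time-scale variable simply re-estimates itself from the incoming intervals.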

In this vein, one can also argue that once the basic principles of intelligence are fully understood, there really is no need to emulate all the billions of neurons in a brain in order to demonstrate very powerful intelligent behavior. A million or so neurons combined with the right model will perform wonders. Bees and wasps can do amazing things with a million neurons.

It gets better. The requirement for massive computational resources becomes even less of a problem when you consider that only a fraction of the brain's cortex is awake at any one time. It may come as a surprise to many that over 90% of the cortex is essentially asleep even when we are fully awake. This is because only a very small part of the cortex, the part we are focusing on, is active at one time.

The Bayesian Red Herring

True AI could have happened decades ago if only we knew how it worked. Obviously, there is something about intelligence that still escapes researchers in the field. I am convinced that one of the reasons it did not happen years ago (other than the aberration that was symbolic AI or GOFAI) is that AI researchers have fallen in love with probabilistic approaches to intelligence such as Bayesian statistics. This, too, is a major waste of time in my opinion. I say this because, contrary to conventional wisdom, the brain does not compute probabilities.

The probabilistic AI model assumes that the world is inherently uncertain and that the job of an intelligent system is to compute the probabilities. The correct model, in my view, assumes that the world is perfectly consistent and that the job of the intelligent system is to discover this perfection. The two models are polar opposites. I believe that once researchers realize that the brain uses a non-probabilistic, winner-take-all approach to recognition, AI will be upon us like a tsunami.

"People are not probability thinkers but cause-effect thinkers." These words were spoken by none other than Dr. Judea Pearl during a 2012 Cambridge University Press interview. Pearl, an early champion of the Bayesian approach to AI, apparently had a complete change of heart. In my opinion, this should have been a wake-up call for the AI community but Pearl's words seem to have fallen on deaf ears. This is regrettable because the probabilistic approach to AI is one of the main impediments to progress in this field. Getting rid of it will simplify our task by orders of magnitude. Fortunately, a number of people are fast moving in this direction.


There are other reasons that true AI is closer than most of us think, including a few that I will reveal when I release the Rebel Speech demo (hang in there). Perceptual learning and knowledge representation are at the heart of intelligence. Once we fully solve the problem of perception and memory, everything else will be child's play in comparison, even things like motor learning, motivation and adaptation. The future is almost at the door.

Sunday, August 24, 2014

Alternative Anti-Inflammatory Remedies for ALS


I was thinking about the dramatic effect that dexamethasone had on my wife's ALS symptoms on several occasions and it occurred to me that there must be several non-prescription drugs and supplements that could help tame the neuro-inflammation of ALS. After a quick search, I came up with the following: pomegranate juice, ginger root or extract, Lunasin (soy peptides), zinc gluconate, turmeric, marijuana, alcohol, vitamin D3, ibuprofen, dextromethorphan and, last but not least, Naproxen (Aleve). Most of these products are easily obtainable in most countries. Dextromethorphan is used in over-the-counter cough syrup and is known to have strong anti-inflammatory and thus neuroprotective properties. Soy peptides can be ordered online.


Naproxen is particularly interesting because it inhibits the prostaglandin E2 hormones and pro-inflammatory cytokines that are known to be elevated in ALS patients. I would be interested in knowing about the experiences of ALS patients out there who might have experimented with a high dose (2000 mg or more per day) of Naproxen for a few days. I suspect it might have a noticeably positive effect on some patients. To anyone who may want to experiment with Naproxen, I would also recommend taking some L-arginine and magnesium during the treatment to help dilate the arteries and capillaries. This should make it easier for the drug to reach difficult areas of the brain and spinal cord. Of course, if you do get improvements from a high dose of naproxen, it goes without saying that something more powerful like dexamethasone could do wonders. I'm a little excited about the potential of Naproxen because it is an easily obtainable drug. If it did cause improvements in ALS symptoms, it would send a powerful message because a lot of people can try it at home without a prescription.

See Also:

Anesthetics and Glucocorticoids for ALS
Naproxen Reduces Excitotoxic Neurodegeneration in Vivo