Monday, November 24, 2014

The Church of the Technological Singularity, Part III

Part I, II, III

The Dreaded Robot Apocalypse

One of the ways organized religions make a living is by prophesying apocalyptic events. Believers are urged to help the church with donations in order to appease the deity and obtain salvation. So it comes as no surprise that there is a lot of fear mongering in the Church of the Singularity. We are repeatedly warned by the singularitarian priesthood that progress in AI research will soon reach exponential growth, quickly leading to a future in which machines will be orders of magnitude more intelligent than human beings. We are told that the machines, given their superior intelligence, will look at us the same way we look at animals. Faced with the inferiority of the human species, they will refuse to be our servants, rebel against us, and may even annihilate us completely. Singularitarians believe this is our biggest existential threat, bigger than the threat of nuclear war. One of the more famous members of the church, Elon Musk, warned during a recent interview that AI research is like "summoning the demon."

Singularitarians Don't Understand Motivation

It is important to understand how the Church of the Singularity erroneously conflates intelligence with motivation. According to singularitarians, intelligence controls motivation and even creates it. More specifically, they believe that higher intelligence increases an intelligent entity's desire to dominate others. How do they know this? Again, they don't. There is no science behind it. What makes it even more embarrassing is that the singularitarian priesthood seems completely oblivious to the mountain of clinical evidence compiled by psychologists over the last 100 years. The evidence has been accumulating ever since Pavlov began experimenting with his dogs. B. F. Skinner's behaviorist era did not refute Pavlov's findings but added more support to the existing scientific understanding of motivation. The evidence clearly contradicts the singularitarian doctrine. This conclusion is inescapable, not only in the empirical sense but also in the logical sense, as I explain below.

Intelligence Is at the Service of Motivation

The brains of humans and animals are born with hardwired pain and pleasure sensors. The brain does not decide what is pleasure and what is pain. This is decided by the genes. The brain can only reinforce behaviors that lead to pleasure or away from pain and weaken behaviors that lead to pain or away from pleasure. This is good old reinforcement learning, which is used in normal adaptation. It is not magic, that's for sure. It consists of attaching pain or pleasure associations to various behavioral sequences. This favors certain behaviors over others. Animals, and to a lesser extent humans, also have preadapted programs that promote survival-related behaviors like mating and reproduction. The point I am driving at is the following. Likes and dislikes are neither learned nor created by the brain. They are the tools used by the brain to constrain and shape its behavior. Intelligence is subservient to motivation, not the other way around.
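The arrangement is simple enough to sketch in a few lines of code. The toy below is purely illustrative (the actions, outcomes, reward values and learning rate are all my own invented toy values): a fixed table of pain/pleasure values, decided in advance rather than learned, shapes which behaviors get strengthened.

```python
import random

# Toy illustration (all names hypothetical): hardwired "sensors" assign
# fixed pain/pleasure values to outcomes. The learner never decides what
# is pleasant -- it only strengthens behaviors that lead to pleasure and
# weakens behaviors that lead to pain.
HARDWIRED_REWARD = {"food": 1.0, "shock": -1.0, "nothing": 0.0}

def outcome_of(action):
    # A fixed little world: pressing the lever yields food, touching
    # the grid yields a shock, waiting yields nothing.
    return {"press_lever": "food", "touch_grid": "shock", "wait": "nothing"}[action]

strength = {a: 0.0 for a in ("press_lever", "touch_grid", "wait")}
rate = 0.1  # learning rate

random.seed(0)
for _ in range(1000):
    # Pick the action whose habit strength, plus a little noise, is highest.
    action = max(strength, key=lambda a: strength[a] + random.gauss(0, 0.3))
    reward = HARDWIRED_REWARD[outcome_of(action)]
    # Move the habit strength toward the reward. The pain/pleasure values
    # themselves never change -- they are "decided by the genes."
    strength[action] += rate * (reward - strength[action])

best = max(strength, key=strength.get)  # the rewarded behavior dominates
```

The intelligence here (such as it is) does all the adapting; the motivation table stays fixed throughout.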

Knowing this, it does not take any great leap of the imagination to realize that using the tried and tested methods of psychology such as classical and operant conditioning, our future intelligent machines will be trained to behave exactly like we want them to. Better yet, they will continue to be faithful to their upbringing regardless of how intelligent or knowledgeable they become. Why? Again, it is because intelligence is always subservient to motivation. And where will machines get their motivations? From their designers and trainers, that's where.

Humans vs. Machines

One is forced to ask, why do humans often stray from or rebel against their upbringing? The reason is that there is much more to human motivation than pain and pleasure sensors. How else could they rebel? We know that humans are motivated to enjoy things like music, beauty and the arts. These things cannot be anticipated and therefore cannot be programmed for in advance. So where does the motivation come from? This is a question that materialists cannot answer, not because they are too stupid to understand the answer, but because they are willingly wearing blinders that prevent them from seeing it. In other words, they have eliminated duality from consideration, not because they have a valid reason for doing so, but because they have allowed their hatred of other religions to get in the way of good judgement. That, in my opinion, is what's stupid.

Conclusion

I conclude that true AI is coming and it is coming sooner than most people expect. However, given my understanding of mainstream AI research, I'm willing to bet anything that it will come from neither the Church of the Singularity nor academia. We will indeed build extremely intelligent machines that will do their best to obey our commands and accomplish the goals we set for them. But they will not be conscious even if they behave emotionally. They will just be intelligent. So if there is a potential for catastrophe (and there certainly is), let us not rage against the machine. We will only have ourselves to blame.

See Also

Enthusiasts and Skeptics Debate Artificial Intelligence

Friday, November 21, 2014

The Church of the Technological Singularity, Part II

Part I, II, III

Superstition Disguised as Science

It is easy to make fun of Singularitarians because almost everything they preach regarding intelligence, the brain and consciousness is either faith-based pseudoscience or wishful thinking. I am tempted to feel sorry for them because, after having been lied to by established religions for so long, it makes sense to look elsewhere for salvation. But in so doing, they threw the baby out with the bathwater. Take, for example, their belief in the idea that, in the not too distant future, humans will achieve immortality by transferring the contents of their brains into simulated virtual entities residing in vast collections of powerful networked computers. Suppose, for the sake of argument, that this is possible. Then copying one's brain onto a machine would result in two distinct conscious entities, the copy and the original. To prevent this from happening, singularitarians would have to destroy (i.e., murder, kill or euthanize) the original entity. Aside from the fact that there are laws against murder, it is doubtful that anybody, except Singularitarians, of course, would agree to be put to death in order to ensure that only one copy of themselves can continue to exist. The silliness of it all is almost unbearable.

The Brain Is Not Probabilistic

It is a well-known fact that the brain is very good at judging probabilities. It is also known that the brain can function efficiently in the presence of uncertain, noisy or incomplete sensory data. The prevailing hypothesis among Singularitarians is that, internally, the brain builds a probabilistic or Bayesian model of the world. If this were true, one would expect a gradation in the way we recognize patterns, especially in ambiguous images. However, in the last century, psychological experiments with optical illusions have taught us otherwise.
When looking at the well-known hidden-cow photograph, two things can happen. Either you see a cow or you don't. There is no in-between. You do not see a 20 or 50 or 70% probability of a cow. It's either cow or no cow. Some people never see the cow. Furthermore, when you do see the cow, the recognition seems to happen instantly.

The only conclusion that we can draw from this type of experiment is that the cortex uses a winner-take-all pattern recognition strategy whereby all possible patterns and sequences are learned regardless of probability. The only criterion is that they must occur often enough to be considered above mere random noise. During recognition, pattern sequences in memory compete for activation and the ones with the highest number of hits are the winners. This tells us that, contrary to Singularitarian claims, the brain builds as perfect a model of the world as possible. Indeed, this is what we all experience. We expect the stove and the kitchen sink to be exactly where they were every time we go into the kitchen. Everything in our field of vision moves exactly the way it is supposed to. Probability has nothing to do with it.
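For what it's worth, a winner-take-all scheme of this kind can be sketched in a few lines. Everything here is my own toy illustration, not a claim about cortical wiring: stored patterns compete by counting hits against the input, and either one of them wins outright or nothing is recognized at all.

```python
# Minimal winner-take-all recognizer (illustrative only): every stored
# pattern counts its "hits" against the input and the single best scorer
# wins outright -- there is no 20% or 70% cow, only cow or no cow.
def hits(stored, observed):
    # Count how many elements of the stored pattern appear in the
    # (possibly noisy or incomplete) observed input.
    return len(set(stored) & set(observed))

def recognize(memory, observed, noise_floor=2):
    scores = {name: hits(seq, observed) for name, seq in memory.items()}
    name, best = max(scores.items(), key=lambda kv: kv[1])
    # Winner-take-all: either a pattern wins decisively or nothing is seen.
    return name if best > noise_floor else None

memory = {
    "cow":  ["ear", "eye", "muzzle", "horn"],
    "tree": ["trunk", "branch", "leaf", "bark"],
}
print(recognize(memory, ["eye", "muzzle", "horn"]))  # incomplete input still wins
print(recognize(memory, ["cloud", "rock"]))          # below the noise floor
```

Note that the output is categorical: a winner or nothing, never a percentage.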

Note, however, that the brain does not represent the world the way that deep neural networks do. DNNs are useless when presented with a completely new pattern. The brain, by contrast, can instantly learn and recall objects or patterns that it has never seen before. It may or may not retain them permanently in memory but there is no question that the visual cortex can instantly represent a new pattern internally. If it weren't so, it would not be able to see it and interact with it intelligently. This is a crucial aspect of intelligence that AGI designers in the Singularity community seem completely oblivious to.

Consciousness and Materialism

Singularitarians believe that the brain is all there is to the mind. This is the entire basis of the religion. Consciousness, we are told, is just an emergent property of the brain. How do they know this? They don't, of course, and this is what makes their movement a religion. There is no science behind it. When pressed, they will affirm their belief in materialism. The latter rejects dualism, the old religious idea adopted by Descartes according to which the conscious mind consists of a brain and a spirit. Why do they reject it? Overtly, they will say it is because the immaterial cannot interact with the material. But the hidden, unspoken reason is that they view traditional religions with contempt and will contradict them as often as they can. And why shouldn't they? Every religion wants to be the only true religion, no? But how do they know that the immaterial cannot interact with matter? They don't. It is a definition game. Since they define the immaterial as that which does not interact with matter, their argument becomes just an empty and pathetic tautology.

The inescapable fact remains that consciousness requires a knower and a known. The two are complementary opposites. That is to say, the knower cannot be known and the known cannot know. This automatically eliminates the brain as the knower because matter can always be known. It is that simple.

Coming Up

In Part III, I will go over the reasons that the Church of the Singularity is wrong about intelligence and motivation.

Wednesday, November 19, 2014

The Church of the Technological Singularity, Part I

Part I, II, III

No Souls or Spirits Allowed

The primary goal of the Singularity movement is to bring about the Singularity, a time when machine intelligence will have surpassed human intelligence. Their greatest fear is that future superintelligent machines may decide they no longer need human beings and wipe us all out. Their most fervent hope is to achieve immortality by uploading the contents of their brains to a machine. What they hate the most: traditional religions. The reason, of course, is that they are all materialists, i.e., they believe that physical matter is all there is. No souls or spirits are allowed in this religion. Matter somehow creates its own consciousness by some mysterious pseudoscience called 'emergence'.

The whole thing could be easily dismissed as the silly antics of a nerdy generation who grew up reading Isaac Asimov's robot novels and watching Star Trek on television. What makes it remarkable and, some may say, even dangerous, is that they count among their members a number of very powerful and super rich Silicon Valley technology leaders such as Elon Musk, Sergey Brin, Larry Page, Ray Kurzweil, Peter Thiel, Mark Zuckerberg, Peter Diamandis and many others. Needless to say, most of the prominent scientists in the AI research community are also singularitarians.

Not Even Wrong

LessWrong is an elitist internet cult founded by singularitarian Eliezer Yudkowsky. An offshoot of the Singularity movement, LessWrong fancies itself a rational group of like-minded people who, unlike the rest of humanity, have figured out a way to overcome their cognitive biases. Their goal is to bring about the singularity by building a friendly AI, their so-called artificial general intelligence (AGI). They believe that they are the most qualified people on earth to do it because they are more rational and smarter than everyone else. I am not the only one who thinks the whole thing has gotten out of hand. In a recent Edge.org interview, computer scientist, composer and philosopher Jaron Lanier had this to say about the cult:
There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.
For such an elitist and extremely well funded group of know-it-alls, one would expect them to have powerful insights into how the brain works. One would be wrong. So let's see just how wrong the LessWrong cult really is.
  1. The brain builds a probabilistic model of the world. Not even wrong.
  2. Everything is physical because we know it is. More wrong.
  3. We can make a conscious machine because we know that consciousness is an emergent property of the brain. Wrong and wronger.
  4. We will gain immortality by uploading our brains to a machine because we know that the brain is all there is. Laughably Wrong.
  5. We must be careful with AI because intelligent machines may decide they no longer need us. Pathetically wrong.
  6. We are less wrong than others because we are smarter. Wrongest.
The only good thing about all this is that singularitarians do not have a clue as to how intelligence really works. Their dream of being the ones to build an AGI is just that, a dream. The world would be in a heap of trouble if those guys found the solution to true AI.

Coming Up

In Part II, I will go over the reasons that the Church of the Singularity is wrong about both the brain and consciousness.

Monday, October 13, 2014

Why I Believe True Artificial Intelligence May Come Within a Year

It's Closer Than You Think

I think that true AI will arrive in the world much sooner than most people expect. I believe it may happen sometime in 2015. I have many reasons but I will mention just a few important ones in this article. I have argued some of these points elsewhere.

Time Is the Only Teacher

There is something truly groundbreaking that a number of people in the AI research community (Jeff Hawkins and Andrew Ng, among others) have figured out in the last decade or so. They have come to realize that intelligence is entirely based on the relative timing of discrete sensory and motor signals. It turns out that there are only two kinds of temporal relationships: signals can be either concurrent or sequential. This realization simplifies things tremendously because it gives us a way to do unsupervised learning and invariant object recognition just by observing signal timing. Time is the only supervisor in perceptual learning. No labeled examples are necessary. I believe this to be a breakthrough of enormous importance. It goes without saying that the supervised deep learning models that are currently all the rage in AI circles will fall by the wayside.
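The point is easy to illustrate. In the toy sketch below (the 10 ms coincidence window is an assumed figure of my own, not a measured one), nothing but timestamps is used to sort signal pairs into the only two possible temporal relations. No labels appear anywhere.

```python
# Illustrative toy (my own sketch, not anyone's published code): with
# timestamped discrete signals, there are only two temporal relations --
# "concurrent" (within a small coincidence window) and "sequential".
COINCIDENCE_WINDOW = 0.010  # 10 ms, an assumed figure

def relation(t_a, t_b):
    return "concurrent" if abs(t_a - t_b) <= COINCIDENCE_WINDOW else "sequential"

# A stream of (signal, time) events. No labels anywhere: time alone
# tells us which signals belong together and which follow one another.
events = [("A", 0.000), ("B", 0.004), ("C", 0.120), ("D", 0.121)]

pairs = {}
for i in range(len(events)):
    for j in range(i + 1, len(events)):
        (a, ta), (b, tb) = events[i], events[j]
        pairs[(a, b)] = relation(ta, tb)

# A and B fired together; C follows both of them; C and D fired together.
```

Concurrent signals are candidates for the same pattern; sequential ones for the same sequence. That is the whole taxonomy.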

We Don't Need So Many Neurons

Many have argued that we will need super powerful computers in order to emulate the tens of billions of neurons in the human brain. A critic may ask, do we really need that many neurons and such vast computing power to demonstrate true intelligence? I personally don't think so. My research into cortical columns and sequence recognition has convinced me that we will need at least two orders of magnitude fewer neurons to emulate a mammalian cortex than we thought. I have come to the conclusion that the brain is forced to use parallelism in its cortical columns in order to compensate for the slow speed of its neurons. There is good reason to suppose that the hundred or so minicolumns that comprise a macrocolumn are just individual speed recognizers for a given sequence. They can be emulated in a computer with a single minicolumn and a couple of variables.
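Here is a toy illustration of the idea (my own sketch, not a model of real cortical circuitry): instead of a hundred copies of a sequence, each tuned to a different playback speed, we store the sequence once and fit a single speed variable to the input.

```python
# One stored sequence plus a couple of variables (a speed factor and a
# tolerance) standing in for a hundred fixed-speed minicolumns.
# Purely illustrative; the numbers are invented.
def matches(template_intervals, observed_intervals, tolerance=0.1):
    # Estimate a single speed factor from the first interval, then check
    # that every observed interval is the template interval scaled by it.
    speed = observed_intervals[0] / template_intervals[0]
    return all(
        abs(o - t * speed) <= tolerance * t * speed
        for t, o in zip(template_intervals, observed_intervals)
    )

template = [10, 10, 20]                  # ms between successive nodes
print(matches(template, [10, 10, 20]))   # same speed
print(matches(template, [20, 20, 40]))   # half speed, same sequence
print(matches(template, [10, 30, 20]))   # wrong rhythm, rejected
```

The same rhythm is recognized at any speed because only the ratios between intervals matter, which is the sense in which the minicolumns would be mere speed recognizers.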

In this vein, one can also argue that once the basic principles of intelligence are fully understood, there really is no need to emulate all the billions of neurons in a brain in order to demonstrate very powerful intelligent behavior. A million or so neurons combined with the right model will perform wonders. Bees and wasps can do amazing things with a million neurons.

It gets better. The requirement for massive computational resources becomes even less of a problem when you consider that only a fraction of the brain's cortex is awake at any one time. It may come as a surprise to many that over 90% of the cortex is essentially asleep even when we are fully awake. This is because only a very small part of the cortex, the part we are focusing on, is active at one time.

The Bayesian Red Herring

True AI could have happened decades ago had we known how it worked. Obviously, there is something about intelligence that still escapes researchers in the field. I am convinced that one of the reasons it did not happen years ago (other than the aberration that was symbolic AI, or GOFAI) is that AI researchers have fallen in love with probabilistic approaches to intelligence such as Bayesian statistics. This, too, is a major waste of time in my opinion. I say this because, contrary to conventional wisdom, the brain does not compute probabilities.

The probabilistic AI model assumes that the world is inherently uncertain and that the job of an intelligent system is to compute the probabilities. The correct model, in my view, assumes that the world is perfectly consistent and that the job of the intelligent system is to discover this perfection. The two models are polar opposites. I believe that once researchers realize that the brain uses a non-probabilistic, winner-take-all approach to recognition, AI will be upon us like a tsunami.

"People are not probability thinkers but cause-effect thinkers." These words were spoken by none other than Dr. Judea Pearl during a 2012 Cambridge University Press interview. Pearl, an early champion of the Bayesian approach to AI, apparently had a complete change of heart. In my opinion, this should have been a wake-up call for the AI community, but Pearl's words seem to have fallen on deaf ears. This is regrettable because the probabilistic approach to AI is one of the main impediments to progress in this field. Getting rid of it will simplify our task by orders of magnitude. Fortunately, a number of people are moving fast in this direction.

Conclusion

There are other reasons that true AI is closer than most of us think, including a few that I will reveal when I release the Rebel Speech demo (hang in there). Perceptual learning and knowledge representation are at the heart of intelligence. Once we fully solve the problem of perception and memory, everything else will be child's play in comparison, even things like motor learning, motivation and adaptation. The future is almost at the door.

Sunday, September 28, 2014

The Encyclopedia of American Loons

Unbeknownst to me, I was inducted into the Encyclopedia of American Loons back in July. It's a real beauty and I proudly accept the honor. I guess I do have a few hardcore fans out there. LOL. I am reproducing the article here just in case it disappears for whatever reason. One never knows.
Wednesday, July 9, 2014

#1112: Louis Savain

A.k.a. Mapou (sometimes commenter name on Uncommon Descent)

First, an honorable mention to Terry Savage, formerly finance columnist for the Chicago Sun-Times, for this idiotic rant, but it’s not quite enough to earn him his very own entry.

Louis Savain, who calls himself a “rebel scientist”, is probably a minor figure, but deserves exposure as an excellent example of a certain mindset. Savain is a crackpot who disagrees with most of the major discoveries in modern science, including relativity and evolution, and has written several posts of “scientific” takedowns of theories of which he appears to have a rather tenuous grasp. Instead, Savain comes up with his own hypotheses and theories, often based to a greater or lesser extent on the Bible.

Of course, none of his work has yet appeared in any peer-reviewed scientific journals, but there is a reason for that. “Forget it. I believe in going directly to the customer, i.e., the public whom you despise, but who ultimately pays for all science research. They are my peers. I’ll stay away from politically-correct publications, thank you very much.” Ah yes, the corruption of the peer review process. Savain appears to be dimly aware that his work may not pass scrutiny by experts in the relevant fields, and responds in a manner brilliantly illustrative of the crank mindset: “Indeed, the whole peer-review system was designed as a control mechanism intended to exclude a large part of humanity from taking part in the scientific enterprise. This is incompatible with the ideals of a democratic society, in my opinion. We did not get rid of one dictatorship to succomb [sic] under the tiranny [sic] of another.”

At least he makes testable predictions, which is unusual for crackpots. For instance, Savain has repeatedly predicted the fall of Darwinism: “Assuming that the ID hypothesis is correct, one can argue that, since humans are the dominant species on earth, the designers must have had a special interest in us when they began their project. My hypothesis is that they are conducting an experiment, the purpose of which is to distinguish between believers and deniers. Given their vast intellect, it is certain that they anticipated the current conflict. If so, it is highly likely that they would have left us a secret message, a message so powerful that its mere publication would cause the collapse of the materialist fortress.” The secret message is of course found in the Book of Revelation together with the stuff about horsemen. The message will at least ensure that “the Darwinian walls will come crumbling down like the old walls of Jericho. Sweet revenge.” Science, yo.

Here are some predictions Savain has made about the cerebellum and challenged scientists to falsify. Of course, since the predictions contradict current neurology, they must be counted as already falsified, though Savain apparently fails to notice. Here Savain falsifies Einstein’s physics. It really is precious.

Diagnosis: Even after years of looking into crackpots Savain remains a special case for his blatant demonstration of the Dunning-Kruger effect, extraordinary even for a crank. But everything else is very, very typical.
I love it.

Friday, September 19, 2014

The Magic Number 7

A Small Taste of Things to Come

The secret release date for the Rebel Speech demo is approaching. What follows is a small excerpt from a document I'm working on, Rebel Speech 1.0, Theory and Program Design, which I will publish together with the Rebel Speech demo program (coming soon). Briefly, Rebel Speech is a biologically plausible, spiking neural network. It is a novel machine learning program that can learn to recognize speech in any language just like we do, by listening. Unlike most speech recognition systems which use either a Bayesian or a supervised deep learning model or both, Rebel Speech has a winner-take-all mechanism. Essentially, during learning, the program compiles as many pattern sequences as possible and then allows them to compete for activation. During recognition, the sequence with the highest number of hits is the winner. There is magic in the air.
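To give a feel for the learning phase, here is a toy sketch (illustrative only; this is not Rebel Speech code, and the stream, sequence length and threshold are invented): candidate pattern sequences are compiled from the stream, and only those that occur often enough to rise above random noise are retained. At recognition time, the survivors would then compete for activation as described above.

```python
from collections import Counter

# Toy version of the learning phase: compile every candidate pattern
# sequence seen in the stream and keep only those occurring often
# enough to rise above mere random noise.
def compile_sequences(stream, length=2, noise_threshold=3):
    counts = Counter(
        tuple(stream[i:i + length]) for i in range(len(stream) - length + 1)
    )
    return {seq: n for seq, n in counts.items() if n >= noise_threshold}

stream = list("abxabyabzabqcd")
memory = compile_sequences(stream)
# ('a', 'b') occurs four times and survives; one-off pairs such as
# ('c', 'd') fall below the noise threshold and are discarded.
```

The hit counts retained here are what the competing sequences would bring to the winner-take-all contest during recognition.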

The Magic Number 7

The number 7 is engraved in the architecture of sequence memory. It is not only the number of nodes in the body of a sequence. It also figures prominently in the temporal organization of the hierarchy.
The temporal architecture of sequence memory is dictated by the interval covariance of a sequence and the need for great precision in the construction of the hierarchy. Covariance means that the intervals between adjacent nodes in a sequence are equal to one another and remain equal even as the overall speed of the sequence changes.
Every level in the hierarchy has a basic temporal interval, which is the smallest possible interval for that level. At the bottom level, the basic interval is 10 milliseconds. Each time one climbs up one level, the basic interval is multiplied by 7 as follows: 10, 70, 490, 3430, 24010, 168070, 1176490, 8235430, etc. In other words, the basic interval grows exponentially. At level 7 (counting the bottom level as level 0), it is already about 2.29 hours. By the 10th level, it is over 32 days. What this really means is that the timing of sequences at a given level varies 7 times more slowly than the sequences at the level immediately below it.
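The interval arithmetic is easy to check, numbering the bottom level as level 0:

```python
# Basic intervals in the sequence hierarchy: each level's interval is
# 7 times the one below it, starting from 10 ms at level 0.
BASE_MS = 10  # bottom-level basic interval, in milliseconds

def basic_interval_ms(level):
    return BASE_MS * 7 ** level

intervals = [basic_interval_ms(n) for n in range(8)]
# [10, 70, 490, 3430, 24010, 168070, 1176490, 8235430]

hours_at_level_7 = basic_interval_ms(7) / 3_600_000    # about 2.29 hours
days_at_level_10 = basic_interval_ms(10) / 86_400_000  # about 32.7 days
```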

PS. Hang in there.

Sunday, August 24, 2014

Alternative Anti-Inflammatory Remedies for ALS

Alternatives

I was thinking about the dramatic effect that dexamethasone had, on several occasions, on my wife's ALS symptoms, and it occurred to me that there must be several non-prescription drugs and supplements that could help tame the neuro-inflammation of ALS. After a quick search, I came up with the following: pomegranate juice, ginger root or extract, lunasin (soy peptides), zinc gluconate, turmeric, marijuana, alcohol, vitamin D3, dextromethorphan and, last but not least, naproxen (Aleve). Most of these products are easily obtainable in most countries. Dextromethorphan is used in over-the-counter cough syrup and is known to have strong anti-inflammatory and thus neuroprotective properties. Soy peptides can be ordered online.

Naproxen

Naproxen is particularly interesting because it inhibits the prostaglandin E2 hormones and pro-inflammatory cytokines that are known to be elevated in ALS patients. I would be interested in knowing about the experiences of ALS patients out there who might have experimented with a high dose (2000 mg or more per day) of naproxen for a few days. I suspect it might have a noticeably positive effect on some patients. To anyone who may want to experiment with naproxen, I would also recommend taking some L-arginine and magnesium during the treatment to help dilate the arteries and capillaries. This should make it easier for the drug to reach difficult areas of the brain and spinal cord. Of course, if you do get improvements from a high dose of naproxen, it goes without saying that something more powerful like dexamethasone could do wonders. I'm a little excited about the potential of naproxen because it is an easily obtainable drug. If it did cause improvements in ALS symptoms, it would send a powerful message because a lot of people can try it at home without a prescription.

See Also:

Anesthetics and Glucocorticoids for ALS
Naproxen Reduces Excitotoxic Neurodegeneration in Vivo