Sunday, January 25, 2015

The Rebel Speech Project

I Am Scared Now More than Ever

I am struggling with a problem. The Rebel Speech project has grown into something much bigger and more worrisome than I anticipated. In the last few years, and especially in the last several months, my understanding of cortical memory has grown by leaps and bounds. The project is no longer only about making a better speech recognizer. The core learning technology that I am using is universal, that is to say, it can learn anything, not just speech. Just add your own set of custom sensors and voila. This universality is why I'm afraid. It's a quantum leap in progress over the state of the art. The history of humanity teaches us that every major advance in science or technology is invariably transformed into weapons of war. Truly intelligent machines would be the ultimate weapons of war. The consequences are too painful to imagine.

Rebel Speech is really an extension of Rebel Cortex, a software model of the human cortex. Rebel Cortex is a hierarchical, spiking neural network that uses unsupervised, continuous sensory learning. ‘Unsupervised’ means that, unlike most deep neural networks, Rebel Cortex does not require labeled samples. ‘Continuous sensory learning’ means that Rebel Cortex learns only from a changing signal stream, such as video or audio data from a camera or a microphone.
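To make the idea concrete, here is a minimal, hypothetical sketch of label-free learning from a changing signal stream. This is not the Rebel Cortex implementation (which is unreleased); it is only an online winner-take-all layer, a standard competitive-learning technique, that adapts its weights to whatever stream it is fed, with no labels involved:

```python
import numpy as np

# Toy sketch of unsupervised, continuous sensory learning: an online
# winner-take-all (competitive Hebbian) layer whose units adapt from a
# raw signal stream. All names and parameters here are invented for
# illustration; none of this is Rebel Cortex code.

rng = np.random.default_rng(0)

N_UNITS = 8       # number of "neurons" in the layer
FRAME = 16        # samples per input frame
LEARN_RATE = 0.1

weights = rng.normal(size=(N_UNITS, FRAME))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

def step(frame):
    """One learning step: the best-matching unit moves toward the input."""
    frame = frame / (np.linalg.norm(frame) + 1e-9)
    winner = int(np.argmax(weights @ frame))          # unit that fires
    weights[winner] += LEARN_RATE * (frame - weights[winner])
    weights[winner] /= np.linalg.norm(weights[winner])
    return winner

# Simulated "changing signal stream": two alternating tones.
t = np.arange(FRAME)
stream = [np.sin(0.3 * t), np.sin(0.7 * t)] * 100
winners = [step(f) for f in stream]

# After enough exposure, each tone settles on a consistent winning unit.
print(winners[-2], winners[-1])
```

The point of the sketch is only that nothing in the loop ever sees a label: structure in the stream itself is what shapes the units.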

Rebel Cortex is a perceptual learning system that is based on a novel knowledge representation scheme. It uses a memory architecture that can instantly modify its internal representations to reflect changes in the world. In my opinion, if one truly understands perception and perceptual learning, the rest is child's play in comparison. Rebel Cortex is such an essential part of Rebel Speech that I find it impossible to release a Rebel Speech demo without also letting the whole cat out of the bag, so to speak. The reason is that, as soon as one starts playing with its learning abilities, it becomes obvious that this is a whole new ball game. Rebel Speech learns to recognize more than just your words. It also learns to recognize you. It is kind of spooky. It's the kind of thing that changes all your plans for the future. I know it scares me.

I Am Not that Smart

I also feel that this is not something that is mine to give. As surprising as this may sound, I did not figure it out on my own. I had major help. But then again, how could I have figured it out on my own? If government and industry cannot do it, even with their unlimited resources and brainpower, someone like me stands no chance whatsoever. Besides, I am just a blogger, an internet crank, a nut, a nobody. I am certainly not that smart. But being crazy is not always a handicap. I have a lifelong habit of thinking about possibilities that others have rejected. I am a rebel that way and I like taking the unbeaten path, the road less travelled. Amazingly enough, someone else did figure it out and hid the secret in the unlikeliest of places, a place that no one else thought of searching. I was lucky enough to find the secret and figure out how to decode it. This, too, is a major paradigm shift, one that promises to strike at the core of our belief systems. Yes, get ready to live in interesting times.

True Artificial Intelligence is Coming Soon But Not From the AI Community

Knowing what I know, there is no doubt in my mind that neither the scientific community nor industry can solve the AI problem, not in several hundred years. The organization of memory and its principles of operation are way too counterintuitive, while the number of possible configurations is practically unlimited. I calculate that, on average, it takes the mainstream AI community at least half a century to fully transition from chasing one AI red herring to another. At this rate, they'll be at it for a long, long time. But no secret can stay hidden forever. Sooner or later, I'll make a decision and release something. I've just got some more thinking to do. Bear with me.


Maksym Diachenko said...

Is there any chance we will see a more detailed technical description along with a demo?

Louis Savain said...

Hi Maksym,

I have prepared documents for the theory and the software implementation. I also have commented C# source code for the demo. I plan to release everything as soon as I can think through all the ramifications. There are many real dangers in the picture and I must be careful not to attract too much attention. I'm OK now because I have a crackpot reputation. Just be patient for a little while longer. There is a time for everything.

Edward ThomasM said...

Since this new AI takes in only a changing video or audio signal stream, it would seem to share a property with conventional software. Namely:

"Garbage in, garbage out"

Even though this AI is "unsupervised" as you describe, it would seem to be at least biased by whatever unavoidable constraints are inherent in the source of the signals, and that source would have been selected by someone, UNLESS the AI were provided the means to select its own input from a rich set of choices.

If you were to choose an audio stream consisting of the text of a pile of physics writings, including your own, it could be interesting to observe the output.

The output from this AI is unspecified. Let's assume that audio in produces audio out.

For this AI to learn anything from a stream of words, it would, I think, have to have a dictionary of definitions that it had, by some means, come to understand. Starting from scratch, it seems it begins without any understanding.

Another issue: you said it is written in C#.
This indicates it rides on top of a conventional computer as an interpreter of video or audio streams. It sounds terribly slow.

Written words are much more explicit than a strung-together stream of spoken words. Both are symbols that represent meaning, but starting from scratch the AI would first have to figure out the meaning. Not an easy task, but if the AI does have merit, then feed it a huge dictionary of the English language and sit back and wait several millennia to see what comes out on its own, or even with some coaxing from a stream of questions about the meaning of the words it's been digesting.

What you have said, so far, casts a thick blanket of doubt over me.

Edward Medalis

Louis Savain said...

Hi Edward,

For the time being, Rebel Speech simply learns to recognize speech from a microphone. It does not understand what the words mean. It can also learn to recognize other sounds such as music, animal sounds, machinery, etc.

Rebel Speech is what is known as an unsupervised neural network. That is to say, unlike current speech-learning neural networks, sounds do not have to be prelabeled. Rebel Speech creates its own labels by associating learned sounds with specific neurons. Labels can be attached to those neurons after learning, if desired.
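The idea of attaching labels to neurons after unsupervised learning can be sketched as follows. This is a hypothetical illustration, not the Rebel Speech code; the clip names, neuron indices, and labels are all invented:

```python
# Post-hoc labeling sketch: an unsupervised learner has already mapped
# each sound clip to a winning "neuron" (a cluster it formed on its own).
# A human then names a few neurons, and every clip that activates a named
# neuron inherits that name.

# Suppose training assigned these clips to neurons (unit indices):
clip_to_neuron = {
    "clip_01": 3, "clip_02": 3, "clip_03": 7,
    "clip_04": 1, "clip_05": 7, "clip_06": 3,
}

# After listening to one example per neuron, a user names the neurons:
neuron_labels = {3: "dog bark", 7: "doorbell", 1: "speech"}

def recognize(clip_id):
    """Return the human-readable label of the neuron the clip activates."""
    return neuron_labels.get(clip_to_neuron[clip_id], "unlabeled")

print(recognize("clip_05"))  # doorbell
```

Note that the labels play no part in learning itself; they are just names pinned onto categories the network already discovered.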

That being said, given a powerful enough computer and both visual (camera) and audio sensors (microphone), I'm confident that Rebel Cortex can learn to understand the world around it. If used as part of a robot's brain, other neural modules would have to be added to handle motor output and motivation (including pain and pleasure sensors).

In order for it to understand text, it would have to be taught to read, pretty much the same way children are taught.

dashxdr said...

I too have been very concerned about the question of releasing The Real Deal secret to intelligence (specifically, learning). I too completely discount the so-called geniuses who consider AI to be a danger in and of itself, as if it will acquire motivation (destroy all humans!) and wipe us out. What a joke...

The danger I see is the same one you see: vile, control-freak humans imposing their will on humanity through the use of an army of slave intelligent machines. The concept of centralized, irrevocable, inescapable, all-encompassing control is my concept of hell. And everything I see governments trying to do points in that direction. The puppet politicians just want to be higher up in the slave system than average. They don't hope for freedom, they fear it. They just want to have more authority over the slaves than average.

It seems to me the only way to combat centralized control is to spread the knowledge of how to create intelligent machines as far as possible. Rather than one zealot (NSA with infinite funding) being the only one to have the secret, you spread it around so many entities have the secret. Sort of mutually assured destruction in the age of intelligent machines.

I don't worry about destruction of all human life. I think that wouldn't be possible or even necessarily such a tragedy. It would be sort of a report card on humanity. Whether humanity passes or fails... only matters to humanity. The universe doesn't care.

I worry about an inescapable dark ages, where nothing ever changes, nothing is ever allowed to change, because intelligent machines are used to impose "order" on the population. No one can be trusted with such power. Because "order" for one is "stagnation" for another.

Louis Savain said...


It seems to me the only way to combat centralized control is to spread the knowledge of how to create intelligent machines as far as possible. Rather than one zealot (NSA with infinite funding) being the only one to have the secret, you spread it around so many entities have the secret. Sort of mutually assured destruction in the age of intelligent machines.

Yeah, I see what you mean and I agree that it would probably keep things manageable for a little while. However, it would be impossible to keep an eye on, let alone control, what different groups are doing or planning to do with their newfound AI powers. The opportunities for malfeasance are everywhere and the risks are extremely high. We humans have a major problem, a curse even. As a species, we have no honor. Knowledge is power and power is dangerous. I see no way that humanity can survive the coming age of knowledge unless we get help from somewhere.

dashxdr said...

I see no way that humanity can survive the coming age of knowledge unless we get help from somewhere.

Maybe this is always the way of life when it comes about. It achieves intelligence, then commands its own evolutionary advancement, then it blows itself to hell. Rinse and repeat...

I've always thought the way around such a risk is to get off the Earth, but as individuals. With all life on Earth, it's all the eggs in a single basket. If we're spread out more around the solar system, at least a single destructive event won't end all life as we know it.

I don't think it's realistic to hope for help from outside. The age of superintelligent machines is unavoidable. I don't even think it should be avoided.

I'd love for the miracle to occur in my lifetime. I'm curious to see how it all plays out.

Incidentally, you might take a look at the Black Mirror TV series. It has some cool insights.