Tuesday, November 28, 2017

I'm Writing an Article on Invariant Object Recognition But...

... I Have to be Careful Not to Reveal My Hand

As much as I would like to, the time has not yet arrived for me to reveal the full extent of my AI research. It is hard to explain invariant object recognition without letting the cat out of the bag. And a beautiful cat it is. Some of my readers are very clever and could probably extrapolate everything if I reveal too much. I have to think things through. Stay tuned.


Robert said...

Psalm 19:9; Daniel 8:26; 1 John 2:8; Revelation 22:6




Peter (stn1986@hotmail.com) said...

Take your time. Also, I think it is not your cat (and not your bag either). You are the messenger, don't feel bad.

Also, I actually implemented a simple version of your ideas (pattern neurons, sequences, branches, etc.), and it's quite interesting.

Louis Savain said...

Peter, thanks for the comment. You're right. It is neither my bag nor my cat. I have been reluctant to reveal crucial aspects of my research because I know that the time has not yet arrived. I reveal bits and pieces so that those among us who are watching for a sign may know that the time is drawing near. Rejoice.

To those who have an ear to hear, keep in mind that there are not just one but two beautiful cats in the bag. One is the brain (intelligence) and the other has to do with fundamental physics. Both cats will transform the world as we know it. They will not be released until they are fully ready.

Peter (stn1986@hotmail.com) said...

Hi Louis,

I understand. Thanks for all the hard work, Mr. Savain.

May I ask you a question? It's OK if you don't have time, I understand. I'm just very interested in this problem.

I have prepared a sample figure to show my problem:


If you don't trust this link I can upload it somewhere else. I've used the free service https://imgbb.com.

I've marked the sensors that will be active with a purple X. You'll see that the same pattern is expressed through different paths and no closed loops are formed.

My problem is this: I've found that some patterns can be expressed through different pathways. E.g., different combinations of different patterns will form the same result at higher pattern levels.

I believe pruning these will not prevent this problem. I hope the figure makes it clear.

Louis Savain said...

Peter, thanks for the comment. This is actually a problem I've encountered in my experiments. I noticed that some of the pattern neurons never get connected to sequence memory because of the rules of sequence learning.

There is a simple way to get rid of those duplicate patterns. It's actually an occult principle. Trees (pattern neurons) that do not produce fruits (connections to sequence memory) are eventually eliminated.
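A minimal sketch of this "trees that bear no fruit" pruning rule might look like the following. The class name, the `sequence_links` counter, and the `grace_period` threshold are my own illustration, not details from the actual implementation:

```python
# Hypothetical sketch: prune pattern neurons that never "bear fruit",
# i.e. never acquire a connection into sequence memory.

class PatternNeuron:
    def __init__(self, name):
        self.name = name
        self.sequence_links = 0   # connections into sequence memory
        self.age = 0              # update cycles survived so far

def prune_barren(neurons, grace_period=100):
    """Keep only neurons that are still young or already fruitful."""
    survivors = []
    for n in neurons:
        n.age += 1
        if n.sequence_links > 0 or n.age < grace_period:
            survivors.append(n)
    return survivors

# Usage: a fruitful neuron survives; an old, barren one is eliminated.
a = PatternNeuron("A"); a.sequence_links = 2; a.age = 500
b = PatternNeuron("B"); b.age = 500
print([n.name for n in prune_barren([a, b])])  # ['A']
```

The grace period is just one way to let a new neuron prove itself before it is culled; any delay mechanism would do.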

I never bothered to figure out why this was happening. Now I know, thanks to you.

PS. As an aside, I also noticed that new pattern discovery quickly dies out after a while. I anticipate that future intelligent robots will be built with fully formed pattern memories. There are only so many elementary patterns that can be discovered.

Peter (stn1986@hotmail.com) said...

Much obliged. These specific rules of sequence learning are secrets for now, I assume?

I'm guessing one should just record all sequences that are observed and then apply some rule(s) to prevent excessive duplication. However, I'm just speculating. Have a nice weekend.

Peter (stn1986@hotmail.com) said...

I've found a simple way to prevent these redundant connections: only allow a new pattern if the two input neurons/sensors have no _active_ parents (i.e., they are not yet contributing anything to the overall activation).

In my example, the active sensors all have activated pattern neurons. These new purple connections would not be allowed as they wouldn't contribute anything new. I was actually kinda worried about this, because it'd explode the pattern space.
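The rule above can be sketched in a few lines. The data layout (a parent map plus a set of active pattern ids) and the sensor/pattern names are my own assumptions for illustration:

```python
# Hypothetical sketch of the rule: a new pattern over two inputs is
# allowed only if neither input currently has an *active* parent pattern.

def may_form_pattern(a, b, parents, active):
    """parents: dict mapping input id -> set of parent pattern ids
       active:  set of currently active pattern ids"""
    def has_active_parent(x):
        return any(p in active for p in parents.get(x, ()))
    return not has_active_parent(a) and not has_active_parent(b)

parents = {"s1": {"P1"}, "s2": {"P1"}, "s3": set()}
active = {"P1"}
print(may_form_pattern("s1", "s2", parents, active))  # False: already covered
print(may_form_pattern("s3", "s4", parents, active))  # True: nothing claims them
```

Note that the check depends on which parents are active *right now*, so the same pair of inputs could still form a pattern later, in a context where their parents are silent.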

Interestingly, the pattern problem can be attacked from an information theory angle by considering the entire sensor/pattern space as just a bunch of bits. Finding the signal in the noise etc. Some clever shifting of bits can make this quite performant.
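As one way to picture that bit-level view (the mask layout here is my own toy example, not a description of any particular implementation): pack the sensor layer into an integer, and the question "does this candidate pattern add anything new?" becomes a single mask test.

```python
# Hypothetical sketch: treat the sensor layer as one integer bitmask, so
# redundancy checks reduce to cheap bitwise operations.

def contributes(pattern_mask, covered_mask):
    """True if pattern_mask activates at least one sensor bit
    not already covered by existing active patterns."""
    return pattern_mask & ~covered_mask != 0

covered = 0b1100          # sensors already explained by active patterns
print(contributes(0b0110, covered))  # True: bit 1 is new
print(contributes(0b1100, covered))  # False: fully redundant
```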

(I run a small software shop, but I'm no scientist, no worries. :)