Saturday, July 7, 2018

No Pattern Hierarchy in the Thalamus

I Make Mistakes All the Time

I was wrong about the need for a pattern hierarchy in the thalamus. There is no 10-level binary tree. The only memory hierarchy is in the cortex and it is a hierarchy of minicolumns. I was wrong because I trusted my own flawed assumptions over the occult texts that I use in my AI research. I should know better by now. The texts are never wrong. I have no excuse.

The result of this latest revision is that pattern learning is much, much faster and easier to implement than I had assumed. There are about four times as many sensors as pattern neurons and the pattern neurons have only four inputs on average. My newest experimental program now has fewer neurons than before and its recognition accuracy and noise tolerance have improved dramatically.
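For the curious, here is a minimal sketch of what such a flat (non-hierarchical) pattern layer might look like in code. The class name, the coincidence-window mechanism and all the numbers are my own illustrative assumptions, not a description of my actual program; the only details taken from above are the flat layer, the 4-to-1 sensor-to-pattern ratio and the four inputs per pattern neuron.

```python
import random

class PatternNeuron:
    """A pattern neuron that fires when all of its sensor inputs
    spike within a short coincidence window (hypothetical sketch)."""
    def __init__(self, sensor_ids, window=0.01):
        self.sensor_ids = sensor_ids   # ~4 inputs on average
        self.window = window           # coincidence window in seconds

    def fires(self, spike_times):
        # spike_times maps sensor id -> last spike time (missing = no spike)
        times = [spike_times.get(s) for s in self.sensor_ids]
        if any(t is None for t in times):
            return False
        return max(times) - min(times) <= self.window

# A flat pattern layer: four times as many sensors as pattern neurons,
# each pattern neuron wired to four randomly chosen sensors.
num_sensors = 400
num_patterns = num_sensors // 4
sensors = list(range(num_sensors))
layer = [PatternNeuron(random.sample(sensors, 4)) for _ in range(num_patterns)]
```

Nothing hierarchical here: recognition is a single pass over the layer, which is part of why this version is so much faster and simpler to implement.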

The Demo

I am working on a demo application (speech recognition) that I plan to release without the learning module (sorry). What will make this app special is that it will have unprecedented (spooky might be a better word) noise immunity. However, I do not want to attract too much attention. I don't need it. So I must plan this carefully. Hang in there.

See Also

Fast Unsupervised Pattern Learning Using Spike Timing

Friday, June 29, 2018

Sparse Pattern Recognition on Cheap Hardware

Just a quick note. Pattern recognition in spiking neural networks is almost magical. Once the network is properly trained, good recognition performance requires amazingly little information and is strongly immune to noise and distortion. This is what allows us to see shapes in the clouds and recognize a huge variety of typefaces and handwriting styles. It is all due to the magic of timing. The really good news is that it is possible to get excellent results using a regular multi-core processor because one can disable a lot of random neurons in a trained spiking neural network without significantly affecting performance.
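To make the point about disabling neurons concrete, here is a toy sketch (emphatically not my actual network) showing that a redundant population of detectors keeps recognizing a pattern after a large fraction of them are switched off at random. The detector scheme, the majority vote and every number below are made up for the illustration.

```python
import random

random.seed(0)

NUM_SENSORS, NUM_DETECTORS = 100, 200

# The pattern to be recognized, over 100 binary sensors.
target = [random.randint(0, 1) for _ in range(NUM_SENSORS)]

# A redundant population of detectors: each one watches a random
# subset of 8 sensors and votes "match" when its whole subset agrees.
detectors = [random.sample(range(NUM_SENSORS), 8)
             for _ in range(NUM_DETECTORS)]

def recognize(inputs, active):
    """Majority vote among the detectors that are still enabled."""
    votes = sum(
        all(inputs[i] == target[i] for i in detectors[d])
        for d in active
    )
    return votes > len(active) / 2

# Disable 40% of the detectors at random...
enabled = random.sample(range(NUM_DETECTORS), 120)

# ...and corrupt the input slightly: flip 3 of the 100 sensor bits.
noisy = target[:]
for i in random.sample(range(NUM_SENSORS), 3):
    noisy[i] ^= 1
```

With 40% of the detectors gone and a noisy input, `recognize(noisy, enabled)` still succeeds, because the redundancy spreads the pattern over many small, overlapping subsets. That redundancy is what lets one prune a trained spiking network down to something a regular multi-core processor can handle.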

Have a good weekend.

Tuesday, June 26, 2018

I May Have Been Wrong About the Organization of Pattern Memory

The Two Olive Trees

Special Note: This is for believers only.

After going over my current interpretation of the occult texts regarding pattern learning (I retrace my steps all the time), it is quickly dawning on me that I may be wrong. Pattern memory and pattern learning may be even simpler than I thought. I had long assumed that pattern memory had to be organized hierarchically, like a tree. I had originally arrived at that conclusion because the occult text (Zechariah 4) mentions two olive trees, an olive tree being the occult symbol for a hierarchical memory structure. At the time, I assumed that it meant that there was one hierarchy for sequence memory and another for pattern memory. I subsequently concluded that the two olive trees only referred to the two complementary hierarchies (Yin and Yang) of sequence memory. Each hierarchy contains cortical columns that complement the columns in the other one.

The Fig Trees

I continued to use a hierarchical pattern memory even though I was having problems with it in my computer experiments. My implementation did learn to detect concurrent patterns and it was a very fast learner. The problem was that it created way too many pattern neurons. I held on to my assumption because the occult text (Zechariah 3) does use a tree to symbolize pattern detectors although it is a fig tree and not an olive tree. Still, I felt that something was wrong. Why would the text use an olive tree to symbolize the hierarchical organization of sequence memory as a whole and a fig tree to symbolize a single pattern detector? It did not add up.

Much Simpler Than I Thought

I am currently thinking that pattern memory may not be hierarchical at all. But I have to give it more thought. This is a major revision to my previous model. I may be wrong again because I have been wrong many times before in my research. I am writing code to test my new hypothesis. I will post an update soon whether or not the new model is successful. Stay tuned.

Monday, June 25, 2018

The Many Functions of Cortical Columns

This is a short post. I don't know when, but when the time comes, I will post a multipart article on the many functions of cortical columns. They are responsible for, or participate in, the following:

  • Short- and long-term memory.
  • Knowledge acquisition.
  • Common sense understanding of the world.
  • Predictions.
  • Language understanding.
  • Reasoning.
  • Attention and focusing.
  • Motor learning and goal-oriented behavior.
  • Invariant recognition.
  • Object clustering.
  • Planning.
  • Recollection.
  • Metaphors and analogies.
In addition, cortical columns are involved in reinforcement learning (adaptation) based on appetitive and aversive inputs. As with everything else in the brain, the underlying operating principle is temporality and the organizing principle is complementarity (Yin and Yang). Hang in there.

Sunday, June 24, 2018

True Free Market Capitalism Is the Solution

Intelligent Machines Will Eliminate Human Labor Within Our Lifetimes

Many people are beginning to worry that automation will take their jobs. There is no question that intelligent machines will eventually do pretty much everything for us. In any just system, workers would welcome this with open arms. We are afraid of losing our jobs because we live in a slave system created by thieving plutocrats. The thieves know they have a big, pressing problem on their hands. Their solution: socialism. But why should the unemployed masses receive a subsistence handout while the equally unemployed plutocrats live in decadent luxury? What makes them so special?

We Want True Capitalism, Not Socialism

We need no socialist programs like free education, housing, health care or whatnot. We are not children in need of babysitters. We just want what belongs to us by right to spend as we see fit in a free market system. Only a free market system can determine the proper value of property, goods and services. What we want is true capitalism where the capital belongs to all the people who own it by right. In true capitalism, the corporations belong to the people and so do the profits. If necessary or desired, workers can make more money via fair competition in a free market. When all workers are replaced by intelligent machines, the system simply continues without fail. True AI (AGI) will make everyone rich. But first, we must get rid of the plutocratic thievery, otherwise we are heading straight for disaster. And above all, do not accept the so-called universal basic income (UBI) that is being heavily promoted by the plutocrats. It is a trick, another socialist handout disguised as a humanitarian gesture. In reality, they steal your stuff and give you back a tiny percentage of it while pretending to be generous. Do not fall for this ruse.

In a Just Economic System, There is No Taxation

Our current economic systems have it completely backwards. The government should not be taxing the people. It should be the other way around. In a just society, there is no taxation: everything becomes a for-profit corporation that works for the people. This includes cities, counties, states, nations, etc. Everything! The people are the government and the shareholders. As shareholders, they can vote to change the direction of the corporations when necessary. This is direct democracy. The land and its resources, which are represented by the total amount of currency in circulation (the capital), belong to all and are leased to corporations for exploitation. The currency should be pegged to the average lease price of the land. This would eliminate fluctuations and ensure a stable currency. Special banking and investment corporations would invest the capital in various corporations and collect the profits on our behalf. The current stock market system is an abomination.

The Writing Is on the Wall

Here is a direct warning to the thieving plutocracy: Stop stealing from the people. Soon, thanks to intelligent automation, it will become obvious to them that they have been robbed for centuries. Unless you change your thieving ways, they will wake up and kick your asses.

Wednesday, June 13, 2018

Robotics, Automation and the Cerebellum

(Image credit: Drake et al. 2010, Gray's Anatomy for Students, 2nd edn)

The cerebellum can learn complex sensorimotor tasks using a simple technique called imitation. If you are a roboticist or an automation expert, you will find the powerful supervised learning technique I describe below of special interest because it could potentially simplify your work. What makes this technique so powerful is its sheer simplicity and its ability to learn complex tasks very fast.

The First, the Last and Everything in Between

There are two kinds of sensors in the brain. One kind (poor) is used by the neocortex and the other (rich) is used by the cerebellum. Poor sensors come in complementary pairs, stimulus onset and offset. For example, we may have a sensor (A) that fires a single pulse when the amplitude of an audio frequency climbs above a particular level. The complementary sensor (B) would fire when the amplitude falls below the same level. It so happens that there is a train of pulses between A and B but the neocortex does not care about what happens between them. What matters to it is the precise timing of the first and last pulses. Of course, for every type of stimulus, the brain uses many sensors to handle multiple levels or amplitudes.

Unlike the neocortex, the cerebellum is a hungry beast because it wants it all: the first, the last and everything in between. Thus every sensory input going into the cerebellum is a train of pulses. This might seem like a total waste of pulses but it is actually essential to the learning method used by the cerebellum. Again, for emphasis, I differentiate between the two types of sensors by referring to cerebellum sensors as rich sensors. Single pulse sensors (first and last) are poor sensors.
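As a toy illustration of the two sensor types, here is how one might convert a sampled amplitude signal into poor (onset/offset) spikes and rich (full pulse train) spikes. The discrete time steps and the function names are my own simplifications for the example.

```python
def poor_sensors(signal, level):
    """Complementary poor sensors for one amplitude level: sensor A
    fires a single pulse when the signal climbs above `level`, and
    sensor B fires a single pulse when it falls back below it."""
    onsets, offsets = [], []
    above = signal[0] > level
    for t, x in enumerate(signal):
        if x > level and not above:
            onsets.append(t)        # A: the first pulse of the burst
        elif x <= level and above:
            offsets.append(t)       # B: the last pulse of the burst
        above = x > level
    return onsets, offsets

def rich_sensor(signal, level):
    """A rich (cerebellar) sensor: a pulse at every time step the
    signal stays above the level -- the whole train, not just the ends."""
    return [t for t, x in enumerate(signal) if x > level]
```

For a burst that rises above the level at step 2 and falls below at step 5, the poor pair reports only the steps 2 and 5, while the rich sensor reports every step in between as well.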

Cerebellar Neurons

Figure caption: Cerebellar cortical neuronal circuits. Mossy fibers from pontine nuclei, etc., send excitatory synaptic outputs to granule cells. A granule cell forms one or a few excitatory glutamatergic synapses on a Purkinje cell, where LTD occurs depending on the activity of the granule cell and a climbing fiber. Molecular layer interneurons (stellate and basket cells) receive excitatory synaptic inputs from granule cells and inhibit Purkinje cells. At inhibitory GABAergic synapses between a stellate cell and a Purkinje cell, rebound potentiation (RP) is induced by climbing fiber activity.
(Tomoo Hirano and Shin-ya Kawaguchi, "Regulation and functional roles of rebound potentiation at cerebellar stellate cell-Purkinje cell synapses")

The main neuron in the cerebellum is the Purkinje cell (PC), named after its discoverer, the Czech physiologist Jan Evangelista Purkyně. There are approximately 15 million PCs in the human brain. Each PC emits pulses that are used to control a motor effector. The PCs are arranged in tight formations like a forest, with lots of parallel fibers running through their dendrites like telephone wires. Each PC can receive signals from as many as 200,000 parallel fibers. Each parallel fiber is the long bifurcated axon of a granule cell, an intermediary neuron that conducts sensory signals arriving on mossy fibers. However, not all of the input signals arriving on mossy fibers have sensory origins. Some are control signals that are used to inhibit the PCs when necessary. These fibers are likely used for task control. They do so via so-called stellate and basket cells, which make inhibitory synaptic connections with the PCs.

Supervised Learning in the Cerebellum

The second most important entity in the cerebellum is the climbing fiber (CF). There is one CF for every PC. The CF carries training input signals to the PC. Those signals originate from the inferior olivary nucleus in the medulla oblongata which relays motor signals from motor effectors in the spinal cord to the cerebellum.

In order to understand how the cerebellum is trained to perform a sensorimotor task, it is important to know how motor effectors work. An effector is the opposite of a sensor. It, too, has a first (start) and last (stop) pulse and pulses in between. It is attached to a muscle and generates a train of pulses that contracts the muscle for as long as the pulses keep coming. The cerebellum accomplishes motor control via a mix of excitatory neurons, inhibitory neurons and tonic neurons. The latter are neurons that continually generate pulses unless they are inhibited. The exact circuit details are not important; they are implemented differently in various animals. What matters are the principles.
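The tonic neuron and the start/stop structure of an effector's pulse train can be sketched in a few lines. The discrete 0/1 time steps and both function names are my own toy model, not the actual biology.

```python
def tonic_neuron(inhibit):
    """A tonic neuron keeps firing at every time step unless an
    inhibitory input silences it at that step."""
    return [0 if i else 1 for i in inhibit]

def contraction_interval(pulses):
    """A motor effector's pulse train: the first pulse starts the
    contraction, the last pulse ends it, and the muscle stays
    contracted for as long as the pulses keep coming. Returns the
    (start, stop) step indices, or None if the effector never fires."""
    times = [t for t, p in enumerate(pulses) if p]
    return (times[0], times[-1]) if times else None
```

With inhibitory neurons gating tonic ones, turning a muscle on and off reduces to timing the inhibition correctly, which is exactly what the training mechanism below learns to do.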

Learning in the cerebellum consists of finding parallel fiber inputs to Purkinje cells that activate and deactivate motor effectors at the correct time. The training occurs while the neocortex is going through a given sensorimotor task. The cerebellum learns to faithfully imitate the task. Remember that parallel fibers carry pulse trains from rich sensors. These fibers try to make synaptic connections with as many PCs as possible. To train a PC, the training mechanism only needs to send corrective signals to the PC via the climbing fiber whenever the associated motor effector stops firing. The CF signal will suppress and disconnect any parallel fiber connection that is still receiving sensory pulses. The end result is that only parallel fibers that cause the PC to fire and stop firing at the right time will remain connected.
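The pruning rule just described can be sketched as follows. The representation (binary pulse trains sampled at discrete steps) and the one-shot disconnection are my own simplifying assumptions; the principle (the climbing fiber disconnects any parallel fiber caught firing while the effector is silent) is the one described above.

```python
def train_purkinje(parallel_fibers, effector, steps):
    """Prune the parallel-fiber inputs of one Purkinje cell.

    parallel_fibers: dict fiber_id -> list of 0/1 pulses per time step
    effector: 0/1 pulses per time step of the target motor signal,
              relayed to the PC by its climbing fiber during training.
    Returns the set of fibers that survive: those never caught firing
    while the effector is silent.
    """
    connected = set(parallel_fibers)
    for t in range(steps):
        if effector[t] == 0:
            # Climbing-fiber correction: disconnect any fiber still
            # delivering pulses while the effector is not firing.
            for fiber in list(connected):
                if parallel_fibers[fiber][t] == 1:
                    connected.discard(fiber)
    return connected

# Hypothetical pulse trains, one 0/1 value per time step.
fibers = {
    "good": [0, 1, 1, 1, 0, 0],  # fires exactly when the effector fires
    "bad":  [1, 1, 1, 1, 1, 1],  # still firing while the effector is silent
    "idle": [0, 0, 0, 0, 0, 0],  # never fires, so it is never pruned
}
effector = [0, 1, 1, 1, 0, 0]
surviving = train_purkinje(fibers, effector, steps=6)
```

Note that a fiber that never fires is trivially kept but never drives the PC; only the fibers that start and stop at the right time end up doing the work.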

Once the cerebellum has fully learned a task, the neocortex can just turn it on or off whenever it needs to in order to focus on other important matters.


This training system can be put to good use in all sorts of applications that require automation. Notice that there is no need for either pattern detectors or a conventional multi-layered neural network. Lots of simple rich sensors will do the trick. Sensors are essentially connected directly to motor effectors. Potential applications can range from self-driving trains, cars and buses to self-flying aircraft and self-navigating ships. The learning system simply learns by imitating human operators.

Robots might be a little harder to train. It would require a human trainer to wear a harness fitted with special sensors that can record precise movements. These could then be used as training signals for the robotic cerebellum. I expect training to be extremely fast.

Coming Soon

In an upcoming article, I will describe how I got my understanding of the cerebellum. Stay tuned.

Thursday, June 7, 2018

I'm Rather Busy But Cerebellum Post Is Coming Soon

"Behold, I stand at the door and knock"

"If anyone hears my voice and opens the door, I will come in to him and will dine with him, and he with me." Believe it or not, this metaphor from the Book of Revelation is the essence of supervised sensorimotor learning in the cerebellum. It is as simple as it is powerful. If you are into robotics, you will not want to miss this. Stay tuned.