Monday, April 24, 2017

Professor Hubert Dreyfus (1929 - 2017)


UC Berkeley Professor Hubert Dreyfus has passed away at the age of 87. Professor Dreyfus is a hero of mine. He was a fearless rebel at heart, the first to criticize the AI community for its symbolic AI nonsense. They hated him for it, but he was right, of course. Did the AI community ever apologize for its personal attacks on him? Of course not. The AI community has always been full of itself, and it still is.

Dreyfus contributed more to the field of artificial intelligence than its best practitioners. His insistence that the brain does not model the world is an underappreciated tour de force. His ability to connect the works of his favorite philosophers (Martin Heidegger, Maurice Merleau-Ponty) to the workings of the brain was, in my opinion, his greatest intellectual achievement. I wrote an article about this topic in July of last year. Please read it to appreciate the depth of Dreyfus' understanding of a field that rejected him.

The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

The world owes Professor Dreyfus a debt of gratitude. Thank you, Professor.

Monday, April 10, 2017

Signals, Sensors, Patterns and Sequences

[Note: The following is an excerpt from a paper I am writing as part of the eventual release of the Rebel Speech demo program, the world's first unsupervised audio classifier. I have not yet set a date for the release. Please be patient.]

Abstract

Signals, sensors, patterns and sequences are the basis of the brain’s amazing ability to understand the world around it. In this paper, I explain how the brain uses them for perception and learning. Although I delve a little into the neuroscience at the end, I restrict my explanation mostly to the logical and functional organization of the cerebral cortex.

The Perceptual System

Four Subsystems

Perception is the process of sensing and understanding physical phenomena. The brain’s perceptual system consists of four subsystems: the world, the sensory layer, pattern memory and sequence memory. Both pattern and sequence memories are unsupervised, feedforward, hierarchical neural networks. As explained later, the term “memory” is somewhat inadequate; the networks are really high-level, complex sensory organs. An unsupervised network is one that can classify patterns, objects or actions in the world directly from sensory data. A feedforward network is one in which input information flows in only one direction. A hierarchical network is organized like a tree, that is, higher-level items are composed of lower-level ones.
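
Since these three terms do most of the work in what follows, here is a minimal C# sketch, in my own illustrative names, of a feedforward hierarchical node: signals travel only from children to parents, and a higher-level item fires when its lower-level constituents have fired. It is a toy, not a design.

    using System;
    using System.Collections.Generic;

    // A node in a feedforward hierarchy: it listens to its children and
    // reports upward to its parents. No signal ever travels back down.
    class Node
    {
        private readonly string name;
        private readonly List<Node> parents = new List<Node>();
        private readonly int childCount;
        private int received;

        public Node(string name, params Node[] children)
        {
            this.name = name;
            childCount = Math.Max(children.Length, 1);
            foreach (var child in children)
                child.parents.Add(this);
        }

        // Called by a sensor (for a leaf) or by a child node when it fires.
        public void Signal()
        {
            if (++received < childCount) return;
            received = 0;
            Console.WriteLine(name + " fired");
            foreach (var p in parents)
                p.Signal();               // feedforward: one direction only
        }
    }

    class Demo
    {
        static void Main()
        {
            var a = new Node("edge A");
            var b = new Node("edge B");
            var ab = new Node("pattern AB", a, b);  // higher-level item built from lower-level ones
            a.Signal();   // "edge A fired"
            b.Signal();   // "edge B fired", then "pattern AB fired"
        }
    }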

The world is the main perceptual subsystem because it dictates how the rest of the system is organized. The brain learns to make sense of the way the world changes over time. Elementary sensors in the sensory layer detect minute changes in the world (transitions) and convert them into precisely timed discrete signals, which are fed to pattern memory and combined there into small concurrent patterns. These are commonly called “spatial” patterns, although the label is misleading: concurrent patterns are inherently temporal and are used by all sensory modalities, not just vision.
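
To make the idea of a transition concrete, here is a toy C# sensor of my own devising. It watches a stream of amplitude samples and emits a timed discrete signal whenever the sample crosses a fixed threshold in either direction; a real sensory layer would contain a large population of such detectors tuned to different features.

    using System;

    // A toy elementary sensor: it emits a discrete, timestamped signal
    // whenever its input crosses a fixed threshold, i.e., on a transition.
    class TransitionSensor
    {
        private readonly double threshold;
        private bool wasAbove;

        public TransitionSensor(double threshold)
        {
            this.threshold = threshold;
        }

        // Feed one sample; a signal is emitted only when the state changes.
        public void Sense(double sample, int tick)
        {
            bool isAbove = sample >= threshold;
            if (isAbove == wasAbove) return;       // no change, no signal
            wasAbove = isAbove;
            Console.WriteLine($"t={tick}: {(isAbove ? "onset" : "offset")} signal");
        }
    }

    class Demo
    {
        static void Main()
        {
            var sensor = new TransitionSensor(0.5);
            double[] samples = { 0.1, 0.2, 0.8, 0.9, 0.3, 0.1 };
            for (int t = 0; t < samples.Length; t++)
                sensor.Sense(samples[t], t);       // prints t=2 onset, t=4 offset
        }
    }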

Signals from pattern detectors travel to sequence memory where sequences (transformations) are detected. Sequence memory is the seat of attention and of short- and long-term memory. It is also where actual object recognition occurs. An object is a top-level sequence, i.e., a branch in the sequence hierarchy. A recognition event is triggered when the number of signals arriving at a top sequence detector surpasses a preset threshold. Recognition signals from sequence memory are fed back to pattern memory. They are part of the mechanism the brain uses to deal with noisy or incomplete patterns in the sensory stream.
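
The recognition rule is simple enough to state in code. Below is a hedged C# sketch of a top sequence detector that fires a recognition event once its signal count passes a preset threshold, with the feedback path to pattern memory reduced to a single callback; the names and numbers are mine, for illustration only.

    using System;

    // A toy top-level sequence detector. It counts signals from lower
    // detectors and declares recognition once a preset threshold is
    // reached, then notifies pattern memory through a feedback callback.
    class TopSequenceDetector
    {
        private readonly int threshold;
        private readonly Action feedbackToPatternMemory;
        private int hits;

        public TopSequenceDetector(int threshold, Action feedback)
        {
            this.threshold = threshold;
            feedbackToPatternMemory = feedback;
        }

        public void OnSignal()
        {
            if (++hits < threshold) return;
            hits = 0;
            Console.WriteLine("object recognized");
            feedbackToPatternMemory();   // helps repair noisy or incomplete patterns
        }
    }

    class Demo
    {
        static void Main()
        {
            var top = new TopSequenceDetector(7,
                () => Console.WriteLine("feedback sent to pattern memory"));
            for (int i = 0; i < 7; i++)
                top.OnSignal();          // recognition fires on the 7th signal
        }
    }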

Sequence memory can also generate motor signals but that is beyond the scope of this paper. What follows is a detailed description of each of the four subsystems.

(to be continued)

Thursday, March 16, 2017

Thalamus Prediction

Concurrent Pattern Hierarchy

This is just a short post to make a quick prediction about the internal organization of the thalamus, a relatively small but complex area of the brain that is thought to serve primarily as a relay center between various sensors and the sensory cortex. Given my current understanding of the brain and intelligence, I predict that the parts of the thalamus that process sensory signals (e.g., the lateral and medial geniculate nuclei) will be found to be hierarchically organized. The function of the hierarchy is to discover small concurrent patterns in the sensory space. These are commonly called "spatial patterns" in neuroscience. I personally dislike the word "spatial" for patterns because I find it misleading: in my view, all patterns are temporal, even visual ones. Here are some of the characteristics of the thalamic pattern hierarchy as predicted by my current model:
  • The hierarchy consists of a huge number of pattern detectors organized as binary trees.
  • The bottom level of the hierarchy receives signals from sensors.
  • The hierarchy has precisely 10 levels. This means that the most complex patterns have 1024 inputs.
  • Every level in the hierarchy makes reciprocal connections with the first level of the cerebral cortex.
  • Every pattern detector receives recognition feedback signals from the first level of the cerebral cortex.
The cerebral cortex (sequence memory) can instantly stitch these elementary patterns together into much bigger entities of arbitrary complexity. A number of researchers in artificial general intelligence (AGI), such as Jeff Hawkins and Subutai Ahmad of Numenta, assume (incorrectly, in my view) that both concurrent and sequential patterns are learned and detected in the cortical columns of the cerebral cortex. In my model of the cortex, the cortical columns are used exclusively for sequence learning and detection while concurrent patterns are learned and recognized by the thalamus. A small sketch of the predicted hierarchy follows.
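
To pin down the numbers, here is a small C# sketch that builds one tree of the predicted hierarchy: binary pattern detectors stacked 10 levels deep, so that a top-level detector spans 2^10 = 1024 sensor inputs. For simplicity it builds a strict tree, although, as the edit below explains, a real node may feed any number of higher-level nodes. The construction is mine and purely illustrative; it says nothing about how the thalamus actually wires itself.

    using System;

    // Builds one tree in the predicted thalamic pattern hierarchy:
    // binary detectors stacked 10 levels deep, so the top detector
    // spans 2^10 = 1024 sensor inputs.
    class PatternDetector
    {
        public int Level;
        public int Span;                 // sensor inputs covered
        public PatternDetector Left, Right;

        public static PatternDetector Build(int level)
        {
            if (level == 0)              // level 0 stands for a raw sensor input
                return new PatternDetector { Level = 0, Span = 1 };
            return new PatternDetector
            {
                Level = level,
                Span = 1 << level,       // the span doubles at every level
                Left = Build(level - 1),
                Right = Build(level - 1)
            };
        }
    }

    class Demo
    {
        static void Main()
        {
            var top = PatternDetector.Build(10);
            Console.WriteLine($"top detector spans {top.Span} inputs");  // 1024
        }
    }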

Stay tuned.

Edit 3/16/2017, 2:42 PM:

I should have elaborated further on the binary tree analogy. I prefer to call it an inverse or upside-down binary tree. That is to say, each node (pattern detector) in the tree receives only two inputs from lower-level nodes, while each node may send output signals to any number of higher-level nodes. It is a binary tree in the sense that the number of inputs doubles every time one climbs up one level in the hierarchy: a node at level n spans 2^n sensor inputs, which is where the figure of 1024 inputs at level 10 comes from.

Saturday, January 7, 2017

Raising Money for AI Research

Smartphone Apps

I refuse to solicit or accept money from anyone to finance my research because I don't want to be indebted to or controlled by others. So I recently came up with a plan to put some of the knowledge I have acquired over the years to good use, and to do it in a way that does not reveal my hand too much. I am working on two intelligent mobile applications, described below. Let me know if you think they might be useful to you.

1. Crystal Clear Smartphone Conversations

The first app will filter out all background sounds other than the user's voice during a call. It will also repair or clean up the user's voice by filling in missing signals if necessary. The filter can be activated or deactivated at the touch of a button. Advantage: crystal-clear conversations.
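
For readers curious about the signal-processing side, here is a deliberately crude C# sketch of the simplest possible background suppressor: an energy gate that mutes frames whose short-term energy falls below a threshold. It is a stand-in of my own, not the technique the app will actually use; real voice isolation is considerably harder.

    using System;

    // A crude noise gate: frames whose average energy falls below a
    // threshold are treated as background and muted. Real voice isolation
    // must do far better, but the interface is the same: frames in,
    // cleaned frames out.
    class NoiseGate
    {
        private readonly double threshold;

        public NoiseGate(double threshold)
        {
            this.threshold = threshold;
        }

        public double[] Process(double[] frame)
        {
            double energy = 0;
            foreach (var s in frame)
                energy += s * s;
            energy /= frame.Length;
            return energy >= threshold ? frame : new double[frame.Length];  // mute
        }
    }

    class Demo
    {
        static void Main()
        {
            var gate = new NoiseGate(0.01);
            double[] voiced = { 0.4, -0.3, 0.5, -0.2 };
            double[] hiss = { 0.02, -0.01, 0.03, -0.02 };
            Console.WriteLine(gate.Process(voiced)[0]);  // 0.4  (kept)
            Console.WriteLine(gate.Process(hiss)[0]);    // 0    (muted)
        }
    }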

2. Voice-based Security

The second app will use both voice (speaker) recognition and speech recognition to eliminate passwords. It does this by asking the user to read a random word or phrase. The app can be used for unlocking the phone, accessing accounts, etc. If your voice changes over time, or if you want to give someone else access to your accounts, the app can be reset in an instant. Advantage: high security and no passwords to remember.
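
The logic of the second app is a simple challenge-response loop, sketched below in C#. The two recognizer interfaces are placeholders I made up for illustration; the actual recognition engines are the hard part and are not shown.

    using System;

    // Placeholder interfaces: the recognition engines are the hard part
    // and are not shown here.
    interface ISpeechRecognizer { string Transcribe(byte[] audio); }
    interface IVoiceVerifier { bool MatchesEnrolledVoice(byte[] audio); }

    class VoiceAuthenticator
    {
        private static readonly string[] Words =
            { "harbor", "velvet", "copper", "meadow" };

        private readonly ISpeechRecognizer speech;
        private readonly IVoiceVerifier voice;
        private readonly Random rng = new Random();

        public VoiceAuthenticator(ISpeechRecognizer s, IVoiceVerifier v)
        {
            speech = s;
            voice = v;
        }

        // Challenge-response: the user must say the right words (speech
        // recognition) in the right voice (speaker recognition).
        public bool Authenticate(Func<string, byte[]> promptUser)
        {
            string challenge = Words[rng.Next(Words.Length)] + " " +
                               Words[rng.Next(Words.Length)];
            byte[] audio = promptUser(challenge);
            bool saidIt = string.Equals(speech.Transcribe(audio), challenge,
                                        StringComparison.OrdinalIgnoreCase);
            return saidIt && voice.MatchesEnrolledVoice(audio);
        }
    }

Resetting the app, as described above, would then amount to discarding the enrolled voice profile behind IVoiceVerifier and recording a new one.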

Development

Although I think the first app has a better chance of being successful, I believe the second one is also doable. Some in the voice authentication and security business may disagree, but the human voice is very much like a fingerprint: every voice is unique in subtle ways that current technologies may not be able to capture. I use Microsoft Visual Studio and C# exclusively for programming, and I will be using the Xamarin cross-platform tools to deploy the apps on Windows Phone, iPhone and Android. I don't anticipate needing GPU coprocessing.

I will release beta-test versions as soon as they are ready. Given my schedule, I expect the first app to be ready in two or three months.

The Ultimate Goal

If either app is successful, I may venture into the hearing aid business. My plan is to generate enough funds to finance a research and development company focused on artificial intelligence and computing. I believe that the requirements of true intelligence call for a new type of computer hardware and a better way to create software. My ultimate goal (or dream) is to build a truly intelligent bipedal robot that can do all your chores around the house, such as cleaning, preparing food, babysitting the kids, doing the laundry and gardening. A tall order, I know.