Sunday, July 10, 2016

The World Is Its Own Model or Why Hubert Dreyfus Is Still Right About AI

In Memoriam: Professor Hubert Dreyfus (1929 - 2017)

Abstract

In this article, I argue that mainstream artificial intelligence is about to enter a new AI winter because, in spite of claims to the contrary, its practitioners are still using a representational approach to intelligence, also known as symbolic AI or GOFAI. This is a criticism that Hubert Dreyfus has been making for half a century, to no avail. I further argue that the best way to shed the representationalist baggage is to abandon the observer-centric approach to understanding intelligence and adopt a brain-centric approach. On this basis, I conclude that timing is the key to unlocking the secrets of intelligence.

The World Is Its Own Model

Hubert Dreyfus is a professor of philosophy at the University of California, Berkeley. Dreyfus has been the foremost critic of artificial intelligence research (What Computers Still Can't Do) since its early days, and the AI community hates him for it. Here we are, many decades later, and Dreyfus is still right. He draws on the work of the German philosopher Martin Heidegger and the French phenomenologist Maurice Merleau-Ponty, and his argument has not changed in all those years. Using Heidegger as a starting point, he argues that the brain does not create internal representations of objects in the world. The brain simply learns how to see the world directly, something Heidegger captured with the notions of presence-at-hand and readiness-to-hand. Dreyfus gave a great example of this in his paper Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian (pdf). He explained how roboticist Rodney Brooks solved the frame problem by moving away from the traditional but slow model-based approach to a non-representational one:
The year of my talk, Rodney Brooks, who had moved from Stanford to MIT, published a paper criticizing the GOFAI robots that used representations of the world and problem solving techniques to plan their movements. He reported that, based on the idea that “the best model of the world is the world itself,” he had “developed a different approach in which a mobile robot uses the world itself as its own representation – continually referring to its sensors rather than to an internal world model.” Looking back at the frame problem, he writes:
And why could my simulated robot handle it? Because it was using the world as its own model. It never referred to an internal description of the world that would quickly get out of date if anything in the real world moved.
Deep Learning's GOFAI Problem

By and large, the mainstream AI community continues to ignore Dreyfus and his favorite philosophers. Indeed, it ignores everyone else, including psychologists and neurobiologists, who are more than qualified to know a thing or two about intelligence and the brain. AI's biggest success, deep learning, is just GOFAI redux. A deep neural network is actually a rule-based expert system. AI programmers have simply found a way (gradient descent, fast computers and lots of labeled, pre-categorized data) to create the rules automatically. The rules take the form "if A then B", where A is a pattern and B is a label or symbol representing a category.
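
To make the "if A then B" picture concrete, here is a minimal sketch in Python (a toy example of my own, with made-up data and parameters, not code from any deep learning system) in which gradient descent turns labeled samples into an automatic pattern-to-label rule:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 0.3, (50, 2)),   # pattern cluster for label 0
                   rng.normal(+1, 0.3, (50, 2))])  # pattern cluster for label 1
    y = np.array([0] * 50 + [1] * 50)              # the labels (the B's)

    w, b = np.zeros(2), 0.0
    for _ in range(200):                           # gradient descent on logistic loss
        p = 1 / (1 + np.exp(-(X @ w + b)))         # predicted P(label = 1)
        w -= 0.1 * (X.T @ (p - y)) / len(y)
        b -= 0.1 * float(np.mean(p - y))

    def rule(pattern):
        # The learned mapping, read as a rule: if pattern A, then label B.
        return int(pattern @ w + b > 0)

    print(rule(np.array([-1.0, -0.8])))  # 0
    print(rule(np.array([0.9, 1.1])))    # 1

The training loop automates what a knowledge engineer once did by hand: it distills the labeled samples into a fixed pattern-to-label mapping, which is the sense in which I call the result an expert system.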

The problem with expert systems is that they are brittle. Presented with a situation for which there is no rule, they fail catastrophically. This is what happened back in May to one of Tesla Motors' cars while on autopilot: the neural network failed to recognize a situation and caused a fatal accident. This is not to say that deep neural nets are bad per se. They are excellent in controlled environments, such as the factory floor, where all possible conditions are known in advance and humans are kept at a safe distance. But letting them loose in the real world is asking for trouble.

As I explain below, the AI community will never solve these problems until they abandon their GOFAI roots and their love affair with representations.

The Powerful Illusion of Representations

The hardest thing for AI experts to grasp is that the brain does not model the world. They have all sorts of arguments to justify their claim that the brain creates representations of objects in the world. They point out that fMRI scans can pinpoint areas in the brain that light up when a subject is thinking about a word or a specific object. They argue that imagination and dreams are proof that the brain creates representations. These are powerful arguments and, in hindsight, one cannot fault the AI community too much for believing in the illusion of representations. But then again, it is not as if knowledgeable thinkers such as Hubert Dreyfus have not pointed out the fallacy of their approach. Unfortunately, mainstream AI is allergic to criticism.

Why the Brain Does Not Model the World

There are many reasons. Here are a few.
  • The brain has to continually sense the world in real time in order to interact with it. These perceptions last only a short time and are mostly forgotten afterwards. If the brain had a stored (long-term) model of the world, it would only need to update the model occasionally. There are not enough neurons in the brain to store a model of the world. Besides, the brain's neurons are too slow to engage in the complex computations that an internal model would require.
  • It takes the brain a long time (years) to build a universal sensory framework that can instantly perceive an arbitrary pattern. However, when presented with a new pattern (which is almost all the time, since we rarely see the exact same thing more than once), the cortex instantly accommodates existing memory structures to see the new pattern. No new structures need to be learned. A neural network, by contrast, must be trained on many samples of the new pattern. It follows that the brain does not learn to create models of objects in the world. Rather, it learns how to sense the world by figuring out how the world works.
  • The brain should be understood as a complex sensory organ. Saying that the brain models the world is like saying that a sensor models what it senses. The brain builds a huge collection of specialized sensors that sense all sorts of phenomena in the world. The sensors are organized hierarchically, but they are just sensors (detectors) that respond directly to specific sensory phenomena. For example, we may have a high-level sensor that fires when grandma comes into view, but it is not a model of grandma (see the toy sketch after this list). Our brain cannot model anything outside of itself because our eyes do not see grandma; they just sense changes in illumination. To model something, one must have access to both a subject and an object: an artist can model something by looking at both the subject and the painting. The brain must sense things directly. It only has the signals from its senses to work with.
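
Here is a toy sketch in Python (purely illustrative; the detector names and thresholds are my own assumptions) of such a hierarchy of sensors, in which each unit merely fires on its inputs and nothing stores a model of the object:

    def edge_detector(pixels):
        # Low-level sensor: fires where neighboring intensities differ,
        # i.e., it responds to changes in illumination, not to objects.
        return [abs(a - b) > 0.5 for a, b in zip(pixels, pixels[1:])]

    def shape_detector(edge_spikes):
        # Mid-level sensor: fires when enough edge detectors fire together.
        return sum(edge_spikes) >= 2

    def grandma_detector(shape_spikes):
        # High-level sensor: fires when its input sensors fire.
        # It detects a pattern; it contains no representation of grandma.
        return all(shape_spikes)

    pixels = [0.0, 1.0, 0.0, 1.0]  # changes in illumination at the "retina"
    print(grandma_detector([shape_detector(edge_detector(pixels))]))  # True
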
To Understand the Brain, Be the Brain

The most crippling mistake that most AI researchers make is that they try to understand intelligence from the point of view of an outside observer. Rather, they should try to understand it from the point of view of the intelligence itself. They need to adopt a brain-centric approach to AI as opposed to an observer-centric approach. They should ask themselves, what does the brain have to work with? How can the brain create a model of something that it cannot see until it learns how to see it?

Once we put ourselves in the brain's shoes, so to speak, representations no longer exist because they make no sense. They simply disappear.

Timing is the Key to Unsupervised Learning

The reason that people like Yann LeCun, Quoc Le and others in the machine learning community are having such a hard time with unsupervised learning (the kind of learning that people do) is that they do not try to "see" what the brain sees. The cortex only has discrete sensory spikes to work with. It does not know or care where they come from. It just has to make sense of the spikes by figuring out how they are ordered. Here is the clincher. The only order that can be found in multiple sensory streams of discrete signals is temporal order: they are either concurrent or sequential. Timing is thus the key to unsupervised learning and everything else in intelligence.
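
A minimal sketch in Python of this two-way classification (the 10 ms concurrency window is my own assumption, chosen in the spirit of the millisecond-scale timing precision mentioned below):

    # Assumed tolerance for calling two spikes "concurrent": 10 ms.
    CONCURRENCY_WINDOW = 0.010

    def temporal_relation(t_a, t_b):
        # Classify two spike times (in seconds) as concurrent or sequential,
        # the only two orderings available in streams of discrete signals.
        if abs(t_a - t_b) <= CONCURRENCY_WINDOW:
            return "concurrent"
        return "A then B" if t_a < t_b else "B then A"

    print(temporal_relation(0.100, 0.104))  # concurrent
    print(temporal_relation(0.100, 0.160))  # A then B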

One only has to take a look at the center-surround design of the human retina to realize that the brain is primarily a complex timing mechanism. It may come as a surprise to some that we cannot see anything unless there is motion in the visual field. This is why the human eye continually makes tiny movements called microsaccades. Movements in the visual field generate precisely timed spikes that depend on the direction and speed of the movements. The way the brain sees is completely different from the way computer vision systems work. They are not even close.
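
Here is a toy sketch in Python (my own simplification, not a model of the retina) of the point that spikes arise only from change and that their timing carries the information:

    def spikes_from_intensity(samples, dt=0.001, threshold=0.1):
        # Emit a timestamped spike whenever the intensity at one retinal
        # location changes by more than the threshold between samples.
        spike_times = []
        for i in range(1, len(samples)):
            if abs(samples[i] - samples[i - 1]) > threshold:
                spike_times.append(i * dt)
        return spike_times

    static = [0.5] * 10              # a perfectly still visual field
    moving = [0.5] * 4 + [0.9] * 6   # an edge sweeps past at t = 4 ms

    print(spikes_from_intensity(static))  # []      -- no motion, no spikes
    print(spikes_from_intensity(moving))  # [0.004] -- a precisely timed spike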

New AI Winter in the Making

Discrete signal timing should be the main focus of AI research, in my opinion. It is very precise in the brain, on the order of milliseconds. This is something that neurobiologists and psychologists have known for decades. But the AI community thinks it knows better. It doesn't. Its members are lost in a world of their own making. Is it any wonder that their field goes from one AI winter to the next? Artificial intelligence research is entering a new winter as I write, but most AI researchers are not aware of it.

See Also

Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI
Why Deep Learning Is a Hindrance to Progress Toward True AI
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut

6 comments:

Alexander Buianov said...

Great post, thank you.

jeanpaul said...

It has been a while, Louis. Are you OK?

Louis Savain said...

Hi, jeanpaul. I'm fine. Sorry for the long hiatus. I've been busy getting up to speed with my understanding of world politics. It is a frightening world we live in. There is an evil spirit hard at work destroying our civilization. Unless a miracle happens, we are doomed.

Dr. Hank Spanko said...

Memory is unlikely to be spatially based, even for 'recording spatial images', because there is too much data.
Rather, it is temporal, because phased loops offer an unlimited number of harmonic hits.

Michael Cassady said...

The only danger to sense and a good night's sleep with the idea of "intelligence" isolated reductively by the AI enthusiasts is that they will invent the sort of thing that fits the needs of their art, then force it into being by the will to believe it's what they have made the case for by descriptive fiat—where there's a will, there's a will!

Hubert Dreyfus, under whom I studied philosophy as an undergraduate at UC Berkeley, presented an alternative account of intelligence to that of Alan Turing adopted by AI, but his approach was not objective description—what real science supposes itself to do—but was hermeneutic: a subjective exploration of phenomena aimed at characterizing the observed content of phenomenological looking in search of explanatory sufficiency. John Searle, who criticized what he called "Strong AI" (the idea that computers can think as humans do) in his Chinese Room argument, opposed Dreyfus's argument for being only literary and inherently observer-dependent and, thus, not an objective account. Dreyfus's account does not satisfy me either, for the same reason as Searle's, i.e., not being an objective account. But I don't think Searle helps to clarify the issues with finding an adequate objective account, since his view depends projectively upon an intellectual commitment to metaphysical realism. What this means is that we must simply accept the descriptive fact that consciousness, and, therefore, intelligence, is a feature of living material brains as we observe them, and accept to keep searching for an adequate means of explanation, have the patience to wait until that happens, and avoid trying to force it into the hypothetico-deductive box much loved by empirical theorists. Buying into realism is, in my view, too great a price to pay to get intelligence into an objective formulation. There is another way.

The right question will aim at probing the utility and pertinence of the idea of "objectivity" as found in our propositional subject-object thinking. Wittgenstein and Barry Stroud are the best resources for going down this road. The skeptic, as viewed by them, takes off from the fact that we do do something we call intelligence, and the right way to go is to keep poking and probing the thing until we can find some sort of sufficient grounds for giving it a believable description. What we discover by giving our subject some rough knocking about is that the objective focus we arrive at by looking at psychological aspects of the behaviors of others, and, by inference, what we do in the same regard ourselves, is that objects are worldly content that take form as existent particulars through the individuating acts of particular persons. Science can carry on observing instances of intelligence in objective form and selecting whatever suits the search for law-like regularities without doing any damage, unless it hopes for more than a selective object idealized for its third-person descriptive needs. However, objects as the content of a shared reality made up of agreed propositional descriptions continue to deliver to each of us particular objects as related to ourselves as object-persons, available under both first-person and third-person aspectual presentations. Objects are quite lacking in mysteriousness and are not puzzles unless we yearn to make all particulars, even those we know to exist only in individuated form, have a place apart from the messy thinking matter of mortals in some universal space-time envelope where there is no point of view and none of the creepy feeling stuff that comes from subjective concerns about an objective world as an environment we are concerned with in everything we do.

Peter (stn1986@hotmail.com) said...

Your work inspired me to buy "What Computers Still Can't Do" by Hubert L. Dreyfus - very refreshing. Thanks, Louis