Saturday, June 21, 2008

Parallel Computing: The End of the Turing Madness. Part II

Part I

Don’t Read My Stuff

I want to preface this post by pointing out that I don’t blog for the benefit of the computer science community. As dear as Project COSA is to me, I would rather see it fail if its success must depend on academic approval. If what I write about Alan Turing offends you, then don’t read my stuff. It’s not meant for you. And don’t send me emails to tell me that you don't like it because I don’t care. If I am a kook in your opinion, you are an idiot in mine. That makes us even. I just thought that I would get this little misunderstanding out of the way so I can continue to enjoy my freedom of expression on the Internet. It’s a beautiful thing. To the rest of you who are interested in what I have to say about Turing, I apologize for the delay in posting the second part of this article.

Turing Is the Problem, Not the Solution

In part I, I wrote that Alan Turing is a naked emperor. Consider that the computer industry is struggling with not just one but three major crises [note: there is also a fourth crisis having to do with memory bandwidth]. The software reliability and productivity crises have been around since the sixties. The parallel programming crisis has just recently begun to wreak havoc. It has gotten to the point where the multicore vendors are starting to panic. Turing’s ideas on computation are obviously not helping; otherwise there would be no crises. My thesis, which I defend below, is that they are, in fact, the cause of the industry’s problems, not the solution. What is needed is a new computing model, one that is the very opposite of what Turing proposed, that is, one that models both parallel and sequential processes from the start.


UBM vs. UTM

I have touched on this before in my seminal work on software reliability but I would like to elaborate on it a little to make my point. The computing model that I am proposing is based on an idealized machine that I call the Universal Behaving Machine or UBM for short. It assumes that a computer is a behaving machine that senses and reacts to changes in its environment.
Please read the paragraph on The Hidden Nature of Computing before continuing. Below, I contrast several characteristics of the UBM with those of the UTM. The Turing machine does not provide for parallelism, but I will be gracious and use multithreading as the Turing version of parallel processing.

Although multithreading is not part of the UTM, this is the mechanism that multicore processor vendors have adopted as their parallel processing model. Turing’s supporters will argue that parallelism can be simulated in a UTM without threads and they are correct. However, as I explain below, a simulation does not change the sequential nature of the Turing computing model. For an explanation of “non-algorithmic”, see my recent blog entry on the subject.
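
To make the contrast a little more concrete, here is a rough sketch in Python. It is purely illustrative and is not COSA itself; the names are made up. It shows the signal-driven, reactive style I have in mind for the UBM, where a cell reacts only when something it senses actually changes, as opposed to being invoked step by step inside a sequential program.

```python
# Purely illustrative sketch (not COSA itself): a reactive cell fires only
# when the value it senses changes, instead of being invoked step by step
# inside a sequential program.

class Cell:
    """A sensor/effector pair: it senses a change and reacts with an effect."""
    def __init__(self, name, effect):
        self.name = name
        self.last = None      # last sensed value
        self.effect = effect  # reaction to perform when a change is sensed

    def sense(self, value):
        if value != self.last:            # only a change triggers a reaction
            self.last = value
            self.effect(self.name, value)

def announce(name, value):
    print(f"{name} reacted to new value: {value}")

# Two independent cells; on a truly parallel machine each would react
# concurrently whenever its own input changes.
temperature = Cell("temperature", announce)
pressure = Cell("pressure", announce)

for reading in [20, 20, 21]:              # the repeated reading causes no reaction
    temperature.sense(reading)
pressure.sense(1013)
```

The point of the sketch is that nothing here is "called" in a prescribed order; a cell does nothing at all until a change in its environment shows up at its input.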

Simulation Does Not a Computing Model Make

True universality requires that a computing model handle both serial and parallel computations and events by definition. In other words, both types of computation should be inherent parts of the model. One of the arguments that I invariably get from Turing’s supporters is that the Turing machine is a universal computing model because you can use it to simulate anything, even a parallel computer. This is a rather lame argument because observing that a Turing machine can be used to simulate a parallel computer does not magically transform it into a parallel computing model. This would be like saying that, since a Turing machine can be used to simulate a video game or a chess computer, it is therefore a video game or a chess-computing model. That is absurd. Simulation does not a model make. Whenever one uses one mechanism to simulate another, one climbs to a new level of abstraction, a new model, one that does not exist at the lower level.

To Model or to Simulate, That Is the Question

The Turing machine is a model for a mechanism that executes a sequence of instructions. It does not model a parallel computer, or a tic-tac-toe program or a spreadsheet or anything else, even if it can be used to simulate those applications. The simulation exists only in the mind of the modeler, not in the underlying mechanism. The fallacy of universality is even more transparent when one realizes that a true parallel machine like the UBM does not have to simulate a Turing machine the way the UTM has to simulate the UBM. The reason is that the UBM can duplicate any computation that a Turing machine can perform. In other words, the UTM is an inherent part of the UBM but the opposite is not true.

The Beginning of the End of the Turing Madness

Thomas Kuhn wrote in his book, “The Structure of Scientific Revolutions” that scientific progress occurs through revolutions or paradigm shifts. Max Planck, himself a scientist, said that "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Last but not least, Paul Feyerabend wrote the following in Against Method: “… the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down to size and give them a more modest position in society.”

I think that all the major problems of the computer industry can be attributed to the elitism and intransigence that are rampant in the scientific community. The peer review system is partly to blame. It is a control mechanism that keeps outsiders at bay. As such, it limits the size of the meme pool in much the same way that incest limits the size of the gene pool in a closed community. Sooner or later, the system engenders monstrous absurdities, but the community is blind to them. The Turing machine is a case in point. The point that I am getting at is that it is time to eradicate the Turing cult for the sake of progress in computer science. With the parallel programming crisis in full swing, the computer industry desperately needs a Kuhnian revolution. There is no stopping it. Many reactionaries will fight it tooth and nail, but they will fall by the wayside. We are witnessing the beginning of the end of the Turing madness. I say, good riddance.

See Also:

Jeff Raskin - Computers Are Not Turing Machines (pdf)
Half a Century of Crappy Computing
How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
How to Construct 100% Bug-Free Software

7 comments:

James said...

Quick question for Louis or anyone who can answer. Is Forth non-deterministic?

Louis Savain said...

James,

Forth is a procedural language. Invoking a Forth word is similar to calling a function (subroutine). A function call is what is known as synchronous messaging. It forces the caller to wait. This creates all sorts of temporal uncertainties. Asynchronous messaging or signaling is one of the prerequisites to a deterministic programming model. In my opinion, it is impossible to have a deterministic software model unless it is a parallel software model.
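
To illustrate the difference, here is a small sketch in Python (not Forth, and not COSA's actual mechanism; the names are hypothetical). A synchronous call blocks the caller until the callee returns, while an asynchronous signal is simply dropped into a queue and handled by a separate reactor, so the sender never waits:

```python
import queue
import threading
import time

# Synchronous messaging: the caller invokes the routine directly and is
# forced to wait for it to finish before doing anything else.
def slow_routine():
    time.sleep(0.5)              # simulate work of unknown duration
    return 42

def synchronous_caller():
    result = slow_routine()      # blocked here; timing depends on the callee
    print("sync result:", result)

# Asynchronous signaling: the sender drops a signal into a queue and moves
# on immediately; a separate reactor thread picks the signal up later.
signals = queue.Queue()

def reactor():
    while True:
        signal = signals.get()
        if signal is None:       # sentinel: shut the reactor down
            break
        print("reacting to signal:", signal)

def asynchronous_sender():
    signals.put("sensor-changed")    # send and keep going; no waiting
    print("sender continues without blocking")

synchronous_caller()
worker = threading.Thread(target=reactor)
worker.start()
asynchronous_sender()
signals.put(None)
worker.join()
```

In the synchronous case the caller's timing is at the mercy of whatever it calls; in the asynchronous case the sender's behavior does not depend on how long the receiver takes to react.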

Having said that, I must say that, even though I don't like computer languages in general (for reasons that most of my readers know), I had the most fun with Forth. It was the first high-level computer language that I learned. Forth is a powerful and extensible (user-definable) language. COSA is similarly extensible since the user can create new modules from existing ones. It is, in that sense, Forth-like.

James said...

Thanks for the quick response.

John L said...

(sorry - left this comment on an old post).

Jaron Lanier and Marvin Minsky are also advocating a complete re-think of our computing paradigms. They say 2D / Boolean computing is a dead-end.

Are you and they thinking along the same lines? Lanier is advocating for "phenotropic" or biology-like computing. Minsky is thinking more along "emotional machines" seeking to emulate more of how human beings process data.

Louis Savain said...

John,

I like some of Lanier's ideas. I wrote a short news article about one of his ideas a couple of years ago. I can't say I am too fond of Minsky's work. Minsky is from last century's school of symbolic artificial intelligence, which I believe to be complete crackpottery.

Having said that, even though I've seen the term before, I don't think I understand what is meant by 2-D Boolean computing. I see the Turing computer model as a 1-D computing model. By contrast, I see Boolean circuits as being 3-dimensional, kind of like the brain. Please enlighten me as to the meaning of 2-D Boolean computing or point me to a reference or a link. I'll be glad to take a look.

Louis Savain said...

Please read the message posted by Peter Wegner of Brown University and my response in the comments section of Part I of this article. Peter surprised me with some amazing links to his and Dina Goldin's work on non-algorithmic computing at Brown and his refutation of the strong Church-Turing thesis.

This is amazing stuff. At last, I have found intelligent life in academia! This calls for a celebration. Good Scotch Whiskey and aged Mexican Tequila, por favor. Just in time for the July 4th barbecue, no less.

John L said...

Louis, I've always understood computers as Boolean processors: a massive assembly of yes/no objects (gates), on/off, 0/1, go/nogo. Not sure if "two dimensional" is the right metaphor, but Boolean logic is a two-state idea -- only two possible values can exist. All "standard" computing hardware is based on this idea.

Software written for Boolean platforms can emulate linear functions, but the fundamental computational process remains strictly binary. This is a disconnect - our s/w is severely limited by our h/w. In this sense, I agree with you. Our inherited topology is a dead-end.

I envision "ideal computing" looking more like the brain - as an analog or neural process rather than the "relay-like" Boolean logic gates we use today. I know some are working towards this topology.

And speaking of Turing, I would argue that his AI test is far too simplistic. When AI can match high-level human creativity (say, write a college level text that scholars review as ground-breaking), only then will true AI have been reached.