Tuesday, October 30, 2007

Half a Century of Crappy Computing

Decades of Deception and Disillusion

I remember being elated back in the early 80s when event-driven programming became popular. At the time, I took it as a hopeful sign that the computer industry was finally beginning to see the light and that it would not be long before pure event-driven, reactive programming was embraced as the universal programming model. Boy, was I wrong! I totally underestimated the capacity of computer geeks to deceive themselves and everyone else around them about their business. Instead of asynchronous events and signals, we got more synchronous function calls; and instead of elementary reactions, we got more functions and methods. The unified approach to software construction that I was eagerly hoping for never materialized. In its place, we got inundated with a flood of hopelessly flawed programming languages, operating systems and processor architectures, a sure sign of an immature discipline.

The Geek Pantheon

Not once did anybody in academia stop to consider that the 150-year-old algorithmic approach to computing might be flawed. On the contrary, they loved it. Academics like Fred Brooks decreed to the world that the reliability problem is unsolvable and everybody worshipped the ground he walked on. Alan Turing was elevated to the status of a deity and the Turing machine became the de facto computing model. As a result, the true nature of computing has remained hidden from generations of programmers and processor architects. Unreliable software was accepted as the norm. Needless to say, with all this crap going on, I quickly became disillusioned with computer science. I knew instinctively what had to be done but the industry was and still is under the firm political control of a bunch of old computer geeks. And, as we all know, computer geeks believe and have managed to convince everyone that they are the smartest human beings on earth. Their wisdom and knowledge must not be questioned. The price [pdf], of course, has been staggering.

In Their Faces

What really bothers me about computer scientists is that the solution to the parallel programming and reliability problems has been in their faces from the beginning. We have been using it to emulate parallelism in such applications as neural networks, cellular automata, simulations, VHDL, Verilog, video games, etc. It is a change-based or event-driven model. Essentially, you have a global loop and two buffers (A and B) that are used to contain the objects to be processed in parallel. While one buffer (A) is being processed, the other buffer (B) is filled with the objects that will be processed in the next cycle. As soon as all the objects in buffer A are processed, the two buffers are swapped and the cycle repeats. Two buffers are used in order to prevent the signal race conditions that would otherwise occur. Notice that there is no need for threads, which means that all the problems normally associated with thread-based programming are non-existent. What could be simpler? Unfortunately, all the brilliant computer savants in academia and industry were and still are collectively blind to it. How could they not be? They are all busy studying the subtleties of Universal Turing Machines and comparing notes.
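
For the sake of concreteness, here is a bare-bones sketch of that loop in Python. The names (Cell, fire, run) are mine and purely illustrative; in a real system the objects would be far finer-grained than Python objects, but the two-buffer mechanics are the same:

```python
# Bare-bones sketch of the two-buffer, change-based loop described above.
# All names here are illustrative only; they are not part of any real system.

class Cell:
    """An object processed during a cycle; processing it may schedule
    other cells for the next cycle (i.e., send them a signal)."""
    def __init__(self, name, targets=None):
        self.name = name
        self.targets = targets or []          # cells this cell signals

    def fire(self, next_buffer):
        print(f"processing {self.name}")      # stand-in for the cell's real work
        next_buffer.extend(self.targets)      # signals take effect NEXT cycle

def run(initial_cells, cycles=3):
    buffer_a = list(initial_cells)            # cells processed this cycle
    buffer_b = []                             # cells scheduled for the next cycle
    for _ in range(cycles):
        for cell in buffer_a:                 # process everything in A...
            cell.fire(buffer_b)               # ...which only ever fills B
        buffer_a, buffer_b = buffer_b, []     # swap the buffers and repeat

# Example: A signals B and C; B signals C.
c = Cell("C")
b = Cell("B", targets=[c])
a = Cell("A", targets=[b, c])
run([a])
```

Because a cell can only ever schedule work into the buffer that is not currently being processed, there is nothing to race against, which is the whole point of using two buffers.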

We Must Reinvent the Computer

I am what you would call a purist when it comes to event-driven programming. In my opinion, everything that happens in a computer program should be event-driven, down to the instruction level. This is absolutely essential to reliability because it makes it possible to globally enforce temporal determinism. As seen above, simulating parallelism with a single-core processor is not rocket science. What needs to be done is to apply this model down to the individual instruction level. Unfortunately, programs would be too slow at that level because current processors are designed for the algorithmic model. This means that we must reinvent the computer. We must design new single-core and multicore processor architectures to directly emulate fine-grained, signal-driven, deterministic parallelism. There is no getting around it.
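
To make "event-driven down to the instruction level" a little more concrete, here is a rough Python illustration in the same style as the sketch above. This is emphatically not the actual COSA instruction set, just my way of showing that even an addition can be a cell that does nothing until it is signaled and then signals whatever is wired to its output:

```python
# Rough illustration only; the real cell repertoire is defined in the COSA
# articles. The point: an instruction is a cell that fires when signaled
# and then signals its successors, driven by the same two-buffer loop.

class AddCell:
    def __init__(self, src1, src2, dest, successors=None):
        self.src1, self.src2, self.dest = src1, src2, dest
        self.successors = successors or []    # cells wired to this cell's output

    def fire(self, memory, next_buffer):
        memory[self.dest] = memory[self.src1] + memory[self.src2]
        next_buffer.extend(self.successors)   # signal the downstream cells

def step(current_buffer, memory):
    """One pass over the current buffer; returns the buffer for the next cycle."""
    next_buffer = []
    for cell in current_buffer:
        cell.fire(memory, next_buffer)
    return next_buffer

# Example: compute c = a + b, then d = c + c, purely by signal propagation.
memory = {"a": 2, "b": 3, "c": 0, "d": 0}
second = AddCell("c", "c", "d")
first = AddCell("a", "b", "c", successors=[second])
buf = [first]
while buf:
    buf = step(buf, memory)
print(memory)   # {'a': 2, 'b': 3, 'c': 5, 'd': 10}
```

Emulating this cell by cell in software on today's processors is exactly what kills performance, which is why the emulation belongs in silicon.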

Easy to Program and Understand

A pure event-driven software model lends itself well to fine-grained parallelism and graphical programming. The reason is that an event is really a signal that travels from one object to another. As every logic circuit designer knows, diagrams are ideally suited to the depiction of signal flow between objects. Diagrams are much easier to understand than textual code, especially when the code is spread across multiple pages. For a graphical example of a fine-grained parallel component, see Software Composition in COSA.

Computer geeks often write to argue that it is easier and faster to write keywords like ‘while’, ‘+’, ‘-’ and ‘=’ than it is to click and drag an icon. To that I say, phooey! The real beauty of event-driven reactive programming is that it makes it easy to create and use plug-compatible components. Once you’ve built a comprehensive collection of low-level components, there is no longer a need to create new ones. Programming will quickly become entirely high-level and all programs will be built entirely from existing components. Just drag’m and drop’m. This is the reason that I have been saying that Jeff Han’s multi-touch screen interface technology will play a major role in the future of parallel programming. Programming for the masses!

Too Many Ass Kissers

I have often wondered what it would take to put an end to decades of crappy computing. Reason and logic do not seem to be sufficient. I now realize that the answer is quite simple. Most people are followers, or more accurately, to use the vernacular, they are ass kissers. They never question authority. They just want to belong to the group. What it will take to change computing, in my opinion, is for an intelligent and capable minority to stop kissing ass and do the right thing. That is all. In this light, I am reminded of the following quote attributed to Mark Twain:

“Whenever you find that you are on the side of the majority, it is time to pause and reflect.”

To that I would add that it is also time to ask oneself, why am I kissing somebody's ass just because everybody else is doing it? My point here is that there are just too many gutless ass kissers in the geek community. What the computer industry needs is a few people with backbones. As always, I tell it like I see it.

See Also:

How to Solve the Parallel Programming Crisis
Why Parallel Programming Is So Hard
Parallel Computing: Why the Future Is Non-Algorithmic
UC Berkeley's Edward Lee: A Breath of Fresh Air
Why I Hate All Computer Programming Languages
The COSA Saga
Transforming the TILE64 into a Kick-Ass Parallel Machine
COSA: A New Kind of Programming
Why Software Is Bad and What We Can Do to Fix It
Parallel Computing: Both CPU and GPU Are Doomed

12 comments:

Jamin said...

I've been reading your web pages for a while and this seems to be the only place I can find that talks about questions I've had with computer science, namely:

1) Why is it that processing should be done one thing at a time? We know that physical systems in the real world operate in parallel, say like a brain with interconnected cells. Why not start with that rather than doing only one thing at a time (à la the Turing model)? Yes, you may be able to simulate any parallel system with a sequential one, but is that really efficient or necessary?

2) Is there even a science of parallel network signal processing? Computer science and programming seem to start out by assuming a sequential CPU and instructions rather than parallel signal processing. It is very interesting to me to see that this is where you are starting in your COSA model.

3) When you look at a modern electronic system, you hook up parts in parallel (say resistors, capacitors, diodes) and think that way until you connect a processor, when all of a sudden you start thinking in terms of instructions and sequential clocks; inside these processors is where all the threading and complicated one-at-a-time solutions are employed. Why this bias? Why not make everything networked to begin with?

4) For me it does not seem necessary for computer programs to be written with text, like you say. There is so much room for error in writing a page of text to be compiled. A graphical interface that constrains your options, whether through a grammar or some other structure, would be much preferred. The computer should be able to provide the syntax automatically, rather than requiring the user to reconstruct it from memory. It is like preferring menu options in Windows to having to remember DOS commands.

In any case, I enjoy reading your pages and hope it stimulates more discussion in this area.

Louis Savain said...

Jamin,

Thanks for your comments. I think the problem with computers started with mathematicians. Both Babbage and Lady Ada were mathematicians. All they wanted to do was to solve mathematical functions using sequential algorithms. That's where the bias comes from. You and I, on the other hand, want to simulate nature and nature, as you pointed out, is parallel.

Lately, the mathematicians have redoubled their efforts to retain their iron grip on computing. The latest trend is to abandon object-oriented programming and replace it with functional programming. They are hard at work pushing FP for multicore processing because functions can be easily made to work within threads. I think this is pure folly for reasons that I don't need to go into here.

In my opinion, computing is behaving, which means that it is 1/4 sensing, 1/4 effecting, and 1/2 signal processing.

Greg said...

Where does COSA fit between compiling-to-FPGA and Functional Reactive Programming?

Louis Savain said...

Greg wrote: "Where does COSA fit between compiling-to-FPGA and Functional Reactive Programming?"

I think functional reactive programming is nonsense. It is an oxymoron. It is an attempt by the FP crowd to migrate FP from traditional function-oriented applications into real-time, mission-critical applications. It will not work since functions are inherently non-deterministic. COSA, by contrast, is reactive and deterministic down to the instruction level.

I haven't looked closely at how well-suited COSA is to FPGA programming but, at first glance, given that a COSA program is signal-driven and structured like a logic circuit, the similarity between the two is unmistakable.

fche said...

If you're promoting visual programming notations as a panacea, have you wondered why visual programming systems such as LabVIEW are not implemented in terms of themselves?

Louis Savain said...

fche wrote:

If you're promoting visual programming notations as a panacea, have you wondered why visual programming systems such as LabVIEW are not implemented in terms of themselves?

Good question. The answer is, yes I have. Thanks for asking. For one thing, LabVIEW is a dataflow system. The COSA model, by contrast, espouses a control flow approach to programming, which is not the same thing. Second, unlike COSA, LabVIEW does not have instruction-level objects from which all high-level objects are created.

I am not knocking LabVIEW. It works very well for what it was designed for, but it was not meant to be a universal software model. Besides, COSA is more than just a software model. It is also a computing model, that is to say, it requires that the CPU itself be reinvented to support the model.

So yes, a COSA OS and all of its development tools can and will be implemented using COSA.

Adam said...

It's easy to blog about something. But why don't you do it? Design an architecture and write a simulator. Show us a calculator, a web server or an MS Paint, written in your event-driven model. And don't worry about how slow it is or how limited the functionality is. I would very much like to see your results.

J said...

Temporal consistency?

Something about clock skew and signal propagation is bothering me. How much of the system has to wait on the same clock signal?

Async CPUs died because of the complexity of managing the clock.

I think what you say has merit -- but clock processing is the issue.

Louis Savain said...

j,

I am not sure which clock you are referring to. Is it the virtual system-wide clock or the hardware clock?

In the COSA model, the virtual clock advances as soon as all the cells (instructions) in the current buffer are finished. So individual cores can be clockless if the technology is there. One of the nice things about processing an instruction buffer is that the instructions do not have to be processed in any particular order. So the hardware is free to schedule and group them in such a way as to optimize the use of core registers and caches for the best possible performance.
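
To illustrate the point, here is a small sketch in the style of the examples in the main post (illustrative names only). Because a cell's signals only populate the next buffer, the cells in the current buffer can be handled in any order, or handed out to different cores. The one assumption, which the COSA construction rules are meant to enforce rather than this sketch, is that cells fired in the same tick do not read and write the same data:

```python
import random

# Order independence within one virtual tick: signals land in the NEXT
# buffer, so the current buffer can be processed in any order or split
# across cores. Assumed (by the model, not checked here): cells fired in
# the same tick do not read and write the same memory locations.

def tick(current_buffer, memory):
    next_buffer = []
    random.shuffle(current_buffer)         # any order will do within a tick
    for cell in current_buffer:
        cell.fire(memory, next_buffer)     # e.g., the AddCell sketched earlier
    return next_buffer                     # swapping buffers = the clock advances
```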

Every once in a while, an individual core may have to wait for one or more other cores to finish processing, but load balancing can be optimized so that the wait period is insignificant. The performance hit is flat, i.e., it does not grow as the number of cores increases.

Read Parallel Computing: Why the Future is Synchronous for an explanation of load balancing in COSA. I hope this answers your question.

PS. By the way, you guys at JPM should be careful how you invest your multicore money. Don't let Intel, AMD and the others pull the wool over your eyes. Don't believe the hype. Multithreading is a disaster waiting to happen. :-)

martin said...

I'm not sure I see the relationship between the event-based model and graphical programming. They are both interesting concepts with a lot of merit (and a number of problems, but that's for a different post), but I just don't see how they are connected.

Louis Savain said...

martin wrote:

I'm not sure I see the relationship between the event-based model and graphical programming. They are both interesting concepts with a lot of merit (and a number of problems, but that's for a different post), but I just don't see how they are connected.

Well, consider that logic circuits are also signal-based. Logic diagrams are a tried and tested method of representing and describing logic circuits. Since a COSA program is likewise a network of objects connected by signals, the same kind of diagram is a natural way to represent and compose it.

nonzero said...

There are some innovative approaches to asynchronous parallel computing, from the hardware on up, popping up in academia. In particular, check out this page of resources: http://www.cba.mit.edu/events/08.04.ASC/