Monday, February 22, 2010

Parallel Computing: Why the Future Is Non-Algorithmic (repost)

(Note: This is a repost of a previous article. I am still in my bash-computer-scientists mode. Read the paragraph titled "Don't Trust Your Dog to Guard Your Lunch" below.)

Single Threading Considered Harmful

There has been a lot of talk lately about how the use of multiple concurrent threads is considered harmful by a growing number of experts. I think the problem is much deeper than that. What many fail to realize is that multithreading is the direct evolutionary outcome of single threading. Whether running singly or concurrently with other threads, a thread is still a thread. In my writings on the software crisis, I argue that the thread concept is the root cause of every ill that ails computing, from the chronic problems of unreliability and low productivity to the current parallel programming crisis. Obviously, if a single thread is bad, multiple concurrent threads will make things worse. Fortunately, there is a way to design and program computers that does not involve the use of threads at all.

Algorithmic vs. Non-Algorithmic Computing Model

A thread is an algorithm, i.e., a one-dimensional sequence of operations to be executed one at a time. Even though the execution order of the operations is implicitly specified by their position in the sequence, it pays to view a program as a collection of communicating elements or objects. Immediately after performing its operation, an object sends a signal to its successor in the sequence saying, “I am done; now it’s your turn.” As seen in the figure below, an element in a thread can have only one predecessor and one successor; in other words, only one element can be executed at a time. The arrow represents the direction of signal flow.
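
To make this concrete, here is a minimal sketch in Python (my own illustration, not anything from COSA) of a thread viewed as a chain of communicating elements. Each element performs its operation and then signals its single successor, so only one element is ever active at a time:

    # Minimal illustration: a "thread" as a chain of elements, each with
    # exactly one successor. Execution is strictly one element at a time.
    class ChainElement:
        def __init__(self, operation, successor=None):
            self.operation = operation      # the work this element performs
            self.successor = successor      # exactly one successor, or None at the end

        def signal(self):
            self.operation()                # do the work
            if self.successor is not None:
                self.successor.signal()     # "I am done; now it's your turn"

    # Build a three-element chain and kick it off at the head.
    third = ChainElement(lambda: print("op 3"))
    second = ChainElement(lambda: print("op 2"), successor=third)
    first = ChainElement(lambda: print("op 1"), successor=second)
    first.signal()                          # prints op 1, op 2, op 3, strictly in order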

In a non-algorithmic program, by contrast, there is no limit to the number of predecessors or successors that an element can have. A non-algorithmic program is inherently parallel. As seen below, signal flow is multi-dimensional and any number of elements can be processed at the same time.

Note the similarity to a neural network. The interactive nature of a neural network is clearly non-algorithmic, since sensory (i.e., non-algorithmically obtained) signals can be inserted into the program while it is running. In other words, a non-algorithmic program is a reactive system.

Note also that all the elements (operations) in a stable non-algorithmic software system must have equal durations based on a virtual system-wide clock; otherwise signal timing would quickly get out of step and result in failure. Deterministic execution order, also known as synchronous processing, is absolutely essential to reliability. The figure below is a graphical example of a small parallel program composed using COSA objects. The fact that a non-algorithmic program looks like a logic circuit is no accident: logic circuits are themselves non-algorithmic systems.
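
For readers who prefer code to figures, here is a small structural sketch in Python (again my own illustration, not actual COSA code) of such a signal graph. Cells may have any number of predecessors and successors, every cell is assumed to take exactly one cycle to fire, and the cell requiring two inputs behaves like an AND gate, which is why the whole thing reads like a gate-level circuit description:

    # Illustrative sketch only: a tiny signal graph with fan-in and fan-out,
    # the non-algorithmic counterpart of the single chain shown earlier.
    class Cell:
        def __init__(self, name, required_inputs=1):
            self.name = name
            self.required_inputs = required_inputs  # fan-in: signals needed to fire
            self.successors = []                    # fan-out: any number of targets

        def connect(self, *cells):
            self.successors.extend(cells)

    sensor_a = Cell("sensor_a")
    sensor_b = Cell("sensor_b")
    both = Cell("both", required_inputs=2)   # fires only when both inputs arrive
    alarm = Cell("alarm")

    sensor_a.connect(both)                   # two predecessors converge on "both"
    sensor_b.connect(both)
    both.connect(alarm)                      # and the signal fans out again

    # sensor_a and sensor_b can fire during the same cycle; "both" fires one
    # cycle later, once it has received its two required signals, and "alarm"
    # fires the cycle after that.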

No Two Ways About It

The non-algorithmic model of computing that I propose is inherently parallel, synchronous and reactive. I have argued in the past, and I continue to argue, that it is the solution to all the major problems that currently afflict the computer industry. There is only one way to implement this model in a von Neumann computer. As I have said repeatedly elsewhere, it is not rocket science. Essentially, it requires a collection of linked elements (or objects), two buffers and a loop mechanism. While the objects in one buffer are being processed, the other buffer is filled with the objects to be processed during the next cycle. Two buffers are used in order to prevent signal race conditions. Programmers have been using this technique to simulate parallelism for ages, in such well-known applications as neural networks, cellular automata, simulations, video games, and VHDL simulators. And it is all done without threads, mind you.

What is needed in order to turn this technique into a parallel programming model is to apply it at the instruction level. However, doing so in software would be too slow. This is the reason that the two buffers and the loop mechanism should ideally reside within the processor and be managed by on-chip circuitry. The underlying process should be transparent to the programmer, who should not have to care whether the processor is single-core or multicore. Below is a block diagram for a single-core non-algorithmic processor.
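
To show what the two-buffer technique looks like in code, here is a rough Python sketch (purely illustrative, with names of my own invention; as noted above, a real implementation would live in on-chip circuitry, not software). Signals received during the current cycle are collected in a second buffer and processed only in the next cycle, so every cell sees a consistent snapshot of the system and race conditions cannot arise:

    # Rough sketch of the two-buffer cycle loop described above.
    class Cell:
        def __init__(self, name, successors=None):
            self.name = name
            self.successors = successors or []

        def process(self):
            print("cycle work:", self.name)
            return self.successors                # signals to deliver next cycle

    def run(start_cells, max_cycles):
        current = list(start_cells)               # buffer being processed this cycle
        for cycle in range(max_cycles):
            next_buffer = []                      # buffer filled for the next cycle
            for cell in current:
                next_buffer.extend(cell.process())
            # A cell fires at most once per cycle, even with several predecessors.
            current = list(dict.fromkeys(next_buffer))
            if not current:
                break                             # nothing left to process

    # Example: "root" fans out to "a" and "b", which run in the same cycle
    # and then converge on "c" in the cycle after that.
    c = Cell("c")
    a, b = Cell("a", [c]), Cell("b", [c])
    root = Cell("root", [a, b])
    run([root], max_cycles=4)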
Adding more cores to the processor does not affect existing non-algorithmic programs; they should automatically run faster, the speedup depending on the number of objects available for parallel processing in each cycle. Indeed, the application developer should not have to think about cores at all, other than as a way to increase performance. Using the non-algorithmic software model, it is possible to design an auto-scalable, self-balancing multicore processor that implements fine-grained, deterministic parallelism and can handle anything you throw at it. There is no reason to have one type of processor for graphics and another for general-purpose programs. One processor should do everything with equal ease. For a more detailed description of the non-algorithmic software model, take a look at Project COSA.
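
As a rough illustration of why extra cores help automatically under this model: within a cycle the cells in the current buffer are independent of one another, so the buffer can simply be split across however many cores happen to be present, and the cycle boundary itself is the synchronization point. The sketch below replaces the inner loop of the run() function from the previous sketch (Python threads stand in for cores here, which is purely illustrative; the real scheduling would be done by the processor):

    from concurrent.futures import ThreadPoolExecutor

    def run_cycle(current_cells, num_cores):
        # Each cycle is an independent map over the ready cells; the result
        # is the buffer of cells to be processed during the next cycle.
        next_buffer = []
        with ThreadPoolExecutor(max_workers=num_cores) as pool:
            for successors in pool.map(lambda cell: cell.process(), current_cells):
                next_buffer.extend(successors)
        # A cell fires at most once per cycle, even with several predecessors.
        return list(dict.fromkeys(next_buffer))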

Don’t Trust Your Dog to Guard Your Lunch

The recent flurry of activity among the big players in the multicore processor industry underscores the general feeling that parallel computing has hit a major snag. Several parallel computing research labs are being privately funded at major universities. What the industry fails to understand is that it is the academic community that got it into this mess in the first place. British mathematician Charles Babbage introduced algorithmic computing to the world with the design of the Analytical Engine more than 150 years ago. Sure, Babbage was a genius, but parallel programming was the furthest thing from his mind. One would think that after all this time, computer academics would have realized that there is something fundamentally wrong with basing software construction on the algorithm. On the contrary, the algorithm became the backbone of a new religion, with Alan Turing as the godhead and the Turing machine as the quintessential algorithmic computer. The problem is now firmly institutionalized, and computer academics will not suffer an outsider such as myself to come onto their turf and teach them the correct way to do things. That’s too bad. The fact remains that throwing money at academia in the hope of finding a solution to the parallel programming problem is like trusting your dog to guard your lunch. Bad idea. Sooner or later, something will have to give.

Conclusion

The computer industry is facing an acute crisis. In the past, revenue growth has always been tied to performance increases. Unless the industry finds a quick solution to the parallel programming problem, performance increases will slow down to a crawl and so will revenue. However, parallel programming is just one symptom of a deeper malady. The real cancer is the thread. Get rid of the thread by adopting a non-algorithmic, synchronous, reactive computing model and all the other symptoms (unreliability and low productivity) will disappear as well. In an upcoming post, I will go over the reasons that the future of parallel programming is necessarily graphical.

See Also:

How to Solve the Parallel Programming Crisis
Parallel Computing: The End of the Turing Madness
Parallel Computing: Why the Future Is Synchronous
Parallel Computing: Why the Future Is Reactive
Why Parallel Programming Is So Hard
Why I Hate All Computer Programming Languages
Parallel Programming, Math and the Curse of the Algorithm
The COSA Saga
