Sunday, April 27, 2008

Parallel Computing: Why the Future Is Reactive

Reactive vs. Non-Reactive Systems

A reactive system is one in which every stimulus (discrete change or event) triggers an immediate response within the next system cycle. That is to say, there is no latency between stimulus and response. Algorithmic software systems are only partially reactive. Even though an operation in an algorithmic sequence reacts immediately to the execution of the preceding operation, it often happens that a variable is changed in one part of the system but the change is not sensed (by calling a comparison operation) until later. In other words, in an algorithmic program, there is no consistent, deterministic causal link between a stimulus and its response.
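The latency gap described above can be illustrated with a minimal sketch (hypothetical Python, not from the original): a variable changes at one point in the sequence, but the change is only sensed when a later comparison operation happens to run.

```python
# Hypothetical sketch: in an algorithmic program, a change to a
# variable is not sensed until some later operation compares it.

alarm_triggered = False
log = []

def sensor_update():
    """Stimulus: a variable changes here..."""
    global alarm_triggered
    alarm_triggered = True
    log.append("stimulus")

def unrelated_work():
    log.append("work")

def check_alarm():
    """...but the response only happens when this check is reached."""
    if alarm_triggered:
        log.append("response")

sensor_update()   # stimulus occurs
unrelated_work()  # arbitrary latency: other operations run first
unrelated_work()
check_alarm()     # response fires several steps after the stimulus

print(log)  # ['stimulus', 'work', 'work', 'response']
```

How many operations separate the stimulus from the response depends entirely on the surrounding code, which is exactly the inconsistent causal link the paragraph above complains about.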

The End of Blind Code

Algorithmic systems place a critical burden on the programmer because he or she has to remember to manually add code (usually a call to a subroutine) to deal with a changed variable. If an application is complex or if the programmer is not familiar with the code, the probability that a modification will introduce an unforeseen side effect (bug) is much higher. Sometimes, even if the programmer remembers to add code to handle the change, it may be too late. I call blind code any portion of an application that does not get automatically and immediately notified of a relevant change in a variable.
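As a hypothetical illustration of blind code (the names and scenario are invented for this sketch), consider a dependent value that is only kept consistent if the programmer remembers to call the handler after every change:

```python
# Hypothetical example of "blind code": dependent state is only
# updated if the programmer remembers to call the handler manually.

price = 10
quantity = 3
total = price * quantity  # dependent value

def recompute_total():
    global total
    total = price * quantity

def update_price_correct(new_price):
    global price
    price = new_price
    recompute_total()  # the programmer remembered the manual call

def update_price_buggy(new_price):
    global price
    price = new_price
    # Oops: recompute_total() was forgotten. `total` is now blind
    # to the change and silently goes stale.

update_price_correct(20)
print(total)  # 60 -- correct

update_price_buggy(5)
print(total)  # still 60, but should be 15: an unforeseen side effect
```

Nothing in the buggy version fails loudly; the stale `total` simply propagates through the rest of the program.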

Problems caused by blind code are so hard to assess and can have such catastrophic effects that many system managers would rather find ways around a deficiency than modify the code, if at all possible. The cure for blind code is to adopt a reactive, non-algorithmic software model. In a reactive programming system, a change in a variable is sensed as it happens and, if necessary, a signal is broadcast to every part of the system that depends on the change. It turns out that the development tools can automatically link sensors and effectors at design time so as to eliminate blind code altogether. See Automatic Elimination of Blind Code in Project COSA for more info on the use of sensor/effector association for blind code elimination.
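The sensor/effector linkage can be approximated in a conventional language with an observer-style reactive cell. This is a hypothetical sketch of the idea, not COSA's actual mechanism: every write to the variable immediately broadcasts to all dependents registered up front.

```python
# Hypothetical sketch of a reactive cell: writing a new value
# immediately notifies every registered effector -- no blind code.

class ReactiveVar:
    def __init__(self, value):
        self._value = value
        self._effectors = []  # parts of the system that depend on this variable

    def watch(self, effector):
        """Link an effector (callback) to this sensor at 'design time'."""
        self._effectors.append(effector)

    def set(self, value):
        """Stimulus: any actual change is broadcast immediately."""
        if value != self._value:
            self._value = value
            for effector in self._effectors:
                effector(value)

    def get(self):
        return self._value

alarm = ReactiveVar(False)
events = []
alarm.watch(lambda v: events.append(f"alarm changed to {v}"))

alarm.set(True)   # change is sensed and broadcast in the same step
alarm.set(True)   # no change, so no signal
print(events)     # ['alarm changed to True']
```

Because the dependency is declared once, at registration time, no caller of `set` can forget to notify the dependents: the stale-`total` failure mode simply cannot occur.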


The synchronous reactive software model is the future of parallel computing. It enforces temporal determinism and eliminates blind code and all the reliability problems that plague conventional algorithmic software. In addition, it is ideally suited to the creation of highly stable and reusable plug-compatible software modules. Drag’m and drop’m. These easy-to-use, snap-together modules will encourage a plug-and-play, trial-and-error approach to software construction and design. Rapid application development will never be the same. This is what Project COSA is all about. Unfortunately, a truly viable reactive system will have to await the development of single- and multicore processors that are designed from the ground up to support the non-algorithmic software model. Hopefully, the current multicore programming crisis will force the processor industry to wake up and realize the folly of its ways.

In my next article, I will explain why future computers will be non-algorithmic.

See Also:

How to Solve the Parallel Programming Crisis
Nightmare on Core Street
Parallel Computing: The End of the Turing Madness
Parallel Programming: Why the Future Is Synchronous
Parallel Computing: Why the Future Is Non-Algorithmic
Why Parallel Programming Is So Hard
Parallel Programming, Math and the Curse of the Algorithm
The COSA Saga

PS. Everyone should read the comments at the end of Parallel Computing: The End of the Turing Madness. Apparently, Peter Wegner and Dina Goldin of Brown University have been ringing the non-algorithmic/reactive bell for quite some time. Without much success, I might add, otherwise there would be no parallel programming crisis to speak of.


Amir said...

The processor industry is fairly aware that reactive dataflow programming, more commonly known as "spreadsheets," is a better programming model for parallelism.

It's the software industry, our reliance on existing code bases, and the continued profitability of creating legacy von Neumann code that forestall the inevitable transition to explicit dataflow graph execution.

The fact that multithreading on multicores is leading the way to software-pipelined stream programs is a good sign that things are heading the right way. As the (giga-ops / $) economics change to prefer more cells over faster cells, expect GPU and FPGA architectures to creep into traditional CPU markets. This transition is happening in HPC, and it will slowly trickle into embedded/portable markets next.

James said...

Yes, I would like to hear your thoughts on data-flow languages like Lush, and on "functional reactive" add-ons such as the Cells add-on for Lisp. Thanks.

Navid said...

May I say that I was actively developing a language with these features when I found out about dataflow languages and functional languages. Not that I did not know of them, but suddenly I saw that they were exactly those graphs and worked in the way you described.
Obviously, the compiler is not made in that way, but hey... OCaml runs *very* fast. It tops C++ for compiled code and Java for bytecode. And it's very close to Haskell and COSA.

Maybe you like it and can start coding in your style today. :)