Friday, February 26, 2010

Unreliable Software, Part III: Brooks's Blunder

Part I, II, III

Abstract

In Part II, I wrote that Brooks's excuse for crappy software is actually a curse because it is much more detrimental to society than is commonly assumed. In this post, I defend the thesis that there is a way to lift the curse and solve the software reliability problem once and for all. Timing is the key.

Brooks's Blunder

In his famous No Silver Bullet paper, computer scientist Fred Brooks wrote the following:
The complexity of software is an essential property, not an accidental one.
Brooks's thesis is that the accidental complexity of software programming (e.g., syntax errors, misspellings, infinite loops, null pointers) can be effectively managed but its essential complexity is so vast that errors cannot be prevented. Brooks stakes his entire thesis on the following assertion:
From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability.
In other words, Brooks argues that, in order for a program to be reliable, one must be able to enumerate and understand all the possible states of the program. This may seem obvious at first glance but is it really true? Well, I hate to disappoint those of you who have believed in this fairy tale for the last twenty years or so but Brooks's thesis can be easily falsified as follows:

Let's take a simple temperature control program that is designed to read a register T that represents room temperature. The program turns on the heater if T decreases below t1 and turns it off when T increases above t2. Is it necessary to enumerate all the possible states of this program in order to know that it works correctly? Of course not. The program will work correctly if the two conditions (i.e., T < t1 and T > t2) that control its behavior are tested and shown to elicit the right behavior. There is no need to test all the possible states of T in order to know that the program is working properly as designed.
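
To make the point concrete, here is a minimal sketch of that logic in Python. The threshold values and the control_step function are made up for illustration; the asserts at the end exercise the two controlling conditions (plus the in-between case) and nothing else.

    # A minimal sketch of the temperature control logic described above.
    # The thresholds and the function name are made up for illustration.

    T_LOW = 18.0   # t1: turn the heater on below this temperature
    T_HIGH = 22.0  # t2: turn the heater off above this temperature

    def control_step(T, heater_is_on):
        """One pass of the control loop; returns the new heater state."""
        if T < T_LOW:
            return True          # T < t1: heater on
        if T > T_HIGH:
            return False         # T > t2: heater off
        return heater_is_on      # in between: leave the heater alone

    # Testing the two controlling conditions (plus the in-between case) is
    # enough to establish that the program behaves as designed:
    assert control_step(15.0, False) is True    # T < t1 turns the heater on
    assert control_step(25.0, True) is False    # T > t2 turns the heater off
    assert control_step(20.0, True) is True     # in between, state is unchanged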

Note: Many years ago, speaking about Brooks's 'No Silver Bullet' paper, I wrote that "no other paper in the annals of software engineering has had a more detrimental effect on humanity's efforts to find a solution to the software reliability crisis." I still stand by that statement.

Thinking of Everything: It's All in the Timing

It is common knowledge that the human mind is prone to errors and often overlooks important aspects of a complex design. My claim is that there is a way to solve this problem. In my thesis on the correct approach to programming, timing is an essential and fundamental aspect of software, not an afterthought. This means that a program should be synchronous and reactive and should consist entirely of sensors (comparison operators or event detectors) and effectors (normal data operators). It also means that the relative timing of every elementary operation can be easily discovered via the use of simultaneity and sequence detectors. During testing, a special tool called the temporal discovery tool (TDT) can automatically discover the normal relative temporal order of all the sensors and effectors used in the program. This knowledge can be used to automatically generate alert detectors for every possible temporal anomaly. What would constitute an anomaly? Suppose the TDT discovers that signals A, B, C, and D always occur sequentially in that order. It will then create anomaly detectors that fire if there is a break somewhere in the sequence.
In practice, the TDT will check for both simultaneity and sequentiality in order to cover every possible eventuality. An application must not be deployed unless every discovered anomaly (alert signal) is adequately handled. This way, it is no longer necessary for the programmer to think of every possible anomalous condition in the program since that is the job of the TDT.
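
To give a rough idea of what such a tool might look like, here is a toy sketch in Python of the sequence part of the idea: learn the normal order of signals from a few training runs, then flag any break in that order at run time. The TDT itself does not exist; the SequenceMonitor class and the signal names below are my own illustrative assumptions.

    # Toy sketch of temporal discovery followed by anomaly detection.
    # Discovery phase: every training run is assumed to show the same
    # order of signals, e.g. A, B, C, D. Detection phase: any departure
    # from that order raises an alert.

    class SequenceMonitor:
        def __init__(self, observed_runs):
            self.expected = observed_runs[0]
            assert all(run == self.expected for run in observed_runs)
            self.position = 0

        def on_signal(self, name):
            """Returns an alert string if the learned sequence is broken."""
            if name != self.expected[self.position]:
                alert = f"anomaly: expected {self.expected[self.position]}, got {name}"
                self.position = 0
                return alert
            self.position = (self.position + 1) % len(self.expected)
            return None

    monitor = SequenceMonitor([['A', 'B', 'C', 'D'], ['A', 'B', 'C', 'D']])
    for signal in ['A', 'B', 'D']:       # C never fired: the sequence is broken
        alert = monitor.on_signal(signal)
        if alert:
            print(alert)                 # -> anomaly: expected C, got D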

An Example

A simple program like the temperature control program I described above would suffice in most situations, but what if some anomaly occurred, such as an external hardware malfunction? Given enough temperature sensors, the TDT can discover many things about the way temperature changes in the room. For example, the TDT can easily discover that the temperature cannot increase above X twice in a row without first decreasing below X in between.

In other words, it discovers that an increase above X normally alternates with a decrease below X. So it creates an alert sensor that sends a signal if the rule is broken. This sort of break can occur if the temperature sensor malfunctions for whatever reason. The discovery process is fully automatic and thorough and guarantees that every possible contingency is covered.
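
As a small illustration, here is what that alert sensor might look like in Python, assuming the "rise above X" and "fall below X" events arrive as named signals from the temperature sensor. The event names and the sample data are made up; this is not the TDT, just the kind of rule it would generate.

    # Sketch of the alternation rule described above: a rise above X must be
    # followed by a fall below X before another rise is normal. Two identical
    # events in a row break the rule and trigger an alert.

    def alternation_alerts(events):
        """events is a stream of 'rise' / 'fall' signals; yields an alert
        whenever the normal rise/fall alternation is broken."""
        last = None
        for event in events:
            if event == last:
                yield f"anomaly: two '{event}' signals in a row (sensor malfunction?)"
            last = event

    # Normal alternation, then a glitch: the sensor reports two rises in a row.
    print(list(alternation_alerts(['rise', 'fall', 'rise', 'rise'])))
    # -> ["anomaly: two 'rise' signals in a row (sensor malfunction?)"]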

Dear Toyota Software Managers

Please take note. If Toyota had a tool like the TDT, it would not be in its current predicament. The TDT would have automatically discovered that pressing the brake pedal at the same time as the gas pedal is an anomaly. Of course, Toyota's engineers knew that but, with the TDT, they would also know that everything is guaranteed to be in working order and, as a result, they would not be shy about adding as much complexity to the control program as necessary, or as desired.

Conclusion

The TDT described above can only work in a deterministic software environment. What this means is that the system must make it possible to determine the temporal relationship (simultaneous or sequential) of any two events in a program. The beauty of the COSA software model is that it enables temporally deterministic programs. Non-deterministic and/or abnormal behavior can be fully accounted for during development. COSA does not construct your code for you. It simply makes sure that whatever you create is rock-solid and complete, with no stone left unturned. If only the not-so-wise programming managers at Toyota, or wherever safety-critical code is developed, would take notice.

Next: How to Construct 100% Bug-Free Software

See Also:

Parallel Computing: Why the Future Is Synchronous
Parallel Computing: Why the Future Is Reactive
How to Solve the Parallel Programming Crisis
Parallel Computing: The End of the Turing Madness
Why Software Is Bad and What We Can Do to Fix It
Project COSA
The COSA Operating System

Wednesday, February 24, 2010

Unreliable Software, Part II: Brooks's Curse

Part I, II, III

Abstract

In Part I, I described what I call Brooks's excuse (named after computer scientist Fred Brooks), the main reason given by programmers and computer scientists to explain why complex programs are not reliable. The excuse essentially amounts to asserting that, when it comes to programming complex software, nobody can think of everything. In this post, I explain why Brooks's excuse turned out to be a very costly curse.

Brooks's Curse

Many articles have been written on the cost of the software reliability crisis but I think they only scratch the surface. Society is paying a much higher price than is commonly perceived. Brooks's excuse has become a self-fulfilling prophecy of sorts because programmers are now convinced of its veracity. Their reasoning is that, since programs cannot be guaranteed to be bug-free, there is no point in trying to find a solution. The best that can be done is to alleviate the problem with the use of formal methods and the application of so-called software metrics. These methods are not only costly and time-consuming, they just don't work. The Toyota debacle is proof of that. The end result is that it is impossible to develop and deploy truly sophisticated software for use in safety-critical applications.

Brooks's curse is the reason that cars do not drive themselves, a technology that would prevent over 40,000 yearly fatalities on US roads alone (Wheels of Death). It's the reason that humans are still required to operate trains, buses, aircraft and other dangerous vehicles and equipment. It's the reason that we still have human air traffic controllers. But it is going to get much worse because the industry has settled on multithreading as the software model for its multicore processors. If you think single-threading is unreliable, wait till multithreading becomes widespread. About the only hope we have is that multithreading is such a pain in the ass to work with that it will seriously hurt the computer industry's bottom line. This might convince the big dogs in the business of the seriousness of the problem and put pressure on them to do something about it, something that has not been tried before. We can only hope.

Superstition

In Part III, I will explain why Brooks's curse is based on nothing more than superstition. There is a way to guarantee bug-free code and there is a way to think of everything even if humans normally can't.

See Also:

Why Software Is Bad and What We Can Do to Fix It

Tuesday, February 23, 2010

Unreliable Software, Part I: Brooks's Excuse

Part I, II, III

Abstract

This is a three-part article on the reasons that complex software is unreliable and what we can do to fix the problem. I will argue against the prevailing notion that reliability in software is necessarily inversely proportional to complexity. Below, I explain the rationale behind this unquestioned belief.

The Double-Edged Nature of the Beast

As everyone knows, giant car-maker Toyota is in a world of serious hurt. Whether or not this is a case of sticky gas pedals, the real reason that this is such a huge problem for Toyota has to do with the software. If the software had been properly written, the pedal problem would have been discovered and it would certainly not have led to serious accidents. As it happens, Toyota wrote some crappy code for their cars. And the reason that their code is crappy is that complex code is unreliable. However, this is only half of Toyota's code complexity troubles. Since software engineers know that unreliability is proportional to complexity, they err on the side of caution by limiting the complexity of their code as much as they can. The problem is that complexity is frequently needed for safety and practical reasons. Code can sometimes be too simple (I think this may have been the case with Toyota's brake pedal program). Toyota is certainly between a rock and a hard place but the other automakers should refrain from secretly rejoicing at Toyota's troubles because their turn on the hot seat will come sooner or later. It's the nature of the Beast.

Brooks's Excuse

The usual excuse for crappy code is what I call Brooks's excuse. I named it after computer scientist Fred Brooks, the man who popularized the (dubious, in my view) notion that it's impossible to produce 100% guaranteed non-crappy code, no matter how hard you try. Brooks's excuse can be summed up thus. Even if you could catch and correct all the accidental errors, it's still impossible to think of everything that could go wrong because of the essential or non-accidental complexity of software. According to Brooks, since it is impossible to enumerate all the possible states (or, alternatively, comprehend the essential complexity) of a complex program, it is therefore impossible to create guaranteed bug-free code.

Brooks's excuse may well have credibility among software engineers but, as Toyota is finding out the hard way, it can serve neither as a legal defense nor as an antidote against bad publicity and costly recalls. In Part II, I will explain why Brooks's excuse is really Brooks's curse. I'm still in my bash-computer-scientists mode, I'm afraid.

See Also:

Why Software Is Bad and What We Can Do to Fix It

Venting My Spleen on Twitter?

Maybe Twitter Is Good for Something, After All

It occurred to me recently that Twitter might be the perfect venue for venting one's spleen. Sometimes, one needs to let it all hang out. My Twitter name is RebelScience. I haven't tweeted anything yet but next time I get excited about something or some weird idea crosses my mind, I may just tweet it. So, put me in your following list and stay tuned.

Computer Scientists Created the Parallel Programming Crisis

It Pays to Be Incompetent

The computer industry is spending large sums of money to come up with a solution to the parallel programming crisis but the very people who created the crisis are the direct beneficiaries. About two years ago, the major players in the computer industry (Intel, Microsoft, AMD, Nvidia, etc.) poured tens of millions of dollars into the coffers of major university research centers. Has there been any progress in finding a solution since? Answer: No. Is there any indication that progress will be made in the near future? Answer: No. Now, everyone knows that two years is a long time in the computer business. A lot can and should happen in two years. The amazing thing about parallel programming research is that computer scientists have been working on the problem for the last thirty years! They are just as clueless now as to what the solution might be as they were when they first started working on it! What is wrong with this picture?

Seismic Paradigm Shift Ahead

Computer scientists are not ashamed to admit that they have no idea what the solution to the crisis might be. Why should they be? Long-term failure has never stopped the money from flowing in. In fact, research money for parallel computing has markedly increased in the last few years because the industry is beginning to panic. Nobody messes with Moore's law and walks away to brag about it. In a way, it makes sense that the industry should approach academia for help. I mean, who else but academia is qualified to find a solution to this pressing problem? But how much longer can the crisis continue? Can the industry fund unsuccessful research indefinitely? Can we continue to live with hideous monstrosities like heterogeneous processors and multithreading? I don't think so. Sooner or later, something will have to give. The captains of the industry will eventually realize that they are pouring money into a black hole and many will wise up. A seismic upheaval in the way computer science is conducted will ensue. Many scientists who are now placed on a pedestal will see their work discredited. The computer science community may think they are immune to hard times but the market is known to be rather cruel when profits are in the balance.

Wrong From the Start

If you asked me who is most to blame for the current crisis, I would tell you without hesitation that it is the mathematicians. All the major participants who helped shape the history of computing, people like Charles Babbage, Lady Ada Lovelace, Alan Turing and John Von Neumann, were mathematicians. Their vision of the computer is that of a machine built for the computation of mathematical functions or algorithms. Even today, after years of failure to solve the parallel programming problem, mathematicians are still clamoring for the use of functional programming as the solution. Everything looks like a nail when all you have is a hammer. The only reason that the functional programming community has not succeeded in pushing FP into the mainstream is that reality keeps kicking them in the ass. The truth is that functional programming languages are counter-intuitive, hard to learn and a pain to work with.

Behaving Machines, Communication and Timing

Mathematicians notwithstanding, a computer is a class of automaton known as a behaving machine. As such, it must not be seen as a function calculator that takes input arguments and returns an answer. It should be seen as belonging to the category of machines that includes brains and neural networks. The proper paradigm for describing the working of such machines is not mathematics but psychology. We should be using terms like stimuli, responses, sensors, effectors, signals, and environment when describing computers. This way, an operation becomes an effect performed by an effector on the environment (data) and a comparison operator becomes a sensor. Once the computer is seen in its proper light, it immediately becomes clear that, like nervous systems, a computer program is really a communication network that senses and effects changes in its environment. Nothing more and nothing less. And, as with all communication systems, the deterministic timing of sensed events (stimuli) and operations (responses) becomes critical. Most of the problems that plague the computer industry (e.g., unreliability and low productivity) stem from the inability to precisely determine the temporal order (concurrency or sequentiality) of all events in the machine. Temporal determinism is a must. This then is the biggest problem with the Turing Computing Model: timing is not an inherent part of the model.
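
To illustrate the shift in vocabulary, here is a rough Python sketch in which a comparison becomes a sensor that fires a signal when its condition becomes true, and a data operation becomes an effector that reacts to that signal. The Sensor and Effector classes are my own illustration of the idea, not an existing API.

    # Rough sketch of the sensor/effector vocabulary: the environment is just
    # data, a sensor detects a change in it, and an effector acts on it.

    class Effector:
        def __init__(self, action):
            self.action = action                  # the effect on the environment

        def effect(self, env):
            self.action(env)

    class Sensor:
        def __init__(self, condition, targets):
            self.condition = condition            # the comparison being watched
            self.targets = targets                # effectors to signal
            self.was_true = False

        def sense(self, env):
            now_true = self.condition(env)
            if now_true and not self.was_true:    # fire only on the change
                for target in self.targets:
                    target.effect(env)
            self.was_true = now_true

    env = {'T': 20.0, 'heater': False}            # the environment (data)
    heater_on = Effector(lambda e: e.update(heater=True))
    too_cold = Sensor(lambda e: e['T'] < 18.0, [heater_on])

    env['T'] = 15.0        # a stimulus: the temperature drops
    too_cold.sense(env)    # the sensor detects it and signals the effector
    print(env['heater'])   # -> True (the response)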

How to Solve the Crisis

We must reinvent the computer. We must turn it from a mathematician's wet dream into a psychologist's wet dream, from a glorified function calculator into a universal behaving machine. And we must do so at the fundamental level. It is time to stop feeding money into the academic black hole for the reason that the academics have failed. It is time to stop encouraging mediocrity among the baby boomer generation who created this mess. It is time for the boomers to gracefully retire and allow a new generation of thinkers to have their turn at the wheel. Industry leaders should simply say to the Turing Machine worshipers, “thank you very much, ladies and gentlemen, for your great service; here's a nice retirement check for all your troubles.” Then they should forget the whole mess and forge a bold new future for computing. And then the next computer revolution will make the first one pale in comparison.

See Also:

How to Construct 100% Bug-Free Software
How to Solve the Parallel Programming Crisis
Parallel Computing: The End of the Turing Madness
Why Parallel Programming Is So Hard
Parallel Computing: Why the Future Is Non-Algorithmic
Half a Century of Crappy Computing
Why Software Is Bad and What We Can Do to Fix It

Monday, February 22, 2010

Parallel Computing: Why the Future Is Non-Algorithmic (repost)

(Note: This is a repost of a previous article. I am still in my bash-computer-scientists mode. Read the paragraph titled "Don't Trust Your Dog to Guard Your Lunch" below.)

Single Threading Considered Harmful

There has been a lot of talk lately about how the use of multiple concurrent threads is considered harmful by a growing number of experts. I think the problem is much deeper than that. What many fail to realize is that multithreading is the direct evolutionary outcome of single threading. Whether running singly or concurrently with other threads, a thread is still a thread. In my writings on the software crisis, I argue that the thread concept is the root cause of every ill that ails computing, from the chronic problems of unreliability and low productivity to the current parallel programming crisis. Obviously, if a single thread is bad, multiple concurrent threads will make things worse. Fortunately, there is a way to design and program computers that does not involve the use of threads at all.

Algorithmic vs. Non-Algorithmic Computing Model

A thread is an algorithm, i.e., a one-dimensional sequence of operations to be executed one at a time. Even though the execution order of the operations is implicitly specified by their position in the sequence, it pays to view a program as a collection of communicating elements or objects. Immediately after performing its operation, an object sends a signal to its successor in the sequence saying, "I am done; now it's your turn." As seen in the figure below, an element in a thread can have only one predecessor and one successor. In other words, only one element can be executed at a time. The arrow represents the direction of signal flow.

In a non-algorithmic program, by contrast, there is no limit to the number of predecessors or successors that an element can have. A non-algorithmic program is inherently parallel. As seen below, signal flow is multi-dimensional and any number of elements can be processed at the same time.

Note the similarity to a neural network. The interactive nature of a neural network is obviously non-algorithmic since sensory (i.e., non-algorithmically obtained) signals can be inserted into the program while it is running. In other words, a non-algorithmic program is a reactive system.
Note also that all the elements (operations) in a stable non-algorithmic software system must have equal durations based on a virtual system-wide clock; otherwise signal timing would quickly get out of step and result in failure. Deterministic execution order, also known as synchronous processing, is absolutely essential to reliability. The figure below is a graphical example of a small parallel program composed using COSA objects. The fact that a non-algorithmic program looks like a logic circuit is no accident since logic circuits are essentially non-algorithmic behaving systems.

No Two Ways About It

The non-algorithmic model of computing that I propose is inherently parallel, synchronous and reactive. I have argued in the past and I continue to argue that it is the solution to all the major problems that currently afflict the computer industry. There is only one way to implement this model in a Von Neumann computer. As I have said repeatedly elsewhere, it is not rocket science. Essentially, it requires a collection of linked elements (or objects), two buffers and a loop mechanism. While the objects in one buffer are being processed, the other buffer is filled with objects to be processed during the next cycle. Two buffers are used in order to prevent signal racing conditions. Programmers have been using this technique to simulate parallelism for ages. They use it in such well-known applications as neural networks, cellular automata, simulations, video games, and VHDL. And it is all done without threads, mind you. What is needed in order to turn this technique into a parallel programming model is to apply it at the instruction level. However, doing so in software would be too slow. This is the reason that the two buffers and the loop mechanism should ideally reside within the processor and be managed by on-chip circuitry. The underlying process should be transparent to the programmer and he or she should not have to care about whether the processor is single-core or multicore. Below is a block diagram for a single-core non-algorithmic processor.
Adding more cores to the processor does not affect existing non-algorithmic programs; they should automatically run faster, depending on the number of objects to be processed in parallel. Indeed, the application developer should not have to think about cores at all, other than as a way to increase performance. Using the non-algorithmic software model, it is possible to design an auto-scalable, self-balancing multicore processor that implements fine-grained deterministic parallelism and can handle anything you can throw at it. There is no reason to have one type of processor for graphics and another for general purpose programs. One processor should do everything with equal ease. For a more detailed description of the non-algorithmic software model, take a look at Project COSA.
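
For readers who want something concrete, here is a minimal software sketch of the two-buffer cycle in Python. It is software-only and coarse, whereas the real mechanism belongs on the chip, and the Cell class is my own illustration of the idea, not COSA itself.

    # Minimal sketch of the two-buffer technique: everything in the current
    # buffer is processed on the same virtual clock tick, and each cell adds
    # its successors to the next buffer; swapping the buffers ends the tick.

    class Cell:
        def __init__(self, name, successors=()):
            self.name = name
            self.successors = list(successors)

        def process(self, tick):
            print(f"tick {tick}: processing {self.name}")
            return self.successors       # cells to signal for the next cycle

    def run(start_cells, ticks):
        current, nxt = list(start_cells), []
        for tick in range(ticks):
            for cell in current:              # all logically simultaneous
                nxt.extend(cell.process(tick))
            # Swap buffers; a cell signaled more than once still runs only once.
            current, nxt = list(dict.fromkeys(nxt)), []

    # A tiny signal graph: A fans out to B and C, which both feed D.
    d = Cell('D')
    b, c = Cell('B', [d]), Cell('C', [d])
    a = Cell('A', [b, c])
    run([a], ticks=3)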

Don’t Trust Your Dog to Guard Your Lunch

The recent flurry of activity among the big players in the multicore processor industry underscores the general feeling that parallel computing has hit a major snag. Several parallel computing research labs are being privately funded at major universities. What the industry fails to understand is that it is the academic community that got them into this mess in the first place. British mathematician Charles Babbage introduced algorithmic computing to the world with the design of the analytical engine more than 150 years ago. Sure, Babbage was a genius but parallel programming was the furthest thing from his mind. One would think that after all this time, computer academics would have realized that there is something fundamentally wrong with basing software construction on the algorithm. On the contrary, the algorithm became the backbone of a new religion with Alan Turing as the godhead and the Turing machine as the quintessential algorithmic computer. The problem is now firmly institutionalized and computer academics will not suffer an outsider, such as myself, to come on their turf to teach them the correct way to do things. That’s too bad. It remains that throwing money at academia in the hope of finding a solution to the parallel programming problem is like trusting your dog to guard your lunch. Bad idea. Sooner or later, something will have to give.

Conclusion

The computer industry is facing an acute crisis. In the past, revenue growth has always been tied to performance increases. Unless the industry finds a quick solution to the parallel programming problem, performance increases will slow down to a crawl and so will revenue. However, parallel programming is just one symptom of a deeper malady. The real cancer is the thread. Get rid of the thread by adopting a non-algorithmic, synchronous, reactive computing model and all the other symptoms (unreliability and low productivity) will disappear as well. In an upcoming post, I will go over the reasons that the future of parallel programming is necessarily graphical.

See Also:

How to Solve the Parallel Programming Crisis
Parallel Computing: The End of the Turing Madness
Parallel Computing: Why the Future Is Synchronous
Parallel Computing: Why the Future Is Reactive
Why Parallel Programming Is So Hard
Why I Hate All Computer Programming Languages
Parallel Programming, Math and the Curse of the Algorithm
The COSA Saga

Sunday, February 21, 2010

Half a Century of Crappy Computing (repost)

(Note: I am still in my bash-computer-scientists mode. See my previous post. This article was originally posted in October 2007.)

Decades of Deception and Disillusion

I remember being elated back in the early 80s when event-driven programming became popular. At the time, I took it as a hopeful sign that the computer industry was finally beginning to see the light and that it would not be long before pure event-driven, reactive programming was embraced as the universal programming model. Boy, was I wrong! I totally underestimated the capacity of computer geeks to deceive themselves and everyone else around them about their business. Instead of asynchronous events and signals, we got more synchronous function calls; and instead of elementary reactions, we got more functions and methods. The unified approach to software construction that I was eagerly hoping for never materialized. In its place, we got inundated with a flood of hopelessly flawed programming languages, operating systems and processor architectures, a sure sign of an immature discipline.

The Geek Pantheon

Not once did anybody in academia stop to consider that the 150-year-old algorithmic approach to computing might be flawed. On the contrary, they loved it. Academics like Fred Brooks decreed to the world that the reliability problem is unsolvable and everybody worshipped the ground he walked on. Alan Turing was elevated to the status of a deity and the Turing machine became the de facto computing model. As a result, the true nature of computing has remained hidden from generations of programmers and processor architects. Unreliable software was accepted as the norm. Needless to say, with all this crap going on, I quickly became disillusioned with computer science. I knew instinctively what had to be done but the industry was and still is under the firm political control of a bunch of old computer geeks. And, as we all know, computer geeks believe and have managed to convince everyone that they are the smartest human beings on earth. Their wisdom and knowledge must not be questioned. The price [pdf], of course, has been staggering.

In Their Faces

What really bothers me about computer scientists is that the solution to the parallel programming and reliability problems has been in their faces from the beginning. We have been using it to emulate parallelism in such applications as neural networks, cellular automata, simulations, VHDL, Verilog, video games, etc. It is a change-based or event-driven model. Essentially, you have a global loop and two buffers (A and B) that are used to contain the objects to be processed in parallel. While one buffer (A) is being processed, the other buffer (B) is filled with the objects that will be processed in the next cycle. As soon as all the objects in buffer A are processed, the two buffers are swapped and the cycle repeats. Two buffers are used in order to prevent the signal racing conditions that would otherwise occur. Notice that there is no need for threads, which means that all the problems normally associated with thread-based programming are non-existent. What could be simpler? Unfortunately, all the brilliant computer savants in academia and industry were and still are collectively blind to it. How could they not? They are all busy studying the subtleties of Universal Turing Machines and comparing notes.
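
To show how simple the technique really is, here is the same buffer-swapping idea in Python at the coarse grain mentioned above, applied to a one-dimensional cellular automaton. The choice of rule 110 and the text display are arbitrary illustrations; the point is only that buffer A is processed while buffer B is filled, and then the two are swapped.

    # The two-buffer technique applied to a 1-D cellular automaton (rule 110).
    # Buffer a holds the current generation, buffer b receives the next one,
    # and the buffers are swapped at the end of every cycle. No threads.

    RULE = 110

    def step(a, b):
        """Compute the next generation into b from the current one in a."""
        n = len(a)
        for i in range(n):
            neighborhood = (a[(i - 1) % n] << 2) | (a[i] << 1) | a[(i + 1) % n]
            b[i] = (RULE >> neighborhood) & 1

    def run(width=32, generations=8):
        a = [0] * width
        a[width // 2] = 1                  # a single live cell in the middle
        b = [0] * width
        for _ in range(generations):
            print(''.join('#' if x else '.' for x in a))
            step(a, b)
            a, b = b, a                    # swap the two buffers

    run()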

We Must Reinvent the Computer

I am what you would call a purist when it comes to event-driven programming. In my opinion, everything that happens in a computer program should be event-driven, down to the instruction level. This is absolutely essential to reliability because it makes it possible to globally enforce temporal determinism. As seen above, simulating parallelism with a single-core processor is not rocket science. What needs to be done is to apply this model down to the individual instruction level. Unfortunately, programs would be too slow at that level because current processors are designed for the algorithmic model. This means that we must reinvent the computer. We must design new single- and multiple-core processor architectures to directly emulate fine-grained parallelism. There is no getting around it.

Easy to Program and Understand

A pure event-driven software model lends itself well to fine-grain parallelism and graphical programming. The reason is that an event is really a signal that travels from one object to another. As every logic circuit designer knows, diagrams are ideally suited to the depiction of signal flow between objects. Diagrams are much easier to understand than textual code, especially when the code is spread across multiple pages. Here is a graphical example of a fine-grained parallel component (see Software Composition in COSA for more info):


Computer geeks often write to argue that it is easier and faster to write keywords like 'while', '+', '-' and '=' than it is to click and drag an icon. To that I say, phooey! The real beauty of event-driven reactive programming is that it makes it easy to create and use plug-compatible components. Once you've built a comprehensive collection of low-level components, then there is no longer a need to create new ones. Programming will quickly become entirely high-level and all programs will be built entirely from existing components. Just drag'm and drop'm. This is the reason that I have been saying that Jeff Han's multi-touch screen interface technology will play a major role in the future of parallel programming. Programming for the masses!

Too Many Ass Kissers

I have often wondered what it would take to put an end to decades of crappy computing. Reason and logic do not seem to be sufficient. I now realize that the answer is quite simple. Most people are followers, or more accurately, to use the vernacular, they are ass kissers. They never question authority. They just want to belong in the group. What it will take to change computing, in my opinion, is for an intelligent and capable minority to stop kissing ass and do the right thing. That is all. In this light, I am reminded of the following quote attributed to Mark Twain:

“Whenever you find that you are on the side of the majority, it is time to pause and reflect.”

To that I would add that it is also time to ask oneself, why am I kissing somebody's ass just because everybody else is doing it? My point here is that there are just too many gutless ass kissers in the geek community. What the computer industry needs is a few people with backbones. As always, I tell it like I see it.

See Also:

Computer Scientists Suck
How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
The Age of Crappy Concurrency: Erlang, Tilera, Intel, AMD, IBM, Freescale, etc...

Computer Scientists Suck

Rebelling Against Chicken Shit Science

I feel the same toward computer scientists as I do toward physicists who espouse such absurdities as continuity, time travel and the like. Their science is what I call a chicken shit science. “Well, you would not be using your computer to post articles to your blog on the internet if it weren't for us” complains the nearest computer scientist. And, lo and behold, I agree. Instead, I would be sitting by the pool smoking a fine cigar while my robotic chef prepares fresh sashimi in the kitchen and my mechanical secretary fetches me a good bottle of sake or some fine wine from the cellar. In retrospect, this is what really pisses me off about computer scientists. They have deprived, not just me, but the entire world of the good life that we should be enjoying right now. There would be no parallel programming crisis, had those scientists really understood what the hell they were doing all those years.

The Incompetence of the Labs

As you can tell, I am in what you might call full rebellious mode. I am rebelling against the computer scientists at the UC Berkeley Parallel Computing Lab, the Stanford Pervasive Parallelism Lab and the Universal Parallel Computing Research Center of the University of Illinois at Urbana-Champaign. Those state-of-the-art labs pocketed millions of dollars almost two years ago to come up with a solution to the parallel programming crisis. If you thought that, with all those brilliant savants working on the problem, they would have something interesting and novel to show by now, you would be wrong. They got zilch, nada, zero, not even the promise of a breakthrough. And those three labs are just the tip of the iceberg of incompetence. Billions of dollars are being wasted right now in parallel computing research labs around the world. That's not counting the untold billions of dollars computer scientists spent over the last forty years working on the problem to no avail. They are as clueless now, as to what the solution might be, as they were when they first started thinking about the problem back in the sixties and the seventies!

Chicken Shit to the Extreme

What really frustrates me is that every one of the labs I mentioned above has visited my blog and the Rebel Science site countless times along with IBM, AMD, Intel, Nvidia, Motorola, and many other companies and universities around the world. They are all familiar with my ideas on parallel programming. Not once did any of those folks bother to contact me to discuss these ideas. But it gets worse. Some are now trying to surreptitiously claim my (and other people's) ideas as their own. University of Illinois's Josep Torrellas is a case in point. As most of my readers know, I have been calling for a solution to the parallel programming problem that involves using buffers to hold instructions to be processed in parallel. While one buffer is being processed, another is filled with instructions to be processed in the next cycle. Guess what? Not too long ago, Torrellas and a few other computer scientists published a paper titled The Bulk Multicore Architecture for Improved Programmability (pdf). Torrellas thinks he can get away with calling instruction buffers "chunks" and buffer processing "bulk processing" even though buffer processing is a well-known parallelization technique. Here is an excerpt from the December 18 UIUC announcement:
In the Bulk Multicore Architecture, the default execution mode of a processor is to commit chunks of instructions at a time. Torrellas explains, "Such a chunked mode of execution and commit is a hardware-only mechanism, invisible to the software running on the processor. Moreover, its purpose is not to parallelize a thread, but to improve programmability and performance."
And here's what I wrote in my parallel programming blog article in July, 2008:
The solution to the parallel programming problem is to do away with threads altogether. Threads are evil. There is a way to design and program a parallel computer that is 100% threadless. It is based on a method that has been around for decades. Programmers have been using it to simulate parallelism in such apps as neural networks, cellular automata, simulations, video games and even VHDL. Essentially, it requires two buffers and an endless loop. While the parallel objects in one buffer are being processed, the other buffer is filled with the objects to be processed in the next cycle. At the end of the cycle, the buffers are swapped and the cycle begins anew.
I then went on to explain in the next paragraph that the use of instruction buffers within the processor is completely transparent to the program, which is exactly what Torrellas is claiming. Even though I wrote my blog article in July of 2008, these are ideas that I have been promoting on the internet since the 1990s. I used to call the instruction buffers 'update lists' in those days. My point is, you can call them buffers, chunks, or lists but it's all the same thing: they contain independent instructions that can be executed in any order but preferably in parallel. And besides, I don't claim to have invented the method as it is commonly used in video games, neural networks and cellular automata to simulate parallelism, albeit at a coarse-grain level. However, I think I may have been the first to propose their use in parallel processors at the instruction level.

Torrellas (et al) and UIUC Owe Everybody an Apology

So how do Torrellas and UIUC get away with pretending that they're introducing something novel? After two years and millions of dollars, the brilliant scientists at UIUC figured out that it is best to process parallel instructions in batches? Haysoos Martinez! All they had to do is ask me and I would have given them the answer for free. The truth is that Torrellas and other members of the parallel computing lab at UIUC probably read my blog or my web site and decided to use deceptive phrases like “chunk processing” to hide where they got their ideas from. This is chicken shit to the extreme. Torrellas and his co-authors are fakes who are claiming other people's ideas on parallel processing as their own. They and UIUC owe the programming community and myself an apology.

Ass-Kissing Cult

Needless to say, I am not feeling very forgiving at the moment. The computer science community invented the parallel programming crisis. They did it by creating a cult around a man named Alan Turing. The Turing machine is a purely sequential machine. It has absolutely nothing to do with providing a solution to the parallel programming crisis (see Parallel Computing: The End of the Turing Madness). I have known since 1980 that the way we build and program our computers is fundamentally flawed. It was flawed from the start. Why? Because computer scientists were busy kissing Turing's ass instead of doing what they were paid to do. Kissing ass is the norm in academia. Peer review enforces it. They're still at it to this day.

Rebel Now or Live in Mediocrity

In my opinion, the only way to liberate computing from the grips of the ass-kissing cult that currently controls it is to rebel against them. There is a dire need for a grass-roots rebellion amongst the younger generation of programmers, chip engineers and managers in the computer industry. It's time to tell the baby boomers (the Turing Machine worshipers) who gave us this mess to eat shit. It's time to tell them to move the hell out of the way so the rest of us can fix this crisis and put computing on the right track. We are not going to take it anymore. So my advice to my readers is, rebel now or resign yourselves to living in mediocrity.

Next: Half a Century of Crappy Computing.

See Also:

How to Solve the Parallel Programming Crisis
UC Berkeley's Edward Lee: A Breath of Fresh Air
Parallel Computing: The End of the Turing Madness

Thursday, February 18, 2010

The Lattice Hypothesis: A Brief Overview

Abstract

Last year, I wrote a series of posts on the nature of motion in which I introduced the lattice hypothesis. I argued that Aristotle was right for insisting that motion is caused. The lattice hypothesis essentially says that a causal analysis of motion leads inescapably to the conclusion that normal matter is immersed in an immense lattice of energetic particles, lots and lots of it. Soon, we'll learn how to tap into this energy field for super fast propulsion and unlimited clean energy production. In this post, as a brief overview, I will enumerate the various assumptions, conclusions and claims of the lattice hypothesis.

Premises
  • Universe is discrete. Continuity is a religion of cretins.
  • Only the absolute exists. The relative is abstract. The idea promoted by relativists for a century that only relative motion/position exists is false. The exact opposite is true.
  • Time is abstract. A physical time dimension would make motion impossible. Relativists are always surprised to hear that nothing can move in spacetime and that spacetime is a fictitious math construct, but it's the truth.
  • Space (distance), too, is abstract, nothing but a perceptual illusion. Nonspatiality (action at a distance) is the norm, not the exception.
  • All phenomena, including motion, are caused.
  • A cause is a violation of a conservation principle.
  • The conservation of nothing (it's an ex-nihilo universe, baby) is the mother of all conservation principles.
  • An ex-nihilo universe means that we live in a yin-yang universe in which everything sums up to nothing.

Conclusions
  • The abstract nature of time implies that there is only one speed in the universe, the speed of light. Nothing can move faster or slower. Weird, I know, but the logic is unassailable.
  • The abstract nature of time also means that the universe is necessarily probabilistic. This explains why particle decay is probabilistic.
  • Normal matter is immersed in an immense 4-D lattice of energetic particles. No lattice = no motion.
  • Interactions between normal matter and the lattice are responsible for all electromagnetic phenomena.
  • Gravity is an instantaneous, nonlocal, energy conservation phenomenon that is caused by certain interactions occurring in the lattice.

Claims
  • In the not too distant future, we will develop technologies that exploit the energy of the lattice for both super fast propulsion and unlimited energy production.
  • Vehicles will be able to move at enormous speeds and negotiate right angle turns without slowing down and without incurring any damage due to inertial forces.
  • New York to Beijing in minutes, earth to Mars in hours, floating sky cities. That's the future of transportation and energy production.

Nothing Escapes the Lattice

Remember that we are all prisoners of the lattice because nothing escapes the lattice. The reason, again, is that there can be no motion without the lattice. Carl Sagan once famously said that extraordinary claims require extraordinary evidence and I agree. The experimental proof of the lattice hypothesis will come at a time and place of my choosing. The lattice hypothesis will be the central topic of my upcoming downloadable e-book, A New Kind of Physics. Stay tuned.

See Also:

Physics: The Problem With Motion
Understanding the Lattice
Nasty Little Truth About Spacetime Physics
More Nasty Little Truth About Physics
Gravity, Part I
Why Infinity is Infinitely (Almost) Stupid

Wednesday, February 17, 2010

Parallel Computing: The End of the Turing Madness. Part II (repost)

This is a repost of a previous article. I still feel the same way about Turing, sorry.

Part I, II

Don’t Read My Stuff

I want to preface this post by pointing out that I don’t blog for the benefit of the computer science community. As dear as Project COSA is to me, I would rather see it fail if its success must depend on academic approval. If what I write about Alan Turing offends you, then don’t read my stuff. It’s not meant for you. And don’t send me emails to tell me that you don't like it because I don’t care. If I am a kook in your opinion, you are an idiot in mine. That makes us even. I just thought that I would get this little misunderstanding out of the way so I can continue to enjoy my freedom of expression on the Internet. It’s a beautiful thing. To the rest of you who are interested in what I have to say about Turing, I apologize for the delay in posting the second part of this article.

Turing Is the Problem, Not the Solution

In Part I, I wrote that Alan Turing is a naked emperor. Consider that the computer industry is struggling with not just one but three major crises [note: there is also a fourth crisis having to do with memory bandwidth]. The software reliability and productivity crises have been around since the sixties. The parallel programming crisis has just recently begun to wreak havoc. It has gotten to the point where the multicore vendors are starting to panic. Turing’s ideas on computation are obviously not helping; otherwise there would be no crises. My thesis, which I defend below, is that they are, in fact, the cause of the industry’s problems, not the solution. What is needed is a new computing model, one that is the very opposite of what Turing proposed, that is, one that models both parallel and sequential processes from the start.

UBM vs. UTM

I have touched on this before in my seminal work on software reliability but I would like to elaborate on it a little to make my point. The computing model that I am proposing is based on an idealized machine that I call the Universal Behaving Machine or UBM for short. It assumes that a computer is a behaving machine that senses and reacts to changes in its environment.
Please read the paragraph on The Hidden Nature of Computing before continuing. Below, I contrast several characteristics of the UBM with those of the UTM. The Turing machine does not provide for parallelism but I will be gracious and use multithreading as the Turing version of parallel processing.

Although multithreading is not part of the UTM, this is the mechanism that multicore processor vendors have adopted as their parallel processing model. Turing’s supporters will argue that parallelism can be simulated in a UTM without threads and they are correct. However, as I explain below, a simulation does not change the sequential nature of the Turing computing model. For an explanation of “non-algorithmic”, see my recent blog entry on the subject.

Simulation Does Not a Computing Model Make

True universality requires that a computing model should handle both serial and parallel computations and events by definition. In other words, both types of computation should be inherent parts of the model. One of the arguments that I invariably get from Turing's supporters is that the Turing machine is a universal computing model because you can use it to simulate anything, even a parallel computer. This is a rather lame argument because observing that a Turing machine can be used to simulate a parallel computer does not magically transform it into a parallel computing model. This would be like saying that, since a Turing machine can be used to simulate a video game or a chess computer, it is therefore a video game or a chess-computing model. That is absurd. Simulation does not a model make. Whenever one uses one mechanism to simulate another, one climbs to a new level of abstraction, a new model, one that does not exist at the lower level.

To Model or to Simulate, That Is the Question

The Turing machine is a model for a mechanism that executes a sequence of instructions. It does not model a parallel computer, or a tic-tac-toe program or a spreadsheet or anything else, even if it can be used to simulate those applications. The simulation exists only in the mind of the modeler, not in the underlying mechanism. The fallacy of universality is even more transparent when one realizes that a true parallel machine like the UBM does not have to simulate a Turing machine the way the UTM has to simulate the UBM. The reason is that the UBM can duplicate any computation that a Turing machine can perform. In other words, the UTM is an inherent part of the UBM but the opposite is not true.

The Beginning of the End of the Turing Madness

Thomas Kuhn wrote in his book, “The Structure of Scientific Revolutions” that scientific progress occurs through revolutions or paradigm shifts. Max Planck, himself a scientist, said that "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Last but not least, Paul Feyerabend wrote the following in Against Method: “… the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down to size and give them a more modest position in society.”

I think that all the major problems of the computer industry can be attributed to the elitism and intransigence that is rampant in the scientific community. The peer review system is partly to blame. It is a control mechanism that keeps outsiders at bay. As such, it limits the size of the meme pool in much the same way that incest limits the size of the gene pool in a closed community. Sooner or later, the system engenders monstrous absurdities but the community is blind to it. The Turing machine is a case in point. The point that I am getting at is that it is time to eradicate the Turing cult for the sake of progress in computer science. With the parallel programming crisis in full swing, the computer industry desperately needs a Kuhnian revolution. There is no stopping it. Many reactionaries will fight it tooth and nail but they will fall by the wayside. We are witnessing the beginning of the end of the Turing madness. I say, good riddance.

[This article is part of my downloadable e-book on the parallel programming crisis.]

See Also:

How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic

Parallel Computing: The End of the Turing Madness. Part I (repost)

This is a repost of a previous article. I still feel the same way about Turing, sorry.

Part I, II

Hail, Turing

Alan Turing is, without a doubt, the most acclaimed of all computer scientists. He is considered to be the father of modern computer science. Soon after his untimely death in 1954 at the age of 41, the computer science community elevated him to the status of a god. Books are written, movies are made and statues are erected in his memory. Turing is so revered that the most coveted prize in computer science, the A. M. Turing Award, was named after him. It is worth noting that Turing's sexual inclinations and religious convictions did little to diminish his notoriety. Homosexual and atheist movements around the world have wasted no time in transforming the man into a martyr and the sad history of his life into a veritable cause célèbre. The end result is that nothing negative may be said about Turing. It is all right to talk about the Von Neumann bottleneck but mentioning that the bottleneck was already part and parcel of the Turing machine is taboo. The adulation of Turing is such that criticizing him or his ideas is guaranteed to deal a deathblow to the career of any scientist who would be so foolish.

Unabashedly Bashing Turing and Loving It

It is a good thing that I don’t lose any sleep over my reputation in either the scientific community or the computer industry, otherwise I would be in a world of hurt. I am free to bash Turing and his supporters to my heart’s content. I am free to point out that the emperor is buck-naked and ugly even while everyone around me is fawning at his non-existent clothes. As someone with a strong iconoclastic bent, I admit that I enjoy it. Free speech is a wonderful thing. I have argued forcefully and unabashedly in the past that academia’s strange and pathological obsession with Turing is the primary cause of the software reliability and productivity crisis. I now argue even more forcefully that the parallel programming crisis can be directly traced to our having swallowed Turing’s erroneous ideas on computing, hook, line and sinker. Had the computer science community adopted the correct computing model from the start, there would be no crisis and you would not be reading this article and getting offended by my irreverent treatment of Mr. Turing. In fairness to Turing, my criticism is directed mostly toward those who have turned the man into the infallible god that he never was and never claimed to be.

The Not So Universal UTM

Unfortunately for Turing’s admirers, for many years now, the laws of physics have been quietly conspiring against their hero. Sequential processors have reached the limits of their performance due to heat dissipation problems. The computer industry is thus forced to embrace parallel processing as the only way out of its predicament. Parallelism is a good thing but it turns out that programming parallel processors is much easier said than done. In fact, it is such a pain that the big players in the multicore industry are spending hundreds of millions of dollars in research labs around the world in the hope of finding a viable solution. They have been at it for decades without any hint of success in sight. Worse, the leaders of the industry, out of sheer desperation, are now turning to the very people who got everybody into this mess in the first place, none other than the Turing worshippers in academia. It would be funny if it weren’t so pathetic.

Consider that Turing’s ideas on computing, especially the Universal Turing machine (UTM), were strongly influenced by his love of mathematics. He was, first and foremost, a mathematician and, like Babbage and Lady Ada a century before him, he saw the computer primarily as a tool for solving serial math problems. It is highly unlikely that he ever thought of the concept of parallel processing or the idea that a computer might be used for anything other than problem solving and the execution of instruction sequences. The time has come for the computer industry to realize that the UTM is the quintessential sequential computer and, as such, it is ill suited as a model for parallel computing. It is time to devise a truly universal computing machine, an anti-UTM machine if you will, one that can handle both sequential and concurrent processing with equal ease.

Smashing the Turing Machine

The Turing machine cannot be said to be universal because it is a strictly sequential machine by definition whereas the universe is inherently parallel. It is certainly possible to use a UTM to emulate parallel processes. Programmers have been emulating parallelism in such applications as neural networks, video games and cellular automata for decades. However, the emulation is not part of the Turing computing model. It is an ad hoc abstraction that a programmer can use to pretend that he or she is performing parallel processing even though the underlying model is sequential. Programmers get away with the pretense because processors are extremely fast. Slow the processor down to a few hundred Hertz and the illusion of parallelism disappears.

One can always argue that having two or more Turing machines running side by side is an example of parallel computation. However, two Turing machines running independently cannot be construed as representing one Universal Turing machine, as defined by Turing. Parallelism is not synonymous with temporal independence. On the contrary, a truly parallel system is one in which all events can be unambiguously determined as being either concurrent or sequential. Even if one could extend the definition of the UTM to include multiple parallel machines, deciding which operations are performed concurrently requires even more ad hoc additions such as a communication channel between the two machines. Alternatively, one can imagine a Turing like machine with an infinite number of read/write heads on a single infinitely long tape. Still, the Turing model does not provide a deterministic mechanism for the detection of either concurrency or sequentiality and the problem becomes worse with the addition of multiple heads. There is only one way to solve the parallel programming crisis. We must smash the un-universal Turing machine and invent a truly universal alternative.

In Part II of this two-part article, I will describe what I mean by a true universal computing machine and do a direct comparison with the UTM.

See Also:

How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic

Tuesday, February 16, 2010

UC Berkeley’s Edward Lee: A Breath of Fresh Air (repost)

This is a repost of a previous article (6/21/2008) with an update at the end.

Not All Academics Are Ass kissers

I don’t hide the fact that my general attitude vis-à-vis the computer science community is a hostile one. I just cannot stand the way most academics kiss each other’s ass. I guess it is a legacy of peer review, a mechanism whose main purpose is to stifle dissent within the community. However, this does not mean that I believe all academics are inveterate ass kissers. Some are more like the prophet Daniel in King Nebuchadrezzar’s court. I previously lauded Peter Wegner and Dina Goldin on this blog for their work on non-algorithmic interactive computing. I am sure there are many more like them. Today, I would like to draw your attention to the work of Professor Edward A. Lee at UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

Threads Are Bad and Precise Timing Is Crucial

Professor Lee made major news a couple of years ago with the publication of his The Problem with Threads (pdf) in which he methodically laid out the evils of multithreading. As my readers know, I have been waging a ferocious battle against multithreading for many years. What impresses me the most about Lee’s work is that he seems to have a deep understanding of what I believe to be two of the most important issues in computing: timing and implicit concurrency. Deterministic timing is essential to program reliability and implicitly concurrent programming elements are essential to the design and composition of parallel programs. Although I do not agree with Lee’s apparent obsession with doing for software timing what has been done in hardware (real time precision in complex software is a pipe dream, in my opinion), I recommend that everybody takes a close look at Professor Lee’s work, especially the Ptolemy Project.

Thread Monkeys All Around

Professor Lee is, as of this writing, the chair of UC Berkeley’s Parallel Computing Lab, which is supported in part by Microsoft and Intel. Now, it is no secret that the people at Intel and Microsoft are a bunch of thread monkeys, not unlike the thread monkeys at Stanford's Pervasive Parallelism Lab. I don’t know about the views of the other members of Berkeley’s research team but it is obvious that Intel and Microsoft’s addiction to multithreading is at odds with Lee’s position. I sure hope that Professor Lee stands his ground and absolutely refuses to go along with the brain-dead thread mentality. I hope, for the sake of the future of computing, that UC Berkeley and professor Lee are willing to stand up to the Wintel idiocy and tell them in no uncertain terms that their thread-based approach to parallel programming and multicore architecture design is just crap.

Having said that, I am afraid that Lee’s style, in contrast to mine, is very low-key and he may lose this battle. Lee needs to be a lot more forceful, in my opinion, otherwise he does not stand a chance against the thread monkeys. At any rate, it’s going to be interesting to see what comes out of Berkeley’s Parallel Computing Lab by the end of this year.

The Battle at Berkeley (Update 2/16/2010)

OK. It has been more than a year and what do you know? UC Berkeley is nowhere near a solution to the parallel programming crisis. No big surprise here. I was right. Professor Lee has neither the political clout nor the personal strength to stand up to the nefarious army of thread monkeys. He's surrounded by them, not just within Berkeley's computer science department but also at Intel and Microsoft. Heck, it doesn't look like Professor Lee is even the chair of Berkeley's Parallel Computing Lab any more. That's too bad, but there is no need to cry over spilled milk. Those of you who are really interested in this topic should go over to the Rebel Science E-Bookstore (read it first; pay only if you like what you read) and download my e-book, "How to Solve the Parallel Programming Crisis". At the very least, read the related blog posts at the links below. Something is bound to happen, sooner or later.

See Also:

How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
Parallel Computing: The End of the Turing Madness

Friday, February 12, 2010

My First E-Book: How to Solve the Parallel Programming Crisis

The Rebel Science E-Bookstore

The Rebel Science E-Bookstore is now open for business. Only one e-book is currently available but that will change soon. My first e-book, "How to Solve the Parallel Programming Crisis", is a compilation of about 36 articles on computers and programming that I have written over the years. With very few exceptions, I left most of the articles as they were originally written.

Honor System

I initially planned to publish an e-book with Amazon (or some other e-book vendor) but I have since changed my mind for reasons I wrote about in a previous post. I have concluded that Adobe's Portable Document Format (PDF) is the best format for electronic self-publishing. Every e-book in the Rebel Science Bookstore is available for free download. Pay me only if you think it was worth your while.

The Rebel Science Research Institute

The money that I make selling my books will go toward setting up the Rebel Science Research Institute. RSRI will focus mainly on education and research in physics, computer science and artificial intelligence. Please donate if you think this is a worthwhile endeavor.