Friday, March 5, 2010

How to Construct 100% Bug-Free Software

Abstract

Software unreliability is a monumental problem. Toyota's brake pedal troubles are just the tip of the iceberg. Yet the solution is so simple that I am almost tempted to conclude that computer scientists are incompetent. As I showed in my previous post, the usual 'no silver bullet' excuse (Brooks's excuse) for unreliable code is bogus. Contrary to Fred Brooks's claim in his famous No Silver Bullet paper, it is not necessary to enumerate every state of a program to determine its correctness. What matters is the set of conditions or temporal expectations that dictate the program's behavior. Timing is fundamental to the solution. Below, I expand on my thesis by arguing that the computer can in fact automatically discover everything that may go wrong in a complex program, even the things that the programmer overlooks. Please read Unreliable Software, Part I-III before continuing.

Expectations and Abnormalities

Jeff Voas, a software reliability expert and a co-founder of Cigital, once said, "it's the things that you never thought of that get you every time." Voas is in no hurry to see a solution to the unreliability problem because he would be out of a job if that happened. Still, I agree with him that it is observably true that the human mind cannot think of everything that can go wrong with a complex software system, but (and this is my claim) the computer itself is not so limited. This is because the computer has a certain advantage over the human brain: it can do a complete, exhaustive search of what I call the expectation space of a computer program, that is, all the possible decision pathways that might occur within the program as a result of expected events.

A billion mathematicians jumping up and down and foaming at the mouth notwithstanding, software is really all about stimuli and responses, or actions and reactions. The function-calculation stuff is just icing on the cake. Consider that every decision (reaction) made by a program in response to a sensed event (a stimulus) implicitly expects a pattern of sequential and/or simultaneous events to have preceded the decision. This expected temporal signature is there even if the programmer is not aware of it. During the testing phase, it is easy for a diagnostic subprogram to determine the patterns that drive decisions within the application under test. It suffices to exercise the application multiple times to determine its full expectation pattern. Once this is known, it is trivial for the subprogram to automatically generate abnormality sensors that fire whenever the expectations are not met. In other words, the system can be made to think of everything even if the programmer is not thorough. Abnormality sensors can be automatically connected to an existing error or alarm component or to one constructed for that purpose. The system should then be tested under simulated conditions that force the activation of every abnormality sensor in order to determine its robustness under abnormal conditions.
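
To make this concrete, here is a minimal sketch in Python of such a diagnostic subprogram. COSA has no reference implementation, so every name below (DiagnosticMonitor, record_decision, and so on) is hypothetical; this is an illustration of the technique, not the finished article.

    # Hypothetical sketch of an automatic abnormality-sensor generator.
    class DiagnosticMonitor:
        def __init__(self):
            # Maps each decision point to the set of temporal signatures
            # (patterns of preceding events) observed during testing.
            self.expectations = {}

        def record_decision(self, decision_id, preceding_events):
            """Called during the testing phase each time a decision fires.
            preceding_events is a tuple of (event, tick) pairs describing
            the sequential/simultaneous events that led to the decision."""
            patterns = self.expectations.setdefault(decision_id, set())
            patterns.add(tuple(preceding_events))

        def generate_abnormality_sensors(self, alarm):
            """Emit one sensor per decision point. Each sensor calls the
            alarm when its decision is reached via a temporal signature
            that was never observed during testing."""
            sensors = {}
            for decision_id, patterns in self.expectations.items():
                def sensor(preceding_events, _p=patterns, _id=decision_id):
                    if tuple(preceding_events) not in _p:
                        alarm(_id, preceding_events)
                sensors[decision_id] = sensor
            return sensors

The procedure is simply to exercise the application many times while feeding record_decision(), then install the generated sensors and run the system under simulated abnormal conditions that force each one to fire.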

Learn to Relax and Love the Complexity

The above will guarantee that a program is 100% reliable within its scope. The only prerequisite to having a diagnostic subprogram like the one I described is that the software model employed must be synchronous and reactive. This ensures rock-solid deterministic program behavior and timely reactions to changes, which are the main strengths of the COSA software model. The consequences of this are enormous for the safety-critical software industry. It means that software developers no longer need to worry about bugs in their programs as a result of complexity. In fact, adding new functionality to a system makes it even more robust and reliable. Why? Because new functionality cannot break the system's existing expectations without triggering an alarm; it must conform to the functionality that is already in place. Expectations are like constraints, and the more complex a program is, the more constraints it has. We can make our programs as complex as necessary without incurring a reliability penalty. So there is no longer any reason not to have a completely automated mass transportation or air traffic control system.

Academic Responsibility

This is the part where I step on my soapbox and start yelling. This blog is read every day by academics from various institutions around the world and from research labs in the computer industry. I know; I have the stats. If you are a computer scientist and you fail to act on this information, then you are a gutless coward and an asshole, pardon my French. Society should and probably will hold you personally responsible for the more than 40,000 preventable traffic fatalities that occur every year on U.S. roads alone. You have no excuse, goddammit.

See Also:

Why Does Eugene Kaspersky Eat Japanese Baby Crabs and Grin?
Why the FAA's Next Generation Air Traffic Control System Will Fail
Computer Scientists Created the Parallel Programming Crisis
How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
Parallel Computing: Why the Future Is Synchronous
Parallel Computing: Why the Future Is Reactive
Parallel Computing: The End of the Turing Madness
Why Software Is Bad and What We can Do to Fix It
COSA: A New Kind of Programming

16 comments:

chzchzchz said...

Louis,

If you want people to put effort into implementing your work (which, for some reason, you refuse to do on your own), you're going to have to address a few key issues.

From a theoretical standpoint, a firm understanding of Turing machines and other automata is essential. You may claim to have a better model and perhaps don't need to know about those things or don't feel like wasting time on it. Still, it's important to be able to address theoretical criticisms in a constructive manner. For example, how is your model any different from an n-tape Turing machine with one input tape encoding "stimuli" on some time-line and with an output tape that the machine can write "responses" to? Or is it something that is weaker, doesn't suffer from the halting problem, and is physically realizable, like a DFA? Is it a Petri net of DFAs? If not, why not?
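
To make the question concrete, here is the kind of machine I have in mind, sketched in Python (the names are mine and purely illustrative): a Moore-style automaton stepping over a timed tape of stimuli and writing a response at each step.

    # Illustrative stimulus/response automaton: a finite-state machine
    # driven by a timed input tape, for comparison with the COSA model.
    def run_reactive_fsm(transitions, outputs, start, stimuli):
        """transitions: dict mapping (state, stimulus) -> next state
        outputs: dict mapping state -> response (Moore-style)
        stimuli: iterable of (tick, stimulus) pairs in time order."""
        state = start
        responses = []
        for tick, stimulus in stimuli:
            state = transitions[(state, stimulus)]
            responses.append((tick, outputs[state]))
        return responses

If COSA reduces to something like this, it is a DFA and the halting problem is a non-issue; if it is strictly more powerful, the theoretical questions above need answers.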

From a practical standpoint, you need either a working prototype of your language or a reference specification which is implementable. You seem to veer toward a pure-hardware design, so you may claim that expensive, special hardware is required for it to work. In this case, why would a physical simulation be inadequate? I recall you suggested it would be programmed using a graphical environment. Can this language be expressed as a context-free grammar? In other words, can you accurately describe the rules for what the graphical environment will allow and disallow? This would be tremendously helpful to compiler writers.
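
For instance, something like the following (purely illustrative; I am guessing at a vocabulary of components, cells, sensors, and effectors from your own writings):

    Program    -> Component+
    Component  -> Cell+ Connection*
    Cell       -> Sensor | Effector
    Connection -> Cell "->" Cell

If the rules of the graphical environment can be pinned down in a form like this, compiler writers have something to build against.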

From an evaluation standpoint, you make very strong claims which go unsupported. Would a COSA program prevent the "40,000 preventable traffic fatalities"? If COSA is so easy to use, why not build a COSA system that does just that? It doesn't have to cost a lot; could you simulate it? If every program is 100% reliable, could you build some example 100% reliable application? Why is a normal system "unreliable" while your system is suddenly "reliable"? Try to provide side-by-side realistic examples of something a computer does today compared with what COSA would do. That is, write out a program in some current computer language, demonstrating how it would be written today, and then write out exactly how the same program would be represented in COSA.

From a personal standpoint, you should try to make at least one visit to a mental health professional. You may find it beneficial. If you don't like it, you don't have to go again.

Louis Savain said...

chzchzchz,

I remember you. You're the dumbass coward from Stanford. You belong to that cretinous cult of academics who believe in infinity and continuity. You're worse than the flat-earthers. You forget that I don't write for academics, especially physicists and computer scientists. I write for people who still have a few working neurons left in their brains.

What I wrote in my article above is so simple to grasp, it's no wonder it went over your head. As I said, you are a coward and an asshole. You're the perfect example of an academic who can kiss my ass. How about that?

PS. Did you read my 2008 article, Computer Academics Can Kiss My Ass? You should. Hopefully, it will help you to stop reading my blog. You seem to be pathologically addicted to my writings.

Josh Painter said...

How to turn Iron into Gold

You just, you know, like, turn iron into gold, man. Anybody from Stanford who doesn't agree can kiss my ass. Anybody who doesn't realize how easy it is to turn iron into gold is a fool and shouldn't be reading this comment in the first place. You just turn iron into gold, man.

Basile_S said...

You might be interested in reading Jacques Pitrat's latest book, Artificial Beings (The Conscience of a Conscious Machine) (ISBN: 9781848211018, Wiley, 2009). In some sense it is a bit GOFAI (definitely, it is Symbolic Artificial Intelligence), but with a very unusual and refreshing approach based upon reflection through declarative meta-knowledge.

You might also be interested in the TUNES project.

Regards

--

Basile Starynkevitch

Louis Savain said...

Basile,

Thanks for the links. The TUNES project looks very interesting. I like rebels and revolutionaries, so I'll take a closer look.

At first glance, Jacques Pitrat's AI stuff does not appeal to me very much because he talks about consciousness. This alone tells me that he has no clue but I'll take a look anyway.

Louis Savain said...

Josh Painter,

The problem with turning any cheap metal into gold is that the gold merchants and gold miners wouldn't like it one bit.

Maniaque said...

Tee-hee - the problem with turning iron into gold has nothing to do with gold merchants and gold miners, and rather a lot to do with the value of gold (immutability, scarcity).

To your point about "chzchzchz" and his ass-kissing abilities... any "personal" comments aside, and leaving all the "academic" keywords aside as well, there is one thing you can't get away from: your "writings" contain too many claims and simplistic diagrams, and too few real-life examples.

Just one simple example (fully worked, not just alluded to) of how a real-life problem would be better solved by the mechanisms you propose than by "common" or "current" approaches would really help inject some realism.

(Or, as I suspect, you will be unable to find such a real-world problem.)

Keep entertaining the "academics from various institutions around the world and from research labs in the computer industry"! :)

Louis Savain said...

Maniaque,

There are two kinds of people in the world, those who get it and those who don't. You're one of those people who don't and probably never will. Your writing style tells me that you are either a clueless academic or you're aspiring to be known as one. In which case you, too, can kiss my ass. See ya around.

Louis Savain said...

I just figured out that Maniaque is none other than chzchzchz above. Good try, Stanford man. You are a maniaque, indeed. I am blocking all your comments until you have the guts to identify yourself. You gutless coward. LOL.

Aaron said...

I agree that some solid examples would be helpful.

Louis Savain said...

Aaron wrote:

I agree that some solid examples would be helpful.

Well, that's the chicken-and-egg part. To convince the naysayers and the doubting Thomases, I would have to develop a new operating system (COSA) and a new development environment, and design and implement a very complex application such as a self-driving urban passenger vehicle.

No problem. I'll get to it as soon as you write me a check for several million dollars.

Louis Savain said...

One more thing, Aaron.

Keep an eye on the Rebel Science Discussion Forum. A handful of enthusiastic software engineers who believe in the promise of COSA are working on an open source COSA virtual machine as a first step toward a full-blown COSA OS, a COSA synchronous parallel processor and a graphical development environment.

Joey said...

This would be the programming language needed to run a Resource-Based Economy in the Venus Project: reliable, artificially intelligent, efficient programming.

Destable said...

Hey!

First, let me start by saying that I am not a computer scientist, but that I do a lot of programming. I have done a lot of work with AI and genetic programming as of late.

Your ideas as presented are interesting, but I'm not sure I follow how this could be implemented in a real-world system. It sounds like you are proposing not only a new programming technique, but an entirely new computing platform on which to implement it.

I guess I am having a hard time envisioning how this would work in my head. From what I understand, the program would catalog itself and figure out all of the potential inputs and the associated outputs, and from there would simply identify any inputs that resulted in erroneous outputs. Seems fairly straightforward, but then this is where you lose me.

You say the program would then purposefully give itself inputs that would result in erroneous outputs to test for "robustness". I'm not sure I understand what the purpose of that would be. Is it to determine what happens, i.e., whether the program crashes or not? What kind of interaction would the programmer have at that point? Or, in other words, what would the debugging process for the programmer look like?

Again, I'm just trying to get a better understanding of this because it does sound interesting.

Louis Savain said...

Destable,

Man, what can I say? I just needed to wait a little while before someone with his or her thinking cap on jumped in and asked the right questions. All it takes is a little patience. I also like the fact that you are not one of those gutless academics because those guys have shot computing in the foot, big time.

You wrote:

You say the program would then purposefully give itself inputs that would result in erroneous outputs to test for "robustness". I'm not sure I understand what the purpose of that would be. Is it to determine what happens, i.e., whether the program crashes or not? What kind of interaction would the programmer have at that point? Or, in other words, what would the debugging process for the programmer look like?

First, let me explain something about sensors in the COSA software model. A sensor is a decision mechanism that is based on an expectation. That is to say, a COSA sensor expects an event or a pattern of events to occur at a particular time. It emits a signal whenever the expectation is met. One of the things that set COSA apart from other signal-based systems is that every sensor in COSA can have a complement, i.e., an opposite sensor that fires if its expectation is not fulfilled at the right time. Complementarity alone is enough to catch the majority of abnormalities in a program. An automated sub-program can simply iterate through all the sensors in a program and create their complements if they do not already exist.
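
Here is a minimal sketch of sensor complementarity in Python. As before, the names (Sensor, add_complements) are hypothetical, since no COSA implementation exists yet; the sketch only illustrates the idea.

    # Hypothetical sketch: a sensor fires when its expectation is met;
    # its complement fires when the expectation is NOT met at that time.
    class Sensor:
        def __init__(self, name, expectation):
            self.name = name
            self.expectation = expectation  # predicate over (events, tick)
            self.targets = []               # callbacks signaled on firing

        def sense(self, events, tick):
            if self.expectation(events, tick):
                for target in self.targets:
                    target(self.name, tick)

    def add_complements(sensors, alarm):
        """Iterate through all sensors and create their complements.
        Each complement fires exactly when its sibling's expectation
        fails, and is wired to an alarm component by default."""
        complements = []
        for s in sensors:
            comp = Sensor(s.name + "_abnormal",
                          lambda events, tick, e=s.expectation:
                              not e(events, tick))
            comp.targets.append(alarm)
            complements.append(comp)
        return complements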

However, many of the invariant temporal patterns in a program will usually not be used for any purpose because the programmer/designer either has no use for them or is unaware of their existence. A simple learning mechanism can be devised to automatically discover these temporal patterns during testing and create special sensors that fire when they fail to occur as expected. Again, the purpose of creating abnormality sensors is to ensure that all conditions and circumstances are accounted for, even the ones that the designers did not think about.
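
Here is what such a learning mechanism might look like, assuming the simplest kind of invariant (a fixed delay between consecutive events); everything in the sketch is illustrative.

    # Hypothetical sketch: learn invariant inter-event delays from test
    # runs, then generate a sensor that fires when an invariant fails.
    from collections import defaultdict

    def learn_invariants(runs):
        """runs: list of traces, each a list of (event, tick) pairs.
        Returns {(event_a, event_b): delay} for consecutive event pairs
        whose delay never varied across any test run."""
        delays = defaultdict(set)
        for trace in runs:
            for (a, t1), (b, t2) in zip(trace, trace[1:]):
                delays[(a, b)].add(t2 - t1)
        return {pair: d.pop() for pair, d in delays.items() if len(d) == 1}

    def make_invariant_sensor(invariants, alarm):
        """Returns a checker that runs alongside the application and
        fires the alarm whenever a learned invariant delay is violated."""
        def check(prev_event, prev_tick, event, tick):
            expected = invariants.get((prev_event, event))
            if expected is not None and tick - prev_tick != expected:
                alarm((prev_event, event), expected, tick - prev_tick)
        return check

A real mechanism would have to handle richer patterns (simultaneous events, windows of ticks, and so on), but the principle is the same.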

I think your question can be paraphrased thus: What should a programmer do with the abnormality sensors? In other words, what should the outputs of the sensors be connected to? The answer is that it does not matter as long as the connection does not cause a motor conflict. And if it causes one or more abnormality sensors to fire elsewhere, these, too, will have to be handled in some manner. The default connection will be to an alarm or disabling component of some sort but the programmer or designer will be forced to either accept the default or make a custom connection. Either way, nothing is left to chance and nothing is overlooked.

Remember also that COSA is a reactive software model, which means that abnormality sensors must be connected to effectors. And since activating an effector can cause one or more associated sensors to fire, a behavioral pattern will ensue which may or may not cause more abnormalities to occur.

Again, I'm just trying to get a better understanding of this because it does sound interesting.

I am glad that you and a handful of other people have taken an interest in COSA. Hopefully, I can convince someone or some organization with deep pockets to fund a full-time software/hardware project in order to get this thing off the ground.

Nathan Cline said...

At first, when I was reading over this web site, I noticed you clearly are intelligent, and the manner in which you speak seemed to indicate you might have a good grasp of what you're talking about. I too am working on the software reliability problem, and needless to say I was a bit worried as I started reading, thinking you might have some kind of insight or head start on solving this problem. I smiled, however, after I had read further and learned more about your ideas. While well presented, and maybe containing some nuggets of truth here and there, they are for the most part off the mark and reflect ignorance of certain things, and overall naivete.

Your web site looks a lot like something I might have dreamed up 5-10 years ago had I been thinking about this problem then. Back then I was too young to realize I didn't know a quarter of what I needed to even understand such a large and difficult problem, let alone begin approaching a solution, let alone create a huge web site proclaiming my ideas as the solution to everything; in those days I too might have been inclined to spend my energy doing something I was comfortable with (writing, blogging, and drawing up fancy diagrams) rather than actually focusing on studying the problem and designing a real solution.

I would give you some pointers on how to proceed, but I'm not trying to give my competitors a leg up. Your attitude toward other posters here indicates you likewise have some maturing to do when it comes to accepting criticism. At this point you are too self-focused, ego-driven, and insecure. I really think you need about 5-6 more years of focused effort (and some psychedelics) to mature before you can really start making any kind of real contribution to the AI field. I do think you have great potential and will likely become famous and accomplished some day, if you learn to control your ego, focus your energies, and be humble. Don't be afraid to look at and carefully study what other people have done, e.g. Turing machines, because even if it's wrong or simplistic you still learn something from it.