Tuesday, October 30, 2007

Half a Century of Crappy Computing

Decades of Deception and Disillusion

I remember being elated back in the early 80s when event-driven programming became popular. At the time, I took it as a hopeful sign that the computer industry was finally beginning to see the light and that it would not be long before pure event-driven, reactive programming was embraced as the universal programming model. Boy, was I wrong! I totally underestimated the capacity of computer geeks to deceive themselves and everyone else around them about their business. Instead of asynchronous events and signals, we got more synchronous function calls; and instead of elementary reactions, we got more functions and methods. The unified approach to software construction that I was eagerly hoping for never materialized. In its place, we got inundated with a flood of hopelessly flawed programming languages, operating systems and processor architectures, a sure sign of an immature discipline.

The Geek Pantheon

Not once did anybody in academia stop to consider that the 150-year-old algorithmic approach to computing might be flawed. On the contrary, they loved it. Academics like Fred Brooks decreed to the world that the reliability problem is unsolvable and everybody worshipped the ground he walked on. Alan Turing was elevated to the status of a deity and the Turing machine became the de facto computing model. As a result, the true nature of computing has remained hidden from generations of programmers and processor architects. Unreliable software was accepted as the norm. Needless to say, with all this crap going on, I quickly became disillusioned with computer science. I knew instinctively what had to be done but the industry was and still is under the firm political control of a bunch of old computer geeks. And, as we all know, computer geeks believe and have managed to convince everyone that they are the smartest human beings on earth. Their wisdom and knowledge must not be questioned. The price [pdf], of course, has been staggering.

In Their Faces

What really bothers me about computer scientists is that the solution to the parallel programming and reliability problems has been in their faces from the beginning. We have been using it to emulate parallelism in such applications as neural networks, cellular automata, simulations, VHDL, Verilog, video games, etc. It is a change-based or event-driven model. Essentially, you have a global loop and two buffers (A and B) that are used to contain the objects to be processed in parallel. While one buffer (A) is being processed, the other buffer (B) is filled with the objects that will be processed in the next cycle. As soon as all the objects in buffer A are processed, the two buffers are swapped and the cycle repeats. Two buffers are used in order to prevent the signal racing conditions that would otherwise occur. Notice that there is no need for threads, which means that all the problems normally associated with thread-based programming are non-existent. What could be simpler? Unfortunately, all the brilliant computer savants in academia and industry were and still are collectively blind to it. How could they not be? They are all busy studying the subtleties of Universal Turing Machines and comparing notes.
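
The loop just described can be sketched in a few lines of Python. This is a minimal illustration of the two-buffer scheme, not anyone's actual API; the Cell class and the wiring are my own invented names:

```python
# Two-buffer, change-based parallelism emulation: buffer A is processed
# while buffer B collects the reactions for the next cycle, then they swap.
# No threads, so no thread-related race conditions.

class Cell:
    def __init__(self, name):
        self.name = name
        self.targets = []   # cells to signal when this one fires
        self.fired = []     # cycles in which this cell ran

def run(initial_cells, cycles):
    buffer_a = list(initial_cells)   # cells to process this cycle
    buffer_b = []                    # cells queued for the next cycle
    for cycle in range(cycles):
        for cell in buffer_a:
            cell.fired.append(cycle)
            # Reactions are queued in B, never executed mid-cycle,
            # so no cell can race ahead of its peers.
            buffer_b.extend(cell.targets)
        buffer_a, buffer_b = buffer_b, []   # swap buffers and repeat
        if not buffer_a:
            break

# Example: a -> b -> c, each processed exactly one cycle apart.
a, b, c = Cell("a"), Cell("b"), Cell("c")
a.targets = [b]
b.targets = [c]
run([a], cycles=10)
print(a.fired, b.fired, c.fired)   # [0] [1] [2]
```

Because reactions are buffered until the swap, every object sees a consistent snapshot of the current cycle; that is the whole trick.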

We Must Reinvent the Computer

I am what you would call a purist when it comes to event-driven programming. In my opinion, everything that happens in a computer program should be event-driven, down to the instruction level. This is absolutely essential to reliability because it makes it possible to globally enforce temporal determinism. As seen above, simulating parallelism with a single-core processor is not rocket science. What needs to be done is to apply this model down to the individual instruction level. Unfortunately, programs would be too slow at that level because current processors are designed for the algorithmic model. This means that we must reinvent the computer. We must design new single-core and multicore processor architectures to directly emulate fine-grained, signal-driven, deterministic parallelism. There is no getting around it.

Easy to Program and Understand

A pure event-driven software model lends itself well to fine-grain parallelism and graphical programming. The reason is that an event is really a signal that travels from one object to another. As every logic circuit designer knows, diagrams are ideally suited to the depiction of signal flow between objects. Diagrams are much easier to understand than textual code, especially when the code is spread across multiple pages. A graphical example of a fine-grained parallel component is given in Software Composition in COSA.

Computer geeks often write to argue that it is easier and faster to write keywords like ‘while’, ‘+’, ‘-’ and ‘=’ than it is to click and drag an icon. To that I say, phooey! The real beauty of event-driven reactive programming is that it makes it easy to create and use plug-compatible components. Once you’ve built a comprehensive collection of low-level components, there is no longer a need to create new ones. Programming will quickly become entirely high-level and all programs will be built entirely from existing components. Just drag’m and drop’m. This is the reason that I have been saying that Jeff Han’s multi-touch screen interface technology will play a major role in the future of parallel programming. Programming for the masses!

Too Many Ass Kissers

I have often wondered what it would take to put an end to decades of crappy computing. Reason and logic do not seem to be sufficient. I now realize that the answer is quite simple. Most people are followers, or more accurately, to use the vernacular, they are ass kissers. They never question authority. They just want to belong in the group. What it will take to change computing, in my opinion, is for an intelligent and capable minority to stop kissing ass and do the right thing. That is all. In this light, I am reminded of the following quote attributed to Mark Twain:

“Whenever you find that you are on the side of the majority, it is time to pause and reflect.”

To that I would add that it is also time to ask oneself, why am I kissing somebody's ass just because everybody else is doing it? My point here is that there are just too many gutless ass kissers in the geek community. What the computer industry needs is a few people with backbones. As always, I tell it like I see it.

See Also:

How to Solve the Parallel Programming Crisis
Why Parallel Programming Is So Hard
Parallel Computing: Why the Future Is Non-Algorithmic
UC Berkeley's Edward Lee: A Breath of Fresh Air
Why I Hate All Computer Programming Languages
The COSA Saga
Transforming the TILE64 into a Kick-Ass Parallel Machine
COSA: A New Kind of Programming
Why Software Is Bad and What We Can Do to Fix It
Parallel Computing: Both CPU and GPU Are Doomed

Thursday, October 18, 2007

Fine-Grain Multicore CPU: Giving It All Away?

All Multicore Related Articles

I was about to upload the second part of my two-part article on Memory Caching for a Fine-Grain, Self-Balancing Multicore CPU and I got to thinking that, maybe I am foolish to give all my secrets away. My primary interest in multicore CPU architecture is driven mostly by my enduring passion for artificial intelligence. I have good reasons to believe that true AI will soon be upon us and that our coming intelligent robots will need fast, reliable, portable, self-balancing, fine-grain multicore CPUs using an MIMD execution model. Of course, these CPUs do not exist. Current multicore CPUs are thread-based and coarse-grained. To do fine-grain computing, one would have to use an SIMD (single-instruction, multiple data) execution model. As we all know, SIMD-based software development is a pain in the ass.

It just so happened that while working on another interest of mine, software reliability, I devised a developer-friendly software model (see Project COSA) that is a perfect fit for parallel programming. Using this model as a guide, I came up with a novel architecture for a self-balancing, auto-scalable, fine-grain, multicore CPU. What would the computer market give for an easy-to-program, fine-grain multicore CPU? I think customers would jump through hoops to get their hands on them, especially when they find out that they can also use them to create rock-solid applications that do not fail.

The point I'm driving at is that I need money for my AI research. I think too many people have benefited from my writings without spending a dime (I know, I keep track of all visitors and a lot of you have been visiting my site for months, if not years). I think this is a good opportunity for me to get the funds that I need. I am sitting on an idea for a multicore CPU that is worth money, lots of money. So, if you, your organization or your government agency are interested in funding or joining in the founding of a multicore startup company, drop me a line and let me know what you can do.

Thursday, October 11, 2007

Is South Korea Poised to Lead the Next Computer Revolution?

All Multicore and Parallel Programming Articles

I have been saying for a long time that the way we currently build and program computers is fundamentally flawed. It is based on a model of computing that is as old as Charles Babbage and Lady Ada Lovelace. The West has turned most of its celebrated computer scientists into demigods and nobody dares to question the wisdom of the gods. Other countries, especially South Korea, China, India and Japan are not handicapped by this problem. They have every reason to question western wisdom, especially if it results in catapulting their societies into technological preeminence.

I have good reasons (that I cannot go into, sorry) to suspect that the South Korean semiconductor industry (e.g., Samsung) may be poised to transform and dominate the multicore processor industry in the coming decades. The Europeans and the North Americans won’t know what hit them until it is too late. I have always admired the Koreans. They are hard workers, very competitive, they have an excellent business sense and a knack for thinking things through. It may have something to do with their love of Baduk, the wonderful ancient Chinese strategy board game also known as Go (Japan) and Weiqi (China). The game forces the player to think very long term, a highly desirable skill in life as well. Unfortunately, all my liberties are taken (pun intended) at the moment and I cannot say much more than I have already said.

Wednesday, October 10, 2007

Who Am I? What Are My Credentials?

People write to me to ask, “Who are you?” or “What are your credentials?”

  • I am a crackpot and a crank. Those are my credentials. Ahahaha…
  • I am a self-taught computer programmer. Ok, I did take a C++ class at UCLA a long time ago, just for grins and giggles. I have programmed in assembly, FORTH, BASIC, C, C++, C#, Pascal, Java, php, asp, etc…
  • I hate computer languages, all of them.
  • I hate operating systems, all of them.
  • I hate computer keyboards, even if I have to use them. They are ancient relics of the typewriter age.
  • I hate algorithmic computing.
  • I hate software bugs.
  • I hate all the crappy multicore processors from Intel, AMD, Tilera, Freescale Semiconductor, ARM, and the others.
  • I actually hate all CPUs, if only because they are all designed and optimized for algorithmic computing.
  • I hate thread-based parallelism.
  • I hate coarse-grain parallelism.
  • I hate threads, period. The thread is the second worst programming invention ever.
  • I hate Erlang’s so-called ‘lightweight’ processes.
  • I believe that, if your parallel language, OS or multicore CPU does not support fine-grain parallelism, it’s crap.
  • I hate the Von Neumann bottleneck.
  • I love synchronous, reactive, deterministic, fine-grain, parallel computing. That’s the future of computing.
  • I love reliable software.
  • I love Jeff Han’s multi-touch screen technology. That’s the future interface of programming. Drag'm and drop'm.
  • I love cruising catamarans.
  • I love people from all over the world.
  • I love Paris, New York City, Provence, French Riviera, Monaco, Nice, Venice, Rome, London, Amalfi coast, Turkey, Miami Beach, the Caribbean, the South Pacific, Hawaii, Polynesia, Thailand, Vietnam, Cambodia, the Philippines, Papua, Sumatra, Australia, New Zealand, Japan, Seychelles, Morocco, Zanzibar, Portugal, Russia (la vieille Russie), Eastern Europe, Northern Europe, Western Europe, India, Sri Lanka, Brazil, Mexico, Vienna, Bolivia, Amazon, Africa, China, Rio de Janeiro, Machu Picchu, Chichen Itza, Tokyo, Greece, Hong Kong, Budapest, Shanghai, Barcelona, Naples, Yucatan peninsula, Texas, Colorado, Alberta, Key West, Central America, South America, Alaska, Montreal, California, San Francisco, Carmel (Cal.), Los Angeles, Baja California, Houston, Seattle, Mazatlan, Vancouver, Chicago, Kernville (Cal.), Yosemite, Grand Canyon, Redwood Forest, Yellowstone, etc… All right. I never set foot in some of those places but I would love to. Come to think of it, I just love planet earth.
  • I love plants and trees and animals.
  • I love astronomy, archaeology, history, science, languages and cultures.
  • I love white water rafting, canoeing, fishing, walking in the woods or in a big city, hiking, bicycling, sailing, scuba diving, surfing. Unfortunately, I can’t do most of these sports for the time being.
  • I love the arts, movies, painting, architecture, theatre, sculpture, photography, ceramics, microphotography, novels, science fiction, poetry, digital arts, haute cuisine, hole-in-the-wall cuisine, home-made cuisine, haute couture, restaurants, interior decorating, furniture design, landscaping, carpentry, all sorts of music.
  • I am passionate about artificial intelligence and extreme fundamental physics. I don't know why. Check out my series on motion.
  • More than anything, I love the Creator who made it all possible.
  • Atheist computer geeks hate me but I laugh in their faces.
  • Shit-for-brains voodoo physicists don’t like me but I crap on their time-travel and black hole religion.
  • I am a Christian but, unlike most Christians, I believe in weird Christian shit. I believe that we are all forgiven (just ask), even computer geeks and crackpot physicists. What’s your chicken shit religion? Ahahaha...
  • If my Bible research offends you, then don't read my blog. It's not meant for you. I need neither your approval, nor your criticism, nor your money. I don't care if you're Bill Gates or the Sultan of Brunei.
  • I’m the guy who hates to say ‘I told you so’ but I told you so. Goddamnit!
  • I am right about software reliability.
  • I am right about parallel programming and multicore processors.
  • I am right about crackpot physics.
  • I am right about the causality of motion and the fact that we are immersed in an immense ocean of energetic particles.
  • I am wrong about almost everything else.
  • Food? Did anybody mention food? I’m glad you asked. I love sushi and sashimi with Napa Valley or South American Merlot, Indian food, Mexican food, Thai food, Chinese Szechwan food, French food, Italian food, Ethiopian food, Spanish food, Korean food, Iranian food, Brazilian food, Cuban food, Malaysian food, Indonesian food, Haitian food, Argentinean food, Peruvian food, Vietnamese food, Cajun food, southern style barbecue ribs, Jamaican food, Yucateco food, Greek food, New York hotdogs, Chicago hotdogs, burritos, tacos de carne asada, tacos al pastor, chipotle, peppers, In-N-Out Burgers, New York pizza, corn tortillas, chiles rellenos, huevos rancheros, pollo en mole, French crepes, French cheeses, Italian cheeses, Japanese ramen (Asahi Ramen, Los Angeles), Japanese curry, soy sauce, sake, tequila with lime, mezcal, rum, rompope, Grand Marnier, cocktails, all sorts of wine, Dijon mustard, chocolate, French pastry, Viennese pastry, German beer, espresso, cappuccino, caffe latte, café Cubano, Starbucks, Jewish deli food, Italian deli food, all sorts of spices, all sorts of seafood, tropical fruits, etc… Ok, you get the picture. As you can see, I love food and this is just a short list. And no, I’m not a fat slob. I am actually skinny.
  • I am part French, part Spanish, part black, part Taino (Caribe Indian) and other mixed ethnic ingredients from the distant past.
  • Oh, yes. I love women, too.
All right. That's enough of me. I got to get back to my AI project now. Later.

Sunday, October 7, 2007

Parallel Programming, Math, and the Curse of the Algorithm

The CPU, a Necessary Evil

The universe can be seen as the ultimate parallel computer. I say ‘ultimate’ because, instead of having a single central processor that processes everything, every fundamental particle is its own little processor that operates on a small set of properties. The universe is a reactive computer as well because every action performed by a particle is a reaction to an action by another particle. Ideally, our own computers should work the same way. Every computer program should be a collection of small reactive processors that perform elementary actions (operations) on their assigned data in response to actions by other processors. In other words, an elementary program is a tiny behaving machine that can sense and effect changes in its environment. It consists of at least two actors (a sensor and an effector) and a changeable environment (data variable). In addition, the sensor must be able to communicate with the effector. I call this elementary parallel processor the Universal Behaving Machine (UBM).

More complex programs can have an indefinite number of UBMs and a sensor can send signals to more than one effector. Unfortunately, even though computer technology is moving in the general direction of our ideal parallel computer (one processor per elementary operator), we are not there yet. And we won’t be there for a while, I’m afraid. The reason is that computer memory can be accessed by only one processor at a time. Until someone finds a solution to this bottleneck, we have no choice but to use a monster known as the CPU, a necessary evil that can do the work of a huge number of small processors. We get away with it because the CPU is very fast. Keep in mind that understanding the true purpose of the CPU is the key to solving the parallel programming problem.
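
Here is a toy rendering in Python of the UBM just described. The class names and the threshold condition are invented for illustration; the point is the cause-and-effect triad of sensor, effector and environment:

```python
# Universal Behaving Machine (UBM) sketch: a sensor detects a condition in
# the environment (a data variable) and signals an effector, which performs
# an elementary operation on that same environment.

class Environment:
    def __init__(self, value=0):
        self.value = value

class Effector:
    def __init__(self, env):
        self.env = env
    def act(self):
        self.env.value += 1   # the elementary operation

class Sensor:
    def __init__(self, env, effector, threshold):
        self.env, self.effector, self.threshold = env, effector, threshold
    def sense(self):
        # On detecting the condition, signal the effector. This is a
        # reaction to a change, not a function call chain returning a value.
        if self.env.value < self.threshold:
            self.effector.act()
            return True
        return False

env = Environment(0)
ubm_effector = Effector(env)
ubm_sensor = Sensor(env, ubm_effector, threshold=3)
while ubm_sensor.sense():
    pass
print(env.value)  # 3
```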

Multicore and The Need for Speed

Although the CPU is fast, it is never fast enough. The reason is that the number of operations we want it to execute in a given interval keeps growing all the time. This has been the main driving force behind CPU research. Over the last few decades, technological advances ensured a steady stream of ever faster CPUs but the technology has gotten to a point where we can no longer make them work much faster. The solution, of course, is a no-brainer: just add more processors into the mix and let them share the load, and the more the better. Multicore processors have thus become all the rage. Unsurprisingly, we are witnessing an inexorable march toward our ideal computer in which every elementary operator in a program is its own processor. It’s exciting.

Mathematicians and the Birth of the Algorithmic Computer

Adding more CPU cores to a processor should have been a relatively painless evolution of computer technology but it turned out to be a real pain in the ass, programming-wise. Why? To understand the problem, we must go back to the very beginning of the computer age, close to a hundred and fifty years ago, when an Englishman named Charles Babbage designed the world’s first general-purpose computer, the analytical engine. Babbage was a mathematician and like most mathematicians of his day, he longed for a time when he would be freed from the tedium of performing long calculation sequences. All he wanted was a reasonably fast calculator that could reliably execute mathematical sequences or algorithms. The idea of using a single fast central processor to emulate the behaviors of multiple small parallel processors was the furthest thing from his mind. Indeed, the very first program written for the analytical engine by Babbage’s friend and fellow mathematician, Lady Ada Lovelace, was a table of instructions meant to calculate the Bernoulli numbers, a sequence of rational numbers. Neither Babbage nor Lady Ada should be faulted for this but current modern computers are still based on Babbage’s sequential model. Is it any wonder that the computer industry is having such a hard time making the transition from sequential to parallel computing?

Square Peg vs. Round Hole

There is a big difference between our ideal parallel computer model in which every element is a parallel processor and the mathematicians’ model in which elements are steps in an algorithm to be executed sequentially. Even if we are forced to use a single fast CPU to emulate the parallel behavior of a huge number of parallel entities, the two models require different frames of mind. For example, in a true parallel programming model, parallelism is implicit but sequential order is explicit, that is to say, sequences must be explicitly specified by the programmer. In the algorithmic model, by contrast, sequential order is implicit and parallelism must be explicitly specified. But the difference is even more profound than this. Whereas an element in an algorithm can send a signal to only one other element (the successor in the sequence) at a time, an element in a parallel program can send a signal to as many successors as necessary. This is what is commonly referred to as fine-grain or instruction-level parallelism, which is highly desirable but impossible to obtain in an MIMD execution model using current multicore CPU technology.

The image above represents a small parallel program. A signal enters at the left and a ‘done’ signal is emitted at the right. We can observe various elementary parallel operators communicating with one another. Signals flow from the output of one element (small red circle) to the input of another (white or black circle). The splitting of signals into multiple parallel streams has no analog in an algorithmic sequence or thread. Notice that parallelism is implicit but sequential order is explicit. But that’s not all. A true parallel system that uses signals to communicate must be synchronous, i.e., every operation must execute in exactly one system cycle. This ensures that the system is temporally deterministic. Otherwise signal timing quickly gets out of step. Temporal determinism is icing on the parallel cake because it solves a whole slew of problems related to reliability and security.
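
The fan-out and timing properties described above can be sketched like this (Python, with made-up node names; every operation takes exactly one system cycle):

```python
# Signal fan-out in a synchronous parallel program: one element signals
# several successors at once, and because every operation costs exactly one
# cycle, arrival times are fully determined by graph depth.

from collections import defaultdict

def simulate(graph, start, cycles):
    """graph maps each node to the list of nodes it signals."""
    arrival = defaultdict(list)          # node -> cycles at which it fired
    current = [start]
    for cycle in range(cycles):
        nxt = []
        for node in current:
            arrival[node].append(cycle)
            nxt.extend(graph.get(node, []))   # split into parallel streams
        current = nxt
    return dict(arrival)

# 'split' signals three successors simultaneously; all three fire on the
# same deterministic cycle, something a sequential thread cannot express.
graph = {"in": ["split"], "split": ["add", "sub", "cmp"]}
times = simulate(graph, "in", cycles=3)
print(times)  # {'in': [0], 'split': [1], 'add': [2], 'sub': [2], 'cmp': [2]}
```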

It should be obvious that using Babbage’s and Lady Ada’s 150-year-old computing model to program a parallel computer is like trying to fit a square peg into a round hole. One would think that, by now, the computer industry would have figured out that there is something fundamentally wrong with the way it builds and programs computers but, unfortunately, the mathematicians are at it again. The latest trend is to use functional languages like Erlang for thread-based parallel programming. Thread-based, coarse-grain parallelism is a joke, in my opinion. There is a way to design a fine-grain, self-balancing multicore CPU for an MIMD execution environment that does not use threads. Threaded programs are error-prone, hard to program and difficult to understand. Decidedly, the notion of a computer as a calculating machine will die hard. It is frustrating, to say the least. When are we going to learn?

Lifting the Curse of the Algorithm

To solve the parallel programming problem, we must lift the curse of the algorithm. We must abandon the old model and switch to a true parallel model. To do so, we must reinvent the computer. What I mean is that we must change, not only our software model, but our hardware model as well. Current CPUs were designed and optimized for the algorithmic model. We need a new processor architecture (both single core and multicore) that is designed from the ground up to emulate non-algorithmic, synchronous parallelism. It’s not rocket science. We already know how to emulate parallelism in our neural networks and our cellular automata. However, using current CPUs to do so at the instruction level would be too slow. The market wants super fast, fine-grain, self-balancing and auto-scalable multicore processors that use an MIMD execution model. It wants parallel software systems that are easy to program and do not fail. Right now there is nothing out there that fits the bill.

The Next Computer Revolution

It remains to be seen who, among the various processor manufacturers, will be the first to see the light. Which nation will be the standard bearer of the new computing paradigm? When will the big switch happen? Who knows? But when it does, it will be the dawning of the next computer revolution, one which will make the first one pale in comparison. We will be able to build super fast computers and programs of arbitrary complexity that do not fail. It will be the true golden age of automation. I can’t wait.

[This article is part of my downloadable e-book on the parallel programming crisis.]

See Also:

Nightmare on Core Street
Why Parallel Programming Is So Hard
The Age of Crappy Concurrency: Erlang, Tilera, Intel, AMD, IBM, Freescale, etc…
Half a Century of Crappy Computing
Parallel Computers and the Algorithm: Square Peg vs. Round Hole

Thursday, October 4, 2007

The Intel Cartel: Algorithmic Dope Dealers

All Multicore and Parallel Programming Articles

Cry Babies

It seems that all Intel does lately is bitch about how hard parallel programming is and how programmers are not using enough threads. Their latest tantrum is about how there are too many parallel languages to choose from. Does anybody else sense a wee bit of panic in Intel’s camp? The company has bet all its marbles on multicore CPUs being the big money maker for the foreseeable future, which is understandable. The problem is that most legacy software cannot take advantage of multiple cores and programmers are having a hell of a hard time writing good parallel software. So what’s Intel’s solution? Bitching, whining, jumping up and down and foaming at the mouth, all the while, making a royal fool of itself. Haysoos Martinez! What a bunch of cry babies you people are!

Algorithmic Cocaine

I got news for you, Intel. Stop blaming others for your own mistakes. You are the primary cause of the problem. You, more than any other company in this industry, got us into this sorry mess. You made so much money, over the years, milking algorithmic cocaine from that fat cow of yours that it never occurred to you that the cow might run dry some day. Now that you’ve got everybody stoned and addicted, they keep coming back for more. But there is no more. Moore’s law is no longer the undisputed law of the land. “Mix threads with your dope!”, you scream at them with despair in your voice, but they’re not listening. And they keep coming. Worse, you got so stoned consuming your own dope, you cannot see a way out of your self-made predicament. Your only consolation is that all the other dope dealers (AMD, IBM, Sun Microsystems, Freescale Semiconductors, Motorola, Texas Instruments, Tilera, Ambric, ARM, etc…) are in the same boat with you. I don’t know about the rest of you out there but methinks that the Intel cartel is in trouble. Deep trouble. It's not a pretty picture.

The Cure

We all know what the problem is but is there a cure? The answer is yes, of course, there is a cure. The cure is to abandon the algorithmic software model and to adopt a non-algorithmic, reactive, implicitly parallel, synchronous model. I have already written enough about this subject and I am getting tired of repeating myself. If you people at Intel or the other companies are seriously interested in solving the problem, below are a few articles for your reading pleasure. If you are not interested, you can all go back to whining and bitching. I am not one to say I told you so, but the day will come soon when I won’t be able to restrain myself.

The Age of Crappy Concurrency: Erlang, Tilera, Intel, AMD, IBM, Freescale, etc…
Parallel Programming, Math, and the Curse of the Algorithm
Half a Century of Crappy Computing
Parallel Computers and the Algorithm: Square Peg vs. Round Hole
Don’t Like Deadlocks, Data Races and Traffic Accidents? Kill the Threads
Why I Think Functional Programming Languages Like Erlang and Haskell are Crap
Killing the Beast
Why Timing Is the Most Important Thing in Computer Programming
Functional Programmers Encourage Crappy Parallel Computing
How to Design a Self-Balancing Multicore CPU for Fine-Grain Parallel Applications
Thread Monkeys: Tile64 and Erlang
COSA, Erlang, the Beast, and the Hardware Makers
Tilera vs. Godzilla

Wednesday, October 3, 2007

Darwinian Software Composition

Unintentional Software

Last night, I got to thinking again about Charles Simonyi’s intentional software project and it occurred to me that a domain expert or software designer does not always know exactly what he or she wants a new software application to look like or even how it should behave. Initially, a designer may have a partially-baked idea of the look and feel of the desired application. However, even though we may not always know what we want, we can all recognize a good thing when we see it. This is somewhat analogous to a musician searching for the right notes for a new melody idea. The composer may end up with a final product that is not exactly as originally envisioned but one that is nevertheless satisfactory. Searching can thus be seen as an indispensable part of designing. It adds an element of randomness into the process. What makes this approach attractive is that it is highly interactive and it works. It dawned on me that a similar approach could be used when designing software in a COSA environment.

Relaxing the Rules

Normally, COSA uses strict plug-compatibility criteria to connect one component to another.

Two connectors may connect to each other only if the following conditions are met:

  1. They have opposite gender (male and female).

  2. They use identical message structures.

  3. They have identical type IDs.

As you can see, there is no possibility for mismatched components in a COSA application. Strict plug compatibility allows components to automatically and safely snap together. This is fine and desirable in finished applications and components but what if the rules were relaxed a little during development? What if we could instruct the development environment to temporarily disregard the third compatibility criterion in the list above? This would allow the designer to try new component combinations that would otherwise be impossible. The only problem is that, more often than not, the new combinations would result in timing conflicts, i.e., bugs.
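
The three rules above, with the relaxation switch, can be sketched in a few lines of Python. The field names are my own invention, not COSA's actual message format:

```python
# Plug-compatibility check for COSA-style connectors, with a 'strict' flag
# that lets a development environment temporarily waive the type-ID rule.

from dataclasses import dataclass

@dataclass
class Connector:
    gender: str            # "male" or "female"
    message_fields: tuple  # the message structure this connector carries
    type_id: int

def compatible(a, b, strict=True):
    if a.gender == b.gender:                  # rule 1: opposite gender
        return False
    if a.message_fields != b.message_fields:  # rule 2: identical message structures
        return False
    if strict and a.type_id != b.type_id:     # rule 3: identical type IDs
        return False
    return True

out_plug = Connector("male", ("value", "timestamp"), type_id=7)
in_jack  = Connector("female", ("value", "timestamp"), type_id=9)

print(compatible(out_plug, in_jack))               # False: type IDs differ
print(compatible(out_plug, in_jack, strict=False)) # True: rule 3 relaxed
```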

Good Bugs vs. Bad Bugs

In general, computer languages try to prevent bugs as much as possible. Most of the bugs that used to plague assembly language programmers in the past are now gone. With the current trend toward thread-based, multicore computers, a lot of effort has gone into making programs thread-safe. The problem has to do with multiple threads accessing the same data in memory. This situation can lead to all sorts of conflicts because the timing of access is not deterministic. Functional languages avoid the problem altogether by eliminating variables and thus disallowing side effects between threads. The COSA philosophy, however, is that side effects between concurrent modules should be welcome. A bug is bad only if it is not found. Since COSA programs are reactive and temporally deterministic, all data access conflicts (motor conflicts) between connected modules can be discovered automatically. What this means is that fast trial-and-error composition becomes feasible. But it gets even better than that.
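
To make the point concrete, here is a toy sketch in Python of how deterministic timing makes such conflicts trivially detectable (all names are invented for illustration):

```python
# Motor-conflict detection sketch: because every operation is tied to a
# system cycle, two effectors writing the same variable in the same cycle
# can be flagged the moment it happens.

class ConflictDetector:
    def __init__(self):
        self.writes = {}     # (cycle, variable) -> effector that wrote it
        self.conflicts = []  # (cycle, variable, first writer, second writer)

    def record_write(self, cycle, variable, effector):
        key = (cycle, variable)
        if key in self.writes and self.writes[key] != effector:
            self.conflicts.append((cycle, variable, self.writes[key], effector))
        else:
            self.writes[key] = effector

det = ConflictDetector()
det.record_write(0, "x", "motor_A")
det.record_write(0, "x", "motor_B")   # same variable, same cycle: conflict
det.record_write(1, "x", "motor_A")   # next cycle: no conflict
print(det.conflicts)  # [(0, 'x', 'motor_A', 'motor_B')]
```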

Darwinian Selection

Given that, in a COSA development environment, components can connect themselves autonomously and that motor conflicts can be discovered automatically, it is not hard to envision a mechanism that can compose reliable and possibly useful applications through random trial and error, from a pool of pre-built components. Simple survival of the fittest. Of course, it is always up to the human developer to decide whether or not to accept the system's inventions but this would take software development up to a new level of productivity and serendipity. Who knows, a nice surprise could pop up every once in a while.
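As a rough sketch of such a composer (entirely my own construction; COSA specifies no such mechanism), one could randomly propose component pairings and keep only those that pass an automatic viability check, i.e., plug compatibility plus no detected motor conflicts:

```python
# My own toy sketch of trial-and-error composition: propose random pairings
# and let an automatic check (a stand-in predicate here) select the survivors.

import random

def evolve_combinations(components, is_viable, trials=100, seed=0):
    """Randomly pair components; survivors are the pairs passing is_viable."""
    rng = random.Random(seed)
    survivors = set()
    for _ in range(trials):
        a, b = rng.sample(components, 2)
        if is_viable(a, b):
            survivors.add(tuple(sorted((a, b))))
    return survivors
```

In a real system, `is_viable` would be the compatibility and conflict-detection machinery; here it is whatever predicate the caller supplies.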

PS. Some of my long-term readers may find it strange that I, a Christian, would be using words like 'Darwinian selection'. Well, the world is full of surprises, isn't it?

Tuesday, October 2, 2007

The ‘Everything Is a Function’ Syndrome

The Not So Great Brainwashed Masses

It never ceases to amaze me how effective education can be at brainwashing people. Skinner was right about conditioning. Not that this is necessarily bad, mind you (that’s what religion, which includes scientism, is all about), but there is good brainwashing and bad brainwashing. Here is a case in point. Brainwashed functional programming fanatic namekuseijn (who also calls himself Piccolo Daimao) claims that a computer is fundamentally a mathematical machine and that everything that a computer does can be seen as a function that returns a value. In response to a comment Daimao posted on my blog recently, I wrote the following:

Truth is, computing is about behavior. And behavior is about sensing, acting and timing. This means that a computer program is a collection of elementary sensors (comparators), effectors (operators), an environment (variable data) and a timing mechanism. That is all.
Daimao replies:

-elementary sensors (comparators)

that seems to me like a function taking 2 or more arguments and producing as result one of them. Or a multiplexer.

-effectors (operators)

it's a function which takes arguments and returns results. In low level Von Neumann machine, this may mean the result of the computation is put into a register or a set of registers.

-environment (variable data)

function scope.

-timing mechanism

flow of control: you start with a function and goes evaluating it step-by-step.

Anthropomorphizing the Computer

What Daimao will probably never grasp (brainwashed people rarely change their minds) is that what he’s doing is anthropomorphizing the computer. In his view, the computer doesn’t just do math, it becomes a mathematician: it takes mathematical arguments, performs calculations and returns mathematical results. Never mind that a computer is merely reacting to changes by effecting new changes. And, when you think about it, effecting changes is nothing but flipping bits (electric potentials). The math stuff is all in Daimao’s mind but don’t tell him that. He’s liable to go into an apoplectic fit.

Of course, it will never occur to Daimao that what he refers to as “taking arguments” is not a mathematical operation at all but effects carried out by the computer: some bits are flipped in a memory area that we call the stack and in a special register that we call the stack pointer. Likewise, returning a result is another stack effect carried out by the computer. Daimao comes close to seeing the truth (“the result of the computation is put into a register or a set of registers”) but he dismisses it as “low level”. Again, putting something in a register has nothing to do with math. It is just another effect carried out by the computer.
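Stripped of the math metaphor, "taking an argument" really is just a couple of state changes. Here is a toy model of my own, with the stack pointer as a plain integer and the stack as an array of words:

```python
# The "taking arguments" step viewed purely as effects: a toy push operation
# that does nothing but change state in a memory array and a register.

def push(memory, sp, value):
    """Simulate pushing one word: decrement the stack pointer, store the word."""
    sp -= 1               # the 'stack pointer register' changes state
    memory[sp] = value    # bits flip in the 'stack' memory area
    return sp
```

Nothing here computes anything in the mathematical sense; the machine merely effects two changes, which is the whole point.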


Forget about Daimao’s notion that data variables (a changeable environment) constitute “function scope” (it’s just more silly anthropomorphizing). Right now, I want to address Daimao’s assertion that timing is just flow of control. This is something that is close to my heart because I have been saying for a long, long time that timing is the most important thing in computing. My primary claim is that computing is strictly about behaving and that an elementary behavior is a precisely timed sensorimotor phenomenon. Timing is to computing what distance is to architecture. At least, it should be.

How does flow of control (another term for the algorithm) guarantee action timing in functional (math-based) programs, since math is timeless to begin with? There is nothing in a math operation (taking arguments and returning results) that specifies its temporal order relative to other operations. Of course, one can argue that the algorithm itself is a mathematical timing concept, but I beg to differ. People were performing step-by-step procedures long before mathematicians thought of them as being part of math. Note that executing an algorithmic program consists of performing all sorts of sensorimotor behaviors such as incrementing a pointer, copying data to registers, performing an operation, copying data to memory, sensing a clock pulse, etc. In reality, everything in a computer is already signal-based (change-based), but the ubiquitous math metaphors make it hard to see. Every behaving entity (operation) sends a signal (clock pulse and index counter) to the next operation in the sequence, meaning “now it’s your turn to execute.” The problem is that signal flow (communication) within a function follows a single thread and cannot split into multiple threads. This is a big problem if you want fast, fine-grain parallelism, which is the future of computing.
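The signal-flow view can be modeled with a toy propagation loop (my own construction, not anything a real machine or COSA defines): each cell, upon firing, emits a "done" signal, and unlike a call chain, that signal is free to fan out to several successors at once:

```python
# Toy model of signal-based sequencing: a cell reacts to its input signal,
# then signals its successors. Fan-out (one signal triggering several cells)
# falls out for free, which a single-threaded call chain cannot express.

from collections import deque

def run(graph, actions, start):
    """graph: cell -> list of successor cells; actions: cell -> callable.
    Propagate signals breadth-first from `start`, one wave per cycle."""
    order = []
    wave = deque([start])
    while wave:
        next_wave = deque()
        for cell in wave:
            actions[cell]()                         # the cell reacts
            order.append(cell)
            next_wave.extend(graph.get(cell, []))   # the signal fans out
        wave = next_wave
    return order
```

In the test below, cell `a` signals both `b` and `c` in the same wave, i.e., concurrently, something a nested function call cannot represent without extra machinery.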

Implicit vs. Explicit Temporal Order

The point that I am driving at is that there is nothing in functional programming that allows a program to make decisions pertaining to the relative temporal order (concurrent or sequential) of elementary operations. Temporal order is not explicit in a function; it is both implicit and inherently sequential. Explicit temporal order is a must for reliable software systems because it makes it possible to build deterministic parallel systems. Explicit temporal order simply means that a system is reactive, that is, actions (operations) are based on change (timed signals). A purely reactive system is one where every action occurs instantaneously upon receiving a signal, that is, it executes itself within a single system cycle. Since there should not be any changes in the temporal behavior of a deterministic system, timing watchdogs can be inserted in the system to alert the designer to any change (which could be due to hardware failure or a modification to the system software). Deterministic timing makes for super fast, fine-grain parallelism because it gives small parallel processes access to shared memory without having to worry about contention.
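As a toy illustration of such a timing watchdog (the design is mine, not something COSA specifies), one can record the cycle at which each cell first fires on a reference run and flag any cell that later deviates from that profile:

```python
# Toy timing watchdog: in a temporally deterministic system, each cell's
# firing cycle is fixed, so any deviation from a recorded reference
# profile signals hardware failure or a software modification.

def profile(trace):
    """trace: list of (cycle, cell) firings -> {cell: cycle of first firing}."""
    prof = {}
    for cycle, cell in trace:
        prof.setdefault(cell, cycle)
    return prof

def watchdog(reference, trace):
    """Return the cells whose firing cycle differs from the reference profile."""
    seen = profile(trace)
    return sorted(cell for cell, cyc in reference.items()
                  if seen.get(cell) != cyc)
```

Such a check is meaningless in a system without deterministic timing, since there would be no stable reference profile to deviate from.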

Reactive Behavior and Data Dependencies

The future of computing is not thread-based parallelism but fine-grain, self-balancing, parallel, multicore CPUs using an MIMD execution model. Other than the fact that functional programming encourages the continued manufacture and use of coarse-grain parallel computers (see The Age of Crappy Concurrency), the biggest problem I see with functional programming is that it makes it impossible to implement a mechanism that automatically discovers and resolves data dependencies, an absolute must for reliability. This is only possible in a purely reactive system. I will not go into it here but suffice it to say that functional languages should not be seen as general purpose computer languages and FP must not be promoted as a software model. The lack of timing control and reactivity makes FP inadequate for safety-critical systems where software failure is not an option.
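To sketch what automatic dependency discovery might look like in a reactive system (the `ReactiveStore` API below is hypothetical, my own invention rather than anything COSA or any FP language defines), the idea is that registering a sensor on a variable is all the wiring the system needs; every subsequent change fires the dependent sensors automatically:

```python
# Hypothetical sketch of automatic data-dependency discovery: declaring a
# sensor on a variable records the dependency, and every effector-driven
# change to that variable notifies the dependents without manual wiring.

class ReactiveStore:
    def __init__(self):
        self.values = {}
        self.sensors = {}          # variable -> list of callbacks

    def sense(self, var, callback):
        """Declare interest in var; the dependency is now known to the system."""
        self.sensors.setdefault(var, []).append(callback)

    def effect(self, var, value):
        """An effector changes var; all dependent sensors fire automatically."""
        changed = self.values.get(var) != value
        self.values[var] = value
        if changed:                # sensors react to change, not to calls
            for cb in self.sensors.get(var, []):
                cb(value)
```

Note that sensors fire only on an actual change of state, which is the reactive (change-based) discipline argued for throughout this post.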

In conclusion, I'll just reiterate my point. Everything in computing is not a function. Everything is behavior.

Monday, October 1, 2007

Adobe's Macromedia Director MX 2004™

Macromedia Director™ is a powerful multimedia authoring tool. It has been around since the eighties and it is obvious that a lot of thought went into creating a clean and intuitive user interface. It is easy to learn once you understand the underlying movie metaphor. It comes with a choice of two scripting languages, Lingo (the original Director language) and JavaScript. Even though it was originally intended for applications that use things like movies, sprites, sounds and animations, there is no reason that it cannot be used for general-purpose application development. It has support for most common user interface elements like buttons, menus, lists, windows, textboxes, etc. Director applications can be played on Windows™ or the Macintosh™, or directly within a web browser with the use of Adobe’s Shockwave technology. In addition, there is a sizeable supply of third-party extensions (many are free) that add to its functionality. In sum, I think it is a pretty awesome all-around software development tool. Why isn't everybody using it?

My take is that Director is ideal for creating complex graphical user interfaces that involve displaying and manipulating graphical objects on the screen. So it is certainly well-suited for developing a COSA Editor. Since third-party extensions can be used for database access, it should not be too hard to create a keyword-browsable object repository for COSA modules/components. I am not sure how a Director application can exchange messages with other running applications but I suspect it can be done. I’m thinking that the COSA Editor should have the ability to communicate directly with a running COSA virtual machine (CVM). This way, a COSA developer could easily modify a running COSA application on the fly. There should be no need for compiling or saving the app to a file, in my opinion. Visually tracing signal flow within a running application would be nice as well.

Having said that, I think that writing a COSA Editor and a CVM is a major undertaking, regardless of the chosen tool. I had intended to start a dev project and let others finish it but, the more I think about it, the more I realize that it’s not going to work out. A lot of thought must go into designing, not only the user interface, but also the underlying data structures for each and every COSA effector and sensor. So I am back to where I started: I can’t do it. I just can’t devote the time to it. Unless somebody or some organization is willing to dump some serious money into this project, I am afraid that Project COSA will continue to be just an idea whose time has not yet arrived. And by serious money, I am talking about at least ten million dollars because, in my opinion, design and development of a COSA-compatible, fine-grain, multicore CPU and a COSA embedded operating system must happen more or less concurrently.

So this is how it stands, for now. The world will just have to continue to make do with crappy multicore CPUs, crappy operating systems and crappy programming languages, not to mention all the bug-infested software. Oh well. No need to despair, though. COSA is getting a fair share of publicity, these days. Sooner or later, something is bound to happen.