Ada Did It
Lately, a lot has been said about how hard it is to program parallel computers. It never occurs to the pundits that we might be trying to force a square peg into a round hole. The algorithmic software model saw its first use about a century and a half ago, when Lady Ada Lovelace wrote the first algorithm (or table of instructions) for Babbage’s Analytical Engine. This programming model has been with us ever since and has served us well. It is a natural fit for our sequential computers. However, now that the computer industry has decided to transition from sequential computing to massive parallelism, should we expect the algorithmic model to be the correct one for parallel computing? I think not. And yet, this is exactly what Intel, AMD, Tilera and other multicore CPU manufacturers are forcing down our throats.
What the Market Wants
What the market wants is not hard to figure out. The market wants super-fast multicore applications that are painlessly auto-scalable, easy to develop, do not fail and use fine-grain parallelism. The market does not want parallel computers that cannot parallelize a QuickSort algorithm because they use a coarse-grain, thread-based programming model. In reality, the market does not want to think about how to deal with cores at all. It just wants ease of programming, transparent linear scalability and speed, just like in the old days of Moore’s law. The market believes in a simple equation: more cores = faster computers. No code redesign after adding cores, no jumping through hoops, no futzing with a bunch of hard-to-program threads. Simplicity and speed. That’s all. Unfortunately, Intel, AMD, Microsoft, Tilera, Sun Microsystems and the others are not delivering the goods. Why? Because they don’t know how. All they know is square pegs and round holes.
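To make the QuickSort complaint concrete: a divide-and-conquer sort naturally generates one small task per partition, which is exactly the fine-grain parallelism that heavyweight OS threads handle poorly. Here is a minimal sketch of my own (in Go, using goroutines with a depth cutoff — this is an illustration of fine-grain task parallelism, not COSA or any vendor’s API):

```go
package main

import (
	"fmt"
	"sync"
)

// partition rearranges s around its last element as the pivot
// (Lomuto scheme) and returns the pivot's final index.
func partition(s []int) int {
	pivot := s[len(s)-1]
	i := 0
	for j := 0; j < len(s)-1; j++ {
		if s[j] < pivot {
			s[i], s[j] = s[j], s[i]
			i++
		}
	}
	s[i], s[len(s)-1] = s[len(s)-1], s[i]
	return i
}

// parallelQuickSort sorts s in place. While the depth budget lasts,
// one of the two recursive halves runs in its own goroutine, so the
// amount of parallelism grows with the recursion tree.
func parallelQuickSort(s []int, depth int) {
	if len(s) < 2 {
		return
	}
	p := partition(s)
	if depth > 0 {
		var wg sync.WaitGroup
		wg.Add(1)
		go func() {
			defer wg.Done()
			parallelQuickSort(s[:p], depth-1)
		}()
		parallelQuickSort(s[p+1:], depth-1)
		wg.Wait()
	} else {
		// Budget exhausted: fall back to plain sequential recursion.
		parallelQuickSort(s[:p], 0)
		parallelQuickSort(s[p+1:], 0)
	}
}

func main() {
	data := []int{9, 3, 7, 1, 8, 2, 6, 5, 4, 0}
	parallelQuickSort(data, 2)
	fmt.Println(data) // [0 1 2 3 4 5 6 7 8 9]
}
```

Note that even this sketch needs an explicit depth cutoff: spawn a thread-like task for every tiny partition and the scheduling overhead swamps the sort, which is precisely why coarse-grain threading fights recursive algorithms like this one.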
Stealing the Pot of Gold
Having said that, let me ask a pertinent question. Even if they did know how, would they deliver the goods? I have the funny suspicion that they wouldn’t. The reason is that they have a lot (trillions?) invested in the old paradigm. They want to milk that puppy for everything it’s got. They are not going to switch to an incompatible model that may destroy their old cash cow. Besides, a lot of well-paid programmers and engineers would become obsolete, since their expertise with the old stuff would no longer fetch as much in the new marketplace. The fact is that there is a better way to build and program computers. It is called COSA. It is an implicitly parallel, explicitly sequential model that would solve what ails the computer industry, if the industry would only open its eyes. But then again, maybe some of them do have their eyes open. They just don’t like the consequences. That’s too bad, in my opinion. Judging from the number of visits this blog and the COSA pages have been getting recently from Silicon Valley and certain well-known venture firms, an unlikely startup may sneak up behind them and steal the pot of gold. Oh sweet irony!