Legacy’s Double-Edged Sword
The computer industry has wisely embraced parallelism as the only way to maintain the continuous performance increases of the previous decades. Unfortunately, the software legacy problem hangs menacingly like the sword of Damocles over the heads of industry executives, threatening to derail their careful plans for a smooth transition to parallel computing. It is a double-edged sword. On the one hand, if they choose to retain compatibility with the existing code base via multithreading on x86 cores, parallel programming will continue to be a pain in the ass. On the other hand, if they adopt a universal computing model that makes parallel programming easy, they lose compatibility with the huge installed base of legacy code. Damned if you do and damned if you don’t. It is not easy being a multicore executive.
Not the End of the World
The legacy problem may seem like a nightmare from hell with no solution in sight, but it is not the end of the world. Keep in mind that most computers are connected either to a local network or to the Internet. Even if the industry switches over to a completely new kind of computer architecture, current servers, printers, and databases will continue to work as they are for the foreseeable future. It will take time for all the nodes in a network to change over to the new architecture, but communication protocols will continue as they were. Standards like IP, HTTP, HTML, XML, SQL, and PDF will remain viable, so new systems will have no trouble sharing the same network with the old stuff. Consider also that the embedded software industry will not hesitate to adopt more advanced processor and programming technologies, no matter how disruptive.
The End of Windows, Mac OS, Linux, Etc.
The only real legacy problem has to do with mass-market operating systems (Windows, Mac OS, Linux, etc.) and the applications that run on them. These systems can continue to run on legacy hardware, but they obviously cannot make the transition to a new, incompatible computing model. It seems like a losing proposition, but what if the computer industry introduced a multicore model that made parallel programming so easy and intuitive that practically anybody could use it to develop sophisticated, rock-solid software applications in a drag-and-drop, component-based (see below) software construction environment? What if the cost of reprogramming complex legacy applications from scratch using this new model were negligible compared to the advantages? What if the recreated applications were blindingly fast, scalable, bug-free, secure, and better in terms of features and ease of use? What if end users could increase the performance of their computers simply by replacing, say, an 8-core processor with a more powerful unit having 16, 32, 64 or more cores? What if multicore processors could handle both general-purpose and data-intensive multimedia apps with equal ease? Would the world switch to this new model? You bet it would. Will it be the beginning of the end for MS Windows, Linux and the others? I think so. And not just operating systems: it would be the end of dedicated graphics processors as well.
Componentizing is a time-honored and highly successful tradition in the hardware industry. Computer scientists have tried for decades to emulate this success in software, with mixed results. In my thesis on the software reliability crisis, I argue, among other things, that the reason component-based programming never became widespread is that our current algorithmic software model is fundamentally flawed. I maintain that it is flawed primarily because, unlike the hardware model, it provides no mechanism for the deterministic control of timing, i.e., the execution order (concurrent or sequential) of operations. In other words, software lacks the equivalent of the deterministic hardware signal. The parallel programming crisis affords us an unprecedented opportunity to do things right, the way they should have been done in the first place. I have argued for many years (long before multicore processors became the rage) that the correct computing model is one that is fundamentally synchronous, reactive (signal-based) and supports fine-grained parallelism at the instruction level. This is the basis of the COSA software model. COSA components are plug-compatible, that is to say, they can automatically and safely connect themselves to other compatible components using uniquely tagged male and female connectors.
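To make the plug-compatibility idea concrete, here is a minimal sketch of how tagged male/female connectors might match up automatically. COSA defines no programming API, so every name here (`Connector`, `Component`, `connect`, the tag strings) is hypothetical, purely to illustrate the matching rule described above:

```python
class Connector:
    """A uniquely tagged connector; male emits signals, female receives them."""
    def __init__(self, tag, gender):
        self.tag = tag          # e.g. "pixel-stream"; unique tags ensure safe matches
        self.gender = gender    # "male" (output) or "female" (input)

class Component:
    """A software component that can plug itself into compatible peers."""
    def __init__(self, name, outputs=(), inputs=()):
        self.name = name
        self.outputs = [Connector(t, "male") for t in outputs]
        self.inputs = [Connector(t, "female") for t in inputs]
        self.links = []         # established (out-connector, peer, in-connector) links

    def connect(self, other):
        """Automatically link each male connector to a female connector
        on the other component that carries the same tag."""
        matched = []
        for out in self.outputs:
            for inp in other.inputs:
                if out.tag == inp.tag:
                    self.links.append((out, other, inp))
                    matched.append(out.tag)
        return matched

producer = Component("Camera", outputs=["pixel-stream"])
consumer = Component("Display", inputs=["pixel-stream"])
print(producer.connect(consumer))  # ['pixel-stream']
```

The point of the sketch is that connection is decided entirely by the tags, not by the user: drop two components near each other and every compatible pair of connectors snaps together on its own.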
The Age of the Do-It-Yourself Operating System
In the COSA software model, the operating system is componentized and extensible. In this light, applications are no longer considered stand-alone programs but high-level components that can be used to extend the OS. A company could use this technology to design and build a scalable multicore computer starting with a skeleton OS, super-fast graphics, a set of easy-to-use software composition tools, and just one initial end-user application: a powerful web browser. Extending or customizing the OS will be so easy (just drag a new component from a vendor’s web-based component library and drop it on the OS object, et voilà!) that it will be up to the customer/user to decide which features he or she wants to purchase. For example, if you don’t need mouse or file I/O support, just don’t buy those components. The same method will be used to construct and customize almost any application, such as a browser, word processor, or paint program.
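The skeleton-OS idea above can be sketched in a few lines. Again, this is an illustration of the described workflow, not an actual COSA interface; `SkeletonOS`, `drop`, and the feature names are all assumptions made up for the example:

```python
class SkeletonOS:
    """A minimal OS kernel that grows only by having components dropped on it."""
    def __init__(self):
        self.components = {}

    def drop(self, name, component):
        """Drag-and-drop: install a vendor component under its name."""
        self.components[name] = component

    def remove(self, name):
        """Don't want a feature (e.g. mouse support)? Simply leave it out."""
        self.components.pop(name, None)

    def features(self):
        """List the features this particular user chose to install."""
        return sorted(self.components)

os_image = SkeletonOS()
os_image.drop("graphics", object())   # placeholder objects stand in for
os_image.drop("browser", object())    # real vendor-supplied components
os_image.drop("mouse", object())
os_image.remove("mouse")              # this user opted out of mouse support
print(os_image.features())            # ['browser', 'graphics']
```

The design choice being illustrated is that the OS has no fixed feature set: what Windows or Linux bakes in at build time becomes, here, a per-customer shopping list.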
I am claiming that once processors (both single-core and multicore) are designed and optimized to support the COSA software model, rapid, drag-and-drop programming will become the norm. It will turn almost everybody into a computer programmer: programming for the masses. I believe that when COSA is adopted by the computer industry, it will usher in the next computer revolution, the true golden age of automation and supercomputing on the desktop. Notice that I wrote ‘when’ rather than ‘if’. That is how confident I am of the correctness of the COSA model. The computer industry has no alternative, in my opinion, because there is only one correct parallel computing model and COSA is it. The industry can retain the flawed multithreading model and continue to live in hell, or it can do the right thing and reap the profits. It’s kind of like The Matrix: it’s either the red pill or the blue pill. Take your pick.
In a future post, I will go over each and every advantage of adopting the COSA software model.