Tuesday, February 16, 2010

UC Berkeley’s Edward Lee: A Breath of Fresh Air (repost)

This is a repost of a previous article (6/21/2008) with an update at the end.

Not All Academics Are Ass Kissers

I don’t hide the fact that my general attitude vis-à-vis the computer science community is a hostile one. I just cannot stand the way most academics kiss each other’s ass. I guess it is a legacy of peer review, a mechanism whose main purpose is to stifle dissent within the community. However, this does not mean that I believe all academics are inveterate ass kissers. Some are more like the prophet Daniel in King Nebuchadrezzar’s court. I previously lauded Peter Wegner and Dina Goldin on this blog for their work on non-algorithmic interactive computing. I am sure there are many more like them. Today, I would like to draw your attention to the work of Professor Edward A. Lee at UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

Threads Are Bad and Precise Timing Is Crucial

Professor Lee made major news a couple of years ago with the publication of The Problem with Threads (pdf), in which he methodically laid out the evils of multithreading. As my readers know, I have been waging a ferocious battle against multithreading for many years. What impresses me most about Lee’s work is that he seems to have a deep understanding of what I believe are the two most important issues in computing: timing and implicit concurrency. Deterministic timing is essential to program reliability, and implicitly concurrent programming elements are essential to the design and composition of parallel programs. Although I do not agree with Lee’s apparent obsession with doing for software timing what has been done in hardware (real-time precision in complex software is a pipe dream, in my opinion), I recommend that everybody take a close look at Professor Lee’s work, especially the Ptolemy Project.
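The heart of Lee’s argument is that a multithreaded program’s result depends on how the scheduler happens to interleave the threads, so nondeterminism is the default rather than the exception. Here is a minimal sketch of the classic lost-update race (my own illustration, not code from Lee’s paper; the `run_schedule` helper is a name I made up). It simulates two threads, each performing a non-atomic increment of a shared counter, under two fixed interleavings:

```python
def run_schedule(schedule):
    """Execute two incrementing "threads" under a fixed interleaving.

    schedule is a list of thread ids (0 or 1). Each thread takes two
    steps: its first step reads the shared counter into a local
    register; its second step writes the register's value plus one
    back. This is the non-atomic read-modify-write at the core of
    the lost-update race.
    """
    counter = 0
    local = [None, None]  # per-thread register holding the value read
    step = [0, 0]         # next step (0 = read, 1 = write) per thread
    for tid in schedule:
        if step[tid] == 0:
            local[tid] = counter       # read the shared counter
        else:
            counter = local[tid] + 1   # write back the incremented value
        step[tid] += 1
    return counter

# Thread 0 runs to completion before thread 1: both increments survive.
print(run_schedule([0, 0, 1, 1]))  # -> 2

# Both reads happen before either write: one increment is lost.
print(run_schedule([0, 1, 0, 1]))  # -> 1
```

Same program, same inputs, two different answers; which one you get in a real threaded run is decided by the scheduler, not by the programmer. That is precisely the nondeterminism Lee objects to.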

Thread Monkeys All Around

Professor Lee is, as of this writing, the chair of UC Berkeley’s Parallel Computing Lab, which is supported in part by Microsoft and Intel. Now, it is no secret that the people at Intel and Microsoft are a bunch of thread monkeys, not unlike the thread monkeys at Stanford's Pervasive Parallelism Lab. I don’t know the views of the other members of Berkeley’s research team, but it is obvious that Intel and Microsoft’s addiction to multithreading is at odds with Lee’s position. I sure hope that Professor Lee stands his ground and absolutely refuses to go along with the brain-dead thread mentality. I hope, for the sake of the future of computing, that UC Berkeley and Professor Lee are willing to stand up to the Wintel idiocy and tell them in no uncertain terms that their thread-based approach to parallel programming and multicore architecture design is just crap.

Having said that, I am afraid that Lee’s style, in contrast to mine, is very low-key, and he may lose this battle. Lee needs to be a lot more forceful, in my opinion; otherwise he does not stand a chance against the thread monkeys. At any rate, it’s going to be interesting to see what comes out of Berkeley’s Parallel Computing Lab by the end of this year.

The Battle at Berkeley (Update 2/16/2010)

OK. It has been more than a year, and what do you know? UC Berkeley is nowhere near a solution to the parallel programming crisis. No big surprise here. I was right. Professor Lee has neither the political clout nor the personal strength to stand up to the nefarious army of thread monkeys. He's surrounded by them, not just within Berkeley's computer science department but also at Intel and Microsoft. Heck, it doesn't look like Professor Lee is even the chair of Berkeley’s Parallel Computing Lab any more. That's too bad, but there is no need to cry over spilled milk. Those of you who are really interested in this topic should go over to the Rebel Science E-Bookstore (read first; pay only if you like what you read) and download my e-book, "How to Solve the Parallel Programming Crisis". At the very least, read the related blog posts at the links below. Something is bound to happen, sooner or later.

See Also:
How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
Parallel Computing: The End of the Turing Madness

2 comments:

Tango said...

Louis,

Thanks for your update message.

"it doesn't look like Professor Lee is even the chair of Berkeley’s Parallel Computing Lab any more."
It’s natural. How can you let a new idea come in and dominate the market when you have already invested everything in the old one? Intel, AMD, and the rest have poured unlimited amounts of money into traditional microprocessor research over the previous fifty years of the evolution of computers.
Just a question: why is Cisco not giving IPv6 priority over IPv4, even though the definition of IPv6 has been in the works for decades?
About Threads and MPUs
I have worked intensively with threads and multiprocessor techniques. I am working on a real-time video processing project where threads are used extensively in an effort to make it look parallel. But the people working as computer scientists or PhD students often have no knowledge of physics. They cannot even grasp what “parallel” means. I have heard many so-called computer science experts say, “OK, you create multiple threads, so the performance of your code will increase.” Then I ask them whether they have ever actually used threads before. The answer is always no; they have only read about them in books.
I have contributed to an open-source OS design project, and I worked on porting that OS to Intel’s Core 2 Duo and dual-core processors. The internal working model is fundamentally very hard to implement. Even when it works, you cannot get the best performance from your application, because your code is not written that way. There is no way to write the code to be parallel. MPI and PVM are just a waste of time. They only add more complexity to the code; they do not increase performance.
If someone has really used them and benefited, then come and comment here.

Louis Savain said...

Tango,

Thanks for the comment. It seems that there are powerful forces that are against finding a solution to the parallel programming crisis. If the correct solution is found and exploited, all the old stuff would eventually become obsolete. And it could happen virtually overnight given society's constant hunger for new superior technology.