Monday, July 21, 2008

UC Berkeley’s Edward Lee: A Breath of Fresh Air

Not All Academics Are Ass Kissers

I don’t hide the fact that my general attitude vis-à-vis the computer science community is a hostile one. I just cannot stand the way most academics kiss each other’s asses. I guess it is a legacy of peer review, a mechanism whose main purpose is to stifle dissent within the community. However, this does not mean that I believe all academics are inveterate ass kissers. Some are more like the prophet Daniel in King Nebuchadnezzar’s court. I previously lauded Peter Wegner and Dina Goldin on this blog for their work on non-algorithmic interactive computing. I am sure there are many more like them. Today, I would like to draw your attention to the work of Professor Edward A. Lee at UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

Threads Are Bad and Precise Timing Is Crucial

Professor Lee made major news a couple of years ago with the publication of The Problem with Threads (PDF), in which he methodically laid out the evils of multithreading. As my readers know, I have been waging a ferocious battle against multithreading for many years. What impresses me the most about Lee’s work is that he seems to have a deep understanding of what I believe to be two of the most important issues in computing: timing and implicit concurrency. Deterministic timing is essential to program reliability, and implicitly concurrent programming elements are essential to the design and composition of parallel programs. Although I do not agree with Lee’s apparent obsession with doing for software timing what has been done in hardware (real-time precision in complex software is a pipe dream, in my opinion), I recommend that everybody take a close look at Professor Lee’s work, especially the Ptolemy Project.
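To make the problem concrete, here is a minimal sketch of the kind of non-determinism Lee attacks (a hypothetical C++ example of my own, not code from Lee's paper). Two threads increment a shared counter with no synchronization; since the scheduler decides how their reads and writes interleave, the final count can differ from one run to the next:

    // Hypothetical data-race illustration; not from Lee's paper.
    #include <iostream>
    #include <thread>

    int counter = 0; // shared and unprotected

    void bump() {
        for (int i = 0; i < 100000; ++i)
            ++counter; // unsynchronized read-modify-write
    }

    int main() {
        std::thread a(bump);
        std::thread b(bump);
        a.join();
        b.join();
        // One might expect 200000, but lost updates can
        // make the result vary from run to run.
        std::cout << counter << '\n';
        return 0;
    }

Nothing in the program text tells you what it will print; the answer depends on scheduling accidents, which is exactly Lee's complaint.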

Thread Monkeys All Around

Professor Lee is, as of this writing, the chair of UC Berkeley’s Parallel Computing Lab, which is supported in part by Microsoft and Intel. Now, it is no secret that the people at Intel and Microsoft are a bunch of thread monkeys, not unlike the thread monkeys at Stanford's Pervasive Parallelism Lab. I don’t know about the views of the other members of Berkeley’s research team, but it is obvious that Intel and Microsoft’s addiction to multithreading is at odds with Lee’s position. I sure hope that Professor Lee stands his ground and absolutely refuses to go along with the brain-dead thread mentality. I hope, for the sake of the future of computing, that UC Berkeley and Professor Lee are willing to stand up to the Wintel idiocy and tell them in no uncertain terms that their thread-based approach to parallel programming and multicore architecture design is just crap.

Having said that, I am afraid that Lee’s style, in contrast to mine, is very low-key, and he may lose this battle. Lee needs to be a lot more forceful, in my opinion; otherwise, he does not stand a chance against the thread monkeys. At any rate, it’s going to be interesting to see what comes out of Berkeley’s Parallel Computing Lab by the end of this year.

Update (2/15/2010): The Battle at Berkeley

OK. It has been more than a year, and what do you know? UC Berkeley is nowhere near a solution to the parallel programming crisis. No big surprise here. I was right. Professor Lee has neither the political clout nor the personal strength to stand up to the nefarious army of thread monkeys. He's surrounded by them, not just within Berkeley's computer science department but also at Intel and Microsoft. Heck, it doesn't look like Professor Lee is even the chair of Berkeley’s Parallel Computing Lab anymore. That's too bad, but there is no need to cry over spilled milk. Those of you who are really interested in this topic should go over to the Rebel Science E-Bookstore (pay only if you like what you read) and download my e-book, "How to Solve the Parallel Programming Crisis". At the very least, read the related blog posts at the links below. Something is bound to happen, sooner or later.

See Also:
How to Solve the Parallel Programming Crisis
Parallel Computing: Why the Future Is Non-Algorithmic
Parallel Computing: The End of the Turing Madness

10 comments:

Jamin said...

Have you seen the work on Cellular Neural Networks (CNN) by these guys:

L. O. Chua
http://nonlinear.eecs.berkeley.edu/

T. Roska
http://lab.analogic.sztaki.hu/

I wonder if any other academics are looking at parallelism from the "cell" perspective and whether they have received much research or industry support.

Ben Franklin said...

On Peer Review

Hello,

If you wish to save the American Intellectual Tradition and our university system, you and your friends must check out this documentary:

http://larouchepac.com/harvard-yard

It exposes the devastating corruption of America's universities, currently being pushed by such supposedly leading institutions as Harvard University. American intelligence is being crippled by the current mode of “higher education,” as is exemplified by the current US Presidential election insanity.

I will gladly take any questions or feedback from those seriously concerned with the future of our nation.

-Ben

BLD.LYM@gmail.com
www.LaRouchePAC.com

LeeMeador said...

Your link to "The Problem With Threads" leads to an abstract with a link to another abstract with a link where you can buy it for $19.00.

Try this link to read it with one click: http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf

Louis Savain said...

LeeMeador,

Thanks for the link. A correction is in order.

Louis Savain said...

By the way, does anybody know why Professor Lee is not involved in any active research project at Berkeley's Parallel Computing Lab? Did the thread monkeys in charge of PCL (David Patterson?) get rid of Lee? If true, that would be a shame, in my opinion. UC Berkeley's reputation as a great computer research center will suffer as a result.

Kevin said...

It's all very well saying there is a problem and doing some analysis, but people really prefer to have a solution - and one that doesn't require starting from scratch. Here's my attempt at fixing the problem with threads -

http://parallel.cc

Kev.

Louis Savain said...

Kevin,

I think you may be confusing threads with parallel processes. Multithreading is inherently non-deterministic. This is the nature of the beast and there is no way to get around this problem other than abandoning the thread concept altogether. Simulating parallel processes in a video game using an endless loop and two buffers is not the same thing as creating concurrent threads.
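To illustrate the difference, here is a minimal sketch of the two-buffer technique (hypothetical C++ of my own, not code from any actual game). Every cell computes its next state from a read-only snapshot of the previous tick, and all updates take effect at once when the buffers are swapped, so every run produces the same output on any machine:

    // Hypothetical two-buffer simulation of parallel cells.
    #include <array>
    #include <iostream>
    #include <utility>

    int main() {
        std::array<int, 8> current{0, 0, 0, 1, 0, 0, 0, 0};
        std::array<int, 8> next{};

        for (int tick = 0; tick < 4; ++tick) {
            // Phase 1: compute every cell from the old buffer only.
            for (std::size_t i = 0; i < current.size(); ++i) {
                int left  = current[(i + current.size() - 1) % current.size()];
                int right = current[(i + 1) % current.size()];
                next[i] = left ^ right; // toy update rule
            }
            // Phase 2: swap buffers; all updates take effect at once.
            std::swap(current, next);

            for (int c : current) std::cout << c;
            std::cout << '\n';
        }
        return 0;
    }

The cells behave as if they all fired simultaneously, yet there is not a single thread or lock in sight.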

It's not just multithreading that is bad but the thread concept (an algorithm, or sequence of instructions) itself. It is prone to errors because the temporal relationships between the various events in a program are neither deterministic nor easily observed. Multithreading makes it worse. The brain is a temporal processing machine par excellence. Deterministic timing is the sine qua non of brain function. We need a parallel software model that incorporates timing as a fundamental aspect of program architecture, not just as an afterthought. The current programming paradigm cannot evolve to properly handle timing, and this is the reason that we need a radical paradigm shift. And we need it yesterday.

See How to Solve the Parallel Programming Crisis for more on this topic.

Kevin said...

Have to disagree with you: multithreading is not inherently non-deterministic, particularly if you have a single thread of control. Running in parallel (multiple threads of control), it may be non-deterministic. HDLs are essentially multithreaded programming languages that are usually deterministic.
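As a toy illustration of what I mean by deterministic logical threads (hypothetical C++ of my own, not ParC syntax and not a real HDL kernel): many logical threads can run under a single thread of control, with a scheduler stepping them in a fixed order every cycle, so the interleaving, and therefore the output, never varies:

    // Hypothetical sketch of logical threads on a fixed schedule.
    #include <functional>
    #include <iostream>
    #include <vector>

    int main() {
        int a = 0, b = 0;

        // Each lambda plays the role of one logical thread.
        std::vector<std::function<void()>> processes = {
            [&] { a += 1; },    // P0
            [&] { b = a * 2; }, // P1 always runs after P0
        };

        for (int cycle = 0; cycle < 3; ++cycle) {
            for (auto& p : processes)
                p(); // fixed order: no races, same output every run
            std::cout << "cycle " << cycle
                      << ": a=" << a << " b=" << b << '\n';
        }
        return 0;
    }

Real HDL simulators are far more sophisticated, but the principle is the same: concurrency in the language, one deterministic schedule underneath.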

ParC (my stuff) meets your requirement for having time as a fundamental component of the language. However, I disagree that that is essential for deterministic behavior. It also falls in line with your assertion that "software should behave logically more like hardware", since the new language features are taken from HDLs.

ParC supports the paradigm you are suggesting, but does not constrain you to use it. Definitely a paradigm shift, but not a radical one, just encouragement to write software like you design hardware.

Louis Savain said...

Kevin,

I suspect that your definition of multithreading is different from mine and that of Prof. Lee. By threads, I am talking about operating-system threads, which involve copying the entire contents of the CPU registers into a cached memory buffer on every context switch or, in the case of hyperthreading, switching between on-chip register sets. This type of multithreading is certainly non-deterministic because there is no way to predict the timing of context switches. I seriously doubt that HDLs use multithreading in this sense. Besides, there is no need to. Two buffers and an endless loop are all that is needed to properly simulate deterministic parallelism. It's done in video games, cellular automata, and simulations.

I'll take a look at ParC, but I should warn you that I am a die-hard proponent of graphical programming. In addition, there is no way to simulate fine-grained (instruction-level), deterministic, non-algorithmic, and reactive parallelism (the COSA paradigm) in software without incurring a significant performance penalty.

Kevin said...

I would say that I'm thinking of threads as "logical threads" rather than implementation-level threads (pthreads, etc.).

I have no objection to graphical programming, but you still have to get from the graphical description to something that runs on hardware. ParC provides support for that because you can get a simple mapping from the graphical objects to code objects; ParC gets translated to C++, which compiles with a regular C++ compiler.

As with HDLs, ParC supports an event-driven style - which is the same as the COSA reactive paradigm.

Determinism with respect to performance depends on the hardware used. The longer-term goal in ParC is to use assertions (as in HDL synthesis/verification) to indicate the requirements so that the compiler system can do the right thing.