Sunday, December 6, 2009

The Death of Larrabee or Intel, I Told You So

I Had Foreseen It

Back in June of this year, I wrote the following comment in response to a New York Times Bits blog article by Ashlee Vance about Sun Microsystems' cancellation of its Rock chip project:
[...]The parallel programming crisis is an unprecedented opportunity for a real maverick to shift the computing paradigm and forge a new future. It’s obvious that neither Intel nor AMD have a solution. You can rest assured that Sun’s Rock chip will not be the last big chip failure in the industry. Get ready to witness Intel’s Larrabee and AMD’s Fusion projects come crashing down like the Hindenburg.

Anybody who thinks that last century’s multithreading CPU and GPU technologies will survive in the age of massive parallelism is delusional, in my opinion. After the industry has suffered enough (it’s all about money), it will suddenly dawn on everybody that it is time to force the baby boomers (the Turing Machine worshippers) to finally retire and boldly break away from 20th century’s failed computing models.

Sun Microsystems blew it but it’s never too late. Oracle should let bygones be bygones and immediately fund another big chip project, one designed to rock the industry and ruffle as many feathers as possible. That is, if they know what’s good for them.
Will Oracle do the right thing? I doubt it. Now that Intel has announced the de facto demise of Larrabee, my prediction is now partially vindicated. Soon, AMD will announce the cancellation of its Fusion chip and my prediction will then be fully vindicated. Fusion is another hideous heterogeneous beast that is also destined for oblivion. There is no escaping this, in my opinion, because the big chip makers are going about it the wrong way, for reasons that I have written about in the last few years. I see other big failures on the horizon unless, of course, the industry finally sees the light. But I am not counting on that happening anytime soon.

Goodbye Larrabee

Sorry Intel. I am not one to say I told you so, but I did. Goodbye Larrabee and good riddance. Nice knowing ya even if it was for such a short time. Your only consolation is that you will have plenty of company in the growing heap of failed processors. Say hello to IBM's Cell Processor when you arrive.

See Also:
How to Solve the Parallel Programming Crisis
Nightmare on Core Street
Parallel Computing: The End of the Turing Madness

2 comments:

chzchzchz said...

I know you want everyone to post their actual name and affiliation for some reason, but I'll take my chances.

Intel's decision to drop Larrabee can hardly be viewed as a failure of multi-core architectures. The most likely factor is that there's no way to strategically market Larrabee. The chip was designed to compete with now-outdated GPU technology, so it can't be profitably sold as a GPU. Selling it as a general-purpose chip would cut into profits from the i7 line (which will soon have six cores). Selling it as a massively parallel chip focused on throughput would be silly given that a similar design using Atom cores would be smaller and more power-efficient. There is just very little economic motivation to devote resources to something like Larrabee.

I don't understand why you hate on Turing computability so much. While a Turing machine is by nature sequential, there are plenty of equivalent models that cleanly admit parallelism (nondeterministic Turing machines and lambda calculus immediately come to mind). Likewise, no one takes Turing's model seriously as an architectural or programming model, but everyone shoots for equivalence. Of course, any physically realizable architecture is going to be a finite state machine, but in practice it can be treated as Turing-complete without much repercussion.
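
To illustrate what I mean by "cleanly admits parallelism", here is a small sketch of my own (the functions are made-up stand-ins for arbitrary pure sub-terms, not anything from your model): independent sub-expressions of a pure, lambda-calculus-style term can be reduced one after the other or concurrently, and the result is the same either way.

```python
# Sketch only: independent pure sub-terms can be reduced sequentially or in
# parallel with the same final value (the Church-Rosser property in miniature).
# f, g and combine are hypothetical stand-ins for arbitrary pure sub-terms.

from concurrent.futures import ThreadPoolExecutor

def f(x):           # one pure sub-term
    return x * x

def g(y):           # another pure sub-term, independent of f
    return y + 1

def combine(a, b):  # the enclosing term applied to both results
    return a - b

# Sequential reduction order.
sequential = combine(f(6), g(10))

# The same term with the two independent redexes reduced in parallel.
with ThreadPoolExecutor() as pool:
    fa, fb = pool.submit(f, 6), pool.submit(g, 10)
    parallel = combine(fa.result(), fb.result())

assert sequential == parallel == 25
```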

Although you have no actual specification of your COSA or UBM model (e.g. there isn't even a way to write a COSA or UBM "program" and verify that it is well-formed), I expect it is likely to be at most Turing-equivalent. Your major complaint seems to be that a Turing machine has no way to interact with its environment. This is not an inherent weakness of Turing computability in the slightest.

One could very easily construct a multi-tape Turing machine where one of the tapes represents the "environment" of your UBM model. Both the "sensor" and "effector" components could conceivably be expressed as part of the Turing machine's program. If you want to encode the behavior of the environment, this can be done with yet another Turing machine. This would still be Turing-complete since it can be simulated on a single Turing machine.
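
Here is a minimal sketch of that construction in Python, purely my own toy illustration (not anything from COSA or UBM): the second tape stands in for the environment, so reads from it play the role of a "sensor" and writes to it play the role of an "effector". The states and transition rules are hypothetical, chosen only to keep the example short.

```python
# Sketch: a two-tape machine whose second tape models the "environment".
# Reading the environment tape acts as a sensor; writing to it acts as an
# effector. The rule table below is a made-up toy behaviour.

from collections import defaultdict

def run_two_tape_machine(rules, work, env, state="start", steps=100):
    """Simulate a two-tape machine.

    rules maps (state, work_symbol, env_symbol) to
    (new_state, write_work, write_env, move_work, move_env).
    Moves are -1, 0 or +1. Halts on state "halt" or a missing rule.
    """
    work = defaultdict(lambda: "_", enumerate(work))
    env = defaultdict(lambda: "_", enumerate(env))
    w_pos = e_pos = 0
    for _ in range(steps):
        key = (state, work[w_pos], env[e_pos])
        if state == "halt" or key not in rules:
            break
        state, write_w, write_e, move_w, move_e = rules[key]
        work[w_pos], env[e_pos] = write_w, write_e
        w_pos, e_pos = w_pos + move_w, e_pos + move_e
    return state, dict(work), dict(env)

# Toy behaviour: copy each symbol "sensed" on the environment tape onto the
# work tape, and "effect" an acknowledgement by overwriting it with 'x'.
rules = {
    ("start", "_", "1"): ("start", "1", "x", +1, +1),
    ("start", "_", "0"): ("start", "0", "x", +1, +1),
    ("start", "_", "_"): ("halt", "_", "_", 0, 0),
}

print(run_two_tape_machine(rules, "", "101"))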

Even so, you claim "there are no signaling entities, no signals and no signal pathways ... in computer memory". This is puzzling considering how common interrupts, memory-mapped device I/O, and cache-coherent shared memory are in modern architectures. In what way are your "signals" more powerful than existing mechanisms?
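
To make that concrete, here is a small sketch (my own hypothetical illustration, not code from any COSA specification) of a "sensor"/"effector" pair signaling each other through nothing more than ordinary shared memory and a standard threading primitive.

```python
# Sketch: signal-like behaviour built from shared memory plus a threading
# primitive. One thread (the "sensor") watches shared state and raises a
# signal; another (the "effector") blocks until the signal arrives.
# All names here are hypothetical.

import threading
import time

shared = {"temperature": 20}   # shared memory cell standing in for a sensed value
signal = threading.Event()     # the "signal pathway"

def sensor():
    # Poll the shared cell and raise the signal when the condition holds.
    while not signal.is_set():
        if shared["temperature"] > 30:
            signal.set()       # emit the signal
        time.sleep(0.01)

def effector():
    signal.wait()              # block until the signal arrives
    print("effector: reacting to temperature", shared["temperature"])

threading.Thread(target=sensor, daemon=True).start()
worker = threading.Thread(target=effector)
worker.start()

time.sleep(0.05)
shared["temperature"] = 35     # the "environment" changes
worker.join()
```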

Tango said...

chzchzchz said...
In what way are your "signals" more powerful than existing mechanisms?

In what way are existing mechanisms the end of the road for computing? Signals are powerful in the way that you are more powerful than a cockroach. The two aren't comparable.
Think about it.