Sunday, January 18, 2009

Parallel Computing: The Fourth Crisis, Part I

Part I, II

The Memory Bandwidth Problem

I apologize for the long hiatus. It took me a while to recover from the death of a close relative and from my recent move from Texas to California. As I wrote in my last post, I have been thinking about the memory bandwidth problem. It is worse than the parallel programming crisis because its solution appears to require some sort of breakthrough in quantum tunneling or optical computing, and that could happen now or ten years from now. The computer industry cannot wait. Somehow, it must forge a solution as soon as possible; otherwise, even big chip companies like Intel, IBM, and AMD will see their profit streams dwindle to a trickle. These are companies that rely on constant improvements in performance to maintain their competitive edge.

The Four Crises

The computer world is battling four major crises, in my opinion. The first two, software reliability and programmer productivity, have been with us since the seventies and show no sign of abating. Parallel programming, the third crisis, emerged only a few years ago with the commercialization of multicore processors. I have forcefully argued in the past that these three crises are really one and the same: a single problem that calls for a single solution. The fourth major crisis is the memory bandwidth problem. It is the worst of them all, in my opinion, because, whether or not the other three are solved, slow memory threatens to repeal Moore's law and bring progress in the field to a screeching halt. Nobody wants that to happen, at least not in the foreseeable future.

A New Kind of Computer

I think the world needs a new kind of computer. I have been thinking about a radically new way of achieving blazingly fast parallel computing without the use of a central processor. The idea has been percolating in my mind for quite some time. It is still only partially baked, but I think it is worth pursuing. Essentially, I believe that the age of the central processor must come to an end. The communication bottleneck that results from segregating the processor from memory is simply intolerable. My idea is based primarily on certain characteristics of the COSA software model. I will describe what I have in mind in greater detail in Part II of this article.
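To see the bottleneck for yourself, here is a minimal C sketch (my illustration only, not part of the COSA design; the 512 MB buffer size, the file name stream_sum.c, and the gcc invocation are arbitrary choices). In the style of the STREAM benchmark, it sums a buffer far larger than any on-chip cache, so the loop runs at whatever speed DRAM can deliver bytes, not at the speed of the arithmetic units:

/* stream_sum.c -- compile with, e.g., gcc -O2 stream_sum.c -o stream_sum */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64UL * 1024 * 1024)  /* 64M doubles = 512 MB, far bigger than any cache */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    if (a == NULL) { perror("malloc"); return 1; }
    for (size_t i = 0; i < N; i++)
        a[i] = 1.0;             /* touch every page so it is really allocated */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double sum = 0.0;
    for (size_t i = 0; i < N; i++)
        sum += a[i];            /* one trivial add per 8 bytes fetched from DRAM */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("sum = %.0f, effective bandwidth = %.2f GB/s\n",
           sum, N * sizeof *a / secs / 1e9);
    free(a);
    return 0;
}

On a typical desktop, the reported bandwidth is a small fraction of what the cores could consume, and running the same loop on several cores at once does not help much once the shared memory bus is saturated. That, in a nutshell, is the fourth crisis.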

See Also:
How to Solve the Parallel Programming Crisis

2 comments:

Josh McDonald said...

Glad to see you're back, Louis. I too think the current machine architecture is on its last legs. Just as we now have single chips that do what shelves of transistors or tubes once did, we need single-box (chip, cube, card, whatever) versions of a Google compute grid, albeit of smaller capacity: thousands of discrete units of both processing and RAM that organise themselves into routing nodes, work nodes, and monitor nodes, adjusting the numbers and in-silicon locations of the various clusters to maximise performance as time goes on.

It would be a very exciting time to be in a well-funded silicon startup right now, as the opportunities are both varied and numerous.

Tanmoy Deb said...

Whenever I see a picture of a multicore CPU, Ravana comes to mind [:D]. There is a joke going around among children in India that Ravana could never buy underwear for himself [:D].

Multicore computers are facing, or will soon face, the memory bottleneck problem, just as Ravana did. The problem has been hidden at the root, and now it is turning into a demon. From the Turing machine to von Neumann, everyone has kept silent about it. What have we invented in fifty years of computing architecture? Look at the von Neumann bottleneck that is biting us today. All we have done is raise processor frequencies and add caches between the memory and the processor. Huge numbers of page faults? Fine, add more RAM to the system. In fifty years, no one has delivered a reliable computing architecture; now 3-4 GB of RAM is common, but no one is striking at the base of the problem and asking why it arises in the first place. Software depends on, and runs only on, the hardware, and as long as that foundation remains undeveloped and immature, how can we get reliable software?

Nobody rethinks it. No more excuses. The solution has already arrived. Are you ready to change yourself? Otherwise, it will change you.