Wednesday, September 3, 2008

The Radical Future of Computing, Part I

Part I, II

Abstract

A reader named Marc left an interesting comment at the end of my previous post, Heralding the Impending Death of the CPU. I hope Marc will forgive me for using his words as a vehicle on which to piggyback my message.

Linguistic Origin of Programming


I think that algorithmic programming is popular because it is similar to the way many of us write in Western natural languages; in academic essays, people plan whether a thought should come before or after a previous one, which is inherently sequential in nature.
I agree. I believe that the modern computer evolved from the sequential/algorithmic needs of mathematicians like Charles Babbage and Ada Lovelace. As you know, linguistic or textual symbols are perfect for expressing mathematical algorithms. I have often wondered what kind of computers we would have if clockmakers or locomotive engineers had had a more direct influence on the development of early computer technology. Those folks are more accustomed to having multiple interacting mechanisms performing their functions concurrently.

Note also that the typewriter predated the modern computer and served as the model for the input device (the keyboard) of the first mass-market computers, profoundly affecting the way we perceive them. Although the mouse was a major influence in changing human-computer interaction, the event-driven approach to programming that it engendered somehow failed to convince computer scientists that every action in a program should be a reaction to some other action (an event), down to the instruction level. Hopefully, the new multi-touch screen technologies will drastically change our idea of what a computer is or should be.

Petri Nets, Conditions, Metaphors and Simplicity


Native parallel programming requires that the programmer (or implementer, if you'd rather call it that) decide what conditions have to be met for each cell to trigger and what outputs are produced based on those conditions, so it requires skills that are part user, part coder. Petri Nets are a great graphical symbolism for this. It actually requires that people focus on the problem instead of on style.
I agree. Nobody should have to worry about syntax or have to learn the meaning of a token (in someone else's native language and alphabet) in order to program a computer. Only a few graphical symbols (sensors, effectors, pathways, data and encapsulations) should be allowed, and labeling should be done in the designer's preferred language. I believe that the main reason graphical programming languages have not taken off is that their designers not only fail to appreciate the importance of encapsulation (information hiding) but also tend to multiply symbols beyond necessity. I am a fanatic when it comes to simplicity.
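
For readers who have never seen one in action, here is a rough sketch in Python of the firing rule Marc is describing; the class and place names are mine, purely for illustration, and a real Petri net would choose nondeterministically among enabled transitions:

    # Minimal Petri-net sketch (illustrative names only). Places hold token
    # counts; a transition fires when every input place holds at least one
    # token, consuming one from each input and adding one to each output.
    class PetriNet:
        def __init__(self):
            self.tokens = {}          # place name -> token count
            self.transitions = []     # (input places, output places)

        def add_transition(self, inputs, outputs):
            self.transitions.append((inputs, outputs))

        def step(self):
            # A real net picks an enabled transition nondeterministically;
            # this sketch just scans in list order.
            for inputs, outputs in self.transitions:
                if all(self.tokens.get(p, 0) > 0 for p in inputs):
                    for p in inputs:
                        self.tokens[p] -= 1
                    for p in outputs:
                        self.tokens[p] = self.tokens.get(p, 0) + 1

    net = PetriNet()
    net.tokens = {"order_received": 1, "stock_available": 1}
    net.add_transition(["order_received", "stock_available"], ["order_shipped"])
    net.step()
    print(net.tokens)  # {'order_received': 0, 'stock_available': 0, 'order_shipped': 1}

Notice that the conditions ("is there an order?", "is there stock?") are exactly the places Marc mentions: the transition triggers only when all of them are marked.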

One of my goals is to turn computer programming into something that the lay public will find approachable and enjoyable. In this regard, I think that even Petri Nets, in spite of their simplicity compared to other programming models, are still too complicated and too abstract, making them unpalatable to the masses and the casual developer. I rather like PNs and I am sorry that the concept never really became mainstream. However, I have a bone to pick with the notion of conditions (places, in Petri-net terminology). Don't get me wrong; I don't disagree that there is a need for conditions. I just don't think the token concept is intuitive or concrete enough to appeal to the layperson. In my opinion, everything should be driven by events (changes or transitions). What Petri calls a transition is what I call a sensor. A condition, to me, is just a past or current event and, as such, it should be used in conjunction with sensors (logic sensors, sequence detectors, etc.). This makes it easy to extend the idea of conditions to include that of temporal expectations, a must for reliability in COSA.
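
To make the contrast concrete, here is a rough Python sketch of the event-driven alternative I have in mind; all of the names are mine and purely illustrative:

    # A sensor detects a *change* in its input and reacts; a condition is
    # just a latched past event that other sensors can consult.
    class Condition:
        """A remembered past event; true once the event has occurred."""
        def __init__(self):
            self.met = False

        def set(self, value=True):
            self.met = bool(value)

    class Sensor:
        """Fires on a transition of its input, not on its level."""
        def __init__(self, on_fire):
            self.last = None
            self.on_fire = on_fire

        def update(self, value):
            if self.last is not None and value != self.last:
                self.on_fire(value)   # a change is an event
            self.last = value

    door_closed = Condition()
    # The effector runs only when the new event arrives AND the past event
    # (the condition) has already occurred: conditions used in conjunction
    # with sensors, as described above.
    start_button = Sensor(lambda v: print("start motor")
                          if v and door_closed.met else None)

    door_closed.set(True)       # past event: the door was closed
    start_button.update(False)  # initial level, no change yet
    start_button.update(True)   # change detected -> "start motor"

A temporal expectation would simply be another sensor, one that fires when an anticipated event fails to arrive within its expected window.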

That being said, the ideal programming metaphors, in my opinion, are those taken from the behavioral sciences, such as psychology and neuroscience: stimulus/response, sensor/effector, sequence memory, action/reaction, environment (variables), etc. The reason is that a computer program is really a behaving machine that acts and reacts to changes in its environment. A layperson would have little trouble understanding these metaphors; words like 'transition', 'token' and 'place' don't ring familiar bells. Let me add that, even though I applaud the clean graphical look of PNs, my main criticism is that they are not deterministic. In my view, this is an unpardonable sin. (I confess that I need to take another close look at PNs because it seems that they have evolved much over the years.)

New Leadership to Replace the Old


To me, starting with a software specification before implementing a solution seems obvious, but my son has mainly sold freelance projects to business types based on his suggested user interface first; when he tried to tell his potential customers what data sources he used and how he got to his results, the customers' eyes would often glaze over...
Yes. People love to see pretty pictures, which is understandable: business types tend to see technology from a user's perspective. They want the big picture and they don't care about what makes it work. You make an interesting point, because I have pretty much given up on selling my ideas directly to techies. I am slowly coming to the conclusion that the next computer revolution will have to emerge out of some government-funded initiative or some industry-wide consortium under the leadership of an independent, strategy-minded think tank.

The reason is that the industry is in a deep malaise caused by the aging baby boomers who drove computer innovation in the last half of the 20th century but have lately run out of ideas, simply because they are old and set in their ways. I don't want to generalize too much, but I think this is a major part of the problem. Their training has taught them a certain perspective on computing that is obviously not what is needed to solve the parallel programming and software reliability crises; otherwise, those crises would have been solved decades ago. In fact, it is their perspective on computing that got the industry and the world into this mess in the first place.

As a case in point, consider this recent article at HPCwire by Michael Wolfe. It pretty much sums up what the pundits are thinking. Michael believes that "the ONLY reason to consider parallelism is for better performance." I don't know how old Michael is, but it is obvious to me that his thinking is old and in serious need of an update. The problem is that the older computer nerds are still in charge at the various labs and universities around the world, and they hold the purse strings that fund research and development. These folks have titles like CTO, Project Director or Chief Science Officer. That does not bode well for the future of computing.

As I wrote somewhere recently, the computer industry is in dire need of a seismic paradigm shift, and there is only one way to get it: the old computer nerds must be forced into retirement and new leadership must be brought in. The new mandate should be to reevaluate the computing paradigms and models of the last century and assess their continued adequacy to the pressing problems the industry currently faces, such as the parallel programming and software reliability crises. If they are found to be inadequate (no doubt about it, from my perspective), then they should be replaced. These kinds of strategic decisions will not be made by the old techies but by business leaders, both in the private sector and within the government. Sometimes it pays not to be too married to the technology; when you are, you can't see the forest for the trees.

Software Should Be More Like Hardware and Vice Versa


There is plenty of parallel processing already going on in graphics processors, field-programmable gate arrays and other programmable-logic chips. It's just that people with software experience who are used to a certain type of tool are afraid to make the effort to acquire what they see as the thought habits of hardware-minded electrical engineers; I know my programmer son would have an issue. The US has developed a dichotomy between electrical engineers and computer scientists.
Which is rather unfortunate, in my opinion. In principle, there should be no functional distinction between hardware and software, other than that software is flexible. I foresee a time when the distinction will be gone completely. The processor core as we know it will no longer exist. Instead, every operator will be a tiny, super-fast, parallel processor that can randomly access its data directly at any time, without memory bus contention problems. We will have a kind of soft, super-parallel hardware that can instantly morph into any type of parallel computing program.
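
Nobody can build such hardware today, of course, but the programming model is easy to simulate. Here is a rough Python sketch, with names of my own invention, of a program as a mesh of tiny operators, each performing one operation per virtual cycle as soon as its input signals have arrived:

    # A program as a mesh of tiny "operator" processors. Each cell performs
    # one operation and signals its targets; all names are illustrative.
    class Cell:
        def __init__(self, op, n_inputs, targets):
            self.op = op              # the single operation this cell performs
            self.n_inputs = n_inputs
            self.targets = targets    # list of (cell, input slot) to signal
            self.inputs = {}          # input slot -> value received

        def receive(self, slot, value):
            self.inputs[slot] = value

        def ready(self):
            return len(self.inputs) == self.n_inputs

        def fire(self):
            result = self.op(*(self.inputs[i] for i in range(self.n_inputs)))
            self.inputs = {}
            return [(cell, slot, result) for cell, slot in self.targets]

    def run_cycle(cells):
        # All ready cells fire "simultaneously": outputs are buffered and
        # delivered only after every cell has computed, so firing order
        # within a cycle cannot change the result (deterministic).
        pending = []
        for cell in cells:
            if cell.ready():
                pending.extend(cell.fire())
        for cell, slot, value in pending:
            cell.receive(slot, value)

    show = Cell(lambda x: print("result:", x), 1, [])
    mul = Cell(lambda a, b: a * b, 2, [(show, 0)])
    add = Cell(lambda a, b: a + b, 2, [(mul, 0)])

    add.receive(0, 2); add.receive(1, 3)  # signals arrive: 2 + 3
    mul.receive(1, 10)                    # second operand of the multiply
    for _ in range(3):
        run_cycle([add, mul, show])       # (2 + 3) * 10 -> result: 50

The two-phase cycle (compute, then deliver) is what buys determinism: the same inputs always produce the same outputs in the same number of cycles.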

Programming for the Masses


Also "Talking heads" have a vested interest in promoting certain products that are only incremental improvements over the existing tools, because otherwise they would need to educate the clients about the details of the new paradigm, which would require extended marketing campaigns which would only pay back over the long term.
Yeah, legacy can be a big problem, but it doesn't have to be. I have written about this before, but you bring up the important issue of client education, which is a major part of the paradigm shift that I am promoting. I think the time has come to move application design and development from the realm of computer geeks into that of the end user. The solution to the parallel programming problem gives us an unprecedented opportunity to transform computer programming from a tedious craft that only nerds can enjoy into something that almost anybody can play with, even children. Now that multi-touch screens are beginning to enter the consumer market, I envision people using trial-and-error methods, together with their hands and fingers (and possibly spoken commands), to quickly manipulate parallel 3-D objects on the screen and create powerful and novel applications, kind of like putting Lego blocks together. I see this as the future of programming. In this regard, I don't think we will need to reeducate traditional programmers to accept and use the new paradigm: they will either get with it or risk becoming obsolete.
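
As a toy illustration of what "snapping blocks together" might look like under the hood, here is a short Python sketch (the component and pin names are hypothetical): prebuilt components expose named pins, and building an application is nothing more than wiring them together:

    # "Lego-block" composition: components expose named pins, and an
    # application is just a set of wires between pins.
    class Block:
        def __init__(self, name):
            self.name = name
            self.wires = {}  # output pin -> list of (block, input pin)

        def connect(self, out_pin, other, in_pin):
            self.wires.setdefault(out_pin, []).append((other, in_pin))

        def emit(self, out_pin, value):
            for block, in_pin in self.wires.get(out_pin, []):
                block.on_input(in_pin, value)

        def on_input(self, in_pin, value):
            print(f"{self.name} received {value!r} on {in_pin}")

    slider = Block("slider")
    volume = Block("volume")
    slider.connect("position", volume, "level")  # the only "programming" step
    slider.emit("position", 0.8)                 # slider moved -> volume reacts

On a multi-touch screen, that one connect() call is just a finger dragging a wire from one block to another; no syntax ever enters the picture.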

P.S. I'll respond to the rest of your message in Part II.

4 comments:

neotoy said...

Hi, I've been reading your blog for a while, and I was transfixed by this statement (my special interest is artificial intelligence):

"We will have a kind of soft, super-parallel hardware than can instantly morph into any type of parallel computing program."

I find it a very forward-thinking proposition, and I have one question: assuming that there is no longer any distinction between hardware and software, what is the nature of the element/object that would be introduced into the machine in order to facilitate this "morph" or dynamic reconfiguration? Would it also be a form of hardware/program of the same super-parallel nature? And if so, how would it maintain its shape (configuration), especially during transmission over a network, for example?

Amir said...

Good post. You should read about the Sapir-Whorf Hypothesis as it applies to programming languages and mindset burn-in.

I think the EE/CS dichotomy is more pressing than the generation gap: Electrical Engineers don't believe in magic, Computer Scientists don't believe in physics (some probably believe in "The Matrix"). If the CS's think multithreading is parallel programming, then the EE's clearly aren't building appropriate boxes for the CS's to think inside of.

The generation gap is probably less about the programming models and more about things like Open Source, SaaS, and generally how we approach life given the Internet.

I also blogged a response to the HPCWire article.

Louis Savain said...

neotoy,

[...] assuming that there is no longer any distinction between hardware and software, what is the nature of the element/object that would be introduced into the machine in order to facilitate this "morph" or dynamic reconfiguration? Would it also be a form of hardware/program of the same super-parallel nature? And if so, how would it maintain its shape (configuration), especially during transmission over a network, for example?

My reading of quantum physics tells me that we will soon understand the phenomenon of quantum tunneling well enough to effect changes at a distance, what Einstein referred to as "spooky action at a distance". When that happens, a computer will consist of billions of tiny reconfigurable processors, each able to perform a single operation on memory variables without interference from other processors and without using actual physical connections. The processors will likewise be modifiable at a distance by other, similar processors.

In this light, since we are using action at a distance, messaging via network transmission lines will be a thing of the past. In fact, I foresee the future advent of robots that operate in one place (say Mars) while their brains are millions of miles away. And there will be no message lag. Even better, there will be no reason for a brain to be in one place. Its parts could be distributed all over the solar system and still act as one brain. That would make for an almost indestructible brain because, even if a few parts are destroyed at one place, the rest of the brain can continue to function. Additionally, the system could be set up so that it rebuilds its malfunctioning parts automatically at a different location in case of failure.

If you read some of my writings on space (see link below), you will find that I don't believe in the existence of space. I am convinced that distance or space is a perceptual illusion and that, in the future, we will have long distance quantum jump technologies that will allow us to move instantly from anywhere to anywhere without going through the in-between positions. I realize that all of this sounds wild and unlikely but it is easy to prove that space does not exist.

Nasty Little Truth About Space

neotoy said...

Cool, and thanks for the response. I agree with you 100% on the nature of space; like many things in science, I believe the community consensus merely represents an information gap. Obviously space is a medium of some sort, rather than emptiness.

I find the quantum remarks particularly insightful, and they are further reinforced by an article I just read about a breakthrough in the phenomenon you mentioned. You might want to read it if you haven't already:

Physicists spooked by faster-than-light information transfer
http://www.nature.com/news/2008/080813/full/news.2008.1038.html?s=news_rss

I wish I knew more about this field, because I am still unclear on how software would be stored and programmed using quantum-entangled particles; still, I'm sure it's well within the realm of possibility.