Sunday, December 30, 2007

Supervised Motor Learning in the Cerebellum


In the previous article, I made a falsifiable prediction about the cerebellum based on my interpretation of certain metaphorical passages in the book of Revelation. One should note that the current consensus among neurologists regarding the cerebellar contribution to speech processing in the brain runs contrary to my prediction. So clearly, nobody can accuse me of using existing scientific literature to make predictions after the fact. Many neurologists have concluded that the cerebellum participates in the production of speech after noticing that patients with cerebellar lesions exhibit speech difficulties. My claim is that they are mistaken and that the cerebellum contributes nothing at all to the generation of speech and language. Its purpose is to attend to routine non-speech-related motor tasks (such as maintaining posture) that would otherwise have to be performed by the motor cortex. I argue that the type of speech impairment observed in patients with cerebellar lesions is due to the motor cortex having to attend to tasks that should normally be handled by the cerebellum. This results in frequent interruptions that manifest themselves as disjointed speech.

One More Cerebellar Prediction From the Bible

The cerebellum is a sensorimotor learning system. Although its learning principles are simple, it can be trained to perform sophisticated sensorimotor tasks such as maintaining posture, walking, running, self-balancing, navigating, etc… Its actions are purely reactive, that is to say, it cannot anticipate the outcomes of sensory or motor patterns. In other words, the cerebellum uses sensory signals to directly control motor effectors in real time.

Two main types of sensory signals are used in the brain. The first is a transient spike or a short spike burst that marks the onset or offset of a sensory phenomenon. The second is a sustained spike train that lasts for the duration of the sensed phenomenon. In the neurobiological literature, these two types of signals arriving at the cortex from the retina are known to go through the magnocellular and the parvocellular pathways. Based on my understanding of the metaphors of Smyrna and Laodicea in the book of Revelation, I can confidently predict that the cerebellum (Laodicea) processes only the second type (sustained spike train) of signals. The Bible uses two metaphors to distinguish between the two: rich (sustained) and poor (transient).
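For the programmers among my readers, here is a toy sketch of the difference between the two encodings. It is purely my own illustration (the stimulus and the list-based representation are my inventions, not anything from the neuroscience literature): a step stimulus is encoded either as transient onset/offset spikes or as a sustained spike train.

```python
# Toy illustration of the two sensory signal types described above.
# A step stimulus that is "on" during timesteps 2..5:
stimulus = [0, 0, 1, 1, 1, 1, 0, 0]

# Transient encoding: a single spike marking the onset and one marking
# the offset of the phenomenon (a spike at each change of level).
transient = []
for t in range(1, len(stimulus)):
    if stimulus[t] != stimulus[t - 1]:
        transient.append(t)

# Sustained encoding: spikes for as long as the phenomenon lasts.
sustained = [t for t, level in enumerate(stimulus) if level == 1]

print(transient)   # [2, 6] -> onset and offset markers only
print(sustained)   # [2, 3, 4, 5] -> lasts for the duration
```

The transient code says only "something started" and "something stopped"; the sustained code carries the duration itself, which is what a real-time controller like the cerebellum would need.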

Supervised Learning

In my opinion, the training principle used in the cerebellum is rather simple. It is a trial and error process. Essentially, parallel fibers that receive sensory signals from various places in the body make random synaptic connections with a huge number of Purkinje cells. The output signals generated by a Purkinje cell ultimately activate a muscle. As long as the Purkinje cell is receiving input signals from the parallel fibers, the muscle remains activated. During training, a mature behavior group in the motor cortex (symbolized by the church of Philadelphia) monitors the activation of the muscles under its control and sends a stop signal whenever a muscle is activated longer than it should be. When the stop signal reaches the climbing fiber on the Purkinje cell, a powerful corrective spike is generated. This, in turn, greatly weakens the connections with any parallel fiber that is still firing. Eventually, only the parallel fibers that activate the Purkinje cells at the right time retain their synaptic connections.
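For those who prefer to see things in code, here is a toy simulation of the trial-and-error principle described above. Everything in it (the fiber count, the firing windows, the depression factor, the threshold) is my own illustrative choice, not established neurobiology; the point is only the mechanism: a stop signal from the supervisor depresses whichever synapses are still firing, until only the correctly timed fibers survive.

```python
import random

random.seed(42)

NUM_FIBERS = 20
STOP_TIME = 5       # supervisor wants the muscle off from this step onward
TRIAL_LEN = 10
LTD_FACTOR = 0.2    # how strongly a corrective spike depresses a synapse
THRESHOLD = 0.05    # weights this weak no longer drive the muscle

# Each parallel fiber fires during a random window of the trial.
fiber_windows = []
for _ in range(NUM_FIBERS):
    start = random.randint(0, 7)
    fiber_windows.append((start, start + 3))

weights = [1.0] * NUM_FIBERS    # initial synaptic strengths

def firing(f, t):
    start, end = fiber_windows[f]
    return start <= t < end

for trial in range(50):
    for t in range(TRIAL_LEN):
        drive = sum(weights[f] for f in range(NUM_FIBERS) if firing(f, t))
        # The supervisor (motor cortex) sends a stop signal via the
        # climbing fiber whenever the muscle is still active past the
        # desired time; the corrective spike depresses every synapse
        # whose parallel fiber is still firing at that moment.
        if drive > THRESHOLD and t >= STOP_TIME:
            for f in range(NUM_FIBERS):
                if firing(f, t):
                    weights[f] *= (1.0 - LTD_FACTOR)

# Only the fibers that fire at the right time keep their connections.
for f in range(NUM_FIBERS):
    kept = fiber_windows[f][1] <= STOP_TIME
    print(f, fiber_windows[f], round(weights[f], 3),
          "kept" if kept else "pruned")
```

After a few dozen trials, every fiber whose firing window extends past the stop time has been depressed below threshold, while the correctly timed fibers are untouched, exactly the outcome described above.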

Robotic Cerebellum

I think that a simulated software cerebellum can serve as a very effective motor learning system for a humanoid robot. A human trainer wearing a special sensor-equipped suit that mimics the shape and limbs of the actual robot could teach such a robot to perform various complex tasks such as walking, climbing stairs, etc... Just a thought.

Thursday, December 13, 2007

Falsifiable Prediction About Human Cerebellum From the Bible

Note: What follows was copied from the previous article. I felt that it should be separate.

According to my interpretation of the Biblical texts, the cerebellum is a supervised automaton. It is trained by the motor cortex to take over certain routine motor tasks whenever the basal ganglia and motor cortex are busy reasoning internally or engaging in some other motor activity. My understanding of the metaphorical messages to the church of Pergamum (Broca's area) and Laodicea (cerebellum) in the book of Revelation is that speech is always an attentional or volitional (as opposed to automatic or unconscious) process that involves corrective feedback from the basal ganglia. The cerebellum is not directly involved in processing speech and language. The indication is that the cerebellum can have motor control over the entire body except the mouth, throat and tongue muscles. This means that activities like eating, chewing and swallowing are also excluded from cerebellar control.

How can this prediction be falsified? In my opinion, it suffices to examine the brain pathways that link the motor cortex with the cerebellum. The prediction is that there are no pathways between the cerebellum and any part of the motor cortex that controls the mouth, speech, etc… Another way to falsify this prediction would be to compare MRI images of cerebellar activity taken while a subject is speaking (in a relaxed position) with images taken while the subject engages in non-speech-related activities. I predict that the data will support the claim that the cerebellum does not generate speech.

Another interesting consequence of this prediction is that serious damage to the cerebellum should be accompanied by a loss of speech capability while the subject is engaged in other motor activities (e.g., walking). The reason is that the subject can no longer rely on the cerebellum for routine tasks (while speaking) and must consciously attend to them. We can only attend to one conscious task at a time. This is why the cerebellum is so important. I suspect that it would take some time for the subject to train him/herself to sit or lie down in order to regain the ability to speak.

Addendum 12/23/2007:

Someone (Ritchie Annand) on the Expelled blog wrote that neurologists Marco Mumenthaler and Otto Appenzeller already falsified my hypothesis that the cerebellum does not generate speech. I disagree, of course. Here's the relevant excerpt from Neurologic Differential Diagnosis, section 2.11.3, Lesions of Basal Ganglia and Cerebellum:

With disorders of the cerebellum, speech is harmonically disturbed, irregular, loud and explosive. The speech disturbance in multiple sclerosis is due to foci in the cerebellum, and takes the form of staccato explosive speech with exaggerated pauses between parts of the sentences and words, as in scanning speech.

In my opinion, the observations of Mumenthaler et al. lend credence to my claim. It makes sense that cerebellar damage should affect speech production, as I pointed out above, but that is not evidence that the cerebellum generates speech. Since the motor cortex and Broca’s area normally rely on the cerebellum to attend to routine tasks (e.g., maintaining posture, walking, standing, etc…) during speech, it is logical to expect that speech should be affected by a cerebellar lesion. The motor cortex cannot multitask. Therefore, unless the cerebellum is helping, the motor cortex is forced to interrupt itself frequently to attend to important tasks. Hence the staccato speech and exaggerated pauses observed by Mumenthaler et al.

Since speech impairments are observed in patients with cerebellar damage, it is very easy to conclude that the cerebellum contributes to speech production. A cursory look at the neurological literature indicates that many have already reached this conclusion. I argue that this is not the case. Based on my research, I can confidently predict that the speech processing ability of subjects with cerebellar lesions should markedly improve when the subjects are lying down in a relaxed position. The reason is that there is no need for the brain to maintain posture (a normal cerebellar function) while the subject is in a relaxed position, in which case the motor cortex has more freedom to lend its undivided attention to speech production. This hypothesis should be fairly easy to test.

If any of my readers know of someone with a speech impairment due to a cerebellar lesion, please ask him or her to lie down on a couch and relax. I predict that he/she will find it easier to speak as a result.

See also:

The next four posts in this series. Just click on Newer Post at the bottom of this page.

Tuesday, December 11, 2007

Christianity, Evolution and Falsifiability

Intelligent Design and the Demand for Falsifiable Predictions

One of the incessant demands from atheists and Darwinian evolutionists is that intelligent design proponents must provide falsifiable predictions in support of the ID hypothesis. This is a legitimate demand, in my opinion. Unless and until ID proponents come up with formal predictions that can be tested by other researchers, they do not have a leg to stand on.

My Take on the Design vs. Evolution Debate

I am a non-fundamentalist Christian evolutionist as opposed to a Darwinian evolutionist. In my opinion, it is not really evolution that is in dispute. There is no doubt that some form of evolution is happening now and has been happening for millions of years. What is in dispute is exactly how it happened. I disagree with the Darwinian stance regarding the origin of species. I believe that evolution was intelligently directed in the past through genetic engineering. I must add that I am not affiliated with the ID movement.

The way I understand it, most proponents of Darwinian evolution claim that the species originated as a result of random mutations and natural selection through sexual reproduction. They maintain that it happened naturally without intelligent intervention. Of course, as a Christian, I have to disagree with that stance, since it contradicts the Biblical teaching that the original species were created by God. The creation process obviously lasted millions of years (no, I don’t believe that God created the heavens and the earth in six twenty-four-hour days). Thus it is not surprising that the fossil record shows a progression in the sophistication of the species over time. Any creation process is necessarily an evolutionary process. The fact that biological research successfully relies on the evolutionary hypothesis is not surprising either, but it is not evidence that evolution was not intelligently directed.


I have no idea whether or not ID advocates have proposed any experimental test that could potentially falsify the design hypothesis and silence their critics. All I have is my own research based on Biblical metaphors. I take an indirect approach to ID falsifiability. I believe that the Bible contains amazing and revolutionary scientific information hidden in clever metaphors. I believe that the metaphors, once properly deciphered, can be used to make precise scientific predictions that can be tested in the laboratory. It follows that if any of these predictions can withstand falsification, they would lend credibility to Biblical claims regarding the origin of the species.

My critics can always argue that any interpretation of Biblical passages is highly suspect because the Bible can be interpreted to support any point of view and I agree. However, based on my research over the last twenty years, I feel sufficiently confident in my understanding of certain Biblical metaphors to make testable predictions about various characteristics of brain operation and organization. These are precise predictions about aspects of the brain (unknown to science) that I could not possibly have any knowledge of, since I am neither a neurobiologist nor do I have access to a neurobiological research lab. What follows is one such testable prediction about an aspect of the cerebellum that is currently unknown to neurobiologists and brain experts.

Falsifiable Biblical Prediction About the Cerebellum

According to my interpretation of the Biblical texts, the cerebellum is a supervised automaton. It is trained by the motor cortex to take over certain routine motor tasks whenever the basal ganglia and motor cortex are busy reasoning internally or engaging in some other motor activity. My understanding of the metaphorical messages to the church of Pergamum (Broca's area) and Laodicea (cerebellum) in the book of Revelation is that speech is always an attentional or volitional (as opposed to automatic) process that involves corrective feedback from the basal ganglia. The cerebellum is not directly involved in processing speech and language. The indication is that the cerebellum can have motor control over the entire body except the mouth, throat and tongue muscles. This means that activities like eating, chewing and swallowing are also excluded from cerebellar control.

How can this prediction be falsified? In my opinion, it suffices to examine the brain pathways that link the motor cortex with the cerebellum. The prediction is that there are no pathways between the cerebellum and any part of the motor cortex that controls the mouth, speech, etc… Another way to falsify this prediction would be to use MRI images to observe cerebellar activity while a subject is speaking (in a relaxed position) and while engaging in non-speech-related activities. I predict that the data will support the claim that the cerebellum cannot produce speech.

Another interesting consequence of this prediction is that serious damage to the cerebellum should be accompanied by a loss of speech capability while the subject is engaged in other motor activities (e.g., walking). The reason is that the subject can no longer rely on the cerebellum for routine tasks (while speaking) and must consciously attend to them. We can only attend to one conscious task at a time. This is why the cerebellum is so important. I suspect that it would take some time for the subject to train him/herself to sit or lie down in order to regain the ability to speak.

More falsifiable predictions to come…

Friday, November 9, 2007

Unreliable Software vs. National Security

The Clear and Foreseeable Danger

The security of a modern nation is a function of its scientific and technological know-how. As we all know, nowadays, nothing gets done in R&D circles without the use of computer software. Software is without a doubt the lifeblood of science and technology. In an ideal world, there would be no limit to how complex and sophisticated our technologies could get. In the real world, however, software unreliability places an upper limit on the complexity of our systems. For example, we could conceivably have cars that drive themselves and airplanes that fly themselves by now but concerns over safety, costs and liability will not allow it. In the meantime, over 40,000 people die every year in traffic accidents in the US alone.

Software unreliability is the biggest problem of the technological age. It handicaps society by condemning it to a sort of chronic mediocrity. As I have repeatedly said in the past, the price that we have paid and continue to pay, as a result, is staggering. And it will get worse. But what if there were a solution to the software problem and an enemy nation got hold of it first? Freed from the shackles of unreliability, they would suddenly possess the ability to develop systems of arbitrary complexity and unlimited sophistication. Soon after, their technological advantage would turn into technological superiority, both economically and militarily. A shift in the world’s balance of power would ensue and therein lies the danger.

Technological Race

Since the collapse of the former Soviet Union and the end of the cold war, I have witnessed a major realignment of allegiances around the world. From my vantage point, the world is increasingly becoming divided into three major blocs: the Christian and secular West, the loose confederacy of Islamic nations, and Asia. Both the West and Asia have invested heavily in technology. The Islamic nations have for the most part relied on their oil revenues and have neglected R&D. However, there are strong signs that this is about to change drastically. It may seem that they have a lot of catching up to do but this is not necessarily true. Their late entry into the technological race might give them the opportunity to leapfrog obsolete or outmoded technologies and start with the best. This could mean building a state-of-the-art infrastructure in a relatively short time. The point that I am driving at is that the world is not a friendly place. Many nations have embarked on a renewed arms race, one that is heavily dependent on science and technology.

Radical Change Ahead

I may be accused of using an alarmist tone in order to promote my own agenda and there might be some truth to it. I am certainly biased since I have my own goals in mind. However, I am convinced that the danger is very real and I invite everybody to take a good and impartial look at what I am proposing and make up their own minds. I believe that there is indeed a solution to the software reliability problem, one that will usher in the true golden age of automation. I have been writing about it for years. I have made only a handful of converts but that is because the solution I am proposing will require a radical change not only in the way we construct our programs, but also in the way we build our computers. I am asking for the reinvention of the computer. Nothing less will do. In my work, I have bluntly criticized some of the most revered names in the history of computing and this has not gained me many friends in the industry and the computer science community. Still, I believe that this is the sort of self-criticism that the West must have the courage to engage in if it wants to solve some of its most pressing problems, including the software reliability crisis.

Taking Sides

In conclusion, I would like to say that I was born and raised in the western hemisphere. Regardless of my religious, philosophical or political convictions, I must choose sides. I choose the West. One of the problems that I see is that the western world is too conceited about its supposed intellectual superiority. It has elevated its most famous scientists to the status of demigods whose wisdom (good or bad) must never be questioned. This, too, is dangerous because the rest of the world, including our enemies, is not constrained by this mindset. They have every reason to look for holes in our wisdom and use them as opportunities for advancement. Unless we wake up and realize the clear danger that I mentioned above, we may have to face an unpleasant future.

Tuesday, October 30, 2007

Half a Century of Crappy Computing

Decades of Deception and Disillusion

I remember being elated back in the early 80s when event-driven programming became popular. At the time, I took it as a hopeful sign that the computer industry was finally beginning to see the light and that it would not be long before pure event-driven, reactive programming was embraced as the universal programming model. Boy, was I wrong! I totally underestimated the capacity of computer geeks to deceive themselves and everyone else around them about their business. Instead of asynchronous events and signals, we got more synchronous function calls; and instead of elementary reactions, we got more functions and methods. The unified approach to software construction that I was eagerly hoping for never materialized. In its place, we got inundated with a flood of hopelessly flawed programming languages, operating systems and processor architectures, a sure sign of an immature discipline.

The Geek Pantheon

Not once did anybody in academia stop to consider that the 150-year-old algorithmic approach to computing might be flawed. On the contrary, they loved it. Academics like Fred Brooks decreed to the world that the reliability problem is unsolvable and everybody worshipped the ground he walked on. Alan Turing was elevated to the status of a deity and the Turing machine became the de facto computing model. As a result, the true nature of computing has remained hidden from generations of programmers and processor architects. Unreliable software was accepted as the norm. Needless to say, with all this crap going on, I quickly became disillusioned with computer science. I knew instinctively what had to be done but the industry was and still is under the firm political control of a bunch of old computer geeks. And, as we all know, computer geeks believe and have managed to convince everyone that they are the smartest human beings on earth. Their wisdom and knowledge must not be questioned. The price [pdf], of course, has been staggering.

In Their Faces

What really bothers me about computer scientists is that the solution to the parallel programming and reliability problems has been in their faces from the beginning. We have been using it to emulate parallelism in such applications as neural networks, cellular automata, simulations, VHDL, Verilog, video games, etc. It is a change-based or event-driven model. Essentially, you have a global loop and two buffers (A and B) that are used to contain the objects to be processed in parallel. While one buffer (A) is being processed, the other buffer (B) is filled with the objects that will be processed in the next cycle. As soon as all the objects in buffer A are processed, the two buffers are swapped and the cycle repeats. Two buffers are used in order to prevent the signal racing conditions that would otherwise occur. Notice that there is no need for threads, which means that all the problems normally associated with thread-based programming are non-existent. What could be simpler? Unfortunately, all the brilliant computer savants in academia and industry were and still are collectively blind to it. How could they not? They are all busy studying the subtleties of Universal Turing Machines and comparing notes.
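Here is a bare-bones sketch of the two-buffer loop in ordinary code. The names (Cell, current, next_buf) are mine, purely illustrative; the point is only the mechanism: processing a cell queues its effects in the other buffer, so nothing fires within the same cycle and there are no signal racing conditions and no threads.

```python
# A minimal sketch of the two-buffer, change-based model: a chain of
# cells where each cell signals its downstream neighbor one cycle later.

class Cell:
    def __init__(self, index):
        self.index = index
        self.downstream = None   # cell to signal on the NEXT cycle

    def react(self, next_buf):
        # Reacting only queues effects for the next cycle, never the
        # current one, which is what prevents signal racing.
        if self.downstream is not None:
            next_buf.append(self.downstream)

# Build a chain of five cells.
cells = [Cell(i) for i in range(5)]
for left, right in zip(cells, cells[1:]):
    left.downstream = right

fired_at = {}                   # cell index -> cycle in which it fired
current = [cells[0]]            # buffer A: seed the first cell
cycle = 0
while current:
    next_buf = []               # buffer B: collects next cycle's work
    for cell in current:
        fired_at[cell.index] = cycle
        cell.react(next_buf)
    current = next_buf          # swap the buffers
    cycle += 1

print(fired_at)   # each cell fires exactly one cycle after its neighbor
```

Running this, the signal marches deterministically down the chain, one cell per cycle, regardless of the order in which a buffer's cells are processed. That order-independence within a cycle is exactly what makes the model parallelizable.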

We Must Reinvent the Computer

I am what you would call a purist when it comes to event-driven programming. In my opinion, everything that happens in a computer program should be event-driven, down to the instruction level. This is absolutely essential to reliability because it makes it possible to globally enforce temporal determinism. As seen above, simulating parallelism with a single-core processor is not rocket science. What needs to be done is to apply this model down to the individual instruction level. Unfortunately, programs would be too slow at that level because current processors are designed for the algorithmic model. This means that we must reinvent the computer. We must design new single- and multiple-core processor architectures to directly emulate fine-grained, signal-driven, deterministic parallelism. There is no getting around it.

Easy to Program and Understand

A pure event-driven software model lends itself well to fine-grain parallelism and graphical programming. The reason is that an event is really a signal that travels from one object to another. As every logic circuit designer knows, diagrams are ideally suited to the depiction of signal flow between objects. Diagrams are much easier to understand than textual code, especially when the code is spread across multiple pages. Here is a graphical example of a fine-grained parallel component (see Software Composition in COSA for more info):

Computer geeks often write to argue that it is easier and faster to write keywords like ‘while’, ‘+’, ‘-’ and ‘=’ than it is to click and drag an icon. To that I say, phooey! The real beauty of event-driven reactive programming is that it makes it easy to create and use plug-compatible components. Once you’ve built a comprehensive collection of low-level components, there is no longer a need to create new ones. Programming will quickly become entirely high-level and all programs will be built entirely from existing components. Just drag’m and drop’m. This is the reason that I have been saying that Jeff Han’s multi-touch screen interface technology will play a major role in the future of parallel programming. Programming for the masses!

Too Many Ass Kissers

I have often wondered what it would take to put an end to decades of crappy computing. Reason and logic do not seem to be sufficient. I now realize that the answer is quite simple. Most people are followers, or more accurately, to use the vernacular, they are ass kissers. They never question authority. They just want to belong in the group. What it will take to change computing, in my opinion, is for an intelligent and capable minority to stop kissing ass and do the right thing. That is all. In this light, I am reminded of the following quote attributed to Mark Twain:

“Whenever you find that you are on the side of the majority, it is time to pause and reflect.”

To that I would add that it is also time to ask oneself, why am I kissing somebody's ass just because everybody else is doing it? My point here is that there are just too many gutless ass kissers in the geek community. What the computer industry needs is a few people with backbones. As always, I tell it like I see it.

See Also:

How to Solve the Parallel Programming Crisis
Why Parallel Programming Is So Hard
Parallel Computing: Why the Future Is Non-Algorithmic
UC Berkeley's Edward Lee: A Breath of Fresh Air
Why I Hate All Computer Programming Languages
The COSA Saga
Transforming the TILE64 into a Kick-Ass Parallel Machine
COSA: A New Kind of Programming
Why Software Is Bad and What We Can Do to Fix It
Parallel Computing: Both CPU and GPU Are Doomed

Thursday, October 18, 2007

Fine-Grain Multicore CPU: Giving It All Away?

All Multicore Related Articles

I was about to upload the second part of my two-part article on Memory Caching for a Fine-Grain, Self-Balancing Multicore CPU when I got to thinking that maybe I am foolish to give all my secrets away. My primary interest in multicore CPU architecture is driven mostly by my enduring passion for artificial intelligence. I have good reasons to believe that true AI will soon be upon us and that our coming intelligent robots will need fast, reliable, portable, self-balancing, fine-grain multicore CPUs using an MIMD execution model. Of course, these CPUs do not exist. Current multicore CPUs are thread-based and coarse-grained. To do fine-grain computing, one would have to use an SIMD (single instruction, multiple data) execution model. As we all know, SIMD-based software development is a pain in the ass.

It just so happened that while working on another interest of mine, software reliability, I devised a developer-friendly software model (see Project COSA) that is a perfect fit for parallel programming. Using this model as a guide, I came up with a novel architecture for a self-balancing, auto-scalable, fine-grain multicore CPU. What would the computer market give for an easy-to-program, fine-grain multicore CPU? I think customers would jump through hoops to get their hands on them, especially when they find out that they can also use them to create rock-solid applications that do not fail.

The point I'm driving at is that I need money for my AI research. I think too many people have benefited from my writings without spending a dime (I know, I keep track of all visitors and a lot of you have been visiting my site for months, if not years). I think this is a good opportunity for me to get the funds that I need. I am sitting on an idea for a multicore CPU that is worth money, lots of money. So, if you, your organization or your government agency are interested in funding or joining in the founding of a multicore startup company, drop me a line and let me know what you can do.

Thursday, October 11, 2007

Is South Korea Poised to Lead the Next Computer Revolution?

All Multicore and Parallel Programming Articles

I have been saying for a long time that the way we currently build and program computers is fundamentally flawed. It is based on a model of computing that is as old as Charles Babbage and Lady Ada Lovelace. The West has turned most of its celebrated computer scientists into demigods and nobody dares to question the wisdom of the gods. Other countries, especially South Korea, China, India and Japan are not handicapped by this problem. They have every reason to question western wisdom, especially if it results in catapulting their societies into technological preeminence.

I have good reasons (that I cannot go into, sorry) to suspect that the South Korean semiconductor industry (e.g., Samsung) may be poised to transform and dominate the multicore processor industry in the coming decades. The Europeans and the North Americans won’t know what hit them until it is too late. I have always admired the Koreans. They are hard workers, very competitive, they have an excellent business sense and a knack for thinking things through. It may have something to do with their love of Baduk, the wonderful ancient Chinese strategy board game also known as Go (Japan) and Weiqi (China). The game forces the player to think very long term, a highly desirable skill in life as well. Unfortunately, all my liberties are taken (pun intended) at the moment and I cannot say much more than I have already said.

Wednesday, October 10, 2007

Who Am I? What Are My Credentials?

People write to me to ask, “Who are you?” or “What are your credentials?”

  • I am a crackpot and a crank. Those are my credentials. Ahahaha…
  • I am a self-taught computer programmer. Ok, I did take a C++ class at UCLA a long time ago, just for grins and giggles. I have programmed in assembly, FORTH, BASIC, C, C++, C#, Pascal, Java, php, asp, etc…
  • I hate computer languages, all of them.
  • I hate operating systems, all of them.
  • I hate computer keyboards, even if I have to use them. They are ancient relics of the typewriter age.
  • I hate algorithmic computing.
  • I hate software bugs.
  • I hate all the crappy multicore processors from Intel, AMD, Tilera, Freescale Semiconductor, ARM, and the others.
  • I actually hate all CPUs, if only because they are all designed and optimized for algorithmic computing.
  • I hate thread-based parallelism.
  • I hate coarse-grain parallelism.
  • I hate threads, period. The thread is the second worst programming invention ever.
  • I hate Erlang’s so-called ‘lightweight’ processes.
  • I believe that, if your parallel language, OS or multicore CPU does not support fine-grain parallelism, it’s crap.
  • I hate the Von Neumann bottleneck.
  • I love synchronous, reactive, deterministic, fine-grain, parallel computing. That’s the future of computing.
  • I love reliable software.
  • I love Jeff Han’s multi-touch screen technology. That’s the future interface of programming. Drag'm and drop'm.
  • I love cruising catamarans.
  • I love people from all over the world.
  • I love Paris, New York City, Provence, French Riviera, Monaco, Nice, Venice, Rome, London, Amalfi coast, Turkey, Miami Beach, the Caribbean, the South Pacific, Hawaii, Polynesia, Thailand, Vietnam, Cambodia, the Philippines, Papua, Sumatra, Australia, New Zealand, Japan, Seychelles, Morocco, Zanzibar, Portugal, Russia (la vieille Russie), Eastern Europe, Northern Europe, Western Europe, India, Sri Lanka, Brazil, Mexico, Vienna, Bolivia, Amazon, Africa, China, Rio de Janeiro, Machu Picchu, Chichen Itza, Tokyo, Greece, Hong Kong, Budapest, Shanghai, Barcelona, Naples, Yucatan peninsula, Texas, Colorado, Alberta, Key West, Central America, South America, Alaska, Montreal, California, San Francisco, Carmel (Cal.), Los Angeles, Baja California, Houston, Seattle, Mazatlan, Vancouver, Chicago, Kernville (Cal.), Yosemite, Grand Canyon, Redwood Forest, Yellowstone, etc… All right. I never set foot in some of those places but I would love to. Come to think of it, I just love planet earth.
  • I love plants and trees and animals.
  • I love astronomy, archaeology, history, science, languages and cultures.
  • I love white water rafting, canoeing, fishing, walking in the woods or in a big city, hiking, bicycling, sailing, scuba diving, surfing. Unfortunately, I can’t do most of these sports for the time being.
  • I love the arts, movies, painting, architecture, theatre, sculpture, photography, ceramics, microphotography, novels, science fiction, poetry, digital arts, haute cuisine, hole-in-the-wall cuisine, home-made cuisine, haute couture, restaurants, interior decorating, furniture design, landscaping, carpentry, all sorts of music.
  • I am passionate about artificial intelligence and extreme fundamental physics. I don't know why. Check out my series on motion.
  • More than anything, I love the Creator who made it all possible.
  • Atheist computer geeks hate me but I laugh in their faces.
  • Shit-for-brains voodoo physicists don’t like me but I crap on their time-travel and black hole religion.
  • I am a Christian but, unlike most Christians, I believe in weird Christian shit. I believe that we are all forgiven (just ask), even computer geeks and crackpot physicists. What’s your chicken shit religion? Ahahaha...
  • If my Bible research offends you, then don't read my blog. It's not meant for you. I need neither your approval, nor your criticism, nor your money. I don't care if you're Bill Gates or the Sultan of Brunei.
  • I’m the guy who hates to say ‘I told you so’ but I told you so. Goddamnit!
  • I am right about software reliability.
  • I am right about parallel programming and multicore processors.
  • I am right about crackpot physics.
  • I am right about the causality of motion and the fact that we are immersed in an immense ocean of energetic particles.
  • I am wrong about almost everything else.
  • Food? Did anybody mention food? I’m glad you asked. I love sushi and sashimi with Napa Valley or South American Merlot, Indian food, Mexican food, Thai food, Chinese Szechwan food, French food, Italian food, Ethiopian food, Spanish food, Korean food, Iranian food, Brazilian food, Cuban food, Malaysian food, Indonesian food, Haitian food, Argentinean food, Peruvian food, Vietnamese food, Cajun food, southern style barbecue ribs, Jamaican food, Yucateco food, Greek food, New York hotdogs, Chicago hotdogs, burritos, tacos de carne asada, tacos al pastor, chipotle, peppers, In-N-Out Burgers, New York pizza, corn tortillas, chiles rellenos, huevos rancheros, pollo en mole, French crepes, French cheeses, Italian cheeses, Japanese ramen (Asahi Ramen, Los Angeles), Japanese curry, soy sauce, sake, tequila with lime, mezcal, rum, rompope, Grand Marnier, cocktails, all sorts of wine, Dijon mustard, chocolate, French pastry, Viennese pastry, German beer, espresso, cappuccino, caffe latte, café Cubano, Starbucks, Jewish deli food, Italian deli food, all sorts of spices, all sorts of seafood, tropical fruits, etc… Ok, you get the picture. As you can see, I love food and this is just a short list. And no, I’m not a fat slob. I am actually skinny.
  • I am part French, part Spanish, part black, part Taino (Caribe Indian) and other mixed ethnic ingredients from the distant past.
  • Oh, yes. I love women, too.
All right. That's enough of me. I got to get back to my AI project now. Later.

Sunday, October 7, 2007

Parallel Programming, Math, and the Curse of the Algorithm

The CPU, a Necessary Evil

The universe can be seen as the ultimate parallel computer. I say ‘ultimate’ because, instead of having a single central processor that processes everything, every fundamental particle is its own little processor that operates on a small set of properties. The universe is a reactive computer as well because every action performed by a particle is a reaction to an action by another particle. Ideally, our own computers should work the same way. Every computer program should be a collection of small reactive processors that perform elementary actions (operations) on their assigned data in response to actions by other processors. In other words, an elementary program is a tiny behaving machine that can sense and effect changes in its environment. It consists of at least two actors (a sensor and an effector) and a changeable environment (data variable). In addition, the sensor must be able to communicate with the effector. I call this elementary parallel processor the Universal Behaving Machine (UBM).
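To make the idea concrete, here is a minimal sketch of a UBM in Python. The class names and structure are my own illustration of the sensor/effector/environment triad described above, nothing more:

```python
# Hypothetical sketch of a Universal Behaving Machine (UBM):
# one sensor, one effector, and a shared environment (a data variable).

class Environment:
    def __init__(self, value=0):
        self.value = value

class Effector:
    """Acts on the environment when signaled."""
    def __init__(self, env, action):
        self.env = env
        self.action = action
    def signal(self):
        self.env.value = self.action(self.env.value)

class Sensor:
    """Detects a condition in the environment and signals its effector."""
    def __init__(self, env, condition, effector):
        self.env = env
        self.condition = condition
        self.effector = effector
    def sense(self):
        if self.condition(self.env.value):
            self.effector.signal()

# A UBM that clamps a value: if it exceeds 10, reset it to 0.
env = Environment(value=12)
ubm_sensor = Sensor(env, lambda v: v > 10, Effector(env, lambda v: 0))
ubm_sensor.sense()
print(env.value)  # 0
```

The point of the sketch is that nothing here is a function returning a value; the sensor detects a change and the effector effects one.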

More complex programs can have an indefinite number of UBMs and a sensor can send signals to more than one effector. Unfortunately, even though computer technology is moving in the general direction of our ideal parallel computer (one processor per elementary operator), we are not there yet. And we won’t be there for a while, I’m afraid. The reason is that computer memory can be accessed by only one processor at a time. Until someone finds a solution to this bottleneck, we have no choice but to use a monster known as the CPU, a necessary evil that can do the work of a huge number of small processors. We get away with it because the CPU is very fast. Keep in mind that understanding the true purpose of the CPU is the key to solving the parallel programming problem.

Multicore and The Need for Speed

Although the CPU is fast, it is never fast enough. The reason is that the number of operations we want it to execute in a given interval keeps growing all the time. This has been the main driving force behind CPU research. Over the last few decades, technological advances ensured a steady stream of ever faster CPUs but the technology has gotten to a point where we can no longer make them work much faster. The solution, of course, is a no-brainer: just add more processors into the mix and let them share the load, and the more the better. Multicore processors have thus become all the rage. Unsurprisingly, we are witnessing an inexorable march toward our ideal computer in which every elementary operator in a program is its own processor. It’s exciting.

Mathematicians and the Birth of the Algorithmic Computer

Adding more CPU cores to a processor should have been a relatively painless evolution of computer technology but it turned out to be a real pain in the ass, programming-wise. Why? To understand the problem, we must go back to the very beginning of the computer age, close to a hundred and fifty years ago, when an Englishman named Charles Babbage designed the world’s first general purpose computer, the analytical engine. Babbage was a mathematician and like most mathematicians of his day, he longed for a time when he would be freed from the tedium of performing long calculation sequences. All he wanted was a reasonably fast calculator that could reliably execute mathematical sequences or algorithms. The idea of using a single fast central processor to emulate the behaviors of multiple small parallel processors was the furthest thing from his mind. Indeed, the very first program written for the analytical engine by Babbage’s friend and fellow mathematician, Lady Ada Lovelace, was a table of instructions meant to calculate the Bernoulli numbers, a sequence of rational numbers. Neither Babbage nor Lady Ada should be faulted for this but modern computers are still based on Babbage’s sequential model. Is it any wonder that the computer industry is having such a hard time making the transition from sequential to parallel computing?

Square Peg vs. Round Hole

There is a big difference between our ideal parallel computer model in which every element is a parallel processor and the mathematicians’ model in which elements are steps in an algorithm to be executed sequentially. Even if we are forced to use a single fast CPU to emulate the parallel behavior of a huge number of parallel entities, the two models require different frames of mind. For example, in a true parallel programming model, parallelism is implicit but sequential order is explicit, that is to say, sequences must be explicitly specified by the programmer. In the algorithmic model, by contrast, sequential order is implicit and parallelism must be explicitly specified. But the difference is even more profound than this. Whereas an element in an algorithm can send a signal to only one other element (the successor in the sequence) at a time, an element in a parallel program can send a signal to as many successors as necessary. This is what is commonly referred to as fine-grain or instruction-level parallelism, which is highly desirable but impossible to obtain in an MIMD execution model using current multicore CPU technology.
The image above represents a small parallel program. A signal enters at the left and a ‘done’ signal is emitted at the right. We can observe various elementary parallel operators communicating with one another. Signals flow from the output of one element (small red circle) to the input of another (white or black circle). The splitting of signals into multiple parallel streams has no analog in an algorithmic sequence or thread. Notice that parallelism is implicit but sequential order is explicit. But that’s not all. A true parallel system that uses signals to communicate must be synchronous, i.e., every operation must execute in exactly one system cycle. This ensures that the system is temporally deterministic. Otherwise signal timing quickly gets out of step. Temporal determinism is icing on the parallel cake because it solves a whole slew of problems related to reliability and security.
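The synchronous, signal-splitting behavior described above can be sketched in a few lines of Python. This is my own illustration (the names are made up); the key points are that one output can fan out to several successors and that signals are buffered so every active cell fires in the same virtual cycle, regardless of evaluation order:

```python
# A tiny synchronous signal graph. Output signals are buffered and delivered
# at the next cycle, so the system is temporally deterministic.

class Cell:
    def __init__(self, name):
        self.name = name
        self.targets = []      # fan-out: one output can feed many inputs
        self.pending = False   # a signal arriving this cycle

def connect(src, dst):
    src.targets.append(dst)

def run(cells, start, cycles):
    log = []
    start.pending = True
    for cycle in range(cycles):
        # Phase 1: collect every cell with a pending signal.
        active = [c for c in cells if c.pending]
        for c in cells:
            c.pending = False
        # Phase 2: each active cell fires and signals all its successors.
        for c in active:
            log.append((cycle, c.name))
            for t in c.targets:
                t.pending = True
    return log

a, b, c, d = (Cell(n) for n in "abcd")
connect(a, b)
connect(a, c)   # the signal splits into two parallel streams...
connect(b, d)
connect(c, d)   # ...and converges on the 'done' cell, which fires only once
log = run([a, b, c, d], a, 3)
print(log)  # [(0, 'a'), (1, 'b'), (1, 'c'), (2, 'd')]
```

Note that b and c fire in the same cycle (implicit parallelism), while the cycle numbers themselves carry the explicit sequential order.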

It should be obvious that using Babbage’s and Lady Ada’s 150-year-old computing model to program a parallel computer is like trying to fit a square peg into a round hole. One would think that, by now, the computer industry would have figured out that there is something fundamentally wrong with the way it builds and programs computers but, unfortunately, the mathematicians are at it again. The latest trend is to use functional languages like Erlang for thread-based parallel programming. Thread-based, coarse-grain parallelism is a joke, in my opinion. There is a way to design a fine-grain, self-balancing multicore CPU for an MIMD execution environment that does not use threads. Threaded programs are error-prone, hard to program and difficult to understand. Decidedly, the notion of a computer as a calculating machine will die hard. It is frustrating, to say the least. When are we going to learn?

Lifting the Curse of the Algorithm

To solve the parallel programming problem, we must lift the curse of the algorithm. We must abandon the old model and switch to a true parallel model. To do so, we must reinvent the computer. What I mean is that we must change, not only our software model, but our hardware model as well. Current CPUs were designed and optimized for the algorithmic model. We need a new processor architecture (both single core and multicore) that is designed from the ground up to emulate non-algorithmic, synchronous parallelism. It’s not rocket science. We already know how to emulate parallelism in our neural networks and our cellular automata. However, using current CPUs to do so at the instruction level would be too slow. The market wants super fast, fine-grain, self-balancing and auto-scalable multicore processors that use an MIMD execution model. It wants parallel software systems that are easy to program and do not fail. Right now there is nothing out there that fits the bill.
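As a simple example of the kind of synchronous parallel emulation mentioned above, here is a one-dimensional cellular automaton (rule 90) updated with a two-buffer scheme. Because each new cell state is computed only from the old buffer, every cell effectively "fires" in the same virtual cycle, no matter what order the CPU evaluates them in:

```python
# Rule 90 cellular automaton: each cell becomes the XOR of its two neighbors.
# Reading only from the old buffer and writing only to a new one emulates
# synchronous, fine-grain parallelism on a sequential CPU.

def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(2):
    row = step(row)
print(row)  # [0, 1, 0, 0, 0, 1, 0]
```

The two-buffer trick is the whole secret: it is what makes the emulation deterministic, and it is also why doing this at the instruction level on current CPUs is slow.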

The Next Computer Revolution

It remains to be seen who, among the various processor manufacturers, will be the first to see the light. Which nation will be the standard bearer of the new computing paradigm? When will the big switch happen? Who knows? But when it does, it will be the dawning of the next computer revolution, one which will make the first one pale in comparison. We will be able to build super fast computers and programs of arbitrary complexity that do not fail. It will be the true golden age of automation. I can’t wait.

[This article is part of my downloadable e-book on the parallel programming crisis.]

See Also:

Nightmare on Core Street
Why Parallel Programming Is So Hard
The Age of Crappy Concurrency: Erlang, Tilera, Intel, AMD, IBM, Freescale, etc…
Half a Century of Crappy Computing
Parallel Computers and the Algorithm: Square Peg vs. Round Hole

Thursday, October 4, 2007

The Intel Cartel: Algorithmic Dope Dealers

All Multicore and Parallel Programming Articles

Cry Babies

It seems that all Intel does lately is bitch about how hard parallel programming is and how programmers are not using enough threads. Their latest tantrum is about how there are too many parallel languages to choose from. Does anybody else sense a wee bit of panic in Intel’s camp? The company has bet all its marbles on multicore CPUs being the big money maker for the foreseeable future, which is understandable. The problem is that most legacy software cannot take advantage of multiple cores and programmers are having a hell of a hard time writing good parallel software. So what’s Intel’s solution? Bitching, whining, jumping up and down and foaming at the mouth, all the while, making a royal fool of itself. Haysoos Martinez! What a bunch of cry babies you people are!

Algorithmic Cocaine

I got news for you, Intel. Stop blaming others for your own mistakes. You are the primary cause of the problem. You, more than any other company in this industry, got us into this sorry mess. You made so much money, over the years, milking algorithmic cocaine from that fat cow of yours that it never occurred to you that the cow might run dry some day. Now that you’ve got everybody stoned and addicted, they keep coming back for more. But there is no more. Moore’s law is no longer the undisputed law of the land. “Mix threads with your dope!”, you scream at them with despair in your voice, but they’re not listening. And they keep coming. Worse, you got so stoned consuming your own dope, you cannot see a way out of your self-made predicament. Your only consolation is that all the other dope dealers (AMD, IBM, Sun Microsystems, Freescale Semiconductor, Motorola, Texas Instruments, Tilera, Ambric, ARM, etc…) are in the same boat with you. I don’t know about the rest of you out there but methinks that the Intel cartel is in trouble. Deep trouble. It's not a pretty picture.

The Cure

We all know what the problem is but is there a cure? The answer is yes, of course, there is a cure. The cure is to abandon the algorithmic software model and to adopt a non-algorithmic, reactive, implicitly parallel, synchronous model. I have already written enough about this subject and I am getting tired of repeating myself. If you people at Intel or the other companies are seriously interested in solving the problem, below are a few articles for your reading pleasure. If you are not interested, you can all go back to whining and bitching. I am not one to say I told you so, but the day will come soon when I won’t be able to restrain myself.

The Age of Crappy Concurrency: Erlang, Tilera, Intel, AMD, IBM, Freescale, etc…
Parallel Programming, Math, and the Curse of the Algorithm
Half a Century of Crappy Computing
Parallel Computers and the Algorithm: Square Peg vs. Round Hole
Don’t Like Deadlocks, Data Races and Traffic Accidents? Kill the Threads
Why I Think Functional Programming Languages Like Erlang and Haskell are Crap
Killing the Beast
Why Timing Is the Most Important Thing in Computer Programming
Functional Programmers Encourage Crappy Parallel Computing
How to Design a Self-Balancing Multicore CPU for Fine-Grain Parallel Applications
Thread Monkeys: Tile64 and Erlang
COSA, Erlang, the Beast, and the Hardware Makers
Tilera vs. Godzilla

Wednesday, October 3, 2007

Darwinian Software Composition

Unintentional Software

Last night, I got to thinking again about Charles Simonyi’s intentional software project and it occurred to me that a domain expert or software designer does not always know exactly what he or she wants a new software application to look like or even how it should behave. Initially, a designer may have a partially-baked idea of the look and feel of the desired application. However, even though we may not always know what we want, we can all recognize a good thing when we see it. This is somewhat analogous to a musician searching for the right notes for a new melody idea. The composer may end up with a final product that is not exactly as originally envisioned but one that is nevertheless satisfactory. Searching can thus be seen as an indispensable part of designing. It adds an element of randomness into the process. What makes this approach attractive is that it is highly interactive and it works. It dawned on me that a similar approach could be used when designing software in a COSA environment.

Relaxing the Rules

Normally, COSA uses strict plug-compatibility criteria to connect one component to another.

Two connectors may connect to each other only if the following conditions are met:

  1. They have opposite gender (male and female).

  2. They use identical message structures.

  3. They have identical type IDs.

As you can see, there is no possibility for mismatched components in a COSA application. Strict plug compatibility allows components to automatically and safely snap together. This is fine and desirable in finished applications and components but what if the rules were relaxed a little during development? What if we could instruct the development environment to temporarily disregard the third compatibility criterion in the list above? This would allow the designer to try new component combinations that would otherwise be impossible. The only problem is that, more often than not, the new combinations would result in timing conflicts, i.e., bugs.
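The three compatibility rules, plus the proposed relaxed development mode, could look something like this. To be clear, the `Connector` class and its field names are my own hypothetical sketch, not actual COSA data structures:

```python
# Sketch of COSA-style plug compatibility: opposite genders, identical
# message structures, identical type IDs. In relaxed (development) mode,
# the third rule is temporarily disregarded.

from dataclasses import dataclass

@dataclass
class Connector:
    gender: str               # 'male' or 'female'
    message_structure: tuple  # illustrative field layout of the message
    type_id: int

def compatible(a, b, relaxed=False):
    if a.gender == b.gender:                        # rule 1: opposite genders
        return False
    if a.message_structure != b.message_structure:  # rule 2: identical messages
        return False
    if not relaxed and a.type_id != b.type_id:      # rule 3: identical type IDs
        return False
    return True

plug = Connector('male', ('int', 'int'), type_id=7)
jack = Connector('female', ('int', 'int'), type_id=9)
print(compatible(plug, jack))                # False: type IDs differ
print(compatible(plug, jack, relaxed=True))  # True: rule 3 disregarded
```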

Good Bugs vs. Bad Bugs

In general, computer languages try to prevent bugs as much as possible. Most of the bugs that used to plague assembly language programmers in the past are now gone. With the current trend toward thread-based, multicore computers, a lot of effort has gone into making programs thread-safe. The problem has to do with multiple threads accessing the same data in memory. This situation can lead to all sorts of conflicts because the timing of access is not deterministic. Functional languages avoid the problem altogether by eliminating variables and thus disallowing side effects between threads. The COSA philosophy, however, is that side effects between concurrent modules should be welcome. A bug is bad only if it is not found. Since COSA programs are reactive and temporally deterministic, all data access conflicts (motor conflicts) between connected modules can be discovered automatically. What this means is that fast trial-and-error composition becomes feasible. But it gets even better than that.
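To give a feel for why temporal determinism makes this automatic discovery possible, here is a sketch (my own, with assumed names) of a motor-conflict detector: record every write as a (cycle, variable, module) triple, then flag any variable written by two different modules in the same cycle.

```python
# Motor-conflict detection in a temporally deterministic system: because
# every write happens at a known cycle, conflicts are simple to enumerate.

from collections import defaultdict

def find_conflicts(writes):
    """writes: list of (cycle, variable, module) tuples."""
    by_slot = defaultdict(set)
    for cycle, var, module in writes:
        by_slot[(cycle, var)].add(module)
    return sorted(slot for slot, modules in by_slot.items() if len(modules) > 1)

trace = [
    (0, 'x', 'A'),
    (1, 'x', 'A'),
    (1, 'x', 'B'),   # A and B both write x in cycle 1: a motor conflict
    (2, 'y', 'B'),
]
print(find_conflicts(trace))  # [(1, 'x')]
```

In a nondeterministic threaded system no such trace is stable from run to run, which is exactly why these bugs stay hidden there.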

Darwinian Selection

Given that, in a COSA development environment, components can connect themselves autonomously and that motor conflicts can be discovered automatically, it is not hard to envision a mechanism that can compose reliable and possibly useful applications through random trial and error, from a pool of pre-built components. Simple survival of the fittest. Of course, it is always up to the human developer to decide whether or not to accept the system's inventions but this would take software development up to a new level of productivity and serendipity. Who knows, a nice surprise could pop up every once in a while.

PS. Some of my long term readers may find it strange that I, a Christian, would be using words like 'Darwinian selection'. Well, the world is full of surprises, isn't it?

Tuesday, October 2, 2007

The ‘Everything Is a Function’ Syndrome

The Not So Great Brainwashed Masses

It never ceases to amaze me how effective education can be at brainwashing people. Skinner was right about conditioning. Not that this is necessarily bad, mind you (that’s what religion, which includes scientism, is all about), but there is good brainwashing and bad brainwashing. Here is a case in point. Brainwashed functional programming fanatic namekuseijn (also calls himself Piccolo Daimao) claims that a computer is fundamentally a mathematical machine and that everything that a computer does can be seen as a function that returns a value. In response to a comment Daimao posted on my blog recently, I wrote the following:

Truth is, computing is about behavior. And behavior is about sensing, acting and timing. This means that a computer program is a collection of elementary sensors (comparators), effectors (operators), an environment (variable data) and a timing mechanism. That is all.
Daimao replies:

-elementary sensors (comparators)

that seems to me like a function taking 2 or more arguments and producing as result one of them. Or a multiplexer.

-effectors (operators)

it's a function which takes arguments and returns results. In low level Von Neumann machine, this may mean the result of the computation is put into a register or a set of registers.

-environment (variable data)

function scope.

-timing mechanism

flow of control: you start with a function and goes evaluating it step-by-step.

Anthropomorphizing the Computer

What Daimao will probably never grasp (brainwashed people rarely change their minds) is that what he’s doing is anthropomorphizing the computer. In his view, the computer doesn’t just do math, it becomes a mathematician: it takes mathematical arguments, performs calculations and returns mathematical results. Never mind that a computer is merely reacting to changes by effecting new changes. And, when you think about it, effecting changes is nothing but flipping bits (electric potentials). The math stuff is all in Daimao’s mind but don’t tell him that. He’s liable to go into an apoplectic fit.

Of course, it will never occur to Daimao that what he refers to as “taking arguments” is not a mathematical operation at all but effects carried out by the computer: some bits are flipped in a memory area that we call the stack and in a special register that we call the stack pointer. Likewise, returning a result is another stack effect carried out by the computer. Daimao comes close to seeing the truth (“the result of the computation is put into a register or a set of registers”) but he dismisses it as “low level”. Again, putting something in a register has nothing to do with math. It is just another effect carried out by the computer.


Forget about Daimao’s notion that data variables (changeable environment) constitute “function scope” (it’s just more silly anthropomorphizing). Right now, I want to address Daimao’s assertion that timing is just flow of control. This is something that is close to my heart because I have been saying for a long, long time that timing is the most important thing in computing. My primary claim is that computing is strictly about behaving and that an elementary behavior is a precisely timed sensorimotor phenomenon. Timing is to computing what distance is to architecture. At least, it should be.

How does flow of control (another term that stands for algorithm) guarantee action timing in functional (math-based) programs, since math is timeless to begin with? There is nothing in a math operation (taking arguments and returning results) that specifies its temporal order relative to other operations. Of course, one can argue that the algorithm itself is a mathematical timing concept but I beg to differ. People had been performing step-by-step procedures long before mathematicians thought of them as being part of math. Note that executing an algorithmic program consists of performing all sorts of sensorimotor behaviors such as incrementing a pointer, copying data to registers, performing an operation, copying data to memory, sensing a clock pulse, etc… In reality, everything in a computer is already signal-based (change-based) but the ubiquitous math metaphors make it hard to see. Every behaving entity (operation) sends a signal (clock pulse and index counter) to the next operation in a sequence meaning, now it’s your turn to execute. The problem is that signal flow (communication) within a function follows a single thread and cannot split into multiple threads. This is a big problem if you want fast, fine-grain parallelism, which is the future of computing.

Implicit vs. Explicit Temporal Order

The point that I am driving at is that there is nothing in functional programming that allows a program to make decisions pertaining to the relative temporal order (concurrent or sequential) of elementary operations. Temporal order is not explicit in a function; it is both implicit and inherently sequential. Explicit temporal order is a must for reliable software systems because it makes it possible to build deterministic parallel systems. Explicit temporal order simply means that a system is reactive, that is, actions (operations) are based on change (timed signals). A purely reactive system is one where every action occurs instantaneously upon receiving a signal, that is, it executes itself within a single system cycle. Since there should not be any changes in the temporal behavior of a deterministic system, timing watchdogs can be inserted in the system to alert the designer of any change (could be due to hardware failure or a modification to the system software). Deterministic timing makes for super fast, fine-grain parallelism because it gives small parallel processes access to shared memory without having to worry about contentions.
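The timing-watchdog idea is simple enough to sketch in a few lines. This is my own illustration of the principle, not any particular implementation: a reference run records the cycle at which a watched operation fires, and any later run in which it fires at a different cycle signals that the system's temporal behavior has changed.

```python
# A timing watchdog for a temporally deterministic system: since timing
# should never change, any deviation from the reference timing is an alarm
# (possible hardware failure or a modification to the system software).

class Watchdog:
    def __init__(self):
        self.expected = None

    def observe(self, cycle):
        if self.expected is None:
            self.expected = cycle        # learn the reference timing
            return True
        return cycle == self.expected    # later runs must match exactly

dog = Watchdog()
dog.observe(5)          # reference run: the operation fired at cycle 5
print(dog.observe(5))   # True: timing unchanged
print(dog.observe(6))   # False: temporal behavior has drifted
```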

Reactive Behavior and Data Dependencies

The future of computing is not thread-based parallelism but fine-grain, self-balancing, parallel, multicore CPUs using an MIMD execution model. Other than the fact that functional programming encourages the continued manufacture and use of coarse-grain parallel computers (see The Age of Crappy Concurrency), the biggest problem I see with functional programming is that it makes it impossible to implement a mechanism that automatically discovers and resolves data dependencies, an absolute must for reliability. This is only possible in a purely reactive system. I will not go into it here but suffice it to say that functional languages should not be seen as general purpose computer languages and FP must not be promoted as a software model. The lack of timing control and reactivity makes FP inadequate for safety-critical systems where software failure is not an option.

In conclusion, I'll just reiterate my point. Everything in computing is not a function. Everything is behavior.

Monday, October 1, 2007

Adobe's Macromedia Director MX 2004™

Macromedia Director™ is a powerful multimedia authoring tool. It has been around since the eighties and it is obvious that they put a lot of thought into creating a clean and intuitive user interface. It is easy to learn once you understand the underlying movie metaphor. It comes with a choice of two scripting languages, Lingo (the original Director language) and JavaScript. Even though it was originally intended for applications that use things like movies, sprites, sounds and animations, there is no reason that it cannot be used for general purpose application development. It has support for most common user interface functions like buttons, menus, lists, windows, textboxes, etc… Director applications can be played in Windows™, on the Macintosh™, or directly within a web browser with the use of Adobe’s Shockwave technology. In addition, there is a sizeable supply of third party extensions (many are free) that add to its functionality. In sum, I think it is a pretty awesome all-around software development tool. Why isn't everybody using it?

My take is that Director is ideal for creating complex graphical user interfaces that involve displaying and manipulating graphical objects on the screen. So it is certainly well-suited for developing a COSA Editor. Since third-party extensions can be used for database access, it should not be too hard to create a keyword-browsable object repository for COSA modules/components. I am not sure how a Director application can exchange messages with other running applications but I suspect it can be done. I’m thinking that the COSA Editor should have the ability to communicate directly with a running COSA virtual machine (CVM). This way, a COSA developer could easily modify a running COSA application on the fly. There should be no need for compiling or saving the app to a file, in my opinion. Visually tracing signal flow within a running application would be nice as well.

Having said that, I think that writing a COSA Editor and a CVM is a major undertaking, regardless of the chosen tool. I had intended to start a dev project and let others finish it but, the more I think about it, the more I realize that it’s not going to work out. A lot of thought must go into designing, not only the user interface, but also the underlying data structures for each and every COSA effector and sensor. So I am back to where I started: I can’t do it. I just can’t devote the time to it. Unless somebody or some organization is willing to dump some serious money into this project, I am afraid that Project COSA will continue to be just an idea whose time has not yet arrived. And by serious money, I am talking about at least ten million dollars because, in my opinion, design and development of a COSA-compatible, fine-grain, multicore CPU and a COSA embedded operating system must happen more or less concurrently.

So this is how it stands, for now. The world will just have to continue to make do with crappy multicore CPUs, crappy operating systems and crappy programming languages, not to mention all the bug-infested software. Oh well. No need to despair, though. COSA is getting a fair share of publicity, these days. Sooner or later, something is bound to happen.

Sunday, September 30, 2007

Korea Advanced Institute Of Science And Technology

All right. You guys at KAIST (Daejeon, Seoul) have been hitting the Silver Bullet and Project COSA pages almost daily for the last few weeks. KAIST has a solid reputation for top-notch research and education in Asia and the rest of the world. What are you working on that involves Project COSA? Is this a class assignment or what? Talk to me.

Saturday, September 29, 2007

Functional Programmers Encourage Crappy Parallel Computing

All Functional Programming Articles

The Computer Is a Behaving Machine, Not a Calculator

The idea of a computer as a calculating machine has a long history. Ever since the Persian mathematician Muhammad ibn Mūsā al-Khwārizmī invented the algorithm (the word algorithm derives from ‘al-Khwārizmī’) in 825 AD as a problem-solving method, people have dreamt of creating machines that could perform long and tedious calculation sequences automatically. Charles Babbage was the first to design and build (partially) such a machine. Babbage’s friend and fellow mathematician, Lady Ada Lovelace, became the world’s first programmer for having written the first computer program (table of instructions) for Babbage’s analytical engine. I have nothing against mathematicians wanting to build function calculators. But I do have a problem with mathematicians wanting to force the world to adopt their antiquated concept of what a general purpose computer should be, especially in the 21st century. The computer is not a calculator, for crying out loud. It is a behaving machine. Calculating is just one of the many types of behaviors that computers can perform.

A Computing Model for the 21st Century

The idea of a computer as a machine for calculating sequences of operations (algorithms or threads) is fundamentally wrong. It is the primary reason that the computer industry is in the mess that it is: software is buggy, expensive and hard to develop. The problem with software will disappear only when the computer industry shakes its addiction to the algorithmic software model and wakes up to the fact that a computer is not a calculator. A computer program should be seen as a collection of elementary parallel, synchronous, reactive, behaving entities that use signals to communicate with each other. The algorithmic model has served us well in the last century but now that the industry is transitioning from sequential computing to massive parallelism, it is time to switch to a new model, one that is worthy of the 21st century.

Thread Monkeys

If you think that functional languages like Erlang or Haskell should be used for concurrent programming, you are a thread monkey. You are a hindrance to progress in computer technology. Why? Because you are encouraging multicore CPU manufacturers like Intel, AMD, IBM, Sun Microsystems, Freescale Semiconductor, Tilera and others to continue to make multicore CPUs that support coarse-grain, thread-based parallelism at a time when they should be trying to build fine-grain, auto-scalable and self-balancing multicore CPUs. You people are promoting what I have been calling the age of crappy concurrency.

Inadequacy of Functional Programming

The market wants super fast multicore CPUs and operating systems that support fine-grain parallel computing. It wants systems that are bug-free and easy to program. Do functional languages provide what the market wants? Not even close, in my opinion. Sure, they are a better choice for thread-based parallelism than imperative languages but, as I explained in a previous article, they lack something that is essential to reliable software: the ability to automatically find and resolve data dependencies. This feature can only be implemented in a purely reactive, deterministic and synchronous system. And please, don’t give me the crap about Erlang being great for writing reliable concurrent programs. The whole idea behind concurrency in Erlang, as stated by Joe Armstrong himself, is to provide a mechanism for fault tolerance. Fault tolerance assumes unreliability; it does not prevent it. FP is inadequate for safety-critical applications for this reason.

In conclusion, my advice to FP fanatics is to promote functional languages for what they do best: solving math functions. Do not advertise FP as the solution to the parallel programming problem. It is not. Crappy parallelism, yes. But true fine-grain parallelism, I think not.

Thursday, September 27, 2007

How to Make Computer Geeks Obsolete: The Future of Software Design

Charles Simonyi

I just finished reading a very interesting article over at MIT Technology Review about former Microsoft programming guru and billionaire, Charles Simonyi. Essentially, Simonyi, much like everyone else in the computer business with a head on their shoulders, realized that there is something fundamentally wrong with the way we construct software. So, while working at Microsoft, he came up with a new approach called intentional programming to attack the problem. Seeing that his bosses at Microsoft were not entirely impressed, Simonyi quit his position and founded his own company, Intentional Software Corporation, to develop and market the idea. It’s been a while, though. I am not entirely sure what’s holding things up at Intentional but methinks they may have run into a brick wall and, knowing what I know about Simonyi’s style, he is probably doing some deconstruction and reconstruction.

Sorry, Charlie, Geeks Love the Dark Ages

There is a lot of secrecy surrounding the project but, in my opinion, Simonyi and the folks at Intentional will have to come around to the conclusion that the solution will involve the use of graphical tools. At this week’s Emerging Technology Conference at MIT, Simonyi tried to convince programmers to leave the Dark Ages (LOL), as he put it. His idea is to bring the business people (i.e., the domain experts) into software development. I applaud Simonyi’s courage but my question to him is this: if your goal is to turn domain experts into developers, why give a talk at a techie conference? The last thing a computer geek wants to hear is that he or she may no longer be needed. In fact, based on my own personal experience, the geeks will fight Simonyi every step of the way on this issue. Ironically enough, geeks are the new Luddites of the automation age. Unfortunately for the geeks but fortunately for Simonyi, he is not exactly looking for venture capital. With about a billion dollars in his piggy bank, a mega-yacht in the bay and Martha Stewart at his side, the man can pretty much do as he pleases.

The Future of Software Development

In my opinion, Simonyi does not go far enough. In his picture of the future of software development, he sees the domain expert continuing to work side by side with the programmer. In my picture, by contrast, I see only the domain expert gesturing in front of one of Jeff Han’s multi-touch screens and speaking into a microphone. The programmer is nowhere to be seen. How can this be? Well, the whole idea of automation is to make previous expertise obsolete so as to save time and money, right? Programmers will have joined blacksmiths and keypunch operators as the newest victims of the automation age. Sorry. I am just telling it like I see it. But don't feel bad if you're a programmer because, eventually, with the advent of true AI, even the domain expert will disappear from the picture.

Intentional Design vs. Intentional Programming

The way I see it, future software development will be strictly about design and composition. Forget programming. I see a software application as a collection of concurrent, elementary behaving entities organized into plug-compatible modules that communicate via message connectors. Modules are like pieces in a giant picture puzzle. The main difference is that modules are intelligent: they know how to connect to one another. For example, let’s say you are standing in front of your beautiful new multi-touch screen and you are composing a new business application. Suppose you get to a point where you have some floating-point data that you want the program to display as a bar graph. You simply say “give me a bar graph display module” into the microphone. The problem is, there are all sorts of bar graph display modules available and the computer displays them all on the right side of the screen. No worries. You simply grab all of them with your right hand and throw them into your app space like confetti driven by the wind. And, lo and behold, the one that is compatible with your data magically and automatically connects itself to your app and voilà! You smile and say “clean up!” and all the incompatible modules disappear, as if by magic. You suddenly remember Tom Cruise’s character, John Anderton, in the movie Minority Report and you can barely keep from laughing. Creating software is so much fun! This tiny glimpse of the future of software development is brought to you by Project COSA.
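The automatic-connection part of this vision is less magical than it sounds. A minimal sketch (illustrative Python; the module names and the `accepts` field are hypothetical, not from any real tool) is simply signature matching: each module advertises the message type it accepts, and the composition tool discards incompatible candidates on its own.

```python
# Hypothetical module catalog: each module declares its input message type.
modules = [
    {"name": "BarGraph3D",    "accepts": "float_array"},
    {"name": "BarGraphBasic", "accepts": "float_array"},
    {"name": "PieChart",      "accepts": "percentages"},
    {"name": "TextTable",     "accepts": "strings"},
]

def compatible(modules, data_type):
    """Keep only the modules whose declared input type matches the data,
    i.e., the 'clean up!' step done automatically."""
    return [m["name"] for m in modules if m["accepts"] == data_type]

matches = compatible(modules, "float_array")
```

A real system would match richer connector signatures than a single type tag, but the principle, mechanical compatibility checking instead of hand-wiring, is the same.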

In conclusion, my advice to Charles Simonyi is to start thinking in terms of reactive, plug-compatible parallel objects and to get somebody like Jeff Han on board. Also, stop trying to convince the geeks.

Wednesday, September 26, 2007

COSA and Macromedia Director MX 2004™

Religious Fervor

My motivation in getting the COSA software model adopted by the computer industry is due mainly to my interest in artificial intelligence. We are going to need fast, reliable, easy to program and powerful parallel computers for the coming AI revolution. The current crop of multicore CPUs leaves a lot to be desired, in my opinion. We need auto-scalable, self-balancing CPUs that support fine-grain parallelism (no more coarse-grain, thread-based CPUs, please) using an MIMD execution model. I had always thought that my arguments in favor of adopting a non-algorithmic software model (and a new CPU architecture to support the new model) would be enough to galvanize interest in the developer community. I was wrong. Sure, a handful of people write to tell me that they agree with me but the groundswell of enthusiasm that I had hoped for did not materialize. It turns out that I had badly underestimated the religious fervor with which computer geeks cling to the current paradigm, especially to their favorite programming languages.

I Don't Like Computer Geeks (LOL)

This blog and Project COSA do generate a lot of interest from around the world but as soon as the loud atheist majority within the computer geek community finds out about my religious beliefs (in this regard, my position is simple: if my religious beliefs bother you, then don't read my blog, goddamnit; it's not meant for you) and about my views on the rampant crackpottery that I see in the physics community (“chicken feather voodoo physics” is one of my favorite putdowns, LOL), they use it to attack me personally and brand me as some sort of religious nut or a crank. Not that I care, mind you (the assholes do not put food on my table, thank God), but it only serves to slow the progress of Project COSA. Many of my readers write to advise me that I should write a prototype COSA Editor but, frankly, I am too busy with my AI project to spend much time writing code in order to convince a bunch of computer geeks of the soundness of the COSA model. First, I don’t think that would do it and, second, I have a low opinion of computer geeks in general. I want clear-thinking software and hardware professionals on my side, not a bunch of know-it-all grownup nerds who get all excited about Star-Trek physics crap like multiple universes, brain uploads and time travel through wormholes.

Macromedia Director

With this in mind, I am investigating whether or not Adobe’s Macromedia Director is a good tool with which to quickly develop a COSA Development Studio (CDS). My idea is to use Director to build the user interface and app generator and use C++ or C# to create a small and fast COSA kernel/virtual machine to run the apps. Again, I can’t spend much time on this. I am hoping that I'll be able to start the project and then release it so that others can continue to work on it. I’m going to play with Director for a few days. I’ll let you know what I think. Later.

Monday, September 24, 2007

Why Functional Languages Should Not Be Used for Safety-Critical Applications


I Actually Like Functional Languages But With a Caveat

This may come as a surprise to my enemies in the functional programming community but I happen to like functional languages. Their power of expression cannot be denied. I especially like Mathematica, if only for its beauty. My bone of contention with FP proponents is not that FP is not useful, but that it should not be used for complex, real-time, safety-critical and automation-type systems. I believe that FP should be used mainly for solving mathematical functions, for which it is ideally suited. I have used words like ‘crap’ and ‘abomination’ to characterize FP but that’s mostly for effect. It bothers me that FP proponents are championing functional languages (especially Erlang) as the solution for the parallel programming problem. It is not. Erlang is encouraging coarse-grain, inherently non-deterministic, thread-based parallelism at a time when the computer industry should be shooting for fine-grain deterministic parallelism, an essential requirement for safe and reliable software systems. I also want to draw attention to the fact that FP is not a software model. A functional language is mainly a specialized problem-solving tool that sits on top of an existing model, which happens to be the algorithmic software model. As such, it inherits the principal drawback of algorithms: unreliability.

FP and Safety-Critical Applications

There is a real danger in thinking that FP even comes close to being a silver bullet for the software reliability problem. If it were, there would be no reason to advertise Erlang as a language for the construction of fault-tolerant concurrent software systems. Fault tolerance presupposes the likely presence of hidden faults, does it not? Note that I am not saying that there is anything wrong with fault tolerance, as long as we are talking about hardware faults. My problem is with software faults. They bother me. They do because I, unlike Joe Armstrong, the main inventor of Erlang, don’t think they are inevitable. The pundits may insist over and over that unreliability is an inherent characteristic of complex software systems but, from my standpoint, they are referring only to algorithmic software. There is no doubt in my mind that, should the computer industry decide to abandon its long, sordid love affair with the algorithmic model and settle down with a nice, non-algorithmic, synchronous reactive model instead, the reliability and productivity crisis would simply vanish.

Finding and Resolving Data Dependencies

The irony of it all is that the very thing that FP gurus are railing against (the use of variables) turns out to be its Achilles’ heel. I understand their rationale. The use of variables introduces unwanted side effects that invariably lead to software failure. However, as I argued in my previous article, getting rid of variables is not the answer. Besides, FP does not really get rid of variables; it replaces them with functions, and these keep their changing values on the stack. It just so happens that traditional variables are part of the solution to a vexing problem in software systems. I am talking about the discovery and resolution of data dependencies. Traditionally, this is done by the programmer but, and this is my main point, it does not have to be. It makes no difference whether one is using declarative or imperative languages: it is easy to miss dependencies in a complex software system. This is especially true if the programmer is in charge of maintaining a legacy system that he or she is not familiar with. Any addition or modification can potentially introduce an unwanted side effect.
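Here is the kind of missed dependency I have in mind, sketched in illustrative Python (all names are hypothetical). The alarm check depends on `pressure`, but nothing in the language ties the check to the writes, so code added later can update the value and silently forget the check.

```python
state = {"pressure": 90, "alarm": False}

def check_alarm():
    # sensor: a comparison that depends on `pressure`
    state["alarm"] = state["pressure"] > 100

def legacy_update(value):
    state["pressure"] = value
    check_alarm()          # the original author remembered the dependency

def new_update(value):
    state["pressure"] = value
    # maintenance code forgot check_alarm(): a hidden dependency is missed

legacy_update(120)         # alarm correctly turns on
new_update(50)             # pressure drops, but the alarm flag goes stale
```

After `new_update(50)`, the pressure is 50 yet the alarm still reads True: one part of the program is operating under a false assumption, exactly the failure mode described above.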

Using Variables to Automatically Find and Resolve Data Dependencies

A data dependency exists whenever one part of a program depends on data changes occurring in another part. Keeping dependent objects up-to-date with the latest changes must be done in a timely fashion. For example, the movements of the mouse or the clicking of a button must be communicated immediately to every target program or module that needs them. What is nice about this is that we can run a simulation program that generates mouse clicks and movements without having to modify the target programs. This is an example of dependencies being resolved automatically in a reactive manner. Let me take it a bit further. Any time you write code to perform a comparison operation on a data item, a new data dependency is born. If you add new code that modifies the data but you forget to invoke the comparison operation in a timely manner, you have a situation where a part of the program is unaware of the change and continues to operate under a false assumption. Failure is bound to ensue. Like I said, this is a major problem when maintaining complex legacy systems. There is only one way to solve the problem and that is to adopt a non-algorithmic, reactive, synchronous software model and allow the use of variables. The idea is to associate every effector (operator) that changes a variable with every sensor (comparator) that may be affected by the change. This can be done automatically in a reactive system, eliminating the problem altogether. You can read the fine details elsewhere.
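The effector/sensor association described above can be sketched in a few lines of illustrative Python (this is my own minimal model, not the COSA implementation): every write to a reactive variable automatically re-runs each comparator registered against it, so no update can be forgotten.

```python
class ReactiveVar:
    """A variable that propagates every change to its registered sensors."""
    def __init__(self, value):
        self._value = value
        self._sensors = []          # comparators affected by this variable

    def watch(self, sensor):
        self._sensors.append(sensor)
        sensor(self._value)         # evaluate once against the current value

    def set(self, value):           # effector: every change is propagated
        self._value = value
        for sensor in self._sensors:
            sensor(value)

alarm = {"on": False}
pressure = ReactiveVar(90)
# The comparison is registered once; it can never be skipped afterwards.
pressure.watch(lambda v: alarm.__setitem__("on", v > 100))

pressure.set(120)                   # alarm turns on automatically
pressure.set(50)                    # and off again, with no extra code
```

Contrast this with the ordinary imperative version, where every writer of `pressure` must remember to re-run the comparison by hand.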

The FP Data Dependency Problem

The problem is that FP forbids the use of variables in functions and FP systems are not reactive. So the question is, how can FP automatically resolve data dependencies? Answer: it cannot. This is the reason, in my opinion, that FP is not suited for safety and mission-critical applications like avionics, air traffic control, power plant control, medical and financial systems, etc… In other words, use it at your own risk.

Sunday, September 23, 2007

Functional Programming Is Worse Than Crap: State Changes Are Essential to Reliability

Geekism Is a Menace to Society


(Skip this rant if you’re only interested in my criticism of FP)

I had assumed that functional programming was a mere harmless sideshow conducted by a bunch of math nerds pretty much for their own benefit and entertainment. Now that I have had a little more time to think about it and after discovering (from doing a few searches on Google) that FP fanatics have been frantically promoting FP for use in safety-critical applications, I must say that I am becoming rather alarmed. Those geeks have found themselves a new religion to latch onto and have taken to proselytizing with a passion akin to that of Catholic missionaries in the days after Columbus set foot in the New World. They are convinced that they have found the holy grail of computing and will not consider any argument to the contrary. Any criticism is seen as a threat to the religion. In a sense, this is understandable since the religion has become their bread and butter.

In my opinion, computing has become too much a vital part of our daily lives to be entrusted entirely to a bunch of nerds whose main goal in life is to convince others of the superiority of their gray matter. Geekism is a threat to national security and society at large because it aims to create an elite class of individuals who consider themselves above public scrutiny and even above scrutiny from the government. In fact, they look down on the general public, not unlike the high priests of ancient religions. There is a need for checks and balances.

The Jihad Against States and State Changes

One of the stated goals of FP is to eliminate side effects between different parts of a program because they are deemed inherently harmful. Never mind, for now, that we are already in crackpot territory (see below), but the FP fanatics' prescription for avoiding side effects is a little strange, to say the least. They want to eliminate variables (changeable states) from programming altogether. And how do they propose to do this amazing feat, pray tell, given that computing is all about sensing and effecting changes? Well, never underestimate a geek’s capacity for self-deception. Their bright idea is that, whenever a variable must be changed, a new variable should be initialized and used in the place of the old one. Huh? Now, hold on a second. Run that by me one more time, por favor. How does creating a new variable to replace the old one not constitute a state change, pray tell? Aren’t these newly created immutable variables used as arguments to other functions so as to generate a new effect? When a function uses a new set of arguments that is not equal in value to the previous set, does that not constitute a change of state? And how is that any different from changing the value of the previous set?
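The argument above can be made concrete with a tiny example in a functional style (illustrative Python; the names are mine). "Updating" an immutable value creates a brand-new one, but any consumer that is handed the new value still observes a change of state:

```python
def add_item(basket, item):
    """Pure functional update: no mutation, a brand-new tuple is built."""
    return basket + (item,)

def total(basket):
    """A downstream consumer of the data."""
    return sum(basket)

before = (1, 2)
after = add_item(before, 3)

# The old value is untouched, as FP promises...
unchanged = (before == (1, 2))
# ...yet the result seen downstream has changed all the same.
changed = (total(before) != total(after))
```

Whether the new state lives in a mutated cell or in a freshly allocated tuple, the program's observable behavior changes either way; the effect has merely been relocated, not eliminated.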

Crackpot Territory

What most FP theorists fail to explain is that, in FP, the function itself is the variable. The variable values of functions are kept on the stack and are used as arguments for other functions. One function affects another. Insisting that there are no variables and thus no side effects in FP is wishful thinking at best and crackpottery at worst. The side effects are obvious to anybody who is willing to look. Certainly, you may claim that there are no side effects within a function, but so what? It remains that one part of a program will affect another and that is unavoidable. Dependencies are a fact of life in software. The point I am driving at is that FP harbors a fundamental flaw in the way it deals with dependencies. Let me explain.

It often happens that a programmer is given the job of adding new functionality to a legacy program. It may happen that the programmer creates a new function to operate on a list but forgets to update an existing function with the new list. Worse, he or she may not even be aware of the existence of the other function or the need to update it. This is what I call a hidden side effect or blind code. That is to say, code that is unaware of changes that are relevant to itself and the proper functioning of the program. Publish and subscribe is not the answer. First, the programmer has to remember to subscribe and second, if the programmer is not familiar with the code, he or she may have no idea that a new subscription may be needed. What is needed is a software system that automatically discovers and resolves all data dependencies in a program. And, unfortunately for the FP lobby, this little bit of magic can only be performed with the use of shared mutable variables.
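The blind-code scenario above can be sketched in illustrative Python (all names hypothetical): a new function with changed behavior is added, but an existing caller is never updated and keeps reading the old logic, with no error and no warning.

```python
orders = [100, 250, 999]

def large_orders():
    # original function: callers treat it as the source of truth
    return [o for o in orders if o > 200]

def large_orders_v2():
    # added later with a new business rule (higher threshold);
    # existing callers were never told about it
    return [o for o in orders if o > 500]

def report():
    # legacy code: still calls the old function, blind to the change
    return len(large_orders())

old_count = report()              # silently keeps the obsolete behavior
new_count = len(large_orders_v2())
```

Nothing in the language forces `report()` to be revisited; the maintainer must simply know it exists. That discovery step is exactly what the text argues should be automated.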

Why I Came to Love State Changes and Their Side Effects

Changes and effects are unavoidable characteristics of software. This is a given. The way I see it, the problem with traditional imperative languages is not that they allow side effects from state changes but that they are neither reactive nor deterministic. There are good side effects and bad side effects. A bad side effect is one that is either not propagated at the right time or not propagated at all, thus leading to false assumptions. A good side effect is one that is propagated immediately to every part of the program that is dependent on or affected by the change. A good side effect is one that cannot be ignored by the software designer because the system will not allow it. If the effect is unwanted, this is immediately obvious and the designer must take steps to correct the problem. The automatic discovery and resolution of state (data) dependencies is a must for reliable software. It can only be done in a purely reactive and deterministic system that uses named variables. For more on this issue, see how COSA handles blind code elimination.

Temporal determinism is essential to software reliability. It means that the execution order (concurrent or sequential) of reactions to changes is guaranteed to remain the same throughout the lifetime of the system. Deterministic control over the precise timing of changes is the only way to prevent deadlocks and unwanted side effects in a parallel system that allows the use of shared variables. There is more to it than that, though. Since the execution order of changes is expected to be stable, any violation should be seen as a potential defect. Timing watchdogs can be inserted in various places within a deterministic program in order to sound an alarm in the event of a violation. Deterministic timing can be used for security purposes but its primary use is ensuring reliability. It forces the designer to investigate any change to the timing of previously created software modules. Furthermore, a timing change may introduce a motor conflict, i.e., a situation where a data object is modified by multiple operators simultaneously. Motor conflicts are temporal in nature and are detectable only in a deterministic system.
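Motor-conflict detection falls out naturally once updates happen in discrete synchronous cycles. A minimal sketch (illustrative Python, my own names, not COSA): writes within one cycle are buffered, and two effectors writing the same variable in the same cycle are flagged.

```python
def run_tick(store, writes):
    """Apply one cycle's worth of (name, value) writes to the store.
    Two writes to the same variable in the same cycle are a motor
    conflict and are reported; the last write wins in this sketch."""
    pending = {}
    conflicts = []
    for name, value in writes:
        if name in pending:
            conflicts.append(name)   # two effectors hit the same variable
        pending[name] = value
    store.update(pending)
    return conflicts

store = {"x": 0, "y": 0}
ok = run_tick(store, [("x", 1), ("y", 2)])     # distinct targets: no conflict
bad = run_tick(store, [("x", 5), ("x", 7)])    # same target, same tick
```

In a non-deterministic threaded system, the same double write would simply race; here, because the cycle boundary is explicit, the conflict is a detectable event rather than a silent corruption.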

Why Functional Programming Is an Abomination

FP fanatics love to boast that FP is perfectly suited for parallel computers because, unlike threads in conventional programs, pure functions are free of side effects. By assigning functions to threads and using message passing between threads, one can create a thread-safe concurrent system. This is all fine and dandy but what the FP crowd conveniently forgets to mention is that there are major drawbacks to using FP in a parallel environment.

  1. FP forces the use of a particular type of parallelism, thread-based parallelism. In other words, if your computer uses fine-grain parallelism (this is the future of computing, you can bet on it), FP will not support it because functions are inherently algorithmic. So if you want to implement a fast, fine-grain, parallel QuickSort routine, you might as well forget using FP. The widespread adoption of FP will only result in encouraging what I call the age of crappy concurrency.
  2. FP programs are notoriously wasteful of CPU cycles because they do not allow the use of shared variables or data. A collection of data can only be modified by one thread at a time. The entire collection must be copied onto a message queue and sent to another function/thread. FP proponents are afraid of side effects because they don't know how to handle them safely.
  3. The structure of FP is such that an automatic mechanism for the discovery and resolution of data dependencies cannot be implemented. The reason is that such a mechanism is only possible in a system that uses variables. This means that additions to legacy FP programs can introduce hidden dependency problems that are potentially catastrophic. FP is not suited for safety-critical applications for this reason.
  4. FP is non-deterministic. This is a serious flaw, in my opinion, because the precise execution order of changes, a must for reliable software, is not guaranteed. This is especially true in a thread-based parallel environment.
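The copy-on-send discipline in point 2 is easy to demonstrate. A sketch in illustrative Python (Erlang-style message passing emulated with a deep copy; the names are mine): with no shared mutable data, the whole collection travels by value, so the receiver's changes never reach the sender's copy.

```python
import copy

def send(message):
    """Emulate share-nothing message passing: the entire collection is
    copied into the message, an O(n) cost paid on every send."""
    return copy.deepcopy(message)

outbox = [1, 2, 3]
inbox = send(outbox)       # receiver gets a private, full copy
inbox.append(4)            # receiver "mutates" only its own copy
```

The isolation is real, which is why this style is thread-safe, but so is the cost: every hand-off of a large collection duplicates it, which is the waste of CPU cycles point 2 complains about.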

Invasion of the Mind Snatchers

(Another rant)

People ask me why I use pejorative qualifiers like “crap” and “geeks” in my arguments. The answer is simple. I use them for psychological effect. Computer geeks are notoriously political animals who cannot fathom that others may look down on their supposed intellectual superiority. Nothing infuriates them more than a total lack of respect. It’s like throwing holy water on a vampire. But unlike a vampire, they want the minds and admiration of the populace, not their blood. Realize that I have nothing to hide. I am engaged in a battle to get the computer industry to adopt what I believe to be the correct approach to computing. It’s an uphill battle because I am asking for nothing less than the reinvention of the computer, both from a hardware and a software point of view. My main opposition comes from computer geeks. Why do they oppose me? Because their livelihoods are at stake, that’s why. If my approach is adopted, they become irrelevant. And the more famous a computer geek is, the more he or she feels threatened by any paradigm that may supplant the one being championed. They’ll defend their turf with everything they’ve got. I, on the other hand, am the fearless and relentless barbarian at the gate, the infidel climbing the makeshift ladder, threatening to overthrow the ramparts. My advice to the geeks is, give up now and join me before it's too late. It’s only a matter of time before the walls come crumbling down. I am in top shape and I've just begun to fight. ahahaha...

Next: Why Functional Languages Should Not Be Used for Safety-Critical Applications