Monday, May 28, 2007

How to Automate Driving Without AI

People tell me that it is impossible to have self-driving cars in a major city because that would require something close to strong artificial intelligence. I personally don't think there is a need for full-blown AI. I believe we could fully automate traffic in any big city using current technology. The system would use collision avoidance software together with RFID devices embedded in the road and readers in the vehicles. The RFID tags would communicate the vehicle's location within the city and could also signal how far the vehicle is from either curb. In addition, vehicles could receive itinerary information and traffic conditions from a central computer via a city-wide wi-fi or similar network.
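To make the division of labor a bit more concrete, here is a minimal Python sketch of how a vehicle might use an embedded road tag for localization and curb-keeping. It is purely illustrative; the tag fields, names and numbers are assumptions, not part of any real system.

    # Purely illustrative sketch: a vehicle using an RFID tag embedded in the
    # road for localization and curb-keeping. Tag fields, names and numbers
    # are assumptions made for this example.

    from dataclasses import dataclass

    @dataclass
    class RoadTag:
        tag_id: str            # ID of the RFID device embedded in the pavement
        x: float               # city grid coordinates encoded in the tag
        y: float
        curb_offset_m: float   # lateral distance from the right-hand curb

    def report_position(tag: RoadTag) -> dict:
        """Message the vehicle would send to the central computer over wi-fi."""
        return {"tag": tag.tag_id, "position": (tag.x, tag.y)}

    def steering_correction(tag: RoadTag, target_offset_m: float = 1.5) -> float:
        """Lateral correction (in metres) needed to hold the desired lane position."""
        return target_offset_m - tag.curb_offset_m

    # The vehicle has drifted half a metre too close to the curb.
    tag = RoadTag("5th-and-main-017", x=1250.0, y=840.0, curb_offset_m=1.0)
    print(report_position(tag))       # {'tag': '5th-and-main-017', 'position': (1250.0, 840.0)}
    print(steering_correction(tag))   # 0.5 -> steer half a metre away from the curb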

The beauty of automated transportation is that it could significantly reduce the number of cars on the road and help save energy. Consider that most vehicles are idle most of the time anyway. Big cities could reduce pollution and energy consumption by banning private vehicles altogether. City dwellers and visitors would be given a pager that they can use to summon transportation at the push of a button. The nearest parked vehicle would then drive itself to the passenger's location and take them to their destination.
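As a rough sketch of the dispatch idea, the central computer could simply send the nearest idle vehicle to whoever presses the button. The fleet data and the straight-line distance metric below are assumptions made for the example.

    # Illustrative dispatch sketch: when a passenger presses the pager button,
    # the central computer sends the nearest idle vehicle. The fleet data and
    # the straight-line distance metric are assumptions made for the example.

    import math

    def nearest_idle_vehicle(fleet, passenger_pos):
        """Pick the closest vehicle that is currently parked and idle."""
        idle = [v for v in fleet if v["status"] == "idle"]
        if not idle:
            return None
        return min(idle, key=lambda v: math.dist(v["pos"], passenger_pos))

    fleet = [
        {"id": "car-01", "pos": (2.0, 3.0), "status": "idle"},
        {"id": "car-02", "pos": (0.5, 1.0), "status": "busy"},
        {"id": "car-03", "pos": (1.0, 1.5), "status": "idle"},
    ]

    print(nearest_idle_vehicle(fleet, (1.2, 1.2))["id"])   # car-03 drives itself over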

There is one catch, however: such a system would involve highly complex software. Concerns over safety, liability and development cost would kill the project before it even started. The only solution is to adopt a non-algorithmic, synchronous software model, which is what Project COSA is about.

Sunday, May 27, 2007

Intel's Whining

According to a recent CNET article, Intel fellow Shekhar Borkar is reported to have said that "software has to double the amount of parallelism that it can support every two years." This is so infuriating. Parallelism is not the real problem with software. The nastiest problem in the computer industry is not speed but software unreliability. Unreliability imposes an upper limit on the complexity of our systems and keeps development costs high. As I've repeatedly mentioned on this blog, we could all be riding in self-driving vehicles (and prevent over 40,000 fatal accidents every year in the US alone) but concerns over safety, reliability and cost will not allow it. The old ways of doing things don't work so well anymore. We have been using the same approach to software/hardware construction for close to 150 years, ever since Lady Ada Lovelace wrote the first algorithm for Babbage's Analytical Engine.

The industry is ripe for a revolution. The market is screaming for it. And what the market wants, the market will get. It is time for a non-algorithmic, synchronous approach. That's what Project COSA is about. Intel would not be complaining about software not being up to par with their soon-to-be obsolete CPUs (ahahaha...) if they would only get off their asses, revolutionize the way we write software and provide new CPUs for the new paradigm. Maybe AMD will get the message.

Saturday, May 26, 2007

Why We Need a New Computer Revolution

Unreliability imposes an upper limit on the complexity of our software systems. We could conceivably be riding in self-driving vehicles right now but concerns over reliability, safety and high development costs will not allow it. As a result, over 40,000 people die every year in traffic accidents. Something must be done. Unfortunately, the computer industry is still using the same algorithmic computing model that Charles Babbage and Lady Ada Lovelace pioneered close to 150 years ago. This would not be so bad except that the algorithmic model is the main reason that software is so unreliable and so hard to develop. It is time to question the wisdom of the gods of computer science and switch to a new computing model, a non-algorithmic, synchronous model. It is time for a new revolution. There is no avoiding it. The market is screaming for it. And what the market wants, the market will get. This is what Project COSA is about.

Having seen firsthand the inertia and hostility of the western computer industry and computer science community toward any suggestion that there may be a better way of doing things, I have concluded that the new revolution cannot come from the West. They have placed their computer pioneers on a pedestal and nobody dares question the wisdom of their gods. India and China, on the other hand, don't have that problem. They have nothing to lose and everything to gain. They have been on the tail end of the first computer revolution from the beginning, but now they are in a position to leapfrog the western advantage and take the lead in the second revolution.

Friday, May 25, 2007

Categorical Constraints in COSA

In a previous article on constraint discovery in COSA, I wrote about its inductive nature. Inductive means that discovered constraints are not written in stone. It is up to the program designer to either validate or reject every discovered constraint. COSA features another type of constraint, a categorical one. It is governed by the principle of motor coordination or PMC. This principle will eliminate every kind of internal logical conflict (inconsistency) within a COSA program. I call it categorical because every discovered PMC conflict must be eliminated. Note that PMC conflict discovery can also be completely automated. In conclusion, the combination of automatic constraint discovery (inductive and categorical) and the automatic resolution of dependencies solves the reliability problem in COSA programs. In a future page, I will explain the PMC in greater detail.

Note. Constraint discovery is part of my ongoing work in artificial intelligence. There is a way to combine inductive and categorical constraint discovery into a single mechanism that will enable its full automation. Unfortunately, I cannot divulge this method at this time. Sorry.

Thursday, May 24, 2007

Who Will Lead the Next Computer Revolution, the East or the West?

Will it be China or India? Or will it be Europe or the US? I am putting my money on either China or India and here is why. The West has become severely handicapped by complacency and conceit. This is largely due to their having been at the forefront of the first computer revolution from the beginning. They are so immersed in and so drunk with the success of their own paradigm that they cannot imagine another paradigm replacing it. They have placed their famous scientists (Alan Turing, Fred Brooks, John von Neumann, etc...) on a pedestal. Nobody dares question the wisdom of the gods for fear of being ridiculed. As a result, nothing really new has emerged in more than half a century of computing. The approach to building computers is still based on the old von Neumann architecture, which is itself based on the algorithmic software model, a model that is at least 140 years old (Charles Babbage and Lady Ada Lovelace). Intel, IBM, AMD and the others are not doing research on truly new CPU architectures. Why should they? They're not in the business of inventing new computing paradigms. They are tool makers. They just produce processors that are optimized as much as possible for the current model. They have no choice but to continue to improve on the old von Neumann model by adding more speed, more transistors, lower energy consumption, etc...

I think the West has forced itself into a dangerous situation. The reason is that, while this is going on, the computer industry is suffering terribly from a chronic malady called unreliability. Their own scientists (e.g., Fred Brooks) are convinced that the problem is here to stay. As bad as it already is, the real cost of unreliability goes deeper than it appears on the surface. Consider that over 40,000 people die every year in the US alone as a result of traffic accidents. The solution is obvious: people should not be driving automobiles. That is to say, all vehicles should be self-driving. However, building driverless vehicles is out of the question because concerns over reliability, safety and cost have imposed an upper limit on the complexity of our current software systems. On the military and political front, there is a desperate need to automate the battlefield as much as possible in order to minimize human casualties and appease the voters back home.

The western world is thus stuck between a rock and a hard place. On the one hand, they have a really nasty problem sitting in their lap and it keeps getting worse. On the other hand, they have a bunch of aging gurus with a firm grip on the accepted paradigm, telling them that the problem cannot be fixed. This is where the East may want to capitalize on and profit from the West's self-imposed mental paralysis, in my opinion. What if there were another paradigm that solved the reliability problem at the cost of beheading some of the demi-gods of western computer science? Should the East care? I don't think so. Would it be their own gods being sacrificed? No. Does not the West look down on them as being mere copycats? Yes. Are they not the technological maids hired by the West to cook and do their laundry (outsourcing), so to speak? Yes.

The point of all this is that countries like China and India may have been late jumping on the wagon but there is no longer any reason or necessity for them to continue riding in somebody else's wagon. They can now afford their own. They don't have to do other people's laundry anymore. This is why I advise the movers and shakers of the East to take a good look at Project COSA. COSA is the solution to the nasty problem that everyone has been talking about. It's the one solution that the West cannot touch for fear of dirtying their "noble" hands and insulting their gods.

There is a revolution coming, no doubt about it. The market wants it and what the market wants the market will get, by whatever means possible. Who will come out unscathed? Who will seize the opportunity and lead the revolution? The East or the West? Can the West wake from its drunken stupor, realize the error of its ways and repent in time? Seriously, I don't think so. I have seen firsthand the power and inertia of conservatism. The old guard will not be replaced without a fight. There is too much at stake... unless, of course, the revolution happens in the East. Then they would have to stand up and take notice.

Quantum Computing Crackpottery

All Quantum Computing Articles

No other branch of science surpasses physics when it comes to promoting crackpottery. Most of us are already familiar with some of the more blatant instances of a science gone awry. Things like time travel, wormholes, black holes, parallel universes, etc... have already entered popular culture thanks to Hollywood's infatuation with physics myths. Physicists believe that they are above public scrutiny because they are convinced that the public is too stupid to understand what they do. And nowhere is this more apparent than in the new "science" of quantum computing.

Physicists pride themselves on having a science based strictly on observation, but they pay mere lip service to empiricism when it suits their political agenda. Consider that quantum computing (QC) is based on the concept of state superposition, a concept made famous in the last century by none other than quantum physics luminary and Nobel Prize winner Erwin Schrödinger. Schrödinger asserted in his now famous Schrödinger's cat thought experiment that a subatomic particle can be in two states simultaneously, both decayed and not decayed. The problem is that this superposition of states can never be observed because (to use the quantum physics lingo) observation causes the wave function to collapse. So much for empiricism.

This reminds me of a young kid who insisted that he could jump as high as a tall building but only when nobody was looking. It would be funny if it weren't so pathetic. Unobservable superposition of states has become part of quantum physics' credo, so much so that an entire research industry has mushroomed in recent years to take advantage of the hype surrounding QC. I don't know how much money has been spent on it but it must be in the hundreds of millions of dollars. One of the most visible champions of QC is a physicist named David Deutsch. Deutsch believes in all sorts of voodoo science, including the physical possibility of time travel and the existence of an infinite number of parallel universes. Indeed, superposition is so blatantly false that some physicists felt it necessary (probably out of shame) to postulate the existence of parallel universes, one for every possible superposed quantum state. As Feyerabend wrote, "the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence". The crackpottery never ends.

It is interesting to note that physicists have no clue as to why particle decay is probabilistic. And yet, in spite of this glaring lacuna in their understanding combined with the lack of observation, they feel free to tell us mere mortals that we must believe in their crackpottery. QC fanatics will point out that QC is a proven fact that has been demonstrated in the laboratory. Don't you believe it! It is all (to borrow a favorite term from the vernacular) bullshit. Periodic QC announcements in the media are just part of the hype and a way for researchers to justify continued funding. QC is either a hoax or crackpottery, or both. If you are a QC physicist and you feel that I am defaming your profession, then by all means, let the courts decide.

Wednesday, May 23, 2007

Constraint Discovery: An Example

In my last article I mentioned that the constraint discovery mechanism (CDM) gets its input signals only from sensory cells. Why not the effector cells? The reason is two-fold. First, effector cells are usually linked together to form a deterministic sequence. There is no need to test a sequence that is guaranteed to always execute in a certain order. Second, sensory cells are the dictators that govern the behavior of a program. They make all the decisions and, as you can surmise, the order in which decisions are made is essential to reliability. One more good thing about constraint discovery is that the program designer can use it to troubleshoot potential problems that are external to the program.

I will illustrate the concept with an example. Let's say we have a COSA component (controller) in charge of controlling the temperature of a room using sensory readings from a thermostat. The controller turns on the air conditioning unit if the temperature goes above 80 degrees Fahrenheit and turns it off when it goes below 75. In addition, the heater is turned on when the temperature goes below 65 and turned off when it goes above 70. During testing, the CDM will learn a few things about the way temperature changes. If the temperature goes above 80, the AC is turned on and the CDM expects the temperature to then go down below 75. If the temperature goes below 65, the heat is turned on and the CDM expects the temperature to then go up above 70.

If, for whatever reason, temperature changes do not occur in their expected order, the CDM will sound an alarm. This is an instance where temporal constraints learned by the CDM during testing can be left in the program and used for normal error handling.
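For readers who like something concrete, here is a minimal sketch, in plain Python, of what the learned cooling constraint might look like when left in the program as an error handler. COSA itself is graphical and non-algorithmic, so this is only an analogy; the timeout and function names are assumptions.

    # Illustrative only (COSA is graphical, not textual): the learned cooling
    # constraint from the example above, used as a runtime error handler. The
    # timeout and function names are assumptions.

    def check_cooling_constraint(readings, timeout_steps=50):
        """After the AC turns on (temp > 80), expect a reading below 75
        within timeout_steps readings; otherwise raise an alarm."""
        waiting_since = None
        for i, temp in enumerate(readings):
            if waiting_since is None and temp > 80:
                waiting_since = i                  # AC was just turned on
            elif waiting_since is not None:
                if temp < 75:
                    waiting_since = None           # expectation satisfied
                elif i - waiting_since > timeout_steps:
                    return f"ALARM: no drop below 75 after step {waiting_since}"
        return "OK"

    # Normal behavior: temperature passes 80, the AC kicks in, temperature falls.
    print(check_cooling_constraint([78, 81, 83, 80, 77, 74, 72]))            # OK

    # Faulty AC: temperature keeps climbing and the learned constraint fires.
    print(check_cooling_constraint([78, 81, 83, 85, 86, 88] + [88] * 60))    # ALARM...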

PS. Remember that Blogger supports RSS (syndication). All you need is an RSS reader (there are a number of free readers out there) and you can receive Rebel Science News items as they happen.

Monday, May 21, 2007

Constraint Discovery Mechanism in COSA

A constraint is a temporal correlation between two events that must always be satisfied. There are only two possible temporal correlations between discrete events: they can be either sequential or concurrent. The COSA constraint discovery mechanism (CDM) is simply a collection of special learning cells similar to a layer of neurons in a neural network. As many of you already know, the COSA software model borrows heavily from my ongoing work in artificial intelligence and pulsed neural networks. The cells in the CDM detect either simultaneous or sequential sensory events.

Constraint discovery is an inductive learning process that takes place while the application is running. As such, not all discovered constraints are necessarily valid. It is up to the application designer to validate or reject every constraint. The outputs of the cells may be connected to an alarm component or a report generator that compiles pertinent information into a file or database in case of violations. The inputs of the CDM cells are initially not connected. A special component called a searcher periodically sweeps through the program being tested and randomly connects the outputs of a few sensory cells within the program to the inputs of the CDM cells. There is a reason why only a few sensory cells are chosen during a pass of the searcher: learning can sometimes be CPU-intensive and it makes sense not to slow down the application too much during testing, especially in real-time environments. Eventually all sensory cells get connected. CDM cells use synaptic strengthening to establish a correlation. If the strength of a synapse reaches a predetermined value, the correlation is accepted. Learning can be extremely fast in deterministic systems because a single violation is enough to invalidate a synaptic connection.
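Since COSA is not a textual language, the following Python sketch is only an analogy of the mechanism just described: a sequence-detecting cell whose synapse strengthens each time event A is followed by event B and is invalidated by a single violation, plus a searcher that randomly wires a few sensory outputs to new cells on each pass. The names, threshold and event trace are assumptions.

    # Analogy only: a crude software model of a CDM cell that learns the
    # constraint "event A is always followed by event B". The threshold, the
    # searcher policy and the event names are assumptions.

    import random

    class SequenceCell:
        def __init__(self, a, b, accept_threshold=5):
            self.a, self.b = a, b
            self.strength = 0              # synaptic strength
            self.invalid = False
            self.accept_threshold = accept_threshold
            self.waiting = False           # saw `a`, now expecting `b`

        def observe(self, event):
            if self.invalid:
                return
            if event == self.a:
                if self.waiting:           # a...a with no b in between
                    self.invalid = True    # a single violation kills the synapse
                self.waiting = True
            elif event == self.b:
                if self.waiting:
                    self.strength += 1     # synaptic strengthening
                    self.waiting = False
                else:                      # b arrived before a
                    self.invalid = True

        @property
        def accepted(self):
            return not self.invalid and self.strength >= self.accept_threshold

    def searcher_pass(sensory_outputs, cells, k=2):
        """One sweep of the searcher: wire a few random sensory pairs to new cells."""
        for _ in range(k):
            a, b = random.sample(sensory_outputs, 2)
            cells.append(SequenceCell(a, b))

    # Deterministic trace: 'door_open' is always followed by 'light_on'.
    cell = SequenceCell("door_open", "light_on")
    for event in ["door_open", "light_on"] * 6:
        cell.observe(event)
    print(cell.accepted)   # True -> a discovered constraint for the designer to validate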

During development, it is advisable to run the CDM as often as possible in order to prevent the introduction of inconsistencies. All violations should be corrected immediately. When it comes time to launch the application, the CDM must be removed from the release version for two reasons: it is no longer needed and the application runs faster without it.

P.S. In a future article, I will talk about a few COSA optimization techniques that should make a COSA program at least as fast as existing scripting languages.

It's Neither Magic Nor Rocket Science

One of the things that bothers me about academics is their love affair with complexity for complexity's sake. I personally like simple solutions. I believe that COSA is a simple solution to the software reliability problem and I would not have it any other way. Some have written to advise me that COSA needs a formal mathematical description in order to be accepted by the computer science community. To that I say, phooey! COSA needs no such thing. If the academic community thinks that COSA is too simple or simplistic for their taste, that is too bad.

Aside from its synchronous and concurrent nature, there are two other concepts in COSA that I consider essential to the model. One is the automatic elimination of blind code and the other is the automatic discovery of temporal constraints. While the former effectively solves the problem of event or data dependencies, the latter ensures that a system under development remains consistent and free of logical contradictions. This, in my opinion, is the most important aspect of the COSA model because it introduces the counterintuitive notion that design correctness is proportional to complexity. In other words, since additions and modifications are not allowed to break the existing design of a COSA application, and since the number of constraints grows with the complexity of the application, the more complex the application, the more correct it gets. This enables us to build applications of arbitrary complexity without the burden of unreliability, something that was impossible until now. This, in my opinion, is what will truly bring us into the golden age of computing and automation. I will explain the mechanism of constraint discovery and enforcement in my next article. Like everything else in COSA, it is neither magic nor rocket science. Its power lies in its simplicity.

Newsgroups Discussion on COSA

Chris Glur recently started a discussion on Usenet about the best way to implement a COSA system. Check it out. I will try to add a few comments as time permits.

Saturday, May 19, 2007

The Real Cost of Software Unreliability

According to the National Center for Statistics and Analysis, in 2005, over 43,000 people were killed in traffic accidents in the U.S. alone. I don't know what the number is for the entire world but it must run into six figures. No one can fault software unreliability for those fatalities since human drivers were at fault, but what if I told you that the reason that human beings are driving cars and trucks on the road and killing themselves in the process is that unreliability imposes an upper limit on the complexity of software systems? As I wrote in a previous article, we could conceivably be riding in self-driving vehicles right now but concerns over safety and reliability will not allow it. In addition, the cost of developing safety-critical software rises exponentially with the level of complexity. The reason is that complex software is much harder to test and debug.

What will it take to convince the computer industry to change over to a new paradigm that will make it possible to automate all vehicles? What will it take to convince software developers that complexity no longer has to be an enemy but can and should be a trusted friend? What will it take to convince them that there is a way to build bug-free software of arbitrary complexity? What will it take? Are 43,000 dead men, women and children not enough?

In my opinion, most of the funds allocated for traffic research by the U.S. Department of Transportation should be used to find a solution to the software reliability crisis. Why? Because the solution would keep tens of thousands of human beings from dying needlessly every year. Are you listening, Secretary Mary E. Peters?

Friday, May 18, 2007

COSA Business Models

This article was first posted in September 2004 in the old Silver Bullet News.

One of the nice things about COSA is that it can accommodate several types of business models that target specific niche markets. Below is a list of products and/or services for which COSA is ideally suited.

  • Embedded COSA Operating System (ECOS). COSA would be perfect as the basis for a small embedded operating system for mission-critical applications and/or portable devices such as automotive control systems, avionics, cell (mobile) phones, set-top boxes, PDAs, etc...
  • COSA Virtual Machine (CVM). Similar to the Java Virtual Machine (JVM), the CVM could serve as an application execution engine for use in existing legacy operating systems such as Windows, Linux, OSX, etc... CVM and ECOS would have largely compatible execution kernels. This means that the same software construction tools (see below) could be used to develop applications for both environments.
  • COSA Development Studio (CDS). The CDS would consist of a set of graphical tools for designing and testing COSA applications. It could be used as a proprietary rapid application development (RAD) tool with which to create software for either CVM, ECOS or COS (see below). CDS could be hosted on any of a number of existing desktop OSs. It could also be sold to the public as a RAD tool for legacy systems (CVM), embedded systems (ECOS) or the COSA operating system (COS).
  • COSA Operating System (COS). COS could be either an open or closed source OS depending on the business model. It is a full operating system in the sense that it would include all the usual service components and applications found in systems like Linux, MacOS and Windows. In addition, COS would, due to its very nature, automatically support cluster computing for high-performance applications such as weather forecasting and scientific/technical simulations. COS should be initially marketed to businesses and government agencies, especially for mission-critical environments.
  • COSA-Optimized Processors (COP). These are RISC-like central processing units (CPUs) designed and built especially for the COSA software model. COPs would process COSA cells directly and would replace most of the COSA execution kernel. The end result would be extremely fast processing and simulated parallelism implemented at the chip level. COP chips could be designed for various markets such as end-user products (desktop computers, cell (mobile) phones, set-top boxes, game boxes, notebook computers, laptops, etc...) and mission-critical business systems.
  • COSA Neural Processors (CNP). The COSA project was heavily influenced by my ongoing work in spiking (pulsed) neural networks or SNNs. Since COSA cells are similar to spiking neurons, it makes sense to extend the capabilities of COSA-optimized processors so as to add support for fast SNN processing. Neural network driven applications are bound to multiply in the near future. The nice thing about CNPs is that they would be ideal for large-scale distributed SNN applications that require hundreds of millions or even billions of neurons.

Thursday, May 17, 2007

The Simple and Beautiful Secret of Reliable Software

The secret of constructing reliable software is not rocket science. The secret is in the timing. Nothing must be allowed to happen before or after its time. If you could control the timing of events in a software system in such a way that the system's complex temporal behavior becomes thoroughly predictable, you could, as a result, construct a software sentinel that would automate the job of discovering and enforcing the temporal laws that govern the system's behavior. Additions and/or modifications to the system would not be allowed to break the existing timing protocols, thereby ensuring solid consistency. The beauty of this is that it makes it possible to create software programs of arbitrary complexity without incurring the usual penalties of unreliability and high development costs. This simple and beautiful secret will usher in the golden age of automation.

It's the Hardware, Stupid.

Sooner or later, the COSA paradigm will hit critical mass. It will happen when a sufficiently large number of intelligent people in the business recognize not only the wisdom of the approach but also the golden age that the new software model will bring. However, I have a word of caution for any Fortune 500 technology company that is seriously interested in capitalizing on the coming COSA revolution: the money is in the hardware, not the software. COSA is an idea that is already in the public domain. You can't patent it. You can't say it's your idea either unless, of course, you want to be laughed at. Until a de facto COSA standard model emerges, good luck trying to make money selling a proprietary COSA OS or development tools that may or may not be compatible with the eventual standard. Everybody and their uncle will be working on a competing OS and the free software movement is certainly not going to flip on its back and die. This is not to say that there will not be any money in selling COSA tools but the days of DOS-begat-Microsoft are over.

It is best to take an indirect approach, in my opinion. I think it more advisable for you to form a strong alliance within the industry with the openly stated objective of focusing your collective financial, political and philanthropic muscle into establishing a completely open standard. In other words, the COSA OS and the necessary development tools should be completely free and open. And I mean 'free' both as in "free beer" and as in liberty. In the meantime, you would be hard at work on your new fully COSA-optimized, multi-core, green CPU design. By the time the standard is agreed upon, you would be way ahead of the pack by being the first to market a CPU compatible with the accepted model. This will give you the breather you need to improve on your initial design and maintain an iron grip on the market. By that time, you're no longer merely in the business of manufacturing and selling computer CPUs. You're in the tool business. Don't worry about public acceptance of the new OS. COSA software construction will be so easy that the market will be flooded with high quality applications from the get-go. Essentially, you'll be in the middle of a gold mining frenzy and you're the only supplier of picks and shovels in town.

Welcome to the true golden age of computing. Welcome to the COSA revolution.

Monday, May 14, 2007

AMD is Doomed to Always Be a Follower Unless...

It seems that AMD's research department is only concerned with beating Intel at its own game. This is foolish, IMO. AMD is doomed to always be a follower unless its engineers can come up with a revolutionary new CPU architecture based on a revolutionary software model. The new architecture must address the two biggest problems in the computer industry today: reliability and productivity. Unreliability puts an upper limit on how complex our software systems can be. As an example, we could conceivably be riding in self-driving vehicles right now but safety and reliability concerns will not allow it. Why? Because there is something fundamentally wrong with software. Fortunately, a software model that solves these problems already exists. It is called the "non-algorithmic, synchronous, reactive" software model. That's what Project COSA is about.

Saturday, May 12, 2007

Why Quantum Computing Is Bunk (part 2)

Part I, II

All Quantum Computing Articles

As I mentioned in my previous article, quantum computing is based on the belief that quantum states are superposed. The idea is that since both states (0 and 1) of a quantum bit (qubit) exist simultaneously, it should be possible to perform operations on both states at the same time. Why do quantum physicists believe in such an absurd concept? I suspect that it has to do with peer pressure. I think it all started when Erwin Schrödinger first proposed a now famous thought experiment known as Schrödinger's cat. While no one has ever observed multiple simultaneous states of a quantum property, quantum physicists accept it as a fact.

A great example of the probabilistic nature of quantum processes is what is known as the half-life of subatomic particles. While it is not possible to predict exactly when a radioactive atom will decay, physicists can predict, based on observation, how long it will take for half of a large group of identical atoms to decay. The question is, why does nature use probability? Physicists have no clue and yet, this nasty little lacuna in their understanding does not seem to have had an effect on their convictions.
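For the curious, here is a tiny Python illustration of that regularity: each simulated atom decays at an unpredictable moment, yet roughly half of a large sample is gone after one half-life, following the standard relation N(t) = N0 x (1/2)^(t/t_half). The sample size and half-life value are arbitrary.

    # Small illustration of the regularity above: individual decays are random,
    # but about half of a large sample is gone after one half-life. The sample
    # size and half-life value are arbitrary.

    import random

    def surviving_atoms(n_atoms, half_life, t):
        """Count how many atoms survive to time t; each decay is random."""
        p_survive = 0.5 ** (t / half_life)       # N(t)/N0 = (1/2)^(t / t_half)
        return sum(1 for _ in range(n_atoms) if random.random() < p_survive)

    random.seed(1)
    n0, t_half = 100_000, 5730.0                 # e.g. carbon-14, in years
    print(surviving_atoms(n0, t_half, t_half))       # roughly 50,000 left
    print(surviving_atoms(n0, t_half, 2 * t_half))   # roughly 25,000 left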

The reason that quantum interactions are probabilistic is rather simple. Time is abstract and the universe is discrete. What this means is that the universe cannot calculate the exact duration of interactions. In other words, all interactions, regardless of the energies involved, have the exact same fundamental discrete duration, a very minute interval. The problem is that this would break conservation laws. Nature has no recourse but to use probability to decide when to allow interactions to happen. Over the long run, conservation laws are obeyed.

In no way does this mean that nature must somehow maintain both states (decayed and not decayed) of a particle. All it means is that nature knows how energetic a particle's interaction with another is and uses this value to determine the percentage of a group of similar particles which must undergo decay. There is no need to invoke quantum weirdness, superposition of states, infinite universes, voodoo or any other such magic. It is for these reasons that I maintain that quantum computing is voodoo science of the worst kind regardless of the incessant claims of its practitioners.

See Also:

D-Wave's Quantum Computing Crackpottery

Thursday, May 10, 2007

Why Quantum Computing Is Bunk (part 1)

Part I, II

Paul Feyerabend, the foremost science critic of the last century, once wrote in his book 'Against Method' that "the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down in size, and to give them a more modest position in society." Feyerabend was speaking of scientists in general but he may as well have been talking about the new "science" of quantum computing. Quantum computing is based on the so-called Copenhagen interpretation of quantum mechanics. The idea is that the states of certain quantum properties, such as the spin of a particle, are superposed, meaning that a quantum property can have multiple states simultaneously.

The blatantly ridiculous nature of this belief has not stopped an entire research industry from sprouting everywhere in the academic community. QC researchers are making grandiose promises about magical computational powers being just around the corner in order to obtain grants and attract the attention of gullible investors while having nothing to show of practical importance. Not a week goes by without some announcement about some "progress" or "advance" in QC. It's like a magician going through all sorts of contortions without ever pulling the rabbit out of the hat. Rather than retrace their steps, a few physicists have tried to explain away the contradictions by postulating the existence of an infinite number of universes, one for each quantum state. In so doing, the QC hole keeps getting deeper and deeper, and words like fraud, crackpottery and hoax come to mind.

The problem with QC is not so much its laughable absurdity as the fact that quantum physicists have no clue as to why certain quantum processes are probabilistic in the first place. Physicists love to boast that theirs is an empirical science but have no qualms about believing in things that have never been observed. To them, superposition is not an interpretation or a belief but a fact. However, from my vantage point, QC is now a full-blown organized religion. In my next article, I will explain the simple reason that quantum processes are probabilistic and why quantum computing is utter nonsense or, using one of my favorite putdowns, "chicken feather voodoo physics".

Go to Part II

See Also:

D-Wave's Quantum Computing Crackpottery

Monday, May 7, 2007

Rebel Science News!

I've been meaning to start a Rebel Science blog page for a long time. Only laziness and lack of time have prevented me from doing so. The advantages of using a standard blog are obvious: automatic archiving, time stamps, RSS support (syndication), searching, etc... I could have hosted my blog on my web server but I decided that there is no need to use any of my allotted bandwidth since Google already provides a free blogging service. I also came to the conclusion that having a separate news page for each topic (Bible physics, Bible AI, computer reliability) does not really offer any benefit. From now on, all news articles will reside in a single location. The content of a news item and the accompanying links should be enough to establish context. A reader can always use keywords to search the archive and locate articles of interest.

On the right side of this blog page is a panel labeled "Older News". In it you will find links to the old news pages. I plan to eventually transfer all the old news articles to the archive for this blog so that they can be searched by keywords.

Having said this, nothing is written in stone. I am always willing to see the error of my ways and repent if necessary. Please let me know what you think.

Misgivings

I have been having serious misgivings about continuing this work (Bible Physics), or rather, about posting my findings on the web. Just the other day, I was doing a little research on Google about the Coral Castle, a strange tourist attraction south of Miami where I live. Apparently Edward Leedskalnin, the builder of Coral Castle, stumbled upon some powerful secret which he claimed was known to the ancient builders of the great Pyramid of Egypt. What intrigued me is a little book written by Leedskalnin on the subject of magnetism. In it, he claims that the theory he proposed makes sense only if the reader orients himself due east while reading the descriptions. One should note that the Coral Castle, just like the great pyramid of Egypt and other megalithic structures around the world, is aligned to true north. I did not get much from the book in the way of inspiration because of its strange cryptic style but, as I mention elsewhere in these pages, the secret of the seraphim, the constituents of an immense sea of energy in which we move, has something to do with their movements along absolute 2-D planes. The earth's axis certainly has a fixed north-south orientation due to its rotation. An imaginary surface cutting through the earth at a perpendicular angle to the north-south axis would constitute a fixed 2-D plane. Whether or not this plane is perfectly aligned with one of the absolute fixed planes of the seraphim, I do not know yet; but I have this funny feeling that the little Latvian was onto something big. I sorely need to conduct a few experiments of my own but my current situation won't let me.
Anyway, I edited and made a few additions to the Seraphim page while I debate whether or not this is the right time for this knowledge to emerge. There is a lot more stuff I want to write about but, frankly, I am afraid. Forgive my use of the vernacular but this is truly powerful shit I am meddling with here. This stuff is downright scary. The artificial intelligence stuff is scary too but the physics stuff is scarier, in my opinion, if only because I believe it can be implemented by almost anybody on very short notice. In a world so divided and shaken by strife and violence, this is the sort of thing that would surely bring us face to face with catastrophe on a global scale. Unless we change our ways, of course. More to come...

Are You Offended by My Biblical Research?

Some of my readers write to advise me that I should refrain from mixing my software reliability work with my Biblical research on artificial intelligence and particle physics on the same site. My response is the same as always: it just ain't gonna happen! Their rationale is that most computer geeks are atheists and that people who are first attracted to the computer related stuff will stop taking me seriously as soon as they find out about my Bible stuff. Let me make it clear once and for all. I am not running for political office and this is not a popularity contest. If my writings on the Bible offend you, then don't read my site. It's not meant for you, sorry. Besides, it's not as if I'm making friends in the Christian community either. Only cowards fail to live by their convictions. If you think I'm a crackpot, more power to you.

Sunday, May 6, 2007

Animal!

I am seriously considering going back to my AI roots, so to speak, and revive my old Animal program. My rationale for doing so is that I need some serious funding to continue my work. I am thinking that a chess program that gets better as it plays would be attractive to lenders and investors alike, especially if it can go from a rank beginner to expert or even master level without the usual alpha-beta search algorithm. A user could raise his or her own chess brain and pit it against others. I would like to rewrite Animal almost from scratch but this time I would use the C# language and Microsoft's XNA Game Studio Express, thereby killing two birds with one stone since the product would be available for both Windows and Xbox. My plan is to eventually move to the game of Go, a beautiful board game that has so far resisted all traditional approaches commonly used by chess or checkers programmers. Go is the most popular board game in the world, being played religiously by millions of people in Japan, Korea, China and elsewhere. Current computer Go programs are famously weak. A Go program that can play a decent game is sure to be an instant hit. My only problem is time. Oh well.