Tuesday, June 22, 2010

Why Does Eugene Kaspersky Eat Japanese Baby Crabs and Grin?

Abstract

I just read an excellent article on the cyber security crisis by David Talbot, Technology Review's chief correspondent. My position is that the cyber security crisis is identical to the software reliability crisis. The cyber security industry has a conflict of interest in that it is in no hurry to see the problem solved once and for all because a final solution would put it out of business. Below, I argue that the crisis will be solved when we change to a new software model. I defend my thesis by commenting on a few quotes I selected from Talbot's article.

It Is All About Bugs in the Code
Code is more complex, and that means more opportunity to exploit the code. There is more money to be made in exploiting the code, and that means there are more and more sophisticated people looking to exploit vulnerabilities.
The cyber security crisis is really identical to the software reliability crisis because vulnerabilities are, as everyone knows, bugs in the code. And the more complex the code, the buggier it gets. What is a bug? A bug is either a defect in the code or an omission, i.e., something important that the programmers or designers overlooked. Usually, a savvy attacker uses his knowledge of a specific bug in a software application to make it behave in a way that the original designers did not intend.

So it is rather interesting that the cyber security industry chooses to focus on detecting viruses and other malware rather than on finding ways to construct bug-free code. But then again, why not? After all, the security industry is in business to make money: the buggier the code, the more opportunity there is for attackers, and the more money the industry makes. It has no interest in coming up with a final solution to this pressing problem. Now you know why Eugene Kaspersky, the head of Kaspersky Lab, a famous Russian cyber security company, eats Japanese baby crabs in his Moscow office and grins for the camera. Business is good and getting better all the time. (Read Talbot's article for context.)
Eugene Kaspersky Is Happy
Bring in the Lawyers
"Hardening targets--and having good laws and good law-enforcement capacity--are the key foundational pieces no matter what other activities we want to try to pursue," Christopher Painter, the White House senior director for cyber security, pointed out at a recent conference.
Now that everyone has given up on eliminating the vulnerabilities that malevolent hackers exploit, it makes sense to bring in the lawyers. The problem with the legal approach is that criminals are not the only threat to cyber security. Governments engage in cyber attacks as well, probably more so than the criminals.
"Botnets are a serious threat, but we're out of luck until there is international agreement that cyber crime really needs fairly rigorous countermeasures and prosecutions across pretty much all of the Internet-using nations," says Vern Paxson, a computer scientist at the University of California, Berkeley, who studies large-scale Internet attacks.
In truth, a country's hackers are a national treasure. Why should a government incarcerate its hackers when it can use them to spy on other countries? China does it. So do the US, Russia, India, and others. So good luck getting those countries to vigorously enforce international cyber security laws. Not that the lawyers really care. There is money to be made either way.

Danger Ahead

What if there were a way to construct 100% bug-free code, and a hostile nation found out about it and used it to protect its critical systems from your cyber war specialists while continuing to spy on your country's networks and to look for ways to disrupt them? This would create a dangerous situation. My thesis is that there is indeed a way to construct 100% bug-free code, and I have written about it elsewhere (see links below). The solution has to do with making timing an inherent and fundamental part of the software model. The only drawback is that it requires a switch to a new software model.

One may argue that we cannot wait for the industry to switch to a new software model because it would take years to implement. Legacy systems must be protected right now. True, but I contend that it is not necessary to reprogram our entire network infrastructure to gain the full security benefit that comes with bug-free code. By reprogramming a few critical nodes of a network, we can fully protect the entire network against cyber attacks. Of course, every software system in the world will eventually have to be reprogrammed. There is no way around it.

Conclusion

My advice to cyber security policy makers is to take a good look at the folly of our current approach. Unless something is done now that has not been tried before, the whole thing can get very ugly in a hurry. There is a way to solve the problem once and for all but do not count on the cyber security industry to do it. We must acknowledge that the baby boomer geeks have shot computing in the foot in the last century with their Turing cult and their infatuation with the Turing machine. The truth is that the Turing computing model is the problem, not the solution. It is time for the boomers to retire so that a new generation can have their turn at the wheel.

See Also:

Technology Review: Moore's Outlaws
How to Construct 100% Bug-Free Software
How to Solve the Parallel Programming Crisis
The COSA Software Model
Why Software Is Bad and What We Can Do to Fix It

9 comments:

Conzar said...

I agree with your post. I would like to add that money is the problem of everything.

It's why we have automobiles that can crash into each other, it's why there is a new iPhone every 6 months, it's the reason why people starve throughout the world, and it's the fundamental reason for war.

The world is maintained by large corporations which have supplanted most governments. Most countries will not make cyber security a priority (in the sense of finding a technical solution), they will just use this condition as a new market that will be exploited.

I would suggest reading Jacque Fresco's "The Best That Money Can't Buy" and Jack Reed's "The Next Evolution" for more information on the current state of society and proposed solutions for building sustainable societies.

Brittany said...

Correction: the author of the Technology Review article is David Talbot, not Michael Talbot.

Thanks.

Louis Savain said...

@Conzar,

Thanks for the comment and the references.

@Brittany,

Thanks. I don't know why I had Michael Talbot in mind. Corrected.

Josh Painter said...

Disclaimer: I am a software developer who makes good money from the "status quo." However, I have no loyalties and will happily move to "the next best thing," whatever that may be.

I think it is important to define "bug." In this day and age of frameworks, especially very high-level frameworks like SharePoint or CRM, we don't really fight with traditional "bugs." Sure, there are hotfixes every month for our frameworks from the manufacturers, but I'd say 90% of my bugs are business logic bugs nowadays. In other words, I simply got the business logic wrong for whatever reason. No graphical designer or newfangled processor will help me there.

Now to be fair, I exist at a very high level as compared to, say, an embedded device programmer. But my observation is that it is more important to build frameworks (hammer) than it is to try to redefine the hardware (nail).

My advice is to build COSA as a software framework first, on top of "broken" processors and hardware. If you can prove that it is indeed 100% bug-free (albeit slow), you will have no shortage of dollars to build your own custom hardware.

Even then, realize that you are competing with a thousand other frameworks out there that already eliminate a large number of the bugs that I think you are talking about - but in my opinion, nothing will eliminate the business logic bugs except careful analysis and testing of business logic - security included.

Louis Savain said...

Josh,

Thanks for the comment. You wrote:

Disclaimer: I am a software developer who makes good money from the "status quo." However, I have no loyalties and will happily move to "the next best thing," whatever that may be.

I like that.

My advice is to build COSA as a software framework first, on top of "broken" processors and hardware. If you can prove that it is indeed 100% bug-free (albeit slow), you will have no shortage of dollars to build your own custom hardware.

I see what you're saying, but even a software framework is a major undertaking. First of all, I would need to implement a full working OS with a comprehensive set of dev tools. Still, nobody will bat an eye if I use them to write something trivial like a bug-free tic-tac-toe program. To be taken seriously by the doubting Thomases, I would have to develop the software for something as complex as a safe and reliable self-driving car. Having said that, there are a handful of people working on a COSA virtual machine as we speak. Take a look at the ongoing discussion at the Rebel Science Forum.

Even then, realize that you are competing with a thousand other frameworks out there that already eliminate a large number of the bugs that I think you are talking about

Almost all of these bugs have to do with what Fred Brooks called accidental complexity in his famous "No Silver Bullet" paper. They are fairly easy to prevent and detect. They are not the bugs I am talking about.

- but in my opinion, nothing will eliminate the business logic bugs except careful analysis and testing of business logic - security included.

Business logic bugs arise from what Brooks calls "essential complexity". Brooks (and most everyone else) is adamant that this class of bugs cannot be fully avoided in a complex application. I agree with Brooks but only if the application is built using the industry's current software model.

I am claiming that my software construction model can and will prevent these types of bugs, all of them. It will find both logic bugs (logical conflicts) and bugs of omission (overlooked circumstances or conditions). The solution involves two fundamental aspects of behaving systems: timing and complementarity, both of which are fundamental parts of the COSA software model. I plan to write an article to fully explain what I mean in the near future. Hang in there.

Joshua said...

I thought by "Business Logic Bugs" he was referring to bugs created more by the process than by the software, e.g., the Business Project Manager (BPM) fails to clearly define the desired finished product. How could COSA remove that type of human error?

Josh Painter said...

It will find both logic bugs (logical conflicts) and bugs of omission (overlooked circumstances or conditions).

This is what I am interested in! Here's a (real world) example. Marketing tells me "We need to send out a reminder email 3 days before an event." No problem - we write a task that runs every night and sends out reminder emails.

A month later Marketing comes yelling at us, "Why is the system sending out reminder emails for canceled events?"

"How do we know if the event is canceled?" we ask. "You look at the Marketing Status field," they say. So we change the code and move on.

Three months later we've written 5 other scheduled tasks, reusing the "canceled" logic that we learned the hard way. Operations comes screaming at us, "Why aren't these workflows running for these canceled events?"

"Aha!" we say, "We learned that if Marketing Status is set to Canceled, the event is off, so we don't need to run your workflows!"

"But we still need this workflow to run no matter what! We use the Operations Status field! Fix it!"

-fin-

These are the "bugs" I am talking about, the ones I fight daily, and why your quote above interests me. How would COSA help me here?
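[Editor's note: the anecdote above reduces to a minimal sketch. The field names marketing_status and operations_status come from the story itself; every function and value below is hypothetical, invented only to illustrate the bug pattern.]

```python
# Minimal sketch of the bug pattern in the story above. The field names
# marketing_status and operations_status come from the anecdote; the
# functions and sample data are hypothetical.
import datetime

def send_reminders(events, today):
    """Nightly task: remind attendees three days before an event."""
    return [e["name"] for e in events
            if e["marketing_status"] != "Canceled"   # fix #1 from the story
            and (e["date"] - today).days == 3]

def run_operations_workflows(events):
    """Bug of omission: the 'canceled' guard was copied from the reminder
    task, but Operations keys off operations_status, not marketing_status,
    so this wrongly skips workflows that should still run."""
    return [e["name"] for e in events
            if e["marketing_status"] != "Canceled"]  # wrong guard for this task

event = {"name": "Expo",
         "marketing_status": "Canceled",
         "operations_status": "Active",
         "date": datetime.date(2010, 7, 1)}

print(run_operations_workflows([event]))  # [] -- Operations' workflow is skipped
```

The guard is correct in the first task and a bug of omission in the second; nothing in the code itself distinguishes the two cases, which is exactly why this class of bug survives testing.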

Louis Savain said...

Josh,

In my opinion, this type of omission is easy to catch in a reactive software model that uses sensors and effectors. Sensors detect specific conditions or phenomena during execution. In fact, nothing can happen in a program unless at least one sensor emits a signal.

Using your event example, the programmers must create a set of sensors that broadcast specific signals when certain phenomena (changes) related to events are detected. One of those signals is emitted every time an event cancellation is detected. Every module (e.g., reminder and workflow) that handles events must process all sensory signals related to events including event cancellations.

So, even if the client forgets to specify what should be done in a certain condition, the development system will raise a flag and will not allow the application to be released unless the condition is properly handled by all relevant modules or components. At this point, the programmers are forced to ask the client for further specifications.
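To make the idea concrete, here is a hypothetical sketch of that coverage check. This is not the COSA implementation itself; the signal names, the Module class, and the checker are all invented for illustration.

```python
# Hypothetical sketch of the "sensor coverage" check described above.
# All names are invented for illustration; this is not the COSA design.

# Signals that event-related sensors can broadcast.
EVENT_SIGNALS = ("event_created", "event_rescheduled", "event_canceled")

class Module:
    """A component that subscribes to event signals."""
    def __init__(self, name, handlers):
        self.name = name
        self.handlers = handlers  # maps signal name -> handler function

def coverage_gaps(modules):
    """List every (module, signal) pair left unhandled.
    An empty list means the application may be released."""
    return [(m.name, sig)
            for m in modules
            for sig in EVENT_SIGNALS
            if sig not in m.handlers]

# The reminder module forgot to handle cancellations -- the very omission
# from Josh's story. The development system would refuse to release it.
reminder = Module("reminder", {
    "event_created": lambda e: None,
    "event_rescheduled": lambda e: None,
})

print(coverage_gaps([reminder]))  # [('reminder', 'event_canceled')]
```

The point is that the omission is caught mechanically at build time, before the client ever sees the wrong behavior, instead of being discovered in production.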

PS. The whole thing may seem somewhat nebulous at this point but I plan to write a blog article in the near future to flesh it all out in greater detail. Hang in there.

InfoCoder said...

Ditto that, but I just had a comparable situation where the user did not think of all of the requirements and saw the results as erroneous.

This despite specific requests for the requirements up front. Of course, doing development in the above way would resolve some of that, but by no means all of it.

That's one reason, anyway, that programmers will always be needed: people who can think through what should happen first, or, in your COSA language, exactly what should follow what.