Wednesday, July 15, 2009

The COSA Control Hierarchy


Every COSA software application is organized like a tree. This is a fundamental aspect of COSA programming. In this article, I will argue that the use of a control or command hierarchy is the simplest and most effective way to design parallel applications and to precisely control many objects operating in parallel. Please read the previous multi-part article, COSA: A New Kind of Programming, before continuing.

Why Use a Hierarchy?

The brain’s memory structure (see The Brain: Universal Invariant Recognition) is organized as a hierarchy, just like a COSA application. This is not surprising, since both consist of many parallel-acting entities. There are excellent reasons for this arrangement in COSA. Examples are the grouping or classification of interrelated components, the reuse or sharing of components, easy program comprehension, the control of attention, and the selection and coordination of tasks.
The figure above is a simplified representation of a COSA application shown in tree form. The leaf nodes are the low-level components that contain the actual sensors and effectors. The other nodes (small circles) are the supervisor components. The trunk of the tree is the main supervisor component. Remember that node children are called slaves in COSA and that a supervisor can control an indefinite number of slaves. Here's another depiction of a supervisor and its slaves:
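Since COSA is described here only conceptually, the tree structure can be sketched in ordinary code. The following is a minimal illustration, not COSA itself: all class names, component names, and the `walk` helper are assumptions made for this sketch. Leaf cells hold sensors and effectors, inner nodes are supervisors, and the main supervisor is the trunk.

```python
# Illustrative sketch only: COSA is a conceptual design in this article,
# so every name below is hypothetical.

class Component:
    """A node in the COSA-style control tree."""
    def __init__(self, name):
        self.name = name

class Cell(Component):
    """Leaf node: a low-level component holding actual sensors and effectors."""
    def __init__(self, name, sensors=(), effectors=()):
        super().__init__(name)
        self.sensors = list(sensors)
        self.effectors = list(effectors)

class Supervisor(Component):
    """Inner node: controls an indefinite number of slave components."""
    def __init__(self, name, slaves=()):
        super().__init__(name)
        self.slaves = list(slaves)  # node children are called "slaves" in COSA

    def walk(self):
        """Yield every component in this supervisor's subtree, trunk first."""
        yield self
        for slave in self.slaves:
            if isinstance(slave, Supervisor):
                yield from slave.walk()
            else:
                yield slave

# The main supervisor is the trunk of the application tree.
app = Supervisor("main", [
    Supervisor("arm", [Cell("gripper", effectors=["open", "close"])]),
    Supervisor("vision", [Cell("camera", sensors=["frame_ready"])]),
])

print([c.name for c in app.walk()])
# ['main', 'arm', 'gripper', 'vision', 'camera']
```

Note how power accumulates toward the trunk: activating `main` implicitly reaches every sensor and effector below it, while activating `arm` reaches only its own branch.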

Object Classification and Reuse

One experiences an exponential rise in power and sophistication as one traverses toward the trunk of the program tree, away from the leaf nodes. A tree structure not only facilitates program comprehension but also makes it easy to search the component repository for a particular component, because the repository uses the exact same tree structure to store components. Related components are easily spotted because they lie on the same branch of the tree.

Attention Control

The brain has a finite number of motor effectors to choose from. This means that the effectors must be shared by a plurality of tasks (behaviors). Unless behaviors are carefully selected for activation and deactivation at the right time, motor conflicts will invariably crash the system. A tree hierarchy makes it possible for the brain’s action selection mechanism to easily pick non-conflicting branches of the tree for motor output. A similar method is used in a COSA program to solve motor conflicts. Even though effectors can be easily duplicated and executed in parallel, there are occasions when this is not possible. An example is a robotic system with a fixed set of motor effectors. Attention control allows the program to activate certain components while deactivating others. It forces the program to focus on a narrow set of tasks at a time, thus preventing failures. This is easier than it sounds because the COSA development environment will automatically alert the programmer to any real or potential motor conflicts (Principle of Motor Coordination).
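The selection of non-conflicting branches can be sketched as a simple exclusion rule: two tasks that claim the same fixed effector cannot both be active. This is only a minimal illustration of the idea, under assumed names; in COSA itself, the development environment is said to flag such conflicts at design time rather than resolve them greedily at runtime.

```python
# Illustrative sketch only: tasks that share a fixed effector must not
# be active at the same time. All task and effector names are hypothetical.

def select_tasks(tasks):
    """Greedily activate tasks whose effector sets do not overlap.

    `tasks` is a list of (name, effector_set) pairs in priority order.
    Returns the names of the activated (focused) tasks; the others stay
    deactivated until the effectors they need are released.
    """
    claimed = set()
    active = []
    for name, effectors in tasks:
        if claimed & effectors:
            continue  # motor conflict: keep this task deactivated for now
        claimed |= effectors
        active.append(name)
    return active

tasks = [
    ("reach", {"shoulder", "elbow"}),
    ("wave",  {"elbow", "wrist"}),   # conflicts with "reach" over the elbow
    ("look",  {"neck"}),             # no conflict: can run in parallel
]
print(select_tasks(tasks))  # ['reach', 'look']
```

The effect is exactly the narrowing of focus described above: "wave" stays dormant while "reach" holds the elbow, and "look" runs in parallel because it touches no shared effector.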

The primary goal of the COSA visual design tools is to make it easy to compose complex, rock-solid applications as quickly as possible. I think the use of a tree architecture for program organization is part of the future of parallel programming.

See Also:

COSA: A New Kind of Programming
Why I Hate All Computer Programming Languages
How to Solve the Parallel Programming Crisis

1 comment:

James said...

What about prioritisation based on a hierarchy of actual signal buffers? Imagine that before each signal in a child buffer is executed, every signal in the parent is executed first - so that any activity in a parent buffer would implicitly pause all activity in its children.

Since the rate of signals entering the parent buffer would usually be zero, child buffers would not normally be interrupted.

The main advantage is that real-time applications can lie dormant in a parent buffer while intensive and potentially hazardous communication takes place in a child buffer.

I was thinking specifically of a simple concentric hierarchy: e.g., an OS with 4 buffers: a kernel at the center, a security subsystem, an I/O subsystem, and application space. In this case the designer would choose which buffer to put an application in, and the kernel would still have to manage the priorities of processes in the application space.

After reading this post though, I'm thinking of a system which allows a tree-like hierarchy of priority buffers. I think this would make for safer environments than if a manager were run side by side with applications in the same buffer.

While the absence of a fundamental prioritisation of signals could be defended with the idea that future processors will run exceptionally fast, I think this is similar to the idea that conventional CPUs are fast enough to complete their tasks. An exceptionally complicated system such as an AI would still benefit from a natural prioritisation of its signals, simply because it could react faster than if every signal in its entire system had to be executed concurrently.
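The commenter's scheme can be sketched directly: a tree of buffers where a step always runs the highest-priority non-empty buffer, so any pending signal in a parent implicitly pauses its entire subtree. This is only an illustration of the idea as stated; the buffer names, the depth-first child order, and the one-signal-per-step policy are assumptions made for the sketch.

```python
# Illustrative sketch only: hierarchical priority buffers in which any
# pending signal in a parent buffer pauses all of its children.
# Buffer names and scheduling details are assumptions for this sketch.

from collections import deque

class PriorityBuffer:
    def __init__(self, name, parent=None):
        self.name = name
        self.signals = deque()
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def post(self, signal):
        self.signals.append(signal)

    def step(self):
        """Run one signal from the highest-priority non-empty buffer.

        A buffer runs first if it has a pending signal; its children run
        only when it is empty, so parent activity pauses the whole subtree.
        Returns (buffer_name, signal), or None if every buffer is empty.
        """
        if self.signals:
            return self.name, self.signals.popleft()
        for child in self.children:
            result = child.step()
            if result is not None:
                return result
        return None

# The commenter's concentric example: kernel > security > application space.
kernel = PriorityBuffer("kernel")
security = PriorityBuffer("security", parent=kernel)
apps = PriorityBuffer("apps", parent=security)

apps.post("render")
security.post("audit")
# The security signal pauses the apps buffer until it has drained:
print(kernel.step())  # ('security', 'audit')
print(kernel.step())  # ('apps', 'render')
```

Because signals rarely enter the outer buffers, the inner (child) buffers run uninterrupted almost all of the time, which is exactly the dormant-watchdog behavior the comment describes.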