Stability can be thought of as a measure of agency. That is, the more stable a system is, the better we are able to recognize it as a distinct agent, a system that actively, structurally or by happenstance persists through time, space and/or other dimensions. Burton Voorhees defines a concept of virtual stability as a “state in which a system employs self-monitoring and adaptive control to maintain itself in a configuration that would otherwise be unstable.” He clarifies that virtual stability is not the same as stability or metastability and gives formal definitions of all three.* By making a distinction between stability, metastability and virtual stability, we can gain further clarity on agency itself and the emergence of new agents and new levels of organization.
Metastable systems subsume an important and ubiquitous class of systems here on Earth: autocatalytic systems. These are systems that pass through two or more distinct states in a cyclical way such that each state will eventually be repeated. Examples of autocatalytic systems include gliders, fire, metabolism, and organism reproduction. Due to the hierarchical organization of natural and artificial complex systems, each “state” is also a configuration of agents at the lower level (e.g. pixels, oxygen molecules, enzymes, organs, parent organisms, etc.). Autocatalytic systems are heavily dependent on a rich environment of resource agents for the process to continue indefinitely; without a source of renewal (i.e. exogenous energy), the system eventually will not be able to sustain itself. Thus, a thorough understanding of an autocatalytic system cannot be gleaned without a thorough understanding of the environment of potentially constituent agents. Of course, one agent’s environment is another agent’s… self. That is, agents act as each other’s environments and can thus form cross-catalytic cycles. In metastable systems, agency arises (i.e. emerges) when the catalytic loop is closed and one or more cycles are formed. The tighter the loop (the stronger the probability that, once in state A, the system will get back to state A), the more stable we say the system is, and the more likely we are to recognize the entire system as an agent at the higher level of organization. Agency is not a binary proposition, but rather a continuum whose metric is stability.
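One way to make the “tightness of the loop” intuition concrete is a toy probabilistic model (my own construction, not Voorhees’): treat the catalytic cycle as a chain of steps, each of which has some probability of leaking out into the environment instead of advancing the cycle. The expected number of completed cycles before dissipation then serves as a crude stability metric:

```python
def cycle_survival(leak, steps=3):
    """Probability that a three-state catalytic loop A -> B -> C -> A
    completes one full cycle, given a per-step probability `leak`
    of dissipating into the environment instead."""
    return (1 - leak) ** steps

def expected_cycles(leak, steps=3):
    """Expected number of completed cycles before the loop dissipates:
    the sum of a geometric series with ratio cycle_survival(leak)."""
    p = cycle_survival(leak, steps)
    return p / (1 - p)

# A tighter loop (less leakage) is disproportionately more persistent,
# which is one way to read "agency as a continuum of stability".
for leak in (0.30, 0.10, 0.01):
    print(leak, round(expected_cycles(leak), 2))
```

The nonlinearity is the point: halving the leakage more than doubles the expected lifetime of the cycle, so small improvements in loop tightness buy large gains in how long (and how recognizably) the system persists as an agent.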

From The Chaos Point. Reproduced with permission from the author.
Given the rich environment required to foment autocatalytic reactions from random ones, it is not surprising that where there is one emergence, there are many similar simultaneous emergences. Stuart Kauffman calls life on earth, including us sentient beings, “we the expected,” by which he is at least partially referring to the non-accidental and fecund nature of autocatalysis under the right circumstances. And given the resource-intensive kind of environment required for autocatalysis to emerge, it’s not surprising that autocatalytic agents would find themselves in competition for increasingly scarce resources. The sigmoid graph of the autocatalytic rate law gives a good intuition for how the emergence of agents at a new level of organization quite naturally leads to increasing selective pressure.
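The sigmoid shape can be reproduced with a few lines of numerical integration. This is the generic logistic form of an autocatalytic rate law; the constants are illustrative choices of mine, not values from any particular reaction:

```python
def autocatalytic_growth(k=1.0, x0=0.01, K=1.0, dt=0.01, t_max=20.0):
    """Euler integration of a logistic autocatalytic rate law:
    dx/dt = k * x * (K - x), where x is the autocatalyst
    concentration and K the total resource pool."""
    xs, x, t = [], x0, 0.0
    while t < t_max:
        xs.append(x)
        x += k * x * (K - x) * dt
        t += dt
    return xs

xs = autocatalytic_growth()
# The trajectory is sigmoid: slow start, explosive middle, then
# saturation as the free resource (K - x) approaches zero.
```

The growth rate peaks mid-curve and then collapses as the resource term goes to zero, which is exactly the transition from abundance to scarcity (and hence to selective pressure) described above.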
Once there is selective pressure exerted over a population of autocatalytic agents, we arrive at the familiar zone of Darwinian evolution. As the population moves up the sigmoid and resources become scarce (either through sustained competition or an otherwise changing environment), cooperative behaviors begin to appear. The following narrative illustrates the general dynamic:
When the going gets tough, social amoebas get together. Most of the time, these unusual amoebas live in the soil as single-celled organisms, but when food runs short, tens of thousands of them band together to form a sluglike multicellular cluster, which slithers away in search of a more bountiful patch of dirt.
New research shows that, within this slug, specialized cells rove around vacuuming up invading bacteria and toxins, thus forming a kind of rudimentary immune system. The discovery could provide a molecular link between the bacteria-eating behavior of single-celled amoebas and similar behavior by cells of animals’ immune systems.
Science News, August 25, 2007
Cooperative behavior between agents at one level (e.g. amoebas) can lead to the emergence of a new level of agent (e.g. slugs).** And just as with autocatalytic emergence, should a population of higher-level agents emerge and begin to compete, natural selection occurs at the higher level. This does not mean that selection at the lower level(s) ceases, though the very nature of cooperation implies a reduced differential in fitness between cooperating agents, and thus selective pressure is reduced relative to a competitive environment. Additionally, selection at the higher level can work to constrain destructive selection at the lower level. John Pepper et al. suggest that animal cell differentiation patterns are an adaptation to suppress evolution at the cellular level and hence stave off cancer.*** At the cultural level, we find many examples of the higher level constraining destructive competition between constituent agents, including the entire legal and governmental enterprises. Not all selective pressures are destructive to higher levels, as evidenced by the adaptive immune system, which clearly makes the animal a more robust agent through increased flexibility in responding to threats.
This brings us back to Voorhees’ notion of a virtually stable system, which “maintains itself on the boundary between two or more attractor basins…. [Energy is expended] to maintain the system on an unstable trajectory, or in an unstable state. This energy expenditure purchases an increase in behavioral flexibility.” The adaptive immune system fits the model of virtual stability, as does Palombo’s theory of emergent ego (to name just one of several theories of mind which conjure notions of virtual stability). While in one sense “virtually stable” systems teeter on an unstable edge, in another sense they are more stable (virtually so) than a “one-trick pony” such as the innate immune system, which can be represented by a single basin of attraction. Interestingly, when we find virtually stable systems in nature, they tend to be employing a selection mechanism over a population of agents (in other words, evolution) as the central mechanism of flexibility. But there are other mechanisms of self-monitoring and adaptive control besides natural selection, and we tend to observe these in man-made virtually stable systems such as robotic controllers, constitutional democracies, etc.
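A minimal sketch of virtual stability, under toy assumptions of my own: a particle in a double-well potential has two attractor basins (at x = ±1) separated by an unstable ridge at x = 0. Left alone, noise knocks it into one basin or the other; a feedback controller can spend “energy” to hold it at the ridge, preserving the flexibility to drop into either basin on demand:

```python
import random

def run(control_gain, steps=5000, dt=0.01, noise=0.05, seed=0):
    """Double-well dynamics dx/dt = x - x**3: stable basins at
    x = +/-1, unstable ridge at x = 0.  A feedback term
    u = -control_gain * x spends control effort ("energy") to hold
    the system at the ridge -- Voorhees' virtual stability."""
    rng = random.Random(seed)
    x, energy = 0.0, 0.0
    for _ in range(steps):
        u = -control_gain * x
        energy += abs(u) * dt
        x += (x - x**3 + u) * dt + noise * rng.gauss(0, 1) * dt**0.5
    return x, energy

x_free, e_free = run(control_gain=0.0)  # commits to one basin
x_held, e_held = run(control_gain=5.0)  # hovers near the ridge, at a cost
```

Without control the system commits to one basin (it is merely stable); with control it pays a continuous energy cost in exchange for behavioral flexibility, which is exactly the trade described in the quotation above.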
At this point, I will indulge in some speculation about the relationship and differences between autocatalytic emergence and cooperative emergence.
- Agents that emerge from either type can be found as constituents for a higher level emergence of either type.
- Autocatalysis is formed from heterogeneous agents, each of which plays a unique role, and thus autocatalysis is more susceptible to disruption (less stable) than cooperative emergence.
- Cooperative emergence often relies on homogeneous agents which are functionally interchangeable and is thus more robust.****
- For a cooperatively emergent agent to persist through time, its constituent agents must also have a way to persist through time, generally in the face of deleterious forces. This means that constituent agents must either self-repair when compromised (virtual stability), or be replaced by a healthy, functionally equivalent agent. Conceptually speaking, such replacement can happen via either replication or re-emergence, but practically speaking, the former is much more prevalent. This is because autocatalytic systems can and do produce by-products, including additional copies of the original autocatalytic set. This is the foundation of agent replication, which, due to geometric growth, yields exponentially greater population sizes than re-emergence alone.
- Autocatalytic systems, being less stable on the whole, exhibit a weaker form of agency than cooperative systems. That is, the agents don’t “look out” for their own well-being as forcefully, and so the degrees of freedom of interaction — and the velocity and multitude of interaction — are greater in autocatalysis than cooperative emergence.
- Cooperative behavior requires direct informational feedback, whereas a single catalytic reaction does not (though once the catalytic loop is closed, a feedback loop has been created).
- Virtual stability only arises in systems that are internally complex enough to contain models of not only their current environment (as all agents either explicitly or implicitly do), but also models of what Kauffman terms the “adjacent possible.” In other words, the system must be able to predict what the environment might be like in the future. Populations of agents undergoing selection have this feature (at the population level), as do neural networks and other local search mechanisms.
- One way to look at agency is as a “unit of survival”, or what persists over time. Replication is a form of meta-survival, but plain old survival counts too. Individual molecules exist for a long time, individual turtles less so.
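The replication-versus-re-emergence point above can be seen in a two-line recurrence (the rates are illustrative choices of mine): re-emergence adds agents at a constant rate, while replication adds agents in proportion to the agents already present.

```python
def grow(steps, replication_rate, emergence_rate, p0=1.0):
    """Deterministic population update: re-emergence adds agents at a
    constant rate per step; replication adds agents in proportion to
    the population already present (geometric growth)."""
    p = p0
    history = [p]
    for _ in range(steps):
        p = p * (1 + replication_rate) + emergence_rate
        history.append(p)
    return history

re_emergence_only = grow(50, replication_rate=0.0, emergence_rate=1.0)
with_replication = grow(50, replication_rate=0.2, emergence_rate=1.0)
# Re-emergence alone grows linearly; replication compounds, so the
# replicating population dwarfs the re-emergent one within a few dozen steps.
```

Even a modest per-step replication rate compounds, which is why replication, once stumbled upon, dominates re-emergence as the mechanism by which constituent agents get replaced.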
Populations of agents are said to evolve via natural selection. (And populations are agents too). But agents evolve via mechanisms other than natural selection, for instance ontogenesis and senescence.
There is nothing particularly special about natural selection in the pantheon of complex systems dynamics. It is a model that has great explanatory power for a set of observations and phenomena that are readily available to the naked human eye combined with a cataloging mechanism. With the burgeoning interest in complex systems, and the more general trend towards multidisciplinary discourse, we will eventually have robust models of emergence (and other dynamics) as distinct from evolution.
—
* In an earlier post I classified metastability and various kinds of virtual stability (such as self-repair and environmental representation) as “mechanisms” of stability, and I see now that I probably should have used the word “aspect” instead of “mechanism”. Regardless of whether one views metastability and virtual stability as classes of a more general definition of stability or as distinct from one another, it should be clear that all three are intimately connected and lead to a more general notion of agency than any one alone would.
** I have claimed in a previous post that “emergence of higher levels of organization of complex systems happens via cooperation of agents at the lower level, and that without cooperation, the burgeoning of complexity would not occur.” I should have been less absolute in this claim because I believe now that autocatalytic emergence is importantly different from cooperative emergence.
*** Apoptosis (programmed cell death) is also believed to be such an adaptation.
**** Heterogeneous agents can and do form cooperative alliances, and some co-evolve to the point of strong symbiosis, wherein one cannot exist without the other. A common example is gut flora (e.g. E. coli), which are necessary for many mammals (including humans) to digest their food, and which have become specialized to the diet and general environment of their host’s innards.