LIFE, COMPLEXITY AND THE EDGE OF CHAOS: COGNITIVE ASPECTS OF COMMUNICATION BETWEEN CELLS AND OTHER COMPONENTS OF LIVING SYSTEMS

[submitted to: Acta Biotheoretica, Nov. 27, 1995]

Donald C. Mikulecky, Department of Physiology, Medical College of Virginia Commonwealth University, Box 980551, MCV Station, Richmond, VA 23298-0551, U.S.A. E-mail: Mikulecky@gems.vcu.edu

ABSTRACT: The concept of "complexity" has become very important in theoretical biology. It is a many-faceted concept, still too new and ill-defined to have a universally accepted meaning. This review examines the development of this concept from the point of view of its usefulness as a criterion for the study of living systems, to see what it has to offer as a new approach. In particular, one definition of complexity has been put forth which has the necessary precision and rigor to be considered a useful categorization of systems, especially as it pertains to those we call "living". This definition, due to Robert Rosen, has been developed in a number of works and involves some deep new concepts about the way we view systems. In particular, it focuses on the way we view the world and actually practice science through the use of the modeling relation. This mathematical object models the process by which we assign meaning to the world we perceive. By using the modeling relation, it is possible to identify the subjective nature of our practices and deal with this issue explicitly. By so doing, it becomes clear that our notion of complexity, and especially its most popular manifestations, is in large part a product of the historical processes which led to the present state of scientific epistemology. In particular, it is a reaction to the reductionist/mechanistic view of nature which can be termed the "Newtonian Paradigm". This approach to epistemology has dominated for so long that its use as a model has become implicit in most of what we do in and out of science. The alternative to this approach is examined and related to the special definition of complexity given by Rosen. Some historical examples are used to emphasize how much our view of what is complex, in the popular sense, depends on the ever changing state of our knowledge. The role of some popular concepts, such as chaotic dynamics, is examined in this context. The fields of artificial life and related areas are also viewed from the perspective of this rigorous view of complexity and found lacking. The notion that in some way life exists "at the edge of chaos" is examined. Finally, the causal elements in complex systems are explored in relation to complexity. Rosen has shown that a clear difference in causal relations exists between complex and simple systems and that this difference leads to a uniquely useful definition of what we mean by "living".

INTRODUCTION

Science progresses at a smooth, continuous rate of change until new paradigms cause more catastrophic changes. In hindsight, these discontinuities become a normal, well understood and accepted aspect of the history of science. We are in a period which fosters the use of words such as "new science", "emerging science" and "revolution" to describe something called "the study of complexity, or complex systems". Centers for the study of complexity and/or complex systems now exist in prestigious institutions around the world. This review is intended to put these ongoing changes into historical perspective. It is based on the belief that significant and radical changes in the way we do and view science are indeed at hand. It is also based on some historical facts which are in danger of being missed as the popular press and the internet play a stronger role in the way science is seen by the world at large. Science has become so highly specialized and fragmented that it turns almost all its practitioners into "educated" lay persons as soon as they wander any distance from their own specialty. This fact alone makes a review of this nature very necessary.

What this review does is to focus on a particular idea of how to interpret the term "complexity", an idea which goes back a long time and has been largely ignored by those who recently discovered that reductionist science has limits. Using this idea, we will examine other attempts to define the same term and show why these attempts fail to break free from the reductionist mentality which they so strongly criticize. Finally, in a more speculative mood, this review will suggest that we have systematically looked in the wrong place for an understanding of what living systems are. This last assertion is based on our incomplete collective attempts to use new techniques and the explosion in computing power to look at complication rather than complexity. The answer seems close to being found and yet continues to be hard to see. Possibly this is due to how we have chosen to look. Possibly, even though we now use "complexity" and "chaotic dynamics" as everyday terms, we have not yet really changed how we look at the world in any fundamental way. Let us examine the situation and see if there are any clues about how a really new way of looking might come about.

In his book on cybernetics, Ashby [Ashby, 1956] introduces a concept about systems which has come to occupy a central position in modern systems science. He speaks of "The Complex System" in his early discussion of cybernetics as an approach to systems science. It is worthwhile to examine his discussion of this topic in more detail. According to Ashby, complex systems are "common in the biological world" and "too important to be ignored". The following passage is extremely relevant to the ideas being reviewed in this paper:

"Science stands today on something of a divide. For two centuries it has been exploring systems that are either intrinsically simple or that are capable of being analyzed into simple components. The fact that such a dogma as 'vary the factors one at a time' could be accepted for a century, shows that scientists were largely concerned in investigating such systems as allowed (by) this method; for this method is fundamentally impossible in the complex systems."

A more prophetic passage could hardly be found. The extent to which this prophecy has been manifest and refined is the topic of this review. To develop that topic, it will be necessary to spend considerable time defining the terms Ashby uses in some detail. In particular, he uses the terms "simple" and "complex" in a way which Robert Rosen has since used to considerable advantage. This usage, though intuitively very logical, has resulted in some misunderstanding and confusion. This problem must be dealt with systematically.

In his discussion of the correct approach to systems, Ashby speaks about an alternative way of studying systems. He does this by insisting that instead of asking what the system is doing here and now, we should ask "what are all the possible behaviors that it can produce?" Herein lies the essence of complexity. It will also be shown that this is clearly a suggestion which embodies a notion of subjectivity in a practice (science) which has presented itself as free from such "soft" influences and which claims to be the most deserving of the term "objective".

In order to treat these ideas with adequate care, it is necessary to first define terms and concepts which will be used to develop the argument.

THE MODELING RELATION AND COMPLEXITY:

The Modeling Relation: Our "window" to the natural world.

Rosen [1978, 1985, 1991] introduces the modeling relation to focus our thinking on the process we carry out when we "do science". It is a mathematical object, but will be manipulated in a less formal way here. It must be kept in mind that the mathematics involved is among the most sophisticated available to us. In its purest form, it is called "category theory" [Rosen, 1978, 1985, 1991]. Category theory is a stratified or hierarchical structure without limit, which makes it suitable for application to complexity, for, as Rosen shows in some detail, there is no "largest model" for a complex system. This means that we can compile as many mechanistic models as we like, but we will never recapture the complex system by this method. In other words, our usual reductionist methods are inadequate in these cases. It will not be necessary to understand category theory to follow this presentation. On the other hand, there will be ideas which are best discussed in terms of category theory and which will seem a bit more clumsily presented without it.

Fig. 1: The modeling relation

Figure 1 represents the modeling relation in pictorial form. The figure shows two systems, a so-called "natural system" and a "formal system", related by a set of arrows depicting processes and/or mappings. The assumption is that when we are "correctly" perceiving our world, we are carrying out a special set of processes which this diagram represents. The natural system is something in our surroundings which we wish to "understand" (as well as control and make predictions about, if these are distinguishable from "mere" understanding). In particular, arrow 1 depicts causality in the natural world. Embedded in this representation is our belief that the world has some sort of order associated with it; it is not a hodgepodge of seemingly random happenings. On the right is some creation of our mind, or something our mind "borrows", in order to try to deal with observations or experiences we have with our surroundings. The arrow 3 is called "implication" and represents some way in which we manipulate the formal system to try to mimic causal events in the natural system of interest. The arrow 2 is some way we have devised to encode the natural system or, more likely, selected aspects of it, in terms of the formal system. Finally, the arrow 4 is a way we have devised to decode the result of the implication event in the formal system to see if it represents the result of the causal event in the natural system. Clearly, this is a delicate process and has many potential points of failure. When we are fortunate enough to have avoided these failures, we have actually succeeded in having the following relationship be true:

1 = 2 + 3 + 4.

When this is true, we say that the diagram commutes and that we have produced a model of our world.
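
To make the commutativity condition concrete, the following is a minimal sketch, in Python, of the modeling relation as a composition of mappings. All of the names and the trivial "natural law" used here are illustrative assumptions, not part of Rosen's formalism; the only point is that following arrows 2, 3 and 4 must give the same answer as following arrow 1.

# A minimal sketch of the modeling relation as a composition of mappings.
# The names (causal_event, encode, infer, decode) and the toy "law" are
# illustrative placeholders only.

def causal_event(n):              # arrow 1: causality in the natural system
    return n * 2                  # stand-in for "what nature does"

def encode(n):                    # arrow 2: encoding into the formal system
    return float(n)

def infer(f):                     # arrow 3: implication within the formal system
    return f * 2.0

def decode(f):                    # arrow 4: decoding back to the natural system
    return int(f)

def commutes(n):
    # "1 = 2 + 3 + 4": the path through the formal system must agree
    # with following causality directly.
    return decode(infer(encode(n))) == causal_event(n)

print(all(commutes(n) for n in range(10)))    # True: the diagram commutes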

This exposition of the modeling relation follows that of Rosen [1991] closely and is somewhat simplified. Much of importance is not discussed here due to limitations of space. One of the most important topics underlying this diagram is the role of measurement in providing our "perception" of the natural world, and its role in the choice of how we implement the encoding and decoding steps. For more details, consult Rosen [1978, 1985].

One feature of the structure of the modeling relation which we cannot afford to fail to understand is that the encoding and decoding mappings are independent of the formal and/or natural systems. In other words, there is no way to arrive at them from within the formal system or the natural system. This makes modeling as much an art as it is a part of science. Unfortunately, this is probably one of the least well appreciated aspects of the manner in which science is actually practiced and, therefore, one which is often actively denied.

The Nature of Complexity:

The Newtonian Paradigm and Cartesian Reductionism:

In science, the modeling relation which has been dominant for centuries came to us through the work and thinking of Descartes and was formalized by Sir Isaac Newton. We will refer to this particular way of perceiving the world as the "Newtonian Paradigm" and to the closely allied aspects of Cartesian epistemology as "reductionism". In short, most of the advances of modern science are based on this approach. This is the underlying fact in Ashby's statement above. Epistemologically, reductionism has become ontological [Peacocke, 1985]. Wholes are merely the sum of their parts, and Descartes' machine metaphor and mind/body dualism are the dominant ideas we encounter. The most recent manifestation of this way of modeling the natural world is "molecular biology" which, in fact, can be said to be an oxymoron. If biology is the study of life, it surely requires the retention of higher levels of organization than the molecular! But this one method of viewing our world has become dominant and, in fact, is the basis for a form of snobbery which criticizes disciplines such as sociology and psychology for being "soft" sciences because they are not readily able to "beat" their subject matter into a form which conforms to the Newtonian/Cartesian mold [Rosen, 1991].

The Emergence of Complexity:

Newtonian mechanics gave way to quantum mechanics after the turn of the century, but the Newtonian/Cartesian world view did not suffer in spite of this upheaval. Even though the ultraviolet catastrophe told us that the idea that physics was "finished" with its job was a total illusion, we have had to wait for chaos, fractals, and other signs that the Newtonian/Cartesian picture is a very special way of seeing the world, a special manifestation of the modeling relation [Rosen, 1991; Casti, 1989].

As the evidence grew that this single way of looking at the world was deficient, the notion that reductionism is the path to knowing began to come under scrutiny. One of the earliest prophets of this wave of change was Nicholas Rashevsky [1954] who, in a paper which marked a radical turning point in his own approach, turned sharply away from his many successes in creating mechanistic models of living systems towards a new path which he called "relational biology". It is worth noting that this was just two years before Ashby's above-cited work was published. First as Rashevsky's student and later on his own, Robert Rosen has developed those ideas ever since. Meanwhile, the word "complexity" has become a buzz-word both in science and in the lay press. A recent article in Scientific American [Horgan, 1995] mentions 31 definitions of the word, none of which necessarily recognizes the others. We can surmise at least two things from this: first, that there is indeed something happening which has caused all this commotion; and second, that the Newtonian/Cartesian paradigm has had its feet of clay exposed.

The first thing to be dealt with here is the manifestation of the new field called "complexity research"; the second is to examine its new notions of complexity with some care. For these purposes, the modeling relation will be very necessary and useful. As complexity is discussed in terms of its manifestations in the literature and the lay press, it will be useful to have some benchmark to use as a point of reference. For this purpose, the concept of complexity used by Robert Rosen will be of great value to us.

A working definition of complexity: (From Rosen, 1978)

A system is complex if we can describe it in a variety of different ways, each of which corresponds to a distinct subsystem. Complexity then ceases to be an intrinsic property of a system; it is rather a function of the number of ways in which we can interact with the system and the number of separate descriptions required to describe these interactions. Therefore, a system is simple to the extent that a single description suffices to account for our interactions with the system; it is complex to the extent that this fails to be true.

Thus, a system is complex to the extent that we have more than one distinct way of interacting with it. This will necessitate more than one model. Remember that a commuting modeling relation is a single model. Distinct, in this sense, means that one model is not derivable from another. The more general statement about complex systems is that there is no largest model, from which all others might be derived. To the extent that this is not true about a system, we call it simple. In other words, simple systems do have a largest model from which all others may be derived. Generally, in simple systems, the largest model arises from a single viewpoint which is some version of the Newtonian Paradigm. The most representative example of a formal system which constitutes a "largest model" of simple systems is non-linear dynamics. This statement is a central reason for the ideas being reviewed here and will require a great deal of elaboration.

The subjective component to this categorization must now be dealt with. In a very real sense all natural systems are complex. We saw them as simple for hundreds of years only because we had no tools to interact with them in more than one way. This means both experimental tools and theoretical tools. The revolution in progress in science is so dependent on the fact that the new tools now exist that our necessary introduction of subjectivity is really nothing more than an acknowledgement of an epistemological reality.

Complexity in language as an analogy for complexity in science.

In order to make ideas more concrete and in line with intuition, a comparison may be helpful. Our tool for this comparison will be the modeling relation. We will put complex systems science as a category in the position of the natural system and the formal system will be language. This use of the modeling relation is very important and analogical models make up a very special class of models [Rosen, 1985,1991]. It is necessary to develop an encoding and decoding such that the diagram commutes. We wish to dwell on one aspect of complex systems which is paramount in making it impossible for them to have a largest model. This aspect is the presence of some impredicativity in the form of a non-computable property. Impredicativity is closely related to the property of self-reference. It prevents the computation of some feature of the whole system. In literature we speak of the difference between syntax and semantics. In other words, we are speaking of something that necessitates our being both subject and object in the process. Syntax is a concept which carries over from language to computer science and the limits of computers. To say it as simply as possible, it is a way of encoding things into some "alphabet" of symbols so they can be processed by a device which has the ability to manipulate the symbols and thereby manufacture propositions, algorithms, and theorems from them. These in turn can be used to further manipulate the symbols, propositions, algorithms and theorems. The most general version of this type of device is called the Universal Turing Machine (UTM) [Fischler and Firschein, 1987].
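
As a concrete, if trivial, illustration of purely syntactic symbol manipulation, the following sketch implements a toy Turing-style machine that inverts a binary string. The rule table and all names are arbitrary choices made for illustration; the point is only that every step is a rule applied to symbols, with no meaning entering anywhere.

# A toy Turing-style machine: purely syntactic rewriting of symbols on a tape.
def run_tm(tape, rules, state="scan", blank="_"):
    tape = list(tape) + [blank]
    head = 0
    while state != "halt":
        symbol = tape[head]
        new_state, write, move = rules[(state, symbol)]   # look up the rule
        tape[head] = write                                # rewrite the symbol
        head += 1 if move == "R" else -1                  # move the head
        state = new_state
    return "".join(tape).rstrip(blank)

# Rule table: flip each bit and move right; halt on the blank at the end.
rules = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("halt", "_", "R"),
}

print(run_tm("10110", rules))     # prints "01001"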

Our modern computers are special cases of this UTM. Things computable with syntactical machines are simple. So complex things involve semantics. Here is where literature can be of great help to us. In Robert Frost's poem "Stopping by Woods on a Snowy Evening" the last stanza reads:

The woods are lovely, dark, and deep,

But I have promises to keep,

And miles to go before I sleep,

And miles to go before I sleep.

This example came up in a recent symposium on the machine metaphor in science and the nature of complexity [Henry, 1995]. Henry emphasizes the formal identity in syntax and grammar of the last two lines. He stresses that they are not equivalent semantically and are highly context dependent. More than that, the meaning they convey is not formalizable; that is, it is not capable of being expressed in the kind of algorithmic manner necessary for it to be programmed on a computer.

Rosen [1978, 1985, 1991] gives a much more elegant and rigorous explanation of these terms, but this is the subject of many books and papers. For this discussion, it is hoped that the analogy drawn with literature is sufficiently suggestive to make it possible to use the term "complexity" with some precision. It only needs to be noted that the Newtonian Paradigm deals only with the syntactical aspects of systems and thus treats them as simple systems which are totally computable. As has been pointed out, the term "complexity", as used by many, is at best ambiguous, if not badly defined. It is useful to look in some depth at one aspect of complexity which is widely accepted as a key feature of complex systems, namely, the property of self-organization.

Self-Organization: An oddity or normal behavior?

The phenomenon of self-organization is receiving more and more attention in the biological literature. Often, it is mentioned in a context which at least implies, if it does not state directly, that such phenomena somehow violate the second law of thermodynamics. This is because the second law itself is so poorly understood. We often speak of entropy as a measure of order and/or disorder in the context of simple systems containing, at most, a few non-interacting species in a totally linear world. In stark contrast, the biological world is highly organized and seems to achieve this organization spontaneously. It is often argued that self-organization in biology is a result of living systems being thermodynamically "open". This enables them to achieve stationary states away from equilibrium as long as a flow of matter and/or energy through them is maintained. This ignores some fundamental examples of equilibria which manifest self-organization. The clearest examples are the oil/water partition, the hydrophilic/hydrophobic dichotomy, and the spontaneous formation of micelles, vesicles, and lamellae in mixtures of lipid and water. The most pertinent example in this class may be the "black lipid membrane", which suggests that the cell membrane is an inherently stable structure, at least in the formation of its lipid "backbone". It may have been this kind of spontaneous self-organization which played a key role in evolution. It would have provided the necessary compartmentalization to insure that certain key proteins made by an RNA molecule could be kept in its vicinity in order that they be available for its own use [Alberts, Bray, Lewis, Raff, Roberts, and Watson, 1994]. Thus, the "wonders" of self-organization may be more dependent on subjective reactions to real surprises which result from our inability to make sound predictions using the Newtonian Paradigm or the more limited versions of thermodynamics. In the context of the coupled, non-equilibrium processes which are the essence of the living system as a homeostatic, reproducing, developing and growing entity, self-organization seems commonplace, at least from a phenomenological point of view. Modern thermodynamics in the form of network thermodynamics deals with these coupled processes through n-ports, devices which embody the second-law principle that structure and organization easily occur when such negative-entropy-producing processes are coupled to even greater sources of dissipation [Prigogine, 1961; Oster, Perelson and Katchalsky, 1973; Peusner, 1986; Mikulecky, 1993]. Often, this coupling is a result of conservation laws at work and has very little to do with reductionist mechanisms. As an example, reference is constantly made to the Bénard phenomenon, which has become the symbol of a self-organizing system.

The Bénard phenomenon

The Bénard phenomenon is striking because it arises in a layer of water with a thermal gradient across it. It is easily demonstrated by placing a petri dish of water on a hot plate. The setup is shown in figure 2.

Fig. 2: The Bénard System

Heating the water from below creates a potentially unstable situation since the cooler water at the top is more dense and tends to "fall" to the bottom in the gravitational field.
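
Although it is not needed for the argument, the standard quantitative criterion for the onset of this instability may help fix ideas. Under the usual textbook (Boussinesq) assumptions, the competition between buoyancy and the stabilizing effects of viscosity and heat conduction is summarized by the dimensionless Rayleigh number:

Ra = \frac{g \, \alpha \, \Delta T \, d^{3}}{\nu \, \kappa}

where g is the gravitational acceleration, \alpha the thermal expansion coefficient of the fluid, \Delta T the temperature difference across the layer, d the depth of the layer, \nu the kinematic viscosity and \kappa the thermal diffusivity. Conduction persists while Ra lies below a critical value of order 10^3 (the exact number depends on the boundary conditions [Chandrasekhar, 1961]); above it, the convection cells described below appear.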

If the temperature gradient is not too large, the instability is avoided and normal thermal diffusion will occur. At some critical gradient, an unstable condition is reached and the water "organizes" itself into hexagonal cells in which convection occurs. Figure 3 is a drawing of the result of such organization as viewed from above.

Fig. 3: The hexagonal cells as viewed from above

Figure 4 depicts the pattern of flow in the system once the convection cells are established.

Fig. 4: The flow pattern

There are many attempts to make more of this situation than seems to be warranted by the facts.

How can this spontaneous self-organization be reconciled with the laws of thermodynamics? The problem is really one of missing the point of thermodynamics rather than a contradiction or paradox. The notion of entropy as a measure of order is one which applies in isolated systems. In isolated systems, matter and energy cannot enter or leave, and their dynamics are short-lived. They proceed to equilibrium and stop. Closed and open systems, on the other hand, can be maintained in dynamic, non-equilibrium stationary states indefinitely as long as energy and/or matter is continuously supplied. The simplest, linear system exhibits stationary non-equilibrium states which obey the principle of minimum entropy production. This principle states that if some subset of the forces and flows in a non-equilibrium linear system are constrained, the remaining flows will adjust themselves to produce a stationary state of minimum entropy production. Network thermodynamics shows this to be a result of Tellegen's theorem, which follows from conservation laws, system organization, and little else. In a very real way, this itself is a kind of self-organization. The requirement of the minimum dissipation theorem is linearity, which is another way of speaking about closeness to equilibrium. When systems are "stressed" sufficiently, their linear behavior gives way to non-linear behavior. In other words, the response to stress is no longer merely proportional to the size of the stress. Generally, the stress is some gradient imposed on the system. In thermodynamic language, these stresses are differences in potential-like quantities (intensive parameters) such as temperature, concentration, electrical potential, pressure, etc., and are referred to as "driving forces". The concept of a system which is displaced from equilibrium relaxing back to equilibrium spontaneously is simply a criterion for stability and also has its connection to the maximum entropy (disorder) concept. The idea is merely that the system has the highest probability of moving toward its most probable state from a far less probable state. The tendency for this to happen is embodied in the second law of thermodynamics, which requires that entropy be created in any spontaneous process. A steady state away from equilibrium can thus be looked upon as a constrained system. The constraints consist of whatever gradients are maintained by supplying energy and/or matter. The linear, near-equilibrium system then organizes itself to dissipate least, that is, to produce the minimum amount of entropy as it responds to the imposed constraint. Remove the constraints by isolating the system from its source of energy and/or matter and it will spontaneously migrate to its highest entropy state, the equilibrium state.
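
As a concrete illustration of the minimum entropy production principle just described, consider the textbook two-flow linear case (the symbols here are generic and are not tied to any particular system discussed above):

\sigma = J_1 X_1 + J_2 X_2, \qquad J_i = \sum_j L_{ij} X_j, \qquad L_{12} = L_{21}.

If the force X_1 is held fixed while X_2 is left free, then

\sigma = L_{11} X_1^{2} + 2 L_{12} X_1 X_2 + L_{22} X_2^{2}, \qquad \frac{\partial \sigma}{\partial X_2} = 2 (L_{12} X_1 + L_{22} X_2) = 2 J_2 = 0,

so the state of minimum entropy production is precisely the stationary state in which the unconstrained flow J_2 vanishes.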

If the applied stress is sufficiently large, some systems, typified by the Bénard system, undergo a transition which we now call self-organization and actually seem to maximize their dissipation [Schneider and Kay, 1994]. Here we encounter a chicken/egg problem. Is the new organization due to the fact that the system is maximizing the rate of entropy production, or is the high rate of entropy production a result of the new organization? Consider an extreme case, namely the breaking of a rigid beam under conditions of extreme compressive stress at its ends. A rearrangement of the "organization" of the beam results when the compressive stress at the ends exceeds a certain threshold. Before the threshold is exceeded, the beam merely bends in response to the stress. It is much harder to see a general extremum principle at work in this case and, in some way, the result is more akin to self-disorganization than to self-organization. However, in both cases, the result is to rid the system of the imposed gradient in the most drastic way available to it. We will return to this idea when we discuss the work of Schneider and Kay. It is important to see what there is about the Bénard system which would qualify it as complex in the sense we have established here.

The Bénard system as an example of complexity.

The conduction of heat in water is no mystery. We have both thermodynamic and kinetic descriptions of this process and need not ask for more. Once the hexagonal convection cells are established, the fact that conduction of heat is now replaced by a coupled convection/conduction system is also readily described by classical physical ideas, well within the Newtonian Paradigm [Chandrasekhar, 1961]. The formation of the hexagonal cells and the transition from pure conduction to the more elaborate system which exists after the transition are not explainable from these same physical concepts. One reason the Bénard system is such a good example of what is "different" about complex systems is the fact that things are both uncomplicated and clear. The usual system used to demonstrate the effect is one of relatively pure water, although other fluids (including air) will work as well. Thus there are no structural clues or molecular details with which to explain the phenomenon. The structures are truly an example of Prigogine's "dissipative structures" [Prigogine and Stengers, 1984] and they leave no residue once the conditions for them to appear are removed. In fact, the closest thing we have to this phenomenon is a "phase transition", and knowing that is of little or no help to us.

In fact, there really is no good formalism to incorporate into the modeling relation in order to capture either the transition or the formation of hexagonal convection cells. Furthermore, if the rate of entropy production for the system is calculated, it is readily seen that the system has increased its rate of entropy production by this non-linear means of self-organization [Schneider and Kay, 1994]! Thus, nearer equilibrium, the system is linear and it obeys the minimum entropy production principle while in a stationary state away from equilibrium. Then, as the gradient of temperature across the system is increased, it undergoes this marvelous transition and begins dissipating at a high rate rather than minimally! These are merely a few of the ways in which we can interact with this system, and we have clearly established that more than one distinct model is necessary to deal with what we observe, and that one observation is particularly distinct in its nature from the others. If the temperature gradient is increased even further, the water will eventually boil and still another, difficult to model, interaction is experienced. We will consider this extreme when we discuss the "edge of chaos" concept.

Some alternative views on complexity:

With this basic introduction to the nature of complexity, it is useful to see how others view the same subject. To do this, we will examine some current thoughts as expressed in the writings of a number of authors. We will see some overlap with what we have developed so far, but also some differences. After this survey of ideas, we will be able to provide some critical evaluation based on the ideas developed above.

The following passage is from Levy's book on "artificial life" [Levy, 1992].

"The latest twist on our perception of the necessary conditions for aliveness comes from the recognition of complex systems theory as a key component in biology. A complex system is one whose component parts interact with sufficient intricacy that they cannot be predicted by standard linear equations; so many variables are at work in the system that its overall behavior can only be understood as an emergent consequence of the holistic sum of all the myriad behaviors embedded within. Reductionism does not work with complex systems, and it is now clear that a purely reductionist approach cannot be applied when studying life: in living systems, the whole is more than the sum of its parts.

The spirit of this passage is clearly in keeping with the idea of complexity we are developing, but something is lacking. The notion of "parts" as used here is generally indicative of a structure/function relationship in syntactic systems. In opposition to this idea is the recognition of functional components introduced by Rosen [1991] and amplified by Kampis [1991].

This is a crucial distinction, and the two are the same only in simple systems or mechanisms. In complex systems, we can see the usual explanations of properties being associated with parts of the system, but this will not be enough. What is also similar, yet different, is the idea that the whole is not the sum of the parts. This is not due solely to the large number of interactions nor to their non-linearity or intricacy. It is due to the fact that they constitute a non-computable aspect of the system. That non-computability entails the size, intricacy and non-linearity of the system, and more. There are, in fact, non-linear machines with lots of interactions and intricacy. We could call these systems "complicated", but not necessarily complex. They are, by our definition, simple nevertheless. Finally, most of what is called "artificial life" is a matter of computer simulation, and it will be necessary to severely alter the definition of "life" if these simulations are to be considered living, let alone complex. In the Bénard system described above, the emergent, self-organizing events are in no way dependent on machine-like parts making the transition to a new organization happen. Nor is such an event computable in the usual sense of Turing computability. This is because the "hardware" and "software" are not distinct, nor are they constant. In fact, it is the self-referential nature of the events during the transition which determines the outcome, and it is the outcome which determines these processes! Computer software may simulate aspects of this, but in these simulations the software is all that can modify itself; the hardware is immutable. This is the central difference between the mechanistic and the complex. In the complex system, there is usually a "model" of the system and its environment which allows the system to anticipate the things it must do to reach some "goal" [Rosen, 1985; Prideaux, 1996; Mikulecky and Thellier, 1993]. This is a deep concept and it will not be possible to develop it adequately here. Suffice it to say that the full nature of complexity is lost in any system which can be simulated.

The next source to notice is a book on the wonders of artificial neural networks [Altman, 1989]. In a discussion about Carver Mead, creator of the silicon retina and the silicon cochlea [Mead, 1989], the author quotes Stanislaw Ulam:

"...most natural phenomena are complex systems, and regarding them as special cases is like referring to the class of animals that are not elephants as non-elephants."

This passage captures the essence of the subjective nature with which we make these categorizations. Rosen points out that even a stone, which seems simple to us, will be a complex system to a geologist, merely because of the myriad of ways the geologist has of interacting with the stone.

The author then goes on to quote Terry Sejnowski, a leader in the field of artificial neural networks:

" ...We're going to have to broaden our notions of what an explanation is. Explanations such as x ---> y ----> z may be impossible for some systems."

This quotation speaks to the class of phenomena typified by artificial neural networks. These are highly parallel systems whose resultant behavior arises from the simultaneous action of their parts rather than from the sequential computation characteristic of the UTM.

He goes on with the quotation later:

"...this 'thinking' ability, however, isn't the result of any one neuron's action. Rather it emerges from the complex interaction of large numbers of individual neurons."

Here is a real dilemma. The "secret" behind the amazing things artificial neural networks can do lies in the fact that they are "massively parallel" or "parallel distributed systems", just as the brain seems to be. And at the simple level at which we have been able to construct them, they look very much as if they do what brains do. Are they therefore complex like the brain? They most likely are not. This becomes apparent as soon as we realize that they can be completely simulated on the Turing type computer [Rosen, 1994]. It is very likely that this is not the case with the brain. The halting problem is but one reason why this should be so [Fischler and Firschein, 1987]. In the halting problem, a series of exercises is performed on UTM's. The outcome is a contradiction, but the UTM is not able to "see" it, while our brain, as observer, is able to do the job. Once again, it is the computability of these systems which suggests that they are not really complex in the sense that brains are. This does not deny the very strong possibility that they do, indeed, capture a very important computable aspect of complex systems.
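
The flavor of the contradiction can be sketched in a few lines of code, using Python functions as stand-ins for Turing machines; the names are illustrative choices, but the construction is the classical one.

def paradox(halts):
    # Given a claimed halting oracle halts(program), build the program
    # that defeats it.
    def g():
        if halts(g):          # if the oracle says g halts...
            while True:       # ...then g loops forever,
                pass
        return                # ...otherwise g halts immediately.
    return g

def optimistic_oracle(program):
    return True               # one arbitrary guess

g = paradox(optimistic_oracle)
# The oracle claims g halts, but by construction g would then loop forever;
# guessing False fails symmetrically.  No total, always-correct oracle can
# exist, yet we, standing outside the formalism, can see why.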

Let us look more closely at what they do, to examine why the fact that they can solve problems is not enough to qualify them as complex in Rosen's sense. One clear use for artificial neural networks is pattern recognition. In this application, the "neurons" may actually be a part of the pattern. For example, we might use a grid of a certain size on the computer screen. The grid then can be used to represent various symbols, say the letters X, E, T and O. In each case, the grid is made up of pixels which are on or off for each of the symbols in the set. The pixels correspond to the neurons in the network. The state "on" can be represented by a +1 and "off" by a -1. The network consists of the array of neurons, for example 10 x 10, which determine by their states which pixels are lit. This can easily be represented by an array of plus or minus ones 100 members long. Each of the four letters in the set would be a different array. With a standard algorithm, the "weights" of the connections between the neurons can be calculated for each of the four letters. These weights are then added and the network has thereby been "trained". To "present" the network with a task, it is merely necessary to present a new array of 100 symbols. Let it represent one of the four letters which has been "fuzzied" by randomly changing a certain percentage of the symbols in the array. The fuzzied array is then fed into the network as input and an algorithm for computing the new states of the neurons from this input is activated. The new set of states is generally a sharpened image of the letter which had been fuzzied. This sharper image becomes the input for the next iteration, and so on until the image stops changing or some arbitrary cut-off is reached. Generally, the process converges on the answer after only a few iterations. This is the end of the process, and the solution has been shown to correspond to a minimum in some "energy" function [Hopfield, 1982]. Thus, the system has actually performed some kind of optimization. In general, a common use of these networks is to solve optimization problems [Hopfield and Tank, 1986].
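
A minimal sketch of this recall scheme, in the spirit of [Hopfield, 1982], is given below. The pattern size, the noise level, and all names are arbitrary illustrative choices (random +1/-1 patterns stand in for the letters); it is not the particular network described above.

import numpy as np

def train(patterns):
    # Sum the outer products of the stored +1/-1 patterns to get the weights.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)            # no self-connections
    return w

def recall(w, state, max_iters=20):
    # Iterate the threshold update until the "image" stops changing.
    for _ in range(max_iters):
        new_state = np.sign(w @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):
            return new_state
        state = new_state
    return state

rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((4, 100)))    # four 100-"pixel" patterns
w = train(patterns)

fuzzy = patterns[0].copy()
flip = rng.choice(100, size=15, replace=False)       # "fuzzy" 15% of the pixels
fuzzy[flip] *= -1

print(np.array_equal(recall(w, fuzzy), patterns[0]))  # usually True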

A quote of Hopfield in Altman's book is important for this discussion:

"Biology doesn't particularly care where the emergent properties come from. It just uses whatever evolution gives it that happens to work."

This is a common attitude towards these systems. In other words, if the artificial neural network somehow represents activity in the brain, the fact that it is there is due to some evolutionary advantage provided by this capacity. It is then necessary to ask ourselves whether or not this seems reasonable. The answer seems to rest in the question of why such a capacity would be of use. The answer seems obvious enough: the ability to learn about patterns and related matters and then to recall these learned entities from imperfect cues would be very advantageous. But would a good analogy to brain processes be a system going to equilibrium? That seems counter to what we perceive going on in our minds, as well as to any model which would be built on known information about brain neural activity.

To examine this further, we simulated a "hard wired" neural network using SPICE. The network used capacitors to monitor the "state" of the neurons (their voltage), and a mathematical device involving a transfer function was used to convert that state into either an "on" or an "off" value (plus or minus one). The advantage of such a model is that instead of iterating to reach the "answer", the network is able to change continuously in time. It was easy to demonstrate that the "answer" was arrived at long before the equilibrium state was reached.
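
The SPICE model itself is not reproduced here, but a rough continuous-time analogue (an assumption on our part, not the original circuit) conveys the point: capacitor-like state variables relax continuously, a tanh transfer function plays the role of the on/off conversion, and the sign pattern of the outputs typically matches the stored pattern long before the voltages themselves settle.

import numpy as np

rng = np.random.default_rng(1)
patterns = np.sign(rng.standard_normal((4, 100)))    # stored +1/-1 patterns
w = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(w, 0)

target = patterns[0]
v = 0.1 * target.astype(float)                       # weak initial "voltages"
flip = rng.choice(100, size=15, replace=False)
v[flip] *= -1                                        # corrupt 15% of the cue

dt, tau = 0.01, 1.0
for step in range(2000):
    out = np.tanh(v)                  # transfer function: soft on/off values
    v += dt * (-v + w @ out) / tau    # capacitor-like continuous relaxation
    if np.array_equal(np.sign(out), target):
        print("correct sign pattern at step", step,
              "well before the voltages stop changing")
        break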

This last point seems key to seeing the difference between the computable and non-computable aspects of the real nervous system. If the artificial neural network represents the computable part, the role we have been playing in assigning "meaning" to the arrays of plus and minus ones (or even arriving at them via some mathematical operation involving transfer functions) is the key to this whole idea! It really takes a meta-system to get the job done. In a functioning nervous system, some other aspect of the system must play such a role. This is what makes the nervous system complex, just as it makes the combination of the artificial neural network and us complex; in the nervous system, however, the meta-system is incorporated into a single system. It is important to note at least one problem inherent in this meta-system concept. Dennett [1991] points out that any model of consciousness that requires one part of the brain to act as an observer in a "Cartesian Theater" is still a special form of Cartesian dualism. This seems to be a valid point, and it indicates how much more is needed to incorporate artificial neural networks into a brain model.

Finally, we should mention some other difficulties with artificial neural networks. These are mentioned in a recent critique of the artificial neural network and genetic algorithm as metaphors for networks that think and systems that evolve [Dewdney, 1993]. The point is made very strongly that for every example in which the performance of these computational methods is seemingly miraculous, there are many other examples where other methods are far better and still more examples where these are abject failures. Furthermore, there seems to be nothing that these techniques can accomplish which can not be done by standard computation techniques. So we must conclude that the idea that the limits of computability might be overcome by such devices is somewhat overstated. The brain, by contrast, seems complex by nature and surely still remains outside our ability to model its function in terms of simple mechanisms, no matter how many we choose to employ [Rosen, 1991, 1993, 1994].

Another popular work on complexity [Waldrop, 1992] states:

"...nobody knows quite how to define it, or even where its boundaries lie...."

The author then goes on to list attributes of a complex system.

This illustrates again the confusion which the word complexity has associated with it. For example, spontaneous self-organization is taken as a key attribute, except for certain objects such as snowflakes. This would seem to exclude the self-organization of lipid membranes as well, a candidate for a key step in the evolutionary process!

In another popular book on complexity [Lewin, 1992] the so-called "bottom up" hypothesis is mentioned in association with the ideas of Chris Langton, a main proponent of the notion of "artificial life". The idea is that the parts are working in the system subject to local rules (such as in cellular automata models on the computer) while some "emergent global structure" feeds back and impinges on these local interactions "from the top down". This seems close to what we have been saying at first glance but, upon closer examination, is once again not really different from a complicated machine. What is missing in this idea is the requirement for different ways of interacting with the system. Only if this requirement is met will we be able to distinguish such a system from a simple mechanism which happens to be very complicated. Once again, the objects Langton has in mind behind the diagram are computer simulations of so-called "artificial life". These are illusions of complexity, let alone life [Rosen, 1991, 1994]. The reason is multifaceted and includes the fact that when we use the modeling relation to look for complexity, we are immediately forced to see more than one entity below the global structure. There is no fragmentation into parts that fits the interactions other than those which are traditional and involve the Newtonian Paradigm. In a complex system, each distinct interaction with the system will involve the discovery of "functional components" or subsystems defined by the specific nature of that particular interaction. Seldom, if ever, do any but the mechanistic aspects of the system map 1:1 onto the fractionable parts.

One way in which we can partially illustrate this idea is to consider a television set. As a whole item, it has some interesting features, namely the fact that sound and an animated picture are produced on its screen. We can interact with the TV in a number of distinct ways. One, which might be motivated by the reductionist techniques of molecular biology, would be to smash it to smithereens with a sledge hammer and then to characterize the resultant parts in various standard ways. It would seem obvious that this kind of reduction is sterile with respect to understanding the origin of the pictures and sounds.

Another way of decomposing the set is along functional lines. In older sets at least, this was almost a self-evident aspect of their construction. With modern integrated circuits, the way this might be realized is far more subtle. The decomposition into functional components is in line with the definition of complexity we are using and has been developed into a theory of "component systems" [Kampis, 1991]. We might find as subsystems an amplifier, a power supply, a tuner, a picture tube, a speaker, etc. Each of these is more importantly characterized by function rather than by structure. In fact, each might be replaced by an object with the same functional characteristics, but one which has a totally different structure.
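
The distinction can be made concrete with a small sketch: two "amplifiers" with entirely different internal structure are interchangeable in the set, because only their input/output behavior matters to the rest of the system. The classes and numbers here are purely illustrative assumptions.

from typing import Protocol

class Amplifier(Protocol):
    def amplify(self, signal: float) -> float: ...

class TubeAmplifier:
    # One structural realization of the amplifier function.
    def amplify(self, signal: float) -> float:
        return 10.0 * signal

class ChipAmplifier:
    # A structurally different realization with the same functional role.
    def amplify(self, signal: float) -> float:
        return 2.0 * (5.0 * signal)

def television(amp: Amplifier, signal: float) -> float:
    # The "wiring diagram" at this level cares only about the functional
    # relation, not about which structure realizes it.
    return amp.amplify(signal)

print(television(TubeAmplifier(), 0.3))   # same functional result...
print(television(ChipAmplifier(), 0.3))   # ...from a different structure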

Is a television set complex? Here we see the meaning of the subjective aspects of this concept more clearly than before. Because we understand the device so well, it certainly seems to be a "simple" mechanism, albeit a complicated one at the same time. On the other hand, if we were naive and mystified by its antics, the above distinction in ways of interacting with it in order to understand it would make it seem complex. It would seem, then, that emergence, self-organization, adaptability and the other attributes of complex systems which we stand in awe of only produce the "desired" response when we have not constructed the device ourselves! This point needs to be remembered.

In a commentary in Scientific American, John Horgan [Horgan, 1995] asks if we can ever achieve a "unified theory" of complex systems. This idea seems a bit contradictory from the perspective we have developed. The conclusion he wishes us to reach seems clear: no such theory is forthcoming. To some extent, Horgan has created a straw man, and he never even mentions authors such as Rosen, Kampis or Casti. His critique is instructive in spite of that. It is replete with quotes from Jack Cowan which speak of "too much journalism", "tremendous hype", "computer hacking", and the difficulty of doing research on complex systems. He cites 31 different definitions of complexity and presents about 10. So what does this all mean? Quite simply, the answer, I submit, lies in the kind of analysis Rosen has provided and on which this review is based. Let us try to apply it to the situation described by Horgan.

Complexity of Formal Systems

Going back to the modeling relation, which, we should recall, represents our way of "doing" science, we are talking about complexity with respect to our interactions with the natural system. On the other hand, most authors seem to find it easier to speak about the complexity of the formal system. This is an entirely different issue. Formal systems may, of course, be put into the position of the natural system in the modeling relation. Rosen does this with number theory to show the application of Gödel's theorem in the failure of the attempt to formalize mathematics. He also shows that some sophisticated modeling relations are obtained when the modeling relation is used between two natural systems which share a common formal system in their individual models. In such a case, we speak of analogical models. We will return to this point when we discuss network thermodynamics later on.

The issue at hand is a different one. The multiplicity of views of complexity is a direct result of the failure to recognize the reality of the modeling relation, whether it is being used implicitly or explicitly, and of the consequent tendency to focus on the formal system as if it were the natural system. This is certainly an ontological error, since it confuses the one thing with the other, but it is also an epistemological error in that it fails to recognize how our knowledge is actually obtained.

A conscious attempt to classify formal systems is indeed possible and useful. For example, Algorithmic Information Theory and its parent discipline, Computational Complexity Theory [Chaitin, 1987], have clarified much of what seems confusing to Horgan. Edmonds [1995] has also developed a classification in his discussion of the evolution of complexity. For further discussion on this topic see Casti [Casti, 1994].
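
As a point of reference (this is the standard textbook definition, not Edmonds' or Casti's particular classification), Algorithmic Information Theory measures the complexity of a finite string x by the length of the shortest program that produces it on a fixed universal machine U:

K_U(x) = \min \{\, |p| : U(p) = x \,\}

A string is maximally complex, or algorithmically random, in this sense when no program appreciably shorter than x itself will do. Note that this is a statement about descriptions within a formal system, which is precisely why such measures classify formal systems rather than natural ones.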

Organization and complexity

Organization has been discussed above in the limited context of self-organization. Self-organizing events are processes which spontaneously carry a system from one state of organization to another. What we refer to as "organization" in a system is not without problems. One of the most difficult things to define about a system is its organization. This is mainly due to a lack of formal treatments of this topic within the Newtonian Paradigm. The nature of organization is further confused by attempts to quantify it by using some measure of information to be associated with organization. We are quick to use equilibrium identifications between negative entropy and information in situations which are totally inappropriate. In fact, the entropy itself is not well defined under certain circumstances in far-from-equilibrium non-linear systems [see the special editorial "Networks in Nature", Nature 234: 380-381, 1971].

Further confusion about organization comes from a failure to distinguish structural organization from functional organization and the related differences in causality associated with them. This will be made clear when the issue of causality is discussed in more depth later on. The example of the television set above serves to summarize the difference between structural and functional organization. Recall that the structure of the system has to do with the way the parts are wired together and the actual nature of those particular parts, as well as their location in the set. The functional organization is also dependent on the wiring diagram, but in a somewhat different way. This is because the television set has a hierarchical organization. At the level of a given component, such as the amplifier, the structural organization might differ from one particular design to the next, so long as the input/output characteristics remain compatible with the other components. At a higher level, components like the amplifier, tuner, power supply, etc. have a much more fixed set of relations to each other, and the wiring diagram is more or less fixed.

One approach to organization in systems is that of Varela and coworkers [Varela, 1979; Maturana and Varela, 1980]. Varela introduced a concept called "autopoiesis" to help define the special organization of living systems. Varela defines the organization of a machine as the relations that define the machine as a unity and which determine the dynamics of the interactions it can undergo. He speaks of autopoietic and allopoietic machines. Autopoietic machines continuously generate and specify their own organization as a system of production of their own components, by having an endless turnover of components. Thus, from Varela's point of view, an autopoietic machine is "homeostatic". Furthermore, the network of processes constituting the autopoietic machine is a unitary system producing the components and generating the network.

Varela's ideas closely parallel the ideas we are reviewing from Rosen's writings, but seem to have been generated without knowledge of the large amount of work which had already been done by Rosen. Unfortunately, the terms used are very different and would appear to be contradictory without a great deal of careful elaboration. For example, the use of the word machine by Varela is a result of his apparent fear of being labeled a vitalist. He chooses this word because it connotes a strict reliance on normal physics and chemistry and nothing more. Rosen, on the other hand, is not so fearful of the "vitalist" label and often speaks of a new vitalism. He relegates the machine to the world of simple systems. It seems clear that what Varela means by an "autopoietic machine" is closely related to Rosen's ideas about organisms and complexity. The fundamental assumption underlying Varela's work is that there is a common organization belonging to all living systems, whatever the nature of their components. This sounds very much like Rosen's description [Rosen, 1985] of the concepts Nicholas Rashevsky introduced when he first suggested the approach called relational biology [Rashevsky, 1954]. The relational approach of Rashevsky, especially as further developed by Rosen, focuses on the relations between functional components rather than on physico-chemical analysis.

The two approaches are discussed in some detail by Kampis [Kampis, 1991] in his discussion of component systems.

"Autopoiesis should be praised for being a first clear formulation of ideas on self-maintenance processes. In this sense the theory of autopoiesis is a forerunner of our own theories.....Let us start to examine the main statement of Maturana and Varela, namely, that autopoiesis involves a distinct mode of existence and a new type of logic. In the self-producing circle there is no first element and the beginning presupposes the end. Hence, according to autopoiesis, the linear cause-effect logic of causal systems is no more applicable. This standpoint is easily understandable if we consider how referentiality and self-referentiality are related. If we understand self-reference as the situation in which a function f is applied to itself (as f(f) = . or f(.) = f etc.), then the basic form of what we may call a referential relation is simply the function f. Such functions are interpretable as expressions of causality if applied to processes, as we know: it follows that self-reference would correspond to self-causality, in the most straightforward interpretation (this is discussed by Hart 1987)...... The self-referential, autonomous units conceived in the autopoiesis theory are ultimately closed to themselves. They are examples for the Kantian Ding an sich. If self-reference and autonomy are complete, there is no window left in the system through which we can peep. More importantly, there is no possibility to define or modify the system from the outside. It is very logical that Maturana and Varela go on to explain evolution as a random drift process....Autopoiesis is in line with these efforts [the neutralist alternative to the selectionist position and the emphasis on internal organization in the mind] and yet, I think its conclusions are wrong. They come from the rigid, closed-to-itself construction. Evolution need not be random if it is not selectional; the mind need not be closed if it is not commanded from outside."

Thus, the ideas of Rosen, as developed by Kampis, are able to capture the thrust of the autopoietic goal without its shortcomings, it seems. As we will see shortly, Rosen's conclusion that organisms are "closed to efficient cause" is distinctly more flexible than being "organizationally" closed, yet it seems to retain all the goals of the autopoietic theory [Rosen, 1991].

Here again we are confronted with the subjective nature of some of these concepts. Rosen [1978] describes "emergence" with the following language:

Emergence involves "...the sudden appearance of apparently new modes of organization and behavior, which seem quite unpredictable from anything which preceded them." He goes on to say "...the appearance of an emergent novelty is recognized by the "failure", or inadequacy, of a particular mode of description, at the point where the novelty appears, and the necessity of passing to a new mode of description, more or less unrelated to the previous one. It is the lack of relation between the two modes of description which has made emergent phenomena so puzzling and yet so refractory to dynamic analysis."

The elaboration which follows this is very enlightening:

"....From this point of view, the establishment of a linkage between a pair of previously unlinked observables is a prototypic example of an emergent phenomenon. The establishment of a linkage makes available an alternative description: a state previously described by observables defined on its initial set of states becomes, after a linkage is established, describable by a new observable which has been linked to the initial one."

At this point, it seems legitimate for the reader to object to the generality of this language and its strong dependence on the definitions underlying terms like linkage, observable, etc. Since these are the subject matter of an entire book, it is only possible to try to make the idea clearer with some specific examples chosen for their familiarity, knowing full well that Rosen might object to this particular set of examples. The choice might be justified by a few observations. First, the concept of linkage used by Rosen strongly suggests a relationship with the concept of coupling in non-equilibrium thermodynamics. It is interesting historically that the introduction of non-equilibrium thermodynamics into biology by Kedem and Katchalsky [1958] was, in almost every respect, an introduction of the linkage concept into biological theory which paralleled Rosen's, but which also ignored it. It is also interesting that it stood in contrast to the then-contemporary understanding of the Newtonian paradigm, much as "complex systems theory" does now. Further, it had the nature of an antidote to the reductionist breakdown of systems, providing a phenomenological framework which could replace the more mechanistic one used up to that point. Its first applications were to membrane transport. Kedem and Katchalsky called attention to Staverman's "reflection coefficient," which was a new observable resulting from the coupling (linkage?) of two previously independent descriptions of a membrane system and its bathing solutions. The independent processes were the diffusion of a solute and the convection of volume due to a hydrostatic/osmotic pressure difference. The independent processes had the form

Y = AX

y = ax.

In the new, phenomenological language of non-equilibrium thermodynamics, they took the new form as a coupled system:

Y = AX + Bx

y = bX + ax.

Notice that in the independent processes there are two coefficients, A and a, which relate the flow responses, Y and y, to the causal driving forces, X and x. In the coupled processes a third coefficient (b = B in linear systems, by Onsager reciprocity) becomes necessary. This third coefficient arises directly out of the coupling of the two processes. Is this an emergent property? Probably not, according to what Rosen has in mind. However, as far as biologists were concerned at that time, it most certainly was! It provided a mathematical model for the well-known osmotic transient, which had to be the sum of two exponential functions. The shape of this curve can only be modeled by a second-order process (i.e. something involving a second-order differential equation) and never by independent first-order processes. The coupling of the two first-order processes fulfilled this requirement.
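
To make the last point concrete, here is a minimal numerical sketch, not the Kedem-Katchalsky treatment itself, with purely illustrative coefficients. In a closed chamber the flows feed back on the driving forces (volume flow changes the pressure difference, solute flow changes the concentration difference), so the pair of forces relaxes according to a linear system whose matrix inherits the coupling coefficients. Without coupling each force decays as a single exponential; with coupling the pair acquires two new relaxation rates, and the observed transient of either variable is a sum of two exponentials, i.e. a second-order response.

```python
import numpy as np

# Illustrative sketch: two relaxing "forces" z = [X, x].  Without coupling
# the relaxation matrix is diagonal and each observable decays as a single
# exponential.  With off-diagonal coupling (the "B"/"b" coefficients) the
# matrix acquires two new eigenvalues, so any observable of the system is
# a sum of two exponentials.  All numbers are made up for illustration.

def relaxation_rates(A, a, B, b):
    """Eigenvalues of the coupled relaxation matrix [[A, B], [b, a]]."""
    M = np.array([[A, B], [b, a]])
    return np.linalg.eigvals(M)

uncoupled = relaxation_rates(A=-1.0, a=-0.2, B=0.0, b=0.0)
coupled = relaxation_rates(A=-1.0, a=-0.2, B=-0.3, b=-0.3)

print("uncoupled rates:", uncoupled)  # -1.0 and -0.2: two independent decays
print("coupled rates:  ", coupled)    # two new rates (-1.1 and -0.1 here):
                                      # the transient of either variable is
                                      # now a sum of two exponentials
```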

Why revisit this piece of history? It did not stop at this point. The next big jump in our thinking came when these ideas were applied to composite membranes. Another surprise was in store.

Organization and emergent behavior in "simple" systems: network thermodynamics.

Network thermodynamics was developed independently by a number of authors [Peusner, 1970, 1986; Oster, Perelson and Katchalsky, 1973] and has at least some of its roots in the thinking of Meixner [see editorial: Networks in Nature, Nature 234: 380-381]. I have used the approach developed by Peusner and written about it extensively elsewhere [Mikulecky, 1993]. For the present discussion, I would like to focus on a central idea in network thermodynamics, namely that the formulation of the equations of motion of a network involves two distinct contributions: the topological organization of the network, and the constitutive equations which define the branch elements in the network. All simple physical systems can be formulated by the methods of network thermodynamics, and these formulations lead directly to a set of state-vector equations typical of any other method of formulating the problem, except that the coefficients now contain an explicit encoding of the organization. Once the formulation is carried to this point, the problem is cast in the form of a problem in dynamic systems theory. Network thermodynamics also introduces a method for dealing with the coupled processes so vital for the self-organization and maintenance of a living system [Mikulecky, 1995]. These are the same coupled processes which Kedem and Katchalsky introduced via non-equilibrium thermodynamics. Finally, network thermodynamics provides an easy, robust means for simulating complicated, dynamic, non-linear models using the simulation package SPICE [Walz, Caplan, Scriven and Mikulecky, 1995; Mikulecky, 1993]. The use of SPICE for chaotic dynamics, among other applications, is well established, and an extensive literature now exists [Chua and Parker, 1989]. The ability to formulate a network in terms of the separate contributions of its physics, in the form of constitutive laws, and its organization, in the form of graph-theoretical structure, suggests a series of experiments whose outcomes reveal a striking set of phenomena, surprising enough to be called "emergent," which clearly demonstrate that even in simple systems the whole is more than the sum of its parts. To demonstrate these ideas, we will build on a description of the steady state behavior of composite membrane systems formulated in the context of non-equilibrium thermodynamics by Kedem and Katchalsky [Kedem and Katchalsky, 1963 a, b & c]. These studies were exploited further in our models of transport through epithelial membranes in a physiological context [Thomas and Mikulecky, 1978; Fidelman and Mikulecky, 1988; Mintz, Thomas and Mikulecky, 1986 a & b].
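
As a toy illustration of the separation between topology and constitutive laws (nothing like the full SPICE machinery, and with made-up numbers), consider three well-stirred compartments joined in a chain by two membranes. The incidence matrix carries the topology, the branch conductances carry the physics, and the state-vector equations are assembled from the two as separate factors.

```python
import numpy as np

# Sketch: equations of motion for a diffusion network are assembled from
# (1) topology, encoded in a compartment-branch incidence matrix D, and
# (2) constitutive laws, here simple branch conductances G.
# Example (hypothetical values): three compartments in a chain, two membranes.

D = np.array([[ 1,  0],    # compartment 0: branch 0 leaves it
              [-1,  1],    # compartment 1: branch 0 enters, branch 1 leaves
              [ 0, -1]])   # compartment 2: branch 1 enters
G = np.diag([0.5, 0.1])        # branch conductances (the "physics")
V = np.array([1.0, 0.2, 1.0])  # compartment volumes

# State-vector equation dc/dt = -(1/V) D G D^T c :
# topology (D) and constitutive laws (G) enter as separate factors.
L = (D @ G @ D.T) / V[:, None]

c = np.array([1.0, 0.0, 0.0])   # initial concentrations
dt = 0.01
for _ in range(5000):           # crude forward-Euler integration to t = 50
    c = c - dt * (L @ c)

# The concentrations relax toward the uniform value
# (total amount)/(total volume) = 1.0/2.2.
print("concentrations:", np.round(c, 4))
```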

Manipulating organization while holding the physical properties constant

We are going to describe a set of experiments which do something rather interesting, namely manipulate organization while holding the physical properties of the parts of the system constant.

Uncharged membranes in series.

To characterize a homogeneous, porous membrane with respect to its osmotic properties, the membrane is clamped in a chamber and solutions of different osmotic pressure, pi, are placed on the two sides of the membrane (the osmotic pressure, pi, is defined as pi = RTc, where R is the gas constant, T is the absolute temperature, and c is the solute concentration). We then wait for a steady state (or, more often, a quasi-steady state, i.e. one in which the driving force relaxes slowly enough to be considered constant over the duration of a series of flow measurements) and then measure the resultant flow. For a single homogeneous membrane, say a or b, the flow characteristic is a straight line through the origin on a plot of volume flow, Jv, versus osmotic pressure difference, deltapi, as shown in figure 5.

Fig. 5: Osmotic flow through two different membranes

Unless membranes a and b are identical, this characteristic line will have a different slope for each membrane. If we take two different membranes, a and b, and arrange them in a chamber such that they are in series with each other, a middle compartment is created and the flow passes through first one membrane, then the middle compartment, then the other. If we repeat the same experiment as used to characterize membranes a and b separately, but treat the series combination as a single, composite membrane, c, its characteristic will appear as in figure 6.

Fig. 6: Osmotic flow through a series combination of membranes

What has happened is that the new, composite membrane is a volume rectifier and has distinctly non-linear characteristics. This is due to the asymmetry in the characteristics of membranes a and b. Flow in one direction causes the concentration in the central compartment to increase; flow in the other direction causes it to decrease. This is called concentration polarization. The change in concentration in the central compartment changes the osmotic pressure difference across each of the two membranes, and this affects the tighter membrane more than the looser one, hence the directional difference. Clearly, for this membrane property, c is not the sum of a + b! This dependence on organization in composite membranes is further illustrated by the case of charged membranes.
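
The rectification can be reproduced in a small steady-state calculation using Kedem-Katchalsky-type equations for each membrane and solving for the conditions in the middle compartment. The sketch below is only illustrative: the coefficients are invented, the outer pressures are set to zero, and the parameter names and solver are my own choices, not part of the original treatment.

```python
import numpy as np
from scipy.optimize import fsolve

# Two leaky membranes in series: "a" (tight) and "b" (loose), each described
# by coefficients Lp (hydraulic conductivity), sigma (reflection coefficient)
# and omega (solute permeability).  All values are illustrative.
RT = 1.0
mem_a = dict(Lp=1.0, sigma=0.95, omega=0.05)
mem_b = dict(Lp=5.0, sigma=0.30, omega=1.00)

def jv(m, dp, dc):                      # volume flow across one membrane
    return m["Lp"] * (dp - m["sigma"] * RT * dc)

def js(m, dp, dc, cbar):                # solute flow across one membrane
    return m["omega"] * RT * dc + (1 - m["sigma"]) * cbar * jv(m, dp, dc)

def composite_flow(c1, c2):
    """Steady-state volume flow through the series composite; the unknowns
    are the middle compartment's concentration cm and pressure pm (outer
    bath pressures taken as zero)."""
    def residual(z):
        cm, pm = z
        # continuity of volume flow and of solute flow through both membranes
        return [jv(mem_a, -pm, c1 - cm) - jv(mem_b, pm, cm - c2),
                js(mem_a, -pm, c1 - cm, 0.5 * (c1 + cm))
                - js(mem_b, pm, cm - c2, 0.5 * (cm + c2))]
    cm, pm = fsolve(residual, [0.5 * (c1 + c2), 0.0])
    return jv(mem_a, -pm, c1 - cm)

# Same magnitude of concentration difference, opposite directions:
print(composite_flow(c1=2.0, c2=1.0))   # flow driven one way
print(composite_flow(c1=1.0, c2=2.0))   # flow driven the other way; the two
                                        # magnitudes differ: the composite rectifies
```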

Charged membranes in series and parallel.

If two oppositely charged ion-exchange membranes, a cation exchanger and an anion exchanger, are considered singly, then in a series array, and then in a parallel array, once again the organization is a potent variable in determining system behavior. This example also allows the alteration of organization without perturbing the physical characteristics of the membranes. The cation-exchange membrane is virtually impermeable to anions, and the anion-exchange membrane is likewise impermeable to cations. They are characterized by placing them in a chamber between two salt solutions at different salt concentrations and then measuring the electrical potential across the membranes and the current and salt flows through them. It is found that the electrical potential's magnitude closely approximates the Nernst potential and that its sign is opposite for the two membranes. The current flow is zero, as is the salt flow.

The series array is rather uninteresting. Current and salt flows remain at zero and there is a potential across the system. The parallel array, on the other hand, is a totally new system. There is no potential across the system and the salt flow is enormous. The reflection coefficient (a measure of the membrane's selectivity between solute and solvent, normally between zero and plus one in a homogeneous membrane, in which solvent is far more permeable than solute) is both negative and greater than one in magnitude, indicating that the composite membrane has a strong preference for the solute over water. In such membranes the salt often flows through so easily that the resulting region of high concentration and density in the receiving chamber can be seen with the naked eye.

How is this to be explained? There is a very straightforward mechanistic explanation. Figure 7 shows what is going on in the system. The cations flow through the cation exchange membrane and the anions through the anion exchange membrane.

Fig. 7: Flow through a charged mosaic membrane

The result is an electroneutral flow of salt. The ion flows are separated in space, resulting in a local "eddy" current which short-circuits the potential that had been the big retarding force for flow in the single membranes and in the series system. There is some resemblance here to the Bénard system, but also a very striking difference. In the Bénard system the organization of the eddy flows is similar, but there is no rigid, heterogeneous structure present to promote it as there is in the composite membrane. In both cases, the flow pattern itself is a kind of "dissipative structure" [Prigogine and Stengers, 1984].

A further elaboration of this theme is possible. If we return to the series array of oppositely charged membranes, we can turn this system into an energy conversion device called a biological fuel cell [Blumenthal,Caplan and Kedem, 1967]. Figure 8 shows the scheme for this.

Fig. 8: An enzymatic fuel cell

The enzyme, E, catalyzes the splitting of a neutral substrate, AB, into a cation, B+, and an anion, A-:

        E
AB ---> A- + B+

The cation flows out one side and the anion the other, a current is generated, and the system can do work.

These examples are but a sample of the richness that organization contributes to system behavior. Many such examples were well understood before complexity became a popular concept, and many of them were not typical examples of the Newtonian reductionist approach even though they are simple mechanisms. Thus, the reductionist approach does not merely limit our approach to complex systems; even in simple mechanisms it can obscure the fact that the whole is more than the sum of its parts.

Computer simulation, "Artificial Life" and the "edge of chaos"

Artificial life is a loose categorization of a number of approaches which mainly use computer simulation in the form of cellular automata, genetic algorithms, or other, related techniques to attempt to realize models of living systems [Levy, 1992; Waldrop, 1992; Lewin, 1992; Horgan, 1995]. By definition, such models are totally computable, and we have already discussed the limitations of this to some extent. For simple mechanisms, computer simulation can be a useful tool and, as long as one is aware of its limitations, a way of seeing more about a system than almost any other means offers. For this reason, it is useful to examine some interesting results obtained from such simulations and some of the theory behind them.

Cellular automata models are extremely popular in the study of how simple local rules can produce simulated self-organization in systems. There are many varieties of these studies, but underlying them are some very general principles. The system consists of a field of cells which may or may not be occupied during any given time step. Rules about the occupancy of neighboring cells govern the future state of a given cell: an occupied cell may either "survive" or "die", or a new occupant may be "born" in an empty cell, depending on the occupancy of the surrounding cells and the particular rules in force.
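
A minimal sketch of such an update rule follows, using Conway's familiar birth/survival rules as one concrete choice; any other rule table could be substituted, and the grid size and initial density are arbitrary.

```python
import numpy as np

# One time step of a two-dimensional cellular automaton with birth/survival
# rules (Conway's "Life" rules chosen as a concrete example).
def step(grid):
    # count occupied neighbors, with periodic (wrap-around) boundaries
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    survive = (grid == 1) & ((n == 2) | (n == 3))   # occupied cell survives
    born = (grid == 0) & (n == 3)                   # empty cell is born
    return (survive | born).astype(int)

rng = np.random.default_rng(0)
grid = (rng.random((20, 20)) < 0.3).astype(int)     # random initial field
for _ in range(50):
    grid = step(grid)
print("occupied cells after 50 steps:", int(grid.sum()))
```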

Stephen Wolfram, the originator and marketer of the package "MATHEMATICA" [Bahder, 1995], has determined that cellular automata rules fall into one of four universality classes:

(i) rules whose evolution leads to a homogeneous, static state;

(ii) rules which produce simple stable or periodic structures;

(iii) rules which produce chaotic, aperiodic patterns;

(iv) rules which produce complex, localized structures, often with very long transients.

Chris Langton developed a parameter, the lambda parameter, defined as follows:

lambda = the fraction of entries in the rule table which send a cell to the "live" (non-quiescent) state, i.e. roughly the probability that any given cell will be alive in the next generation.

He found that as lambda is increased the system progresses through Wolfram's classes as follows:

i & ii ---> iv ---> iii,

which suggests the progression:

order ---> "complexity" ---> chaos

where the "complexity" phase differs from order in its explicit manifestations of self-organization, emergence, etc. In the Bénard system, for example, the order phase is before the transition to the convection cells, while the existence of the cells would represent complexity. Finally, if the system is stressed sufficiently, the result will be chaos (boiling). The appearance of this set of transitions in many of the systems studied led to the notion that life exists "at the edge of chaos". In particular, this concept is extremely suggestive in attempts to understand evolution at the species level as well as the more detailed questions involved in the evolution of single-cell organisms [Kauffman, 1993; Depew and Weber, 1995; Rosen, 1991, 1995]. That is to say, life needs the additional features observed in the region far enough from equilibrium to overcome the "normal" tendencies towards order, but not the totally disruptive transitions seen in the chaotic domain. Weather patterns and other non-living aspects of our environment on earth might also fit this scheme. Schneider and Kay [1994] have carried this idea a step further in suggesting that the role of life on the planet may be both cause and effect, in the sense that the conditions remain conducive to sustaining life because life exists. Their ideas deserve a closer look at this point.
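
A toy version of Langton's experiment can convey the flavor of the lambda parameter, though it is only a caricature: the sketch below uses binary, radius-1, one-dimensional automata with an 8-entry rule table filled at random so that a fraction lambda of the entries map to the "live" state, whereas Langton worked with much larger state and neighborhood spaces. Small lambda tends to give frozen (class i/ii) behavior, large lambda gives sustained activity, and the interesting regime lies in between.

```python
import numpy as np

rng = np.random.default_rng(1)

def long_run_activity(lam, width=200, steps=200):
    """Average fraction of cells changing per step, for a random rule table
    in which a fraction lam of the 8 entries map to the 'live' state."""
    rule = (rng.random(8) < lam).astype(int)
    cells = rng.integers(0, 2, width)
    activity = []
    for _ in range(steps):
        # encode each (left, self, right) neighborhood as a number 0..7
        idx = 4 * np.roll(cells, 1) + 2 * cells + np.roll(cells, -1)
        new = rule[idx]
        activity.append(np.mean(new != cells))
        cells = new
    return np.mean(activity[-50:])      # activity after transients have passed

for lam in (0.1, 0.3, 0.5):
    print(f"lambda = {lam:.1f}   long-run activity = {long_run_activity(lam):.3f}")
```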

Causality and Complexity

The left hand side of the modeling relation involves events in the natural world and their causes. Causality is another concept which has been severely circumscribed by the dominance of the Newtonian Paradigm [Rosen, 1978, 1985, 1991; Casti, 1989]. It is common to find modern scientific writings lauding the ideas of Descartes and others for having "straightened us out" with respect to the ideas of Aristotle and, in particular, Aristotelian causality. Modern science and its positivist philosophical base restrict themselves to questions of how things work (mechanisms, simple systems) and reject any attempt to address questions which ask why things happen. Aristotle thought that the question "why" was essential to understanding causality. For any event, Aristotle recognized four causes [Rosen, 1991; Bohm, 1980], all of which are important for our understanding of why that event happens. For example, in the case of the causes for the existence of a house, the four Aristotelian causes would be:

material cause: the bricks, lumber, and other materials from which the house is built;

efficient cause: the builders and the labor they perform;

formal cause: the architect's plan or blueprint;

final cause: the purpose the house serves, namely shelter for its occupants.

None of these reasons for the existence of the house seems unreasonable as a causal factor. In particular, it is not strange to list the purpose for the house, final cause, as one of the factors. In science, however, this would be a foreign, if not repulsive, notion. The Newtonian paradigm is very special with respect to this way of looking at causality. It is worth looking at this in some detail.

What Newtonian mechanics did epistemologically was to set up laws of motion which caused a very important mathematical structure to exist. In particular, Newton's second law introduces a set of key concepts. In its simplest form it is

f = ma,

where f stands for some external factor, called a force, which alters the motion of some body, a is the second derivative of the body's position with respect to time (the acceleration), and the two are proportional, the proportionality constant, m, being defined as the mass of the body. What is accomplished in this one law is rather striking! Normally, the external force, f, is a function of position, so that the calculus gives the law the status of a special type of differential equation, an equation of motion. Through the process of integration, the equation of motion, which is really a local description of the body's motion, can be turned into a trajectory, an equation expressing the body's position as a function of time in a global sense. Without the second law, a recursive set of higher derivatives of position with respect to time would be required to obtain even a very local description of the motion; that alternative would not be nearly as useful. The second law introduces a technique for solving this problem. It also does something very interesting in terms of the Aristotelian causalities associated with motion. It may seem surprising that Newton's formalism lends itself to being analyzed in terms of this notion of causality at all! In fact, the causes fit in a very systematic way. The Newtonian Paradigm allows the following identifications:

material cause: the initial conditions;

efficient cause: the imposed force, f;

formal cause: the parameters which specify the particular system (here, the mass, m).

The causalities are separable, consistent with the reductionist approach. There is no possibility for final cause to be involved!
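
To make the step from local law to global trajectory concrete, here is a minimal numerical sketch; the force law (a linear spring) and all constants are invented for illustration, and the crude integrator stands in for the formal process of integration.

```python
import numpy as np

# Turning the local law f = m*a into a global trajectory by integration.
# Illustrative system: a mass on a linear spring, f(x) = -k*x.
m, k, dt = 1.0, 4.0, 0.001
x, v = 1.0, 0.0                      # initial conditions (the "material cause")

for step in range(5000):             # integrate to t = 5
    a = -k * x / m                   # the imposed force determines the acceleration
    v = v + a * dt                   # acceleration -> velocity
    x = x + v * dt                   # velocity -> position

# The recursion on the local rule yields position as a function of time --
# the trajectory -- with no appeal to purpose anywhere in the calculation.
print("computed position at t = 5:", round(x, 3))
print("analytic value cos(2*5):   ", round(float(np.cos(10.0)), 3))
```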

Complex systems involve final cause, and the causes are mixed in ways that render them not reducible. In living systems, in particular, the appearance of final cause is often acknowledged when our guard is down, only to be relegated to evolutionary theory when we wish to tidy up our philosophical act. Rosen has written extensively about this, and points out that the property of anticipation is often seen in living and other complex systems [Rosen, 1985]. Some common examples in physiology and biochemistry are the so-called "cephalic phase" of gastric secretion and the forward activation of a key enzyme in the glycolytic pathway, which prepares the enzyme for a rush of substrate at some later time [Mikulecky and Thellier, 1994; Stryer, 1994; Prideaux, 1996].

Schneider and Kay and the reduction of gradients as a manifestation of the second law.

Schneider and Kay [1994] have proposed a simple explanation for the self-organization that living systems far from equilibrium exhibit. They see the system reacting to imposed gradients in accordance with, and because of, the second law of thermodynamics. They propose a restatement of this law for far-from-equilibrium conditions:

"The thermodynamic principle which governs systems is that, as they are moved away from equilibrium they will utilize all avenues available to counter the applied gradients. As the applied gradients increase, so does the system's ability to oppose further movement from equilibrium."

The key phrase in this statement seems to be "all avenues available." It would seem that this phrase cannot be interpreted in the usual mechanistic, reductionist manner. Its implications go very deep and suggest a new outlook on nature, especially with respect to life on this planet. If one accepts the thrust of this principle, then, in a very real sense, life arose as a response to an imposed gradient and functions to deal with that gradient on a grand scale. In this sense, the organism is not the reductionist "unit of life", but a by-product of global self-organization in response to the imposed gradient resulting from the sun's radiation. This is more a change in outlook than a new mechanism for explaining the nature of life. One important aspect of this way of defining the second law of thermodynamics and self-organization is that it admits openly to the action of final cause in systems' behavior [Ulanowicz, 1990].

A key feature in self-organization, recapitulating some of what has been discussed thus far, is the linearity or non-linearity of the system. Linear systems respond to constant, imposed gradients by seeking the state of minimum entropy production. Although this is a form of spontaneous organization under the constraints imposed, it is not what is commonly referred to as "self-organization". The popular notions of self-organization all refer to a class of non-linear systems. In general, the transition from linearity to non-linearity occurs as gradients are applied which push the system away from equilibrium. There are a number of consequences of this which have by now been discussed in detail. Among them are the loss of superposition, the onset of bifurcations, and the eventual onset of chaos. In the Bénard system, chaos would result if the hot plate were turned up even further: the water would boil! This assumes, of course, that chaos is the appropriate mathematical model for boiling water [Rosen, 1993c].

The role of molecular biology

Living systems undergo self-organization which makes the Bénard system look trivial. This is because they possess types of non-linearity which are mechanistic in addition to the dynamic non-linearities which we might consider contributing to their complexity. Biological molecules are capable of intricate and very specific interactions. For this reason, they undergo self-organization which results not in hexagonal convection cells, but in organelles, cells, tissues, and organisms. The cells contain communication systems analogous to but more complex than artificial neural networks. They have internal metabolic networks and machinery for replication. Superimposed on the mechanistic aspects of this super-organization are regulation and control. The living system has so much mechanism that it is easy to lose the forest for the trees. The mechanisms, which are so much the center of the questions asked in molecular biology, give us insight into how the particular types of mechanisms are manifest. They do not and cannot give us any insight into why these events occur. They do not tell us about complexity and self-organization (in Rosen's words, they leave the mechanisms unentailed). The question raised by the "edge of chaos" concept is whether or not the non-linear dynamics supplies the answer to the question "why self-organization?" Rosen's discussion of chaos and randomness gives us a clue. The evidence for the validity of chaotic dynamics as a model for biological processes is scant. The models which are put forward as examples are all relatively small systems which do not interact with anything else! This has a very non-biological flavor to it. This is not to say that apparently cyclic processes in living systems are not chaotic. In fact, chaos may indeed be the source of what seems like randomness, but it is too early to know. Even if it is, how much does that tell us about the organization of living systems? In particular, how much does it tell us about why?

Chaotic dynamics serves an important role in this period of scientific history. It serves as a kind of reality check. Chaotic systems arise, in many cases, directly out of the Newtonian approach, yet they have created a real problem for the Newtonian paradigm. They shake our notion that clearly defined states and deterministic conclusions are the norm, even though they are themselves strictly deterministic. When the foundations are shaken at this level, there may be things which fall. However, our job is not made simpler by this recognition. My guess is that a long, involved pursuit of the role of chaos in the real complexity of living systems is going to be still another disappointment. Real biological complexity is more elusive. It defies computation and simulation. The answer to why the living system has become what it is will probably only come when we see the earth as a system and the organisms as resulting from something much greater than any of them alone. Biology has, therefore, in its purest sense, an intrinsic holism built into it. "Molecular biology" is therefore something of an oxymoron; it really seems more appropriately a branch of physics and chemistry.

Given this perspective, what does molecular biology tell us about the complexity of living things? It seems to describe in mechanistic detail one of the essential properties of living systems, namely that they are closed under efficient causation. This characteristic is an essential one in Rosen's definition of an organism, i.e. his definition of life [Rosen, 1991]. The contrast with autopoiesis has already been discussed. Being closed under efficient cause means simply that the self-organization process is achieved without the need for external agents to effect it. This does not eliminate its dependence on sources of matter and energy to sustain it. The material cause was the milieu of the planet with its solar energy source. The efficient cause was a combination of the organism with a set of physical, cyclic processes (weather, water cycles, cycling of carbon and other elements through the ecosystem, etc.), a self-replicating entity which sustains the system. The formal cause is the ecosystem globally. The final cause was the homeostatic stable condition which keeps the gradient in check so that the organisms may continue to sustain the homeostatic condition. In no case can a cause be uniquely assigned to a "part" of the system.

Cognition and complexity

There is another lesson to be learned from our fascination with artificial neural networks. To the extent that these systems are capable of some form of "cognition", however primitive, they are a model of something which is but a special case of communication or "signalling" networks in general in living systems [Varela, 1979; Mikulecky, 1991]. In this respect, it is interesting that the latest edition of a popular modern textbook on cellular and molecular biology has seen fit to add a section to its chapter on cell signalling making just this point [Alberts, Bray, Lewis, Raff, Roberts, and Watson, 1994]. In particular, one of the authors [Bray, 1990] has postulated that intracellular signalling can be analogically modeled as a parallel distributed process, such as by an artificial neural network. Likewise, we have postulated that, since neuronal transmission is a special case of chemical signalling between cells, other forms of cell signalling might be modeled using modified versions of artificial neural networks. To this end, a model of a tumor cell population was created having a basic structure resembling that of an artificial neural network, modified to express the fact that the chemical transmission events were now far more diffuse than in the special case of neurons [Prideaux, Ware, Clarke, and Mikulecky, 1993; Prideaux, Mikulecky, Clarke, and Ware, 1993]. The results of the computer simulations of this model were in harmony with the general results of tumor growth and response to intervention. Later, this model was significantly modified to reflect properties of a primitive plant memory system [Mikulecky, Thellier, and Desbiez, 1996; Desbiez, Mikulecky, and Thellier, 1994]. Models of the immune system have also followed a similar line of reasoning, and successful computer simulations of these models are also available [Perelson, 1988; Varela, Coutinho and Stewart, 1993; Stewart, 1996].
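
The flavor of such a modified network can be conveyed by a stripped-down, hypothetical sketch: a handful of interacting cell populations, each of whose activity is a saturating function of the weighted chemical signals it receives, which is formally the update rule of a recurrent artificial neural network. The weights, sizes and response function below are arbitrary choices, not the published model.

```python
import numpy as np

# Hypothetical sketch: cell populations exchanging diffusible chemical signals,
# updated with the recurrent-network rule "activity = sigmoid(weighted inputs)".
rng = np.random.default_rng(2)
n_populations = 5
W = rng.normal(scale=0.8, size=(n_populations, n_populations))  # signalling strengths
external = np.zeros(n_populations)            # slot for an external input, e.g. a treatment

activity = rng.random(n_populations)          # initial activities of the populations
for t in range(100):
    signal = W @ activity + external          # each population sums the signals it receives
    activity = 1.0 / (1.0 + np.exp(-signal))  # saturating response to the summed signal

print("activities after 100 updates:", np.round(activity, 3))
```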

SUMMARY AND CONCLUSIONS

The concept of complexity in systems is in danger of being used in so many ways as to become of little use. This is because the epistemological issues surrounding it are deep ones. This review is merely a summary of these deep problems and focuses on the life work of Robert Rosen, who has provided a systematic and operationally well defined classification of systems into those which are simple and those which are complex. There is a subjective component to this definition which grows out of the history of science. The dominance of the Newtonian Paradigm has allowed us to see many, if not most, systems as simple. This is also manifest in the illusion that physics, which is basically the field of study of simple systems or mechanisms, is general relative to biology [Rosen, 1991]. In reality, if we look beyond the usual limits and controlled situations we create through our empirical methods, all systems become complex and the limits of the reductionist approach become visible. In fact, if Rosen's work is followed in more depth than is possible in this review, it is necessary to conclude that our present fascination with complexity is a direct result of our previous failure to approach the natural world with sufficient attention to its complex nature. This is not a criticism, but an acknowledgement of our growth as a scientific community.

One consequence of Rosen's definition of complexity is the recognition that systems which are simulable are simple systems. Complex systems contain something which is not simulable and which is context dependent and semantic in character. In the case of cell signalling systems, this needs careful interpretation. As in the case of artificial neural networks, these cell signalling systems most visibly demonstrate a myriad of mechanistic behaviors which are clearly apparent to us and which have led to many computer simulations. The validity of these simulations as simulations is not at issue in this discussion. What is at issue is the idea that the simulations capture any more than the mere mechanistic shadow of the complex living system. The essence of living systems is their complexity and their causal structure. Living systems are complex in Rosen's sense and necessarily contain non-computable components. This, I will speculate, is where it will be most fruitful for us to look in order to progress. We construct, train, and interpret the behavior of the artificial neural network. The cellular network we seek to understand was neither constructed by us nor is it readily interpretable by us. These networks are rich with properties which I would call "cognitive" (they have a kind of memory, they learn, etc.). How this cellular network becomes what it is, and what its nature is, seems almost totally unknown beyond certain mechanistic simplifications and the large body of detail which results. The forest seems totally obscured by the trees.

Once the historical context for the recent focus on the complexity of systems is realized, the next step is to move systematically beyond this transition into a new era of science, one in which new approaches explicitly seek new ways of interacting with systems. This will be the challenge for the next century. In the words of Don Juan, "we must learn to see" [Casteneda, 1971].

REFERENCES

Alberts, B., D. Bray, J. Lewis, M. Raff, K. Roberts and J. D. Watson (1994) Molecular Biology of the Cell, Garland Pub. Co., N. Y. (Third Ed.)

Allman, W. F. (1989) Apprentices of Wonder: Inside the Neural Network Revolution, Bantam Books, NY.

Blumenthal, R., S. R. Caplan and O. Kedem (1967) The coupling of an enzymatic reaction to transmembrane flow of electrical current in a synthetic "active transport" system. Biophys. J. 7:735-757.

Bohm, D. (1980) Wholeness and the Implicate Order, Routledge, Kegan and Paul, London.

Briggs, J. and D. Peat(1989) Turbulent Mirror , Harper & Row, NY

Capra, F. (1982) The Turning Point: Science, Society, and the Rising Culture, Bantam Books,

Casteneda, C. (1971) A Separate Reality: Further Conversations with Don Juan, Pocket Books, N. Y.

Casti, J. L. (1989) Newton, Aristotle, and the Modeling of Living Systems, in Newton to Aristotle: Toward a theory of models for living systems, (J. Casti and A. Karlqvist, eds.), Birkhauser, Boston, pp. 11-37.

Casti, J. L. (1994) Complexification, Harper Perennial, NY.

Chaitin, G. (1987) Algorithmic information Theory, Cambridge University Press, Cambridge, Eng.

Chaitin, G. (1992) Information-theoretic incompleteness, World Scientific Press, NY.

Chandrasekhar, S. (1961) Hydrodynamic and hydromagnetic stability, Oxford University Press, London.

Chua, L.O. and T.S. Parker (1989) Practical Numerical Algorithms for Chaotic Systems, Springer-Verlag, N.Y.

Dennet, D. C. (1991) Consciousness Explained, Little, Brown and Co., Boston.

Depew, D. J. and B. W. Weber (1995) Darwinism Evolving, M.I.T. Press, Cambridge, MA.

Dewdney, A. K. (1993) Misled by metaphors: Two tools that don't always work, in The machine as metaphor and tool, (H. Haken, A. Karlqvist, and U. Svedin, eds.), Springer-Verlag, NY, pp 77-86.

Edmonds, B. (1995) What is complexity?- The philosophy of complexity per se with application to some examples in evolution, Evolution of Complexity Workshop, Einstein Meets Magritte, VUB, Brussels, 1995. (Also available electronically at: http://www.fmb.mmu.ac.uk/~bruce/evolcomp).

Fidelman, M. L. and D. C. Mikulecky (1988) Network thermodynamic Analysis and simulation of isotonic Solute-Coupled Flow in Leaky epithelia: An example of the use of network theory to provide the qualitative aspects of a complex system and its verification by simulation, J. theor. Biol. 130:73-93.

Henry, C. (1995) Universal Grammar, in Self-reference in cognitive systems and biological systems, (L. Rocha,ed) CC-AI 12:45-62.

Hopfield, J. J. (1982) Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79: 2554-2558.

Hopfield, J. J. and D. W. Tank (1986) Computing with neural circuits: a model, Science 233: 625-633.

Fischler, M. A. and O. Firschein (1987) Intelligence: The eye, the brain, and the computer, Addison-Wesley Pub. Co., Reading, MA.

Horgan, J. (1995) From Complexity to Perplexity, Scientific American June, 1995: 104-109.

Kampis, G. (1991) Self-modifying systems in biology and cognitive science: A new framework for dynamics, information, and complexity, Pergamon Press, NY.

Kauffman, S. A. (1993) The Origins of Order: Self-Organization and Selection in Evolution, Oxford Univ. Press, N. Y.

Kedem, O. and A. Katchalsky (1958) Thermodynamic analysis of the permeability of biological membranes to non-electrolytes, Bioch. et Biophys. Acta 27:229-247.

Kedem, O. and A. Katchalsky (1963a) Permeability of composite membranes: Part 1.- Electric current flow and flow of solute through membranes, Trans. Faraday Soc. 59: 1918-1930.

Kedem, O. and A. Katchalsky (1963b) Permeability of composite membranes: Part 2.- Parallel elements, Trans. Faraday Soc. 59: 1931-1940.

Kedem, O. and A. Katchalsky (1963c) Permeability of composite membranes: Part 3.- Series array of elements, Trans. Faraday Soc. 59: 1941-1953.

Levins, R. and R. Lewontin (1985) The Dialectical Biologist, Harvard Univ. Press, Cambridge, MA.

Levy, S. (1992 ) Artificial life: the quest for a new creation, Pantheon, NY.

Lewin, R. (1992) Complexity: Life at the Edge of Chaos, Collier Books, N. Y.

Maturana, H. R. and F. J. Varela (1980) Autopoiesis and Cognition: The Realization of the Living, D. Reidel Publishing Co., Dordrecht, Holland.

Mikulecky, D. C. (1990) A Comparison Between the Formal Description of Reaction and Neural Networks: A Network Thermodynamic Approach. in Biomedical Engineering: Opening New Doors, D. C. Mikulecky and A. M. Clarke, eds., New York: New York University Press, pp 67-74.

Mikulecky, D. C. (1993) Applications of Network Thermodynamics to Problems in Biomedical Engineering, New York University Press, NY.

Mintz, E., S. R. Thomas and D. C. Mikulecky (1986a) Exploration of apical sodium transport mechanisms in an epithelial model by network thermodynamic simulation of the effect of mucosal sodium depletion: I. Comparison of three different apical sodium permeability mechanisms, J. theor. Biol. 123: 1-19.

Mintz, E., S. R. Thomas and D. C. Mikulecky (1986b) Exploration of apical sodium transport mechanisms in an epithelial model by network thermodynamic simulation of the effect of mucosal sodium depletion: II. An apical sodium channel and amiloride blocking, J. theor. Biol. 123:21-34.

Mead, C. (1989) Analog VLSI and Neural Systems, Addison-Wesley, NY.

Oster, G. F., A. Perelson, and A. Katchalsky (1973) Network thermodynamics: dynamic modeling of biophysical systems, Quart. Rev. Biophys. 6:1-134.

Peacocke, A.R. (1985) Reductionism in academic disciplines SRHE & NFER-Nelson, Surrey.

Peacocke, A. R. (1983) An Introduction to the Physical Chemistry of Biological Organization, Clarendon Press, Oxford.

Peusner, L. (1970) The principles of network thermodynamics and biophysical applications, Ph.D. Thesis, Harvard University, Cambridge, MA [Reprinted by Entropy Limited, South Great Road, Lincoln, MA 01773, 1987].

Peusner, L. (1986) Studies in Network Thermodynamics, Elsevier, Amsterdam, Holland.

Prideaux, J. A., J. L. Ware, A. M. Clarke, and D. C. Mikulecky (1993) From Neural Networks to Cell Signalling: Chemical Communications among Cell Populations, J. Biol. Sys. 1: 131-146.

Prideaux, J. A., D. C. Mikulecky, A. M. Clarke, and J. L. Ware (1993) A Modified Neural Network Model of Tumor Cell Interactions and Subpopulation Dynamics, Invasion and Metastasis 13:50-56.

Prideaux, J. A. (1996) Feed-Forward activation in a theoretical first-order biochemical pathway which contains an anticipatory model, (this volume)

Prigogine, I. (1961) Thermodynamics of Irreversible Processes, Wiley, N.Y.

Prigogine, I. and I. Stengers (1984) Order out of Chaos: Man's new dialogue with nature, Bantam, NY.

Rashevsky, N. (1954) Topology and life: In search of general mathematical principles in biology and sociology, Bull. Math. Biophys. 16:317-348.

Rosen, R. (1978) Fundamentals of Measurement, North-Holland, NY.

Rosen, R. (1985) Theoretical Biology and Complexity, Academic Press, London.

Rosen, R. (1985) Anticipatory Systems, Pergamon, London.

Rosen, R. (1991) Life Itself, Columbia Univ. Press, NY.

Rosen, R. (1993a) Bionics Revisited, in The machine as metaphor and tool (H. Haken, A. Karlqvist, and U. Svedin, eds.), Springer-Verlag, NY, pp 87-100.

Rosen, R. (1993b) Drawing the boundary between subject and object: Comments on the mind-brain problem, Theoretical Medicine 14:89-100.

Rosen, R. (1993c) Some random thoughts about chaos and some chaotic thoughts about randomness, J. Biol. Sys. 1: 19-26.

Rosen, R. (1994) On psychomimesis, J. theor. Biol. 171:87-92.

Rosen, R. (1995) Cooperation and Chimera, in Cooperation & Conflict in General Evolutionary Processes, (J. L. Casti and A. Karlqvist, eds.), Wiley, N. Y.

Schneider, E.D. and J.J. Kay (1994) "Life as a Manifestation of the Second Law of Thermodynamics", in Modeling Complex Biological Systems, (M. Witten and D. C. Mikulecky, eds.), Mathl. Comput. Modelling 19:25-48.

Stewart, J. (1996) Chaos et criticalité auto-organisée dans le système immunitaire, (this volume).

Thomas, S. R. and D. C. Mikulecky (1978) A network thermodynamic model of salt and water flow across the kidney proximal tubule, Am. J. Physiol. 235:F638-F648.

Ulanowicz, R. E. (1990) Aristotelian Causalities in Ecosystem Development, Oikos 57: 42-48.

Varela, F. J. (1979) Principles of Biological Autonomy, North Holland, NY.

Varela, F. J., A. Coutinho, and J. Stewart (1993) What is the Immune Network for? in Thinking About Biology, (W. Stein and F. J. Varela, eds.), Addison-Wesley, Reading, MA, pp 215-230.

Waldrop, M. M. (1992) Complexity: The Emerging Science at the Edge of Order and Chaos, Touchstone, N. Y.

Walz, D., S. R. Caplan, D. R. L. Scriven and D. C. Mikulecky (1995) Methods of Bioelectrochemical Modelling, in Treatise on Bioelectrochemistry, (G. Milazzo, ed.), Birkhauser Verlag, Basel, pp 49-131.
