Category Archives: Emergence

Greedy Reductionism and Statistical Shadows

Biological evolution is, as has often been noted, both fact and theory. It is a fact that all extant organisms came to exist in their current forms through a process of descent with modification from ancestral forms. The overwhelming evidence for this empirical claim was recognized relatively soon after Darwin published On the Origin of Species in 1859, and support for it has grown to the point where it is as well established as any historical claim might be. In this sense, biological evolution is no more a theory than it is a “theory” that Napoleon Bonaparte commanded the French army in the late eighteenth century. Of course, the details of how extant and extinct organisms are related to one another, and of what descended from what and when, are still being worked out, and will probably never be known in their entirety. The same is true of the details of Napoleon’s life and military campaigns. However, this lack of complete knowledge certainly does not alter the fundamental nature of the claims made, either by historians or by evolutionary biologists. (Pigliucci et al. 2006: 1)

On the other hand, evolutionary biology is also a rich patchwork of theories seeking to explain the patterns observed in the changes in populations of organisms over time. These theories range in scope from “natural selection,” which is invoked extensively at many different levels, to finer-grained explanations involving particular mechanisms (e.g., reproductive isolation induced by geographic barriers leading to speciation events). (Pigliucci et al. 2006: 1)

(….) There are a number of different ways in which these questions have been addressed, and a number of different accounts of these areas of evolutionary biology. These different accounts, we will maintain, are not always compatible, either with one another or with other accepted practices in evolutionary biology. (Pigliucci et al. 2006: 1)

(….) Because we will be making some potentially controversial claims throughout this volume, it is crucial for the reader to understand two basic ideas underlying most of what we say, as well as exactly what we think are some implications of our views for the general theory of evolutionary quantitative genetics, which we discuss repeatedly in critical fashion. (Pigliucci et al. 2006: 2)

(….) The first central idea we wish to put forth as part of the framework of this book will be readily familiar to biologists, although some of its consequences may not be. The idea can be expressed by the use of a metaphor proposed by Bill Shipley (2000) …. the shadow theater popular in Southeast Asia. In one form, the wayang golek of Bali and other parts of Indonesia, three-dimensional wooden puppets are used to project two-dimensional shadows on a screen, where the action is presented to the spectator. Shipley’s idea is that quantitative biologists find themselves very much in the position of wayang golek’s spectators: we have access to only the “statistical shadows” projected by a set of underlying causal factors. Unlike the wayang golek’s patrons, however, biologists want to peek around the screen and infer the position of the light source as well as the actual three-dimensional shapes of the puppets. This, of course, is the familiar problem of the relationship between causation and correlation, and, as any undergraduate science major soon learns, correlation is not causation (although a popular joke among scientists is that the two are nevertheless often correlated). (Pigliucci et al. 2006: 2)

The loose relationship between causation and correlation has two consequences that are crucial…. On the one hand, there is the problem that, strictly speaking, it makes no sense to attempt to infer mechanisms directly from patterns…. On the other hand, as Shipley elegantly shows in his book, there is an alternative route that gets (most of) the job done, albeit in a more circuitous and painful way. What one can do is to produce a series of alternative hypotheses about the causal pathways underlying a given set of observations; these hypotheses can then be used to “project” the expected statistical shadows, which can be compared with the observed one. If the projected and actual shadows do not match, one can discard the corresponding causal hypothesis and move on to the next one; if the two shadows do match (within statistical margins of error, of course), then one has identified at least one causal explanation compatible with the observations. As any philosopher or scientist worth her salt knows, of course, this cannot be the end of the process, for more than one causal model may be compatible with the observations, which means that one needs additional observations or refinements of the causal models to be able to discard more wrong explanations and continue to narrow the field. A crucial point here is that the causal models to be tested against the observed statistical shadow can be suggested by the observations themselves, especially if coupled with further knowledge about the system under study (such as details of the ecology, developmental biology, genetics, or past evolutionary history of the populations in question). But the statistical shadows cannot be used as direct supporting evidence for any particular causal model. (Pigliucci et al. 2006: 4)
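Shipley's procedure lends itself to a small numerical sketch (my illustration, not anything from the quoted text; the variables x, y, z, the sample size, and the effect sizes are all invented). Two rival causal hypotheses "project" different statistical shadows, here in the form of partial correlations that each hypothesis predicts should vanish; only the hypothesis whose predicted shadow matches the data survives, and even then it is merely compatible with the data, not proven by them.

```python
# A minimal sketch (not from Shipley's book) of testing rival causal
# hypotheses against their predicted "statistical shadows".  The variable
# names (x, y, z), sample size, and effect sizes are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# The hidden "puppets": the true causal structure is the chain x -> y -> z.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)

def partial_corr(a, b, given):
    """Correlation of a and b after regressing 'given' out of both."""
    g = np.column_stack([np.ones_like(given), given])
    resid_a = a - g @ np.linalg.lstsq(g, a, rcond=None)[0]
    resid_b = b - g @ np.linalg.lstsq(g, b, rcond=None)[0]
    return np.corrcoef(resid_a, resid_b)[0, 1]

# Each causal hypothesis projects a different shadow, i.e. predicts that a
# different partial correlation should be (near) zero:
#   H1 (chain x -> y -> z):        corr(x, z | y) ~ 0
#   H2 (common cause y <- x -> z): corr(y, z | x) ~ 0
print("H1 predicts corr(x,z|y)=0, observed:", round(partial_corr(x, z, y), 3))
print("H2 predicts corr(y,z|x)=0, observed:", round(partial_corr(y, z, x), 3))
# Only H1's predicted shadow matches; H2 can be discarded.  A match does not
# prove H1: other causal models can cast the same shadow.
```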

The second central idea … has been best articulated by John Dupré (1993), and it deals with the proper way to think about reductionism. The term “reductionism” has a complex history, and it evokes strong feelings in both scientists and philosophers (often, though not always, with scientists hailing reductionism as fundamental to the success of science and some philosophers dismissing it as a hopeless epistemic dream). Dupré introduces a useful distinction that acknowledges the power of reductionism in science while at the same time sharply curtailing its scope. His idea is summarized … as two possible scenarios: In one case, reductionism allows one to explain and predict higher-level phenomena (say, development in living organisms) entirely in terms of lower-level processes (say, genetic switches throughout development). In the most extreme case, one can also infer the details of the lower-level processes from the higher-level patterns produced (something we have just seen is highly unlikely in the case of any complex biological phenomenon because of Shipley’s “statistical shadow” effect). This form of “greedy” reductionism … is bound to fail in most (though not all) cases for two reasons. The first is that the relationships between levels of manifestation of reality (e.g., genetic machinery vs. development, or population genetics vs. evolutionary pathways) are many-to-many (again, as pointed out above in our discussion of the shadow theater). The second is the genuine existence of “emergent properties” (i.e., properties of higher-level phenomena that arise from the nonadditive interaction among lower-level processes). It is, for example, currently impossible to predict the physicochemical properties of water from the simple properties of individual atoms of hydrogen and oxygen, or, for that matter, from the properties of H2O molecules and the smattering of necessary impurities. (Pigliucci et al. 2006: 4-5)

Biological Emergence and Pan-selection Hand-Waving

Epigenetic Algorithms

Mechanical metaphors have appealed to many philosophers who sought materialist explanations of life. The definitive work on this subject is T. S. Hall’s Ideas of Life and Matter (1969). Descartes, though a dualist, thought of animal bodies as automata that obeyed mechanical rules. Julien de la Mettrie applied stricter mechanistic principles to humans in L’Homme machine (1748). Clockwork and heat engine models were popular during the Industrial Revolution. Lamarck proposed hydraulic processes as causes of variation. In the late nineteenth century, the embryologists Wilhelm His and Wilhelm Roux theorized about developmental mechanics. However, as biochemical and then molecular biological information expanded, popular machine models were refuted, but it is not surprising that computers should have filled the gap. Algorithms that systematically provide instructions for a progressive sequence of events seem to be suitable analogues for epigenetic procedures. (Reid 2007: 263)

A common error in applying this analogy is the belief that the genetic code, or at least the total complement of an organism’s DNA, contains the program for its own differential expression. In the computer age it is easy to fall into that metaphysical trap. However, in the computer age we should also know that algorithms are the creations of programmers. As Charles Babbage (1838) and Robert Chambers (1844) tried to tell us, the analogy is more relevant to creationism than evolutionism. At the risk of offending the sophisticates who have indulged me so far, I want to state the problems in the most simple terms. To me, that is a major goal of theoretical biology, rather than the conversion of life to mathematics. (Reid 2007: 263)

Robert G.B. Reid (2007, 263) Biological Emergences: Evolution by Natural Experiment. The Vienna Series in Theoretical Biology.

If the emergentist-materialist ontology underlying biology (and, as a matter of fact, all the factual sciences) is correct, the bios constitutes a distinct ontic level, the entities in which are characterized by emergent properties. The properties of biotic systems are then not (ontologically) reducible to the properties of their components, although we may be able to partially explain and predict them from the properties of their components… The belief that one has reduced a system by exhibiting [for instance] its components, which is indeed nothing but physical and chemical, is insufficient: physics and chemistry do not account for the structure, in particular the organization, of biosystems and their emergent properties (Mahner and Bunge 1997: 197) (Robert 2004: 132)

Jason Scott Robert (2004, 132) Embryology, Epigenesis, and Evolution: Taking Development Seriously

The science of biology enters the twenty-first century in turmoil, in a state of conceptual disarray, although at first glance this is far from apparent. When has biology ever been in a more powerful position to study living systems? The sequencing juggernaut has still to reach full steam, and it is constantly spewing forth all manner of powerful new approaches to biological systems, many of which were previously unimaginable: a revolutionized medicine that reaches beyond diagnosis and cure of disease into defining states of the organism in general; revolutionary agricultural technology built on genomic understanding and manipulation of animals and plants; the age-old foundation of biology, taxonomy, made rock solid, greatly extended, and become far more useful in its new genomic setting; a microbial ecology that is finally able to contribute to our understanding of the biosphere; and the list goes on. (Woese 2005: 99)

All this is an expression of the power inherent in the methodology of molecular biology, especially the sequencing of genomes. Methodology is one thing, however, and understanding and direction another. The fact is that the understanding of biology emerging from the mass of data that flows from the genome sequencing machines brings into question the classical concepts of organism, lineage, and evolution at the same time it gainsays the molecular perspective that spawned the enterprise. The fact is that the molecular perspective, which so successfully guided and shaped twentieth-century biology, has effectively run its course (as all paradigms do) and no longer provides a focus, a vision of the biology of the future, with the result that biology is wandering willy-nilly into that future. This is a prescription for revolution–conceptual revolution. One can be confident that the new paradigm will soon emerge to guide biology in this new century…. Molecular biology has ceased to be a genuine paradigm, and it is now only a body of (very powerful) technique…. The time has come to shift biology’s focus from trying to understand organisms solely by dissecting them into their parts to trying to understand the fundamental nature of biological organization, of biological form. (Woese 2005: 99-100)

Conceptualizing Cells

We should all take seriously an assessment of biology made by the physicist David Bohm over 30 years ago (and universally ignored):

“It does seem odd … that just when physics is … moving away from mechanism, biology and psychology are moving closer to it. If the trend continues … scientists will be regarding living and intelligent beings as mechanical, while they suppose that inanimate matter is too complex and subtle to fit into the limited categories of mechanism.” [D. Bohm, “Some Remarks on the Notion of Order,” in C. H. Waddington, ed., Towards a Theoretical Biology: 2 Sketches. (Edinburgh: Edinburgh University Press, 1969), pp. 18-40.]

The organism is not a machine! Machines are not made of parts that continually turn over and renew; the cell is. A machine is stable because its parts are strongly built and function reliably. The cell is stable for an entirely different reason: It is homeostatic. Perturbed, the cell automatically seeks to reconstitute its inherent pattern. Homeostasis and homeorhesis are basic to all living things, but not machines.

If not a machine, then what is the cell?

Carl R. Woese (2005, 100) on Evolving Biological Organization

(….) When one has worked one’s entire career within the framework of a powerful paradigm, it is almost impossible to look at that paradigm as anything but the proper, if not the only possible, perspective one can have on (in this case) biology. Yet despite its great accomplishments, molecular biology is far from the “perfect paradigm” most biologists take it to be. This child of reductionist materialism has nearly driven the biology out of biology. Molecular biology’s reductionism is fundamentalist, unwavering, and procrustean. It strips the organism from its environment, shears it of its history (evolution), and shreds it into parts. A sense of the whole, of the whole cell, of the whole multicellular organism, of the biosphere, of the emergent quality of biological organization, all have been lost or sidelined. (Woese 2005: 101)

Our thinking is fettered by classical evolutionary notions as well. The deepest and most subtle of these is the concept of variation and selection. How we view the evolution of cellular design or organization is heavily colored by how we view variation and selection. From Darwin’s day onward, evolutionists have debated the nature of the concept, and particularly whether evolutionary change is gradual, saltatory, or of some other nature. However, another aspect of the concept concerns us here more. In the terms I prefer, it is the nature of the phase (or propensity) space in which evolution operates. Looked at one way, variation and selection are all there is to evolution: The evolutionary phase space is wide open, and all manner of things are possible. From this “anything goes” perspective, a given biological form (pattern) has no meaning outside of itself, and the route by which it arises is one out of an enormous number of possible paths, which makes the evolution completely idiosyncratic and, thus, uninteresting (molecular biology holds this position: the molecular biologist sees evolution as merely a series of meaningless historical accidents). (Woese 2005: 101)

The alternative viewpoint is that the evolutionary propensity space is highly constrained, being more like a mountainous terrain than a wide open prairie: Only certain paths are possible, and they lead to particular (a relatively small set of) outcomes. Generic biological form preexists in the same sense that form in the inanimate world does. It is not the case that “anything goes” in the world of biological evolution. In other words, biological form (pattern) is important: It has meaning beyond itself; a deeper, more general significance. Understanding of biology lies, then, in understanding the evolution and nature of biological form (pattern). Explaining biological form by variation and selection hand-waving argumentation is far from sufficient: The motor does not explain where the car goes. (Woese 2005: 101-102)

Evolution by Natural Experiment

This is the age of the evolution of Evolution. All thoughts that the Evolutionist works with, all theories and generalizations, have themselves evolved and are now being evolved. Even were his theory perfected, its first lesson would be that it was itself but a phase of the Evolution of other opinion, no more fixed than a species, no more final than the theory which it displaced.

— Henry Drummond, 1883

Charles Darwin described The Origin of Species as “one long argument” for evolution by natural selection. Subsequently Ernst Mayr applied the expression to the continuing debate over Darwin’s ideas. My explanation of why the debate lingers is that although Darwin was right about the reality of evolution, his causal theory was fundamentally wrong, and its errors have been compounded by neo-Darwinism. In 1985 my book Evolutionary Theory: The Unfinished Synthesis was published. In it I discussed Darwinian problems that have never been solved, and the difficulties suffered historically by holistic approaches to evolutionary theory. The most important of these holistic treatments was “emergent evolution,” which enjoyed a brief moment of popularity about 80 years ago before being eclipsed when natural selection was mathematically formalized by theoretical population geneticists. I saw that the concept of biological emergence could provide a matrix for a reconstructed evolutionary theory that might displace selectionism. At that time, I naively thought that there was a momentum in favor of such a revision, and that there were enough open-minded, structuralistic evolutionists to displace the selectionist paradigm within a decade or so. Faint hope! (Robert G. B. Reid. Biological Emergences: Evolution by Natural Experiment (Vienna Series in Theoretical Biology) (Kindle Locations 31-37). Kindle Edition.)

Instead, the conventional “Modern Synthesis” produced extremer forms of selectionism. Although some theoreticians were dealing effectively with parts of the problem, I decided I should try again, from a more general biological perspective. This book is the result. (Reid 2007, Preface)

The main thrust of the book is an exploration of evolutionary innovation, after a critique of selectionism as a mechanistic explanation of evolution. Yet it is impossible to ignore the fact that the major periods of biological history were dominated by dynamic equilibria where selection theory does apply. But emergentism and selectionism cannot be synthesized within an evolutionary theory. A “biological synthesis” is necessary to contain the history of life. I hope that selectionists who feel that I have defiled their discipline might find some comfort in knowing that their calculations and predictions are relevant for most of the 3.5 billion years that living organisms have inhabited the Earth, and that they forgive me for arguing that those calculations and predictions have little to do with evolution. (Reid 2007, Preface)

Evolution is about change, especially complexifying change, not stasis. There are ways in which novel organisms can emerge with properties that are not only self-sufficient but more than enough to ensure their status as the founders of kingdoms, phyla, or orders. And they have enough generative potential to allow them to diversify into a multiplicity of new families, genera, and species. Some of these innovations are all-or-none saltations. Some of them emerge at thresholds in lines of gradual and continuous evolutionary change. Some of them are largely autonomous, coming from within the organism; some are largely imposed by the environment. Their adaptiveness comes with their generation, and their adaptability may guarantee success regardless of circumstances. Thus, the filtering, sorting, or eliminating functions of natural selection are theoretically redundant. (Reid 2007, Preface)

Therefore, evolutionary theory should focus on the natural, experimental generation of evolutionary changes, and should ask how they lead to greater complexity of living organisms. Such progressive innovations are often sudden, and have new properties arising from new internal and external relationships. They are emergent. In this book I place such evolutionary changes in causal arenas that I liken to a three-ring circus. For the sake of bringing order to many causes, I deal with the rings one at a time, while noting that the performances in each ring interact with each other in crucial ways. One ring contains symbioses and other kinds of biological association. In another, physiology and behavior perform. The third ring contains developmental or epigenetic evolution. (Reid 2007, Preface)

After exploring the generative causes of evolution, I devote several chapters to subtheories that might arise from them, and consider how they might be integrated into a thesis of emergent evolution. In the last chapter I propose a biological synthesis. (Reid 2007, Preface)

~ ~ ~

Introduction: Re-Invention of Natural Selection

I regard it as unfortunate that the theory of natural selection was first developed as an explanation for evolutionary change. It is much more important as an explanation for the maintenance of adaptation.
George Williams, 1966

Natural selection cannot explain the origin of new variants and adaptations, only their spread.
John Endler, 1986

We could, if we wished, simply replace the term natural selection with dynamic stabilization….
Brian Goodwin, 1994

Nobody is going to re-invent natural selection….
Nigel Hawkes, 1997

Ever since Charles Darwin published The Origin of Species, it has been widely believed that natural selection is the primary cause of evolution. However, while George Williams and John Endler take the trouble to distinguish between the causes of variation and what natural selection does with them, the latter is what matters to them. In contrast, Brian Goodwin does not regard natural selection as a major evolutionary force, but as a process that results in stable organisms, populations, and ecosystems. He would prefer to understand how evolutionary novelties are generated, a question that frustrated Darwin for all of his career. (Reid 2007)

During the twentieth century, Darwin’s followers eventually learned how chromosomal recombination and gene mutation could provide variation as fuel for natural selection. They also re-invented Darwinian evolutionary theory as neo-Darwinism by formalizing natural selection mathematically. Then they redefined it as differential survival and reproduction, which entrenched it as the universal cause of evolution. Nigel Hawkes’s remark that natural selection cannot be re-invented demonstrates its continued perception as an incorruptible principle. But is it even a minor cause of evolution? (Reid 2007)

Natural selection supposedly builds order from purely random accidents of nature by preserving the fit and discarding the unfit. On the face of it, that makes more than enough sense to justify its importance. Additionally, it avoids any suggestion that a supernatural creative hand has ever been at work. But it need not be the only mechanistic option. And the current concept of natural selection, which already has a history of re-invention, is not immune to further change. Indeed, if its present interpretation as the fundamental mechanism of evolution were successfully challenged, some of the controversies now swirling around the modern paradigm might be resolved. (Reid 2007)

A Paradigm in Crisis?

Just what is the evolutionary paradigm that might be in crisis? It is sometimes called “the Modern Synthesis.” Fundamentally it comes down to a body of knowledge, interpretation, supposition, and extrapolation, integrated with the belief that natural selection is the all-sufficient cause of evolution, if it is assumed that variation is caused by gene mutations. The paradigm has built a strong relationship between ecology and evolution, and has stimulated a huge amount of research into population biology. It has also been the perennial survivor of crises that have ebbed and flowed in the tide of evolutionary ideas. Yet signs of discord are visible in the strong polarization of those who see the whole organism as a necessary component of evolution and those who want to reduce all of biology to the genes. Since neo-Darwinists are also hypersensitive to creationism, they treat any criticism of the current paradigm as a breach of the scientific worldview that will admit the fundamentalist hordes. Consequently, questions about how selection theory can claim to be the all-sufficient explanation of evolution go unanswered or ignored. Could most gene mutations be neutral, essentially invisible to natural selection, their distribution simply adrift? Did evolution follow a pattern of punctuated equilibrium, with sudden changes separated by long periods of stasis? Were all evolutionary innovations gene-determined? Are they all adaptive? Is complexity built by the accumulation of minor, selectively advantageous mutations? Are variations completely random, or can they be directed in some way? Is the generation of novelty not more important than its subsequent selection? (Reid 2007)

Long before Darwin, hunters, farmers, and naturalists were familiar with the process that he came to call “natural selection.” And they had not always associated it with evolution. It is recognized in the Bible, a Special Creation text. Lamarck had thought that evolution resulted from a universal progressive force of nature, not from natural selection. Organisms responded to adaptational needs demanded by their environments. The concept of adaptation led Lamarck’s rival, Georges Cuvier, to argue the opposite. If existing organisms were already perfectly adapted, change would be detrimental, and evolution impossible. Nevertheless, Cuvier knew that biogeography and the fossil record had been radically altered by natural catastrophes. These Darwin treated as minor aberrations during the long history of Earth. He wanted biological and geographical change to be gradual, so that natural selection would have time to make appropriate improvements. The process of re-inventing the events themselves to fit the putative mechanism of change was now under way. (Reid 2007)

Gradualism had already been brought to the fore when geologists realized that what was first interpreted as the effects of the sudden Biblical flood was instead the result of prolonged glaciation. Therefore, Darwin readily fell in with Charles Lyell’s belief that geological change had been uniformly slow. Now, more than a century later, catastrophism has been resurrected by confirmation of the K-T (Cretaceous-Tertiary) bolide impact that ended the Cretaceous and the dinosaurs. Such disasters are also linked to such putative events as the Cambrian “Big Bang of Biology,” when all of the major animal phyla seem to have appeared almost simultaneously. The luck of the draw has returned to evolutionary theory. Being in the right place at the right time during a cataclysm might have been the most important condition of survival and subsequent evolution. (Reid 2007)

Beyond the fringe of Darwinism, there are heretics who believe the neo-Lamarckist tenet that the environment directly shapes the organism in a way that can be passed on from one generation to the next. They argue that changes imposed by the environment, and by the behavior of the organism, are causally prior to natural selection. Nor is neo-Lamarckism the only alternative. Some evolutionary biologists, for example, think that the establishment of unique symbioses between different organisms constituted major evolutionary novelties. Developmental evolutionists are reviewing the concept that evolution was not gradual but saltatory (i.e., advancing in leaps to greater complexity). However, while they emphasize the generation of evolutionary novelty, they accommodate natural selection as the complementary and essential causal mechanism. (Reid 2007)

Notes on isms

Before proceeding further, I want to explain how I arbitrarily, but I hope consistently, use the names that refer to evolutionary movements and their originators. “Darwinian” and “Lamarckian” refer to any idea or interpretation that Darwin and Lamarck originated or strongly adhered to. Darwinism is the paradigm that rose from Darwinian concepts, and Lamarckism is the movement that followed Lamarck. They therefore include ideas that Darwin and Lamarck may not have thought of nor emphasized, but which were inspired by them and consistent with their thinking. Lamarck published La philosophie zoologique in 1809, and Lamarckism lasted for about 80 years until neo-Lamarckism developed. Darwinism occupied the time frame between the publication of The Origin of Species (1859) and the development of neo-Darwinism. The latter came in two waves. The first was led by August Weismann, who was out to purify evolutionary theory of Darwinian vacillation. The second wave, which arose in theoretical population genetics in the 1920s, quantified and redefined the basic tenets of Darwinism. Selectionism is the belief that natural selection is the primary cause of evolution. Its influence permeates the Modern Synthesis, which was originally intended to bring together all aspects of biology that bear upon evolution by natural selection. Niles Eldredge (1995) uses the expression “ultra-Darwinian” to signify an extremist position that makes natural selection an active causal evolutionary force. For grammatical consistency, I prefer “ultra-Darwinist,” which was used in the same sense by Pierre-Paul Grasse in 1973. (Reid 2007)

The Need for a More Comprehensive Theory

I have already hinted that the selectionist paradigm is either insufficient to explain evolution or simply dead wrong. Obviously, I want to find something better. Neo-Darwinists themselves concede that while directional selection can cause adaptational change, most natural selection is not innovative. Instead, it establishes equilibrium by removing extreme forms and preserving the status quo. John Endler, the neo-Darwinist quoted in one of this chapter’s epigraphs, is in good company when he says that novelty has to appear before natural selection can operate on it. But he is silent on how novelty comes into being, and how it affects the internal organization of the organism, questions much closer to the fundamental process of evolution. He is not being evasive; the issue is just irrelevant to the neo-Darwinist thesis. (Reid 2007)

Darwin knew that nature had to produce variations before natural selection could act, so he eventually co-opted Lamarckian mechanisms to make his theory more comprehensive. The problem had been caught by other evolutionists almost as soon as The Origin of Species was first published. Sir Charles Lyell saw it clearly in 1860, before he even became an evolutionist:

If we take the three attributes of the deity of the Hindoo Triad, the Creator, Brahmah, the preserver or sustainer, Vishnu, & the destroyer, Siva, Natural Selection will be a combination of the two last but without the first, or the creative power, we cannot conceive the others having any function.

Consider also the titles of two books: St. George Jackson Mivart’s On the Genesis of Species (1872) and Edward Cope’s Origin of the Fittest (1887). Their play on Darwin’s title emphasized the need for a complementary theory of how new biological phenomena came into being. Soon, William Bateson’s Materials for the Study of Variation Treated with Especial Regard to Discontinuity in the Origin of Species (1894) was to distinguish between the emergent origin of novel variations and the action of natural selection. (Reid 2007)

The present work resumes the perennial quest for explanations of evolutionary genesis and will demonstrate that the stock answer (point mutations and recombinations of the genes, acted upon by natural selection) does not suffice. There are many circumstances under which novelties emerge, and I allocate them to arenas of evolutionary causation that include association (symbiotic, cellular, sexual, and social), functional biology (physiology and behavior), and development and epigenetics. Think of them as three linked circus rings of evolutionary performance, under the “big top” of the environment. Natural selection is the conservative ringmaster who ensures that tried-and-true traditional acts come on time and again. It is the underlying syndrome that imposes dynamic stability: its hypostasis (a word that has the additional and appropriate meaning of “significant constancy”). (Reid 2007)

Selection as Hypostasis

The stasis that natural selection enforces is not unchanging inertia. Rather, it is a state of adaptational and neutral flux that involves alterations in the numerical proportions of particular alleles and types of organism, and even minor extinctions. It does not produce major progressive changes in organismal complexity. Instead, it tends to lead to adaptational specialization. Natural selection may not only thwart progress toward greater complexity, it may result in what Darwin called retrogression, whereby complex and adaptable organisms revert to simplified conditions of specialization. This is common among parasites, but not unique to them. For example, our need for ascorbic acid (vitamin C) results from the regression of a synthesis pathway that was functional in our mammalian ancestors. (Reid 2007)

On the positive side, it may be argued that dynamic stability, at any level of organization, ensures that the foundations from which novelties emerge are solid enough to support them on the rare occasions when they escape its hypostasis. A world devoid of the agents of natural selection might be populated with kludges: gimcrack organisms of the kind that might have been designed by Heath Robinson, Rube Goldberg, or Tim Burton. The enigmatic “bizarre and dream-like” Hallucigenia of the Burgess Shale springs to mind. Even so, if physical and embryonic factors constrain some of the extremest forms before they mature and reproduce, the benefits of natural selection are redundant. Novelty that is first and foremost integrative (i.e., allows the organism to operate better as a whole) has a quality that is resistant to the slings and arrows of selective fortune. (Reid 2007)

Natural selection has to do with relative differences in survival and reproduction and the numerical distribution of existent variations that have already evolved. In this form it requires no serious re-invention. But selectionism goes on to infer that natural selection creates complex novelty by saving adaptive features that can be further built upon. Such qualities need no saving by metaphorical forces. Having the fundamental property of persistence that characterizes life, they can look after themselves. As Ludwig von Bertalanffy remarked in 1967, “favored survival of ‘better’ precursors of life presupposes self-maintaining, complex, open systems which may compete; therefore natural selection cannot account for the origin of those systems.” These qualities were in the nature of the organisms that first emerged from non-living origins, and they are prior to any action of natural selection. Compared to them, ecological competitiveness is a trivial consequence. (Reid 2007)

But to many neo-Darwinists the only “real” evolution is just that: adaptation, the selection of random genetic changes that better fit the present environment. Adaptation is appealingly simple, and many good little examples crop up all the time. However, adaptation only reinforces the prevailing circumstances, and represents but a fragment of the big picture of evolution. Too often, genetically fixed adaptation is confused with adaptability, the self-modification of an individual organism that allows responsiveness to internal and external change. The logical burden of selectionism is compounded by the universally popular metaphor of selection pressure, which under some conditions of existence is supposed to force appropriate organismic responses to pop out spontaneously. How can a metaphor, however heuristic, be a biological cause? As a metaphor, it is at best an inductive guide that must be used with caution. (Reid 2007)

Even although metaphors cannot be causes, their persuasive powers have given natural selection and selection pressure perennial dominance of evolutionary theory. It is hard enough to sideline them, so as to get to generative causes, far less to convince anyone that they are obstructive. Darwin went so far as to make this admission:

In the literal sense of the word, no doubt, natural selection is a false term…. It has been said that I speak of natural selection as an active power or Deity…. Everyone knows what is meant and is implied by such metaphorical expressions; and they are almost necessary for brevity…. With a little familiarity such superficial objections will be forgotten. [Darwin 1872, p. 60.]

Alas, in every subsequent generation of evolutionists, familiarity has bred contempt as well as forgetfulness for such “superficial” objections. (Reid 2007)

Are All Changes Adaptive?

Here is one of my not-so-superficial objections. The persuasiveness of the selection metaphor gets extra clout from its link with the vague but pervasive concept of adaptiveness, which can supposedly be both created and preserved by natural selection. For example, a book review insists that a particular piece of pedagogy be “required reading for non-Darwinist `evolutionists’ who are trying to make sense of the world without the relentless imperatives of natural selection and the adaptive trends it produces.” (Reid 2007)

Adaptiveness, as a quality of life that is “useful,” or competitively advantageous, can always be applied in ways that seem to make sense. Even where adaptiveness seems absent, there is confidence that adequate research will discover it. If equated with integrativeness, adaptiveness is even a necessity of existence. The other day, one of my students said to me: “If it exists, it must have been selected.” This has a pleasing parsimony and finality, just like “If it exists it must have been created.” But it infers that anything that exists must not only be adaptive but also must owe its existence to natural selection. I responded: “It doesn’t follow that selection caused its existence, and it might be truer to say ‘to be selected it must first exist.’” A more complete answer would have addressed the meaning of existence, but I avoid ontology during my physiology course office hours. (Reid 2007)

“Adaptive,” unassuming and uncontroversial as it seems, has become a “power word” that resists analysis while enforcing acceptance. Some selectionists compound their logical burden by defining adaptiveness in terms of allelic fitness. But there are sexually attractive features that expose their possessors to predation, and there are “Trojan genes” that increase reproductive success but reduce physiological adaptability. They may be the fittest in terms of their temporarily dominant numbers, but detrimental in terms of ultimate persistence. (Reid 2007)

It is more logical to start with the qualities of evolutionary changes. They may be detrimental or neutral. They may be generally advantageous (because they confer adaptability), or they may be locally advantageous, depending on ecological circumstances. Natural selection is a consequence of advantageous or “adaptive” qualities. Therefore, examination of the origin and nature of adaptive novelty comes closer to the fundamental evolutionary problem. It is, however, legitimate to add that once the novel adaptive feature comes into being, any variant that is more advantageous than other variants survives differentially, if under competition. Most biologists are Darwinists to that extent, but evolutionary novelty is still missing from the causal equation. Thus, with the reservation that some neutral or redundant qualities often persist in Darwin’s “struggle for existence,” selection theory seems to offer a reasonable way to look at what occurs after novelty has been generated; that is, after evolution has happened. (Reid 2007)

“Oh,” cry my student inquisitors, “but the novelty to which you refer would be meaningless if it were not for correlated and necessary novelties that natural selection had already preserved and maintained.” So again I reiterate first principles: Self-sustaining integrity, an ability to reproduce biologically, and hence evolvability were inherent qualities of the first living organisms, and were prior to differential survival and reproduction. They were not, even by the lights of extreme neo-Darwinists, created by natural selection. And their persistence is fundamental to their nature. To call such features adaptive, for the purpose of implying they were caused by natural selection, is sophistry as well as circumlocution. Sadly, many biologists find it persuasive. Ludwig von Bertalanffy (1952) lamented:

Like a Tibetan prayer wheel, Selection Theory murmurs untiringly: ‘everything is useful,’ but as to what actually happened and which lines evolution has actually followed, selection theory says nothing, for the evolution is the product of ‘chance,’ and therein obeys no ‘law.’ [Bertalanffy 1952, p. 92.]

In The Variation of Animals in Nature (1936), G. C. Robson and O. W. Richards examined all the major known examples of evolution by natural selection, concluding that none were sufficient to account for any significant taxonomic characters. Despite the subsequent political success of ecological genetics, some adherents to the Modern Synthesis are still puzzled by the fact that the defining characteristics of higher taxa seem to be adaptively neutral. For example, adult echinoderms such as sea urchins are radially symmetrical, i.e., they are round-bodied like sea anemones and jellyfish, and lack a head that might point them in a particular direction. This shape would seem to be less adaptive than the bilateral symmetry of most active marine animals, which are elongated and have heads at the front that seem to know where they want to go. Another puzzler: How is the six-leg body plan of insects, which existed before the acquisition of wings, more or less adaptive than that of eight-legged spiders or ten-legged lobsters? The distinguished neo-Darwinists Dobzhansky, Ayala, Stebbins, and Valentine (1977) write:

This view is a radical deviation from the theory that evolutionary changes are governed by natural selection. What is involved here is nothing less than one of the major unresolved problems of evolutionary biology.

The problem exists only for selectionists, and so they happily settle for the first plausible selection pressure that occurs to them. But it could very well be that insect and echinoderm and jellyfish body plans were simply novel complexities that were consistent with organismal integrity; they worked. There is no logical need for an arbiter to judge them adaptive after the fact.

Some innovations result from coincidental interactions between formerly independent systems. Natural selection can take no credit for their origin, their co-existence, or their interaction. And some emergent novelties often involve redundant features that persisted despite the culling hand of nature. Indeed, life depends on redundancy to make evolutionary experiments. Initially selectionism strenuously denies the existence of such events. When faced with the inevitable, it downplays their importance in favor of selective adjustments necessary to make them more viable. Behavior is yet another function that emphasizes the importance of the whole organism, in contrast to whole populations. Consistent changes in behavior alter the impact of the environment on the organism, and affect physiology and development. In other words, the actions of plants or animals determine what are useful adaptations and what are not. This cannot even be conceived from the abstract population gene pools that neo-Darwinists emphasize.

If some evolutionists find it easier to understand the fate of evolutionary novelty through the circumlocution of metaphorical forces, so be it. But when they invent such creative forces to explain the origin of evolutionary change, they do no better than Special Creationists or the proponents of Intelligent Design. Thus, the latter find selectionists an easy target. Neo-Darwinist explanations, being predictive in demographic terms, are certainly “more scientific” than those of the creationists. But if those explanations are irrelevant to the fundamentals of evolution, their scientific predictiveness is of no account.

What we really need to discover is how novelties are generated, how they integrate with what already exists, and how new, more complex whole organisms can be greater than the sums of their parts. Evolutionists who might agree that these are desirable goals are only hindered by cant about the “relentless imperatives of natural selection and the adaptive trends it produces.”

(….) Reductionism

Reduction is a good, logical tool for solving organismal problems by going down to their molecular structure, or to physical properties. But reductionism is a philosophical stance that embraces the belief that physical or chemical explanations are somehow superior to biological ones. Molecular biologists are inclined to reduce the complexity of life to its simplest structures, and there abandon the quest. “Selfish genes” in their “gene pools” are taken to be more important than organisms. To compound the confusion, higher emergent functions such as intelligence and conscious altruism are simplistically defined in such a way as to make them apply to the lower levels. This is reminiscent of William Livant’s (1998) “cure for baldness”: You simply shrink the head to the degree necessary for the remaining hair to cover the entire pate; the brain has to be shrunk as well, of course. This “semantic reductionism” is rife in today’s ultra-Darwinism, a shrunken mindset that regards evolution as no more than the differential reproduction of genes.

Although reducing wholes to their parts can make them more understandable, fascination with the parts makes it too easy to forget that they are only subunits with no functional independence, whether in or out of the organism. It is their interactions with higher levels of organization that are important. Nevertheless, populations of individuals are commonly reduced to gene pools, meaning the totality of genes of the interbreeding organisms. Originating as a mathematical convenience, the gene pool acquired a life of its own, imbued with a higher reality than the organism. Because genes mutated to form different alleles that could be subjected to natural selection, it was the gene pool of the whole population that evolved. This argument was protected by polemic that decried any reference to the whole organism as essentialistic. Then came the notion that genes have a selfish nature. Even later, advances in molecular biology, and propaganda for the human genome project, have allowed the mistaken belief that there must be a gene for everything, and once the genes and their protein products have been identified that’s all we need to know. Instead, the completion of the genome project has clearly informed us that knowing the genes in their entirety tells us little about evolution. Yet biology still inhabits a genocentric universe, and most of its intellectual energy and material resources are sucked in by the black hole of reductionism at its center.

It From Bit

Henry Louis Mencken [1917] once wrote that “[t]here is always an easy solution to every human problem — neat, plausible and wrong.” And neoclassical economics has indeed been wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real world target systems. Assuming, for example, perfect knowledge, instant market clearing and approximating aggregate behaviour with unrealistically heroic assumptions of representative actors, just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability and problems of aggregation and coordination between different individuals and groups.

The punch line of this is that most of the problems that neoclassical economics is wrestling with issue from its attempts at formalistic modeling per se of social phenomena. Reducing microeconomics to refinements of hyper-rational Bayesian deductivist models is not a viable way forward. It will only sentence to irrelevance the most interesting real world economic problems. And as someone has so wisely remarked, murder is unfortunately the only way to reduce biology to chemistry — reducing macroeconomics to Walrasian general equilibrium microeconomics basically means committing the same crime.

Lars Pålsson Syll. On the use and misuse of theories and models in economics.

~ ~ ~

Emergence, some say, is merely a philosophical concept, unfit for scientific consumption. Or, others predict, when subjected to empirical testing it will turn out to be nothing more than shorthand for a whole batch of discrete phenomena involving novelty, which is, if you will, nothing novel. Perhaps science can study emergences, the critics continue, but not emergence as such. (Clayton 2004: 577)

It’s too soon to tell. But certainly there is a place for those, such as the scientist to whom this volume is dedicated, who attempt to look ahead, trying to gauge what are Nature’s broadest patterns and hence where present scientific resources can best be invested. John Archibald Wheeler formulated an important motif of emergence in 1989:

Directly opposite the concept of universe as machine built on law is the vision of a world self-synthesized. On this view, the notes struck out on a piano by the observer-participants of all places and all times, bits though they are, in and by themselves constituted the great wide world of space and time and things.

(Wheeler 1999: 314)

Wheeler summarized his idea — the observer-participant who is both the result of an evolutionary process and, in some sense, the cause of his own emergence — in two ways: in the famous sketch given in Fig. 26.1 and in the maxim “It from bit.” In the attempt to summarize this chapter’s thesis with an equal economy of words I offer the corresponding maxim, “Us from it.” The maxim expresses the bold question that gives rise to the emergentist research program: Does nature, in its matter and its laws, manifest an inbuilt tendency to bring about increasing complexity? Is there an apparently inevitable process of complexification that runs from the periodic table of the elements through the explosive variations of evolutionary history to the unpredictable progress of human cultural history, and perhaps even beyond? (Clayton 2004: 577)

The emergence hypothesis requires that we proceed through at least four stages. The first stage involves rather straightforward physics — say, the emergence of classical phenomena from the quantum world (Zurek 1991, 2002) or the emergence of chemical properties through molecular structure (Earley 1981). In a second stage we move from the obvious cases of emergence in evolutionary history toward what may be the biology of the future: a new, law-based “general biology” (Kauffman 2000) that will uncover the laws of emergence underlying natural history. Stage three of the research program involves the study of “products of the brain” (perception, cognition, awareness), which the program attempts to understand not as unfathomable mysteries but as emergent phenomena that arise as natural products of the complex interactions of brain and central nervous system. Some add a fourth stage to the program, one that is more metaphysical in nature: the suggestion that the ultimate results, or the original causes, of natural emergence transcend or lie beyond Nature as a whole. Those who view stage-four theories with suspicion should note that the present chapter does not appeal to or rely on metaphysical speculations of this sort in making its case. (Clayton 2004: 578-579)

Defining terms and assumptions

The basic concept of emergence is not complicated, even if the empirical details of emergent processes are. We turn to Wheeler, again, for an opening formulation:

When you put enough elementary units together, you get something that is more than the sum of these units. A substance made of a great number of molecules, for instance, has properties such as pressure and temperature that no one molecule possesses. It may be a solid or a liquid or a gas, although no single molecule is solid or liquid or gas. (Wheeler 1998: 341)

Or, in the words of biochemist Arthur Peacocke, emergence takes place when “new forms of matter, and a hierarchy of organization of these forms … appear in the course of time” and “these new forms have new properties, behaviors, and networks of relations” that must be used to describe them (Peacocke 1993: 62).
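Wheeler's pressure-and-temperature example can be put in toy numerical form (a sketch using the standard kinetic-theory relations; the particle mass, particle count, and volume below are arbitrary illustrative values): temperature and pressure are statistics of the whole ensemble of molecular motions, and no single molecule possesses either.

```python
# A toy kinetic-theory sketch (standard textbook relations, with arbitrary
# illustrative numbers): "temperature" and "pressure" exist only as
# statistics over the whole collection of molecules.
import numpy as np

rng = np.random.default_rng(1)
k_B = 1.380649e-23          # Boltzmann constant, J/K
m = 6.63e-26                # mass of an argon-like atom, kg (illustrative)
N = 1_000_000               # number of molecules (illustrative)
V = 1e-3                    # container volume, m^3 (illustrative)

# Sample molecular velocities consistent with a roughly 300 K gas.
sigma = np.sqrt(k_B * 300.0 / m)           # per-component velocity spread
v = rng.normal(0.0, sigma, size=(N, 3))    # one row of velocity per molecule

mean_kinetic_energy = 0.5 * m * (v**2).sum(axis=1).mean()
T = (2.0 / 3.0) * mean_kinetic_energy / k_B      # emergent temperature
P = N * k_B * T / V                              # emergent pressure (ideal gas)

print(f"ensemble temperature ~ {T:.1f} K, pressure ~ {P:.3e} Pa")
# A single molecule has a speed, but no temperature or pressure of its own:
print("speed of molecule 0:", np.linalg.norm(v[0]), "m/s")
```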

Clearly, no one-size-fits-all theory of emergence will be adequate to the wide variety of emergent phenomena in the world. Consider the complex empirical differences that are reflected in these diverse senses of emergence:

• temporal or spatial emergence
• emergence in the progression from simple to complex
• emergence in increasingly complex levels of information processing
• the emergence of new properties (e.g., physical, biological, psychological)
• the emergence of new causal entities (atoms, molecules, cells, central nervous system)
• the emergence of new organizing principles or degrees of inner organization (feedback loops, autocatalysis, “autopoiesis”)
• emergence in the development of “subjectivity” (if one can draw a ladder from perception, through awareness, self-awareness, and self-consciousness, to rational intuition).

Despite the diversity, certain parameters do constrain the scientific study of emergence:

  1. Emergence studies will be scientific only if emergence can be explicated in terms that the relevant sciences can study, check, and incorporate into actual theories.
  2. Explanations concerning such phenomena must thus be given in terms of the structures and functions of stuff in the world. As Christopher Southgate writes, “An emergent property is one describing a higher level of organization of matter, where the description is not epistemologically reducible to lower-level concepts” (Southgate et al. 1999: 158).
  3. It also follows that all forms of dualism are disfavored. For example, only those research programs count as emergentist which refuse to accept an absolute break between neurophysiological properties and mental properties. “Substance dualisms,” such as the Cartesian delineation of reality into “matter” and “mind,” are generally avoided. Instead, research programs in emergence tend to combine sustained research into (in this case) the connections between brain and “mind,” on the one hand, with the expectation that emergent mental phenomena will not be fully explainable in terms of underlying causes on the other.
  4. By definition, emergence transcends any single scientific discipline. At a recent international consultation on emergence theory, each scientist was asked to define emergence, and each offered a definition of the term in his or her own specific field of inquiry: physicists made emergence a product of time-invariant natural laws; biologists presented emergence as a consequence of natural history; neuroscientists spoke primarily of “things that emerge from brains”; and engineers construed emergence in terms of new things that we can build or create. Each of these definitions contributes to, but none can be the sole source for, a genuinely comprehensive theory of emergence. (Clayton 2004: 579-580)

Physics to chemistry

(….) Things emerge in the development of complex physical systems that are understood by observation and cannot be derived from first principles, even given a complete knowledge of the antecedent states. One would not know about conductivity, for example, from a study of individual electrons alone; conductivity is a property that emerges only in complex solid state systems with huge numbers of electrons…. Such examples are convincing: physicists are familiar with a myriad of cases in which physical wholes cannot be predicted based on knowledge of their parts. Intuitions differ, though, on the significance of this unpredictability. (Clayton 2004: 580)

(….) [Such examples are] unpredictable even in principle — if the system-as-a-whole is really more than the sum of its parts.

Simulated Evolutionary Systems

Computer simulations study the processes whereby very simple rules give rise to complex emergent properties. John Conway’s program “Life,” which simulates cellular automata, is already widely known…. Yet even in as simple a system as Conway’s “Life,” predicting the movement of larger structures in terms of the simple parts alone turns out to be extremely complex. Thus in the messy real world of biology, behaviors of complex systems quickly become noncomputable in practice…. As a result — and, it now appears, necessarily — scientists rely on explanations given in terms of the emerging structures and their causal powers. Dreams of a final reduction “downwards” are fundamentally impossible. Recycled lower-level descriptions cannot do justice to the actual emergent complexity of the natural world as it has evolved. (Clayton 2004: 582)
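[To make the point concrete, here is a minimal sketch of my own, not Clayton's, of Conway's “Life” in Python. The rule each cell consults is trivially local, yet the natural description of what the program does, a “glider” drifting diagonally across the grid, applies only to the pattern as a whole.]

```python
import numpy as np

def step(grid):
    """One generation of Conway's Life on a toroidal grid.

    A cell's next state depends only on its own state and the number
    of live cells among its eight neighbours -- nothing else.
    """
    # Sum the eight neighbours by shifting the grid in every direction.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider": five live cells whose collective pattern translates
# one cell diagonally every four generations.
grid = np.zeros((12, 12), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for generation in range(8):
    grid = step(grid)

# The live-cell count is unchanged, but the pattern has moved:
# "glider" is a description of the whole, not of any single cell.
print("live cells:", grid.sum())
print(np.argwhere(grid == 1))
```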

Ant colony behavior

Neural network models of emergent phenomena can model … the emergence of ant colony behavior from simple behavioral “rules” that are genetically programmed into individual ants. (….) Even if the behavior of an ant colony were nothing more than an aggregate of the behaviors of the individual ants, whose behavior follows very simple rules, the result would be remarkable, for the behavior of the ant colony as a whole is extremely complex and highly adaptive to complex changes in its ecosystem. The complex adaptive potentials of the ant colony as a whole are emergent features of the aggregated system. The scientific task is to correctly describe and comprehend such emergent phenomena where the whole is more than the sum of the parts. (Clayton 2004: 586-587)
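[A small illustration, mine rather than Clayton's, of colony-level adaptation arising from individual rules. It is loosely modeled on the classic “double bridge” ant experiments: each simulated ant merely prefers the branch with more pheromone and deposits pheromone in turn, yet the colony as a whole reliably settles on the shorter of two paths, a “decision” no individual ant ever makes.]

```python
import random

# Two paths from nest to food; no ant ever compares their lengths directly.
random.seed(0)
LENGTH_SHORT, LENGTH_LONG = 1.0, 2.0
tau_short, tau_long = 0.0, 0.0          # pheromone on each branch
ROUNDS, ANTS_PER_ROUND, EVAPORATION = 30, 100, 0.9

for r in range(ROUNDS):
    # Each ant's only rule: prefer the branch carrying more pheromone.
    w_short, w_long = (1 + tau_short) ** 2, (1 + tau_long) ** 2
    p_short = w_short / (w_short + w_long)
    n_short = sum(random.random() < p_short for _ in range(ANTS_PER_ROUND))
    n_long = ANTS_PER_ROUND - n_short
    # Shorter trips lay down pheromone faster (deposit ~ 1/length),
    # which is the only way path length enters the system at all.
    tau_short = EVAPORATION * (tau_short + n_short / LENGTH_SHORT)
    tau_long = EVAPORATION * (tau_long + n_long / LENGTH_LONG)

print(f"final round: {n_short} of {ANTS_PER_ROUND} ants took the short path")
```

The adaptive outcome, concentration of traffic on the better path, is a property of the colony-plus-pheromone system, not of any rule an individual follows.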

Biochemistry

So far we have considered models of how nature could build highly complex and adaptive behaviors from relatively simple processing rules. Now we must consider actual cases in which significant order emerges out of (relative) chaos. The big question is how nature obtains order “out of nothing,” that is, when the order is not present in the initial conditions but is produced in the course of a system’s evolution. What are some of the mechanisms that nature in fact uses? We consider four examples. (Clayton 2004: 587)

Fluid convection

The Bénard instability is often cited as an example of a system far from thermodynamic equilibrium, where a stationary state becomes unstable and then manifests spontaneous organization (Peacocke 1994: 153). In the Bénard case, the lower surface of a horizontal layer of liquid is heated. This produces a heat flux from the bottom to the top of the liquid. When the temperature gradient reaches a certain threshold value, conduction no longer suffices to convey the heat upward. At that point convection cells form at right angles to the vertical heat flow. The liquid spontaneously organizes itself into these hexagonal structures or cells. (Clayton 2004: 587-588)

Differential equations describing the heat flow exhibit a bifurcation of the solutions. This bifurcation represents the spontaneous self-organization of large numbers of molecules, formerly in random motion, into convection cells. This represents a particularly clear case of the spontaneous appearance of order in a system. According to the emergence hypothesis, many cases of emergent order in biology are analogous. (Clayton 2004: 588)
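[For readers who want the quantitative version of this threshold, the standard textbook criterion, which Clayton does not spell out, is usually written in terms of the dimensionless Rayleigh number:]

```latex
% Onset of Rayleigh-Benard convection (standard textbook criterion).
% g: gravitational acceleration, \alpha: thermal expansion coefficient,
% \Delta T: temperature difference across the layer, d: layer depth,
% \nu: kinematic viscosity, \kappa: thermal diffusivity.
\mathrm{Ra} \;=\; \frac{g\,\alpha\,\Delta T\, d^{3}}{\nu\,\kappa},
\qquad
\text{convection begins once } \mathrm{Ra} > \mathrm{Ra}_{c} \approx 1708
\quad \text{(rigid top and bottom plates).}
```

Below the critical value, conduction carries the heat and the featureless state is stable; above it, that state loses stability and the convection cells Clayton describes are the new, self-organized solution of the same equations.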

Autocatalysis in biochemical metabolism

Autocatalytic processes play a role in some of the most fundamental examples of emergence in the biosphere. These are relatively simple chemical processes with catalytic steps, yet they well express the thermodynamics of the far-from-equilibrium chemical processes that lie at the base of biology. (….) Such loops play an important role in metabolic functions. (Clayton 2004: 588)
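[The logic of a single autocatalytic step is easy to exhibit numerically. The sketch below is my own toy illustration, not a model of any particular metabolic pathway: in the reaction A + X → 2X the product catalyzes its own formation, so a trace amount of X grows sigmoidally until the substrate A is used up.]

```python
# Simplest autocatalytic reaction: A + X -> 2X with rate constant k.
#   d[A]/dt = -k*[A]*[X],   d[X]/dt = +k*[A]*[X]
k, dt, steps = 1.0, 0.01, 2000
a, x = 1.0, 1e-3            # almost all substrate, a trace of catalyst
trajectory = []

for i in range(steps):
    rate = k * a * x        # the feedback: more X means faster production of X
    a, x = a - rate * dt, x + rate * dt
    trajectory.append(x)

# X grows slowly, then explosively, then saturates (a sigmoid):
for t in (0, 500, 1000, 1500, 1999):
    print(f"t = {t * dt:5.1f}   [X] = {trajectory[t]:.4f}")
```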

Belousov-Zhabotinsky reactions

The role of emergence becomes clearer as one considers more complex examples. Consider the famous Belousov-Zhabotinsky reaction (Prigogine 1984: 152). This reaction consists of the oxidation of an organic acid (malonic acid) by potassium bromate in the presence of a catalyst such as cerium, manganese, or ferroin. From the four inputs into the chemical reactor more than 30 products and intermediaries are produced. The Belousov-Zhabotinsky reaction provides an example of a biochemical process where a high level of disorder settles into a patterned state. (Clayton 2004: 589)
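[The real Belousov-Zhabotinsky mechanism involves dozens of species, but its qualitative behavior is often illustrated with the two-variable “Brusselator” abstraction from Prigogine's Brussels school. The sketch below is that abstraction, not the actual reaction: once the parameter B exceeds 1 + A², the steady state becomes unstable and the concentrations oscillate indefinitely, order appearing where the inputs contain none.]

```python
# The Brusselator: a minimal two-variable abstraction of BZ-type chemical
# oscillators (not the real 30-odd-species mechanism).
#   dx/dt = A + x^2*y - (B + 1)*x
#   dy/dt = B*x - x^2*y
# The steady state (x, y) = (A, B/A) is unstable whenever B > 1 + A^2,
# and the concentrations then settle onto a limit cycle.
A, B = 1.0, 3.0             # B > 1 + A^2, so we are past the bifurcation
x, y = 1.0, 1.0
dt, steps = 0.001, 60_000
xs = []

for i in range(steps):
    dx = A + x * x * y - (B + 1.0) * x
    dy = B * x - x * x * y
    x, y = x + dx * dt, y + dy * dt
    xs.append(x)

late = xs[len(xs) // 2:]    # ignore the initial transient
print(f"x keeps oscillating between {min(late):.2f} and {max(late):.2f}")
```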

(….) Put into philosophical terms, the data suggest that emergence is not merely epistemological but can also be ontological in nature. That is, it’s not just that we can’t predict emergent behaviors in these systems from a complete knowledge of the structures and energies of the parts. Instead, studying the systems suggests that structural features of the system — which are emergent features of the system as such and not properties pertaining to any of its parts — determine the overall state of the system, and hence as a result the behavior of individual particles within the system. (Clayton 2004: 589-590)

The role of emergent features of systems is increasingly evident as one moves from the very simple systems so far considered to the sorts of systems one actually encounters in the biosphere. (….) (Clayton 2004: 589-590)

The biochemistry of cell aggregation and differentiation

We move finally to processes where a random behavior or fluctuation gives rise to organized behavior between cells based on self-organization mechanisms. Consider the process of cell aggregation and differentiation in cellular slime molds (specifically, in Dictyostelium discoideum). The slime mold cycle begins when the environment becomes poor in nutrients and a population of isolated cells joins into a single mass on the order of 10⁴ cells (Prigogine 1984: 156). The aggregate migrates until it finds a higher nutrient source. Differentiation then occurs: a stalk or “foot” forms out of about one-third of the cells and is soon covered with spores. The spores detach and spread, growing when they encounter suitable nutrients and eventually forming a new colony of amoebas. (Clayton 2004: 589-591) [See Levinton 2001: 166.]

Note that this aggregation process is randomly initiated. Autocatalysis begins in a random cell within the colony, which then becomes the attractor center. It begins to produce cyclic adenosine monophosphate (cAMP). As cAMP is released in greater quantities into the extracellular medium, it catalyzes the same reaction in the other cells, amplifying the fluctuation and total output. Cells then move up the gradient to the source cell, and other cells in turn follow their cAMP trail toward the attractor center. (Clayton 2004: 589-591)
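[A deliberately crude sketch, mine rather than Prigogine's or Clayton's, of the aggregation step only: one randomly chosen cell acts as the cAMP source and every other cell simply climbs the concentration gradient. Real diffusion, pulsing, and signal relay by the responding cells are all omitted; the point is just that a population-level structure, a single aggregate, forms from a random initiation plus a purely local rule.]

```python
import random

random.seed(0)
SIZE, N_CELLS, N_STEPS = 50, 200, 80

cells = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(N_CELLS)]
centre = random.choice(cells)          # autocatalysis starts in a random cell

def camp(pos):
    """Stand-in cAMP field: concentration falls off with distance from the source."""
    return -(abs(pos[0] - centre[0]) + abs(pos[1] - centre[1]))

def mean_distance(cells):
    return sum(abs(x - centre[0]) + abs(y - centre[1]) for x, y in cells) / len(cells)

print("mean distance to attractor centre, before:", round(mean_distance(cells), 1))

for _ in range(N_STEPS):
    moved = []
    for (x, y) in cells:
        # Each cell's only rule: step to whichever neighbouring position
        # (or stay put) has the highest cAMP concentration.
        options = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        moved.append(max(options, key=camp))
    cells = moved

print("mean distance to attractor centre, after: ", round(mean_distance(cells), 1))
```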

Biology

Ilya Prigogine did not follow the notion of “order out of chaos” up through the entire ladder of biological evolution. Stuart Kauffman (1995, 2000) and others (Gell-Mann 1994; Goodwin 2001; see also Cowan et al. 1994 and other works in the same series) have however recently traced the role of the same principles in living systems. Biological processes in general are the result of systems that create and maintain order (stasis) through massive energy input from their environment. In principle these types of processes could be the object of what Kauffman envisions as “a new general biology,” based on sets of still-to-be-determined laws of emergent ordering or self-complexification. Like the biosphere itself, these laws (if they indeed exist) are emergent: they depend on the underlying physical and chemical regularities but are not reducible to them. [Note, there is no place for mind as a causal source.] Kauffman (2000: 35) writes: (Clayton 2004: 592)

I wish to say that life is an expected, emergent property of complex chemical reaction networks. Under rather general conditions, as the diversity of molecular species in a reaction system increases, a phase transition is crossed beyond which the formation of collectively autocatalytic sets of molecules suddenly becomes almost inevitable. (Clayton 2004: 593)

Until a science has been developed that formulates and tests physics-like laws at the level of biology [evo-devo is the closest we have so far come], the “new general biology” remains an as-yet-unverified, though intriguing, hypothesis. Nevertheless, recent biology, driven by the genetic revolution on the one side and by the growth of the environmental sciences on the other, has made explosive advances in understanding the role of self-organizing complexity in the biosphere. Four factors in particular play a central role in biological emergence. (Clayton 2004: 593)

The role of scaling

As one moves up the ladder of complexity, macrostructures and macromechanisms emerge. In the formation of new structures, scale matters — or, better put, changes in scale matter. Nature continually evolves new structures and mechanisms as life forms move up the scale from molecules (c. 1 Ångstrom) to neurons (c. 100 micrometers) to the human central nervous system (c. 1 meter). As new structures are developed, new whole-part relations emerge. (Clayton 2004: 593)

John Holland argues that different sciences in the hierarchy of emergent complexity occur at jumps of roughly three orders of magnitude in scale. By the point at which systems have become too complex for predictions to be calculated, one is forced to “move the description ‘up a level’” (Holland 1998: 201). The “microlaws” still constrain outcomes, of course, but additional basic descriptive units must also be added. This pattern of introducing new explanatory levels iterates in a periodic fashion as one moves up the ladder of increasing complexity. To recognize the pattern is to make emergence an explicit feature of biological research. As of now, however, science possesses only a preliminary understanding of the principles underlying this periodicity. (Clayton 2004: 593)

The role of feedback loops

The role of feedback loops, examined above for biochemical processes, becomes increasingly important from the cellular level upwards. (….) (Clayton 2004: 593)

The role of local-global interactions

In complex dynamical systems the interlocked feedback loops can produce an emergent global structure. (….) In these cases, “the global property — [the] emergent behavior — feeds back to influence the behavior of the individuals … that produced it” (Lewin 1999). The global structure may have properties the local particles do not have. (Clayton 2004: 594)

(….) In contrast …, Kauffman insists that an ecosystem is in one sense “merely” a complex web of interactions. Yet consider a typical ecosystem of organisms of the sort that Kauffman (2000: 191) analyzes … Depending on one’s research interests, one can focus attention either on holistic features of such systems or on the interactions of the components within them. Thus Langton’s term “global” draws attention to system-level features and properties, whereas Kauffman’s “merely” emphasizes that no mysterious outside forces need to be introduced (such as, e.g., Rupert Sheldrake’s (1995) “morphic resonance”). Since the two dimensions are complementary, neither alone is scientifically adequate; the explosive complexity manifested in the evolutionary process involves the interplay of both systemic features and component interactions. (Clayton 2004: 595)

The role of nested hierarchies

A final layer of complexity is added in cases where the local-global structure forms a nested hierarchy. Such hierarchies are often represented using nested circles. Nesting is one of the basic forms of combinatorial explosion. Such forms appear extensively in natural biological systems (Wolfram 2002: 357ff.; see his index for dozens of further examples of nesting). Organisms achieve greater structural complexity, and hence increased chances of survival, as they incorporate discrete subsystems. Similarly, ecosystems complex enough to contain a number of discrete subsystems evidence greater plasticity in responding to destabilizing factors. (Clayton 2004: 595-596)

“Strong” versus “weak” emergence

The resulting interactions between parts and wholes mirror yet exceed the features of emergence that we observed in chemical processes. To the extent that the evolution of organisms and ecosystems evidences a “combinatorial explosion” (Morowitz 2002) based on factors such as the four just summarized, the hope of explaining entire living systems in terms of simple laws appears quixotic. Instead, natural systems made of interacting complex systems form a multileveled network of interdependency (cf. Gregersen 2003), and each level contributes distinct elements to the overall explanation. (Clayton 2004: 596-597)

Systems biology, the Siamese twin of genetics, has established many of the features of life’s “complexity pyramid” (Oltvai and Barabási 2002; cf. Barabási 2002). Construing cells as networks of genes and proteins, systems biologists distinguish four distinct levels: (1) the base functional organization (genome, transcriptome, proteome, and metabolome) [see below, Morowitz on the “dogma of molecular biology.”]; (2) the metabolic pathways built up out of these components; (3) larger functional modules responsible for major cell functions; and (4) the large-scale organization that arises from the nesting of the functional modules. Oltvai and Barabási (2002) conclude that “[the] integration of different organizational levels increasingly forces us to view cellular functions as distributed among groups of heterogeneous components that all interact within large networks.” Milo et al. (2002) have recently shown that a common set of “network motifs” occurs in complex networks in fields as diverse as biochemistry, neurobiology, and ecology. As they note, “similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans.” (Clayton 2004: 598)
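[As a minimal illustration of what “network motif” counting involves, the sketch below, run on a made-up toy graph rather than any real regulatory network, enumerates feed-forward loops (X→Y, X→Z, Y→Z), one of the motifs Milo et al. report. A real analysis would also compare the count against many randomized networks with the same degree sequence, which is omitted here.]

```python
from itertools import permutations

# A made-up toy "regulatory" network, purely illustrative.
edges = {
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("B", "D"), ("B", "E"),
    ("E", "D"),
}
nodes = {n for edge in edges for n in edge}

def feed_forward_loops(nodes, edges):
    """All ordered triples (x, y, z) with x->y, x->z and y->z: the feed-forward motif."""
    return [
        (x, y, z)
        for x, y, z in permutations(nodes, 3)
        if (x, y) in edges and (x, z) in edges and (y, z) in edges
    ]

loops = feed_forward_loops(nodes, edges)
print(len(loops), "feed-forward loops found:", sorted(loops))
# A real motif analysis (as in Milo et al. 2002) would go on to compare this
# count with the counts obtained in many degree-preserving randomizations.
```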

Such compounding of complexity — the system-level features of networks, the nodes of which are themselves complex systems — is sometimes said to represent only a quantitative increase in complexity, in which nothing “really new” emerges. This view I have elsewhere labeled “weak emergence.” [This would be a form of philosophical materialism qua philosophical reductionism.] It is the view held by (among others) John Holland (1998) and Stephen Wolfram (2002). But, as Leon Kass (1999: 62) notes in the context of evolutionary biology, “it never occurred to Darwin that certain differences of degree — produced naturally, accumulated gradually (even incrementally), and inherited in an unbroken line of descent — might lead to a difference in kind …” Here Kass nicely formulates the principle involved. As long as nature’s process of compounding complex systems leads to irreducibly complex systems with structures and causal mechanisms of their own, then the natural world evidences not just weak emergence but also a more substantive change that we might label strong emergence. Cases of strong emergence are cases where the “downward causation” emphasized by George Ellis [see p. 607, True complexity and its associated ontology.] … is most in evidence. By contrast, in the relatively rare cases where rules relate the emergent system to its subvening system (in simulated systems, via algorithms; in natural systems, via “bridge laws”), a weak-emergence interpretation suffices. In the majority of cases, however, such rules are not available; in these cases, especially where we have reason to think that such lower-level rules are impossible in principle, the strong emergence interpretation is suggested. (Clayton 2004: 597-598)

Neuroscience, qualia, and consciousness

Consciousness, many feel, is the most important instance of a clearly strong form of emergence. Here if anywhere, it seems, nature has produced something irreducible — no matter how strong the biological dependence of mental qualia (i.e., subjective experiences) on antecedent states of the central nervous system may be. To know everything there is to know about the progression of brain states is not to know what it’s like to be you, to experience your joy, your pain, or your insights. No human researcher can know, as Thomas Nagel (1980) so famously argued, “what it’s like to be a bat.” (Clayton 2004: 598)

Unfortunately consciousness, however intimately familiar we may be with it on a personal level, remains an almost total mystery from a scientific perspective. Indeed, as Jerry Fodor (1992) noted, “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness.” (Clayton 2004: 598)

Given our lack of comprehension of the transition from brain states to consciousness, there is virtually no way to talk about the “C” word without sliding into the domain of philosophy. The slide begins if the emergence of consciousness is qualitatively different from other emergences; in fact, it begins even if consciousness is different from the neural correlates of consciousness. Much suggests that both differences obtain. How far can neuroscience go, even in principle, in explaining consciousness? (Clayton 2004: 598-599)

Science’s most powerful ally, I suggest, is emergence. As we’ve seen, emergence allows one to acknowledge the undeniable differences between mental properties and physical properties, while still insisting on the dependence of the entire mental life on the brain states that produce it. Consciousness, the thing to be explained, is different because it represents a new level of emergence; but brain states — understood both globally (as the state of the brain as a whole) and in terms of their microcomponents — are consciousness’s sine qua non. The emergentist framework allows science to identify the strongest possible analogies with complex systems elsewhere in the biosphere. So, for example, other complex adaptive systems also “learn,” as long as one defines learning as “a combination of exploration of the environment and improvement of performance through adaptive change” (Schuster 1994). Obviously, systems from primitive organisms to primate brains record information from their environment and use it to adjust future responses to that environment. (Clayton 2004: 599)

Even the representation of visual images in the brain, a classically mental phenomenon, can be parsed in this way. Consider Max Velmans’s (2000) schema … Here a cat-in-the-world and the neural representation of the cat are both parts of a natural system; no nonscientific mental “things” like ideas or forms are introduced. In principle, then, representation might be construed as merely a more complicated version of the feedback loop between a plant and its environment … Such is the “natural account of phenomenal consciousness” defended by (e.g.) Le Doux (1978). In a physicalist account of mind, no mental causes are introduced. Without emergence, the story of consciousness must be retold such that thoughts and intentions play no causal role. … If one limits the causal interactions to world and brains, mind must appear as a sort of thought-bubble outside the system. Yet it is counter to our empirical experience in the world, to say the least, to leave no causal role to thoughts and intentions. For example, it certainly seems that your intention to read this … is causally related to the physical fact of your presently holding this book [or browsing this web page, etc.] in your hands. (Clayton 2004: 599-600)

Arguments such as this force one to acknowledge the disanalogies between the emergence of consciousness and previous examples of emergence in complex systems. Consciousness confronts us with a “hard problem” different from those already considered (Chalmers 1995: 201):

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

The distinct features of human cognition, it seems, depend on a quantitative increase in brain complexity vis-à-vis other higher primates. Yet, if Chalmers is right (as I fear he is), this particular quantitative increase gives rise to a qualitative change. Even if the development of conscious awareness occurs gradually over the course of primate evolution, the (present) end of that process confronts the scientist with conscious, symbol-using beings clearly distinct from those who preceded them (Deacon 1997). Understanding consciousness even as an emergent phenomenon in the natural world — that is, naturalistically — requires a theory of “felt qualities,” “subjective intentions,” and “states of experience”; it requires intention-based explanations and, it appears, a new set of sciences: the social or human sciences. By this point emergence has driven us to a level beyond the natural-science-based framework of the present book. New concepts, new testing mechanisms, and perhaps even new standards for knowledge are now required. From the perspective of physics the trail disappears into the clouds; we can follow it no further. (Clayton 2004: 600-601)

The five emergences

In the broader discussion the term “emergence” is used in multiple senses, some of which are incompatible with the scientific project. Clarity is required to avoid equivocation between five distinct levels on which the term may be applied: (Clayton 2004: 601)

• Let emergence-1 refer to occurrences of the term within the context of a specific scientific theory. Here it describes features of a specified physical or biological system of which we have some scientific understanding. Scientists who employ these theories claim that the term (in a theory-specific sense) is currently useful for describing features of the natural world. The preceding pages include various examples of theories in which this term occurs. At the level of emergence-1 alone there is no way to establish whether the term is used analogously across theories, or whether it really means something utterly distinct in each theory in which it appears. (Clayton 2004: 601-602)

• Emergence-2 draws attention to features of the world that may eventually become part of a unified scientific theory. Emergence in this sense expresses postulated connections or laws that may in the future become the basis for one or more branches of science. One thinks, for example, of the role of emergence in Stuart Kauffman’s notion of a new “general biology,” or in certain proposed theories of complexity or complexification. (Clayton 2004: 602)

• Emergence-3 is a meta-scientific term that points out a broad pattern across scientific theories. Used in this sense, the term is not drawn from a particular scientific theory; it is an observation about a significant pattern that connects a range of scientific theories. In the preceding pages I have often employed the term in this fashion. My purpose has been to draw attention to common features of the physical systems under discussion, as in (e.g.) the phenomena of autocatalysis, complexity, and self-organization. Each is scientifically understood, and each shares common features that are significant. Emergence draws attention to these features, whether or not the individual theories actually use the same label for the phenomena they describe. (Clayton 2004: 602)

Emergence-3 thus serves a heuristic function. It assists in the recognition of common features between theories. Recognizing such patterns can help to extend existing theories, to formulate insightful new hypotheses, or to launch new interdisciplinary research programs.[4] (Clayton 2004: 602)

• Emergence-4 expresses a feature in the movement between scientific disciplines, including some of the most controversial transition points. Current scientific work is being done, for example, to understand how chemical structures are formed, to reconstruct the biochemical dynamics underlying the origins of life, and to conceive how complicated neural processes produce cognitive phenomena such as memory, language, rationality, and creativity. Each involves efforts to understand diverse phenomena involving levels of self-organization within the natural world. Emergence-4 attempts to express what might be shared in common by these (and other) transition points. (Clayton 2004: 602)

Here, however, a clear limitation arises. A scientific theory that explains how chemical structures are formed is perhaps unlikely to explain the origins of life. Neither theory will explain how self-organizing neural nets encode memories. Thus emergence-4 stands closer to the philosophy of science than it does to actual scientific theory. Nonetheless, it is the sort of philosophy of science that should be helpful to scientists.[5] (Clayton 2004: 602)

• Emergence-5 is a metaphysical theory. It represents the view that the nature of the natural world is such that it produces continually more complex realities in a process of ongoing creativity. The present chapter does not comment on such metaphysical claims about emergence.[6] (Clayton 2004: 603)

Conclusion

(….) Since emergence is used as an integrative ordering concept across scientific fields … it remains, at least in part, a meta-scientific term. (Clayton 2004: 603)

Does the idea of distinct levels then conflict with “standard reductionist science”? No, one can believe that there are levels in Nature and corresponding levels of explanation while at the same time working to explain any given set of higher-order phenomena in terms of underlying laws and systems. In fact, isn’t the first task of science to whittle away at every apparent “break” in Nature, to make it smaller, to eliminate it if possible? Thus, for example, to study the visual perceptual system scientifically is to attempt to explain it fully in terms of the neural structures and electrochemical processes that produce it. The degree to which downward explanation is possible will be determined by long-term empirical research. At present we can only wager on one outcome or the other based on the evidence before us. (Clayton 2004: 603)

Notes:

[2] Gordon (2000) disputes this claim: “One lesson from ants is that to understand a system like theirs, it is not sufficient to take the system apart. The behavior of each unit is not encapsulated inside that unit but comes from its connections with the rest of the system.” I likewise break strongly with the aggregate model of emergence.

[3] Generally this seems to be a question that makes physicists uncomfortable (“Why, that’s impossible, of course!”), whereas biologists tend to recognize in it one of the core mysteries in the evolution of living systems.

[4] For this reason, emergence-3 stands closer to the philosophy of science than do the previous two senses. Yet it is a kind of philosophy of science that stands rather close to actual science and that seeks to be helpful to it. [The goal of all true “philosophy of science” is to seek critical clarification of ideas, concepts, and theoretical formulations; hence to be “helpful” to science and the quest for human knowledge.] By way of analogy one thinks of the work of philosophers of quantum physics such as Jeremy Butterfield or James Cushing, whose work can be and has actually been helpful to bench physicists. One thinks as well of the analogous work of certain philosophers in astrophysics (John Barrow) or in evolutionary biology (David Hull, Michael Ruse).

[5] This as opposed, for example, to the kind of philosophy of science currently popular in English departments and in journals like Critical Inquiry — the kind of philosophy of science that asserts that science is a text that needs to be deconstructed, or that science and literature are equally subjective, or that the worldview of Native Americans should be taught in science classes.

— Clayton, Philip D. Emergence: us from it. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul W. Davies, and Charles L. Harper, Jr., ed.). Cambridge: Cambridge University Press; 2004; pp. 577-606.

~ ~ ~

* Emergence: us from it. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul W. Davies, and Charles L. Harper, Jr., ed.)