Category Archives: Evolutionary Theory

Natural selection of algorithms?

If we suppose that the action of the human brain, conscious or otherwise, is merely the acting out of some very complicated algorithm, then we must ask how such an extraordinarily effective algorithm actually came about. The standard answer, of course, would be ‘natural selection’. As creatures with brains evolved, those with the more effective algorithms would have a better tendency to survive and therefore, on the whole, had more progeny. These progeny also tended to carry more effective algorithms than their cousins, since they inherited the ingredients of these better algorithms from their parents; so gradually the algorithms improved (not necessarily steadily, since there could have been considerable fits and starts in their evolution) until they reached the remarkable status that we (would apparently) find in the human brain. (Compare Dawkins 1986.) (Penrose 1990: 414)

Even according to my own viewpoint, there would have to be some truth in this picture, since I envisage that much of the brain’s action is indeed algorithmic, and, as the reader will have inferred from the above discussion, I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could make the kind of conscious judgements of the validity of other algorithms that we seem to have. (Penrose 1990: 414)

Imagine an ordinary computer program. How would it have come into being? Clearly not (directly) by natural selection! Some human computer programmer would have conceived of it and would have ascertained that it correctly carries out the actions that it is supposed to. (Actually, most complicated computer programs contain errors, usually minor but often subtle ones that do not come to light except under unusual circumstances. The presence of such errors does not substantially affect my argument.) Sometimes a computer program might itself have been ‘written’ by another, say a ‘master’ computer program, but then the master program itself would have been the product of human ingenuity and insight; or the program itself might well be pieced together from ingredients, some of which were the products of other computer programs. But in all cases the validity and the very conception of the program would have ultimately been the responsibility of (at least) one human consciousness. (Penrose 1990: 414)

One can imagine, of course, that this need not have been the case, and that, given enough time, the computer programs might somehow have evolved spontaneously by some process of natural selection. If one believes that the actions of the computer programmers’ consciousness are themselves simply algorithms, then one must, in effect, believe that algorithms have evolved in just this way. However, what worries me about this is that the decision as to the validity of an algorithm is not itself an algorithmic process! … (The question of whether or not a Turing machine will actually stop is not something that can be decided algorithmically.) In order to decide whether or not an algorithm will actually work, one needs insights, not just another algorithm. (Penrose 414-415)
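Penrose's parenthetical appeal to the halting problem rests on Turing's diagonal argument, which can be illustrated with a short sketch (ours, not Penrose's; the candidate deciders `optimist` and `pessimist` are hypothetical stand-ins). Given any candidate halting decider, one can construct a program that the decider misjudges, so no decider can be correct on all programs:

```python
def make_paradox(halts):
    """Given any candidate halting decider, build a program it misjudges.

    The constructed program does the opposite of whatever the decider
    predicts about it: it loops forever if the decider says 'halts',
    and halts at once if the decider says 'loops'.
    """
    def paradox():
        if halts(paradox):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops", so halt immediately (return None)
    return paradox

# A candidate decider that claims every program halts:
optimist = lambda prog: True
p = make_paradox(optimist)
# optimist claims p halts, but calling p() would loop forever,
# so we deliberately never call it: optimist is wrong about p.

# A candidate decider that claims no program halts:
pessimist = lambda prog: False
q = make_paradox(pessimist)
q()  # returns immediately, so pessimist is wrong about q
```

Whatever candidate decider is supplied, the same construction defeats it, which is why deciding whether an arbitrary algorithm "actually works" cannot itself be fully algorithmic.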

Nevertheless, one still might imagine some kind of natural selection process being effective for producing approximately valid algorithms. Personally, I find this very difficult to believe, however. Any selection process of this kind could act only on the output of the algorithms and not directly on the ideas underlying the actions of the algorithms. This is not simply extremely inefficient; I believe that it would be totally unworkable. In the first place, it is not easy to ascertain what an algorithm actually is, simply by examining its output. (It would be an easy matter to construct two quite different simple Turing machine actions for which the output tapes did not differ until, say, the 2^65536th place, and this difference could never be spotted in the entire history of the universe!) Moreover, the slightest ‘mutation’ of an algorithm (say a slight change in a Turing machine specification, or in its input tape) would tend to render it totally useless, and it is hard to see how actual improvements in algorithms could ever arise in this random way. (Even deliberate improvements are difficult without ‘meanings’ being available. Suppose that some inadequately documented and complicated computer program needs to be altered or corrected, and that the original programmer has departed or perhaps died. Rather than try to disentangle all the various meanings and intentions that the program implicitly depended upon, it is probably easier just to scrap it and start all over again!) (Penrose 1990: 415)
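Penrose's parenthetical Turing-machine example can be made concrete with a toy sketch (ours, not his; the two functions stand in for the output tapes of two Turing machines). The algorithms are genuinely different, yet no feasible inspection of their outputs could ever reveal it:

```python
# Two different algorithms whose output "tapes" agree at every position
# any observer could feasibly check: a toy version of Penrose's example
# of outputs that first differ at the 2^65536th place.

N = 2 ** 65536          # an index far beyond anything physically inspectable

def machine_a(i):
    """Writes 0 at every position."""
    return 0

def machine_b(i):
    """Writes 0 everywhere except at position N."""
    return 1 if i == N else 0

# Comparing any feasible prefix of the two output tapes shows no
# difference, yet the two algorithms are not the same:
assert all(machine_a(i) == machine_b(i) for i in range(100_000))
assert machine_a(N) != machine_b(N)
```

This is the sense in which selection acting only on outputs cannot recover "what an algorithm actually is": the observable behavior underdetermines the underlying program.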

Perhaps some much more ‘robust’ way of specifying algorithms could be devised, which would not be subject to the above criticisms. In a way, this is what I am saying myself. The ‘robust’ specifications are the ideas that underlie the algorithms. But ideas are things that, as far as we know, need conscious minds for their manifestation. We are back with the problem of what consciousness actually is, and what it can actually do that unconscious objects are incapable of — and how on earth natural selection has been clever enough to evolve that most remarkable of qualities. (Penrose 1990: 415)

(….) To my way of thinking, there is still something mysterious about evolution, with its apparent ‘groping’ towards some future purpose. Things at least seem to organize themselves somewhat better than they ‘ought’ to, just on the basis of blind-chance evolution and natural selection…. There seems to be something about the way that the laws of physics work, which allows natural selection to be a much more effective process than it would be with just arbitrary laws. The resulting apparently ‘intelligent groping’ is an interesting issue. (Penrose 1990: 416)

The non-algorithmic nature of mathematical insight

… [A] good part of the reason for believing that consciousness is able to influence truth-judgements in a non-algorithmic way stems from consideration of Gödel’s theorem. If we can see that the role of consciousness is non-algorithmic when forming mathematical judgements, where calculation and rigorous proof constitute such an important factor, then surely we may be persuaded that such a non-algorithmic ingredient could be crucial also for the role of consciousness in more general (non-mathematical) circumstances. (Penrose 1990: 416)

… Gödel’s theorem and its relation to computability … [has] shown that whatever (sufficiently extensive) algorithm a mathematician might use to establish mathematical truth — or, what amounts to the same thing, whatever formal system he might adopt as providing his criterion of truth — there will always be mathematical propositions, such as the explicit Gödel proposition P(K) of the system …, that his algorithm cannot provide an answer for. If the workings of the mathematician’s mind are entirely algorithmic, then the algorithm (or formal system) that he actually uses to form his judgements is not capable of dealing with the proposition P(K) constructed from his personal algorithm. Nevertheless, we can (in principle) see that P(K) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all! (Penrose 1990: 416-417)
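The construction behind this argument can be sketched schematically (this is the standard textbook presentation, not Penrose's own notation; his P(K) plays the role of G_F below). For a consistent formal system F whose proofs can be checked algorithmically, one builds a sentence that asserts its own unprovability:

```latex
% The Goedel sentence for a formal system F (schematic form):
G_F \;\equiv\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)
% If F proved G_F, then G_F would be both provable and (by what it
% asserts) false, so a sound F would prove a falsehood. Hence, if F is
% sound, G_F is unprovable in F -- and therefore true. We recognize its
% truth from outside F, which is exactly the step Penrose claims the
% mathematician's own algorithm cannot take.
```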

(….) The message should be clear. Mathematical truth is not something that we ascertain merely by use of an algorithm. I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must ‘see’ the truth of a mathematical argument to be convinced of its validity. This ‘seeing’ is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only ‘see’ it, but by so doing we reveal the very non-algorithmic nature of the ‘seeing’ process itself. (Penrose 1990: 418)

Evo-Devo and Arrival of the Fittest

The molecular mechanisms that bring about biological form in modern-day embryos … should not be confused with the causes that led to the appearance of these forms in the first place … selection can only work on what already exists. (G. B. Muller and S. A. Newman 2003: 3, Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology)

Cited in Minelli and Fusco 2008: xv. Evolving Pathways: Key Themes in Evolutionary Developmental Biology.

The evolution of organismal form consists of a continuing production and ordering of anatomical parts: the resulting arrangement of parts is nonrandom and lineage specific. The organization of morphological order is thus a central feature of organismal evolution, whose explanation requires a theory of morphological organization. Such a theory will have to account for (1) the generation of initial parts; (2) the fixation of such parts in lineage-specific combinations; (3) the modification of parts; (4) the loss of parts; (5) the reappearance of lost parts [atavism]; and (6) the addition of new parts. Eventually, it will have to specify proximate and ultimate causes for each of these events as well.

Only a few of the processes listed above are addressed by the canonical neo-Darwinian theory, which is chiefly concerned with gene frequencies in populations and with the factors responsible for their variation and fixation. Although, at the phenotypic level, it deals with the modification of existing parts, the theory is intended to explain neither the origin of parts, nor morphological organization, nor innovation. In the neo-Darwinian world the motive factor for morphological change is natural selection, which can account for the modification and loss of parts. But selection has no innovative capacity; it eliminates or maintains what exists. The generative and the ordering aspects of morphological evolution are thus absent from evolutionary theory.

— Muller, Gerd B. (2003) Homology: The Evolution of Morphological Organization. In Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology. (eds., Gerd B. Muller and Stuart A. Newman). The Vienna Series in Theoretical Biology. MIT Press. p. 51.

What is evo-devo? Undoubtedly this is a shorthand for evolutionary developmental biology. There, however, agreement stops. Evo-devo has been regarded as either a new discipline within evolutionary biology or simply a new perspective upon it, a lively interdisciplinary field of studies, or even a necessary complement to the standard (neo-Darwinian) theory of evolution, an obligate step towards an expanded New Synthesis. Whatever the exact nature of evo-devo, its core is a view of the process of evolution in which evolutionary change is the transformation of (developmental) processes rather than (genetic or phenotypic) patterns. Thus our original question could be more profitably rephrased as: What is evo-devo for? (Minelli and Fusco 2008: 1)

(….) Evo-devo aims to provide a mechanistic explanation of how developmental mechanisms have changed during evolution, and how these modifications are reflected in changes in organismal form. Thus, in contrast with studies on natural selection, which aim to explain the ‘survival of the fittest’, the main target of evo-devo is to determine the mechanisms behind the ‘arrival of the fittest’. At the most basic level, the mechanistic question about the arrival of the fittest involves changes in the function of genes controlling developmental programs. Thus it is important to reflect on the nature of the elements and systems underlying inheritable developmental modification using an updated molecular background. (Minelli and Fusco 2008: 2)

Biology and Ideology

Why should we be concerned about biology and ideology? One good reason is that the use of biology for non-biological ends has been the cause of immense human suffering. Biology has been used to justify eugenic programs, enforced sterilization, experimentation on living humans, death camps, and political ambitions based on notions of racial superiority, to name but a few examples. We should also be concerned because biological ideas continue to be used, if not in these specific ways, then in other ways that lie well beyond science. Investigating the past should help us to be more reflective about the science of our own day, hopefully more equipped to discern the ideological abuse of science when it occurs. (Alexander and Numbers 2010)

Not so many decades ago science represented the antithesis of ideology. Indeed, science rested securely on a pedestal, enshrined as the very “norm of truth.” According to the founding father of the history of science, George Sarton (1884-1956), the “main purpose” of science, pursued by disinterested scholars, was “the discovery of truth.” Convinced that science was the only human activity that “is obviously and undoubtedly cumulative and progressive,” he described the history of science as “the story of a protracted struggle, which will never end, against the inertia of superstition and ignorance, against the liars and hypocrites, and the deceivers and the self-deceived, against all the forces of darkness and nonsense.” (Alexander and Numbers 2010)

By the late nineteenth century, practicing scientists, as well as science educators and popularizers, were increasingly attributing the success of science to something called “the scientific method,” a slippery but rhetorically powerful slogan. In the words of the distinguished American astronomer Simon Newcomb, who devoted considerable thought to scientific methodology, “the most marked characteristic of the science of the present day … is its entire rejection of all speculation on propositions which do not admit of being brought to the test of experience.” (Alexander and Numbers 2010)

To such devotees, science was not only true but edifying, totally unlike the “grubby worlds” of business and politics. As Harvard president Charles W. Eliot, an erstwhile chemist, declared at the opening of the American Museum of Natural History in 1878, science produced a “searching, open, humble mind … having no other end than to learn, prizing above all things accuracy, thoroughness, and candor.” Many of its practitioners, asserts the historian David A. Hollinger, saw science “as a religious calling,” “a moral enterprise.” Those who used science for ideological purposes often found themselves denounced as charlatans and pseudo-scientists. (Alexander and Numbers 2010)

Until well into the twentieth century neither scientists themselves nor the scholars who studied science linked science with ideology, a term coined in the late eighteenth century and typically employed pejoratively to designate ideas in the service of particular interests. Among the first to connect ideology and science were Karl Marx and his followers, who identified “ideologies” as ideas that served the social interests of the bourgeoisie. Western historians of science first encountered the linkage between science and ideology at the Second International Congress of the History of Science and Technology, held in London in 1931, when a delegation from the Soviet Union contrasted “the relations between science, technology, and economics” under the capitalist and socialist systems. The Russian physicist Boris Hessen, under intense political pressure at home to prove his Marxist orthodoxy, delivered an iconoclastic paper on “The Socio-Economic Roots of Newton’s Principia,” which described Newtonian science in the service of the ideological (that is, industrial and commercial) needs of the rising bourgeoisie. Despite his bravura effort, he died in a Soviet prison five years later, falsely convicted of terrorism. (Alexander and Numbers 2010)

Such “vulgar Marxism” exerted little influence on the writing of the history of science outside the Soviet Union. It was not until the 1960s that Marxism penetrated Anglo-American historiography, largely through the efforts of Robert M. (Bob) Young, an expatriate Texan working in Cambridge, England. In 1970, at a conference on “The Social Impact of Modern Biology,” he delivered a paper on “Evolutionary Biology and Ideology,” in which he “treated science as ideology.” He acknowledged that the term “ideology” traditionally had derogatory and political connotations that were connected with its popularization by Marx, who concentrated his use of it as a term of abuse for ideas that served as weapons for social interests. But Marxists were soon subjected to their own critique, and this led to Young’s general definition of ideology:

When a particular definition of reality comes to be attached to a concrete power interest, it may be called an ideology…. In its early manifestations the concept of ideology conveyed a sense of more or less conscious distortion bordering on deliberate lies. I do not mean to imply this…. [T]he effort to absorb the ideological point of view into positive science only illustrates the ubiquitousness of ideology in intellectual life…. We need to see that ideology is an inescapable level of discourse.

In contrast to earlier Marxists, who had damned ideology as inimical to good science, Young argued that all facts are theory-laden and that no science is value-free. The late historian Roy Porter described the efforts of Young and his fellow New Marxists as concentrating on “exposing the dazzling conjuring trick whereby science had acquired and legitimated authority precisely while claiming to be value-neutral.” Their goal was to liberate humanity from the thrall of science by demoting it from its privileged intellectual position and relocating it on the same level as other belief systems. Thus, at a time when some observers were declaring “the end of ideology,” a small group of historians of science was rushing to embrace it.

Meanwhile, scholars of a less radical persuasion were also undermining the notion of science as a value-neutral enterprise. In 1958 the philosopher Norwood Russell Hanson, who would soon found the Indiana University program in the history and philosophy of science, published Patterns of Discovery, which described all observations as “theory-laden.” Influenced in part by Hanson, the historian of science Thomas Kuhn published his best-selling The Structure of Scientific Revolutions (1962), by far the most influential book ever written about the history of science and one of the most important books on any topic published in the twentieth century. In his slight monograph, Kuhn challenged Sarton’s cherished notion that science was cumulative, arguing instead that scientific paradigms are incommensurable and therefore that science does not progressively approach a truthful description of nature. Although he insisted that “there is no standard higher than the assent of the relevant community” in determining the boundaries of good science, he shied away from equating science and ideology. In fact, he used the latter term only to dismiss a commitment to the cumulative nature of science as “the ideology of the scientific profession.” Some critics denounced Kuhn’s work for promoting “irrationality and relativism” (and many postmodernists and other denigrators of science drew inspiration from it in their attempts to undermine the privileged status of science), but Kuhn never joined the revolutionaries. He took pride in the description of The Structure of Scientific Revolutions as “a profoundly conservative book.”

(….) The most influential blow to the traditional separation between science and ideology came in the 1970s and 1980s from a group of scholars in the Edinburgh University Science Studies Unit dedicated to creating a thoroughgoing sociology of scientific knowledge. Unlike such pioneers in the sociology of science as Robert K. Merton, who explored the impact of social factors on the growth of scientific institutions but left scientific knowledge untainted by ideologies, the Edinburgh scholars advocated a “strong programme” that treated science like any body of knowledge, vulnerable to psychological, social, and cultural factors. These “constructivists” insisted on treating “true” and “false” scientific claims identically and on exploring the role played by “biasing and distorting factors” in both cases, not just for unsuccessful or pseudo-science. Contrary to the claims of some of their critics, they never asserted that science was “purely social” or “that knowledge depended exclusively on social variables such as interests.” “The strong programme says that the social component is always present and always constitutive of knowledge,” explained David Bloor, one of the founders of the Science Studies Unit. “It does not say that it is the only component, or that it is the component that must necessarily be located as the trigger of any and every change.”

(….) In the early 1980s a young historian of science at Edinburgh, Steven Shapin, collaborated with Simon Schaffer on a landmark book that dramatically illustrated the applicability of the “strong programme” to the history of science. In Leviathan and the Air Pump: Hobbes, Boyle, and the Experimental Life, which the authors described as “an exercise in the sociology of scientific knowledge,” Shapin and Schaffer sought to identify the role played by ideology in establishing trust in the experimental way of producing knowledge about the workings of nature. As good constructivists, they treated the views of Thomas Hobbes (the loser) symmetrically with the opinions of Robert Boyle (the winner). In the end they concluded that “scientific activity, the scientist’s role, and the scientific community have always been dependent: they exist, are valued, and supported insofar as the state or its various agencies see point in them.”

By the 1990s the sometimes acrimonious debate over ideology and science was dying down. Although a few historians of science held out for value-free science, the great majority, it seems, had come to accept a moderate form of constructivism, not so much for ideological reasons but because the evidence supported it. While rejecting the radical claim that science was merely social, they readily granted the propriety, indeed the necessity, of exploring the constitutive role of ideologies in the making of science. Ideologies had morphed from antiscience to the heart of the scientific enterprise.

But the flow has gone both ways, not only “outwards” from biology into the worlds of politics, philosophy, or social structures, but also “inwards,” with whole scientific programs being shaped by ideological concerns…. At other times there is more of an iterative process of “co-evolution,” as occurred in theories about “racial hygiene” …, whereby the ideology shaped the biology, which in turn was used to prop up the ideology.

(….) [I]deology provides an interpretative framework that serves a social purpose, motivated by ethical, religious, or political convictions. The history of biology does certainly evince ideologies as either motivating or as being justified by certain kinds of scientific research and declaration, and most of the contributors investigate episodes in the history of biology in which biological science has become thoroughly entangled with social causes.

(….) [F]irst systematic investigations of the natural world in the early modern period attracted prestige by their support for natural theology and for the moral order. Even Descartes’ idea of animals as machines without souls, invoking thereby a sharp demarcation between human and animal, was employed as part of the argument for design. (….) Biological ideas connecting life and matter played a central role in the materialistic arguments of the French philosophes, which in turn were employed in the subversion of the social order. (….) [T]he eighteenth century also saw something of a reaction against the mechanistic analogies that had proven so influential in the natural philosophy of the preceding century, reformulating an “Enlightenment vitalism” that sought to revive ideas of nature as a dynamic system. This renewed emphasis on the internal driving forces and systematic organization of living things was used to generate a new science of humanity, which in turn was deployed to argue for particular economic and political structures. From the structure of organisms to the structure of societies has often been a short step in the history of biology.

One of the striking insights highlighted by this [history] is the way in which the ideological application of biological concepts is shaped by place as well as time. In some cases the same biological ideas have been used during the same period for quite opposite ideological purposes in different countries. The biology that in France was utilized by the philosophes to subvert the social order was in Britain used as a key resource for natural theology, whereas in Germany it was being used politically as an analogy for the structure of nation states.

Evolution in Four Dimensions

At some point, such heritable regulatory changes will be created in a test animal in the laboratory, generating a trait intentionally drawing on various conserved processes. At that point, doubters [of organic evolution] would have to admit that if humans can generate phenotypic variation in the laboratory in a manner consistent with known evolutionary changes, perhaps it is plausible that facilitated variation has generated change in nature.

Gerhart, John C. and Kirschner, Marc W. The Plausibility of Life: Resolving Darwin’s Dilemma. New Haven: Yale University Press; 2005; p. 237, emphasis added.

Our basic claim is that biological thinking about heredity and evolution is undergoing a revolutionary change. What is emerging is a new synthesis, which challenges the gene-centered version of neo-Darwinism that has dominated biological thought for the last fifty years. The conceptual changes that are taking place are based on knowledge from almost all branches of biology, but our focus in this book will be on heredity. We will be arguing that

  • there is more to heredity than genes;
  • some hereditary variations are nonrandom in origin;
  • some acquired information is inherited;
  • evolutionary change can result from instruction as well as selection.

These statements may sound heretical to anyone who has been taught the usual version of Darwin’s theory of evolution, which is that adaptation occurs through natural selection of chance genetic variations. Nevertheless, they are firmly grounded on new data as well as on new ideas. (Jablonka, Eva. Evolution in Four Dimensions (Life and Mind: Philosophical Issues in Biology and Psychology). The MIT Press, Kindle Edition, p. 1.)

Putting Humpty Dumpty Together Again 

Imagine an entangled bank, clothed with many plants of many kinds, with birds singing in the bushes, with various insects flitting about, with worms crawling through the damp earth, and a square-jawed nineteenth-century naturalist contemplating the scene. What would a modern-day evolutionary biologist have to say about this image—about the plants, the insects, the worms, the singing birds, and the nineteenth-century naturalist deep in thought? What would she say about the evolutionary processes that shaped the scene? (Jablonka 2014, 235)

Undoubtedly the first thing she would say is that the entangled bank image is very familiar, because we borrowed it from the closing paragraph of On the Origin of Species. The nineteenth-century naturalist who is contemplating the scene is obviously Charles Darwin. The famous last paragraph is constantly being quoted, the biologist would tell us, because in it Darwin summarized his theory of evolution. He suggested that over vast spans of time natural selection of heritable variations had produced all the elaborate and interdependent forms in the entangled bank. (Jablonka 2014, 235)

Our modern-day evolutionary biologist would almost certainly go on to say that she thinks Darwin’s theory is basically correct. However, she would also point out that Darwin’s seemingly simple suggestion hides enormous complications because there are several types of heritable variation, they are transmitted in different ways, and selection operates simultaneously on different traits and at different levels of biological organization. Moreover, the conditions that bring about selection—those aspects of the world that make a difference to the reproductive success of a plant or animal—are neither constant nor passive. In the entangled bank, the plants, the singing birds, the bushes, the flitting insects, the worms, the damp earth, and the naturalist observing and experimenting with them form a complex web of ever-changing interactions. The plants and the insects are part of each other’s environment, and both are parts of the birds’ environment and vice versa. The worms help to determine the conditions of life for the plants and birds, and the plants and birds influence the worms’ conditions. Everything interacts. The difficulty for our evolutionary biologist is unraveling how changes occur in the patterns of interactions within the community and within each species. (Jablonka 2014, 235-236)

Take something seemingly simple, like where a plant-eating insect chooses to lay its eggs. Often it will show a strong preference for one particular type of plant. Is this preference determined by its genes, or by its own experiences, or by the experiences of its mother? The answer is that sometimes the insect’s genetic endowment is sufficient to explain the preference, but often behavioral imprinting is involved. Darwin discussed this in the case of cabbage butterflies. If a female butterfly lays her eggs on cabbage, and cabbage is the food of the hatching caterpillars, then when they metamorphose into butterflies her offspring will choose to lay their eggs on cabbage rather than on a related plant. In this way the preference for cabbage is transmitted to descendants by nongenetic means. There are therefore at least two ways of inheriting a preference—genetic and behavioral. An evolutionary biologist would naturally ask whether and how these two are related. Can the experience-dependent preference evolve to become an inbuilt response that no longer depends on experience? Conversely, can an inbuilt preference evolve to become more flexible, so that food preferences are determined by local conditions? (Jablonka 2014, 236)

Similar questions can be asked about the plants on the entangled bank. The most obvious effects of the insects’ behavior are on the survival and reproduction of the plants. Being the preferred food of an insect species may be an advantage to some of them, because it means that their flowers are more readily and efficiently pollinated. If so, those plants that the insects find tasty may become more abundant. Any variation, be it genetic or epigenetic, that makes a plant even more attractive to the insects, or that makes its imprinting effects more effective or reliable, will be selected. Conversely, if the insects’ food preference damages the plants, variations that make it less attractive or more resistant to insect attack will be favored. For example, plants often produce toxic compounds that are protective because insects cannot tolerate them. The ability to produce such toxins will be selected. In some species toxin production is an induced response, brought about by insect attack, but in others it is a permanent part of the plant’s makeup. Once again, an evolutionary biologist would want to know whether there is any significance in this. When there is an induced response, presumably involving changes in gene activities, does this affect the likelihood or nature of changes in the plant’s DNA sequence? Do epigenetic variations bias the rate or the direction of genetic changes? Are the genetic and epigenetic responses related in any way? (Jablonka 2014, 236-237)

How would an evolutionary biologist think about the worms that feature in Darwin’s entangled bank? Earthworms must have been one of Darwin’s favorite animals, because he devoted the whole of his last book to them. Visitors to Down House, his home for many years, can still see vestiges of his worm experiments in the garden there. Earthworms are a good example of something that is true for many animals and plants: they help to construct their own environment. Darwin realized that as earthworms burrow through the soil, mixing it, passing it through their guts, and leaving casts on the surface, they change the soil’s properties. The environment constructed by the earthworms’ activities is the one in which they and their descendants will grow, develop, and be selected. An evolutionary biologist therefore wants to know how the species’ ability to change its environment and pass on the newly constructed environment to its descendants influences its evolution. How important is such niche construction? (Jablonka 2014, 237)

Very wisely, Darwin avoided mentioning human beings when he summarized his “laws” of evolution in the final paragraph of The Origin. He realized that suggesting that humans had evolved from apelike ancestors would land him in very deep trouble, and he was going to be in enough trouble as it was. Although he knew full well that his own species is also a product of natural selection, he left discussing it to a later book. He did devote a lot of space in The Origin to humans, however. In particular, he described how, through selection, they had changed plants and animals during domestication. Darwin would have been well aware that the naturalist observing the entangled bank was potentially the most powerful evolutionary influence acting on it. Humans could divert a stream, so that the bank dries out and many of the organisms inhabiting it die; or they might introduce new plants or animals, thereby altering the whole web of interactions in the bank. Without doubt, humans are the major selective agents on our planet, and have carried out the most dramatic reconstruction (usually destruction) of environments. Today, in addition to changing plants and animals by artificial selection, humans can alter the genetic, epigenetic, and behavioral state of organisms by direct genetic, physiological, and behavioral manipulations. We are only at the beginning of this man-made evolutionary revolution, which will affect our own species as well as others. Our ability to manipulate evolution in this way is derived from the human capacity to think and communicate in symbols. Through the symbolic system, we have the power of planning and foresight. As the evolutionary biologist knows, this has had and will continue to have effects on all biological evolution. (Jablonka 2014, 237-240)

As she looks at the entangled bank, a modern-day evolutionary biologist would know that explaining how natural selection has produced the complex, interacting living forms she sees is a formidable task. She could recruit the help of specialists, who might enable her to explain bits of the scene: the geneticists could look at the genetic variants in the plant and animal populations, and see how they influence survival and reproductive success; the physiologists, biochemists, and developmental biologists could look at the adaptive capacity of individuals; the ethologists and psychologists could tell her about the animals’ behavior, and how it is shaped by and shapes conditions; the sociologists and historians would tell her what role humans have had in developing the bank; the ecologists would investigate the interactions between the plants, animals, and their physical environment. Each of the specialists would probably be convinced that their own findings and interpretations are the most significant for understanding the whole picture, and that the other parts of the study are of marginal significance. This is what usually happens when people look at the isolated parts of a system. A lot of knowledge can be gained from this approach, but eventually it is necessary to reassemble the bits—to put Humpty Dumpty together again. How do the genetic, epigenetic, behavioral, and cultural dimensions of heredity and evolution fit together? What influence have they had on each other? (Jablonka 2014, 240)

[Now begins the interesting part of Jablonka’s story: putting Humpty Dumpty back together again, the unfinished synthesis, still in progress, the re-synthesis of evolutionary theory that includes development, epigenetics, and ongoing revelations that few can even keep pace with.]

Just So Stories and Hardened Adaptationism

Theories are excellent servants but very bad masters.

Thomas Henry Huxley

Natural selection does not act on anything, nor does it select (for or against), force, maximize, create, modify, shape, operate, drive, favor, maintain, push, or adjust. Natural selection does nothing. Natural selection as a natural force belongs in the insubstantial category already populated by the Becker/Stahl phlogiston or Newton’s “ether.” ….

Having natural selection select is nifty because it excuses the necessity of talking about the actual causation of natural selection. Such talk was excusable for Charles Darwin, but inexcusable for evolutionists now.

Provine 2001: 199-200, The Origins of Theoretical Population Genetics.

Darwin has often been depicted as a radical selectionist at heart who invoked other mechanisms only in retreat, and only as a result of his age’s own lamented ignorance about the mechanisms of heredity. This view is false. Although Darwin regarded selection as the most important of evolutionary mechanisms (as do we), no argument from opponents angered him more than the common attempt to caricature and trivialize his theory by stating that it relied exclusively upon natural selection. In the last edition of the Origin, he wrote (1872, p. 395):

As my conclusions have lately been much misrepresented, and it has been stated that I attribute the modification of species exclusively to natural selection, I may be permitted to remark that in the first edition of this work, and subsequently, I placed in a most conspicuous position — namely, at the close of the Introduction — the following words: “I am convinced that natural selection has been the main, but not the exclusive means of modification.” This has been of no avail. Great is the power of steady misrepresentation.

Charles Darwin, Origin of Species (1872, p. 395)

Gould, Stephen J., and Lewontin, Richard C. (1979). The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. Proceedings of the Royal Society of London, Series B, Vol. 205, No. 1161, pp. 581-598.

~ ~ ~

Ernst Mayr’s (1963, p. 586) epitome of Darwinism as preached by the Modern Synthesis: “All evolution is due to the accumulation of small genetic changes, guided by natural selection …, and that transpecific evolution … is nothing but an extrapolation and magnification of the events that take place within populations and species.” (Gould 2002: 160) [See The Evolution of the Genome ]

Throughout Mayr’s 1963 book [Animal Species and Evolution, Cambridge MA: Harvard University Press] — with a cadence that sounds, at times, almost like a morality play — phenomenon after phenomenon falls to the explanatory unity of adaptation, as the light of nature’s truth expands into previous darkness: non-genetic variation (p. 139), homeostasis (pp. 57, 61), prevention of hybridization (p. 109). Former standard bearers of the opposition fall into disarray, finally succumbing to defeat almost by definition: “It is now evident that the term ‘drift’ was ill-chosen and that all or virtually all of the cases listed in the literature as ‘evolutionary change due to genetic drift’ are to be interpreted in terms of selection” (p. 214). All particular Goliaths have been slain (although later genetic studies would revivify this particular old warrior): “The human blood-group genes have in the past been held up as an exemplary case of ‘neutral-genes,’ that is, genes of no selective significance. This assumption has now been thoroughly disproved.” (p. 161). (Gould 2002: 539)

However, Mayr’s most interesting expression of movement towards a hardened adaptationism occurs not so much in these explicit claims for near ubiquity, but even more forcefully in the subtle redefinition of all evolutionary problems as issues in adaptation. The very meaning of terms, questions, groupings and weights of phenomena, now enter evolutionary discourse under adaptationist presumptions. Not only have alternatives to adaptation been routed on an objective playing field, Mayr claims in 1963, but the conceptual space of evolutionary inquiry has also become so reconfigured that hardly any room (or even language) remains for considering, or even formulating, a potential way to consider answers outside an adaptationist framework. (Gould 2002: 539)

Major subjects, the origin of evolutionary novelty for example, now reside exclusively within an adaptationist framework by purely functional definition: “We may begin by defining evolutionary novelty as any newly acquired structure or property that permits the performance of a new function, which, in turn, will open a new adaptive zone” (p. 602). In a world of rapid and precise adaptation, morphological similarity between distantly related groups can only arise through convergence imposed by similar adaptive regimes upon fundamentally different genetic material. The older, internalist view (constraint-based and potentially nonadaptationist) — the claim that we might attribute such similarities to parallelism produced by homologous genes — is dismissed as both old-fashioned and wrongheaded. (In modern hindsight, this claim provides a particularly compelling example of how hardened adaptationism can suppress interesting questions — for such homologues have now been found in abundance. Their discovery ranks as one of the most important events in modern evolutionary science — see Chapter 10, p. 1092, where we will revisit this particular Mayrian claim): “In the early days of Mendelism there was much search for homologous genes that would account for such similarities. Much that has been learned about gene physiology makes it evident that the search for homologous genes is quite futile except in very close relatives” (1963 [Animal Species and Evolution, Cambridge MA: Harvard University Press], p. 609). (Gould 2002: 539)

(….) All potential anomalies yield to a more complex selectionist scenario, often presented as a “just-so-story.” Why did the crown height of molars increase slowly, if hypsodonty became so advantageous once horses shifted to vegetational regimes of newly-evolved grasses with high silica content? Mayr devises a story — sensible, though empirically wrong in this case — and regards such a hypothetical claim for plausibility as an adequate reason to affirm a selectionist cause. (The average increase may have been as small as the figure cited by Mayr, but horses did not change in anagenetic continuity at constant rates. Horses probably evolved predominantly by punctuated equilibrium — see Prothero and Shubin, 1989). The average of a millimeter per million years represents a meaningless amalgam of geological moments of rapid change during speciation mixed with long periods of stasis: “An increase in tooth length (hypsodonty) was a selective advantage to primitive horses shifting from browsing to grazing in an increasingly arid environment. However, such a change in feeding habits required a larger jaw and stronger jaw muscle, hence a bigger and heavier skull supported by heavier neck muscles, as well as shifts in the intestinal tract. Too rapid an increase in tooth length was consequently opposed by selection, and indeed the increase averaged only about 1 millimeter per million years” (1963, p. 238) (Gould 2002: 540)

(….) [T]he synthesis can no longer assert full sufficiency to explain evolution at all scales…. I advance this opinion only with respect to a particular, but … quite authoritative, definition of the synthesis…. The definition begins Mayr’s chapter on “species and transspecific evolution” from his 1963 classic… Mayr wrote …: “The proponents of the synthetic theory maintain that all evolution is due to the accumulation of small genetic changes, guided by natural selection, and that transspecific evolution is nothing but an extrapolation and magnification of the events that take place within populations and species.” (Gould 2002: 1003)

(….) The discovery [evo-devo] that has so discombobulated the confident expectations of orthodox theory can be stated briefly and baldly: the extensive “deep homology” now documented in both the genetic structure and developmental architecture of phyla separated at least since the Cambrian explosion (ca. 530 million years ago) should not, and cannot, exist under conventional concepts of natural selection as the dominant cause of evolutionary change. (Gould 2002: 1065)

(….) Darwinian biology attributes the origin of shared homologous characters to ordinary adaptation by natural selection in a common ancestor. Moreover, homologous characters not only continue to express their adaptive origin, but also remain fully subject to further adaptive change — even to the point of losing their ready identity as homologies if they become inadaptive in the environment of any descendent lineage. Homological similarity in related taxa living in different environments therefore indicates a lack of selective pressure for alteration, not a limitation upon the power of selection to generate such changes. (At the Chicago Macroevolution meeting in 1980, for example, Maynard Smith acknowledged the allometric basis of many homologies, but stated that the attribution of such similarity to “developmental constraint” would represent what he proposed to christen as the “Gould-Lewontin fallacy” — for natural selection can unlock any inherited developmental correlation if adaptation to immediate environment favors such an alteration.) (Gould 2002: 1065-1066)

(….) Second, homological holds must be limited in taxonomic and structural extent to close relatives of similar Bauplan and functional design. The basic architectural building blocks of life — the DNA code, or the biomolecular structure of fundamental organic compounds, for example — may be widely shared by homology among phyla. But the particular blueprints of actual designs and the pathways of their construction … must be limited to clades of closer relationship. (Gould 2002: 1066)

(….) Any wider hold of homology [which has already occurred] would have to inspire suspicions that the central tenet of orthodox Darwinism can no longer be sustained: the control of rates and directions of evolutionary change by the functional force of natural selection. In a particularly revealing quote within the greatest summary document of the Modern Synthesis, for example, Mayr … formulated the issue in a forthright manner. After all, he argued, more than 500 million years of independent evolution must erase any extensive genetic homology among phyla if natural selection holds such power to generate favorable change [novelty]. Adaptive evolution, over these long intervals, must have crafted and recrafted every genetic locus, indeed every nucleotide position, time and time again to meet the constantly changing selective requirements of continually varying environments. At this degree of cladistic separation, any independently evolved phenotypic similarity in basic adaptive architecture must represent the selective power of separate shaping by convergence, and cannot record conserved influence of retained genetic sequences, or common generation by parallelism: “In the early days of Mendelism there was much search for homologous genes that would account for such similarities. Much that has been learned about gene physiology makes it evident that the search for homologous genes is quite futile except in very close relatives.” (Gould 2002: 1066)

But we now know that extensive genetic homology for fundamental features of development does hold across the most disparate animal phyla. For an orthodox Darwinian functionalist, only one fallback position remains viable in this new and undeniable light… One can admit the high frequency and great importance of such genetic constraints (and also designate their discovery as stunningly unexpected), while continuing to claim that natural selection holds exclusive sway over evolutionary change because deep homology only imposes limits upon styles and ranges of developmental pathways, but cannot power any particular phyletic alteration. Natural selection can still reign supreme as the pool cue of actual evolutionary motion. (Gould 2002: 1066-1067)

But a formalist defender of positive constraint will reply that such unanticipated deep homology also channels change in positive ways and that the key to this central argument resides in an old distinction that, unfortunately, cannot be matched for both conceptual and terminological confusion, and for consequent failure of most evolutionists to engage in the issue seriously: namely, the differences in causal meaning (not just in geometric pattern) between parallelism and convergence. (Gould 2002: 1067)

(….) But — and now we come to the nub of the issue, and to the central role of positive developmental constraint as a major challenge to selectionist orthodoxy — the attribution of similar evolutionary changes in independent lineages to internal constraint of homologous genes and developmental pathways, and not only to an external impetus of common selective pressures, must be limited to very close relatives still capable of maintaining substantial genetic identity as a consequence of recent common ancestry. Mayr’s characterization of selectionist orthodoxy comes again to mind: distantly related lineages cannot be subject to such internal limitation or channeling because the pervasive scrutiny and ruthless efficiency of natural selection, operating on every feature over countless generations in geological immensity, must have fractured any homological hold by underlying genes and developmental pathways over the freedom of phenotypes to follow wherever selection leads. (Gould 2002: 1067)

Darwin’s famous words, so often quoted, haunt the background of this discussion (1859, p. 84): “It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life.” (Gould 2002: 1068)

Therefore, an uncannily detailed phenotypic similarity evolved between distantly related groups must arise by convergence from substrates of non-homologous genotypes — thus affirming our usual view of selection’s overarching power, especially if common function for the two similar forms can validate the hypothesis of generation within a comparable adaptational matrix. (Note the logical danger of circularity that intrudes upon the argument at this point, for this extent of detailed similarity — the very datum that, in an unbiased approach, would lead one to entertain parallelism based upon common internal constraint as a viable alternative to convergence based on similar adaptive needs — now becomes an a priori affirmation of selection’s power, the hypothesis supposedly under test.) (Gould 2002: 1068)

For this reason, such detailed functional and structural similarities, evolved independently in distantly related lineages, have become “poster boy” examples of convergence — itself the “poster boy” phenomenon and general concept for showcasing selection’s dominant sway — precisely because similarities evolved in this mode cannot, by Mayr’s argument, be ascribed to parallelism based on positive constraint imposed by homologous genetic and developmental pathways. With internal channeling thus theoretically barred as a potential source of impressive similarity, convergence becomes the favored explanation by default. The argument, surely “tight” in logic and principle, seems incontrovertible. (Gould 2002: 1068)

(….) [O]ne of the major discoveries of evo-devo has revealed a deep genetic homology underlying and promoting the separate evolution of lens eyes in cephalopods and vertebrates…. [B]oth phyla share key underlying genes and developmental pathways as homologies, and the example [of ‘convergence’] has lost its former status as the principal textbook case of natural selection’s power to craft stunning similarities from utterly disparate raw materials. (Gould 2002: 1069)

With this “one liner” of maximal force — evo-devo has reinterpreted several textbook examples of convergence as consequences of substantial parallelism — we can encapsulate the depth of theoretical disturbance introduced by this subject into the heart of Darwinian theory. Our former best examples of full efficacy for the functional force of natural selection only exist because internal constraints of homologous genes and developmental pathways have kept fruitful channels of change open and parallel, even in the most disparate and most genealogically distant bilaterian phyla. The homological hold of historical constraint channels change at all levels, even for the broadest patterning of morphospace, and not only for details of parallel evolution in very closely related groups. (Gould 2002: 1069)

(….) [P]arallelism marks the formal influence of internal constraint, while convergence reflects the functional operation of natural selection upon two substrates different enough to exclude internal factors as influences upon the resulting similarity. This recognition of internal channeling as the root cause of parallelism — the principal basis for ascribing evolutionary change, and not only limitation, to historical constraint — lies at the heart of evo-devo’s theoretical novelty and importance to the Darwinian worldview. (Gould 2002: 1075)

(….) I began this “symphony” of evo-devo with a quotation from one of the great architects of the Modern Synthesis — Mayr’s statement, based on adaptationist premises then both reasonable and conventional, that any search for genetic homology between distantly-related animal phyla would be doomed a priori and in theory by selection’s controlling power, a mechanism that would surely recycle every nucleotide position (often several times) during so long a period of independent evolution between two lines. The new data of evo-devo have falsified this claim and revised our basic theory to admit a great, and often controlling, power for historical constraints based on conserved developmental patterns coded by the very genetic homologies that Mayr had deemed impossible. (Gould 2002: 1175)

(….) The argument that structural and morphological archetypes underlie, and actively generate, a basic and common architecture in taxonomically distant groups defines — both as a fact of our profession’s actual history and as a dictate of the logic of our explanatory theories — the strongest kind of claim for developmental constraint as a major factor in patterns of evolutionary change and the occupation of morphospace. I suspect that the depth of this challenge has always been recognized, but the empirical case for such constraining archetypes has remained so weak, since the heyday of Geoffroy and Owen some 150 years ago, that the issue simply didn’t generate much serious concern — and rightly so.

The concept of interphylum archetypes, deemed too bizarre to warrant active refutation, experienced the curt and derisive dismissal reserved for crackpot ideas in science. (Goldschmidt’s saltational apostasy, on the other hand, inspired voluminous and impassioned denial because his ideas did seem sufficiently and dangerously plausible to the Modern Synthesis — see pp. 451-466). Indeed, the notion of interphylum archetypes struck most biologists as so inconceivable in theory that empirical counterclaims hardly seemed necessary. After all, the notion required extensive genetic homology among phyla, and the power of natural selection, working on different paths for a minimum of 530 million years since the origin of distinct phyla in the Cambrian explosion, seemed to guarantee such thorough change at effectively every nucleotide position that the requisite common foundation could not possibly have been maintained (see Mayr, 1963, p. 609, as previously discussed on pp. 539 and 1066).

~ ~ ~

No case has received more attention, generated more surprise, rested upon firmer data, or so altered previous “certainties,” than the discovery of an important and clearly homologous developmental pathway underlying the ubiquitous and venerable paradigm of convergence in our textbooks: the independent evolution of image-forming lens eyes in several phyla, with the stunning anatomical similarities of single-lens eyes in cephalopods and vertebrates as the most salient illustration. As Tomarev et al. (1997, p. 2421) write: “The complex eyes of cephalopod mollusks and vertebrates have been considered a classical example of convergent evolution.” (….)


DATA AND DISCOVERY. Salvini-Plawen and Mayr (1977), in a classical article nearly always cited in this context, argued that photoreceptors of some form have evolved independently some 40 to 60 times among animals, with six phyla developing complex image-forming eyes, ranging from cubomedusoids among the Cnidaria, through annelids, onychophores, arthropods and mollusks to vertebrates along the conventional chain of life. In the early 1990s, using Drosophila probes, researchers cloned a family of mammalian Pax genes, most notably Pax-6, which includes both a paired box and homeobox (Walther and Gruss, 1991). (….) The similar function of these Pax-6 homologs in different phyla was then dramatically affirmed by expressing the mouse gene in Drosophila (Halder et al., 1995), and finding that the mammalian version could still induce the formation of normal fly eyes. (….) [T]he Pax-6 story has now furnished an important homological basis in underlying developmental pathways for generating complex eyes in cephalopods and vertebrates. Thus, a channel of inherited internal constraint has strongly facilitated the resulting, nearly identical solution in two phyla, and evolutionists can no longer argue that such similar eyes originated along entirely separate routes, directed only by natural selection, and without benefit of any common channel of shared developmental architecture. But just as the advocates of pure convergence erred in claiming exclusive rights of explanation, the discovery of Pax-6 homologies does not permit a complete flip to exclusive explanation by constraint. (Gould 2002: 1123-1128)

~ ~ ~

[Gould reminds us we must not forget] … the striking reformation of evolutionary theory implied by the well-documented genetic and developmental homologies alone. De Robertis expresses this key argument in the final line of his 1997 article on the ancestry of segmentation: “The realization that all Bilateria are derived from a complex ancestor represents a major change in evolutionary thinking, suggesting that the constraints imposed by the previous history of species played a greater role in the outcome of animal evolution than anyone would have predicted until recently.” (Gould 2002: 1152) [De Robertis, E.M. 1997. The ancestry of segmentation. Nature 387: 25-26. See also, De Robertis, E.M., G. Oliver, and C.V.E. Wright. 1990. Homeobox genes and the vertebrate body plan. Scientific American, July, pp. 46-52; De Robertis, E.M., and Y. Sasai. 1996. A common plan for dorsoventral patterning in Bilateria. Nature 380: 37-40.]

(….) Hughes (2000, p. 65) has expressed this cardinal discovery of evo-devo in phyletic and paleontological terms: “It is hard to escape the suspicion that what we witness in the Cambrian is mainly tinkering with developmental systems already firmly established by the time these Cambrian beasts showed up.” (Gould 2002: 1155) [Hughes, N.C. 2000. The rocky road to Mendel’s play. Evol. and Develop. 2: 63-66.]

Natural Selection as a Creative Force

The evolutionary synthesis [i.e., neo-Darwinian theory] came unraveled for me during the period since 1980. Historically, my examination of this period, after editing with Ernst Mayr “The Evolutionary Synthesis” (Mayr and Provine 1980), showed that it was not a synthesis, but rather a systematic diminution of the factors in evolution, and I now call it the “evolutionary constriction” (Provine 1989). The unity of evolutionary biology inherent in the “synthesis” has been replaced by a much more interesting and fascinating complex of different levels marching to different drummers…. In 1970 I could see the origins of theoretical population genetics as being an unalloyed good for evolutionary biology, and thus obviously a great subject for an historian. Now I see these same theoretical models of the early 1930s, still widely used today, as an impediment to understanding evolutionary biology, and their amazing persistence in textbooks and classrooms as a great topic for other historians.

Provine, William B. The Origins of Theoretical Population Genetics. Chicago: University of Chicago Press, 2001 (c1971), pp. 203-204.

In 1895, Wilhelm Roux, in his manifesto for experimental embryology, postulated that there would be two types of developmental mechanics. The first—ontogenetic developmental mechanics—would uncover how development occurred. The second—phylogenetic developmental mechanics—would determine how changes in embryonic development caused evolutionary change. A century later, we are starting to make good on Roux’s prophecy. The homologies of process within morphogenetic fields provide some of the best evidence for evolution—just as skeletal and organ homologies did earlier. Thus, the evidence for evolution is better than ever. Natural selection, however, is seen to play a less important role in evolution. It is merely a filter for unsuccessful morphologies generated by development. Population genetics is destined to change if it is not to become as irrelevant to evolution as Newtonian mechanics is to contemporary physics. The population genetics of regulatory genes and their possible combinations within fields should become a major new research program. Developmental genetics would also change, reflecting an emphasis on the initiation and maintenance of genetic circuits within cells and epigenetic circuits within the field. One of its major research programs would be to find the target genes of these pathways which differ from field to field and from organism to organism, i.e., those genes that provide the diversity in evolution. (Gilbert, Opitz, and Raff 1996: 368)

Developmental biology is reclaiming its appropriate place in evolutionary theory. We conclude with a remarkable prophecy from one of those evolutionary-minded embryologists, Gavin de Beer (1951), who saw homology and fields as being crucial to the study of evolution:

But since phylogeny is but the result of modified ontogeny, there is the possibility of a causal analytic study of present evolution in an experimental study of the variability and genetics of ontogenetic processes. Finally, it may be possible that, freed from the trammels and fetters which have so long confined thought, the whole of the animal kingdom may appear in a new light, more homogeneous and compact than had been imagined, and with the gaps between its major groups less formidable and perhaps even bridgeable.

Gilbert, S. F., Opitz, J. M., and Raff, R. A. (1996). Resynthesizing evolutionary and developmental biology. Developmental Biology, 173, 357-372.

~ ~ ~

Natural selection ranked as a standard item in biological discourse—but with a crucial difference from Darwin’s version: the usual interpretation invoked natural selection as part of a larger argument for created permanency. Natural selection, in this negative formulation, acted only to preserve the type, constant and inviolate, by eliminating extreme variants and unfit individuals who threatened to degrade the essence of created form. (Gould 2002: 137-139)

(….) Darwin’s theory … cannot be equated with the simple claim that natural selection operates. Nearly all his colleagues and predecessors accepted this postulate. Darwin, in his characteristic and radical way, grasped that this standard mechanism for preserving the type could be inverted, and then converted into the primary cause of evolutionary change. Natural selection obviously lies at the center of Darwin’s theory, but we must recognize, as Darwin’s second key postulate, the claim that natural selection acts as the creative force of evolutionary change. The essence of Darwinism cannot reside in the mere observation that natural selection operates—for everyone had long accepted a negative role for natural selection in eliminating the unfit and preserving the type. (Gould 2002: 139)

(….) We have lost this context and distinction today, and our current perspective often hampers an understanding of the late 19th century literature and its preoccupations. Anyone who has read deeply in this literature knows that no argument inspired more discussion, while no Darwinian claim seemed more vulnerable to critics, than the proposition that natural selection should be viewed as a positive force, and therefore as the primary cause of evolutionary change. The “creativity of natural selection”—the phrase generally used in Darwin’s time as a shorthand description of the problem—set the cardinal subject for debate about evolutionary mechanisms during Darwin’s lifetime and throughout the late 19th century. [It is poised to once again become the central question due to the scientific discoveries taking place in the fields of epigenetics and developmental biology (evo-devo).] (Gould 2002: 139)

Non-Darwinian evolutionists did not deny the reality, or the operationality, of natural selection as a genuine cause stated in the most basic or abstract manner. (….) They held, rather, that natural selection, as a headsman or executioner, could only eliminate the unfit, while some other cause must play the positive role of constructing the fit. (Gould 2002: 139)

(….) We can understand the trouble that Darwin’s contemporaries experienced in comprehending how selection could work as a creative force when we confront the central paradox of Darwin’s crucial argument: natural selection makes nothing; it can only choose among variants originating by other means. How then can selection possibly be conceived as a “progressive,” or “creative,” or “positive” force? (Gould 2002: 140)

The Requirements for Variation

In order to act as raw material only, variation must walk a tightrope between two unacceptable alternatives. First and foremost, variation must exist in sufficient amounts, for natural selection can make nothing, and must rely upon bounty thus provided; but variation must not be too florid or showy either, lest it become the creative agent of change itself. Variation, in short, must be copious, small in extent, and undirected. A full taxonomy of non-Darwinian evolutionary theories may be elaborated by their denials of one or more of these central assumptions. (Gould 2002: 141)

COPIOUS. Since natural selection makes nothing and can only work with raw material presented to its stringent review, variation must be generated in copious and dependable amounts…. Darwin’s scenario for selective modification always includes the postulate, usually stated explicitly, that all structures vary, and therefore evolve…. If these universally recognized distinctions arise as consequences of differences in the intrinsic capacity of species to vary, then Darwin’s key postulate of copiousness would be compromised—for failure of sufficient raw material would then be setting a primary limit upon the rate and style of evolutionary change, and selection would not occupy the driver’s seat. (Gould 2002: 141-142)

Darwin responds by denying this interpretation, and arguing that differing intensities of selection, rather than intrinsically distinct capacities for variation, generally cause the greater or lesser differentiation observed among domestic species. I regard this argument as among the most forced and uncomfortable in the Origin—a rare example of Darwinian special pleading. But Darwin realizes the centrality of copiousness to his argument for the creativity of natural selection, and he must therefore face the issue directly:

Although I do not doubt that some domestic animals vary less than others, yet the rarity or absence of distinct breeds for the cat, the donkey, peacock, goose, etc., may be attributed in main part to selection not having been brought into play: in cats, from the difficulty in pairing them; in donkeys, from only a few being kept by poor people and little attention paid to their breeding; in peacocks, from not being very easily reared and a large stock not kept; in geese, from being valuable only for two purposes, food and feathers, and more especially from no pleasure having been felt in the display of distinct breeds (p. 42).

On the Origin of Species

Second, copiousness must also be asserted in the face of a powerful argument about limits to variation following modal departure from “type.” To use Fleeming Jenkin’s (1867) famous analogy: a species may be compared to a rigid sphere, with modal morphology of individuals at the center, and limits to variation defined by the surface. So long as individuals lie near the center, variation will be copious in all directions [isotropic; non-directional]. But if selection brings the mode to the surface, then further variation in the same direction will cease—and evolution will be stymied by an intrinsic limitation upon raw material, even when selection would favor further movement. Evolution, in other words, might consume its own fuel and bring itself to an eventual halt thereby. This potential refutation stood out as especially serious—not only for threatening the creativity of natural selection, but also for challenging the validity of uniformitarian extrapolation as a methodology of research. Darwin responded, as required by logical necessity, that such limits do not exist, and that new spheres of equal radius can be reconstructed around new modes: “No case is on record of a variable being ceasing to be variable under cultivation. Our oldest cultivated plants, such as wheat, still often yield new varieties: our oldest domesticated animals are still capable of rapid improvement or modification” (p. 8). (Gould 2002: 142)
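Jenkin’s sphere argument lends itself to a toy numerical caricature. The sketch below is purely illustrative and not drawn from Gould or Jenkin; the trait bounds, population size, and mutation spread are all invented parameters. It shows a bounded trait under truncation selection: the population mean climbs toward the boundary of the “sphere,” after which variation in the favored direction is exhausted and change stalls, precisely the potential refutation Darwin had to answer.

```python
import random

# Toy model (an illustration only, not from Gould or Jenkin): trait values
# are confined to the fixed range [0, 1] -- Jenkin's "rigid sphere".
# Each generation, truncation selection keeps the fitter half, and
# offspring vary a little around their parents, but never past the surface.

random.seed(1)

def next_generation(pop, spread=0.05):
    survivors = sorted(pop)[len(pop) // 2:]          # keep the fitter half
    offspring = []
    for parent in survivors * 2:                     # restore population size
        child = parent + random.uniform(-spread, spread)
        offspring.append(min(1.0, max(0.0, child)))  # clamped at the "surface"
    return offspring

pop = [random.uniform(0.4, 0.6) for _ in range(200)]  # mode near the center
for _ in range(200):
    pop = next_generation(pop)

mean = sum(pop) / len(pop)
# After many generations the mean presses against the upper limit: upward
# variation is used up, and selection can no longer move the population.
print(round(mean, 2))
```

Darwin’s reply, quoted above, amounts to denying the rigid surface: in his view new “spheres of equal radius” form around each new mode, so the fuel is never consumed.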

(….) One of the most appealing features of Mendelism (a strong reason for acceptance following its “rediscovery” in 1900) lay in the argument that mutation could restore variation “used up” by selection. (Gould 2002: 143) [See Klein & Takahata, 2002, “Where Do We Come From: The Molecular Evidence for Human Descent,” p. 197-204, regarding atavism.]

SMALL IN EXTENT. If the variations that yielded evolutionary change were large—producing new major features, or even new taxa, in a single step—then natural selection would not disappear as an evolutionary force. Selection would still function in an auxiliary and negative role as headsman—to heap up the hecatomb of the unfit, permit the new saltation to spread among organisms in subsequent generations, and eventually to take over the population. But Darwinism, as a theory of evolutionary change, would perish—for selection would become both subsidiary and negative, and variation itself would emerge as the primary, and truly creative, force of evolution, the source of occasional lucky saltations. For this reason, and quite properly, saltationist (or macromutational) theories have always been viewed as anti-Darwinian—despite the protestations of de Vries …, who tried to retain the Darwinian label for his continued support of selection as a negative force. The unthinking, knee-jerk response of many orthodox Darwinians whenever they hear the word “rapid” or the name “Goldschmidt” testifies to the conceptual power of saltation as a cardinal danger to an entire theoretical edifice. (Gould 2002: 143)

Darwin held firmly to the credo of small-scale variability as raw material because both poles of his great accomplishment required this proviso…. At the theoretical pole, natural selection can only operate in a creative manner if its cumulating force builds adaptation step by step from an isotropic pool of small-scale variability. If the primary source of evolutionary innovation must be sought in the occasional luck of fortuitous saltations, then internal forces of variation become the creative agents of change, and natural selection can only help to eliminate the unfit after the fit arise by some other process. (Gould 2002: 143-144)


UNDIRECTED. Textbooks of evolution still often refer to variation as “random.” We all recognize that this designation is a misnomer, but continue to use the phrase by force of habit. Darwinians have never argued for “random” mutation in the restricted and technical sense of “equally likely in all directions,” as in tossing a die. [Rather it means statistical frequencies around a modal norm, like the bell curve, for example, which does not imply that the underlying cause is totally random like tossing a die.] But our sloppy use of “random” (see Eble, 1999) does capture, at least in a vernacular sense, the essence of the important claim that we do wish to convey—namely, that variation must be unrelated to the direction of evolutionary change; or, more strongly, that nothing about the process of creating raw material biases the pathway of subsequent change in adaptive directions. This fundamental postulate gives Darwinism its “two step” character, the “chance” and “necessity” of Monod’s famous formulation: the separation of a source of raw material (mutation, recombination, etc.) from a force of change (natural selection). (Gould 2002: 144)

In a sense, the specter of directed variability threatens Darwinism even more seriously than any putative failure of the other two postulates. Insufficient variation stalls natural selection; saltation deprives selection of a creative role but still calls upon Darwin’s mechanism as a negative force. With directed variation, however, natural selection can be bypassed entirely. If adaptive pressures automatically trigger heritable variation in favored directions, then trends can proceed under regimes of random mortality; natural selection, acting as a negative force, can, at most, accelerate the change. (Gould 2002: 145)

(….) Darwin clearly understood the threat of directed variability to his cardinal postulate of creativity for natural selection. He explicitly restricted the sources of variation to auxiliary roles as providers of raw material, and granted all power over the direction of evolutionary change to natural selection…. He recognized biased tendencies to certain states of variation, particularly reversions toward ancestral features. But he viewed such tendencies as weak and easily overcome by selection. Thus, by the proper criterion of relative power and frequency, selection controls the direction of change: “When under nature the conditions of life do change, variations and reversions of character probably do occur; but natural selection, as will hereafter be explained, will determine how far the new characters thus arising shall be preserved” (p. 15) (Gould 2002: 145)

We may summarize Darwin’s third requirement for variation under the rubric of isotropy, a common term in mineralogy (and other sciences) for the concept of a structure or system that exhibits no preferred pathway as a consequence of construction with equal properties in all directions. Darwinian variation must be copious in amount, small in extent, and effectively isotropic. (….) Only under these stringent conditions can natural selection—a force that makes nothing directly, and must rely upon variation for all raw material—be legitimately regarded as creative. (Gould 2002: 145)

(….) Gradualism. Selection becomes creative only if it can impart direction to evolution by superintending the slow and steady accumulation of favored subsets from an isotropic pool of variation. If gradualism does not accompany this process of change, selection must relinquish this creative role and Darwinism then fails as a creative source of evolutionary novelty. If important new features, or entire new taxa, arise as large and discontinuous variations, then creativity lies in the production of the variation itself. Natural selection no longer causes evolution, and can only act as a headsman for the unfit, thus promoting changes that originated in other ways. Gradualism therefore becomes a logical consequence of the operation of natural selection in Darwin’s creative mode. (Gould 2002: 149)

(….) INSENSIBILITY OF INTERMEDIACY. We now come to the heart of what natural selection requires. This … “just right” statement does not advance a claim about how much time a transition must take, or how variable a rate of change might be. (….) [And in this meaning of “gradualism,” it is simply asserted that] … in going from A to a substantially different B, evolution must pass through a long and insensible sequence of intermediary steps—in other words, that ancestor and descendant must be linked by a series of changes, each within the range of what natural selection might construct from ordinary variability. Without gradualism in this form, large variations of discontinuous morphological import—rather than natural selection—might provide the creative force of evolutionary change. But if the tiny increment of each step remains inconsequential in itself, then creativity must reside in the summation of these steps into something substantial; natural selection, in Darwin’s theory, acts as the agent of accumulation. (Gould 2002: 150)

(….) If the altered morphology of new species often arose in single steps by fortuitous macromutation, then selection would lose its creative role and could act only as a secondary and auxiliary force to spread the sudden blessing through a population. But can we justify Darwin’s application of the same claim to single organs? (….) Would natural selection perish if change in this mode were common? I don’t think so. Darwinian theory would require some adjustments and compromises, particularly a toning down of assertions about isotropy of variation, and a more vigorous study of internal constraint in genetics and development … —but natural selection would still enjoy a status far higher than that of a mere executioner. A new organ does not make a new species; and a new morphology must be brought into functional integration—a process that requires secondary adaptation and fine tuning, presumably by natural selection, whatever the extent of the initial step. (Gould 2002: 150)

The Evolution of the Genome

The biggest intellectual danger of any evolutionary research is the temptation to find satisfaction in ingenious “just so” stories. Devo-evo, as the youngest member of the evolutionary sciences, is in particular danger of falling into this trap, as other branches of evolutionary biology did in the past. (Laubichler and Maienschein 2007: 529).

(….) One of the main sources of intellectual excitement in devo-evo is the prospect of understanding major evolutionary transformations. If developmental evolution were to focus exclusively on microevolutionary processes, the field would abandon that major objective. In other words, even a very successful microevolutionary approach to developmental evolution would not fulfill the expectations that have been raised: bridging the gap between evolutionary genetics and macroevolutionary pattern. (Laubichler and Maienschein 2007: 530)

— Laubichler and Maienschein 2007: 529. In From Embryology to Evo-Devo: A History of Developmental Evolution.

Darwin has often been depicted as a radical selectionist at heart who invoked other mechanisms only in retreat, and only as a result of his age’s own lamented ignorance about the mechanisms of heredity. This view is false. Although Darwin regarded selection as the most important of evolutionary mechanisms (as do we), no argument from opponents angered him more than the common attempt to caricature and trivialize his theory by stating that it relied exclusively upon natural selection. In the last edition of the Origin, he wrote (1872, p. 395):

As my conclusions have lately been much misrepresented, and it has been stated that I attribute the modification of species exclusively to natural selection, I may be permitted to remark that in the first edition of this work, and subsequently, I placed in a most conspicuous position—namely, at the close of the Introduction—the following words: “I am convinced that natural selection has been the main, but not the exclusive means of modification.” This has been of no avail. Great is the power of steady misinterpretation.

Charles Darwin, Origin of Species (1872, p. 395)

— Gould, Stephen J., & Lewontin, Richard C. (1979) The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. Proceedings of the Royal Society of London, Series B, Vol. 205, No. 1161, pp. 581-598.

Dichotomy is both our preferred mental mode, perhaps intrinsically so, and our worst enemy in parsing a complex and massively multivariate world (both conceptual and empirical). Simpson, in discussing “the old but still vital problem of micro-evolution as opposed to macro-evolution” (ref. 10, p. 97), correctly caught the dilemma of dichotomy by writing (ref. 10, p. 97): “If the two proved to be basically different, the innumerable studies of micro-evolution would become relatively unimportant and would have minor value for the study of evolution as a whole.”

Faced with elegant and overwhelming documentation of microevolution, and following the synthesist’s program of theoretical reduction to a core of population genetics, Simpson opted for denying any distinctive macroevolutionary theory and encompassing all the vastness of time by extrapolation. But if we drop the model of dichotomous polarization, then other, more fruitful, solutions become available.

— Gould, Stephen Jay. Tempo and mode in the macroevolutionary reconstruction of Darwinism. National Academy of Sciences Colloquium “Tempo and Mode in Evolution”; 1994 Jul: 6767-6768. Emphasis added.

If Darwin were alive today, I have no doubt his love of truth would lead him to follow the evidence—the facts—wherever they might chance to lead. Darwin was not a dogmatist, but he was dogged in pursuing facts and duly humble in his theoretical interpretations of them. The question of real import is not whether natural selection is a real phenomenon, for it is (aside from its reification into a ‘thing’, which it is not), but whether it is the source of novelty. There is no doubt that through artificial selection we can bring forth existing phenotypic plasticity (e.g., shifting the number of hairs on a fruit fly); but that is merely tweaking already existing features, similar to how the environment brings about morphological changes through phenotypic plasticity, the former being artificial and the latter natural. Natural selection reveals how nature sifts the survival of the fittest but, theoretically speaking, tells us nothing when used as the basis for an unwarranted extrapolation about the arrival of the fittest. When science has mastered the arrival of the fittest we will have mastered evolution itself.

At some point, such heritable regulatory changes will be created in a test animal in the laboratory, generating a trait intentionally drawing on various conserved processes. At that point, doubters [of organic evolution] would have to admit that if humans can generate phenotypic variation in the laboratory in a manner consistent with known evolutionary changes, perhaps it is plausible that facilitated variation has generated change in nature.

— Kirschner, Marc W. and Gerhart, John C. The Plausibility of Life: Resolving Darwin’s Dilemma. New Haven: Yale University Press; 2005; p. 237. Emphasis added.

Natural selection does not act on anything, nor does it select (for or against), force, maximize, create, modify, shape, operate, drive, favor, maintain, push, or adjust. Natural selection does nothing. Natural selection as a natural force belongs in the insubstantial category already populated by the Becker/Stahl phlogiston or Newton’s “ether.” ….

Having natural selection select is nifty because it excuses the necessity of talking about the actual causation of natural selection. Such talk was excusable for Charles Darwin, but inexcusable for evolutionists now. Creationists have discovered our empty “natural selection” language, and the “actions” of natural selection make huge vulnerable targets. (Provine 2001: 199-200)

Provine, William B. The Origins of Theoretical Population Genetics. Chicago: University of Chicago Press; 2001; c1971; pp. 199-200. Emphasis added.

The Epigenetic System of Heredity and Phenotypic Variation

In response to various environmental stimuli metazoans develop a wide variety of discrete biological adaptations, new phenotypic characters, without changes in genes and genetic information. Such abrupt emergence of new morphological and life history (as well as physiological and behavioral) characters requires information. The fact that the genetic information does not change implies that information of a type other than genetic information is responsible for the development of those characters.

(…) [T]he CNS, in response to external stimuli, releases specific chemical signals, which start signal cascades that result in adaptive morphological and physiological changes in various organs or parts of the body. In other words, information for those adaptations flows from the CNS to the target cells, tissues and organs. … [T]he nongenetic, computational nature and origin of that information was also proven.

The CNS generates its information by processing the input of external stimuli. As defined in this work, a stimulus is a perceptible change in an environmental agent to which the CNS responds adaptively. Changes in the environment may be as big as to cause stress condition and radical changes in environment are often associated with adaptive changes in morphology. (Cabej 2004: 209)

Cabej, Nelson R. Neural Control of Development: The Epigenetic Theory of Heredity. New Jersey: Albanet; 2004; p. 209.

Natural selection, understood in the context of what we now know, is today a description of the relationship between an organism’s phenotypic plasticity and its environment. This relationship can be empirically observed, both in nature and in the laboratory, shifting existing features and attributes of an organism, by selectively altering gene frequencies in the lab, or by observing phenotypic responses to environmental signals in nature (survival of the fittest). But the question of the role of natural selection in the origin of novelty (the arrival of the fittest) is today being reexamined in light of new evidence about hereditary variation and its causes and origins.

Darwin’s theory of evolution by natural selection was based on the observation that there is variation between individuals within the same species. This fundamental observation is a central concept in evolutionary biology. However, variation is only rarely treated directly. It has remained peripheral to the study of mechanisms of evolutionary change. The explosion of knowledge in genetics, developmental biology, and the ongoing synthesis of evolutionary and developmental biology has made it possible to study the factors that limit, enhance, or structure variation at the level of an animal’s physical appearance and behavior. Knowledge of the significance of variability is crucial to this emerging synthesis. This volume positions the role of variability within this broad framework, bringing variation back to the center of the evolutionary stage. This book is intended for scholars, advanced undergraduate students and graduates in evolutionary biology, biological anthropology, paleontology, morphology, developmental biology, genomics and other related disciplines.

Hallgrimsson, Benedikt and Hall, Brian (2005) Variation: A Central Concept in Biology.

Macroevolution and the Genome

There are many ways of studying the mechanisms and outcomes of evolution, ranging from genetics and genomics at the lowest scales through to paleontology at the highest. Unfortunately, the division into specialties according to scale has often led to protracted disagreement among evolutionary theorists from different disciplines regarding the nature of the evolutionary processes. Although the resulting debate has undoubtedly led to a refinement of the various theoretical approaches employed, it has also prevented the development of a complete and unified theory of evolution. Without such a theory, all evolutionary phenomena, including those involving features of the genome, will remain at best only partially understood. (….) The goal … is to provide an expansion, not a refutation, of existing evolutionary theory, and to build some much-needed bridges across traditionally disparate disciplines. (Gregory 2005: 679-680)

From Darwin to Neo-Darwinism

Charles Darwin did not invent the concept of evolution (“descent with modification” or “transmutation,” in the terminology of his time). In fact, the notion of evolutionary change long predates Darwin’s (1859) contributions in On the Origin of Species, which were essentially twofold: (1) providing extensive evidence, from a variety of sources, for the fact that species are related by descent, and (2) developing his theory of natural selection to explain this fact. Although quite successful in establishing the fact of evolution (the subsequent Creationist movement in parts of North America notwithstanding), Darwin’s explanatory mechanism of natural selection received only a lukewarm reception in contemporary scientific circles. (Gregory 2005: 680)

By the beginning of the 20th century, Darwinian natural selection had fallen largely out of favor, having been overshadowed by several other proposed mechanisms including mutationism, whereby species form suddenly by single mutations, with no intermediates; saltationism, in which major chromosomal rearrangements generate new species suddenly; neo-Lamarckism, which supposed that traits are improved directly through use and lost through disuse; and orthogenesis, under which inherent propelling forces drive evolutionary changes, sometimes even to the point of being maladaptive. Mutationism, in particular, gained favor after the rediscovery of Mendel’s laws of inheritance by Hugo de Vries and others, which showed heredity to be “particulate”—with individual traits passed on intact, even if hidden for a generation—rather than “blending,” as Darwin had believed. Particulate inheritance was taken by de Vries and others to imply that discontinuous variation in traits would be much more important than continuous variability expected under gradual Darwinian selection. (Gregory 2005: 680-681)

The problem faced by proponents of Darwinism was to reconcile the concept of discrete hereditary units with the graded variation required by natural selection. This issue was settled in the 1930s and 1940s with the advent of population genetics, which provided mathematical models to describe the behavior of genic variants (“alleles”) within populations, and showed that a particulate mechanism of inheritance did not prohibit the action of natural selection. This new theoretical framework is generally known as “neo-Darwinism” or the “Modern Synthesis,” because it sought to synthesize (i.e., combine) Mendelian genetics and Darwinian natural selection. (Gregory 2005: 681)

The first stage in the development of population genetics was to determine how alleles segregate within populations under “equilibrium” conditions. The issue was addressed by G. H. Hardy and Wilhelm Weinberg, resulting in what is now known as the “Hardy-Weinberg equilibrium,” a null hypothesis about the behavior of alleles in populations that are not subject to natural selection, genetic drift (random changes in allele frequencies, for example by the accidental loss of a subset of the population, passage through a population bottleneck, or the founding of a new population by an unrepresentative sample of the parental population), gene flow (an influx of alleles from other populations by migration), or mutation (the generation of new alleles). When populations are not in Hardy-Weinberg equilibrium, one can begin to investigate which of these processes is (or are) responsible. More complex population genetics models were developed for dealing with this issue, most notably by Ronald Fisher, Sewall Wright, and J. B. S. Haldane. Others, like Theodosius Dobzhansky and G. L. Stebbins, established that natural populations contain sufficient genetic variation for these new models to work. (Gregory 2005: 681)
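The Hardy-Weinberg null hypothesis can be stated concretely in a few lines. The sketch below is illustrative only (the function names and counts are invented): for a two-allele locus with allele frequencies p and q = 1 − p, the expected genotype proportions are p² (AA), 2pq (Aa), and q² (aa), and observed genotype counts can be compared against those expectations before invoking selection, drift, gene flow, or mutation.

```python
# Minimal sketch of the Hardy-Weinberg null hypothesis (illustrative names
# and numbers): absent selection, drift, gene flow, and mutation, genotype
# frequencies settle at p^2 (AA), 2pq (Aa), q^2 (aa).

def allele_freq(n_AA, n_Aa, n_aa):
    """Frequency of allele A estimated from observed genotype counts."""
    total_alleles = 2 * (n_AA + n_Aa + n_aa)   # each individual carries 2 alleles
    return (2 * n_AA + n_Aa) / total_alleles

def hwe_expected(p, n):
    """Expected genotype counts for n individuals under HWE."""
    q = 1 - p
    return (p * p * n, 2 * p * q * n, q * q * n)

# A population already at HWE proportions is its own expectation:
# 360 AA, 480 Aa, 160 aa among 1000 individuals gives p = 0.6.
p = allele_freq(360, 480, 160)
exp_AA, exp_Aa, exp_aa = hwe_expected(p, 1000)
print(p, exp_AA, exp_Aa, exp_aa)
```

A departure of the observed counts from these expected counts (conventionally assessed with a chi-square test) is what licenses the investigation, mentioned above, into which of the excluded processes is responsible.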

According to Provine (1988), the Modern Synthesis was really more of a “constriction” than an actual “synthesis,” in which a major goal was the elimination of the non-Darwinian alternatives listed previously and the associated restoration of selection to prominence in evolutionary theory. In at least one important sense, the term “synthesis” is clearly a misnomer, given that there remained a highly acrimonious divide between Fisher, who favored models based on large populations with a dominant role for selection, and Wright, whose “adaptive landscape” model dealt primarily with small populations and emphasized genetic drift. Despite these divisions, neo-Darwinians did succeed in narrowing the range of explanatory approaches to those involving mutation, selection, drift, and gene flow. (Gregory 2005: 681)

Genomes, Fossils, and Theoretical Inertia

As far as genetics is concerned, evolutionary theory has always been far ahead of its time. Darwin’s theory of natural selection was developed in the absence of concrete knowledge of hereditary mechanisms, and the mathematical framework of neo-Darwinism was assembled before the structure of DNA had been established (and even before DNA was identified as the molecule of inheritance). As a consequence, numerous surprises, puzzles, and conflicts have emerged from new discoveries in genetics and genomics. Consider, for example, the recent findings of deep genetic homology undergirding “analogous” features of unrelated organisms, the role of clustered master control genes in regulating development, the remarkably low gene number in humans, the collapse of the “one gene-one protein” model, the extraordinary abundance of transposable elements in the genomes of humans and other species, and the increasing evidence for the role of large-scale genome duplications in evolution. Also recognized for decades (and still the subject of healthy debate) are the importance of smaller-scale gene duplications, the role of recurrent hybridization and polyploidy, the preponderance of neutral evolution at the molecular level, and the initially quite alarming disconnect between genome size and organismal complexity. Advances in genetics and genomics have also provided revolutionary insights into the relationships among organisms, from the smallest scales (e.g., human-chimpanzee genetic similarity) to the largest (e.g., deep divergences between “prokaryote” groups). None of these was (or indeed, could have been) predicted or expected by the accepted formulation of evolutionary theory that preceded it.
The historical record in evolutionary biology is that theories are developed under assumptions about the existence—or perhaps more commonly, the absence—of certain genetic mechanisms, and must later be revised as new knowledge comes to light regarding genomic structure, organization, and function. This mode of progress is not necessarily problematic, except when theoretical inertia forestalls the acceptance of the new information and its implications. (Gregory 2005: 682)

Genomics is not the only field to have faced theoretical inertia. For decades, prominent paleontologists have argued that their observations of the fossil record fail to fit the expectations of strict Darwinian gradualism. Darwin’s view of speciation, sometimes labeled as “phyletic gradualism,” was based on the slow, gradual (but not necessarily constant) evolution of one species or large segments thereof into another through a series of imperceptible changes, often without any splitting of lineages (i.e., by “anagenesis”). By contrast, the theory of “punctuated equilibria” (“punk eek” to aficionados, “evolution by jerks” to some critics) proposes that most species experience pronounced morphological stasis for most of their time, with change occurring only in geologically rapid bursts associated with speciation events (Eldredge and Gould, 1972; Gould and Eldredge, 1977, 1993; Gould, 1992, 2002). Moreover, speciation in this second case involves the branching off of new species (“cladogenesis”) via small, peripherally isolated populations rather than the gradual transformation of the parental stock itself. (Gregory 2005: 682-683)

Based on differences such as these, many of those who study evolutionary patterns in deep time have developed alternative theoretical approaches to account for the large-scale features of evolution. This, too, has generally proceeded with a minimal consideration of genomic information, and as such there is a need for increased communication between these two fields. In fact, despite their residence at opposite ends of the spectrum in evolutionary science, there is great potential for integration between genomics and paleontology because ultimately both are concerned with variation among species and higher taxa. (Gregory 2005: 683-684)


Microevolution, Macroevolution, And Extrapolationism

The extent to which processes observable within populations and tractable in mathematical models can be extrapolated to explain patterns of diversification occurring in deep time remains one of the most contentious issues in modern evolutionary biology. This is a debate with a lengthy pedigree, extending back more than 75 years, and therefore long predating any of the issues of genome evolution … Nevertheless, genomes reside at an important nexus in this debate by containing the genes central to population-level discussions, but also having their own complex large-scale evolutionary histories. (Gregory 2005: 684)

Writing as an orthogeneticist in 1927, prior to the Modern Synthesis when Darwinian natural selection was largely eclipsed as a mechanism of evolutionary change, Iurii Filipchenko made the following argument:

Modern genetics doubtless represents the field of the evolution of Jordanian and Linnaean biotypes (microevolution), contrasted with the evolution of higher systematic groups (macroevolution), which has long been of central interest. This serves to underline the above-cited consideration of the absence of any intrinsic connection between genetics and the doctrine of evolution, which deals particularly with macroevolution.

[Translation as in Hendry and Kinnison, 2001].

In modern parlance, microevolution represents the small-scale changes in allele frequencies that occur within populations (as studied by population geneticists and often observable over the span of a human lifetime), whereas macroevolution involves the generation of broad patterns above the species level over the course of Earth history (as studied in the fossil record by paleontologists, and with regard to extant taxa by systematists). … Dobzhansky (1937, p. 12) noted that because macroevolution could not be observed directly, “we are compelled at the present level of knowledge reluctantly to put a sign of equality between the mechanisms of macro- and micro-evolution.” However, although Dobzhansky was tentative in his assertion of micro-macro equivalence, the doctrine of “extrapolationism” was embraced as a fact by many other architects and early adherents of the Modern Synthesis. Thus as Mayr (1963, p. 586) later explained, “the proponents of the synthetic theory maintain that all evolution is due to the accumulation of small genetic changes, guided by natural selection, and the events that take place within populations and species” (emphasis added). There was an obvious reason for this strict adherence to extrapolationism at the time, namely the belief that if micro- and macroevolution “proved to be basically different, the innumerable studies of micro-evolution would become relatively unimportant and would have minor value in the study of evolution as a whole” (Simpson, 1944, p. 97). As such, only proponents of non-Darwinian mechanisms, most notably the much-maligned saltationist Richard Goldschmidt (1940, p. 8), argued at the time that “the facts of microevolution do not suffice for an understanding of macroevolution.” (Gregory 2005: 684-685)

Obviously, the “present level of knowledge” is not the same today as it was in Dobzhansky’s time. A great deal of new information has since been gleaned—and continues to accrue—regarding the mechanisms of heredity and the major patterns of evolutionary diversification. Considering Mayr’s statement, it is now clear that not all relevant genetic changes are small (cf., genome duplications), nor is all change guided by natural selection (cf., neutral molecular evolution), nor do all relevant processes operate within populations and species (cf., hybridization). In one of the more notorious exchanges on the subject, Gould (1980) went so far as to declare this simple version of the neo-Darwinian synthesis as “effectively dead, despite its persistence as textbook orthodoxy.”1 To be more specific, this applies not to the Modern Synthesis at large, but to strict extrapolationism. Using a far less aggressive tone, another prominent macroevolutionist put it as follows: “That the advances in molecular biology contribute to the need for a formal expansion of evolutionary theory is an exigency we can hardly hold against the early architects of the synthesis” (Eldredge, 1985, p. 86). It is interesting to imagine the view that Fisher, Dobzhansky, Haldane, Wright, or even Darwin might have taken had they been privy to modern insights. (Gregory 2005: 685-686)

Simpson’s (1944) account of the threat to the relevance of microevolution is also in need of revision. It is simply not the case that a mechanistic disconnect between micro- and macroevolution would render microevolutionary study obsolete. Far from it, because any genomic changes, regardless of the magnitude of their effects, must still pass through the filters of selection and drift to reach a sufficiently high frequency if they are to be of evolutionary significance. So, even if understanding this filtration process does not, by itself, provide a complete understanding of macroevolution, it would still be a crucial component of an expanded evolutionary theory. Consider, for example, the topic of major developmental regulation genes, which involves at least four different questions, all mutually compatible, studied by four different disciplines: (1) Evolutionary developmental biology (“evo-devo”)—How do such genes act to produce observed phenotypes? (2) Comparative genomics—What is the structure of these genes, and what role did processes like gene (or genome) duplication play in their evolution? (3) Population genetics—How would such genes have been filtered by selection, drift, and gene flow to reach their current rate of fixation? (4) Paleontology—What is the relevance of these genes for understanding the emergence of new body plans and thus new macroevolutionary trajectories (e.g., Carroll, 2000; Erwin, 2000; Jablonski, 2000; Shubin and Marshall, 2000)? (Gregory 2005: 686)

Though the protagonists have often been divided along these professional lines, the micro-macro debate is not between paleontologists and population geneticists per se. Rather, it is between strict extrapolationists who argue that all evolution can be understood by studying population-level processes and those who argue that there are additional factors to consider. Members of this latter camp may come from all quarters of evolutionary biology, from genome biologists to paleontologists, although the latter have been by far the most vocal proponents of an expanded outlook. For strict extrapolationists, there may be little value in pursuing this debate. But for those open to a more pluralistic approach who seek a resolution to the issue, there is much value in understanding the arguments presented in favor of a distinct macroevolutionary theory that coexists with, but is not subsumed by, established microevolutionary principles. (Gregory 2005: 686)

Critiques of Strict Extrapolationism

1 Of course, far from simply mourning their loss, microevolutionists responded to this charge with some vigor (e.g., Stebbins and Ayala, 1981; Charlesworth et al., 1982; Hecht and Hoffman, 1986), perhaps overlooking the fact that only the strict extrapolationist definition given by Mayr (1963), and not the synthesis in its entirety, was proclaimed deceased (see Gould, 2002). Although some may argue that Mayr’s (1963) definition was already outdated by this time, and that Gould’s (1980) criticism was therefore misplaced, it bears noting that such a definition had been in common use throughout the period in question and well beyond (e.g., Mayr, 1980; Ruse, 1982; Hecht and Hoffman, 1986). As for Gould’s (1980) claim of “textbook orthodoxy,” one may consider Freeman and Herron’s (1998) recent textbook, which considers the Modern Synthesis to be composed of two main postulates: “[1] Gradual evolution results from small genetic changes that are acted upon by natural selection. [2] The origin of species and higher taxa, or macroevolution, can be explained in terms of natural selection acting on individuals, or microevolution.” Futuyma’s (1998) more advanced text provides a much more detailed description of the Modern Synthesis but the fundamental extrapolationist point remains.

It From Bit

Henry Louis Mencken [1917] once wrote that “[t]here is always an easy solution to every human problem — neat, plausible and wrong.” And neoclassical economics has indeed been wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real world target systems. Assuming, for example, perfect knowledge, instant market clearing and approximating aggregate behaviour with unrealistically heroic assumptions of representative actors, just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability and problems of aggregation and coordination between different individuals and groups.

The punch line of this is that most of the problems that neoclassical economics is wrestling with issue from its attempts at formalistic modeling per se of social phenomena. Reducing microeconomics to refinements of hyper-rational Bayesian deductivist models is not a viable way forward. It will only sentence to irrelevance the most interesting real world economic problems. And as someone has so wisely remarked, murder is unfortunately the only way to reduce biology to chemistry — reducing macroeconomics to Walrasian general equilibrium microeconomics basically means committing the same crime.

Lars Pålsson Syll. On the use and misuse of theories and models in economics.

~ ~ ~

Emergence, some say, is merely a philosophical concept, unfit for scientific consumption. Or, others predict, when subjected to empirical testing it will turn out to be nothing more than shorthand for a whole batch of discrete phenomena involving novelty, which is, if you will, nothing novel. Perhaps science can study emergences, the critics continue, but not emergence as such. (Clayton 2004: 577)*

It’s too soon to tell. But certainly there is a place for those, such as the scientist to whom this volume is dedicated, who attempt to look ahead, trying to gauge what are Nature’s broadest patterns and hence where present scientific resources can best be invested. John Archibald Wheeler formulated an important motif of emergence in 1989:

Directly opposite the concept of universe as machine built on law is the vision of a world self-synthesized. On this view, the notes struck out on a piano by the observer-participants of all places and all times, bits though they are, in and by themselves constitute the great wide world of space and time and things.

(Wheeler 1999: 314)

Wheeler summarized his idea — the observer-participant who is both the result of an evolutionary process and, in some sense, the cause of his own emergence — in two ways: in the famous sketch given in Fig. 26.1 and in the maxim “It from bit.” In the attempt to summarize this chapter’s thesis with an equal economy of words I offer the corresponding maxim, “Us from it.” The maxim expresses the bold question that gives rise to the emergentist research program: Does nature, in its matter and its laws, manifest an inbuilt tendency to bring about increasing complexity? Is there an apparently inevitable process of complexification that runs from the periodic table of the elements through the explosive variations of evolutionary history to the unpredictable progress of human cultural history, and perhaps even beyond? (Clayton 2004: 577)

The emergence hypothesis requires that we proceed through at least four stages. The first stage involves rather straightforward physics — say, the emergence of classical phenomena from the quantum world (Zurek 1991, 2002) or the emergence of chemical properties through molecular structure (Earley 1981). In a second stage we move from the obvious cases of emergence in evolutionary history toward what may be the biology of the future: a new, law-based “general biology” (Kauffman 2000) that will uncover the laws of emergence underlying natural history. Stage three of the research program involves the study of “products of the brain” (perception, cognition, awareness), which the program attempts to understand not as unfathomable mysteries but as emergent phenomena that arise as natural products of the complex interactions of brain and central nervous system. Some add a fourth stage to the program, one that is more metaphysical in nature: the suggestion that the ultimate results, or the original causes, of natural emergence transcend or lie beyond Nature as a whole. Those who view stage-four theories with suspicion should note that the present chapter does not appeal to or rely on metaphysical speculations of this sort in making its case. (Clayton 2004: 578-579)

Defining terms and assumptions

The basic concept of emergence is not complicated, even if the empirical details of emergent processes are. We turn to Wheeler, again, for an opening formulation:

When you put enough elementary units together, you get something that is more than the sum of these units. A substance made of a great number of molecules, for instance, has properties such as pressure and temperature that no one molecule possesses. It may be a solid or a liquid or a gas, although no single molecule is solid or liquid or gas. (Wheeler 1998: 341)

Or, in the words of biochemist Arthur Peacocke, emergence takes place when “new forms of matter, and a hierarchy of organization of these forms … appear in the course of time” and “these new forms have new properties, behaviors, and networks of relations” that must be used to describe them (Peacocke 1993: 62).

Clearly, no one-size-fits-all theory of emergence will be adequate to the wide variety of emergent phenomena in the world. Consider the complex empirical differences that are reflected in these diverse senses of emergence:

• temporal or spatial emergence
• emergence in the progression from simple to complex
• emergence in increasingly complex levels of information processing
• the emergence of new properties (e.g., physical, biological, psychological)
• the emergence of new causal entities (atoms, molecules, cells, central nervous system)
• the emergence of new organizing principles or degrees of inner organization (feedback loops, autocatalysis, “autopoiesis”)
• emergence in the development of “subjectivity” (if one can draw a ladder from perception, through awareness, self-awareness, and self-consciousness, to rational intuition).

Despite the diversity, certain parameters do constrain the scientific study of emergence:

  1. Emergence studies will be scientific only if emergence can be explicated in terms that the relevant sciences can study, check, and incorporate into actual theories.
  2. Explanations concerning such phenomena must thus be given in terms of the structures and functions of stuff in the world. As Christopher Southgate writes, “An emergent property is one describing a higher level of organization of matter, where the description is not epistemologically reducible to lower-level concepts” (Southgate et al. 1999: 158).
  3. It also follows that all forms of dualism are disfavored. For example, only those research programs count as emergentist which refuse to accept an absolute break between neurophysiological properties and mental properties. “Substance dualisms,” such as the Cartesian delineation of reality into “matter” and “mind,” are generally avoided. Instead, research programs in emergence tend to combine sustained research into (in this case) the connections between brain and “mind,” on the one hand, with the expectation that emergent mental phenomena will not be fully explainable in terms of underlying causes on the other.
  4. By definition, emergence transcends any single scientific discipline. At a recent international consultation on emergence theory, each scientist was asked to define emergence, and each offered a definition of the term in his or her own specific field of inquiry: physicists made emergence a product of time-invariant natural laws; biologists presented emergence as a consequence of natural history; neuroscientists spoke primarily of “things that emerge from brains”; and engineers construed emergence in terms of new things that we can build or create. Each of these definitions contributes to, but none can be the sole source for, a genuinely comprehensive theory of emergence. (Clayton 2004: 579-580)

Physics to chemistry

(….) Things emerge in the development of complex physical systems that are understood by observation and cannot be derived from first principles, even given a complete knowledge of the antecedent states. One would not know about conductivity, for example, from a study of individual electrons alone; conductivity is a property that emerges only in complex solid state systems with huge numbers of electrons…. Such examples are convincing: physicists are familiar with a myriad of cases in which physical wholes cannot be predicted based on knowledge of their parts. Intuitions differ, though, on the significance of this unpredictability. (Clayton 2004: 580)

(….) [Such examples are] unpredictable even in principle — if the system-as-a-whole is really more than the sum of its parts.

Simulated Evolutionary Systems

Computer simulations study the processes whereby very simple rules give rise to complex emergent properties. John Conway’s program “Life,” which simulates cellular automata, is already widely known…. Yet even in as simple a system as Conway’s “Life,” predicting the movement of larger structures in terms of the simple parts alone turns out to be extremely complex. Thus in the messy real world of biology, behaviors of complex systems quickly become noncomputable in practice…. As a result — and, it now appears, necessarily — scientists rely on explanations given in terms of the emerging structures and their causal powers. Dreams of a final reduction “downwards” are fundamentally impossible. Recycled lower-level descriptions cannot do justice to the actual emergent complexity of the natural world as it has evolved. (Clayton 2004: 582)
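A few lines of code make Conway’s point concrete: the rules below mention only single cells and their neighbors, yet the natural description of what happens is at the level of the five-cell “glider,” which reappears shifted diagonally every four generations. This is a minimal sketch (coordinates and pattern are the standard textbook glider, not anything specific to Clayton’s discussion):

```python
# Minimal sketch of Conway's "Life" on an unbounded grid, represented as a
# set of live-cell coordinates. The rules are purely local; the glider's
# diagonal drift is a property of the configuration as a whole.
from itertools import product

def step(live):
    """One generation: a live cell survives with 2-3 live neighbors;
    an empty cell is born with exactly 3."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the same shape reappears, translated by (1, 1).
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True
```

No individual cell “moves” at all; the drifting glider exists only at the level of description of the whole pattern, which is exactly the sense in which prediction in terms of the simple parts becomes unwieldy.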

Ant colony behavior

Neural network models of emergent phenomena can model … the emergence of ant colony behavior from simple behavioral “rules” that are genetically programmed into individual ants. (….) Even if the behavior of an ant colony were nothing more than an aggregate of the behaviors of the individual ants, whose behavior follows very simple rules, the result would be remarkable, for the behavior of the ant colony as a whole is extremely complex and highly adaptive to complex changes in its ecosystem. The complex adaptive potentials of the ant colony as a whole are emergent features of the aggregated system. The scientific task is to correctly describe and comprehend such emergent phenomena where the whole is more than the sum of the parts. (Clayton 2004: 586-587)
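The colony-level “decision-making” described above can be illustrated with a deterministic mean-field sketch of the classic double-bridge setup: ants choose between a short and a long path with probability increasing in the pheromone already deposited there, and shorter round trips deposit pheromone faster. The evaporation rate, choice exponent, and path lengths below are illustrative assumptions, not measured values:

```python
# Mean-field sketch of pheromone trail formation. Each step, the expected
# fraction of ants on each branch follows the standard (k + tau)^2 choice
# rule; deposition is inversely proportional to path length; pheromone
# evaporates at rate rho. Positive feedback breaks the initial symmetry.
def run(steps=200, rho=0.1, k=1.0, len_short=1.0, len_long=2.0):
    tau_s = tau_l = 1.0                      # initial pheromone, symmetric
    for _ in range(steps):
        w_s, w_l = (k + tau_s) ** 2, (k + tau_l) ** 2
        p_short = w_s / (w_s + w_l)          # fraction choosing short branch
        tau_s = (1 - rho) * tau_s + p_short / len_short
        tau_l = (1 - rho) * tau_l + (1 - p_short) / len_long
    return p_short

print(run() > 0.9)  # True: the colony converges on the shorter branch
```

No individual ant compares the two paths; the “choice” of the shorter branch is a property of the aggregated feedback loop, which is the sense in which the colony’s adaptive behavior is emergent.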


So far we have considered models of how nature could build highly complex and adaptive behaviors from relatively simple processing rules. Now we must consider actual cases in which significant order emerges out of (relative) chaos. The big question is how nature obtains order “out of nothing,” that is, when the order is not present in the initial conditions but is produced in the course of a system’s evolution. What are some of the mechanisms that nature in fact uses? We consider four examples. (Clayton 2004: 587)

Fluid convection

The Bénard instability is often cited as an example of a system far from thermodynamic equilibrium, where a stationary state becomes unstable and then manifests spontaneous organization (Peacocke 1994: 153). In the Bénard case, the lower surface of a horizontal layer of liquid is heated. This produces a heat flux from the bottom to the top of the liquid. When the temperature gradient reaches a certain threshold value, conduction no longer suffices to convey the heat upward. At that point convection cells form at right angles to the vertical heat flow. The liquid spontaneously organizes itself into these hexagonal structures or cells. (Clayton 2004: 587-588)

Differential equations describing the heat flow exhibit a bifurcation of the solutions. This bifurcation represents the spontaneous self-organization of large numbers of molecules, formerly in random motion, into convection cells. This represents a particularly clear case of the spontaneous appearance of order in a system. According to the emergence hypothesis, many cases of emergent order in biology are analogous. (Clayton 2004: 588)

Autocatalysis in biochemical metabolism

Autocatalytic processes play a role in some of the most fundamental examples of emergence in the biosphere. These are relatively simple chemical processes with catalytic steps, yet they well express the thermodynamics of the far-from-equilibrium chemical processes that lie at the base of biology. (….) Such loops play an important role in metabolic functions. (Clayton 2004: 588)
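The feedback at work in such loops is visible in the simplest possible autocatalytic step, A + X → 2X, in which the product X catalyzes its own formation from substrate A. A forward-Euler sketch (the rate constant and step size are illustrative, not fitted to any real metabolic pathway):

```python
# Euler integration of A + X -> 2X. The reaction rate k*A*X depends on the
# autocatalyst itself, producing the sigmoidal "takeoff" characteristic of
# far-from-equilibrium feedback loops; mass (A + X) is conserved throughout.
k, dt = 2.0, 0.001
A, X = 1.0, 0.01            # substrate and autocatalyst concentrations
total = A + X
for _ in range(10_000):     # integrate to t = 10
    rate = k * A * X
    A -= rate * dt
    X += rate * dt

print(abs(A + X - total) < 1e-9)  # True: mass conserved
print(X > 0.999 * total)          # True: X has consumed nearly all of A
```

The trace of X against time is logistic: growth is negligible while X is rare, explosive once the loop closes, and saturating as the substrate runs out.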

Belousov-Zhabotinsky reactions

The role of emergence becomes clearer as one considers more complex examples. Consider the famous Belousov-Zhabotinsky reaction (Prigogine 1984: 152). This reaction consists of the oxidation of an organic acid (malonic acid) by potassium bromate in the presence of a catalyst such as cerium, manganese, or ferroin. From the four inputs into the chemical reactor more than 30 products and intermediaries are produced. The Belousov-Zhabotinsky reaction provides an example of a biochemical process where a high level of disorder settles into a patterned state. (Clayton 2004: 589)
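The full BZ mechanism involves dozens of species, but the qualitative behavior — disorder settling into a sustained temporal pattern — is conventionally abstracted by the two-variable “Brusselator” model developed in Prigogine’s group (this is a standard abstraction, not the BZ chemistry itself). For B > 1 + A², the steady state is unstable and the concentrations fall into a limit cycle; the parameter values and step size below are illustrative:

```python
# Euler integration of the Brusselator:
#   dx/dt = A + x^2 y - (B + 1) x
#   dy/dt = B x - x^2 y
# With A = 1, B = 3 (so B > 1 + A^2), the fixed point is unstable and the
# system settles into persistent oscillation regardless of initial state.
A, B, dt = 1.0, 3.0, 0.001
x, y = 1.0, 1.0                     # dimensionless concentrations
xs = []
for n in range(40_000):             # integrate to t = 40
    dx = A + x * x * y - (B + 1.0) * x
    dy = B * x - x * x * y
    x, y = x + dx * dt, y + dy * dt
    if n >= 20_000:                 # discard the transient, keep t in [20, 40]
        xs.append(x)

print(max(xs) - min(xs) > 1.0)      # True: large sustained oscillation remains
```

The patterned state is a property of the reaction network as a whole; no single rate law in the system oscillates on its own.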

(….) Put into philosophical terms, the data suggest that emergence is not merely epistemological but can also be ontological in nature. That is, it’s not just that we can’t predict emergent behaviors in these systems from a complete knowledge of the structures and energies of the parts. Instead, studying the systems suggests that structural features of the system — which are emergent features of the system as such and not properties pertaining to any of its parts — determine the overall state of the system, and hence as a result the behavior of individual particles within the system. (Clayton 2004: 589-590)

The role of emergent features of systems is increasingly evident as one moves from the very simple systems so far considered to the sorts of systems one actually encounters in the biosphere. (….) (Clayton 2004: 589-590)

The biochemistry of cell aggregation and differentiation

We move finally to processes where a random behavior or fluctuation gives rise to organized behavior between cells based on self-organization mechanisms. Consider the process of cell aggregation and differentiation in cellular slime molds (specifically, in Dictyostelium discoideum). The slime mold cycle begins when the environment becomes poor in nutrients and a population of isolated cells joins into a single mass on the order of 10⁴ cells (Prigogine 1984: 156). The aggregate migrates until it finds a higher nutrient source. Differentiation then occurs: a stalk or “foot” forms out of about one-third of the cells and is soon covered with spores. The spores detach and spread, growing when they encounter suitable nutrients and eventually forming a new colony of amoebas. (Clayton 2004: 589-591) [See Levinton 2001: 166]

Note that this aggregation process is randomly initiated. Autocatalysis begins in a random cell within the colony, which then becomes the attractor center. It begins to produce cyclic adenosine monophosphate (cAMP). As cAMP is released in greater quantities into the extracellular medium, it catalyzes the same reaction in the other cells, amplifying the fluctuation and total output. Cells then move up the gradient to the source cell, and other cells in turn follow their cAMP trail toward the attractor center. (Clayton 2004: 589-591)
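A toy simulation conveys the logic of this randomly seeded aggregation: one arbitrarily chosen cell becomes the attractor center, and every other cell simply climbs the local cAMP gradient. The grid size, cell count, and 1/(1+d) concentration profile are illustrative assumptions, not measured values:

```python
# Toy chemotaxis sketch for Dictyostelium aggregation: a randomly chosen
# source emits cAMP whose concentration falls off with distance; each cell
# follows the purely local rule "move to the highest-concentration neighbor."
import random
random.seed(0)

source = (random.randint(0, 20), random.randint(0, 20))   # random initiator
cells = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(30)]

def camp(p):
    """cAMP concentration, decaying with Manhattan distance from the source."""
    return 1.0 / (1.0 + abs(p[0] - source[0]) + abs(p[1] - source[1]))

def chemotaxis_step(p):
    """Move to the neighboring site (or stay put) with highest concentration."""
    moves = [(p[0] + dx, p[1] + dy)
             for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))]
    return max(moves, key=camp)

for _ in range(50):
    cells = [chemotaxis_step(c) for c in cells]

print(all(c == source for c in cells))  # True: the colony has aggregated
```

Each cell knows nothing about the colony; the single aggregate arises from an amplified local fluctuation, which is the pattern the text describes.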


Ilya Prigogine did not follow the notion of “order out of chaos” up through the entire ladder of biological evolution. Stuart Kauffman (1995, 2000) and others (Gell-Mann 1994; Goodwin 2001; see also Cowan et al. 1994 and other works in the same series) have however recently traced the role of the same principles in living systems. Biological processes in general are the result of systems that create and maintain order (stasis) through massive energy input from their environment. In principle these types of processes could be the object of what Kauffman envisions as “a new general biology,” based on sets of still-to-be-determined laws of emergent ordering or self-complexification. Like the biosphere itself, these laws (if they indeed exist) are emergent: they depend on the underlying physical and chemical regularities but are not reducible to them. [Note, there is no place for mind as a causal source.] Kauffman (2000: 35) writes: (Clayton 2004: 592)

I wish to say that life is an expected, emergent property of complex chemical reaction networks. Under rather general conditions, as the diversity of molecular species in a reaction system increases, a phase transition is crossed beyond which the formation of collectively autocatalytic sets of molecules suddenly becomes almost inevitable. (Clayton 2004: 593)

Until a science has been developed that formulates and tests physics-like laws at the level of biology [evo-devo is the closest we have so far come], the “new general biology” remains an as-yet-unverified, though intriguing, hypothesis. Nevertheless, recent biology, driven by the genetic revolution on the one side and by the growth of the environmental sciences on the other, has made explosive advances in understanding the role of self-organizing complexity in the biosphere. Four factors in particular play a central role in biological emergence. (Clayton 2004: 593)

The role of scaling

As one moves up the ladder of complexity, macrostructures and macromechanisms emerge. In the formation of new structures, scale matters — or, better put, changes in scale matter. Nature continually evolves new structures and mechanisms as life forms move up the scale from molecules (c. 1 Ångstrom) to neurons (c. 100 micrometers) to the human central nervous system (c. 1 meter). As new structures are developed, new whole-part relations emerge. (Clayton 2004: 593)

John Holland argues that different sciences in the hierarchy of emergent complexity occur at jumps of roughly three orders of magnitude in scale. By the point at which systems have become too complex for predictions to be calculated, one is forced to “move the description ‘up a level’” (Holland 1998: 201). The “microlaws” still constrain outcomes, of course, but additional basic descriptive units must also be added. This pattern of introducing new explanatory levels iterates in a periodic fashion as one moves up the ladder of increasing complexity. To recognize the pattern is to make emergence an explicit feature of biological research. As of now, however, science possesses only a preliminary understanding of the principles underlying this periodicity. (Clayton 2004: 593)

The role of feedback loops

The role of feedback loops, examined above for biochemical processes, becomes increasingly important from the cellular level upwards. (….) (Clayton 2004: 593)

The role of local-global interactions

In complex dynamical systems the interlocked feedback loops can produce an emergent global structure. (….) In these cases, “the global property — [the] emergent behavior — feeds back to influence the behavior of the individuals … that produced it” (Lewin 1999). The global structure may have properties the local particles do not have. (Clayton 2004: 594)

(….) In contrast …, Kauffman insists that an ecosystem is in one sense “merely” a complex web of interactions. Yet consider a typical ecosystem of organisms of the sort that Kauffman (2000: 191) analyzes … Depending on one’s research interests, one can focus attention either on holistic features of such systems or on the interactions of the components within them. Thus Langton’s term “global” draws attention to system-level features and properties, whereas Kauffman’s “merely” emphasizes that no mysterious outside forces need to be introduced (such as, e.g., Rupert Sheldrake’s (1995) “morphic resonance”). Since the two dimensions are complementary, neither alone is scientifically adequate; the explosive complexity manifested in the evolutionary process involves the interplay of both systemic features and component interactions. (Clayton 2004: 595)

The role of nested hierarchies

A final layer of complexity is added in cases where the local-global structure forms a nested hierarchy. Such hierarchies are often represented using nested circles. Nesting is one of the basic forms of combinatorial explosion. Such forms appear extensively in natural biological systems (Wolfram 2002: 357ff.; see his index for dozens of further examples of nesting). Organisms achieve greater structural complexity, and hence increased chances of survival, as they incorporate discrete subsystems. Similarly, ecosystems complex enough to contain a number of discrete subsystems evidence greater plasticity in responding to destabilizing factors. (Clayton 2004: 595-596)

“Strong” versus “weak” emergence

The resulting interactions between parts and wholes mirror yet exceed the features of emergence that we observed in chemical processes. To the extent that the evolution of organisms and ecosystems evidences a “combinatorial explosion” (Morowitz 2002) based on factors such as the four just summarized, the hope of explaining entire living systems in terms of simple laws appears quixotic. Instead, natural systems made of interacting complex systems form a multileveled network of interdependency (cf. Gregersen 2003), and each level contributes distinct elements to the overall explanation. (Clayton 2004: 596-597)

Systems biology, the Siamese twin of genetics, has established many of the features of life’s “complexity pyramid” (Oltvai and Barabási 2002; cf. Barabási 2002). Construing cells as networks of genes and proteins, systems biologists distinguish four distinct levels: (1) the base functional organization (genome, transcriptome, proteome, and metabolome) [see below, Morowitz on the “dogma of molecular biology.”]; (2) the metabolic pathways built up out of these components; (3) larger functional modules responsible for major cell functions; and (4) the large-scale organization that arises from the nesting of the functional modules. Oltvai and Barabási (2002) conclude that “[the] integration of different organizational levels increasingly forces us to view cellular functions as distributed among groups of heterogeneous components that all interact within large networks.” Milo et al. (2002) have recently shown that a common set of “network motifs” occurs in complex networks in fields as diverse as biochemistry, neurobiology, and ecology. As they note, “similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans.” (Clayton 2004: 598)

Such compounding of complexity — the system-level features of networks, the nodes of which are themselves complex systems — is sometimes said to represent only a quantitative increase in complexity, in which nothing “really new” emerges. This view I have elsewhere labeled “weak emergence.” [This would be a form of philosophical materialism qua philosophical reductionism.] It is the view held by (among others) John Holland (1998) and Stephen Wolfram (2002). But, as Leon Kass (1999: 62) notes in the context of evolutionary biology, “it never occurred to Darwin that certain differences of degree — produced naturally, accumulated gradually (even incrementally), and inherited in an unbroken line of descent — might lead to a difference in kind …” Here Kass nicely formulates the principle involved. As long as nature’s process of compounding complex systems leads to irreducibly complex systems with structures and causal mechanisms of their own, the natural world evidences not just weak emergence but also a more substantive change that we might label strong emergence. Cases of strong emergence are cases where the “downward causation” emphasized by George Ellis [see p. 607, True complexity and its associated ontology.] … is most in evidence. By contrast, in the relatively rare cases where rules relate the emergent system to its subvening system (in simulated systems, via algorithms; in natural systems, via “bridge laws”) the weak emergence interpretation suffices. In the majority of cases, however, such rules are not available; in these cases, especially where we have reason to think that such lower-level rules are impossible in principle, the strong emergence interpretation is suggested. (Clayton 2004: 597-598)

Neuroscience, qualia, and consciousness

Consciousness, many feel, is the most important instance of a clearly strong form of emergence. Here if anywhere, it seems, nature has produced something irreducible — no matter how strong the biological dependence of mental qualia (i.e., subjective experiences) on antecedent states of the central nervous system may be. To know everything there is to know about the progression of brain states is not to know what it’s like to be you, to experience your joy, your pain, or your insights. No human researcher can know, as Thomas Nagel (1980) so famously argued, “what it’s like to be a bat.” (Clayton 2004: 598)

Unfortunately consciousness, however intimately familiar we may be with it on a personal level, remains an almost total mystery from a scientific perspective. Indeed, as Jerry Fodor (1992) noted, “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness.” (Clayton 2004: 598)

Given our lack of comprehension of the transition from brain states to consciousness, there is virtually no way to talk about the “C” word without sliding into the domain of philosophy. The slide begins if the emergence of consciousness is qualitatively different from other emergences; in fact, it begins even if consciousness is different from the neural correlates of consciousness. Much suggests that both differences obtain. How far can neuroscience go, even in principle, in explaining consciousness? (Clayton 2004: 598-599)

Science’s most powerful ally, I suggest, is emergence. As we’ve seen, emergence allows one to acknowledge the undeniable differences between mental properties and physical properties, while still insisting on the dependence of the entire mental life on the brain states that produce it. Consciousness, the thing to be explained, is different because it represents a new level of emergence; but brain states — understood both globally (as the state of the brain as a whole) and in terms of their microcomponents — are consciousness’s sine qua non. The emergentist framework allows science to identify the strongest possible analogies with complex systems elsewhere in the biosphere. So, for example, other complex adaptive systems also “learn,” as long as one defines learning as “a combination of exploration of the environment and improvement of performance through adaptive change” (Schuster 1994). Obviously, systems from primitive organisms to primate brains record information from their environment and use it to adjust future responses to that environment. (Clayton 2004: 599)

Even the representation of visual images in the brain, a classically mental phenomenon, can be parsed in this way. Consider Max Velmans’s (2000) schema … Here a cat-in-the-world and the neural representation of the cat are both parts of a natural system; no nonscientific mental “things” like ideas or forms are introduced. In principle, then, representation might be construed as merely a more complicated version of the feedback loop between a plant and its environment … Such is the “natural account of phenomenal consciousness” defended by (e.g.) Le Doux (1978). In a physicalist account of mind, no mental causes are introduced. Without emergence, the story of consciousness must be retold such that thoughts and intentions play no causal role. … If one limits the causal interactions to world and brains, mind must appear as a sort of thought-bubble outside the system. Yet it is counter to our empirical experience in the world, to say the least, to leave no causal role to thoughts and intentions. For example, it certainly seems that your intention to read this … is causally related to the physical fact of your presently holding this book [or browsing this web page, etc.,] in your hands. (Clayton 2004: 599-600)

Arguments such as this force one to acknowledge the disanalogies between the emergence of consciousness and previous examples of emergence in complex systems. Consciousness confronts us with a “hard problem” different from those already considered (Chalmers 1995: 201):

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

The distinct features of human cognition, it seems, depend on a quantitative increase in brain complexity vis-à-vis other higher primates. Yet, if Chalmers is right (as I fear he is), this particular quantitative increase gives rise to a qualitative change. Even if the development of conscious awareness occurs gradually over the course of primate evolution, the (present) end of that process confronts the scientist with conscious, symbol-using beings clearly distinct from those who preceded them (Deacon 1997). Understanding consciousness even as an emergent phenomenon in the natural world — that is, naturalistically — requires a theory of “felt qualities,” “subjective intentions,” and “states of experience”: intention-based explanations and, it appears, a new set of sciences, the social or human sciences. By this point emergence has driven us to a level beyond the natural-science-based framework of the present book. New concepts, new testing mechanisms, and perhaps even new standards for knowledge are now required. From the perspective of physics the trail disappears into the clouds; we can follow it no further. (Clayton 2004: 600-601)

The five emergences

In the broader discussion the term “emergence” is used in multiple and conflicting senses, some of which are incompatible with the scientific project. Clarity is required to avoid equivocation between five distinct levels on which the term may be applied: (Clayton 2004: 601)

• Let emergence-1 refer to occurrences of the term within the context of a specific scientific theory. Here it describes features of a specified physical or biological system of which we have some scientific understanding. Scientists who employ these theories claim that the term (in a theory-specific sense) is currently useful for describing features of the natural world. The preceding pages include various examples of theories in which this term occurs. At the level of emergence-1 alone there is no way to establish whether the term is used analogously across theories, or whether it really means something utterly distinct in each theory in which it appears. (Clayton 2004: 601-602)

• Emergence-2 draws attention to features of the world that may eventually become part of a unified scientific theory. Emergence in this sense expresses postulated connections or laws that may in the future become the basis for one or more branches of science. One thinks, for example, of the role of emergence in Stuart Kauffman’s notion of a new “general biology,” or in certain proposed theories of complexity or complexification. (Clayton 2004: 602)

• Emergence-3 is a meta-scientific term that points out a broad pattern across scientific theories. Used in this sense, the term is not drawn from a particular scientific theory; it is an observation about a significant pattern that connects a range of scientific theories. In the preceding pages I have often employed the term in this fashion. My purpose has been to draw attention to common features of the physical systems under discussion, as in (e.g.) the phenomena of autocatalysis, complexity, and self-organization. Each is scientifically understood, and each shares common features that are significant. Emergence draws attention to these features, whether or not the individual theories actually use the same label for the phenomena they describe. (Clayton 2004: 602)

Emergence-3 thus serves a heuristic function. It assists in the recognition of common features between theories. Recognizing such patterns can help to extend existing theories, to formulate insightful new hypotheses, or to launch new interdisciplinary research programmes.[4] (Clayton 2004: 602)

• Emergence-4 expresses a feature in the movement between scientific disciplines, including some of the most controversial transition points. Current scientific work is being done, for example, to understand how chemical structures are formed, to reconstruct the biochemical dynamics underlying the origins of life, and to conceive how complicated neural processes produce cognitive phenomena such as memory, language, rationality, and creativity. Each involves efforts to understand diverse phenomena involving levels of self-organization within the natural world. Emergence-4 attempts to express what might be shared in common by these (and other) transition points. (Clayton 2004: 602)

Here, however, a clear limitation arises. A scientific theory that explains how chemical structures are formed is perhaps unlikely to explain the origins of life. Neither theory will explain how self-organizing neural nets encode memories. Thus emergence-4 stands closer to the philosophy of science than it does to actual scientific theory. Nonetheless, it is the sort of philosophy of science that should be helpful to scientists.[5] (Clayton 2004: 602)

• Emergence-5 is a metaphysical theory. It represents the view that the nature of the natural world is such that it produces continually more complex realities in a process of ongoing creativity. The present chapter does not comment on such metaphysical claims about emergence.[6] (Clayton 2004: 603)


(….) Since emergence is used as an integrative ordering concept across scientific fields …, it remains, at least in part, a meta-scientific term. (Clayton 2004: 603)

Does the idea of distinct levels then conflict with “standard reductionist science”? No, one can believe that there are levels in Nature and corresponding levels of explanation while at the same time working to explain any given set of higher-order phenomena in terms of underlying laws and systems. In fact, isn’t the first task of science to whittle away at every apparent “break” in Nature, to make it smaller, to eliminate it if possible? Thus, for example, to study the visual perceptual system scientifically is to attempt to explain it fully in terms of the neural structures and electrochemical processes that produce it. The degree to which downward explanation is possible will be determined by long-term empirical research. At present we can only wager on one outcome or the other based on the evidence before us. (Clayton 2004: 603)


[2] Gordon (2000) disputes this claim: “One lesson from ants is that to understand a system like theirs, it is not sufficient to take the system apart. The behavior of each unit is not encapsulated inside that unit but comes from its connections with the rest of the system.” I likewise break strongly with the aggregate model of emergence.

[3] Generally this seems to be a question that makes physicists uncomfortable (“Why, that’s impossible, of course!”), whereas biologists tend to recognize in it one of the core mysteries in the evolution of living systems.

[4] For this reason, emergence-3 stands closer to the philosophy of science than do the previous two senses. Yet it is a kind of philosophy of science that stands rather close to actual science and that seeks to be helpful to it. [The goal of all true “philosophy of science” is to seek critical clarification of ideas, concepts, and theoretical formulations; hence to be “helpful” to science and the quest for human knowledge.] By way of analogy one thinks of the work of philosophers of quantum physics such as Jeremy Butterfield or James Cushing, whose work can be and has actually been helpful to bench physicists. One thinks as well of the analogous work of certain philosophers in astrophysics (John Barrow) or in evolutionary biology (David Hull, Michael Ruse).

[5] This as opposed, for example, to the kind of philosophy of science currently popular in English departments and in journals like Critical Inquiry — the kind of philosophy of science that asserts that science is a text that needs to be deconstructed, or that science and literature are equally subjective, or that the worldview of Native Americans should be taught in science classes.

— Clayton, Philip D. Emergence: us from it. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul W. Davies, and Charles L. Harper, Jr., ed.). Cambridge: Cambridge University Press; 2004; pp. 577-606.
