Biological evolution is, as has often been noted, both fact and theory. It is a fact that all extant organisms came to exist in their current forms through a process of descent with modification from ancestral forms. The overwhelming evidence for this empirical claim was recognized relatively soon after Darwin published On the Origin of Species in 1859, and support for it has grown to the point where it is as well established as any historical claim might be. In this sense, biological evolution is no more a theory than it is a “theory” that Napoleon Bonaparte commanded the French army in the late eighteenth century. Of course, the details of how extant and extinct organisms are related to one another, and of what descended from what and when, are still being worked out, and will probably never be known in their entirety. The same is true of the details of Napoleon’s life and military campaigns. However, this lack of complete knowledge certainly does not alter the fundamental nature of the claims made, either by historians or by evolutionary biologists. (Pigliucci et al. 2006: 1)
On the other hand, evolutionary biology is also a rich patchwork of theories seeking to explain the patterns observed in the changes in populations of organisms over time. These theories range in scope from “natural selection,” which is evoked extensively at many different levels, to finer-grained explanations involving particular mechanisms (e.g., reproductive isolation induced by geographic barriers leading to speciation events). (Pigliucci et al. 2006: 1)
(….) There are a number of different ways in which these questions have been addressed, and a number of different accounts of these areas of evolutionary biology. These different accounts, we will maintain, are not always compatible, either with one another or with other accepted practices in evolutionary biology. (Pigliucci et al. 2006: 1)
(….) Because we will be making some potentially controversial claims throughout this volume, it is crucial for the reader to understand two basic ideas underlying most of what we say, as well as exactly what we think are some implications of our views for the general theory of evolutionary quantitative genetics, which we discuss repeatedly in critical fashion. (Pigliucci et al. 2006: 2)
(….) The first central idea we wish to put forth as part of the framework of this book will be readily familiar to biologists, although some of its consequences may not be. The idea can be expressed by the use of a metaphor proposed by Bill Shipley (2000) …. the shadow theater popular in Southeast Asia. In one form, the wayang golek of Bali and other parts of Indonesia, three-dimensional wooden puppets are used to project two-dimensional shadows on a screen, where the action is presented to the spectator. Shipley’s idea is that quantitative biologists find themselves very much in the position of wayang golek’s spectators: we have access to only the “statistical shadows” projected by a set of underlying causal factors. Unlike the wayang golek’s patrons, however, biologists want to peek around the screen and infer the position of the light source as well as the actual three-dimensional shapes of the puppets. This, of course, is the familiar problem of the relationship between causation and correlation, and, as any undergraduate science major soon learns, correlation is not causation (although a popular joke among scientists is that the two are nevertheless often correlated). (Pigliucci et al. 2006: 2)
The loose relationship between causation and correlation has two consequences that are crucial…. On the one hand, there is the problem that, strictly speaking, it makes no sense to attempt to infer mechanisms directly from patterns…. On the other hand, as Shipley elegantly shows in his book, there is an alternative route that gets (most of) the job done, albeit in a more circuitous and painful way. What one can do is to produce a series of alternative hypotheses about the causal pathways underlying a given set of observations; these hypotheses can then be used to “project” the expected statistical shadows, which can be compared with the observed one. If the projected and actual shadows do not match, one can discard the corresponding causal hypothesis and move on to the next one; if the two shadows do match (within statistical margins of error, of course), then one has identified at least one causal explanation compatible with the observations. As any philosopher or scientist worth her salt knows, of course, this cannot be the end of the process, for more than one causal model may be compatible with the observations, which means that one needs additional observations or refinements of the causal models to be able to discard more wrong explanations and continue to narrow the field. A crucial point here is that the causal models to be tested against the observed statistical shadow can be suggested by the observations themselves, especially if coupled with further knowledge about the system under study (such as details of the ecology, developmental biology, genetics, or past evolutionary history of the populations in question). But the statistical shadows cannot be used as direct supporting evidence for any particular causal model. (Pigliucci et al. 2006: 4)
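Shipley’s procedure can be made concrete with a small numerical sketch. The following Python snippet is my own illustration, not from Pigliucci et al. or Shipley; the variables and toy data are invented. It simulates a hidden causal chain and then asks which of two candidate causal hypotheses casts a statistical shadow compatible with the data, using a vanishing partial correlation as the testable prediction.

```python
# Illustrative sketch only (not from the source): simulate data whose true
# causal structure is a chain X -> Y -> Z, then test two candidate causal
# hypotheses against the "statistical shadow" they each project.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)      # X causes Y
z = 0.8 * y + rng.normal(size=n)      # Y causes Z (X affects Z only through Y)

def partial_corr(a, b, control):
    """Correlation of a and b after regressing out a single control variable."""
    resid_a = a - np.polyval(np.polyfit(control, a, 1), control)
    resid_b = b - np.polyval(np.polyfit(control, b, 1), control)
    return stats.pearsonr(resid_a, resid_b)

# Hypothesis 1 (chain X -> Y -> Z) predicts that the partial correlation
# r(X, Z | Y) vanishes; Hypothesis 2 (X directly causes both Y and Z)
# predicts that it does not.
r, p = partial_corr(x, z, y)
print(f"partial r(X,Z | Y) = {r:.3f} (p = {p:.3f})")
# A near-zero partial correlation is *compatible* with Hypothesis 1 and lets
# us discard Hypothesis 2, but, as the passage stresses, it does not prove
# the chain model: other causal structures can cast the same shadow.
```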
The second central idea … has been best articulated by John Dupré (1993), and it deals with the proper way to think about reductionism. The term “reductionism” has a complex history, and it evokes strong feelings in both scientists and philosophers (often, though not always, with scientists hailing reductionism as fundamental to the success of science and some philosophers dismissing it as a hopeless epistemic dream). Dupré introduces a useful distinction that acknowledges the power of reductionism in science while at the same time sharply curtailing its scope. His idea is summarized … as two possible scenarios: In one case, reductionism allows one to explain and predict higher-level phenomena (say, development in living organisms) entirely in terms of lower-level processes (say, genetic switches throughout development). In the most extreme case, one can also infer the details of the lower-level processes from the higher-level patterns produced (something we have just seen is highly unlikely in the case of any complex biological phenomenon because of Shipley’s “statistical shadow” effect). This form of “greedy” reductionism … is bound to fail in most (though not all) cases for two reasons. The first is that the relationships between levels of manifestation of reality (e.g., genetic machinery vs. development, or population genetics vs. evolutionary pathways) are many-to-many (again, as pointed out above in our discussion of the shadow theater). The second is the genuine existence of “emergent properties” (i.e., properties of higher-level phenomena that arise from the nonadditive interaction among lower-level processes). It is, for example, currently impossible to predict the physicochemical properties of water from the simple properties of individual atoms of hydrogen and oxygen, or, for that matter, from the properties of H2O molecules and the smattering of necessary impurities. (Pigliucci et al. 2006: 4-5)
Mechanical metaphors have appealed to many philosophers who sought materialist explanations of life. The definitive work on this subject is T. S. Hall’s Ideas of Life and Matter (1969). Descartes, though a dualist, thought of animal bodies as automata that obeyed mechanical rules. Julien de la Mettrie applied stricter mechanistic principles to humans in L’Homme machine (1748). Clockwork and heat engine models were popular during the Industrial Revolution. Lamarck proposed hydraulic processes as causes of variation. In the late nineteenth century, the embryologists Wilhelm His and Wilhelm Roux theorized about developmental mechanics. As biochemical and then molecular biological information expanded, popular machine models were refuted, but it is not surprising that computers should have filled the gap. Algorithms that systematically provide instructions for a progressive sequence of events seem to be suitable analogues for epigenetic procedures. (Reid 2007: 263)
A common error in applying this analogy is the belief that the genetic code, or at least the total complement of an organism’s DNA, contains the program for its own differential expression. In the computer age it is easy to fall into that metaphysical trap. However, in the computer age we should also know that algorithms are the creations of programmers. As Charles Babbage (1838) and Robert Chambers (1844) tried to tell us, the analogy is more relevant to creationism than evolutionism. At the risk of offending the sophisticates who have indulged me so far, I want to state the problems in the most simple terms. To me, that is a major goal of theoretical biology, rather than the conversion of life to mathematics. (Reid 2007: 263)
— Robert G.B. Reid (2007, 263) Biological Emergences: Evolution by Natural Experiment. The Vienna Series in Theoretical Biology.
If the emergentist-materialist ontology underlying biology (and, as a matter of fact, all the factual sciences) is correct, the bios constitutes a distinct ontic level, the entities in which are characterized by emergent properties. The properties of biotic systems are then not (ontologically) reducible to the properties of their components, although we may be able to partially explain and predict them from the properties of their components… The belief that one has reduced a system by exhibiting [for instance] its components, which is indeed nothing but physical and chemical, is insufficient: physics and chemistry do not account for the structure, in particular the organization, of biosystems and their emergent properties (Mahner and Bunge 1997: 197) (Robert 2004: 132)
— Jason Scott Robert (2004, 132) Embryology, Epigenesis, and Evolution: Taking Development Seriously
The science of biology enters the twenty-first century in turmoil, in a state of conceptual disarray, although at first glance this is far from apparent. When has biology ever been in a more powerful position to study living systems? The sequencing juggernaut has still to reach full steam, and it is constantly spewing forth all manner of powerful new approaches to biological systems, many of which were previously unimaginable: a revolutionized medicine that reaches beyond diagnosis and cure of disease into defining states of the organism in general; revolutionary agricultural technology built on genomic understanding and manipulation of animals and plants; the age-old foundation of biology, taxonomy, made rock solid, greatly extended, and become far more useful in its new genomic setting; a microbial ecology that is finally able to contribute to our understanding of the biosphere; and the list goes on. (Woese 2005: 99)
All this is an expression of the power inherent in the methodology of molecular biology, especially the sequencing of genomes. Methodology is one thing, however, and understanding and direction another. The fact is that the understanding of biology emerging from the mass of data that flows from the genome sequencing machines brings into question the classical concepts of organism, lineage, and evolution at the same time as it gainsays the molecular perspective that spawned the enterprise. The fact is that the molecular perspective, which so successfully guided and shaped twentieth-century biology, has effectively run its course (as all paradigms do) and no longer provides a focus, a vision of the biology of the future, with the result that biology is wandering willy-nilly into that future. This is a prescription for revolution–conceptual revolution. One can be confident that the new paradigm will soon emerge to guide biology in this new century…. Molecular biology has ceased to be a genuine paradigm, and it is now only a body of (very powerful) technique…. The time has come to shift biology’s focus from trying to understand organisms solely by dissecting them into their parts to trying to understand the fundamental nature of biological organization, of biological form. (Woese 2005: 99-100)
Conceptualizing Cells
We should all take seriously an assessment of biology made by the physicist David Bohm over 30 years ago (and universally ignored):
“It does seem odd … that just when physics is … moving away from mechanism, biology and psychology are moving closer to it. If the trend continues … scientists will be regarding living and intelligent beings as mechanical, while they suppose that inanimate matter is too complex and subtle to fit into the limited categories of mechanism.” [D. Bohm, “Some Remarks on the Notion of Order,” in C. H. Waddington, ed., Towards a Theoretical Biology: 2 Sketches. (Edinburgh: Edinburgh University Press, 1969), pp. 18-40.]
The organism is not a machine! Machines are not made of parts that continually turn over and renew; the cell is. A machine is stable because its parts are strongly built and function reliably. The cell is stable for an entirely different reason: It is homeostatic. Perturbed, the cell automatically seeks to reconstitute its inherent pattern. Homeostasis and homeorhesis are basic to all living things, but not machines.
If not a machine, then what is the cell?
— Carl R. Woese (2005, 100) on Evolving Biological Organization
(….) When one has worked one’s entire career within the framework of a powerful paradigm, it is almost impossible to look at that paradigm as anything but the proper, if not the only possible, perspective one can have on (in this case) biology. Yet despite its great accomplishments, molecular biology is far from the “perfect paradigm” most biologists take it to be. This child of reductionist materialism has nearly driven the biology out of biology. Molecular biology’s reductionism is fundamentalist, unwavering, and procrustean. It strips the organism from its environment, shears it of its history (evolution), and shreds it into parts. A sense of the whole, of the whole cell, of the whole multicellular organism, of the biosphere, of the emergent quality of biological organization, all have been lost or sidelined. (Woese 2005: 101)
Our thinking is fettered by classical evolutionary notions as well. The deepest and most subtle of these is the concept of variation and selection. How we view the evolution of cellular design or organization is heavily colored by how we view variation and selection. From Darwin’s day onward, evolutionists have debated the nature of the concept, and particularly whether evolutionary change is gradual, saltatory, or of some other nature. However, another aspect of the concept concerns us here more. In the terms I prefer, it is the nature of the phase (or propensity) space in which evolution operates. Looked at one way, variation and selection are all there is to evolution: The evolutionary phase space is wide open, and all manner of things are possible. From this “anything goes” perspective, a given biological form (pattern) has no meaning outside of itself, and the route by which it arises is one out of an enormous number of possible paths, which makes the evolution completely idiosyncratic and, thus, uninteresting (molecular biology holds this position: the molecular biologist sees evolution as merely a series of meaningless historical accidents). (Woese 2005: 101)
The alternative viewpoint is that the evolutionary propensity space is highly constrained, being more like a mountainous terrain than a wide open prairie: Only certain paths are possible, and they lead to particular (a relatively small set of) outcomes. Generic biological form preexists in the same sense that form in the inanimate world does. It is not the case that “anything goes” in the world of biological evolution. In other words, biological form (pattern) is important: It has meaning beyond itself; a deeper, more general significance. Understanding of biology lies, then, in understanding the evolution and nature of biological form (pattern). Explaining biological form by variation and selection hand-waving argumentation is far from sufficient: The motor does not explain where the car goes. (Woese 2005: 101-102)
THE SYSTEM OF HEREDITY AS A CONTROL SYSTEM
In a world dominated by thermodynamical forces of disorder and disintegration, all living systems, sooner or later, fall in disarray and succumb to those forces. However, living systems on Earth have survived and evolved for ~3 billion years. They succeeded in surviving because a. during their lifetime they are able to maintain the normal structure by compensating for the lost or disintegrated elements of that structure, and b. they produce offspring. The ability to maintain the normal structure, despite its continual erosion, indicates that living systems have information for their normal structure, can detect deviations from the “normalcy” and restore the normal structure. This implies the presence and functioning of a control system in living organisms. In unicellulars the control system, represented by the genome, the apparatus for gene expression and cell metabolism, functions as a system of heredity during reproduction. Homeostasis and other facts on the development of some organs and phenotypic characters in metazoans prove that a hierarchical control system, involving the CNS [Central Nervous System] and the neuroendocrine system, is also operational in this group. It is hypothesized that, in analogy with unicellulars, the control system in metazoans, in the process of their reproduction, serves as an epigenetic system of heredity.
— Nelson R. Cabej (2004, 11) Neural Control of Development: The Epigenetic Theory of Heredity
THE EPIGENETICS OF EVOLUTIONARY CHANGE
Under the influence of external/internal stimuli, the CNS may induce adaptive changes in morphological and life history characters without any changes in genes. Commonly, these changes are not heritable, i.e. they do not reappear in the offspring if the offspring is not exposed to the same stimuli. This is the case for the overwhelming majority of described examples of predator-induced defenses, polyphenisms, and adaptive camouflage. But reproducible cases of transgenerational changes, without changes in genes, changes that are transmitted to the offspring for one or more generations, occur and are described. All the cases of non-genetic, inherited changes are determined by underlying neural mechanisms. Such changes may represent the “primed”, ready-made material of evolution. The evidence on the neurally induced transgenerational nongenetic changes cannot be overestimated in respect to possible evolutionary implications of the epigenetic system of heredity. (Cabej 2004: 201)
— Nelson R. Cabej (2004, 201) Neural Control of Development: The Epigenetic Theory of Heredity
Indeed, epigenetic modifications of phenotypic expression are sometimes considered to be “Lamarckian” because they can be transmitted to subsequent generations after being acquired. Not surprisingly, then, it has taken decades for these molecular effects to be accepted as a part of mainstream genetics. Contemporary awareness of molecular epigenetics has expanded the neo-Darwinian view of DNA sequence as the fundamental mode of inherited developmental information (Jablonka and Lamb 2002; Mattick 2012), placing even the initial phase of gene expression squarely in a dynamic cellular, organismic, and environmental context. (Sultan 2015, 11)
At the mechanistic level, epigenetic modifications shape gene expression by altering protein-gene interactions that determine the accessibility of DNA to the biochemical machinery of gene transcription. (….) Epigenetic mechanisms may also be a heretofore unrecognized source of selectively important phenotypic variation in natural populations.
It has become clear that heredity is mediated at the molecular level not purely by discrete, stably transmitted DNA sequence variants but also by multiple information-altering mechanisms that lend the process an unlooked-for flexibility. Qualitatively new modes of cross-generational gene regulation are continuing to be found, including several that show gene silencing and other epigenetic roles for noncoding RNA (Bernstein and Allis 2005; Mattick and Mehler 2008; Lenhard et al. 2012; Ha and Kim 2014). (Sultan 2015, 12)
Many genomic sequences that were previously considered “junk” are now known to code for small or “micro” RNAs (and possibly long RNAs as well) that play a role, for instance by altering enzymatic access to the chromatin by binding to DNA (Koziol and Rinn 2010). … Interestingly, noncoding RNAs may carry environmentally induced effects on the phenotype from one generation to the next, including the neurobehavioral effects of social environment. In one recent study, traumatic, unpredictable separation of newborn mice from their mothers altered several aspects of microRNA activity in the pups, including in their hippocampi and other brain structures involved in stress responses. These epigenetic changes were associated with different behavioral responses to aversive conditions such as brightly illuminated maze compartments. When sperm RNA from traumatized males was injected into fertilized wild-type egg cells, these phenotypic effects were reproduced in the F2 generation; this result indicates that RNA can contribute to the transmission of stress-induced traits in mammals (Gapp et al. 2014). (Sultan 2015, 12-13)
— Sonia E. Sultan (2015) Organism & Environment: Ecological Development, Niche Construction, and Adaptation
~ ~ ~
A general character of genomic programs for development is that they progressively regulate their own readout, in contrast, for example, to the way architects’ programs (blueprints) are used in constructing buildings. All of the structural characters of an edifice, from its overall form to local aspects such as placement of wiring and windows, are prespecified in an architectural blueprint. At first glance the blueprints for a complex building might seem to provide a good metaphoric image for the developmental regulatory program that is encoded in the DNA. Just as in considering organismal diversity, it can be said that all the specificity is in the blueprints: A railway station and a cathedral can be built of the same stone, and what makes the difference in form is the architectural plan. Furthermore, in bilaterian development, as in an architectural blueprint, the outcome is hardwired, as each kind of organism generates only its own exactly predictable, species-specific body plan. But the metaphor is basically misleading, in the way the regulatory program is used in development, compared to how the blueprint is used in construction. In development it is as if the wall, once erected, must turn around and talk to the ceiling in order to place the windows in the right positions, and the ceiling must use the joint with the wall to decide where its wires will go, etc. The acts of development cannot all be prespecified at once, because animals are multicellular, and different cells do different things with the same encoded program, that is, the DNA regulatory genome. In development, it is only the potentialities for cis-regulatory information processing that are hardwired in the DNA sequence. These are utilized, conditionally, to respond in different ways to the diverse regulatory states encountered (in our metaphor that is actually the role of the human contractor, who uses something outside of the blueprint, his brain, to select the relevant subprogram at each step). The key, very unusual feature of the genomic regulatory program for development is that the inputs it specifies in the cis-regulatory sequences of its own regulatory and signaling genes suffice to determine the creation of new regulatory states. Throughout, the process of development is animated by internally generated inputs. “Internal” here means not only nonenvironmental (i.e., from within the animal rather than external to it) but also that the input must operate in the intranuclear compartments as a component of regulatory state, or else it will be irrelevant to the process of development. (Davidson 2006: 16-17)
(….) The link between the informational transactions that underlie development and the observed phenomena of development is “specification.” Developmental specification is defined phenomenologically as the process by which cells acquire the identities or fates that they and their progeny will adopt. But in terms of mechanism, specification is neither more nor less than that which results in the institution of new transcriptional regulatory states. Thereby specification results from differential expression of genes, the readout of particular genetic subprograms. For specification to occur, genes have to make decisions, depending on the new inputs they receive, and this brings us back to the information processing capacities of the cis-regulatory modules of the gene regulatory networks that make regulatory state. The point cannot be overemphasized that were it not for the ability of cis-regulatory elements to integrate spatial signaling inputs together with multiple inputs of intracellular origin, then specification, and thus development, could not occur. (Davidson 2006: 17)
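As a purely illustrative aside (this is not Davidson’s code or terminology; the factor names are hypothetical), the “information processing” attributed to a cis-regulatory module can be caricatured as a small logic function that reads the current regulatory state and decides whether its gene is transcribed:

```python
# Toy illustration only (not Davidson's; factor names are hypothetical):
# a cis-regulatory module as a small logic function that integrates several
# transcription-factor inputs (the cell's current regulatory state) and
# decides whether its gene is transcribed.

def cis_regulatory_module(regulatory_state: dict) -> bool:
    """Express the gene only if a spatial signal AND a lineage factor are
    present and a repressor is absent."""
    return (regulatory_state.get("signal_TF", False)
            and regulatory_state.get("lineage_TF", False)
            and not regulatory_state.get("repressor", False))

# Two cells carry the same genome (the same logic function) but sit in
# different regulatory states, so the same encoded "program" is read out
# differently in each:
cell_A = {"signal_TF": True, "lineage_TF": True, "repressor": False}
cell_B = {"signal_TF": True, "lineage_TF": False, "repressor": False}
print(cis_regulatory_module(cell_A))  # True  -> gene expressed
print(cis_regulatory_module(cell_B))  # False -> gene silent
```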
The molecular mechanisms that bring about biological form in modern-day embryos … should not be confused with the causes that led to the appearance of these forms in the first place … selection can only work on what already exists. (G. B. Muller and S. A. Newman 2003: 3, Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology)
— Cited in Minelli and Fusco 2008: xv. Evolving Pathways: Key Themes in Evolutionary Developmental Biology.
The evolution of organismal form consists of a continuing production and ordering of anatomical parts: the resulting arrangement of parts is nonrandom and lineage specific. The organization of morphological order is thus a central feature of organismal evolution, whose explanation requires a theory of morphological organization. Such a theory will have to account for (1) the generation of initial parts; (2) the fixation of such parts in lineage-specific combinations; (3) the modification of parts; (4) the loss of parts; (5) the reappearance of lost parts [atavism]; and (6) the addition of new parts. Eventually, it will have to specify proximate and ultimate causes for each of these events as well.
Only a few of the processes listed above are addressed by the canonical neo-Darwinian theory, which is chiefly concerned with gene frequencies in populations and with the factors responsible for their variation and fixation. Although, at the phenotypic level, it deals with the modification of existing parts, the theory is intended to explain neither the origin of parts, nor morphological organization, nor innovation. In the neo-Darwinian world the motive factor for morphological change is natural selection, which can account for the modification and loss of parts. But selection has no innovative capacity; it eliminates or maintains what exists. The generative and the ordering aspects of morphological evolution are thus absent from evolutionary theory.
— Muller, Gerd B. (2003) Homology: The Evolution of Morphological Organization. In Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology. (eds., Gerd B. Muller and Stuart A. Newman). The Vienna Series in Theoretical Biology. MIT Press. p. 51.
What is evo-devo? Undoubtedly this is a shorthand for evolutionary developmental biology. There, however, agreement stops. Evo-devo has been regarded as either a new discipline within evolutionary biology or simply a new perspective upon it, a lively interdisciplinary field of studies, or even a necessary complement to the standard (neo-Darwinian) theory of evolution, which is an obligate step towards an expanded New Synthesis. Whatever the exact nature of evo-devo, its core is a view of the process of evolution in which evolutionary change is the transformation of (developmental) processes rather than (genetic or phenotypic) patterns. Thus our original question could be more profitably rephrased as: What is evo-devo for? (Minelli and Fusco 2008: 1)
(….) Evo-devo aims to provide a mechanistic explanation of how developmental mechanisms have changed during evolution, and how these modifications are reflected in changes in organismal form. Thus, in contrast with studies on natural selection, which aim to explain the ‘survival of the fittest’, the main target of evo-devo is to determine the mechanisms behind the ‘arrival of the fittest’. At the most basic level, the mechanistic question about the arrival of the fittest involves changes in the function of genes controlling developmental programs. Thus it is important to reflect on the nature of the elements and systems underlying inheritable developmental modification using an updated molecular background. (Minelli and Fusco 2008: 2)
The biggest intellectual danger of any evolutionary research is the temptation to find satisfaction in ingenious “just so” stories. Devo-evo, as the youngest member of the evolutionary sciences, is in particular danger of falling into this trap, as other branches of evolutionary biology did in the past. (Laubichler and Maienschein 2007: 529).
(….) One of the main sources of intellectual excitement in devo-evo is the prospect of understanding major evolutionary transformations. If developmental evolution were to focus exclusively on microevolutionary processes, the field would abandon that major objective. In other words, even a very successful microevolutionary approach to developmental evolution would not fulfill the expectations that have been raised: bridging the gap between evolutionary genetics and macroevolutionary pattern. (Laubichler and Maienschein 2007: 530)
— Laubichler and Maienschein 2007: 529. In From Embryology to Evo-Devo: A History of Developmental Evolution.
Darwin has often been depicted as a radical selectionist at heart who invoked other mechanisms only in retreat, and only as a result of his age’s own lamented ignorance about the mechanisms of heredity. This view is false. Although Darwin regarded selection as the most important of evolutionary mechanisms (as do we), no argument from opponents angered him more than the common attempt to caricature and trivialize his theory by stating that it relied exclusively upon natural selection. In the last edition of the Origin, he wrote (1872, p. 395):
As my conclusions have lately been much misrepresented, and it has been stated that I attribute the modification of species exclusively to natural selection, I may be permitted to remark that in the first edition of this work, and subsequently, I placed in a most conspicuous position–namely at the close of the introduction—the following words: “I am convinced that natural selection has been the main, but not the exclusive means of modification.” This has been of no avail. Great is the power of steady misinterpretation.
Charles Darwin, Origin of Species (1872, p. 395)
— Gould, Stephen J., & Lewontin, Richard C. (1979) The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. Proceedings of the Royal Society of London, Series B, Vol. 205, No. 1161, pp. 581-598.
Dichotomy is both our preferred mental mode, perhaps intrinsically so, and our worst enemy in parsing a complex and massively multivariate world (both conceptual and empirical). Simpson, in discussing “the old but still vital problem of micro-evolution as opposed to macro-evolution” (ref. 10, p. 97), correctly caught the dilemma of dichotomy by writing (ref. 10, p. 97): “If the two proved to be basically different, the innumerable studies of micro-evolution would become relatively unimportant and would have minor value for the study of evolution as a whole.”
Faced with elegant and overwhelming documentation of microevolution, and following the synthesist’s program of theoretical reduction to a core of population genetics, Simpson opted for denying any distinctive macroevolutionary theory and encompassing all the vastness of time by extrapolation. But if we drop the model of dichotomous polarization, then other, more fruitful, solutions become available.
— Gould, Stephen Jay. Tempo and mode in the macroevolutionary reconstruction of Darwinism. National Academy of Sciences Colloquium “Tempo and Mode in Evolution”; 1994 Jul: 6767-6768. Emphasis added.
If Darwin were alive today, I have no doubt his love of truth would lead him to follow the evidence—the facts—wherever they might chance to lead. Darwin was not a dogmatist, but he was dogged in pursuing facts and duly humble in his theoretical interpretations of them. The question of real import is not whether natural selection is a real phenomenon, for it is (aside from its reification into a ‘thing’, which it is not), but whether it is the source of novelty. There is no doubt that we can through artificial selection bring forth existing phenotypic plasticity (e.g., shifting the number of hairs on a fruit fly); but that is merely tweaking already existing features, similar to how the environment brings about morphological changes due to phenotypic plasticity, the former being artificial, the latter natural. Natural selection reveals how nature sifts the survival of the fittest but, theoretically speaking, tells us nothing when used as the basis for an unwarranted extrapolation about the arrival of the fittest. When science has mastered the arrival of the fittest we will have mastered evolution itself.
At some point, such heritable regulatory changes will be created in a test animal in the laboratory, generating a trait intentionally drawing on various conserved processes. At that point, doubters [of organic evolution] would have to admit that if humans can generate phenotypic variation in the laboratory in a manner consistent with known evolutionary changes, perhaps it is plausible that facilitated variation has generated change in nature.
— Kirschner, Marc W. and Gerhart, John C. The Plausibility of Life: Resolving Darwin’s Dilemma. New Haven: Yale University Press; 2005; p. 237. Emphasis added.
Natural selection does not act on anything, nor does it select (for or against), force, maximize, create, modify, shape, operate, drive, favor, maintain, push, or adjust. Natural selection does nothing. Natural selection as a natural force belongs in the insubstantial category already populated by the Becker/Stahl phlogiston or Newton’s “ether.” ….
Having natural selection select is nifty because it excuses the necessity of talking about the actual causation of natural selection. Such talk was excusable for Charles Darwin, but inexcusable for evolutionists now. Creationists have discovered our empty “natural selection” language, and the “actions” of natural selection make huge vulnerable targets. (Provine 2001: 199-200)
— Provine, William B. The Origins of Theoretical Population Genetics. Chicago: University of Chicago Press; 2001; c1971 pp. 199-200. Emphasis added.
The Epigenetic System of Heredity and Phenotypic Variation
In response to various environmental stimuli metazoans develop a wide variety of discrete biological adaptations, new phenotypic characters, without changes in genes and genetic information. Such abrupt emergence of new morphological and life history (as well as physiological and behavioral) characters requires information. The fact that the genetic information does not change implies that information of a type other than genetic information is responsible for the development of those characters.
(…) [T]he CNS, in response to external stimuli, releases specific chemical signals, which start signal cascades that result in adaptive morphological and physiological changes in various organ or parts of the body. In other words, information for those adaptations flows from the CNS to the target cells, tissues and organs. … [I]t was also proven the nongenetic, computational nature and origin of that information.
The CNS generates its information by processing the input of external stimuli. As defined in this work, a stimulus is a perceptible change in an environmental agent to which the CNS responds adaptively. Changes in the environment may be as big as to cause stress conditions, and radical changes in the environment are often associated with adaptive changes in morphology. (Cabej 2004: 209)
— Cabej, Nelson R. Neural Control of Development: The Epigenetic Theory of Heredity. New Jersey: Albanet; 2004; p. 209.
Natural selection, understood in the context of what we now know, is today a description of the relationship between an organism’s phenotypic plasticity and its environment. This relationship can be empirically observed, both in nature and the laboratory, shifting existing features and attributes of an organism: by selectively altering gene frequencies in the lab, or by observing phenotypic responses to environmental signals in nature (survival of the fittest). But the question of the role of natural selection in the origin of novelty (the arrival of the fittest) is today being reexamined in light of new evidence of hereditary variation and its causes and origins.
Darwin’s theory of evolution by natural selection was based on the observation that there is variation between individuals within the same species. This fundamental observation is a central concept in evolutionary biology. However, variation is only rarely treated directly. It has remained peripheral to the study of mechanisms of evolutionary change. The explosion of knowledge in genetics, developmental biology, and the ongoing synthesis of evolutionary and developmental biology has made it possible to study the factors that limit, enhance, or structure variation at the level of an animal’s physical appearance and behavior. Knowledge of the significance of variability is crucial to this emerging synthesis. This volume positions the role of variability within this broad framework, bringing variation back to the center of the evolutionary stage. This book is intended for scholars, advanced undergraduate students and graduates in evolutionary biology, biological anthropology, paleontology, morphology, developmental biology, genomics and other related disciplines.
— Hallgrimsson, Benedikt and Hall, Brian (2005) Variation: A Central Concept in Biology.
Macroevolution and the Genome
There are many ways of studying the mechanisms and outcomes of evolution, ranging from genetics and genomics at the lowest scales through to paleontology at the highest. Unfortunately, the division into specialties according to scale has often led to protracted disagreement among evolutionary theorists from different disciplines regarding the nature of the evolutionary processes. Although the resulting debate has undoubtedly led to a refinement of the various theoretical approaches employed, it has also prevented the development of a complete and unified theory of evolution. Without such a theory, all evolutionary phenomena, including those involving features of the genome, will remain at best only partially understood. (….) The goal … is to provide an expansion, not a refutation, of existing evolutionary theory, and to build some much-needed bridges across traditionally disparate disciplines. (Gregory 2005: 679-680)
From Darwin to Neo-Darwinism
Charles Darwin did not invent the concept of evolution (“descent with modification” or “transmutation,” in the terminology of his time). In fact, the notion of evolutionary change long predates Darwin’s (1859) contributions in On the Origin of Species, which were essentially twofold: (1) providing extensive evidence, from a variety of sources, for the fact that species are related by descent, and (2) developing his theory of natural selection to explain this fact. Although quite successful in establishing the fact of evolution (the subsequent Creationist movement in parts of North America notwithstanding), Darwin’s explanatory mechanism of natural selection received only a lukewarm reception in contemporary scientific circles. (Gregory 2005: 680)
By the beginning of the 20th century, Darwinian natural selection had fallen largely out of favor, having been overshadowed by several other proposed mechanisms including mutationism, whereby species form suddenly by single mutations, with no intermediates; saltationism, in which major chromosomal rearrangements generate new species suddenly; neo-Lamarckism, which supposed that traits are improved directly through use and lost through disuse; and orthogenesis, under which inherent propelling forces drive evolutionary changes, sometimes even to the point of being maladaptive. Mutationism, in particular, gained favor after the rediscovery of Mendel’s laws of inheritance by Hugo de Vries and others, which showed heredity to be “particulate”—with individual traits passed on intact, even if hidden for a generation—rather than “blending,” as Darwin had believed. Particulate inheritance was taken by de Vries and others to imply that discontinuous variation in traits would be much more important than continuous variability expected under gradual Darwinian selection. (Gregory 2005: 680-681)
The problem faced by proponents of Darwinism was to reconcile the concept of discrete hereditary units with the graded variation required by natural selection. This issue was settled in the 1930s and 1940s with the advent of population genetics, which provided mathematical models to describe the behavior of genic variants (“alleles”) within populations, and showed that a particulate mechanism of inheritance did not prohibit the action of natural selection. This new theoretical framework is generally known as “neo-Darwinism” or the “Modern Synthesis,” because it sought to synthesize (i.e., combine) Mendelian genetics and Darwinian natural selection. (Gregory 2005: 681)
The first stage in the development of population genetics was to determine how alleles segregate within populations under “equilibrium” conditions. The issue was addressed by G.H. Hardy and Wilhelm Weinberg, resulting in what is now known as the “Hardy-Weinberg equilibrium,” a null hypothesis about the behavior of alleles in populations that are not subject to natural selection, genetic drift (random changes in allele frequencies, for example by the accidental loss of a subset of the population, passage through a population bottleneck, or the founding of a new population by an unrepresentative sample of the parental population), gene flow (an influx of alleles from other populations by migration), or mutation (the generation of new alleles). When populations are not in Hardy-Weinberg equilibrium, one can begin to investigate which of these processes is (or are) responsible. More complex population genetics models were developed for dealing with this issue, most notably by Ronald Fisher, Sewall Wright, and J.B.S. Haldane. Others, like Theodosius Dobzhansky and G.L. Stebbins, established that natural populations contain sufficient genetic variation for these new models to work. (Gregory 2005: 681)
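For readers who want the Hardy-Weinberg null hypothesis in concrete form, here is a brief worked example (mine, not Gregory’s; the genotype counts are invented) showing how observed genotype counts are compared against the p², 2pq, q² expectation:

```python
# Worked example (invented counts, not from Gregory): testing observed
# genotype counts against the Hardy-Weinberg expectation p^2 : 2pq : q^2.
from scipy.stats import chisquare

obs = {"AA": 380, "Aa": 440, "aa": 180}            # hypothetical genotype counts
n = sum(obs.values())
p = (2 * obs["AA"] + obs["Aa"]) / (2 * n)          # frequency of allele A
q = 1 - p
expected = [p**2 * n, 2 * p * q * n, q**2 * n]     # counts expected under HWE

# ddof=1 because one allele frequency was estimated from the data,
# leaving a single degree of freedom for the test.
chi2, pval = chisquare(list(obs.values()), f_exp=expected, ddof=1)
print(f"p(A) = {p:.2f}, chi-square = {chi2:.2f}, p-value = {pval:.4f}")
# A significant deviation (here, a heterozygote deficit) signals that one of
# the equilibrium assumptions -- random mating, no selection, drift, gene
# flow, or mutation -- is being violated, which is where investigation begins.
```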
According to Provine (1988), the Modern Synthesis was really more of a “constriction” than an actual “synthesis,” in which a major goal was the elimination of the non-Darwinian alternatives listed previously and the associated restoration of selection to prominence in evolutionary theory. In at least one important sense, the term “synthesis” is clearly a misnomer, given that there remained a highly acrimonious divide between Fisher, who favored models based on large populations with a dominant role for selection, and Wright, whose “adaptive landscape” model dealt primarily with small populations and emphasized genetic drift. Despite these divisions, neo-Darwinians did succeed in narrowing the range of explanatory approaches to those involving mutation, selection, drift, and gene flow. (Gregory 2005: 681)
Genomes, Fossils, and Theoretical Inertia
As far as genetics is concerned, evolutionary theory has always been far ahead of its time. Darwin’s theory of natural selection was developed in the absence of concrete knowledge of hereditary mechanisms, and the mathematical framework of neo-Darwinism was assembled before the structure of DNA had been established (and even before DNA was identified as the molecule of inheritance). As a consequence, numerous surprises, puzzles, and conflicts have emerged from new discoveries in genetics and genomics. Consider, for example, the recent findings of deep genetic homology undergirding “analogous” features of unrelated organisms, the role of clustered master control genes in regulating development, the remarkably low gene number in humans, the collapse of the “one gene-one protein” model, the extraordinary abundance of transposable elements in the genomes of humans and other species, and the increasing evidence for the role of large-scale genome duplications in evolution. Also recognized for decades (and still the subject of healthy debate) are the importance of smaller-scale gene duplications, the role of recurrent hybridization and polyploidy, the preponderance of neutral evolution at the molecular level, and the initially quite alarming disconnect between genome size and organismal complexity. Advances in genetics and genomics have also provided revolutionary insights into the relationships among organisms, from the smallest scales (e.g., human-chimpanzee genetic similarity) to the largest (e.g., deep divergences between “prokaryote” groups). None of these was (or indeed, could have been) predicted or expected by the accepted formulation of evolutionary theory that preceded it. The historical record in evolutionary biology is that theories are developed under assumptions about the existence—or perhaps more commonly, the absence—of certain genetic mechanisms, and must later be revised as new knowledge comes to light regarding genomic structure, organization, and function. This mode of progress is not necessarily problematic, except when theoretical inertia forestalls the acceptance of the new information and its implications. (Gregory 2005: 682)
Genomics is not the only field to have faced theoretical inertia. For decades, prominent paleontologists have argued that their observations of the fossil record fail to fit the expectations of strict Darwinian gradualism. Darwin’s view of speciation, sometimes labeled as “phyletic gradualism,” was based on the slow, gradual (but not necessarily constant) evolution of one species or large segments thereof into another through a series of imperceptible changes, often without any splitting of lineages (i.e., by “anagenesis”). By contrast, the theory of “punctuated equilibria” (“punk eek” to aficionados, “evolution by jerks” to some critics) proposes that most species experience pronounced morphological stasis for most of their time, with change occurring only in geologically rapid bursts associated with speciation events (Eldredge and Gould, 1972; Gould and Eldredge, 1977, 1993; Gould, 1992, 2002). Moreover, speciation in this second case involves the branching off of new species (“cladogenesis”) via small, peripherally isolated populations rather than the gradual transformation of the parental stock itself. (Gregory 2005: 682-683)
Based on differences such as these, many of those who study evolutionary patterns in deep time have developed alternative theoretical approaches to account for the large-scale features of evolution. This, too, has generally proceeded with a minimal consideration of genomic information, and as such there is a need for increased communication between these two fields. In fact, despite their residence at opposite ends of the spectrum in evolutionary science, there is great potential for integration between genomics and paleontology because ultimately both are concerned with variation among species and higher taxa. (Gregory 2005: 683-684)
IS A THEORY OF MACROEVOLUTION NECESSARY?
Microevolution, Macroevolution, and Extrapolationism
The extent to which processes observable within populations and tractable in mathematical models can be extrapolated to explain patterns of diversification occurring in deep time remains one of the most contentious issues in modern evolutionary biology. This is a debate with a lengthy pedigree, extending back more than 75 years, and therefore long predating any of the issues of genome evolution … Nevertheless, genomes reside at an important nexus in this debate by containing the genes central to population-level discussions, but also having their own complex large-scale evolutionary histories. (Gregory 2005: 684)
Writing as an orthogeneticist in 1927, prior to the Modern Synthesis when Darwinian natural selection was largely eclipsed as a mechanism of evolutionary change, Iurii Filipchenko made the following argument:
Modern genetics doubtless represents the veil of the evolution of Jordanian and Linnaean biotypes (microevolution), contrasted with the evolution of higher systematic groups (macroevolution), which has long been of central interest. This serves to underline the above-cited consideration of the absence of any intrinsic connection between genetics and the doctrine of evolution, which deals particularly with macroevolution.
[Translation as in Hendry and Kinnison, 2001].
In modern parlance, microevolution represents the small-scale changes in allele frequencies that occur within populations (as studied by population geneticists and often observable over the span of a human lifetime), whereas macroevolution involves the generation of broad patterns above the species level over the course of Earth history (as studied in the fossil record by paleontologists, and with regard to extant taxa by systematists). … Dobzhansky (1937, p. 12) noted that because macroevolution could not be observed directly, “we are compelled at the present level of knowledge reluctantly to put a sign of equality between the mechanisms of macro- and micro-evolution.” However, although Dobzhansky was tentative in his assertion of micro-macro equivalence, the doctrine of “extrapolationism” was embraced as a fact by many other architects and early adherents of the Modern Synthesis. Thus as Mayr (1963, p. 586) later explained, “the proponents of the synthetic theory maintain that all evolution is due to the accumulation of small genetic changes, guided by natural selection, and the events that take place within populations and species” (emphasis added). There was an obvious reason for this strict adherence to extrapolationism at the time, namely the belief that if micro- and macroevolution “proved to be basically different, the innumerable studies of micro-evolution would become relatively unimportant and would have minor value in the study of evolution as a whole” (Simpson, 1944, p. 97). As such, only proponents of non-Darwinian mechanisms, most notably the much-maligned saltationist Richard Goldschmidt (1940, p. 8), argued at the time that “the facts of microevolution do not suffice for an understanding of macroevolution.” (Gregory 2005: 684-685)
Obviously, the “present level of knowledge” is not the same today as it was in Dobzhansky’s time. A great deal of new information has since been gleaned—and continues to accrue—regarding the mechanisms of heredity and the major patterns of evolutionary diversification. Considering Mayr’s statement, it is now clear that not all relevant genetic changes are small (cf., genome duplications), nor is all change guided by natural selection (cf., neutral molecular evolution), nor do all relevant processes operate within populations and species (cf., hybridization). In one of the more notorious exchanges on the subject, Gould (1980) went so far as to declare this simple version of the neo-Darwinian synthesis as “effectively dead, despite its persistence as textbook orthodoxy.”1 To be more specific, this applies not to the Modern Synthesis at large, but to strict extrapolationism. Using a far less aggressive tone, another prominent macroevolutionist put it as follows: “That the advances in molecular biology contribute to the need for a formal expansion of evolutionary theory is an exigency we can hardly hold against the early architects of the synthesis” (Eldredge, 1985, p. 86). It is interesting to imagine the view that Fisher, Dobzhansky, Haldane, Wright, or even Darwin might have taken had they been privy to modern insights. (Gregory 2005: 685-686)
Simpson’s (1944) account of the threat to the relevance of microevolution is also in need of revision. It is simply not the case that a mechanistic disconnect between micro- and macroevolution would render microevolutionary study obsolete. Far from it, because any genomic changes, regardless of the magnitude of their effects, must still pass through the filters of selection and drift to reach a sufficiently high frequency if they are to be of evolutionary significance. So, even if understanding this filtration process does not, by itself, provide a complete understanding of macroevolution, it would still be a crucial component of an expanded evolutionary theory. Consider, for example, the topic of major developmental regulation genes, which involves at least four different questions, all mutually compatible, studied by four different disciplines: (1) Evolutionary developmental biology (“evo-devo”)—How do such genes act to produce observed phenotypes? (2) Comparative genomics—What is the structure of these genes, and what role did processes like gene (or genome) duplication play in their evolution? (3) Population genetics—How would such genes have been filtered by selection, drift, and gene flow to reach their current rate of fixation? (4) Paleontology—What is the relevance of these genes for understanding the emergence of new body plans and thus new macroevolutionary trajectories (e.g., Carroll, 2000; Erwin, 2000; Jablonski, 2000; Shubin and Marshall, 2000)? (Gregory 2005: 686)
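The “filter of selection and drift” invoked above can be illustrated with a minimal Wright-Fisher sketch (my own, not from Gregory; the parameter values are arbitrary): even a beneficial new mutation is usually lost to drift, fixing with a probability of roughly 2s rather than with certainty.

```python
# Minimal sketch (illustrative, not from Gregory): the "filter" of selection
# and drift. A new mutation with selective advantage s enters a diploid
# Wright-Fisher population of size N as a single copy; classical theory says
# it fixes with probability of roughly 2s, so most beneficial variants are
# lost before they can leave any macroevolutionary trace.
import numpy as np

def fixation_fraction(N=1000, s=0.01, replicates=10000, seed=1):
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(replicates):
        count = 1                                   # one mutant allele among 2N
        while 0 < count < 2 * N:
            freq = count / (2 * N)
            freq = freq * (1 + s) / (1 + s * freq)  # deterministic selection step
            count = rng.binomial(2 * N, freq)       # random drift (binomial sampling)
        fixed += (count == 2 * N)
    return fixed / replicates

print(fixation_fraction())   # expected to land near 2s = 0.02
```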
Though the protagonists have often been divided along these professional lines, the micro-macro debate is not between paleontologists and population geneticists per se. Rather, it is between strict extrapolationists who argue that all evolution can be understood by studying population-level processes and those who argue that there are additional factors to consider. Members of this latter camp may come from all quarters of evolutionary biology, from genome biologists to paleontologists, although the latter have been by far the most vocal proponents of an expanded outlook. For strict extrapolationists, there may be little value in pursuing this debate. But for those open to a more pluralistic approach who seek a resolution to the issue, there is much value in understanding the arguments presented in favor of a distinct macroevolutionary theory that coexists with, but is not subsumed by, established microevolutionary principles. (Gregory 2005: 686)
Critiques of Strict Extrapolationism
1 Of course, far from simply mourning their loss, microevolutionists responded to this charge with some vigor (e.g., Stebbins and Ayala, 1981; Charlesworth et al., 1982; Hecht and Hoffman, 1986), perhaps overlooking the fact that only the strict extrapolationist definition given by Mayr (1963), and not the synthesis in its entirety, was proclaimed deceased (see Gould, 2002). Although some may argue that Mayr’s (1963) definition was already outdated by this time, and that Gould’s (1980) criticism was therefore misplaced, it bears noting that such a definition had been in common use throughout the period in question and well beyond (e.g., Mayr, 1980; Ruse, 1982; Hecht and Hoffman, 1986). As for Gould’s (1980) claim of “textbook orthodoxy,” one may consider Freeman and Herron’s (1998) recent textbook, which considers the Modern Synthesis to be composed of two main postulates: “[1] Gradual evolution results from small genetic changes that are acted upon by natural selection. [2] The origin of species and higher taxa, or macroevolution, can be explained in terms of natural selection acting on individuals, or microevolution.” Futuyma’s (1998) more advanced text provides a much more detailed description of the Modern Synthesis but the fundamental extrapolationist point remains.