Literature-Only Economics vs. Practical Problem-Solving Economics

This was a hard paper to read. That does not mean the paper was badly written; the difficulty of the task the author set himself forced him to write a difficult paper. After struggling with it for a week, I am rather sympathetic with Delorme. In a sense he was unfortunate, because he came to be interested in complexity problems by encountering two particular problems: (1) the road safety problem and (2) the Regime of Interactions between the State and the Economy (RISE). I say “unfortunate” because these are not good problems with which to start a general discussion of complexity in economics, as I will explain later. Of course, one cannot choose the first problems one encounters, and we cannot blame the author on this point, but in my opinion good starting problems are crucial to the further development of the argument for complexity in economics.

Let us take the example of the beginning of modern physics. Do not think of Newton; he is the final accomplishment of the first phase of modern physics. Not many people would object that modern physics started with two (almost simultaneous) discoveries: (1) Kepler’s laws of orbital motion and (2) Galileo’s law of falling bodies, among others. The case of Galileo can be explained by the gradual rise of the experimental spirit. Kepler’s case is more interesting. One crucial source of data for him was Tycho Brahe’s observations. Brahe improved the accuracy of observation by roughly an order of magnitude: for more than a thousand years before Brahe, the accuracy of astronomical observations was about one tenth of a degree (i.e., 6 minutes of arc), and Brahe improved this to between half a minute and one minute. With these data, Kepler was confident that the 8 minutes of error he detected in the Copernican system were clear evidence refuting both the Copernican and Ptolemaic systems. Kepler declared that these 8 minutes would revolutionize the whole of astronomy. After many years of trial and error, he discovered that Mars follows an elliptical orbit. Newton’s great achievement was possible only because he knew these two results (of Galileo and Kepler). For example, Newton’s law of gravitation was not a simple result of induction or abduction: the inverse-square law was the result of a half-logical deduction from Kepler’s third law.
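
[A note on the “half-logical deduction” Shiozawa mentions: for the simplified case of a circular orbit, an assumption not stated above, combining Kepler’s third law with the formula for centripetal acceleration already forces the inverse-square form. A minimal sketch:]

```latex
% Sketch: inverse-square dependence from Kepler's third law (circular-orbit case)
% Kepler III:  T^2 = k r^3   (same constant k for all planets)
% Uniform circular motion:  a = v^2 / r = 4 pi^2 r / T^2
\[
a = \frac{4\pi^{2} r}{T^{2}} = \frac{4\pi^{2} r}{k\,r^{3}}
  = \frac{4\pi^{2}}{k}\cdot\frac{1}{r^{2}} \;\propto\; \frac{1}{r^{2}}
\]
% Hence the force F = m a on a planet of mass m falls off as 1/r^2.
```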

I cite this example because it shows under which conditions a science can emerge. In the same vein, the economics of complexity (or, more correctly, economics itself) can become a good science when we find such a good starting point. (“Science” should not be interpreted in the conventional sense; I mean it as a generic term for a good framework and system of knowledge.) For example, imagine that the solar system were a binary star system, with the earth orbiting with a substantial relative mass. It is easy to see that such a system would have to be solved as a three-body problem, and it would have been very difficult for a Kepler to find any law of orbital motion. The history of modern physics would then have been very different. This simple example shows us that any science is conditioned by complexity problems, that is, by the tractability or intractability of the subject matter or objects we want to study.

The lesson we should draw from the history of modern physics is that a science is most likely to start from more tractable problems and evolve to a state in which it can incorporate more complex and intractable phenomena. I am afraid that Delorme is forgetting this precious lesson. Is he not imagining that an economic science (and social science in general) can be well constructed if only we gain a good philosophy and methodology of complex phenomena?

I do not deny that many (or most) economic phenomena are deeply complex. What I propose as a different approach is to climb the complexity hill by taking an easier route or track, rather than attacking the summit of complexity directly. Finding this track should be the main part of the research program, but I could not find any such argument in Delorme’s paper. (Yoshinori Shiozawa, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 10/10/2017.)

1) My paper can be viewed as an exercise in problem solving in a context of empirical intractability in social science. It was triggered by the empirical discovery of complex phenomena raising questions that are not amenable to available tools of analysis, i.e., that are intractable. The problem, then, is to devise a model and tools of analysis that make it possible to cope with these questions. Unless someone comes up with a complex systems analysis or some other tool that solves the problem at stake, a thing I would welcome, I can’t think of any other way to proceed than focusing on the very cognitive process of knowledge creation and portraying it as a reflective, open-ended, problem-first cognitive behavioral endeavour. It is an approach giving primacy both to looking and discovering rather than to assuming and deducing, and to complexity addressed in its own right rather than to complex systems, in which complexity is often viewed tautologically as the behavior of complex systems. The outcome is a new tool of analysis named, for short, Deep Complexity. I believe that the availability of this tool provides a means to take more seriously the limitations of knowledge in a discipline like economics, in which inconclusive and non-demonstrative developments are not scarce when sizeable issues are involved.

2) Yoshinori Shiozawa raises the question of where to start: from tractable problems or from the intractable? He advocates the former and suggests that we “evolve to a state that can incorporate more complex and intractable phenomena”. But then, with what tools of analysis for intractable phenomena? I would never have addressed intractability if I had not bumped into unresolved empirical obstacles. A non-commutative complementarity is at work here: starting with the tractable, in a discipline dominated by non-conclusive and non-demonstrative debates, does not create any incentive to explore the intractable thoroughly. It is even quite intimidating for those who engage in it. This sociology of the profession excludes intractability de facto from legitimate investigation. Starting from the possibility of intractability, by contrast, involves establishing a dividing line, and it entails a procedural theorizing in which classical analysis can be developed for tractable problems when they are identified, while the Deep Complexity tool is appropriate otherwise, before a substantive theorizing can be initiated. It is a counterintuitive process: complexification comes first, before a further, necessary simplification or reduction. (Robert Delorme, (WEA Conference), 11/30/2017.)

In my first comment on this paper, I promised to argue for the track I propose. I could not fulfill that promise here. Please read my second post among the general comments in the discussion forum, where I have given a short description of the working of an economy that can be as big as the world economy. It explains how an economy works. The working of the economy (not of economics) is simple, but general equilibrium theory has disfigured it. The track I propose for economics is to start from these simple observations.

As I wrote in my first post, modern science started from Galileo Galilei’s physics and Johannes Kepler’s astronomy. We should not imagine that we can solve a really difficult problem (Delorme’s deep complexity) in a simple way. It is not wise to attack deep complexity before we have succeeded in developing a sufficient apparatus with which to treat it. (Yoshinori Shiozawa, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 11/30/2017.)

Dear Dr Shiozawa, it seems that we are not addressing the same objects of inquiry. Yours seems to stand at the abstract level of modern science in general. Mine is much less ambitious: it is grounded in research on how to deal with particular, empirically experienced problems in real economic and social life that appear intractable and are subject to scientific practice. Deep Complexity is the tool manufactured to address this particular problem. It may have wider implications in social science, but that is another story. (Robert Delorme, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 11/30/2017.)

You are attacking concrete social problems. I am rather a general theorist. That may be the reason for our difference of stance toward your problem.

Our situation reminds me of the history of medicine. It is one of the oldest sciences, and yet, because the organism is a highly complex system, many therapies remained merely symptomatic. Even so, they were to some extent useful and practical. I do not deny this fact. However, modern medicine is now changing its character, because biophysical theories and discoveries are changing medical research. Researchers are investigating the molecular-level mechanisms by which a disease emerges. Using this knowledge, they can now design drugs at the molecular level. Without a real science, this is not possible.

[Note Shiozawa’s implicit claim that previous medical science was not real science, but became real only with the advent of molecular biology. No doubt molecular biology has opened up new domains of knowledge, but it is simply ludicrous to claim that medicine wasn’t real science prior to molecular biology; many perfectly valid scientific discoveries made before, and without, molecular biology prove the assertion false. As Delorme states plainly below, this is scientism, not to mention an abysmal attempt at revisionist history for purely rhetorical purposes. For more examples of Shiozawa’s scientism and sophistry see Semantic Negligence; for a description of literature-only economics see Payson 2017. For a good description of the kind of scientism Shiozawa is parroting see Pilkington 2016. And to use one of Shiozawa’s own favorite go-to authorities against him (unfortunately his memory does not serve him well, since Andreski contradicts his claim on RWER), see Stanislav Andreski’s Social Sciences as Sorcery (1973, 22-23).]

Economics is still at a pre-Copernican stage. It would be hard to find the true mechanism by which one of your examples occurs. I understand your intention if, by the term “deep complexity,” you mean a set of problems that are still beyond our ability of cognition or analysis. We may then have to take a method very different from that of regular science, probably one similar to symptomatology and diagnostics. If you had argued in this way, it would have made a great contribution to our forum on complexities in economics. This is what I wanted to argue as the third aspect of complexity, i.e., complexity that conditions the development of economics as a science.

Accumulating symptomatic and diagnostic knowledge in economics is quite important, but it is the most neglected part of present-day economics. (Yoshinori Shiozawa, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 12/1/2017, italics added.)

It is interesting to learn that, as an economist and social scientist, I must be at a “pre-Copernican” stage. Although what this means is not totally clear to me, I take it as revealing that our presuppositions about scientific practice differ. You claim to know the most appropriate way of investigating the subject I address, and that this way lies in the methods and tools of natural science. I claim to have devised a way which works, without knowing whether it is the most appropriate, a thing whose decidability would seem to be quite problematic. And the way I have devised meets the conditions of a reflective epistemology of scientific practice, in natural science as well as in social science. Your presupposition is that the application of the methods of natural science is the yardstick for social science. This is scientism.

My presupposition is that there may be a difference between them, and that one cannot think of an appropriate method in social science without having first investigated and formulated the problem that is presented by the subject. As a “general theorist”, your position is enjoyable. May I recall what Keynes told Harrod: “Do not be reluctant to soil your hands”. I am ready to welcome any effective alternative provided it works on the object of inquiry that is at stake. It is sad that you don’t bring such an alternative. As Herb Simon wrote, “You can’t beat something with nothing”. I borrow from your own sentence that “if you had argued this way, it would have made a great contribution to our forum…” (Robert Delorme, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 12/1/2017, italics added.)

Greedy Reductionism and Statistical Shadows

Biological evolution is, as has often been noted, both fact and theory. It is a fact that all extant organisms came to exist in their current forms through a process of descent with modification from ancestral forms. The overwhelming evidence for this empirical claim was recognized relatively soon after Darwin published On the Origin of Species in 1859, and support for it has grown to the point where it is as well established as any historical claim might be. In this sense, biological evolution is no more a theory than it is a “theory” that Napoleon Bonaparte commanded the French army in the late eighteenth century. Of course, the details of how extant and extinct organisms are related to one another, and of what descended from what and when, are still being worked out, and will probably never be known in their entirety. The same is true of the details of Napoleon’s life and military campaigns. However, this lack of complete knowledge certainly does not alter the fundamental nature of the claims made, either by historians or by evolutionary biologists. (Pigliucci et al. 2006: 1)

On the other hand, evolutionary biology is also a rich patchwork of theories seeking to explain the patterns observed in the changes in populations of organisms over time. These theories range in scope from “natural selection,” which is evoked extensively at many different levels, to finer-grained explanations involving particular mechanisms (e.g., reproductive isolation induced by geographic barriers leading to speciation events). (Pigliucci et al. 2006: 1)

(….) There are a number of different ways in which these questions have been addressed, and a number of different accounts of these areas of evolutionary biology. These different accounts, we will maintain, are not always compatible, either with one another or with other accepted practices in evolutionary biology. (Pigliucci et al. 2006: 1)

(….) Because we will be making some potentially controversial claims throughout this volume, it is crucial for the reader to understand two basic ideas underlying most of what we say, as well as exactly what we think are some implications of our views for the general theory of evolutionary quantitative genetics, which we discuss repeatedly in critical fashion. (Pigliucci et al. 2006: 2)

(….) The first central idea we wish to put forth as part of the framework of this book will be readily familiar to biologists, although some of its consequences may not be. The idea can be expressed by the use of a metaphor proposed by Bill Shipley (2000) …. the shadow theater popular in Southeast Asia. In one form, the wayang golek of Bali and other parts of Indonesia, three-dimensional wooden puppets are used to project two-dimensional shadows on a screen, where the action is presented to the spectator. Shipley’s idea is that quantitative biologists find themselves very much in the position of wayang golek’s spectators: we have access to only the “statistical shadows” projected by a set of underlying causal factors. Unlike the wayang golek’s patrons, however, biologists want to peek around the screen and infer the position of the light source as well as the actual three-dimensional shapes of the puppets. This, of course, is the familiar problem of the relationship between causation and correlation, and, as any undergraduate science major soon learns, correlation is not causation (although a popular joke among scientists is that the two are nevertheless often correlated). (Pigliucci et al. 2006: 2)

The loose relationship between causation and correlation has two consequences that are crucial…. On the one hand, there is the problem that, strictly speaking, it makes no sense to attempt to infer mechanisms directly from patterns…. On the other hand, as Shipley elegantly shows in his book, there is an alternative route that gets (most of) the job done, albeit by a more circuitous and painful route. What one can do is to produce a series of alternative hypotheses about the causal pathways underlying a given set of observations; these hypotheses can then be used to “project” the expected statistical shadows, which can be compared with the observed one. If the projected and actual shadows do not match, one can discard the corresponding causal hypothesis and move on to the next one; if the two shadows do match (within statistical margins of error, of course), then one has identified at least one causal explanation compatible with the observations. As any philosopher or scientist worth her salt knows, of course, this cannot be the end of the process, for more than one causal model may be compatible with the observations, which means that one needs additional observations or refinements of the causal models to be able to discard more wrong explanations and continue to narrow the field. A crucial point here is that the causal models to be tested against the observed statistical shadow can be suggested by the observations themselves, especially if coupled with further knowledge about the system under study (such as details of the ecology, developmental biology, genetics, or past evolutionary history of the populations in question). But the statistical shadows cannot be used as direct supporting evidence for any particular causal model. (Pigliucci et al. 2006: 4)
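
[To make the procedure Pigliucci et al. describe concrete, here is a minimal illustrative sketch, not taken from their book or from Shipley: two candidate causal models are asked to “project” their statistical shadows (a marginal and a conditional correlation), and each projection is compared with the shadow cast by the observed data. The variable names and thresholds are arbitrary choices for the illustration.]

```python
# Sketch of the "statistical shadow" procedure: propose alternative causal models,
# derive the correlation pattern each one projects, and discard models whose
# projected shadow does not match the observed one.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# "Nature" (hidden from the analyst): a causal chain A -> B -> C.
A = rng.normal(size=n)
B = 0.8 * A + rng.normal(size=n)
C = 0.8 * B + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z (residual correlation)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

# The observed "shadow": marginal and conditional association between A and C.
observed = {
    "corr(A,C) far from 0": bool(abs(np.corrcoef(A, C)[0, 1]) > 0.1),
    "corr(A,C | B) near 0": bool(abs(partial_corr(A, C, B)) < 0.1),
}

# Shadows projected by two candidate causal hypotheses.
candidates = {
    "chain A -> B -> C":    {"corr(A,C) far from 0": True,  "corr(A,C | B) near 0": True},
    "collider A -> B <- C": {"corr(A,C) far from 0": False, "corr(A,C | B) near 0": False},
}

for name, projected in candidates.items():
    verdict = "compatible" if projected == observed else "discarded"
    print(f"{name}: {verdict}")
# Only the chain survives -- but, as the text stresses, surviving this test does
# not prove it is the true model; other observationally equivalent models remain.
```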

The second central idea … has been best articulated by John Dupré (1993), and it deals with the proper way to think about reductionism. The term “reductionism” has a complex history, and it evokes strong feelings in both scientists and philosophers (often, though not always, with scientists hailing reductionism as fundamental to the success of science and some philosophers dismissing it as a hopeless epistemic dream). Dupré introduces a useful distinction that acknowledges the power of reductionism in science while at the same time sharply curtailing its scope. His idea is summarized … as two possible scenarios: In one case, reductionism allows one to explain and predict higher-level phenomena (say, development in living organisms) entirely in terms of lower-level processes (say, genetic switches throughout development). In the most extreme case, one can also infer the details of the lower-level processes from the higher-level patterns produced (something we have just seen is highly unlikely in the case of any complex biological phenomenon because of Shipley’s “statistical shadow” effect). This form of “greedy” reductionism … is bound to fail in most (though not all) cases for two reasons. The first is that the relationships between levels of manifestation of reality (e.g., genetic machinery vs. development, or population genetics vs. evolutionary pathways) are many-to-many (again, as pointed out above in our discussion of the shadow theater). The second is the genuine existence of “emergent properties” (i.e., properties of higher-level phenomena that arise from the nonadditive interaction among lower-level processes). It is, for example, currently impossible to predict the physicochemical properties of water from the simple properties of individual atoms of hydrogen and oxygen, or, for that matter, from the properties of H2O molecules and the smattering of necessary impurities. (Pigliucci et al. 2006: 4-5)

Biological Emergence and Pan-selection Hand-Waving

Epigenetic Algorithms

Mechanical metaphors have appealed to many philosophers who sought materialist explanations of life. The definitive work on this subject is T. S. Hall’s Ideas of Life and Matter (1969). Descartes, though a dualist, thought of animal bodies as automata that obeyed mechanical rules. Julien de la Mettrie applied stricter mechanistic principles to humans in L’Homme machine (1748). Clockwork and heat engine models were popular during the Industrial Revolution. Lamarck proposed hydraulic processes as causes of variation. In the late nineteenth century, the embryologists Wilhelm His and Wilhelm Roux theorized about developmental mechanics. However, as biochemical and then molecular biological information expanded, popular machine models were refuted, but it is not surprising that computers should have filled the gap. Algorithms that systematically provide instructions for a progressive sequence of events seem to be suitable analogues for epigenetic procedures. (Reid 2007: 263)

A common error in applying this analogy is the belief that the genetic code, or at least the total complement of an organism’s DNA contains the program for its own differential expression. In the computer age it is easy to fall into that metaphysical trap. However, in the computer age we should also know that algorithms are the creations of programmers. As Charles Babbage (1838) and Robert Chambers (1844) tried to tell us, the analogy is more relevant to creationism than evolutionism. At the risk of offending the sophisticates who have indulged me so far, I want to state the problems in the most simple terms. To me, that is a major goal of theoretical biology, rather than the conversion of life to mathematics. (Reid 2007: 263)

Robert G.B. Reid (2007, 263) Biological Emergences: Evolution by Natural Experiment. The Vienna Series in Theoretical Biology.

If the emergentist-materialist ontology underlying biology (and, as a matter of fact, all the factual sciences) is correct, the bios constitutes a distinct ontic level the entities in which are characterized by emergent properties. The properties of biotic systems are then not (ontologically) reducible to the properties of their components, although we may be able to partially explain and predict them from the properties of their components… The belief that one has reduced a system by exhibiting [for instance] its components, which is indeed nothing but physical and chemical, is insufficient: physics and chemistry do not account for the structure, in particular the organization, of biosystems and their emergent properties (Mahner and Bunge 1997: 197) (Robert 2004: 132)

Jason Scott Robert (2004, 132) Embryology, Epigenesis, and Evolution: Taking Development Seriously

The science of biology enters the twenty-first century in turmoil, in a state of conceptual disarray, although at first glance this is far from apparent. When has biology ever been in a more powerful position to study living systems? The sequencing juggernaut has still to reach full steam, and it is constantly spewing forth all manner of powerful new approaches to biological systems, many of which were previously unimaginable: a revolutionized medicine that reaches beyond diagnosis and cure of disease into defining states of the organism in general; revolutionary agricultural technology built on genomic understanding and manipulation of animals and plants; the age-old foundation of biology, taxonomy, made rock solid, greatly extended, and become far more useful in its new genomic setting; a microbial ecology that is finally able to contribute to our understanding of the biosphere; and the list goes on. (Woese 2005: 99)

All this is an expression of the power inherent in the methodology of molecular biology, especially the sequencing of genomes. Methodology is one thing, however, and understanding and direction another. The fact is that the understanding of biology emerging from the mass of data that flows from the genome sequencing machines brings into question the classical concepts of organism, lineage, and evolution at the same time as it gainsays the molecular perspective that spawned the enterprise. The fact is that the molecular perspective, which so successfully guided and shaped twentieth-century biology, has effectively run its course (as all paradigms do) and no longer provides a focus, a vision of the biology of the future, with the result that biology is wandering willy-nilly into that future. This is a prescription for revolution–conceptual revolution. One can be confident that the new paradigm will soon emerge to guide biology in this new century…. Molecular biology has ceased to be a genuine paradigm, and it is now only a body of (very powerful) technique…. The time has come to shift biology’s focus from trying to understand organisms solely by dissecting them into their parts to trying to understand the fundamental nature of biological organization, of biological form. (Woese 2005: 99-100)

Conceptualizing Cells

We should all take seriously an assessment of biology made by the physicist David Bohm over 30 years ago (and universally ignored):

“It does seem odd … that just when physics is … moving away from mechanism, biology and psychology are moving closer to it. If the trend continues … scientists will be regarding living and intelligent beings as mechanical, while they suppose that inanimate matter is too complex and subtle to fit into the limited categories of mechanism.” [D. Bohm, “Some Remarks on the Notion of Order,” in C. H. Waddington, ed., Towards a Theoretical Biology: 2 Sketches (Edinburgh: Edinburgh University Press, 1969), pp. 18-40.]

The organism is not a machine! Machines are not made of parts that continually turn over and renew; the cell is. A machine is stable because its parts are strongly built and function reliably. The cell is stable for an entirely different reason: It is homeostatic. Perturbed, the cell automatically seeks to reconstitute its inherent pattern. Homeostasis and homeorhesis are basic to all living things, but not machines.

If not a machine, then what is the cell?

Carl R. Woese (2005, 100) on Evolving Biological Organization

(….) When one has worked one’s entire career within the framework of a powerful paradigm, it is almost impossible to look at that paradigm as anything but the proper, if not the only possible, perspective one can have on (in this case) biology. Yet despite its great accomplishments, molecular biology is far from the “perfect paradigm” most biologists take it to be. This child of reductionist materialism has nearly driven the biology out of biology. Molecular biology’s reductionism is fundamentalist, unwavering, and procrustean. It strips the organism from its environment, shears it of its history (evolution), and shreds it into parts. A sense of the whole, of the whole cell, of the whole multicellular organism, of the biosphere, of the emergent quality of biological organization, all have been lost or sidelined. (Woese 2005: 101)

Our thinking is fettered by classical evolutionary notions as well. The deepest and most subtle of these is the concept of variation and selection. How we view the evolution of cellular design or organization is heavily colored by how we view variation and selection. From Darwin’s day onward, evolutionists have debated the nature of the concept, and particularly whether evolutionary change is gradual, saltatory, or of some other nature. However, another aspect of the concept concerns us here more. In the terms I prefer, it is the nature of the phase (or propensity) space in which evolution operates. Looked at one way, variation and selection are all there is to evolution: The evolutionary phase space is wide open, and all manner of things are possible. From this “anything goes” perspective, a given biological form (pattern) has no meaning outside of itself, and the route by which it arises is one out of an enormous number of possible paths, which makes the evolution completely idiosyncratic and, thus, uninteresting (molecular biology holds this position: the molecular biologist sees evolution as merely a series of meaningless historical accidents). (Woese 2005: 101)

The alternative viewpoint is that the evolutionary propensity space is highly constrained, being more like a mountainous terrain than a wide open prairie: Only certain paths are possible, and they lead to particular (a relatively small set of) outcomes. Generic biological form preexists in the same sense that form in the inanimate world does. It is not the case that “anything goes” in the world of biological evolution. In other words, biological form (pattern) is important: It has meaning beyond itself; a deeper, more general significance. Understanding of biology lies, then, in understanding the evolution and nature of biological form (pattern). Explaining biological form by variation and selection hand-waving argumentation is far from sufficient: The motor does not explain where the car goes. (Woese 2005: 101-102)

False Apostles of Rationality

In April 1998, I traveled from London to the United States to interview several economics and finance professors. It was during this trip that I learned how derivatives had broken down the wall of skepticism between Wall Street and academia. My trip started at the University of Chicago, whose economists had become famous for their theories about market rationality. They argued that markets were supposed to reach equilibrium, which means that everyone makes an informed judgment about the risk associated with different assets, and the market adjusts so that the risk is correctly compensated for by returns. Also, markets are supposed to be efficient—all pertinent information about a security, such as a stock, is already factored into its price. (Dunbar 2011, 36-37)

At the university’s Quadrangle Club, I enjoyed a pleasant lunch with Merton Miller, a professor whose work with Franco Modigliani in the 1950s had won him a Nobel Prize for showing that companies could not create value by changing their mix of debt and equity. A key aspect of Miller-Modigliani (as economists call the theory) was that if a change in the debt-equity mix did influence stock prices, traders could build a money machine by buying and shorting (borrowing a stock or bond to sell it and then buying it back later) in order to gain a free lunch. Although the theory was plagued with unrealistic assumptions, the idea that traders might build a mechanism like this was prescient. (Dunbar 2011, 37)
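
[A minimal numerical sketch of the arbitrage mechanism Dunbar alludes to, using hypothetical figures that are not from his book or from the original Miller-Modigliani argument: if two firms with identical operating income trade at different total values because of their debt-equity mix, an investor can manufacture “homemade leverage” and pocket the difference.]

```python
# Homemade-leverage arbitrage sketch (illustrative numbers only).
X = 10.0      # identical annual operating income of both firms (assumed)
r = 0.05      # borrowing rate available to firms and investors alike (assumed)

# Firm U: all equity, total value 100. Firm L: debt 40 plus equity 70 (trading rich).
E_U, D_L, E_L = 100.0, 40.0, 70.0
alpha = 0.01  # the investor holds 1% of L's equity

income_from_L = alpha * (X - r * D_L)              # 1% of the levered equity income
# Replicate it: sell the L stake, borrow alpha*D_L personally, buy alpha of U.
income_replicated = alpha * X - r * (alpha * D_L)  # identical cash flow each year
upfront_gain = alpha * (E_L + D_L) - alpha * E_U   # cash left over today

print(f"income from L stake:     {income_from_L:.2f}")
print(f"income replicated via U: {income_replicated:.2f}")
print(f"upfront arbitrage gain:  {upfront_gain:.2f}")
# Same income stream, positive cash up front: the price gap cannot persist.
```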

Miller had a profound impact on the current financial world in three ways. He:

  1. Mentored academics who further developed his theoretical mechanism, called arbitrage.
  2. Created the tools that made the mechanism feasible.
  3. Trained many of the people who went to Wall Street and implemented it.

One of the MBA students who studied under Miller in the 1970s was John Meriwether, who went to work for the Wall Street firm Salomon Brothers. By the end of that decade, he had put into practice what Miller only theorized about, creating a trading desk at Salomon specifically aimed at profiting from arbitrage opportunities in the bond markets. Meriwether and his Salomon traders, together with a handful of other market-making firms, used the new futures contracts to find a mattress in securities markets that otherwise would have been too dangerous to trade in. Meanwhile, Miller and other academics associated with the University of Chicago had been advising that city’s long-established futures exchanges on creating new contracts linked to interest rates, stock market indexes, and foreign exchange markets. (Dunbar 2011, 37)

The idea of arbitrage is an old one, dating back to the nineteenth century, when disparities in the price of gold in different cities motivated some speculators (including Nathan Rothschild, founder of the Rothschild financial dynasty) to buy it where it was cheap and then ship it and sell it where it was more expensive. But in the volatile markets of the late 1970s, futures seemed to provide something genuinely different and exciting, bringing together temporally and geographically disparate aspects of buying and selling into bundles of transactions. Buy a basket of stocks reflecting an index, and sell an index future. Buy a Treasury bond, and sell a Treasury bond future. It was only the difference between the fundamental asset (called an underlying asset) and its derivative that mattered, not the statistics or economic theories that supposedly provided a benchmark for market prices. (Dunbar 2011, 38)

In the world Merton Miller lived in, the world of the futures exchanges (he was chairman emeritus of the Chicago Mercantile Exchange when I met him), they knew they needed speculators like Meriwether. Spotting arbitrage opportunities between underlying markets and derivatives enticed the likes of Salomon to come in and trade on that exchange. That provided liquidity to risk-averse people who wanted to use the exchange for hedging purposes. And if markets were efficient—in other words, if people like Meriwether did their job—then the prices of futures contracts should be mathematically related to the underlying asset using “no-arbitrage” principles. (Dunbar 2011, 38)
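
[A minimal sketch of the “no-arbitrage” relation mentioned above, using the standard cost-of-carry formula and made-up numbers rather than anything from Dunbar: the fair futures price is tied to the spot price of the underlying, and a deviation invites exactly the cash-and-carry trade that speculators like Meriwether supplied.]

```python
# Cost-of-carry fair value of an index future and the trade a mispricing invites.
import math

S = 1000.0   # spot index level (hypothetical)
r = 0.05     # risk-free rate, continuously compounded (assumed)
q = 0.02     # dividend yield of the index (assumed)
T = 0.5      # time to delivery in years

fair_future = S * math.exp((r - q) * T)   # ~1015.1

market_future = 1025.0                    # suppose the future trades rich
if market_future > fair_future:
    # Cash-and-carry arbitrage: buy the index basket now, sell the future,
    # carry the position to delivery; the gap is (approximately) riskless profit.
    profit_per_unit = market_future - fair_future
    print(f"sell future / buy basket, lock in ~{profit_per_unit:.2f} per unit")
```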

Bending Reality to Match the Textbook

The next leg of my U.S. trip took me to Boston and Connecticut. There I met two more Nobel-winning finance professors—Robert Merton and Myron Scholes—who took Miller’s idea to its logical conclusion at a hedge fund called Long-Term Capital Management (LTCM). Scholes had benefited directly from Miller’s mentorship as a University of Chicago PhD candidate, while Merton had studied under Paul Samuelson at MIT. What made Merton and Scholes famous (with the late Fischer Black) was their contemporaneous discovery of a formula for pricing options on stocks and other securities. (Dunbar 2011, 38)

Again, the key idea was based on arbitrage, but this time the formula was much more complicated. The premise: A future or forward contract is very similar (although not identical) to the underlying security, which is why one can be used to synthesize exposure to the other. An option contract, on the other hand, is asymmetrical. It lops off the upside or downside of the security’s performance—it is “nonlinear” in mathematical terms. Think about selling options in the same way as manufacturing a product, like a car. How many components do you need? To manufacture a stock option using a single purchase of underlying stock is impossible because the linearity of the latter can’t keep up with the nonlinearity of the former. Finding the answer to the manufacturing problem meant breaking up the lifetime of an option into lots of little bits, in the same way that calculus helps people work out the trajectory of a tennis ball in flight. The difference is that stock prices zigzag in a way that looks random, requiring a special kind of calculus that Merton was particularly good at. The math gave a recipe for smoothly tracking the option by buying and selling varying amounts of the underlying stock over time. Because the replication recipe played catch-up with the moves in the underlying market (Black, Scholes, and Merton didn’t claim to be fortune-tellers), it cost money to execute. In other words you can safely manufacture this nonlinear financial product called an option, but you have to spend a certain amount of money trading in the market in order to do so. But why believe the math? (Dunbar 2011, 38-39)
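
[The “recipe” described above is, in modern textbook terms, delta hedging. The following sketch is a standard illustration, not Dunbar’s or Merton’s own construction: it prices a call with the Black-Scholes formula, rebalances a stock-plus-cash portfolio along one simulated price path, and checks that the cost of replication comes out close to the formula’s price. All parameter values are hypothetical.]

```python
# Delta-hedged replication of a European call along one simulated path.
import math
import numpy as np
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price and delta of a European call."""
    if T <= 0:
        return max(S - K, 0.0), 1.0 if S > K else 0.0
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2), N(d1)

# Hypothetical contract and market parameters.
S0, K, r, sigma, T, steps = 100.0, 100.0, 0.02, 0.2, 1.0, 252
dt = T / steps
rng = np.random.default_rng(1)

price0, delta = bs_call(S0, K, r, sigma, T)
cash = price0 - delta * S0        # sell the option, buy delta shares, rest in cash
S = S0
for i in range(1, steps + 1):
    S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.normal())
    cash *= math.exp(r * dt)                  # cash account accrues interest
    _, new_delta = bs_call(S, K, r, sigma, T - i * dt)
    cash -= (new_delta - delta) * S           # rebalance the stock holding
    delta = new_delta

hedging_error = cash + delta * S - max(S - K, 0.0)
print(f"Black-Scholes price: {price0:.2f}, hedging error at expiry: {hedging_error:.3f}")
# With more frequent rebalancing (and the model's assumptions holding), the error
# shrinks -- this is the sense in which the option can be "manufactured."
```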

The breakthrough came next. Imagine that the option factory is up and running and selling its products in the market. By assuming that smart, aggressive traders like Meriwether would snap up any mispriced options and build their own factory to pick them apart again using the mathematical recipe, Black, Scholes, and Merton followed in Miller’s footsteps with a no-arbitrage rule. In other words, you’d better believe the math because, otherwise, traders will use it against you. That was how the famous Black-Scholes formula entered finance. (Dunbar 2011, 39, emphasis added)

When the formula was first published in the Journal of Political Economy in 1973, it was far from obvious that anyone would actually try to use its hedging recipe to extract money from arbitrage, although the Chicago Board Options Exchange (CBOE) did start offering equity option contracts that year. However, there was now an added incentive to play the arbitrage game because Black, Scholes, and Merton had shown that (subject to some assumptions) their formula exorcised the uncertainty in the returns on underlying assets. (Dunbar 2011, 39)

Over the following twenty-five years, the outside world would catch up with the eggheads in the ivory tower. Finance academics who had clustered around Merton at MIT (and elsewhere) moved to Wall Street. Trained to spot and replicate mispriced options across all financial markets, they became trading superstars. By the time Meriwether left Salomon in 1992, its proprietary trading group was bringing in revenues of over $1 billion a year. He set up his own highly lucrative hedge fund, LTCM, which made $5 billion from 1994 to 1997, earning annual returns of over 40 percent. By April 1998, Merton and Scholes were partners at LTCM and making millions of dollars per year, a nice bump from a professor’s salary. (Dunbar 2011, 40)

(….) It is hard to overemphasize the impact of this financial revolution. The neoclassical economic paradigm of equilibrium, efficiency, and rational expectations may have reeled under the weight of unrealistic assumptions and assaults of behavioral economics. But here was the classic “show me the money” riposte. A race of superhumans had emerged at hedge funds and investment banks whose rational self-interest made the theory come true and earned them billions in the process. (Dunbar 2011, 40)

If there was a high priest behind this, it had to be Merton, who in a 1990 speech talked about “blueprints” and “production technologies” that could be used for “synthesizing an otherwise nonexistent derivative security.” He wrote of a “spiral of innovation,” wherein the existence of markets in simpler derivatives would serve as a platform for the invention of new ones. As he saw his prescience validated, Merton would increasingly adopt a utopian tone, arguing that derivatives contracts created by large financial institutions could solve the risk management needs of both families and emerging market nations. To see the spiral in action, consider an over-the-counter derivative offered by investment banks from 2005 onward: an option on the VIX index. If for some reason you were financially exposed to the fear gauge, such a contract would protect you against it. The new option would be dynamically hedged by the bank, using VIX futures, providing liquidity to the CBOE contract. In turn, that would prompt arbitrage between the VIX and the S&P 500 options used to calculate it, ultimately leading to trading in the S&P 500 index itself. (Dunbar 2011, 40-41)

As this example demonstrates, Merton’s spiral was profitable in the sense that every time a new derivative product was created, an attendant retinue of simpler derivatives or underlying securities needed to be traded in order to replicate it. Remember, for market makers, volume normally equates to profit. For the people whose job it was to trade the simpler building blocks—the “flow” derivatives or cash products used to manufacture more complex products—this amounted to a safe opportunity to make money—or in other words, a mattress. In some markets, the replication recipe book would create more volume than the fundamental sources of supply and demand in that market. (Dunbar 2011, 41)

The banks started aggressively recruiting talent that could handle the arcane, complicated mathematical formulas needed to identify and evaluate these financial replication opportunities. Many of these quantitative analysts—quants—were refugees from academic physics. During the 1990s, research in fundamental physics was beset by cutbacks in government funding and a feeling that after the heroic age of unified theories and successful particle experiments, the field was entering a barren period. Wall Street and its remunerative rewards were just too tempting to pass up. Because the real-world uncertainty was supposedly eliminated by replication, quants did not need to make the qualitative judgments required of traditional securities analysts. What they were paid to get right was the industrial problem of derivative production: working out the optimal replication recipe that would pass the no-arbitrage test. Solving these problems was an ample test of PhD-level math skills. (Dunbar 2011, 41)

On the final leg of my trip in April 1998, I went to New York, where I had brunch with Nassim Taleb, an option trader at the French bank Paribas (now part of BNP Paribas). Not yet the fiery, best-selling intellectual he subsequently became (author of 2007’s The Black Swan), Taleb had already attacked VAR in a 1997 magazine interview as “charlatanism,” but he was in no doubt about how options theory had changed the world. “Merton had the premonition,” Taleb said admiringly. “One needs arbitrageurs to make markets efficient, and option markets provide attractive opportunities for replicators. We are indeed lucky . . . the world of finance has agreed to resemble the textbook, in order to operate better.” (Dunbar 2011, 42)

Although Taleb would subsequently change his views about how well the world matched up with Merton’s textbook, the tidal wave of money churned up by derivatives in free market economics carried most people along in its wake. People in the regulatory community found it hard to resist this intellectual juggernaut. After all, many of them had studied economics or business, where equilibrium and efficiency were at the heart of the syllabus. Confronted with the evidence of derivatives market efficiency and informational advantages, why should they stand in the way? (Dunbar 2011, 42)

Arrangers as Market Makers

It is easy to view investment banks and other arrangers as mechanics who simply operated the machinery that linked lenders to capital markets. In reality, arrangers orchestrated subprime lending behind the scenes. Drawing on his experience as a former derivatives trader, Frank Partnoy wrote, “The driving force behind the explosion of subprime mortgage lending in the U.S. was neither lenders nor borrowers. It was the arrangers of CDOs. They were the ones supplying the cocaine. The lenders and borrowers were just mice pushing the button.”

Behind the scenes, arrangers were the real ones pulling the strings of subprime lending, but their role received scant attention. One explanation for this omission is that the relationships between arrangers and lenders were opaque and difficult to dissect. Furthermore, many of the lenders who could have “talked” went out of business. On the investment banking side, the threat of personal liability may well have discouraged people from coming forward with information.

The evidence that does exist comes from public documents and the few people who chose to spill the beans. One of these is William Dallas, the founder and former chief executive officer of a lender, Ownit. According to the New York Times, Dallas said that investment banks pressured his firm to make questionable loans for packaging into securities. Merrill Lynch explicitly told Dallas to increase the number of stated-income loans Ownit was producing. The message, Dallas said, was obvious: “You are leaving money on the table—do more [low-doc loans].”

Publicly available documents echo this depiction. An annual report from Fremont General portrayed how Fremont changed its mix of loan products to satisfy demand from Wall Street:

The company [sought] to maximize the premiums on whole loan sales and securitizations by closely monitoring the requirements of the various institutional purchasers, investors and rating agencies, and focusing on originating the types of loans that met their criteria and for which higher premiums were more likely to be realized. (The Subprime Virus: Reckless Credit, Regulatory Failure, and Next Steps by Kathleen C. Engel, Patricia A. McCoy, 2011, 56-57)

The Regulatory Genome

THE SYSTEM OF HEREDITY AS A CONTROL SYSTEM

In a world dominated by thermodynamical forces of disorder and disintegration, all living systems, sooner or later, fall in disarray and succumb to those forces. However, living systems on Earth have survived and evolved for ~3 billion years. They succeeded in surviving because a. during their lifetime they are able to maintain the normal structure by compensating for the lost or disintegrated elements of that structure, and b. they produce offspring. The ability to maintain the normal structure, despite its continual erosion, indicates that living systems have information for their normal structure, can detect deviations from the “normalcy” and restore the normal structure. This implies the presence and functioning of a control system in living organisms. In unicellulars the control system, represented by the genome, the apparatus for gene expression and cell metabolism, functions as a system of heredity during reproduction. Homeostasis and other facts on the development of some organs and phenotypic characters in metazoans prove that a hierarchical control system, involving the CNS [Central Nervous System] and the neuroendocrine system, is also operational in this group. It is hypothesized that, in analogy with unicellulars, the control system in metazoans, in the process of their reproduction, serves as an epigenetic system of heredity.

Nelson R. Cabej (2004, 11) Neural Control of Development: The Epigenetic Theory of Heredity

A general character of genomic programs for development is that they progressively regulate their own readout, in contrast, for example, to the way architects’ programs (blueprints) are used in constructing buildings. All of the structural characters of an edifice, from its overall form to local aspects such as placement of wiring and windows, are prespecified in an architectural blueprint. At first glance the blueprints for a complex building might seem to provide a good metaphoric image for the developmental regulatory program that is encoded in the DNA. Just as in considering organismal diversity, it can be said that all the specificity is in the blueprints: A railway station and a cathedral can be built of the same stone, and what makes the difference in form is the architectural plan. Furthermore, in bilaterian development, as in an architectural blueprint, the outcome is hardwired, as each kind of organism generates only its own exactly predictable, species-specific body plan. But the metaphor is basically misleading, in the way the regulatory program is used in development, compared to how the blueprint is used in construction. In development it is as if the wall, once erected, must turn around and talk to the ceiling in order to place the windows in the right positions, and the ceiling must use the joint with the wall to decide where its wires will go, etc. The acts of development cannot all be prespecified at once, because animals are multicellular, and different cells do different things with the same encoded program, that is, the DNA regulatory genome. In development, it is only the potentialities for cis-regulatory information processing that are hardwired in the DNA sequence. These are utilized, conditionally, to respond in different ways to the diverse regulatory states encountered (in our metaphor that is actually the role of the human contractor, who uses something outside of the blueprint, his brain, to select the relevant subprogram at each step). The key, very unusual feature of the genomic regulatory program for development is that the inputs it specifies in the cis-regulatory sequences of its own regulatory and signaling genes suffice to determine the creation of new regulatory states. Throughout, the process of development is animated by internally generated inputs. “Internal” here means not only nonenvironmental (i.e., from within the animal rather than external to it) but also that the input must operate in the intranuclear compartments as a component of regulatory state, or else it will be irrelevant to the process of development. (Davidson 2006: 16-17)

(….) The link between the informational transactions that underlie development and the observed phenomena of development is “specification.” Developmental specification is defined phenomenologically as the process by which cells acquire the identities or fates that they and their progeny will adopt. But in terms of mechanism, specification is neither more nor less than that which results in the institution of new transcriptional regulatory states. Thereby specification results from differential expression of genes, the readout of particular genetic subprograms. For specification to occur, genes have to make decisions, depending on the new inputs they receive, and this brings us back to the information processing capacities of the cis-regulatory modules of the gene regulatory networks that make regulatory state. The point cannot be overemphasized that were it not for the ability of cis-regulatory elements to integrate spatial signaling inputs together with multiple inputs of intracellular origin, then specification, and thus development, could not occur. (Davidson 2006: 17)
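
[A toy sketch of the “information processing” Davidson describes, offered only as an illustration of the idea and not as his model: a cis-regulatory module is treated here as a fixed logic function of the transcription factors and signals present in a cell, so the same hardwired DNA “program” produces different readouts in different regulatory states. The factor names are invented.]

```python
# Toy cis-regulatory module: one hardwired condition, many cell-specific readouts.
def gene_A_module(regulatory_state):
    # Expressed only where an activator and a spatial signal are both present
    # and a repressor is absent (hypothetical inputs, for illustration only).
    return ("TF_activator" in regulatory_state
            and "signal_wnt" in regulatory_state
            and "repressor_X" not in regulatory_state)

cells = {
    "cell_1": {"TF_activator", "signal_wnt"},                   # expresses gene A
    "cell_2": {"TF_activator"},                                 # no signal: silent
    "cell_3": {"TF_activator", "signal_wnt", "repressor_X"},    # repressed
}

for name, state in cells.items():
    print(name, "expresses gene A:", gene_A_module(state))
# Same "genome" (the function), different outputs: the readout depends on the
# regulatory state each cell presents to it, as the quoted passage emphasizes.
```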

Evolution by Natural Experiment

This is the age of the evolution of Evolution. All thoughts that the Evolutionist works with, all theories and generalizations, have themselves evolved and are now being evolved. Even were his theory perfected, its first lesson would be that it was itself but a phase of the Evolution of other opinion, no more fixed than a species, no more final than the theory which it displaced.

— Henry Drummond, 1883

Charles Darwin described The Origin of Species as “one long argument” for evolution by natural selection. Subsequently Ernst Mayr applied the expression to the continuing debate over Darwin’s ideas. My explanation of why the debate lingers is that although Darwin was right about the reality of evolution, his causal theory was fundamentally wrong, and its errors have been compounded by neo-Darwinism. In 1985 my book Evolutionary Theory: The Unfinished Synthesis was published. In it I discussed Darwinian problems that have never been solved, and the difficulties suffered historically by holistic approaches to evolutionary theory. The most important of these holistic treatments was “emergent evolution,” which enjoyed a brief moment of popularity about 80 years ago before being eclipsed when natural selection was mathematically formalized by theoretical population geneticists. I saw that the concept of biological emergence could provide a matrix for a reconstructed evolutionary theory that might displace selectionism. At that time, I naively thought that there was a momentum in favor of such a revision, and that there were enough open-minded, structuralistic evolutionists to displace the selectionist paradigm within a decade or so. Faint hope! (Robert G. B. Reid. Biological Emergences: Evolution by Natural Experiment (Vienna Series in Theoretical Biology) (Kindle Locations 31-37). Kindle Edition.)

Instead, the conventional “Modern Synthesis” produced more extreme forms of selectionism. Although some theoreticians were dealing effectively with parts of the problem, I decided I should try again, from a more general biological perspective. This book is the result. (Reid 2007, Preface)

The main thrust of the book is an exploration of evolutionary innovation, after a critique of selectionism as a mechanistic explanation of evolution. Yet it is impossible to ignore the fact that the major periods of biological history were dominated by dynamic equilibria where selection theory does apply. But emergentism and selectionism cannot be synthesized within an evolutionary theory. A “biological synthesis” is necessary to contain the history of life. I hope that selectionists who feel that I have defiled their discipline might find some comfort in knowing that their calculations and predictions are relevant for most of the 3.5 billion years that living organisms have inhabited the Earth, and that they forgive me for arguing that those calculations and predictions have little to do with evolution. (Reid 2007, Preface)

Evolution is about change, especially complexifying change, not stasis. There are ways in which novel organisms can emerge with properties that are not only self-sufficient but more than enough to ensure their status as the founders of kingdoms, phyla, or orders. And they have enough generative potential to allow them to diversify into a multiplicity of new families, genera, and species. Some of these innovations are all-or-none saltations. Some of them emerge at thresholds in lines of gradual and continuous evolutionary change. Some of them are largely autonomous, coming from within the organism; some are largely imposed by the environment. Their adaptiveness comes with their generation, and their adaptability may guarantee success regardless of circumstances. Thus, the filtering, sorting, or eliminating functions of natural selection are theoretically redundant. (Reid 2007, Preface)

Therefore, evolutionary theory should focus on the natural, experimental generation of evolutionary changes, and should ask how they lead to greater complexity of living organisms. Such progressive innovations are often sudden, and have new properties arising from new internal and external relationships. They are emergent. In this book I place such evolutionary changes in causal arenas that I liken to a three-ring circus. For the sake of bringing order to many causes, I deal with the rings one at a time, while noting that the performances in each ring interact with each other in crucial ways. One ring contains symbioses and other kinds of biological association. In another, physiology and behavior perform. The third ring contains developmental or epigenetic evolution. (Reid 2007, Preface)

After exploring the generative causes of evolution, I devote several chapters to subtheories that might arise from them, and consider how they might be integrated into a thesis of emergent evolution. In the last chapter I propose a biological synthesis. (Reid 2007, Preface)

~ ~ ~

Introduction: Re-Invention of Natural Selection

I regard it as unfortunate that the theory of natural selection was first developed as an explanation for evolutionary change. It is much more important as an explanation for the maintenance of adaptation.
George Williams, 1966

Natural selection cannot explain the origin of new variants and adaptations, only their spread.
John Endler, 1986

We could, if we wished, simply replace the term natural selection with dynamic stabilization….
Brian Goodwin, 1994

Nobody is going to re-invent natural selection….
Nigel Hawkes, 1997

Ever since Charles Darwin published The Origin of Species, it has been widely believed that natural selection is the primary cause of evolution. However, while George Williams and John Endler take the trouble to distinguish between the causes of variation and what natural selection does with them, it is the latter that matters to them. In contrast, Brian Goodwin does not regard natural selection as a major evolutionary force, but as a process that results in stable organisms, populations, and ecosystems. He would prefer to understand how evolutionary novelties are generated, a question that frustrated Darwin for all of his career. (Reid 2007)

During the twentieth century, Darwin’s followers eventually learned how chromosomal recombination and gene mutation could provide variation as fuel for natural selection. They also re-invented Darwinian evolutionary theory as neo-Darwinism by formalizing natural selection mathematically. Then they redefined it as differential survival and reproduction, which entrenched it as the universal cause of evolution. Nigel Hawkes’s remark that natural selection cannot be re-invented demonstrates its continued perception as an incorruptible principle. But is it even a minor cause of evolution? (Reid 2007)

Natural selection supposedly builds order from purely random accidents of nature by preserving the fit and discarding the unfit. On the face of it, that makes more than enough sense to justify its importance. Additionally, it avoids any suggestion that a supernatural creative hand has ever been at work. But it need not be the only mechanistic option. And the current concept of natural selection, which already has a history of re-invention, is not immune to further change. Indeed, if its present interpretation as the fundamental mechanism of evolution were successfully challenged, some of the controversies now swirling around the modern paradigm might be resolved. (Reid 2007)

A Paradigm in Crisis?

Just what is the evolutionary paradigm that might be in crisis? It is sometimes called “the Modern Synthesis.” Fundamentally it comes down to a body of knowledge, interpretation, supposition, and extrapolation, integrated with the belief that natural selection is the all-sufficient cause of evolution, if it is assumed that variation is caused by gene mutations. The paradigm has built a strong relationship between ecology and evolution, and has stimulated a huge amount of research into population biology. It has also been the perennial survivor of crises that have ebbed and flowed in the tide of evolutionary ideas. Yet signs of discord are visible in the strong polarization between those who see the whole organism as a necessary component of evolution and those who want to reduce all of biology to the genes. Since neo-Darwinists are also hypersensitive to creationism, they treat any criticism of the current paradigm as a breach of the scientific worldview that will admit the fundamentalist hordes. Consequently, questions about how selection theory can claim to be the all-sufficient explanation of evolution go unanswered or ignored. Could most gene mutations be neutral, essentially invisible to natural selection, their distribution simply adrift? Did evolution follow a pattern of punctuated equilibrium, with sudden changes separated by long periods of stasis? Were all evolutionary innovations gene-determined? Are they all adaptive? Is complexity built by the accumulation of minor, selectively advantageous mutations? Are variations completely random, or can they be directed in some way? Is the generation of novelty not more important than its subsequent selection? (Reid 2007)

Long before Darwin, hunters, farmers, and naturalists were familiar with the process that he came to call “natural selection.” And they had not always associated it with evolution. It is recognized in the Bible, a Special Creation text. Lamarck had thought that evolution resulted from a universal progressive force of nature, not from natural selection. Organisms responded to adaptational needs demanded by their environments. The concept of adaptation led Lamarck’s rival, Georges Cuvier, to argue the opposite. If existing organisms were already perfectly adapted, change would be detrimental, and evolution impossible. Nevertheless, Cuvier knew that biogeography and the fossil record had been radically altered by natural catastrophes. These Darwin treated as minor aberrations during the long history of Earth. He wanted biological and geographical change to be gradual, so that natural selection would have time to make appropriate improvements. The process of re-inventing the events themselves to fit the putative mechanism of change was now under way. (Reid 2007)

Gradualism had already been brought to the fore when geologists realized that what was first interpreted as the effects of the sudden Biblical flood was instead the result of prolonged glaciation. Therefore, Darwin readily fell in with Charles Lyell’s belief that geological change had been uniformly slow. Now, more than a century later, catastrophism has been resurrected by confirmation of the K-T (Cretaceous-Tertiary) bolide impact that ended the Cretaceous and the dinosaurs. Such disasters are also linked to such putative events as the Cambrian “Big Bang of Biology,” when all of the major animal phyla seem to have appeared almost simultaneously. The luck of the draw has returned to evolutionary theory. Being in the right place at the right time during a cataclysm might have been the most important condition of survival and subsequent evolution. (Reid 2007)

Beyond the fringe of Darwinism, there are heretics who believe the neo-Lamarckist tenet that the environment directly shapes the organism in a way that can be passed on from one generation to the next. They argue that changes imposed by the environment, and by the behavior of the organism, are causally prior to natural selection. Nor is neo-Lamarckism the only alternative. Some evolutionary biologists, for example, think that the establishment of unique symbioses between different organisms constituted major evolutionary novelties. Developmental evolutionists are reviewing the concept that evolution was not gradual but saltatory (i.e., advancing in leaps to greater complexity). However, while they emphasize the generation of evolutionary novelty, they accommodate natural selection as the complementary and essential causal mechanism. (Reid 2007)

Notes on Isms

Before proceeding further, I want to explain how I arbitrarily, but I hope consistently, use the names that refer to evolutionary movements and their originators. “Darwinian” and “Lamarckian” refer to any idea or interpretation that Darwin and Lamarck originated or strongly adhered to. Darwinism is the paradigm that rose from Darwinian concepts, and Lamarckism is the movement that followed Lamarck. They therefore include ideas that Darwin and Lamarck may not have thought of or emphasized, but which were inspired by them and consistent with their thinking. Lamarck published La philosophie zoologique in 1809, and Lamarckism lasted for about 80 years until neo-Lamarckism developed. Darwinism occupied the time frame between the publication of The Origin of Species (1859) and the development of neo-Darwinism. The latter came in two waves. The first was led by August Weismann, who was out to purify evolutionary theory of Darwinian vacillation. The second wave, which arose in theoretical population genetics in the 1920s, quantified and redefined the basic tenets of Darwinism. Selectionism is the belief that natural selection is the primary cause of evolution. Its influence permeates the Modern Synthesis, which was originally intended to bring together all aspects of biology that bear upon evolution by natural selection. Niles Eldredge (1995) uses the expression “ultra-Darwinian” to signify an extremist position that makes natural selection an active causal evolutionary force. For grammatical consistency, I prefer “ultra-Darwinist,” which was used in the same sense by Pierre-Paul Grassé in 1973. (Reid 2007)

The Need for a More Comprehensive Theory

I have already hinted that the selectionist paradigm is either insufficient to explain evolution or simply dead wrong. Obviously, I want to find something better. Neo-Darwinists themselves concede that while directional selection can cause adaptational change, most natural selection is not innovative. Instead, it establishes equilibrium by removing extreme forms and preserving the status quo. John Endler, the neo-Darwinist quoted in one of this chapter’s epigraphs, is in good company when he says that novelty has to appear before natural selection can operate on it. But he is silent on how novelty comes into being, and how it affects the internal organization of the organism, questions much closer to the fundamental process of evolution. He is not being evasive; the issue is just irrelevant to the neo-Darwinist thesis. (Reid 2007)

Darwin knew that nature had to produce variations before natural selection could act, so he eventually co-opted Lamarckian mechanisms to make his theory more comprehensive. The problem had been caught by other evolutionists almost as soon as The Origin of Species was first published. Sir Charles Lyell saw it clearly in 1860, before he even became an evolutionist:

If we take the three attributes of the deity of the Hindoo Triad, the Creator, Brahmah, the preserver or sustainer, Vishnu, & the destroyer, Siva, Natural Selection will be a combination of the two last but without the first, or the creative power, we cannot conceive the others having any function.

Consider also the titles of two books: St. George Jackson Mivart’s On the Genesis of Species (1872) and Edward Cope’s Origin of the Fittest (1887). Their play on Darwin’s title emphasized the need for a complementary theory of how new biological phenomena came into being. Soon, William Bateson’s Materials for the Study of Variation Treated with Especial Regard to Discontinuity in the Origin of Species (1894) was to distinguish between the emergent origin of novel variations and the action of natural selection. (Reid 2007)

The present work resumes the perennial quest for explanations of evolutionary genesis and will demonstrate that the stock answer (point mutations and recombinations of the genes, acted upon by natural selection) does not suffice. There are many circumstances under which novelties emerge, and I allocate them to arenas of evolutionary causation that include association (symbiotic, cellular, sexual, and social), functional biology (physiology and behavior), and development and epigenetics. Think of them as three linked circus rings of evolutionary performance, under the “big top” of the environment. Natural selection is the conservative ringmaster who ensures that tried-and-true traditional acts come on time and again. It is the underlying syndrome that imposes dynamic stability: its hypostasis (a word that has the additional and appropriate meaning of “significant constancy”). (Reid 2007)

Selection as Hypostasis

The stasis that natural selection enforces is not unchanging inertia. Rather, it is a state of adaptational and neutral flux that involves alterations in the numerical proportions of particular alleles and types of organism, and even minor extinctions. It does not produce major progressive changes in organismal complexity. Instead, it tends to lead to adaptational specialization. Natural selection may not only thwart progress toward greater complexity, it may result in what Darwin called retrogression, whereby complex and adaptable organisms revert to simplified conditions of specialization. This is common among parasites, but not unique to them. For example, our need for ascorbic acid (vitamin C) results from the regression of a synthesis pathway that was functional in our mammalian ancestors. (Reid 2007)

On the positive side, it may be argued that dynamic stability, at any level of organization, ensures that the foundations from which novelties emerge are solid enough to support them on the rare occasions when they escape its hypostasis. A world devoid of the agents of natural selection might be populated with kludges: gimcrack organisms of the kind that might have been designed by Heath Robinson, Rube Goldberg, or Tim Burton. The enigmatic “bizarre and dream-like” Hallucigenia of the Burgess Shale springs to mind. Even so, if physical and embryonic factors constrain some of the most extreme forms before they mature and reproduce, the benefits of natural selection are redundant. Novelty that is first and foremost integrative (i.e., allows the organism to operate better as a whole) has a quality that is resistant to the slings and arrows of selective fortune. (Reid 2007)

Natural selection has to do with relative differences in survival and reproduction and the numerical distribution of existent variations that have already evolved. In this form it requires no serious re-invention. But selectionism goes on to infer that natural selection creates complex novelty by saving adaptive features that can be further built upon. Such qualities need no saving by metaphorical forces. Having the fundamental property of persistence that characterizes life, they can look after themselves. As Ludwig von Bertalanffy remarked in 1967, “favored survival of ‘better’ precursors of life presupposes self-maintaining, complex, open systems which may compete; therefore natural selection cannot account for the origin of those systems.” These qualities were in the nature of the organisms that first emerged from non-living origins, and they are prior to any action of natural selection. Compared to them, ecological competitiveness is a trivial consequence. (Reid 2007)

But to many neo-Darwinists the only “real” evolution is just that: adaptation, the selection of random genetic changes that better fit the present environment. Adaptation is appealingly simple, and many good little examples crop up all the time. However, adaptation only reinforces the prevailing circumstances, and represents but a fragment of the big picture of evolution. Too often, genetically fixed adaptation is confused with adaptability, the self-modification of an individual organism that allows responsiveness to internal and external change. The logical burden of selectionism is compounded by the universally popular metaphor of selection pressure, which under some conditions of existence is supposed to force appropriate organismic responses to pop out spontaneously. How can a metaphor, however heuristic, be a biological cause? As a metaphor, it is at best an inductive guide that must be used with caution. (Reid 2007)

Even though metaphors cannot be causes, their persuasive powers have given natural selection and selection pressure perennial dominance of evolutionary theory. It is hard enough to sideline them in order to get to generative causes, let alone to convince anyone that they are obstructive. Darwin went so far as to make this admission:

In the literal sense of the word, no doubt, natural selection is a false term…. It has been said that I speak of natural selection as an active power or Deity…. Everyone knows what is meant and is implied by such metaphorical expressions; and they are almost necessary for brevity…. With a little familiarity such superficial objections will be forgotten. [Darwin 1872, p. 60.]

Alas, in every subsequent generation of evolutionists, familiarity has bred contempt as well as forgetfulness for such “superficial” objections. (Reid 2007)

Are All Changes Adaptive?

Here is one of my not-so-superficial objections. The persuasiveness of the selection metaphor gets extra clout from its link with the vague but pervasive concept of adaptiveness, which can supposedly be both created and preserved by natural selection. For example, a book review insists that a particular piece of pedagogy be “required reading for non-Darwinist `evolutionists’ who are trying to make sense of the world without the relentless imperatives of natural selection and the adaptive trends it produces.” (Reid 2007)

Adaptiveness, as a quality of life that is “useful,” or competitively advantageous, can always be applied in ways that seem to make sense. Even where adaptiveness seems absent, there is confidence that adequate research will discover it. If equated with integrativeness, adaptiveness is even a necessity of existence. The other day, one of my students said to me: “If it exists, it must have been selected.” This has a pleasing parsimony and finality, just like “If it exists it must have been created.” But it implies that anything that exists must not only be adaptive but also must owe its existence to natural selection. I responded: “It doesn’t follow that selection caused its existence, and it might be truer to say ‘to be selected it must first exist.’” A more complete answer would have addressed the meaning of existence, but I avoid ontology during my physiology course office hours. (Reid 2007)

“Adaptive,” unassuming and uncontroversial as it seems, has become a “power word” that resists analysis while enforcing acceptance. Some selectionists compound their logical burden by defining adaptiveness in terms of allelic fitness. But there are sexually attractive features that expose their possessors to predation, and there are “Trojan genes” that increase reproductive success but reduce physiological adaptability. They may be the fittest in terms of their temporarily dominant numbers, but detrimental in terms of ultimate persistence. (Reid 2007)

It is more logical to start with the qualities of evolutionary changes. They may be detrimental or neutral. They may be generally advantageous (because they confer adaptability), or they may be locally advantageous, depending on ecological circumstances. Natural selection is a consequence of advantageous or “adaptive” qualities. Therefore, examination of the origin and nature of adaptive novelty comes closer to the fundamental evolutionary problem. It is, however, legitimate to add that once the novel adaptive feature comes into being, any variant that is more advantageous than other variants survives differentially, if under competition. Most biologists are Darwinists to that extent, but evolutionary novelty is still missing from the causal equation. Thus, with the reservation that some neutral or redundant qualities often persist in Darwin’s “struggle for existence,” selection theory seems to offer a reasonable way to look at what occurs after novelty has been generated, that is, after evolution has happened. (Reid 2007)

“Oh,” cry my student inquisitors, “but the novelty to which you refer would be meaningless if it were not for correlated and necessary novelties that natural selection had already preserved and maintained.” So again I reiterate first principles: Self-sustaining integrity, an ability to reproduce biologically, and hence evolvability were inherent qualities of the first living organisms, and were prior to differential survival and reproduction. They were not, even by the lights of extreme neo-Darwinists, created by natural selection. And their persistence is fundamental to their nature. To call such features adaptive, for the purpose of implying they were caused by natural selection, is sophistry as well as circumlocution. Sadly, many biologists find it persuasive. Ludwig von Bertalanffy (1952) lamented:

Like a Tibetan prayer wheel, Selection Theory murmurs untiringly: ‘everything is useful,’ but as to what actually happened and which lines evolution has actually followed, selection theory says nothing, for the evolution is the product of ‘chance,’ and therein obeys no ‘law.’ [Bertalanffy 1952, p. 92.]

In The Variation of Animals in Nature (1936), G. C. Robson and O. W. Richards examined all the major known examples of evolution by natural selection, concluding that none were sufficient to account for any significant taxonomic characters. Despite the subsequent political success of ecological genetics, some adherents to the Modern Synthesis are still puzzled by the fact that the defining characteristics of higher taxa seem to be adaptively neutral. For example, adult echinoderms such as sea urchins are radially symmetrical, i.e., they are round-bodied like sea anemones and jellyfish, and lack a head that might point them in a particular direction. This shape would seem to be less adaptive than the bilateral symmetry of most active marine animals, which are elongated and have heads at the front that seem to know where they want to go. Another puzzler: How is the six-leg body plan of insects, which existed before the acquisition of wings, more or less adaptive than that of eight-legged spiders or ten-legged lobsters? The distinguished neo-Darwinists Dobzhansky, Ayala, Stebbins, and Valentine (1977) write:

This view is a radical deviation from the theory that evolutionary changes are governed by natural selection. What is involved here is nothing less than one of the major unresolved problems of evolutionary biology.

The problem exists only for selectionists, and so they happily settle for the first plausible selection pressure that occurs to them. But it could very well be that insect and echinoderm and jellyfish body plans were simply novel complexities that were consistent with organismal integrity; they worked. There is no logical need for an arbiter to judge them adaptive after the fact.

Some innovations result from coincidental interactions between formerly independent systems. Natural selection can take no credit for their origin, their co-existence, or their interaction. And some emergent novelties often involve redundant features that persisted despite the culling hand of nature. Indeed, life depends on redundancy to make evolutionary experiments. Initially selectionism strenuously denies the existence of such events. When faced with the inevitable, it downplays their importance in favor of selective adjustments necessary to make them more viable. Behavior is yet another function that emphasizes the importance of the whole organism, in contrast to whole populations. Consistent changes in behavior alter the impact of the environment on the organism, and affect physiology and development. In other words, the actions of plants or animals determine what are useful adaptations and what are not. This cannot even be conceived from the abstract population gene pools that neo-Darwinists emphasize.

If some evolutionists find it easier to understand the fate of evolutionary novelty through the circumlocution of metaphorical forces, so be it. But when they invent such creative forces to explain the origin of evolutionary change, they do no better than Special Creationists or the proponents of Intelligent Design. Thus, the latter find selectionists an easy target. Neo-Darwinist explanations, being predictive in demographic terms, are certainly “more scientific” than those of the creationists. But if those explanations are irrelevant to the fundamentals of evolution, their scientific predictiveness is of no account.

What we really need to discover is how novelties are generated, how they integrate with what already exists, and how new, more complex whole organisms can be greater than the sums of their parts. Evolutionists who might agree that these are desirable goals are only hindered by cant about the “relentless imperatives of natural selection and the adaptive trends it produces.”

(….) Reductionism

Reduction is a good, logical tool for solving organismal problems by going down to their molecular structure, or to physical properties. But reductionism is a philosophical stance that embraces the belief that physical or chemical explanations are somehow superior to biological ones. Molecular biologists are inclined to reduce the complexity of life to its simplest structures, and there abandon the quest. “Selfish genes” in their “gene pools” are taken to be more important than organisms. To compound the confusion, higher emergent functions such as intelligence and conscious altruism are simplistically defined in such a way as to make them apply to the lower levels. This is reminiscent of William Livant’s (1998) “cure for baldness”: You simply shrink the head to the degree necessary for the remaining hair to cover the entire pate (the brain has to be shrunk as well, of course). This “semantic reductionism” is rife in today’s ultra-Darwinism, a shrunken mindset that regards evolution as no more than the differential reproduction of genes.

Although reducing wholes to their parts can make them more understandable, fascination with the parts makes it too easy to forget that they are only subunits with no functional independence, whether in or out of the organism. It is their interactions with higher levels of organization that are important. Nevertheless, populations of individuals are commonly reduced to gene pools, meaning the totality of genes of the interbreeding organisms. Originating as a mathematical convenience, the gene pool acquired a life of its own, imbued with a higher reality than the organism. Because genes mutated to form different alleles that could be subjected to natural selection, it was the gene pool of the whole population that evolved. This argument was protected by polemic that decried any reference to the whole organism as essentialistic. Then came the notion that genes have a selfish nature. Even later, advances in molecular biology, and propaganda for the human genome project, have allowed the mistaken belief that there must be a gene for everything, and once the genes and their protein products have been identified that’s all we need to know. Instead, the completion of the genome project has clearly informed us that knowing the genes in their entirety tells us little about evolution. Yet biology still inhabits a genocentric universe, and most of its intellectual energy and material resources are sucked in by the black hole of reductionism at its center.

(….) Epigenetic Algorithms

Mechanical metaphors have appealed to many philosophers who sought materialist explanations of life. The definitive work on this subject is T. S. Hall’s Ideas of Life and Matter (1969). Descartes, though a dualist, thought of animal bodies as automata that obeyed mechanical rules. Julien de la Mettrie applied stricter mechanistic principles to humans in L’Homme machine (1748). Clockwork and heat engine models were popular during the Industrial Revolution. Lamarck proposed hydraulic processes as causes of variation. In the late nineteenth century, the embryologists Wilhelm His and Wilhelm Roux theorized about developmental mechanics. However, as biochemical and then molecular-biological information expanded, the popular machine models were refuted; it is not surprising that computers should have filled the gap. Algorithms that systematically provide instructions for a progressive sequence of events seem to be suitable analogues for epigenetic procedures.

A common error in applying this analogy is the belief that the genetic code, or at least the total complement of an organism’s DNA, contains the program for its own differential expression. In the computer age it is easy to fall into that metaphysical trap. However, in the computer age we should also know that algorithms are the creations of programmers. As Charles Babbage (1838) and Robert Chambers (1844) tried to tell us, the analogy is more relevant to creationism than to evolutionism. At the risk of offending the sophisticates who have indulged me so far, I want to state the problems in the simplest terms. To me, that is a major goal of theoretical biology, rather than the conversion of life to mathematics. (Robert G. B. Reid, Biological Emergences: Evolution by Natural Experiment (Vienna Series in Theoretical Biology), p. 263, Kindle Edition.)