
Uchimura Kanzō (内村 鑑三)

Uchimura saw the origins of denominations … as reflections of secular history in the country concerned. He asked which of these teachings actually represented Jesus’ ideas as opposed to historical accretions of almost two millennia. (Howes 2005: 10)

To me, forms are not only not helps for worship, but positive hindrances. I worship God inwardly in spirit and serve him outwardly in ordinary human conduct. [This formless Christianity is called mukyokai-shugi-no-Kirisutokyo, Christianity of no-church principle.] It is not a negative faith but positive; else my countrymen would never have received it….

Faith and Thinking

Faith is not thinking; what a man thinks is not his faith. Faith is rather being; what a man is is his faith. Thinking is only part of being; rather a superficial part. The modern man thinks he can know God’s truth by thinking. [But] Faith is the soul in passive activity. It is the soul letting itself to be acted upon by the mighty power of God. Passive though faith is, it is intensely active because of the power that works in it. This is the paradox of faith. The Christian is a newly created soul which engenders special activity called faith. Faith is thus a Christian activity of far higher order than thinking. It is the whole soul in beneficent action. (Howes 2005: 336)

Christianity the enemy of Buddhism? Not so! Christianity is a sworn enemy of these warlike Westerners, and not of Buddha and his peace-loving disciples. To make Christianity represent the Warlike West, and make it an enemy of Buddhism, a religion of love and non-resistance, is the greatest possible misrepresentation that can be made of it. (Howes 2005: 337)

The Regulatory Genome

THE SYSTEM OF HEREDITY AS A CONTROL SYSTEM
In a world dominated by thermodynamical forces of disorder and disintegration, all living systems, sooner or later, fall in disarray and succumb to those forces. However, living systems on Earth have survived and evolved for ~3 billion years. They succeeded in surviving because a. during their lifetime they are able to maintain the normal structure by compensating for the lost or disintegrated elements of that structure, and b. they produce offspring. The ability to maintain the normal structure, despite its continual erosion, indicates that living systems have information for their normal structure, can detect deviations from the “normalcy” and restore the normal structure. This implies the presence and functioning of a control system in living organisms. In unicellulars the control system, represented by the genome, the apparatus for gene expression and cell metabolism, functions as a system of heredity during reproduction. Homeostasis and other facts on the development of some organs and phenotypic characters in metazoans prove that a hierarchical control system, involving the CNS [Central Nervous System] and the neuroendocrine system, is also operational in this group. It is hypothesized that, in analogy with unicellulars, the control system in metazoans, in the process of their reproduction, serves as an epigenetic system of heredity.

Nelson R. Cabej (2004, 11) Neural Control of Development: The Epigenetic Theory of Heredity
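
Cabej’s “control system” is, in engineering terms, a negative-feedback loop. As a concrete aside (a toy Python illustration of my own, with invented numbers; Cabej proposes no such model), the sketch below shows the three ingredients the passage names: stored information about the normal structure, detection of deviations from normalcy, and compensation for continual erosion.

```python
# Toy negative-feedback loop (illustrative only; the values and gain are
# invented, not from Cabej 2004). A control system holds a stored "normal"
# value, detects deviations caused by disordering forces, and restores it.
import random

random.seed(0)
NORMAL = 100.0       # stored information about the normal structure
state = NORMAL

for step in range(5):
    state -= random.uniform(0.0, 10.0)   # erosion by forces of disorder
    deviation = NORMAL - state           # detect departure from "normalcy"
    state += 0.9 * deviation             # compensate, restoring structure
    print(f"step {step}: state = {state:.1f}")
```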

THE EPIGENETICS OF EVOLUTIONARY CHANGE
Under the influence of external/internal stimuli, the CNS may induce adaptive changes in morphological and life history characters without any changes in genes. Commonly, these changes are not heritable, i.e. they do not reappear in the offspring if the offspring is not exposed to the same stimuli. This is the case for the overwhelming majority of described examples of predator-induced defenses, polyphenisms, and adaptive camouflage. But reproducible cases of transgenerational changes, without changes in genes, changes that are transmitted to the offspring for one or more generations, occur and are described. All the cases of non-genetic, inherited changes are determined by underlying neural mechanisms. Such changes may represent the “primed”, ready-made material of evolution. The evidence on the neurally induced transgenerational nongenetic changes cannot be overestimated in respect to possible evolutionary implications of the epigenetic system of heredity. (Cabej 2004: 201)

— Nelson R. Cabej (2004, 201) Neural Control of Development: The Epigenetic Theory of Heredity

Indeed, epigenetic modifications of phenotypic expression are sometimes considered to be “Lamarckian” because they can be transmitted to subsequent generations after being acquired. Not surprisingly, then, it has taken decades for these molecular effects to be accepted as a part of mainstream genetics. Contemporary awareness of molecular epigenetics has expanded the neo-Darwinian view of DNA sequence as the fundamental mode of inherited developmental information (Jablonka and Lamb 2002; Mattick 2012), placing even the initial phase of gene expression squarely in a dynamic cellular, organismic, and environmental context. (Sultan 2015, 11)

At the mechanistic level, epigenetic modifications shape gene expression by altering protein-gene interactions that determine the accessibility of DNA to the biochemical machinery of gene transcription. (….) Epigenetic mechanisms may also be a heretofore unrecognized source of selectively important phenotypic variation in natural populations.

It has become clear that heredity is mediated at the molecular level not purely by discrete, stably transmitted DNA sequence variants but also by multiple information-altering mechanisms that lend the process an unlooked-for flexibility. Qualitatively new modes of cross-generational gene regulation are continuing to be found, including several that show gene silencing and other epigenetic roles for noncoding RNA (Bernstein and Allis 2005; Mattick and Mehler 2008; Lenhard et al. 2012; Ha and Kim 2014). (Sultan 2015, 12)

Many genomic sequences that were previously considered “junk” are now known to code for small or “micro” RNAs (and possibly long RNAs as well) that play a role, for instance by altering enzymatic access to the chromatin by binding to DNA (Koziol and Rinn 2010). … Interestingly, noncoding RNAs may carry environmentally induced effects on the phenotype from one generation to the next, including the neurobehavioral effects of social environment. In one recent study, traumatic, unpredictable separation of newborn mice from their mothers altered several aspects of microRNA activity in the pups, including in their hippocampi and other brain structures involved in stress responses. These epigenetic changes were associated with different behavioral responses to aversive conditions such as brightly illuminated maze compartments. When sperm RNA from traumatized males was injected into fertilized wild-type egg cells, these phenotypic effects were reproduced in the F2 generation; this result indicates that RNA can contribute to the transmission of stress-induced traits in mammals (Gapp et al. 2014). (Sultan 2015, 12-13)

Sonia E. Sultan (2015) Organism & Environment: Ecological Development, Niche Construction, and Adaptation

~ ~ ~

A general character of genomic programs for development is that they progressively regulate their own readout, in contrast, for example, to the way architects’ programs (blueprints) are used in constructing buildings. All of the structural characters of an edifice, from its overall form to local aspects such as placement of wiring and windows, are prespecified in an architectural blueprint. At first glance the blueprints for a complex building might seem to provide a good metaphoric image for the developmental regulatory program that is encoded in the DNA. Just as in considering organismal diversity, it can be said that all the specificity is in the blueprints: A railway station and a cathedral can be built of the same stone, and what makes the difference in form is the architectural plan. Furthermore, in bilaterian development, as in an architectural blueprint, the outcome is hardwired, as each kind of organism generates only its own exactly predictable, species-specific body plan. But the metaphor is basically misleading, in the way the regulatory program is used in development, compared to how the blueprint is used in construction. In development it is as if the wall, once erected, must turn around and talk to the ceiling in order to place the windows in the right positions, and the ceiling must use the joint with the wall to decide where its wires will go, etc. The acts of development cannot all be prespecified at once, because animals are multicellular, and different cells do different things with the same encoded program, that is, the DNA regulatory genome. In development, it is only the potentialities for cis-regulatory information processing that are hardwired in the DNA sequence. These are utilized, conditionally, to respond in different ways to the diverse regulatory states encountered (in our metaphor that is actually the role of the human contractor, who uses something outside of the blueprint, his brain, to select the relevant subprogram at each step). The key, very unusual feature of the genomic regulatory program for development is that the inputs it specifies in the cis-regulatory sequences of its own regulatory and signaling genes suffice to determine the creation of new regulatory states. Throughout, the process of development is animated by internally generated inputs. “Internal” here means not only nonenvironmental (i.e., from within the animal rather than external to it) but also that the input must operate in the intranuclear compartments as a component of regulatory state, or else it will be irrelevant to the process of development. (Davidson 2006: 16-17)

(….) The link between the informational transactions that underlie development and the observed phenomena of development is “specification.” Developmental specification is defined phenomenologically as the process by which cells acquire the identities or fates that they and their progeny will adopt. But in terms of mechanism, specification is neither more nor less than that which results in the institution of new transcriptional regulatory states. Thereby specification results from differential expression of genes, the readout of particular genetic subprograms. For specification to occur, genes have to make decisions, depending on the new inputs they receive, and this brings us back to the information processing capacities of the cis-regulatory modules of the gene regulatory networks that make regulatory state. The point cannot be overemphasized that were it not for the ability of cis-regulatory elements to integrate spatial signaling inputs together with multiple inputs of intracellular origin, then specification, and thus development, could not occur. (Davidson 2006: 17)
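
Davidson’s point that different cells do different things with the same encoded program can be caricatured in a few lines of code. The sketch below is my own toy, not Davidson’s formalism: a fixed rule set stands in for the regulatory genome, each gene’s “cis-regulatory module” conditionally processes whatever regulatory state the cell presents, and newly expressed genes feed new inputs back into that state.

```python
# Toy regulatory readout (my illustration, not from Davidson 2006): one
# fixed "genome" of cis-regulatory rules, different outcomes in cells that
# encounter different regulatory states.

# Each entry maps a gene to its cis-regulatory logic: a function from the
# current regulatory state (a set of active factors) to on/off.
GENOME = {
    "geneA": lambda s: "signalX" in s,                    # needs external signal X
    "geneB": lambda s: "geneA" in s and "repR" not in s,  # activated by A, blocked by repressor R
    "geneC": lambda s: "geneB" in s or "signalY" in s,    # either input suffices
}

def develop(state, steps=3):
    """Iterate the readout: expressed genes add their products to the
    regulatory state, creating new inputs for the next round."""
    state = set(state)
    expressed = set()
    for _ in range(steps):
        expressed = {g for g, cis in GENOME.items() if cis(state)}
        state |= expressed
    return expressed

# Two "cells," one genome, different regulatory states encountered:
print(sorted(develop({"signalX"})))          # ['geneA', 'geneB', 'geneC']
print(sorted(develop({"signalY", "repR"})))  # ['geneC']
```

Nothing here is offered as biology; it only concretizes the contrast Davidson draws between a static blueprint and a program that progressively regulates its own readout.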

Evolution by Natural Experiment

Darwin has often been depicted as a radical selectionist at heart who invoked other mechanisms only in retreat, and only as a result of his age’s own lamented ignorance about the mechanisms of heredity. This view is false. Although Darwin regarded selection as the most important of evolutionary mechanisms (as do we), no argument from opponents angered him more than the common attempt to caricature and trivialize his theory by stating that it relied exclusively upon natural selection. In the last edition of the Origin, he wrote (1872, p. 395):

As my conclusions have lately been much misrepresented, and it has been stated that I attribute the modification of species exclusively to natural selection, I may be permitted to remark that in the first edition of this work, and subsequently, I placed in a most conspicuous position—namely at the close of the introduction—the following words: “I am convinced that natural selection has been the main, but not the exclusive means of modification.” This has been of no avail. Great is the power of steady misinterpretation.

Charles Darwin, Origin of Species (1872, p. 395)

— Gould, Stephen J., & Lewontin, Richard C. (1979) The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. Proceedings of the Royal Society of London, Series B, Vol. 205, No. 1161, pp. 581-598. [ See: Just So Stories and Hardened Adaptationism and Natural Selection as a Creative Force and The Evolution of the Genome ]

This is the age of the evolution of Evolution. All thoughts that the Evolutionist works with, all theories and generalizations, have themselves evolved and are now being evolved. Even were his theory perfected, its first lesson would be that it was itself but a phase of the Evolution of other opinion, no more fixed than a species, no more final than the theory which it displaced.

— Henry Drummond, 1883

Charles Darwin described The Origin of Species as “one long argument” for evolution by natural selection. Subsequently Ernst Mayr applied the expression to the continuing debate over Darwin’s ideas. My explanation of why the debate lingers is that although Darwin was right about the reality of evolution, his causal theory was fundamentally wrong, and its errors have been compounded by neo-Darwinism. In 1985 my book Evolutionary Theory: The Unfinished Synthesis was published. In it I discussed Darwinian problems that have never been solved, and the difficulties suffered historically by holistic approaches to evolutionary theory. The most important of these holistic treatments was “emergent evolution,” which enjoyed a brief moment of popularity about 80 years ago before being eclipsed when natural selection was mathematically formalized by theoretical population geneticists. I saw that the concept of biological emergence could provide a matrix for a reconstructed evolutionary theory that might displace selectionism. At that time, I naively thought that there was a momentum in favor of such a revision, and that there were enough open-minded, structuralistic evolutionists to displace the selectionist paradigm within a decade or so. Faint hope! (Robert G. B. Reid. Biological Emergences: Evolution by Natural Experiment (Vienna Series in Theoretical Biology) (Kindle Locations 31-37). Kindle Edition.)

Instead, the conventional “Modern Synthesis” produced extremer forms of selectionism. Although some theoreticians were dealing effectively with parts of the problem, I decided I should try again, from a more general biological perspective. This book is the result. (Reid 2007, Preface)

The main thrust of the book is an exploration of evolutionary innovation, after a critique of selectionism as a mechanistic explanation of evolution. Yet it is impossible to ignore the fact that the major periods of biological history were dominated by dynamic equilibria where selection theory does apply. But emergentism and selectionism cannot be synthesized within an evolutionary theory. A “biological synthesis” is necessary to contain the history of life. I hope that selectionists who feel that I have defiled their discipline might find some comfort in knowing that their calculations and predictions are relevant for most of the 3.5 billion years that living organisms have inhabited the Earth, and that they forgive me for arguing that those calculations and predictions have little to do with evolution. (Reid 2007, Preface)

Evolution is about change, especially complexifying change, not stasis. There are ways in which novel organisms can emerge with properties that are not only self-sufficient but more than enough to ensure their status as the founders of kingdoms, phyla, or orders. And they have enough generative potential to allow them to diversify into a multiplicity of new families, genera, and species. Some of these innovations are all-or-none saltations. Some of them emerge at thresholds in lines of gradual and continuous evolutionary change. Some of them are largely autonomous, coming from within the organism; some are largely imposed by the environment. Their adaptiveness comes with their generation, and their adaptability may guarantee success regardless of circumstances. Thus, the filtering, sorting, or eliminating functions of natural selection are theoretically redundant. (Reid 2007, Preface)

Therefore, evolutionary theory should focus on the natural, experimental generation of evolutionary changes, and should ask how they lead to greater complexity of living organisms. Such progressive innovations are often sudden, and have new properties arising from new internal and external relationships. They are emergent. In this book I place such evolutionary changes in causal arenas that I liken to a three-ring circus. For the sake of bringing order to many causes, I deal with the rings one at a time, while noting that the performances in each ring interact with each other in crucial ways. One ring contains symbioses and other kinds of biological association. In another, physiology and behavior perform. The third ring contains developmental or epigenetic evolution. (Reid 2007, Preface)

After exploring the generative causes of evolution, I devote several chapters to subtheories that might arise from them, and consider how they might be integrated into a thesis of emergent evolution. In the last chapter I propose a biological synthesis. (Reid 2007, Preface)

~ ~ ~

Introduction: Re-Invention of Natural Selection

I regard it as unfortunate that the theory of natural selection was first developed as an explanation for evolutionary change. It is much more important as an explanation for the maintenance of adaptation.
George Williams, 1966

Natural selection cannot explain the origin of new variants and adaptations, only their spread.
John Endler, 1986

We could, if we wished, simply replace the term natural selection with dynamic stabilization….
Brian Goodwin, 1994

Nobody is going to re-invent natural selection….
Nigel Hawkes, 1997

Ever since Charles Darwin published The Origin of Species, it has been widely believed that natural selection is the primary cause of evolution. However, while George Williams and John Endler take the trouble to distinguish between the causes of variation and what natural selection does with them, the latter is what matters to them. In contrast, Brian Goodwin does not regard natural selection as a major evolutionary force, but as a process that results in stable organisms, populations, and ecosystems. He would prefer to understand how evolutionary novelties are generated, a question that frustrated Darwin for all of his career. (Reid 2007)

During the twentieth century, Darwin’s followers eventually learned how chromosomal recombination and gene mutation could provide variation as fuel for natural selection. They also re-invented Darwinian evolutionary theory as neo-Darwinism by formalizing natural selection mathematically. Then they redefined it as differential survival and reproduction, which entrenched it as the universal cause of evolution. Nigel Hawkes’s remark that natural selection cannot be re-invented demonstrates its continued perception as an incorruptible principle. But is it even a minor cause of evolution? (Reid 2007)

Natural selection supposedly builds order from purely random accidents of nature by preserving the fit and discarding the unfit. On the face of it, that makes more than enough sense to justify its importance. Additionally, it avoids any suggestion that a supernatural creative hand has ever been at work. But it need not be the only mechanistic option. And the current concept of natural selection, which already has a history of re-invention, is not immune to further change. Indeed, if its present interpretation as the fundamental mechanism of evolution were successfully challenged, some of the controversies now swirling around the modern paradigm might be resolved. (Reid 2007)

A Paradigm in Crisis?

Just what is the evolutionary paradigm that might be in crisis? It is sometimes called “the Modern Synthesis.” Fundamentally it comes down to a body of knowledge, interpretation, supposition, and extrapolation, integrated with the belief that natural selection is the all-sufficient cause of evolution—if it is assumed that variation is caused by gene mutations. The paradigm has built a strong relationship between ecology and evolution, and has stimulated a huge amount of research into population biology. It has also been the perennial survivor of crises that have ebbed and flowed in the tide of evolutionary ideas. Yet signs of discord are visible in the strong polarization of those who see the whole organism as a necessary component of evolution and those who want to reduce all of biology to the genes. Since neo-Darwinists are also hypersensitive to creationism, they treat any criticism of the current paradigm as a breach of the scientific worldview that will admit the fundamentalist hordes. Consequently, questions about how selection theory can claim to be the all-sufficient explanation of evolution go unanswered or ignored. Could most gene mutations be neutral, essentially invisible to natural selection, their distribution simply adrift? Did evolution follow a pattern of punctuated equilibrium, with sudden changes separated by long periods of stasis? Were all evolutionary innovations gene-determined? Are they all adaptive? Is complexity built by the accumulation of minor, selectively advantageous mutations? Are variations completely random, or can they be directed in some way? Is the generation of novelty not more important than its subsequent selection? (Reid 2007)

Long before Darwin, hunters, farmers, and naturalists were familiar with the process that he came to call “natural selection.” And they had not always associated it with evolution. It is recognized in the Bible, a Special Creation text. Lamarck had thought that evolution resulted from a universal progressive force of nature, not from natural selection. Organisms responded to adaptational needs demanded by their environments. The concept of adaptation led Lamarck’s rival, Georges Cuvier, to argue the opposite. If existing organisms were already perfectly adapted, change would be detrimental, and evolution impossible. Nevertheless, Cuvier knew that biogeography and the fossil record had been radically altered by natural catastrophes. These Darwin treated as minor aberrations during the long history of Earth. He wanted biological and geographical change to be gradual, so that natural selection would have time to make appropriate improvements. The process of re-inventing the events themselves to fit the putative mechanism of change was now under way. (Reid 2007)

Gradualism had already been brought to the fore when geologists realized that what was first interpreted as the effects of the sudden Biblical flood was instead the result of prolonged glaciation. Therefore, Darwin readily fell in with Charles Lyell’s belief that geological change had been uniformly slow. Now, more than a century later, catastrophism has been resurrected by confirmation of the K-T (Cretaceous-Tertiary) bolide impact that ended the Cretaceous and the dinosaurs. Such disasters are also linked to such putative events as the Cambrian “Big Bang of Biology,” when all of the major animal phyla seem to have appeared almost simultaneously. The luck of the draw has returned to evolutionary theory. Being in the right place at the right time during a cataclysm might have been the most important condition of survival and subsequent evolution. (Reid 2007)

Beyond the fringe of Darwinism, there are heretics who believe the neo-Lamarckist tenet that the environment directly shapes the organism in a way that can be passed on from one generation to the next. They argue that changes imposed by the environment, and by the behavior of the organism, are causally prior to natural selection. Nor is neo-Lamarckism the only alternative. Some evolutionary biologists, for example, think that the establishment of unique symbioses between different organisms constituted major evolutionary novelties. Developmental evolutionists are reviewing the concept that evolution was not gradual but saltatory (i.e., advancing in leaps to greater complexity). However, while they emphasize the generation of evolutionary novelty, they accommodate natural selection as the complementary and essential causal mechanism. (Reid 2007)

Notes on Isms

Before proceeding further, I want to explain how I arbitrarily, but I hope consistently, use the names that refer to evolutionary movements and their originators. “Darwinian” and “Lamarckian” refer to any idea or interpretation that Darwin and Lamarck originated or strongly adhered to. Darwinism is the paradigm that rose from Darwinian concepts, and Lamarckism is the movement that followed Lamarck. They therefore include ideas that Darwin and Lamarck may not have thought of or emphasized, but which were inspired by them and consistent with their thinking. Lamarck published La philosophie zoologique in 1809, and Lamarckism lasted for about 80 years until neo-Lamarckism developed. Darwinism occupied the time frame between the publication of The Origin of Species (1859) and the development of neo-Darwinism. The latter came in two waves. The first was led by August Weismann, who was out to purify evolutionary theory of Darwinian vacillation. The second wave, which arose in theoretical population genetics in the 1920s, quantified and redefined the basic tenets of Darwinism. Selectionism is the belief that natural selection is the primary cause of evolution. Its influence permeates the Modern Synthesis, which was originally intended to bring together all aspects of biology that bear upon evolution by natural selection. Niles Eldredge (1995) uses the expression “ultra-Darwinian” to signify an extremist position that makes natural selection an active causal evolutionary force. For grammatical consistency, I prefer “ultra-Darwinist,” which was used in the same sense by Pierre-Paul Grassé in 1973. (Reid 2007)

The Need for a More Comprehensive Theory

I have already hinted that the selectionist paradigm is either insufficient to explain evolution or simply dead wrong. Obviously, I want to find something better. Neo-Darwinists themselves concede that while directional selection can cause adaptational change, most natural selection is not innovative. Instead, it establishes equilibrium by removing extreme forms and preserving the status quo. John Endler, the neo-Darwinist quoted in one of this chapter’s epigraphs, is in good company when he says that novelty has to appear before natural selection can operate on it. But he is silent on how novelty comes into being, and how it affects the internal organization of the organism—questions much closer to the fundamental process of evolution. He is not being evasive; the issue is just irrelevant to the neo-Darwinist thesis. (Reid 2007)

Darwin knew that nature had to produce variations before natural selection could act, so he eventually co-opted Lamarckian mechanisms to make his theory more comprehensive. The problem had been caught by other evolutionists almost as soon as The Origin of Species was first published. Sir Charles Lyell saw it clearly in 1860, before he even became an evolutionist:

If we take the three attributes of the deity of the Hindoo Triad, the Creator, Brahmah, the preserver or sustainer, Vishnu, & the destroyer, Siva, Natural Selection will be a combination of the two last but without the first, or the creative power, we cannot conceive the others having any function.

Consider also the titles of two books: St. George Jackson Mivart’s On the Genesis of Species (1872) and Edward Cope’s Origin of the Fittest (1887). Their play on Darwin’s title emphasized the need for a complementary theory of how new biological phenomena came into being. Soon, William Bateson’s Materials for the Study of Variation Treated with Especial Regard to Discontinuity in the Origin of Species (1894) was to distinguish between the emergent origin of novel variations and the action of natural selection. (Reid 2007)

The present work resumes the perennial quest for explanations of evolutionary genesis and will demonstrate that the stock answer—point mutations and recombinations of the genes, acted upon by natural selection—does not suffice. There are many circumstances under which novelties emerge, and I allocate them to arenas of evolutionary causation that include association (symbiotic, cellular, sexual, and social), functional biology (physiology and behavior), and development and epigenetics. Think of them as three linked circus rings of evolutionary performance, under the “big top” of the environment. Natural selection is the conservative ringmaster who ensures that tried-and-true traditional acts come on, time and again. It is the underlying syndrome that imposes dynamic stability—its hypostasis (a word that has the additional and appropriate meaning of “significant constancy”). (Reid 2007)

Selection as Hypostasis

The stasis that natural selection enforces is not unchanging inertia. Rather, it is a state of adaptational and neutral flux that involves alterations in the numerical proportions of particular alleles and types of organism, and even minor extinctions. It does not produce major progressive changes in organismal complexity. Instead, it tends to lead to adaptational specialization. Natural selection may not only thwart progress toward greater complexity, it may result in what Darwin called retrogression, whereby complex and adaptable organisms revert to simplified conditions of specialization. This is common among parasites, but not unique to them. For example, our need for ascorbic acid—vitamin C—results from the regression of a synthesis pathway that was functional in our mammalian ancestors. (Reid 2007)

On the positive side, it may be argued that dynamic stability, at any level of organization, ensures that the foundations from which novelties emerge are solid enough to support them on the rare occasions when they escape its hypostasis. A world devoid of the agents of natural selection might be populated with kludges—gimcrack organisms of the kind that might have been designed by Heath Robinson, Rube Goldberg, or Tim Burton. The enigmatic “bizarre and dream-like” Hallucigenia of the Burgess Shale springs to mind. Even so, if physical and embryonic factors constrain some of the extremest forms before they mature and reproduce, the benefits of natural selection are redundant. Novelty that is first and foremost integrative (i.e., allows the organism to operate better as a whole) has a quality that is resistant to the slings and arrows of selective fortune. (Reid 2007)

Natural selection has to do with relative differences in survival and reproduction and the numerical distribution of existent variations that have already evolved. In this form it requires no serious re-invention. But selectionism goes on to infer that natural selection creates complex novelty by saving adaptive features that can be further built upon. Such qualities need no saving by metaphorical forces. Having the fundamental property of persistence that characterizes life, they can look after themselves. As Ludwig von Bertalanffy remarked in 1967, “favored survival of ‘better’ precursors of life presupposes self-maintaining, complex, open systems which may compete; therefore natural selection cannot account for the origin of those systems.” These qualities were in the nature of the organisms that first emerged from non-living origins, and they are prior to any action of natural selection. Compared to them, ecological competitiveness is a trivial consequence. (Reid 2007)

But to many neo-Darwinists the only “real” evolution is just that: adaptation—the selection of random genetic changes that better fit the present environment. Adaptation is appealingly simple, and many good little examples crop up all the time. However, adaptation only reinforces the prevailing circumstances, and represents but a fragment of the big picture of evolution. Too often, genetically fixed adaptation is confused with adaptability—the self-modification of an individual organism that allows responsiveness to internal and external change. The logical burden of selectionism is compounded by the universally popular metaphor of selection pressure, which under some conditions of existence is supposed to force appropriate organismic responses to pop out spontaneously. How can a metaphor, however heuristic, be a biological cause? As a metaphor, it is at best an inductive guide that must be used with caution. (Reid 2007)

Even although metaphors cannot be causes, their persuasive powers have given natural selection and selection pressure perennial dominance of evolutionary theory. It is hard enough to sideline them, so as to get to generative causes, far less to convince anyone that they are obstructive. Darwin went so far as to make this admission:

In the literal sense of the word, no doubt, natural selection is a false term…. It has been said that I speak of natural selection as an active power or Deity…. Everyone knows what is meant and is implied by such metaphorical expressions; and they are almost necessary for brevity…. With a little familiarity such superficial objections will be forgotten. [Darwin 1872, p. 60.]

Alas, in every subsequent generation of evolutionists, familiarity has bred contempt as well as forgetfulness for such “superficial” objections. (Reid 2007)

Are All Changes Adaptive?

Here is one of my not-so-superficial objections. The persuasiveness of the selection metaphor gets extra clout from its link with the vague but pervasive concept of adaptiveness, which can supposedly be both created and preserved by natural selection. For example, a book review insists that a particular piece of pedagogy be “required reading for non-Darwinist `evolutionists’ who are trying to make sense of the world without the relentless imperatives of natural selection and the adaptive trends it produces.” (Reid 2007)

Adaptiveness, as a quality of life that is “useful,” or competitively advantageous, can always be applied in ways that seem to make sense. Even where adaptiveness seems absent, there is confidence that adequate research will discover it. If equated with integrativeness, adaptiveness is even a necessity of existence. The other day, one of my students said to me: “If it exists, it must have been selected.” This has a pleasing parsimony and finality, just like “If it exists it must have been created.” But it implies that anything that exists must not only be adaptive but also must owe its existence to natural selection. I responded: “It doesn’t follow that selection caused its existence, and it might be truer to say ‘to be selected it must first exist.’” A more complete answer would have addressed the meaning of existence, but I avoid ontology during my physiology course office hours. (Reid 2007)

“Adaptive,” unassuming and uncontroversial as it seems, has become a “power word” that resists analysis while enforcing acceptance. Some selectionists compound their logical burden by defining adaptiveness in terms of allelic fitness. But there are sexually attractive features that expose their possessors to predation, and there are “Trojan genes” that increase reproductive success but reduce physiological adaptability. They may be the fittest in terms of their temporarily dominant numbers, but detrimental in terms of ultimate persistence. (Reid 2007)

It is more logical to start with the qualities of evolutionary changes. They may be detrimental or neutral. They may be generally advantageous (because they confer adaptability), or they may be locally advantageous, depending on ecological circumstances. Natural selection is a consequence of advantageous or “adaptive” qualities. Therefore, examination of the origin and nature of adaptive novelty comes closer to the fundamental evolutionary problem. It is, however, legitimate to add that once the novel adaptive feature comes into being, any variant that is more advantageous than other variants survives differentially—if under competition. Most biologists are Darwinists to that extent, but evolutionary novelty is still missing from the causal equation. Thus, with the reservation that some neutral or redundant qualities often persist in Darwin’s “struggle for existence,” selection theory seems to offer a reasonable way to look at what occurs after novelty has been generated—that is, after evolution has happened. (Reid 2007)

“Oh,” cry my student inquisitors, “but the novelty to which you refer would be meaningless if it were not for correlated and necessary novelties that natural selection had already preserved and maintained.” So again I reiterate first principles: Self-sustaining integrity, an ability to reproduce biologically, and hence evolvability were inherent qualities of the first living organisms, and were prior to differential survival and reproduction. They were not, even by the lights of extreme neo-Darwinists, created by natural selection. And their persistence is fundamental to their nature. To call such features adaptive, for the purpose of implying they were caused by natural selection, is sophistry as well as circumlocution. Sadly, many biologists find it persuasive. Ludwig von Bertalanffy (1952) lamented:

Like a Tibetan prayer wheel, Selection Theory murmurs untiringly: ‘everything is useful,’ but as to what actually happened and which lines evolution has actually followed, selection theory says nothing, for the evolution is the product of ‘chance,’ and therein obeys no ‘law.’ [Bertalanffy 1952, p. 92.]

In The Variation of Animals in Nature (1936), G. C. Robson and O. W. Richards examined all the major known examples of evolution by natural selection, concluding that none were sufficient to account for any significant taxonomic characters. Despite the subsequent political success of ecological genetics, some adherents to the Modern Synthesis are still puzzled by the fact that the defining characteristics of higher taxa seem to be adaptively neutral. For example, adult echinoderms such as sea urchins are radially symmetrical, i.e., they are round-bodied like sea anemones and jellyfish, and lack a head that might point them in a particular direction. This shape would seem to be less adaptive than the bilateral symmetry of most active marine animals, which are elongated and have heads at the front that seem to know where they want to go. Another puzzler: How is the six-leg body plan of insects, which existed before the acquisition of wings, more or less adaptive than that of eight-legged spiders or ten-legged lobsters? The distinguished neo-Darwinists Dobzhansky, Ayala, Stebbins, and Valentine (1977) write:

This view is a radical deviation from the theory that evolutionary changes are governed by natural selection. What is involved here is nothing less than one of the major unresolved problems of evolutionary biology.

The problem exists only for selectionists, and so they happily settle for the first plausible selection pressure that occurs to them. But it could very well be that insect and echinoderm and jellyfish body plans were simply novel complexities that were consistent with organismal integrity—they worked. There is no logical need for an arbiter to judge them adaptive after the fact.

Some innovations result from coincidental interactions between formerly independent systems. Natural selection can take no credit for their origin, their co-existence, or their interaction. And some emergent novelties often involve redundant features that persisted despite the culling hand of nature. Indeed, life depends on redundancy to make evolutionary experiments. Initially selectionism strenuously denies the existence of such events. When faced with the inevitable, it downplays their importance in favor of selective adjustments necessary to make them more viable. Behavior is yet another function that emphasizes the importance of the whole organism, in contrast to whole populations. Consistent changes in behavior alter the impact of the environment on the organism, and affect physiology and development. In other words, the actions of plants or animals determine what are useful adaptations and what are not. This cannot even be conceived from the abstract population gene pools that neo-Darwinists emphasize.

If some evolutionists find it easier to understand the fate of evolutionary novelty through the circumlocution of metaphorical forces, so be it. But when they invent such creative forces to explain the origin of evolutionary change, they do no better than Special Creationists or the proponents of Intelligent Design. Thus, the latter find selectionists an easy target. Neo-Darwinist explanations, being predictive in demographic terms, are certainly “more scientific” than those of the creationists. But if those explanations are irrelevant to the fundamentals of evolution, their scientific predictiveness is of no account.

What we really need to discover is how novelties are generated, how they integrate with what already exists, and how new, more complex whole organisms can be greater than the sums of their parts. Evolutionists who might agree that these are desirable goals are only hindered by cant about the “relentless imperatives of natural selection and the adaptive trends it produces.”

(….) Reductionism

Reduction is a good, logical tool for solving organismal problems by going down to their molecular structure, or to physical properties. But reductionism is a philosophical stance that embraces the belief that physical or chemical explanations are somehow superior to biological ones. Molecular biologists are inclined to reduce the complexity of life to its simplest structures, and there abandon the quest. “Selfish genes” in their “gene pools” are taken to be more important than organisms. To compound the confusion, higher emergent functions such as intelligence and conscious altruism are simplistically defined in such a way as to make them apply to the lower levels. This is reminiscent of William Livant’s (1998) “cure for baldness”: You simply shrink the head to the degree necessary for the remaining hair to cover the entire pate—the brain has to be shrunk as well, of course. This “semantic reductionism” is rife in today’s ultra-Darwinism, a shrunken mindset that regards evolution as no more than the differential reproduction of genes.

Although reducing wholes to their parts can make them more understandable, fascination with the parts makes it too easy to forget that they are only subunits with no functional independence, whether in or out of the organism. It is their interactions with higher levels of organization that are important. Nevertheless, populations of individuals are commonly reduced to gene pools, meaning the totality of genes of the interbreeding organisms. Originating as a mathematical convenience, the gene pool acquired a life of its own, imbued with a higher reality than the organism. Because genes mutated to form different alleles that could be subjected to natural selection, it was the gene pool of the whole population that evolved. This argument was protected by polemic that decried any reference to the whole organism as essentialistic. Then came the notion that genes have a selfish nature. Even later, advances in molecular biology, and propaganda for the human genome project, have allowed the mistaken belief that there must be a gene for everything, and once the genes and their protein products have been identified that’s all we need to know. Instead, the completion of the genome project has clearly informed us that knowing the genes in their entirety tells us little about evolution. Yet biology still inhabits a genocentric universe, and most of its intellectual energy and material resources are sucked in by the black hole of reductionism at its center.

(….) Epigenetic Algorithms

Mechanical metaphors have appealed to many philosophers who sought materialist explanations of life. The definitive work on this subject is T. S. Hall’s Ideas of Life and Matter (1969). Descartes, though a dualist, thought of animal bodies as automata that obeyed mechanical rules. Julien de la Mettrie applied stricter mechanistic principles to humans in L’Homme machine (1748). Clockwork and heat engine models were popular during the Industrial Revolution. Lamarck proposed hydraulic processes as causes of variation. In the late nineteenth century, the embryologists Wilhelm His and Wilhelm Roux theorized about developmental mechanics. However, as biochemical and then molecular biological information expanded, popular machine models were refuted, but it is not surprising that computers should have filled the gap. Algorithms that systematically provide instructions for a progressive sequence of events seem to be suitable analogues for epigenetic procedures.

A common error in applying this analogy is the belief that the genetic code, or at least the total complement of an organism’s DNA, contains the program for its own differential expression. In the computer age it is easy to fall into that metaphysical trap. However, in the computer age we should also know that algorithms are the creations of programmers. As Charles Babbage (1838) and Robert Chambers (1844) tried to tell us, the analogy is more relevant to creationism than evolutionism. At the risk of offending the sophisticates who have indulged me so far, I want to state the problems in the most simple terms. To me, that is a major goal of theoretical biology, rather than the conversion of life to mathematics. (Robert G. B. Reid. Biological Emergences: Evolution by Natural Experiment (Vienna Series in Theoretical Biology) (p. 263). Kindle Edition.)

Story Telling in Economics

A Question I Once Raised During a Conference

Many years ago, when I was attending a session at an economics conference, I heard a presentation by a professor about the relationship between economic growth and technology change. In his presentation he purported to show a high correlation between the number of new patents (registered with the US Patent and Trademark Office) and economic growth. This enabled him to conclude that there was a causal relationship between technological change (as reflected by patent counts) and economic growth. This finding, by the way, is the kind that is very often hailed by organizations that offer research grants to economics professors and to other scientists. This is because such findings serve as evidence for the “social benefits of R&D” which these organizations can, and often do, use to drum up political support for their organizations. It is also highly appealing to many people—admittedly, myself included—who love science and love thinking about how beneficial scientific and technological advancement can be when it is properly and responsibly managed. So I realized that the paper being presented would be music to many people’s ears, and that it would help him receive praise, perhaps a publication, and perhaps even grant money, for his research. (Payson 2017, 3)

Given my own background on the topic … I had a question about his stated findings, which I politely asked during the question-and-answer session. In asking my question I mentioned that I was familiar with a well-known change in patent laws that occurred at the beginning of the time span that he was analyzing. As many who are familiar with patents know, the vast majority of patents that are issued have no real value and are not in fact used by the company that holds the patent. What generally occurs is that a company acquires a very valuable patent and also creates dozens of other patents that are “close” (in their subject matter) to that valuable one. The reason for their doing this is to protect their valuable patent so that no company can produce a similar patent that competes with theirs. The change in patent laws, which I just referred to, had made it easier for companies to acquire similar patents to ones that already existed, which essentially created a need for companies issuing important patents to “surround” their main patent by more of these other unused “protective patents.” (Payson 2017, 3)

So, in my question to the presenter, I asked whether it might simply be possible that the increase in registered patents that his study observed was attributable to that change in patent laws, which was apparently occurring at the same time that GDP was growing fairly well. GDP was growing at that time due to a general upturn in the economy in which employment was on the rise and inflation had been brought under control. In other words, perhaps it was simply a coincidence that both patent counts and real GDP were rising during the same period, but there was no causal relationship between the two. I asked him, essentially, if he thought that such a coincidence might be an alternative explanation for why patents and GDP were rising at the same time. (Payson 2017, 3-4)

The presenter’s reaction, especially in terms of his facial expression, reflected a typical response that I must have seen hundreds of times in my 35 years as an economist. Upon hearing my question he condescendingly smiled from ear to ear, while restraining himself from laughing, and he replied in an artificially diplomatic and sarcastic tone, “Oh I know all that [about the patent law change]. But … that’s not my story”—the story that he wanted to tell—and he was thoroughly amused that someone in the audience would be naïve enough to actually think about whether his findings were scientifically valid. Scientific validity of one’s findings is rarely discussed during paper presentations at economics conferences, and when it is, it is more often a source of amusement for the presenters of the papers and their audiences than an actual concern that might lead to improving people’s work. (Payson 2017, 4)
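
The statistical point behind the question is easy to demonstrate. Below is a minimal sketch (my own illustration with invented numbers, not Payson’s or the presenter’s data): two series that each trend upward for unrelated reasons, one standing in for patent counts driven by a law change, the other for GDP driven by a general upturn, show a near-perfect correlation despite having no causal connection.

```python
# Spurious correlation between two independently trending series
# (invented numbers for illustration; not actual patent or GDP data).
import random

random.seed(1)
years = range(30)

patents = [1_000 + 40 * t + random.gauss(0, 60) for t in years]   # law-change driven
gdp     = [5_000 + 90 * t + random.gauss(0, 150) for t in years]  # upturn driven

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(f"correlation: {pearson(patents, gdp):.2f}")  # close to 1.0, yet no causation
```

Correlating the year-to-year changes instead of the levels would make the apparent relationship largely disappear, which is exactly the alternative explanation the question proposed.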

The Profession’s Genuine Arrogance toward Concerns about Scientific Integrity

(….) [M]any academic economists respond with smug, arrogant dismissal or laughter when the topic of scientific integrity or professional ethics is brought before them. It might be surprising to those who are less familiar with the profession that such arrogance and frivolity are as observable among some of the most prominent economics professors as among those who are not prominent. In the documentary Inside Job, one can observe this kind of arrogance directly among high-ranking professors as they were being interviewed. (Payson 2017, 4)

As another example, Deirdre McCloskey, a former member of the board of directors of the American Economic Association (AEA) (which consists only of highly ranked professors), has told of being present when the board broke into laughter as a letter was read aloud at one of their meetings. The letter was from someone who was simply asking whether the AEA would consider adopting a code of ethics for economists. (Payson 2017, 4)

Many economics professors do not laugh or make arrogant statements, but express conceit in an entirely different way, such as feeling sorry for those who are even thinking about scientific integrity or professional ethics—thinking to themselves how pathetically stupid, naïve, or childishly innocent those people must be. There is, in fact, a substantial literature on the more scholarly problem of arrogance in the academic economics profession. This literature was written entirely by “insiders”—highly prominent professors themselves, some even Nobel laureates. (Payson 2017, 4-5)

(….) In the absence of the commitment to contributing to useful knowledge, the behavior and work of academic economists have been dominated by two other major forces: (1) the mathematical games that are played for the sake of getting published and acquiring grant money, and (2) cronyism within the profession, which, in combination with the mathematical game playing, has dominated the reward system and incentive system of the profession. (Payson 2017, 10)

[T]o examine the validity of the claim that these are highly useful branches of knowledge [e.g., economics], let us ask what their contribution to mankind’s welfare is supposed to be. To judge by the cues from training courses and textbooks, the practical usefulness … consists of helping people to find their niche in society, to adapt themselves to it painlessly, and to dwell therein contentedly and in harmony with their companions. (Andreski 1973, 26, in Social Sciences as Sorcery)

Literature-Only Discourse and the Pretense of Scientific Merit

Regardless of all the various arguments made against most theoretical economics, “defenders of the faith” will continue to espouse the party line. That is, they will say that, regardless of the bad and unproductive habits of theoretical economics, good things—namely, genuine and extremely valuable discoveries in economic theory—do fall out of the chaos. They will continue to argue that these valuable discoveries, even though they may be rare, ultimately justify the chaos and the inefficiencies of the system. To get past this convenient, blind faith, I will argue that it is possible for us to identify what characteristics of most top-ranked, theoretical literature actually do prevent it from contributing to valuable knowledge. In this way, we may be able to filter it out from this point on, without removing any of the top-ranked literature that is truly valuable. (Payson 2017, 51)

Defining the Filter

Let us consider a subset of all published papers in economics that meet all of the following three criteria. A paper that meets only one or two of the criteria may still be considered an acceptable contribution to useful knowledge. (Payson 2017, 51)

Criterion 1: The paper uses a model that has no “real application.” Along these lines, if the paper presents a model for the purpose of being persuasive on a particular policy position, but presents no real evidence in support of that position (and is only a model that essentially “rediscovers its assumptions”) then it would still meet this criterion of having no real application. (Payson 2017, 51)

Criterion 2: The paper relies on assumptions or data that cannot be verified, or the situation exists in which alternative assumptions or data can be reasonably found that would yield entirely different, conflicting results (as in McCloskey’s A-Prime, C-Prime Theorem). (Payson 2017, 51-52)

Criterion 3: The methodology of the paper would only be understood, valued, and genuinely studied by a very small group of other economists with advanced expertise in that highly specific topic. (Payson 2017, 52)

Let us call a paper that meets all of these criteria a “literature-only paper”—its purpose is only for the career advancement of the author and for the production of literature to be read and actually understood by a very small audience. Similarly, let us call the work done by economists to produce literature-only papers “literature-only work” or “literature-only discourse.” To be clear, this chapter does not discuss top-ranked literature in general—only literature-only papers that meet all (every one) of the above-mentioned criteria. (Payson 2017, 52)

(….) The only thing that truly constitutes “scientific merit”—indeed, the only thing that really matters in science—is an honest and successful effort to learn how the world actually works—not an effort to create impressive systems of mathematical equations that only very smart and very educated people can proudly decipher. Many graduate students in economics, especially those with little interest or experience in natural science, are ignorant of this. They then go on to become economics professors where they remain ignorant, and pass on their ignorance to their graduate students; the cycle repeats with each generation. (Payson 2017, 52)

In response to this accusation, many theoretical economists will argue that, from looking at the work itself, we have no basis for distinguishing between valid, scientific economic theory, and invalid, unscientific economic theory. Nevertheless, I would like to propose a very simple test that could enable us to make this distinction: We look at the assumptions made in the analysis, and ask, “Can an alternative set of equally defensible assumptions be made that will lead to very different conclusions?” If the answer is “Yes,” then the conclusions of the research in question have no degree of certainty—implying that the research has not contributed to our understanding of how the real world works. If those conclusions are then used to provide a false understanding of how the real world works, then this is simply a deception, which may be harmful in various respects. (Payson 2017, 52-53)
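
Payson’s test can be illustrated with a deliberately crude example (mine, not his, and the parameters are invented): a stylized model whose policy conclusion reverses sign under an alternative, equally defensible assumption about one unverifiable feature of the world, here the textbook contrast between competitive and monopsonistic labor markets.

```python
# Applying the proposed test to a stylized model (illustrative parameters).
# If equally defensible assumptions yield opposite conclusions, the model
# alone establishes nothing certain about the real world.

def employment_effect(wage_increase_pct: float, labor_market: str) -> float:
    """Stylized predicted % change in employment from a minimum-wage rise."""
    if labor_market == "competitive":
        return -0.5 * wage_increase_pct   # assumption A: employment falls
    if labor_market == "monopsony":
        return +0.3 * wage_increase_pct   # assumption B: employment can rise
    raise ValueError("unknown labor-market assumption")

for assumption in ("competitive", "monopsony"):
    print(assumption, employment_effect(10.0, assumption))
# Opposite signs from the two assumptions: by the test above, this model's
# conclusions carry no degree of certainty.
```

The arithmetic is trivial on purpose; what matters is that the assumption choice, not the mathematics, drives the conclusion.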

Let us call economic theory that falls under this category “unscientific economic theory” to bring home the point that science plays no role in justifying the existence of such self-serving conceptual games…. So why has the problem not been solved? The answer is that this solution, or anything like it, cannot be heard by unscientific theoretical economists—it falls on deaf ears. (Payson 2017, 53)

Selling New Terminology and Supposedly New Concepts

(….) In many cases new terminology is offered in literature-only discourse as the basis for a new theoretical model that appears to capture an important concept. In general, the important concept is already known and understood under different names. Nevertheless, when a prominent theoretical economist presents a new term that they promote as a “new concept,” and at the same time presents a very elaborate and sophisticated model to supposedly “explain” the concept in mathematical terms, it may appear, especially to naïve observers, that their research has truly discovered something important. Many may have trouble distinguishing in their own minds the value of the new terminology from the value of the arbitrary assumptions that were used to create a sophisticated model to explain it. (Payson 2017, 60)

Prematurity in Scientific Discovery

Scientists and historians can cite many cases of scientific and technological claims, hypotheses, and proposals that, viewed in retrospect, have apparently taken an unaccountably long time to be recognized, endorsed, or integrated into accepted knowledge and practice. Indeed, some have had to await independent formulation. (Hook 2002, 3)

(….) One may classify at least five grounds on which scientific claims or hypotheses—even those later achieving widespread recognition or endorsement—may be rejected at first offering. In addition to prematurity …, investigators may reject or choose not to follow up on a scientific report or hypothesis because (1) they are unaware of it, (2) having reviewed it, they judge it to be of no immediate relevance to their current work and therefore ignore it, (3) they harbor inappropriate prejudice against some aspect of the claim or its proponent, or (4) it appears to clash directly with their observation or experience. (Hook 2002, 4)

(….) Less readily overcome obstruction may stem from strong social forces—religious, ideological, political, and economic—that lead to challenge, rejection, or suppression. In practice, the only remedy may be to seek expression and circulation of the unrecognized, inhibited, or suppressed ideas, proposals, and interventions in areas and social climates where the prohibitive factors do not reign. But in principle, in an enlightened society one may suggest some goals, some general social solutions to overcome the barriers. As obvious as they may be, I believe it worthwhile to list some of them: limitation of economic suppression of new inventions or useful technology, encouragement of ideological tolerance, opposition to implacable doctrinaire social forces, and, most important tactically, attempts to disconnect the apparent implications of scientific discoveries from the feared ideological consequences. (Hook 2002, 6)

Factors related to but distinct from more global social forces concern resistance at the individual level. New scientific and technical discoveries may threaten not one’s economic welfare or ideological persuasion but rather the “psychic capital” invested in current scientific views—some involving one’s own work—challenged implicitly or explicitly by a new report. Of course the longer one has held views and invested energy in them, the more reluctant one may be to alter them. This inevitably results in conceptual inertia that some have associated with aging. And ranker reasons than those produced by hardening of cerebral arteries or of scientific beliefs may arise from prejudices of culture, nation, gender, ethnicity, or race. (Hook 2002, 6-7)

All these sources of resistance to discovery originate in what some have termed the “externalist” factors influencing science.[13] And for all the above factors, one may, in principle, suggest some types of science policies to address them. For instance, the review of work by referees without knowledge of its authors, as currently practiced by some journals, clearly diminishes the effects of some types of prejudices that inappropriately inhibit publication. Editors’ close scrutiny of reviewers’ judgements may enable them to distinguish opinions based on wounded psychic capital from legitimate methodological objections. (Hook 2002, 7)

[13] For those not familiar with the term, it refers to factors extrinsic to the putative value-free application of the scientific method. Economic and/or social factors influencing scientific inquiry are externalist. This is opposed to an “internalist approach,” which focuses on those aspects of scientific inquiry seen traditionally as free of values except for the search for truth. The image most scientists have of the ideal working of science is of course the latter. Concern with issues of acceptance of a theory based on replication, falsification, and so on may be regarded as primarily internalist, and concern with those of class and economic factors as primarily externalist. But as has been pointed out on many occasions, it is really not possible to separate those absolutely. See, for example, Nagel 1950, esp. p. 22.

A Universal Science of Man?

The medieval Roman Catholic priesthood conducted its religious preaching and other discussions in Latin, a language no more understandable to ordinary people then than the mathematical and statistical formulations of economists are today. Latin served as a universal language that had the great practical advantage of allowing easy communication within a priestly class transcending national boundaries across Europe. Yet that was not the full story. The use of Latin also separated the priesthood from the ordinary people, one of a number of devices through which the Roman Catholic Church maintained such a separation in the medieval era. It all served to convey an aura of majesty and religious authority—as does the Supreme Court in the United States, still sitting in priestly robes. In employing an arcane language of mathematics and statistics, Samuelson and fellow economists today seek a similar authority in society.

Economics as Religion: From Samuelson to Chicago and Beyond by Robert H. Nelson

This is a book about economics. But it is also a book about human limitations and the difficulty of gaining true insight into the world around us. There is, in truth, no way of separating these two things from one another. To try to discuss economics without understanding the difficulty of applying it to the real world is to consign oneself to dealing with pure makings of our own imaginations. Much of economics at the time of writing is of this sort, although it is unclear whether such modes of thought should be called ‘economics’ and whether future generations will see them as such. There is every chance that the backward-looking eye of posterity will see much of what today’s economics departments produce in the same way as we now see phrenology: a highly technical, but ultimately ridiculous pseudoscience constructed rather unconsciously to serve the political needs of the era. In the era when men claiming to be scientists felt the skull for bumps and used this to determine a man’s character and his disposition, the political discourse of the day needed a justification for the racial superiority of the white man; today our present political discourse needs a Panglossian doctrine that promotes general ignorance, a technocratic language that can be deployed to cover up certain political aspects of governance and tells us that so long as we trust in those in charge everything will work itself out in the long run. (Pilkington 2016, 1-2)

But the personal motivation of the individual economist today is not primarily political (although it may well be secondarily political, whether that politics turns right or left); the primary motivation of the individual economist today is the search for answers to questions that they can barely formulate. These men and women, perhaps more than any other, are chasing a shadow that has been taunting mankind since the early days of the Enlightenment. This is the shadow of the mathesis universalis, the Universal Science expressed in the abstract language of mathematics. They want to capture Man’s essence and understand what he will do today, tomorrow and the day after that. To some of us more humble human beings who fell once upon a time onto this strange path, this may seem altogether too much to ask of our capacities for knowledge…. Is it a noble cause, this Universal Science of Man? Some might say that if it were not so fanciful, it might be. Others might say that it has roots in extreme totalitarian thinking and, were it ever taken truly seriously, it would lead to a tyranny with those who espouse it conveniently at the helm. These are moral and political questions that will not be explored in too much detail in the present book. (Pilkington 2016, 2)

What we seek to do here is more humble again. There is a sense today, nearly six years after an economic catastrophe that few still understand and only a few saw coming, that there is something rotten in economics. Something stinks, and people are less inclined than ever to trust the funny little man standing next to the blackboard with his equations and his seemingly otherworldly answers to every social and economic problem that one can imagine. This is a healthy feeling and we as a society should promote and embrace it. A similar movement began over half a millennium ago, questioning the men of mystery who dictated how people should live their lives from ivory towers; it was called the Reformation and it changed the world…. We are not so much interested in the practices of the economists themselves, in whether they engage in simony, in nepotism and (could it ever be thought?) the sale of indulgences to those countries that had committed or were in the process of committing grave sins. Rather, we are interested in how we have gotten to where we are and how we can fix it. (Pilkington 2016, 2-3)

The roots of the problems with contemporary economics run very deep indeed. In order to comprehend them, we must run the gamut from political motivation to questions of philosophy and methodology to the foundations of the underlying structure itself. When these roots have been exposed, we can then begin the process of digging them up so we can plant a new tree. In doing this, we do not hope to provide all the answers but merely a firm grounding, a shrub that can, given time, grow into something far more robust. (Pilkington 2016, 3)

Down with Mathematics?

(….) Economics needs more people who distrust mathematics when applying thought to the social and economic world, not fewer. Indeed, … the major problems with economics today arose out of the mathematization of the discipline, especially as it proceeded after the Second World War. Mathematics became to economics what Latin was to the stagnant priest-caste that Luther and other reformers attacked during the Reformation: a means not to clarify, but to obscure through intellectual intimidation. It ensured that the common man could not read the Bible and had to consult the priest and, perhaps, pay him alms. (Pilkington 2016, 3)

(….) [M]athematics can, in certain very limited circumstances, be an opportune way of focusing the debate. It can give us a rather clear and precise conception of what we are talking about. Some aspects—by no means all aspects—of macroeconomics are quantifiable. Investments, profits, the interest rate—we can look the statistics for these things up and use this information to promote economic understanding. That these are quantifiable also means that, to a limited extent, we can conceive of them in mathematical form. It cannot be stressed enough, however, how limited the extent to which this is the case. There are always … non-quantifiable elements that play absolutely key roles in how the economy works. (Pilkington 2016, 3-4)

(….) The mathematisation of the discipline was perhaps the crucial turning point when economics began to become something entirely other than the study of the actual economy. It started in the late nineteenth century, but at the time many of those who pioneered the approach became ever more distrustful of doing so. They began to think that it would only lead to obscurity of argument and an inability to communicate properly either with other people or with the real world. Formulae would become synonymous with truth, and the interrelation between ideas would become foggy and unclear. A false sense of clarity in the form of pristine equations would be substituted for clarity of thought. Alfred Marshall, a pioneer of mathematics in economics who nevertheless always hid it in footnotes, wrote of his distress in his later years in a letter to a friend. (Pilkington 2016, 4)

[I had] a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics: and I went more and more on the rules—(1) Use mathematics as a shorthand language, rather than an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in (4), burn (3). This last I did often. (Pigou ed. 1966 [1906], pp. 427-428)

The controversy around mathematics appears to have broken out in full force surrounding the issue of econometric estimation in the late 1930s and early 1940s. Econometric estimation … is the practice of putting economic theories into mathematical form and then using them to make predictions based on available statistics…. [I]t is a desperately silly practice. Those who championed the econometric and mathematical approach were men whose names are not known today by anyone who is not deeply interested in the field. They were men like Jan Tinbergen, Oskar Lange, Jacob Marschak and Ragnar Frisch (Louçã 2007). Most of these men were social engineers of one form or another; all of them were left-wing and some of them communist. The mood of the time, one reflected in the tendency to try to model the economy itself, was that society and the economy should be planned by men in lab coats. By this they often meant not simply broad government intervention but something more like micro-management of the institutions that people inhabit day-to-day from the top down. Despite the fact that many mathematical economic models today seem outwardly to be concerned with ‘free markets’, they all share this streak, especially in how they conceive that people (should?) act. (Pilkington 2016, 4-5)
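For readers unfamiliar with the practice being criticized, here is a toy sketch of econometric estimation in Python, with made-up numbers used purely for illustration: a theorized linear relation is fitted to available statistics and then used to predict.

    import numpy as np

    # Hypothetical published statistics (fabricated for illustration only).
    income = np.array([100.0, 120.0, 140.0, 160.0, 180.0])
    consumption = np.array([82.0, 97.0, 110.0, 126.0, 139.0])

    # Theory in mathematical form: consumption = a + b * income.
    # Least squares chooses a and b to fit the observed statistics.
    b, a = np.polyfit(income, consumption, deg=1)

    # "Prediction" for an income level never actually observed.
    print(f"a = {a:.2f}, b = {b:.2f}, "
          f"predicted consumption at income 200: {a + b * 200.0:.2f}")

Whether such an exercise says anything about the real economy is, of course, exactly what Keynes and Pilkington dispute below.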

Most of the economists at the time were vehemently opposed to this. This was not a particularly left-wing or right-wing issue. On the left, John Maynard Keynes was horrified by what he was seeing develop, while, on the right, Friedrich von Hayek was warning that this was not the way forward. But it was probably Keynes who was the most coherent belligerent against the new approach. This is because before he began to write books on economics, Keynes had worked on the philosophy of probability theory, and probability theory was becoming a key component of the mathematical approach (Keynes 1921). Keynes’ extensive investigations into probability theory allowed him to perceive to what extent mathematical formalism could be applied to understanding society and the economy. He found that it was extremely limited in its ability to illuminate social problems. Keynes was not against statistics or anything like that—he was an early champion and expert—but he was very, very cautious about people who claimed that, just because economics produces statistics, these can be used in the same way as numerical observations from experiments are used in the hard sciences. He was also keenly aware that certain tendencies towards mathematisation lead to a fogging of the mind. In a more diplomatic letter to one of the new mathematical economists (Keynes, as we shall see …, could be scathing about these new approaches), he wrote: (Pilkington 2016, 5-6)

Mathematical economics is such risky stuff as compared with nonmathematical economics, because one is deprived of one’s intuition on the one hand, yet there are all kinds of unexpressed unavowed assumptions on the other. Thus I never put much trust in it unless it falls in with my own intuitions; and I am therefore grateful for an author who makes it easier for me to apply this check without too much hard work. (Keynes cited in Louçã 2007, p. 186)

(….) Mathematics, like the high Latin of Luther’s time, is a language. It is a language that facilitates greater precision in some instances and greater obscurity in others. For most issues economic, it promotes obscurity. When a language is used to obscure, it is used as a weapon by those who speak it to repress the voices of those who do not. A good deal of the history of the relationship between mathematics and the other social sciences in the latter half of the twentieth century can be read under this light. If there is anything that this book seeks to do, it is to help people realise that this is not what economics need be or should be. Frankly, we need more of those who speak the languages of the humanities—of philosophy, sociology and psychology—than we do people who speak the language of the engineers but lack the pragmatic spirit of the engineer who can see clearly that his method cannot be deployed to understand those around him. (Pilkington 2016, 6)

Natural selection of algorithms?

If we suppose that the action of the human brain, conscious or otherwise, is merely the acting out of some very complicated algorithm, then we must ask how such an extraordinarily effective algorithm actually came about. The standard answer, of course, would be ‘natural selection’. As creatures with brains evolved, those with more effective algorithms would have a better tendency to survive and therefore, on the whole, had more progeny. These progeny also tended to carry more effective algorithms than their cousins, since they inherited the ingredients of these better algorithms from their parents; so gradually the algorithms improved (not necessarily steadily, since there could have been considerable fits and starts in their evolution) until they reached the remarkable status that we (would apparently) find in the human brain. (Compare Dawkins 1986.) (Penrose 1990: 414)

Even according to my own viewpoint, there would have to be some truth in this picture, since I envisage that much of the brain’s action is indeed algorithmic, and, as the reader will have inferred from the above discussion, I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have. (Penrose 1990: 414)

Imagine an ordinary computer program. How would it have come into being? Clearly not (directly) by natural selection! Some human computer programmer would have conceived of it and would have ascertained that it correctly carries out the actions that it is supposed to. (Actually, most complicated computer programs contain errors, usually minor but often subtle ones, that do not come to light except under unusual circumstances. The presence of such errors does not substantially affect my argument.) Sometimes a computer program might itself have been ‘written’ by another, say a ‘master’ computer program, but then the master program itself would have been the product of human ingenuity and insight; or the program itself might well be pieced together from ingredients some of which were the products of other computer programs. But in all cases the validity and the very conception of the program would have ultimately been the responsibility of (at least) one human consciousness. (Penrose 1990: 414)

One can imagine, of course, that this need not have been the case, and that, given enough time, the computer programs might somehow have evolved spontaneously by some process of natural selection. If one believes that the actions of the computer programmers’ consciousness are themselves simply algorithms, then one must, in effect, believe algorithms have evolved in just this way. However, what worries me about this is that the decision as to the validity of an algorithm is not itself an algorithmic process! … (The question of whether or not a Turing machine will actually stop is not something that can be decided algorithmically.) In order to decide whether or not an algorithm will actually work, one needs insights, not just another algorithm. (Penrose 1990: 414-415)
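Penrose’s parenthetical remark is Turing’s halting theorem. A minimal sketch of the diagonal argument behind it, in Python, assuming for contradiction a decider `halts` that cannot in fact exist:

    def halts(program, program_input):
        """Hypothetical oracle: True iff program(program_input) halts.
        Turing's theorem is precisely that no correct implementation
        of this function can exist; the stub is for illustration."""
        raise NotImplementedError

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about
        # `program` run on its own source.
        if halts(program, program):
            while True:    # loop forever if the oracle says "halts"
                pass
        else:
            return         # halt at once if the oracle says "loops"

    # Self-application is the contradiction: diagonal(diagonal) halts
    # if and only if halts(diagonal, diagonal) says it does not. Hence
    # no such `halts` can exist.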

Nevertheless, one still might imagine some kind of natural selection process being effective for producing approximately valid algorithms. Personally, I find this very difficult to believe, however. Any selection process of this kind could act only on the output of the algorithms and not directly on the ideas underlying the actions of the algorithms. This is not simply extremely inefficient; I believe that it would be totally unworkable. In the first place, it is not easy to ascertain what an algorithm actually is, simply by examining its output. (It would be an easy matter to construct two quite different simple Turing machine actions for which the output tapes did not differ until, say, the 2^65536th place—and this difference could never be spotted in the entire history of the universe!) Moreover, the slightest ‘mutation’ of an algorithm (say a slight change in a Turing machine specification, or in its input tape) would tend to render it totally useless, and it is hard to see how actual improvements in algorithms could ever arise in this random way. (Even deliberate improvements are difficult without ‘meanings’ being available. Imagine that some inadequately documented and complicated computer program needs to be altered or corrected, and that the original programmer has departed or perhaps died. Rather than try to disentangle all the various meanings and intentions that the program implicitly depended upon, it is probably easier just to scrap it and start all over again!) (Penrose 1990: 415)
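Penrose’s first parenthetical point is easy to make concrete. A toy sketch (Python generators standing in for Turing-machine output tapes):

    N = 2 ** 65536   # an integer with roughly 19,700 decimal digits

    def machine_a():
        while True:
            yield 0                          # writes 0 forever

    def machine_b():
        count = 0
        while True:
            yield 1 if count == N else 0     # differs from machine_a only at place N
            count += 1

    # Comparing the two output tapes cell by cell could never reveal the
    # difference within the lifetime of the universe, so a selection
    # process acting only on observed output could never distinguish
    # these two quite different algorithms.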

Perhaps some much more ‘robust’ way of specifying algorithms could be devised, which would not be subject to the above criticisms. In a way, this is what I am saying myself. The ‘robust’ specifications are the ideas that underlie the algorithms. But ideas are things that, as far as we know, need conscious minds for their manifestation. We are back with the problem of what consciousness actually is, and what it can actually do that unconscious objects are incapable of — and how on earth natural selection has been clever enough to evolve that most remarkable of qualities. (Penrose 1990: 415)

(….) To my way of thinking, there is still something mysterious about evolution, with its apparent ‘groping’ towards some future purpose. Things at least seem to organize themselves somewhat better than they ‘ought’ to, just on the basis of blind-chance evolution and natural selection…. There seems to be something about the way that the laws of physics work, which allows natural selection to be a much more effective process than it would be with just arbitrary laws. The resulting apparently ‘intelligent groping’ is an interesting issue. (Penrose 1990: 416)

The non-algorithmic nature of mathematical insight

… [A] good part of the reason for believing that consciousness is able to influence truth-judgements in a non-algorithmic way stems from consideration of Gödel’s theorem. If we can see that the role of consciousness is non-algorithmic when forming mathematical judgements, where calculation and rigorous proof constitute such an important factor, then surely we may be persuaded that such a non-algorithmic ingredient could be crucial also for the role of consciousness in more general (non-mathematical) circumstances. (Penrose 1990: 416)

… Gödel’s theorem and its relation to computability … [has] shown that whatever (sufficiently extensive) algorithm a mathematician might use to establish mathematical truth — or, what amounts to the same thing, whatever formal system he might adopt as providing his criterion of truth — there will always be mathematical propositions, such as the explicit Gödel proposition P(K) of the system …, that his algorithm cannot provide an answer for. If the workings of the mathematician’s mind are entirely algorithmic, then the algorithm (or formal system) that he actually uses to form his judgements is not capable of dealing with the proposition P(K) constructed from his personal algorithm. Nevertheless, we can (in principle) see that P(K) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all! (Penrose 1990: 416-417)
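Schematically, in standard notation rather than Penrose’s, Gödel’s construction yields, for any consistent formal system $F$ strong enough for arithmetic, a sentence $G_F$ such that

\[ F \vdash \left( G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) \right). \]

If $F$ is consistent, then $F$ cannot prove $G_F$; and it is precisely by seeing that this is so that we see $G_F$ is true, a step taken from outside $F$. Penrose’s $P(K)$ plays the role of $G_F$ for the formal system corresponding to the mathematician’s personal algorithm.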

(….) The message should be clear. Mathematical truth is not something that we ascertain merely by use of an algorithm. I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must ‘see’ the truth of a mathematical argument to be convinced of its validity. This ‘seeing’ is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only ‘see’ it, but by so doing we reveal the very non-algorithmic nature of the ‘seeing’ process itself. (Penrose 1990: 418)

A Pragmatic View of Truth

[William] James argued at length for a certain conception of what it means for an idea to be true. This conception was, in brief, that an idea is true if it works. (Stapp 2009, 60)

James’s proposal was at first scorned and ridiculed by most philosophers, as might be expected. For most people can plainly see a big difference between whether an idea is true and whether it works. Yet James stoutly defended his idea, claiming that he was misunderstood by his critics.

It is worthwhile to try and see things from James’s point of view.

James accepts, as a matter of course, that the truth of an idea means its agreement with reality. The questions are: What is the “reality” with which a true idea agrees? And what is the relationship “agreement with reality” by virtue of which that idea becomes true?

All human ideas lie, by definition, in the realm of experience. Reality, on the other hand, is usually considered to have parts lying outside this realm. The question thus arises: How can an idea lying inside the realm of experience agree with something that lies outside? How does one conceive of a relationship between an idea, on the one hand, and something of such a fundamentally different sort, on the other? What is the structural form of that connection between an idea and a transexperiential reality that goes by the name of “agreement”? How can such a relationship be comprehended by thoughts forever confined to the realm of experience?

Since no comprehensible relationship of this kind can be conceived, if we want to know what it means for an idea to agree with a reality, we must first accept that this reality lies in the realm of experience.

This viewpoint is not in accord with the usual idea of truth. Certain of our ideas are ideas about what lies outside the realm of experience. For example, I may have the idea that the world is made up of tiny objects called particles. According to the usual notion of truth this idea is true or false according to whether or not the world really is made up of such particles. The truth of the idea depends on whether it agrees with something that lies outside the realm of experience. (Stapp 2009, 61)

Now the notion of “agreement” seems to suggest some sort of similarity or congruence of the things that agree. But things that are similar or congruent are generally things of the same kind. Two triangles can be similar or congruent because they are the same kind of thing: the relationships that inhere in one can be mapped in a direct and simple way into the relationships that inhere in the other.

But ideas and external realities are presumably very different kinds of things. Our ideas are intimately associated with certain complex, macroscopic, biological entities (our brains), and the structural forms that can inhere in our ideas would naturally be expected to depend on the structural forms of our brains. External realities, on the other hand, could be structurally very different from human ideas. Hence there is no a priori reason to expect that the relationships that constitute or characterize the essence of external reality can be mapped in any simple or direct fashion into the world of human ideas. Yet if no such mapping exists then the whole idea of “agreement” between ideas and external realities becomes obscure.

The only evidence we have on the question of whether human ideas can be brought into exact correspondence with the essences of the external realities is the success of our ideas in bringing order to our physical experience. Yet success of ideas in this sphere does not ensure the exact correspondence of our ideas to external reality.

On the other hand, the question of whether ideas “agree” with external essences is of no practical importance. What is important is precisely the success of the ideas: if the ideas are successful in bringing order to our experience, then they are useful even if they do not “agree”, in some absolute sense, with the external essences. Moreover, if they are successful in bringing order into our experience, then they do “agree” at least with the aspects of our experience that they successfully order. Furthermore, it is only this agreement with aspects of our experience that can ever really be comprehended by man. That which is not an idea is intrinsically incomprehensible, and so are its relationships to other things. This leads to the pragmatic [critical realist?] viewpoint that ideas must be judged by their success and utility in the world of ideas and experience, rather than on the basis of some intrinsically incomprehensible “agreement” with nonideas.

The significance of this viewpoint for science is its negation of the idea that the aim of science is to construct a mental or mathematical image of the world itself. According to the pragmatic view, the proper goal of science is to augment and order our experience. A scientific theory should be judged on how well it serves to extend the range of our experience and reduce it to order. It need not provide a mental or mathematical image of the world itself, for the structural form of the world itself may be such that it cannot be placed in simple correspondence with the types of structures that our mental processes can form. (Stapp 2009, 62)

James was accused of subjectivism, of denying the existence of objective reality. In defending himself against this charge, which he termed slanderous, he introduced an interesting ontology consisting of three things: (1) private concepts, (2) sense objects, (3) hypersensible realities. The private concepts are subjective experiences. The sense objects are public sense realities, i.e., sense realities that are independent of the individual. The hypersensible realities are realities that exist independently of all human thinkers.

Of hypersensible realities James can talk only obliquely, since he recognizes both that our knowledge of such things is forever uncertain and that we can moreover never even think of such things without replacing them by mental substitutes that lack the defining characteristics of that which they replace, namely the property of existing independently of all human thinkers.

James’s sense objects are curious things. They are sense realities and hence belong to the realm of experience. Yet they are public: they are independent of the individual. They are, in short, objective experiences. The usual idea about experiences is that they are personal or subjective, not public or objective.

This idea of experienced sense objects as public or objective realities runs through James’s writings. The experience “tiger” can appear in the mental histories of many different individuals. “That desk” is something that I can grasp and shake, and you also can grasp and shake. About this desk James says:

But you and I are commutable here; we can exchange places; and as you go bail for my desk, so I can bail yours. This notion of a reality independent of either of us, taken from ordinary experiences, lies at the base of the pragmatic definition of truth.

These words should, I think, be linked with Bohr’s words about classical concepts as the basis of communication between scientists. In both cases the focus is on the concretely experienced sense realities (such as the shaking of the desk) as the foundation of social reality. From this point of view the objective world is not built basically out of such airy abstractions as electrons and protons and “space”. It is founded on the concrete sense realities of social experience, such as a block of concrete held in the hand, a sword forged by a blacksmith, a Geiger counter prepared according to specifications by laboratory technicians and placed in a specified position by experimental physicists. (Stapp 2009, 62-63)

Quantum Mechanics and Human Values

We do have minds, we are conscious, and we can reflect upon our private experiences because we have them. Unlike phlogiston … these phenomena exist and are the most common in human experience.

Daniel Robinson, cited in Edward Fullbrook’s (2016, 33) Narrative Fixation in Economics

Valuations are always with us. Disinterested research there has never been and can never be. Prior to answers there must be questions. There can be no view except from a viewpoint. In the questions raised and the viewpoint chosen, valuations are implied. Our valuations determine our approaches to a problem, the definition of our concepts, the choice of models, the selection of observations, the presentations of our conclusions, in fact the whole pursuit of a study from beginning to end.

— Gunnar Myrdal (1978, 778-779), cited in Söderbaum (2018, 8)

Philosophers have tried doggedly for three centuries to understand the role of mind in the workings of a brain conceived to function according to principles of classical physics. We now know no such brain exists: no brain, body, or anything else in the real world is composed of those tiny bits of matter that Newton imagined the universe to be made of. Hence it is hardly surprising that those philosophical endeavors were beset by enormous difficulties, which led to such positions as that of the ‘eliminative materialists’, who hold that our conscious thoughts must be eliminated from our scientific understanding of nature; or of the ‘epiphenomenalists’, who admit that human experiences do exist, but claim that they play no role in how we behave; or of the ‘identity theorists’, who claim that each conscious feeling is exactly the same thing as a motion of particles that nineteenth century science thought our brains, and everything else in the universe, were made of, but that twentieth century science has found not to exist, at least as they were formerly conceived. The tremendous difficulty in reconciling consciousness, as we know it, with the older physics is dramatized by the fact that for many years the mere mention of ‘consciousness’ was considered evidence of backwardness and bad taste in most of academia, including, incredibly, even psychology and the philosophy of mind. (Stapp 2007, 139)

What you are, and will become, depends largely upon your values. Values arise from self-image: from what you believe yourself to be. Generally one is led by training, teaching, propaganda, or other forms of indoctrination, to expand one’s conception of the self: one is encouraged to perceive oneself as an integral part of some social unit such as family, ethnic or religious group, or nation, and to enlarge one’s self-interest to include the interests of this unit. If this training is successful your enlarged conception of yourself as good parent, or good son or daughter, or good Christian, Muslim, Jew, or whatever, will cause you to give weight to the welfare of the unit as you would your own. In fact, if well conditioned you may give more weight to the interests of the group than to the well-being of your bodily self. (Stapp 2007, 139)

In the present context it is not relevant whether this human tendency to enlarge one’s self-image is a consequence of natural malleability, instinctual tendency, spiritual insight, or something else. What is important is that we human beings do in fact have the capacity to expand our image of ‘self’, and that this enlarged concept can become the basis of a drive so powerful that it becomes the dominant determinant of human conduct, overwhelming every other factor, including even the instinct for bodily survival. (Stapp 2007, 140)

But where reason is honored, belief must be reconciled with empirical evidence. If you seek evidence for your beliefs about what you are, and how you fit into Nature, then science claims jurisdiction, or at least relevance. Physics presents itself as the basic science, and it is to physics that you are told to turn. Thus a radical shift in the physics-based conception of man from that of an isolated mechanical automaton to that of an integral participant in a non-local holistic process that gives form and meaning to the evolving universe is a seismic event of potentially momentous proportions. (Stapp 2007, 140)

The quantum concept of man, being based on objective science equally available to all, rather than arising from special personal circumstances, has the potential to undergird a universal system of basic values suitable to all people, without regard to the accidents of their origins. With the diffusion of this quantum understanding of human beings, science may fulfill itself by adding to the material benefits it has already provided a philosophical insight of perhaps even greater ultimate value. (Stapp 2007, 140)

This issue of the connection of science to values can be put into perspective by seeing it in the context of a thumb-nail sketch of history that stresses the role of science. For this purpose let human intellectual history be divided into five periods: traditional, modern, transitional, post-modern, and contemporary. (Stapp 2007, 140)

During the ‘traditional’ era our understanding of ourselves and our relationship to Nature was based on ‘ancient traditions’ handed down from generation to generation: ‘Traditions’ were the chief source of wisdom about our connection to Nature. The ‘modern’ era began in the seventeenth century with the rise of what is still called ‘modern science’. That approach was based on the ideas of Bacon, Descartes, Galileo and Newton, and it provided a new source of knowledge that came to be regarded by many thinkers as more reliable than tradition. (Stapp 2007, 140)

The basic idea of ‘modern’ science was ‘materialism’: the idea that the physical world is composed basically of tiny bits of matter whose contact interactions with adjacent bits completely control everything that is now happening, and that ever will happen. According to these laws, as they existed in the late nineteenth century, a person’s conscious thoughts and efforts can make no difference at all to what his body/brain does: whatever you do was deemed to be completely fixed by local interactions between tiny mechanical elements, with your thoughts, ideas, feelings, and efforts, being simply locally determined high-level consequences or re-expressions of the low-level mechanical process, and hence basically just elements of a reorganized way of describing the effects of the absolutely and totally controlling microscopic material causes. (Stapp 2007, 140-141)

This materialist conception of reality began to crumble at the beginning of the twentieth century with Max Planck’s discovery of the quantum of action. Planck announced to his son that he had, on that day, made a discovery as important as Newton’s. That assessment was certainly correct: the ramifications of Planck’s discovery were eventually to cause Newton’s materialist conception of physical reality to come crashing down. Planck’s discovery marks the beginning of the ‘transitional’ period. (Stapp 2007, 141)

A second important transitional development soon followed. In 1905 Einstein announced his special theory of relativity. This theory denied the validity of our intuitive idea of the instant of time ‘now’, and promulgated the thesis that even the most basic quantities of physics, such as the length of a steel rod, and the temporal order of two events, had no objective ‘true values’, but were well defined only ‘relative’ to some observer’s point of view. (Stapp 2007, 141)

Planck’s discovery led by the mid-1920s to a complete breakdown, at the fundamental level, of the classical material conception of nature. A new basic physical theory, developed principally by Werner Heisenberg, Niels Bohr, Wolfgang Pauli, and Max Born, brought ‘the observer’ explicitly into physics. The earlier idea that the physical world is composed of tiny particles (and electromagnetic and gravitational fields) was abandoned in favor of a theory of natural phenomena in which the consciousness of the human observer is ascribed an essential role. This successor to classical physical theory is called Copenhagen quantum theory. (Stapp 2007, 141)

This turning away by science itself from the tenets of the objective materialist philosophy gave impetus to, and lent support to, post-modernism. That view, which emerged during the second half of the twentieth century, promulgated, in essence, the idea that all ‘truths’ were relative to one’s point of view, and were mere artifacts of some particular social group’s struggle for power over competing groups. Thus each social movement was entitled to its own ‘truth’, which was viewed simply as a socially created pawn in the power game. (Stapp 2007, 141-142)

The connection of post-modern thought to science is that both Copenhagen quantum theory and relativity theory had retreated from the idea of observer-independent objective truth. Science in the first quarter of the twentieth century had not only eliminated materialism as a possible foundation for objective truth, but seemed to have discredited the very idea of objective truth in science. But if the community of scientists has renounced the idea of objective truth in favor of the pragmatic idea that ‘what is true for us is what works for us’, then every group becomes licensed to do the same, and the hope evaporates that science might provide objective criteria for resolving contentious social issues. (Stapp 2007, 142)

This philosophical shift has had profound social and intellectual ramifications. But the physicists who initiated this mischief were generally too interested in practical developments in their own field to get involved in these philosophical issues. Thus they failed to broadcast an important fact: already by mid-century, a further development in physics had occurred that provides an effective antidote to both the ‘materialism’ of the modern era, and the ‘relativism’ and ‘social constructionism’ of the post-modern period. In particular, John von Neumann developed, during the early thirties, a form of quantum theory that brought the physical and mental aspects of nature back together as two aspects of a rationally coherent whole. This theory was elevated, during the forties — by the work of Tomonaga and Schwinger — to a form compatible with the physical requirements of the theory of relativity. (Stapp 2007, 142)

Von Neumann’s theory, unlike the transitional ones, provides a framework for integrating into one coherent idea of reality the empirical data residing in subjective experience with the basic mathematical structure of theoretical physics. Von Neumann’s formulation of quantum theory is the starting point of all efforts by physicists to go beyond the pragmatically satisfactory but ontologically incomplete Copenhagen form of quantum theory. (Stapp 2007, 142)

Von Neumann capitalized upon the key Copenhagen move of bringing human choices into the theory of physical reality. But, whereas the Copenhagen approach excluded the bodies and brains of the human observers from the physical world that they sought to describe, von Neumann demanded logical cohesion and mathematical precision, and was willing to follow where this rational approach led. Being a mathematician, fortified by the rigor and precision of his thought, he seemed less intimidated than his physicist brethren by the sharp contrast between the nature of the world called for by the new mathematics and the nature of the world that the genius of Isaac Newton had concocted. (Stapp 2007, 142-143)

A common core feature of the orthodox (Copenhagen and von Neumann) quantum theory is the incorporation of efficacious conscious human choices into the structure of basic physical theory. How this is done, and how the conception of the human person is thereby radically altered, has been spelled out in lay terms in this book, and is something every well informed person who values the findings of science ought to know about. The conception of self is the basis of values and thence of behavior, and it controls the entire fabric of one’s life. It is irrational, from a scientific perspective, to cling today to false and inadequate nineteenth century concepts about your basic nature, while ignoring the profound impact upon these concepts of the twentieth century revolution in science. (Stapp 2007, 143)

It is curious that some physicists want to improve upon orthodox quantum theory by excluding ‘the observer’, who, by virtue of his subjective nature, must, in their opinion, be excluded from science. That stance is maintained in direct opposition to what would seem to be the most profound advance in physics in three hundred years, namely the overcoming of the most glaring failure of classical physics, its inability to accommodate us, its creators. The most salient philosophical feature of quantum theory is that the mathematics has a causal gap that, by virtue of its intrinsic form, provides a perfect place for Homo sapiens as we know and experience ourselves. (Stapp 2007, 143)

One of the most important tasks of social sciences is to explain the events, processes, and structures that take place and act in society. In a time when scientific relativism (social constructivism, postmodernism, de-constructivism etc.) is expanding, it’s important to guard against reducing science to a pure discursive level [cf. Pålsson Syll 2005]. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality actually looks like. This is after all the object of science.

— Lars Pålsson Syll. On the use and misuse of theories and models in economics (Kindle Locations 113-118). WEA. Kindle Edition.

Conclusions

How can our world of billions of thinkers ever come into general concordance on fundamental issues? How do you, yourself, form opinions on such issues? Do you simply accept the message of some ‘authority’, such as a church, a state, or a social or political group? All of these entities promote concepts about how you as an individual fit into the reality that supports your being. And each has an agenda of its own, and hence its own internal biases. But where can you find an unvarnished truth about your nature, and your place in Nature? (Stapp 2007, 145)

Science rests, in the end, on an authority that lies beyond the pettiness of human ambition. It rests, finally, on stubborn facts. The founders of quantum theory certainly had no desire to bring down the grand structure of classical physics of which they were the inheritors, beneficiaries, and torch bearers. It was stubborn facts that forced their hand, and made them reluctantly abandon the two-hundred-year-old classical ideal of a mechanical universe, and turn to what perhaps should have been seen from the start as a more reasonable endeavor: the creation of an understanding of nature that includes in a rationally coherent way the thoughts by which we know and influence the world around us. The labors of scientists endeavoring merely to understand our inanimate environment produced, from its own internal logic, a rationally coherent framework into which we ourselves fit neatly. What was falsified by twentieth-century science was not the core traditions and intuitions that have sustained societies and civilizations since the dawn of mankind, but rather an historical aberration, an impoverished world view within which philosophers of the past few centuries have tried relentlessly but fruitlessly to find ourselves. The falseness of that deviation of science must be made known, and heralded, because human beings are not likely to endure in a society ruled by a conception of themselves that denies the essence of their being. (Stapp 2007, 145)

Einstein’s principle is relativity, not relativism. The historian of science Gerald Holton reports that Einstein was unhappy with the label ‘relativity theory’ and in his correspondence referred to it as Invariantentheorie…. Consider temporal and spatial measurements. Even if temporal and spatial measurements become frame-dependent, the observers who are attached to their different clock-carrying frames, like the respective observer on the platform and the train, can communicate their results to each other. They can even predict what the other observer will measure. The transparency between the reference frames and the mutual predictability of the measurement is due [to] a mathematical relationship, called the Lorentz transformations. The Lorentz transformations state the mathematical rules, which allow an observer to translate his/her coordinates into those of a different observer.

(….) The appropriate criterion for what is fundamentally real will (…) be what is invariant across all points of view…. The invariant is the real. This is a hypothesis about physical reality: what is frame-dependent is apparently real, what is frame-independent may be fundamentally real. To claim that the invariant is the real is to make an inference from the structure of scientific theories to the structure of the natural world.

Weinert (2004, 66, 70-71) The Scientist as Philosopher: Philosophical Consequences of Great Scientific Discoveries
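Weinert’s point can be stated compactly in standard special-relativity notation (supplementing, not quoting, his text). For two frames in relative motion with speed $v$ along the shared $x$-axis, the Lorentz transformations read

\[ t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad x' = \gamma\,(x - v t), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \]

so temporal and spatial measurements are frame-dependent; yet the spacetime interval

\[ s^{2} = c^{2} t^{2} - x^{2} = c^{2} t'^{2} - x'^{2} \]

comes out the same in every frame. The interval is exactly the kind of invariant that Weinert’s criterion singles out as a candidate for the fundamentally real.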

Reply to Sam Harris on Free Will

Sam Harris’s book “Free Will” is an instructive example of how a spokesman dedicated to being reasonable and rational can have his arguments derailed by a reliance on prejudices and false presuppositions so deep-seated that they block seeing science-based possibilities that lie outside the confines of an outmoded world view that is now known to be incompatible with the empirical facts. (Stapp 2017, 97)

A particular logical error appears repeatedly throughout Harris’s book. Early on, he describes the deeds of two psychopaths who have committed some horrible acts. He asserts: “I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people.” (Stapp 2017, 97)

Harris asserts, here, that there is “no extra part of me” that could decide differently. But that assertion, which he calls an admission, begs the question. What evidence rationally justifies that claim? Clearly it is not empirical evidence. It is, rather, a prejudicial and anti-scientific commitment to the precepts of a known-to-be-false conception of the world called classical mechanics. That older scientific understanding of reality was found during the first decades of the twentieth century to be incompatible with empirical findings, and was replaced during the 1920s and early 1930s by an adequate and successful revised understanding called quantum mechanics. This newer theory, in the rationally coherent and mathematically rigorous formulation offered by John von Neumann, features a separation of the world process into (1) a physically described part composed of atoms and closely connected physical fields; (2) some psychologically described parts lying outside the atom-based part, and identified as our thinking egos; and (3) some psycho-physical actions attributed to nature. Within this empirically adequate conception of reality there is an extra (non-atom-based) part of a person (his thinking ego) that can resist (successfully, if willed with sufficient intensity) the impulse to victimize other people. Harris’s example thus illustrates the fundamental errors that can be caused by identifying honored science with nineteenth century classical mechanics. (Stapp 2017, 97)

Harris goes on to defend “compatibilism”, the view that claims both that every physical event is determined by what came before in the physical world and also that we possess “free will”. Harris says that “Today the only philosophically respectable way to endorse free will is to be a compatibilist—because we know that determinism, in every sense relevant to human behavior, is true”. (Stapp 2017, 97-98)

But what Harris claims that “We know” to be true is, according to quantum mechanics, not known to be true. (Stapp 2017, 98)

The final clause “in every sense relevant to human behavior” is presumably meant to discount the relevance of quantum mechanical indeterminism, by asserting that quantum indeterminism is not relevant to human behavior—presumably because it washes out at the level of macroscopic brain dynamics. But that idea of what the shift to quantum mechanics achieves is grossly deficient. The quantum indeterminism merely opens the door to a complex dynamical process that not only violates determinism (the condition that the physical past determines the future) at the level of human behavior, but allows mental intentions that are not controlled by the physical past to influence human behavior in the intended way. Thus the shift to quantum mechanics opens the door to a causal efficacy of free will that is ruled out by Harris’s effective embrace of false nineteenth century science. (Stapp 2017, 98)