The answer, therefore, which the seventeenth century gave to the ancient question … “What is the world made of?” was that the world is a succession of instantaneous configurations of matter — or material, if you wish to include stuff more subtle than ordinary matter…. Thus the configurations determined their own changes, so that the circle of scientific thought was completely closed. This is the famous mechanistic theory of nature, which has reigned supreme ever since the seventeenth century. It is the orthodox creed of physical science…. There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what I will call the ‘Fallacy of Misplaced Concreteness.’ This fallacy is the occasion of great confusion in philosophy. (Whitehead 1967: 50-51)
(….) This conception of the universe is surely framed in terms of high abstractions, and the paradox only arises because we have mistaken our abstractions for concrete realities…. The seventeenth century had finally produced a scheme of scientific thought framed by mathematics, for the use of mathematics. The great characteristic of the mathematical mind is its capacity for dealing with abstractions; and for eliciting from them clear-cut demonstrative trains of reasoning, entirely satisfactory so long as it is those abstractions which you want to think about. The enormous success of the scientific abstractions, yielding on the one hand matter with its simple location in space and time, on the other hand mind, perceiving, suffering, reasoning, but not interfering, has foisted onto philosophy the task of accepting them as the most concrete rendering of fact. (Whitehead 1967: 54-55)
Thereby, modern philosophy has been ruined. It has oscillated in a complex manner between three extremes. These are the dualists, who accept matter and mind as on an equal basis, and the two varieties of monists, those who put mind inside matter, and those who put matter inside mind. But this juggling with abstractions can never overcome the inherent confusion introduced by the ascription of misplaced concreteness to the scientific scheme of the seventeenth century. (Whitehead 1967: 55)
— Alfred North Whitehead in Science and the Modern World
In the UK, for example, 97 percent of money is created by commercial banks and its character takes the form of debt-based, interest-bearing loans. As for its intended use? In the 10 years running up to the 2008 financial crash, over 75 percent of those loans were granted for buying stocks or houses—so fuelling the house-price bubble—while a mere 13 percent went to small businesses engaged in productive enterprise.47 When such debt increases, a growing share of a nation’s income is siphoned off as payments to those with interest-earning investments and as profit for the banking sector, leaving less income available for spending on products and services made by people working in the productive economy. ‘Just as landlords were the archetypal rentiers of their agricultural societies,’ writes economist Michael Hudson, ‘so investors, financiers and bankers are in the largest rentier sector of today’s financialized economies.’ (Raworth 2017, 155)
Once the current design of money is spelled out this way—its creation, its character and its use—it becomes clear that there are many options for redesigning it, involving the state and the commons along with the market. What’s more, many different kinds of money can coexist, with the potential to turn a monetary monoculture into a financial ecosystem. (Raworth 2017, 155)
Imagine, for starters, if central banks were to take back the power to create money and then issue it to commercial banks, while simultaneously requiring them to hold 100 percent reserves for the loans that they make—meaning that every loan would be backed by someone else’s savings, or the bank’s own capital. It would certainly separate the role of providing money from the role of providing credit, so helping to prevent the build-up of debt-fuelled credit bubbles that burst with such deep social costs. That idea may sound outlandish, but it is neither a new nor a fringe suggestion. First proposed during the 1930s Great Depression by influential economists of the day such as Irving Fisher and Milton Friedman, it gained renewed support after the 2008 crash, gaining the backing of mainstream financial experts at the International Monetary Fund and Martin Wolf of the UK’s Financial Times. (Raworth 2017, 155-156)
— Kate Raworth in Doughnut Economics
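The fractional-reserve mechanism Raworth contrasts with 100 percent reserves can be sketched numerically: each loan is re-deposited and partially re-lent, so an initial deposit multiplies through the banking system. A minimal, purely illustrative sketch — the figures and the `total_deposits` helper are my own, not Raworth's:

```python
# Toy illustration (not from Raworth's text): how fractional-reserve
# lending multiplies an initial deposit, versus 100 percent reserves.

def total_deposits(initial: float, reserve_ratio: float, rounds: int = 1000) -> float:
    """Sum the deposits created as each loan is re-deposited and re-lent."""
    total, deposit = 0.0, initial
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)  # the lendable (non-reserve) share
    return total

fractional = total_deposits(100.0, 0.10)  # ~1000: broad money is ~10x the base
full = total_deposits(100.0, 1.00)        # 100: no multiplication at all
print(round(fractional), round(full))
```

With a 10 percent reserve ratio the geometric series converges to initial/reserve_ratio, i.e. a tenfold expansion of the original deposit; with 100 percent reserves, as in the Fisher–Friedman proposal Raworth cites, every loan is fully backed and no new money is created by lending.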
Suggestions are anchored in neoclassical theory
Despite growing diversity in research, the school of economic thought commonly referred to as neoclassical continues to dominate teaching and policy. It developed in the 19th century as an attempt to apply the methods of the natural sciences, and especially physics, to social phenomena. In the search for an “exact” social science, social relationships are abstracted to such an extent that calculation becomes possible. Neoclassical economics primarily asks one question: How do rational actors optimize under given constraints? There is nothing wrong with this approach in and of itself. However, in view of the ecological crisis, society must ask completely different questions: How can planetary collapse be prevented? What might an economic system look like that is social, fair and ecological?
The dematerialization of the value concept boded ill for the tangible world of stable time and concrete motion (Kern 1983). Again, the writer Jorge Luis Borges (1962, p. 159) captured the mood of the metaphor: (Mirowski 1989, 134. Kindle Location 2875-2877)
I reflected there is nothing less material than money, since any coin whatsoever (let us say a coin worth twenty centavos) is, strictly speaking, a repertory of possible futures. Money is abstract, I repeated; money is the future tense. It can be an evening in the suburbs, or music by Brahms; it can be maps, or chess, or coffee; it can be the words of Epictetus teaching us to despise gold; it is a Proteus more versatile than the one on the isle of Pharos. It is unforeseeable time, Bergsonian time . . . (Mirowski 1989, 134-135. Kindle Location 2877-2881)
It was not solely in art that the reconceptualization of value gripped the imagination. Because the energy concept depended upon the value metaphor in part for its credibility, physics was prodded to reinterpret the meaning of its conservation principles. In an earlier, simpler era Clerk Maxwell could say that conservation principles gave the physical molecules “the stamp of the manufactured article” (Barrow and Tipler 1986, p. 88), but as manufacture gave way to finance, seeing conservation principles in nature gave way to seeing them more as contingencies, imposed by our accountants in order to keep confusion at bay. Nowhere is this more evident than in the popular writings of the physicist Arthur Eddington, the Stephen Jay Gould of early twentieth century physics: (Mirowski 1989, 135. Kindle Location 2881-2887)
The famous laws of conservation and energy . . . are mathematical identities. Violation of them is unthinkable. Perhaps I can best indicate their nature by an analogy. An aged college Bursar once dwelt secluded in his rooms devoting himself entirely to accounts. He realised the intellectual and other activities of the college only as they presented themselves in the bills. He vaguely conjectured an objective reality at the back of it all — some sort of parallel to the real college — though he could only picture it in terms of the pounds, shillings and pence which made up what he would call “the commonsense college of everyday experience.” The method of account-keeping had become inveterate habit handed down from generations of hermit-like bursars; he accepted the form of the accounts as being part of the nature of things. But he was of a scientific turn and he wanted to learn more about the college. One day in looking over the books he discovered a remarkable law. For every item on the credit side an equal item appeared somewhere else on the debit side. “Ha!” said the Bursar, “I have discovered one of the great laws controlling the college. It is a perfect and exact law of the real world. Credit must be called plus and debit minus; and so we have the law of conservation of £. s. d. This is the true way to find out things, and there is no limit to what may ultimately be discovered by this scientific method . . .” (Mirowski 1989, 135. Kindle Location 2887-2898)
I have no quarrel with the Bursar for believing that scientific investigation of the accounts is a road to exact (though necessarily partial) knowledge of the reality behind them . . . But I would point out to him that a discovery of the overlapping of the different aspects in which the realities of the college present themselves in the world of accounts, is not a discovery of the laws controlling the college; that he has not even begun to find the controlling laws. The college may totter but the Bursar’s accounts still balance . . . (Mirowski 1989, 135-136. Kindle Location 2898-2902)
Perhaps a better way of expressing this selective influence of the mind on the laws of Nature is to say that values are created by the mind [Eddington 1930, pp. 237–8, 243]. (Mirowski 1989, 136. Kindle Location 2903-2904)
Once physicists had become inured to entertaining the idea that value is not natural, then it was a foregone conclusion that the stable Laplacean dreamworld of a fixed and conserved energy and a single super-variational principle was doomed. Again, Eddington stated it better than I could hope to: (Mirowski 1989, 136. Kindle Location 2904-2907)
[Classical determinism] was the gold standard in the vaults; [statistical laws were] the paper currency actually used. But everyone still adhered to the traditional view that paper currency needs to be backed by gold. As physics progressed the occasions when the gold was actually produced became rarer until they ceased altogether. Then it occurred to some of us to question whether there still was a hoard of gold in the vaults or whether its existence was a mythical tradition. The dramatic ending of the story would be that the vaults were opened and found to be empty. The actual ending is not quite so simple. It turns out that the key has been lost, and no one can say for certain whether there is any gold in the vaults or not. But I think it is clear that, with either termination, present-day physics is off the gold standard [Eddington 1935, p. 81]. (Mirowski 1989, 136. Kindle Location 2907-2913)
The denaturalization of value presaged the dissolution of the energy concept into a mere set of accounts, which, like national currencies, were not convertible at any naturally fixed rates of exchange. Quantum mechanical energy was not exactly the same thing as relativistic energy or thermodynamic energy. Yet this did not mean that physics had regressed to a state of fragmented autarkies. Trade was still conducted between nations; mathematical structure could bridge subdisciplines of physics. It was just that everyone was coming to acknowledge that money was provisional, and that symmetries expressed by conservation principles were contingent upon the purposes of the theory in which they were embedded. (Mirowski 1989, 136. Kindle Location 2913-2918)
Increasingly, this contingent status was expressed by recourse to economic metaphors. The variability of metrics of space-time in general relativity were compared to the habit of describing inflation in such torturous language as: “The pound is now only worth seven and sixpence” (Eddington 1930, p. 26). The fundamentally stochastic character of the energy quantum was said to allow nuclear particles to “borrow” sufficient energy so that they could “tunnel” their way out of the nucleus. And, inevitably, if we live with a banking system wherein money is created by means of loans granted on the basis of near-zero fractional reserves, then this process of borrowing energy could cascade, building upon itself until the entire universe is conceptualized as a “free lunch.” The nineteenth century would have recoiled in horror from this idea, they who believed that banks merely ratified the underlying real transactions with their loans. (Mirowski 1989, 136-137. Kindle Location 2918-2925)
2.2 The evolution of the mind: consciousness, creativity, psychological indeterminacy
If consciousness is accepted as real, it seems reasonable that one would allow for an active consciousness, for us to be aware of the experience of thinking and to engage in that experience. If we didn’t allow for engaged and active thought in consciousness, then consciousness would seem to be a passive “ghost in the machine” sort of consciousness. Siegel (2016) would appear to be in agreement with this notion insofar as he sees the mind as a conscious regulator of energy and information flow. But if we allow consciousness to be real in this manner, we allow the possibility of thoughts which exist for no reason other than “we” (the phenomenological “I” (Luijpen, 1969)) think them consciously and actively. The existence of such a thought does not itself break the principle of sufficient reason (Melamed and Lin, 2015), but the “I” thinking them might. That the “I” brings into being a conscious thought might be the terminus of a particular chain of causation. (Markey-Towler 2018, 8)
We call such thoughts to exist “genuinely creative thought”, they are thoughts which exist for no reason other than they are created by the phenomenological “I”. The capability to imagine new things is endowed by the conscious mind. This poses a difficulty for mathematical models which by their nature (consisting always of statements A ⇒ B) require the principle of sufficient reason to hold. Active conscious thought, insofar as it may be genuinely creative is indeterminate until it exists. However, that we might not be able to determine the existence of such thoughts before they are extant does not preclude us from representing them once their existence is determined. Koestler (1964) taught that all acts of creation are ultimately acts of “bisociation”, that is, of linking two things together in a manner hitherto not the case. Acts of creation, bisociations made by the conscious mind, are indeterminate before they exist, but once they exist they can be represented as relations Rhh’ between two objects of reality h,h’. We may think of such acts of creation as akin to the a priori synthetic statements of which Kant (1781) spoke. (Markey-Towler 2018, 8)
This is no matter of mere assertion. Roger Penrose (1989) holds, and it is difficult to dismiss him, that the famous theorems of Kurt Gödel imply something unique exists in the human consciousness. The human mind can “do” something no machine can. Gödel demonstrated that within certain logical systems there would be true statements which could not be so verified within the confines of the logical system but would require verification by the human consciousness. The consciousness realises connections — in this case truth-values — which cannot be realised by the machinations of mathematical logic alone. It creates. The human mind can therefore (since we have seen those connections made) create connections in the creation of mathematical systems irreducible to machination alone. There are certain connections which consciousness alone can make. (Markey-Towler 2018, 9)
The problem of conscious thought goes a little further though. New relations may be presented to the consciousness either by genuinely creative thought or otherwise, but they must be actually incorporated into the mind, Rhh’ ∈ g(H) ⊂ μ and take their place alongside others in the totality of thought g(H) ⊂ μ. Being a matter of conscious thought by the phenomenological “I”, the acceptance or rejection of such relations is something we cannot determine until the “I” has determined the matter. As Cardinal Newman demonstrated in his Grammar of Assent (1870), connections may be presented to the phenomenological “I”, but they are merely presented to the “I” and therefore inert until the “I” assents to them — accepts and incorporates them into that individual’s worldview. The question of assent to various connections presented to the “I” is an either/or question Newman recognises is ultimately free of the delimitations of reason and a matter for resolution by the “I” alone. (Markey-Towler 2018, 9)
There are thus two indeterminacies introduced to any psychological theory by the existence of consciousness:
1. Indeterminacy born of the possibility of imagining new relations Rhh’ in genuinely creative thought.
2. Indeterminacy born of the acceptance or rejection by conscious thought of any new relation Rhh’ and their incorporation or not into the mind μ ⊃ g(H). (Markey-Towler 2018, 9)
The reality of consciousness thus places a natural limit on the degree to which we can determine the processes of the mind, determine those thoughts which will exist prior to their existence. For psychology, this indeterminacy of future thought until its passage and observance is the (rough) equivalent of the indeterminacy introduced to the physical world by Heisenberg’s principle, the principle underlying the concept of the “wave function” upon which an indeterminate quantum mechanics operates (under certain interpretations (Kent, 2012; Popper, 1934, Ch.9)). (Markey-Towler 2018, 9-10)
2.3 Philosophical conclusions
We hold to the following philosophical notions in this work. The mind is that element of our being which experiences our place in the world and relation to it. We are conscious when we are aware of our place in and relation to the world. We hold to a mix of the “weak Artificial Intelligence” and mystic philosophies that mind is emergent from the brain and that mind, brain and body constitute the individual existing in a monist reality. The mind is a network structure μ = {H, g(H)} expressing the connections g(H) the individual construes between the objects and events in the world H, an architecture within which and upon which the psychological process operates. The reality of consciousness introduces an indeterminacy into that architecture which imposes a limit on our ability to determine the psychological process. (Markey-Towler 2018, 10)
~ ~ ~
My own philosophical views differ from the assumptions underlying Markey-Towler. To say that “mind is emergent from the brain and that mind, brain and body constitute the individual existing in a monist reality” is essentially a form of physical monism claiming that mind “emerged” from matter, which really explains nothing. If the universe (and humans) were merely mechanisms and mind were reducible to matter, we would never be able to be aware of our place in and relation to the universe, nor would there ever be two differing philosophical interpretations of our place in the universe. The hard problem (the mind-brain question) in neuroscience remains a debated and unsettled question. There are serious philosophical weaknesses in mechanistic materialism as a philosophical position, as is discussed in Quantum Mechanics and Human Values (Stapp 2007 and 2017).
[William] James argued at length for a certain conception of what it means for an idea to be true. This conception was, in brief, that an idea is true if it works. (Stapp 2009, 60)
James’s proposal was at first scorned and ridiculed by most philosophers, as might be expected. For most people can plainly see a big difference between whether an idea is true and whether it works. Yet James stoutly defended his idea, claiming that he was misunderstood by his critics.
It is worthwhile to try and see things from James’s point of view.
James accepts, as a matter of course, that the truth of an idea means its agreement with reality. The questions are: What is the “reality” with which a true idea agrees? And what is the relationship “agreement with reality” by virtue of which that idea becomes true?
All human ideas lie, by definition, in the realm of experience. Reality, on the other hand, is usually considered to have parts lying outside this realm. The question thus arises: How can an idea lying inside the realm of experience agree with something that lies outside? How does one conceive of a relationship between an idea, on the one hand, and something of such a fundamentally different sort? What is the structural form of that connection between an idea and a transexperiential reality that goes by the name of “agreement”? How can such a relationship be comprehended by thoughts forever confined to the realm of experience?
So if we want to know what it means for an idea to agree with a reality we must first accept that this reality lies in the realm of experience.
This viewpoint is not in accord with the usual idea of truth. Certain of our ideas are ideas about what lies outside the realm of experience. For example, I may have the idea that the world is made up of tiny objects called particles. According to the usual notion of truth this idea is true or false according to whether or not the world really is made up of such particles. The truth of the idea depends on whether it agrees with something that lies outside the realm of experience. (Stapp 2009, 61)
Now the notion of “agreement” seems to suggest some sort of similarity or congruence of the things that agree. But things that are similar or congruent are generally things of the same kind. Two triangles can be similar or congruent because they are the same kind of thing: the relationships that inhere in one can be mapped in a direct and simple way into the relationships that inhere in the other.
But ideas and external realities are presumably very different kinds of things. Our ideas are intimately associated with certain complex, macroscopic, biological entities—our brains—and the structural forms that can inhere in our ideas would naturally be expected to depend on the structural forms of our brains. External realities, on the other hand, could be structurally very different from human ideas. Hence there is no a priori reason to expect that the relationships that constitute or characterize the essence of external reality can be mapped in any simple or direct fashion into the world of human ideas. Yet if no such mapping exists then the whole idea of “agreement” between ideas and external realities becomes obscure.
The only evidence we have on the question of whether human ideas can be brought into exact correspondence with the essences of the external realities is the success of our ideas in bringing order to our physical experience. Yet success of ideas in this sphere does not ensure the exact correspondence of our ideas to external reality.
On the other hand, the question of whether ideas “agree” with external essences is of no practical importance. What is important is precisely the success of the ideas—if the ideas are successful in bringing order to our experience, then they are useful even if they do not “agree”, in some absolute sense, with the external essences. Moreover, if they are successful in bringing order into our experience, then they do “agree” at least with the aspects of our experience that they successfully order. Furthermore, it is only this agreement with aspects of our experience that can ever really be comprehended by man. That which is not an idea is intrinsically incomprehensible, and so are its relationships to other things. This leads to the pragmatic [critical realist?] viewpoint that ideas must be judged by their success and utility in the world of ideas and experience, rather than on the basis of some intrinsically incomprehensible “agreement” with nonideas.
The significance of this viewpoint for science is its negation of the idea that the aim of science is to construct a mental or mathematical image of the world itself. According to the pragmatic view, the proper goal of science is to augment and order our experience. A scientific theory should be judged on how well it serves to extend the range of our experience and reduce it to order. It need not provide a mental or mathematical image of the world itself, for the structural form of the world itself may be such that it cannot be placed in simple correspondence with the types of structures that our mental processes can form. (Stapp 2009, 62)
James was accused of subjectivism—of denying the existence of objective reality. In defending himself against this charge, which he termed slanderous, he introduced an interesting ontology consisting of three things: (1) private concepts, (2) sense objects, (3) hypersensible realities. The private concepts are subjective experiences. The sense objects are public sense realities, i.e., sense realities that are independent of the individual. The hypersensible realities are realities that exist independently of all human thinkers.
Of hypersensible realities James can talk only obliquely, since he recognizes both that our knowledge of such things is forever uncertain and that we can moreover never even think of such things without replacing them by mental substitutes that lack the defining characteristics of that which they replace, namely the property of existing independently of all human thinkers.
James’s sense objects are curious things. They are sense realities and hence belong to the realm of experience. Yet they are public: they are independent of the individual. They are, in short, objective experiences. The usual idea about experiences is that they are personal or subjective, not public or objective.
This idea of experienced sense objects as public or objective realities runs through James’s writings. The experience “tiger” can appear in the mental histories of many different individuals. “That desk” is something that I can grasp and shake, and you also can grasp and shake. About this desk James says:
But you and I are commutable here; we can exchange places; and as you go bail for my desk, so I can bail yours. This notion of a reality independent of either of us, taken from ordinary experiences, lies at the base of the pragmatic definition of truth.
These words should, I think, be linked with Bohr’s words about classical concepts as the basis of communication between scientists. In both cases the focus is on the concretely experienced sense realities—such as the shaking of the desk—as the foundation of social reality. From this point of view the objective world is not built basically out of such airy abstractions as electrons and protons and “space”. It is founded on the concrete sense realities of social experience, such as a block of concrete held in the hand, a sword forged by a blacksmith, a Geiger counter prepared according to specifications by laboratory technicians and placed in a specified position by experimental physicists. (Stapp 2009, 62-63)
We do have minds, we are conscious, and we can reflect upon our private experiences because we have them. Unlike phlogiston … these phenomena exist and are the most common in human experience.
— Daniel Robinson, cited in Edward Fullbrook’s (2016, 33) Narrative Fixation in Economics
Valuations are always with us. Disinterested research there has never been and can never be. Prior to answers there must be questions. There can be no view except from a viewpoint. In the questions raised and the viewpoint chosen, valuations are implied. Our valuations determine our approaches to a problem, the definition of our concepts, the choice of models, the selection of observations, the presentations of our conclusions — in fact the whole pursuit of a study from beginning to end.
— Gunnar Myrdal (1978, 778-779), cited in Söderbaum (2018, 8)
Philosophers have tried doggedly for three centuries to understand the role of mind in the workings of a brain conceived to function according to principles of classical physics. We now know no such brain exists: no brain, body, or anything else in the real world is composed of those tiny bits of matter that Newton imagined the universe to be made of. Hence it is hardly surprising that those philosophical endeavors were beset by enormous difficulties, which led to such positions as that of the ‘eliminative materialists’, who hold that our conscious thoughts must be eliminated from our scientific understanding of nature; or of the ‘epiphenomenalists’, who admit that human experiences do exist, but claim that they play no role in how we behave; or of the ‘identity theorists’, who claim that each conscious feeling is exactly the same thing as a motion of particles that nineteenth century science thought our brains, and everything else in the universe, were made of, but that twentieth century science has found not to exist, at least as they were formerly conceived. The tremendous difficulty in reconciling consciousness, as we know it, with the older physics is dramatized by the fact that for many years the mere mention of ‘consciousness’ was considered evidence of backwardness and bad taste in most of academia, including, incredibly, even psychology and the philosophy of mind. (Stapp 2007, 139)
What you are, and will become, depends largely upon your values. Values arise from self-image: from what you believe yourself to be. Generally one is led by training, teaching, propaganda, or other forms of indoctrination, to expand one’s conception of the self: one is encouraged to perceive oneself as an integral part of some social unit such as family, ethnic or religious group, or nation, and to enlarge one’s self-interest to include the interests of this unit. If this training is successful your enlarged conception of yourself as good parent, or good son or daughter, or good Christian, Muslim, Jew, or whatever, will cause you to give weight to the welfare of the unit as you would your own. In fact, if well conditioned you may give more weight to the interests of the group than to the well-being of your bodily self. (Stapp 2007, 139)
In the present context it is not relevant whether this human tendency to enlarge one’s self-image is a consequence of natural malleability, instinctual tendency, spiritual insight, or something else. What is important is that we human beings do in fact have the capacity to expand our image of ‘self’, and that this enlarged concept can become the basis of a drive so powerful that it becomes the dominant determinant of human conduct, overwhelming every other factor, including even the instinct for bodily survival. (Stapp 2007, 140)
But where reason is honored, belief must be reconciled with empirical evidence. If you seek evidence for your beliefs about what you are, and how you fit into Nature, then science claims jurisdiction, or at least relevance. Physics presents itself as the basic science, and it is to physics that you are told to turn. Thus a radical shift in the physics-based conception of man from that of an isolated mechanical automaton to that of an integral participant in a non-local holistic process that gives form and meaning to the evolving universe is a seismic event of potentially momentous proportions. (Stapp 2007, 140)
The quantum concept of man, being based on objective science equally available to all, rather than arising from special personal circumstances, has the potential to undergird a universal system of basic values suitable to all people, without regard to the accidents of their origins. With the diffusion of this quantum understanding of human beings, science may fulfill itself by adding to the material benefits it has already provided a philosophical insight of perhaps even greater ultimate value. (Stapp 2007, 140)
This issue of the connection of science to values can be put into perspective by seeing it in the context of a thumb-nail sketch of history that stresses the role of science. For this purpose let human intellectual history be divided into five periods: traditional, modern, transitional, post-modern, and contemporary. (Stapp 2007, 140)
During the ‘traditional’ era our understanding of ourselves and our relationship to Nature was based on ‘ancient traditions’ handed down from generation to generation: ‘Traditions’ were the chief source of wisdom about our connection to Nature. The ‘modern’ era began in the seventeenth century with the rise of what is still called ‘modern science’. That approach was based on the ideas of Bacon, Descartes, Galileo and Newton, and it provided a new source of knowledge that came to be regarded by many thinkers as more reliable than tradition. (Stapp 2007, 140)
The basic idea of ‘modern’ science was ‘materialism’: the idea that the physical world is composed basically of tiny bits of matter whose contact interactions with adjacent bits completely control everything that is now happening, and that ever will happen. According to these laws, as they existed in the late nineteenth century, a person’s conscious thoughts and efforts can make no difference at all to what his body/brain does: whatever you do was deemed to be completely fixed by local interactions between tiny mechanical elements, with your thoughts, ideas, feelings, and efforts, being simply locally determined high-level consequences or re-expressions of the low-level mechanical process, and hence basically just elements of a reorganized way of describing the effects of the absolutely and totally controlling microscopic material causes. (Stapp 2007, 140-141)
This materialist conception of reality began to crumble at the beginning of the twentieth century with Max Planck’s discovery of the quantum of action. Planck announced to his son that he had, on that day, made a discovery as important as Newton’s. That assessment was certainly correct: the ramifications of Planck’s discovery were eventually to cause Newton’s materialist conception of physical reality to come crashing down. Planck’s discovery marks the beginning of the ‘transitional’ period. (Stapp 2007, 141)
A second important transitional development soon followed. In 1905 Einstein announced his special theory of relativity. This theory denied the validity of our intuitive idea of the instant of time ‘now’, and promulgated the thesis that even the most basic quantities of physics, such as the length of a steel rod, and the temporal order of two events, had no objective ‘true values’, but were well defined only ‘relative’ to some observer’s point of view. (Stapp 2007, 141)
Planck’s discovery led by the mid-1920s to a complete breakdown, at the fundamental level, of the classical material conception of nature. A new basic physical theory, developed principally by Werner Heisenberg, Niels Bohr, Wolfgang Pauli, and Max Born, brought ‘the observer’ explicitly into physics. The earlier idea that the physical world is composed of tiny particles (and electromagnetic and gravitational fields) was abandoned in favor of a theory of natural phenomena in which the consciousness of the human observer is ascribed an essential role. This successor to classical physical theory is called Copenhagen quantum theory. (Stapp 2007, 141)
This turning away by science itself from the tenets of the objective materialist philosophy gave impetus to, and lent support to, post-modernism. That view, which emerged during the second half of the twentieth century, promulgated, in essence, the idea that all ‘truths’ were relative to one’s point of view, and were mere artifacts of some particular social group’s struggle for power over competing groups. Thus each social movement was entitled to its own ‘truth’, which was viewed simply as a socially created pawn in the power game. (Stapp 2007, 141-142)
The connection of post-modern thought to science is that both Copenhagen quantum theory and relativity theory had retreated from the idea of observer-independent objective truth. Science in the first quarter of the twentieth century had not only eliminated materialism as a possible foundation for objective truth, but seemed to have discredited the very idea of objective truth in science. But if the community of scientists has renounced the idea of objective truth in favor of the pragmatic idea that ‘what is true for us is what works for us’, then every group becomes licensed to do the same, and the hope evaporates that science might provide objective criteria for resolving contentious social issues. (Stapp 2007, 142)
This philosophical shift has had profound social and intellectual ramifications. But the physicists who initiated this mischief were generally too interested in practical developments in their own field to get involved in these philosophical issues. Thus they failed to broadcast an important fact: already by mid-century, a further development in physics had occurred that provides an effective antidote to both the ‘materialism’ of the modern era, and the ‘relativism’ and ‘social constructionism’ of the post-modern period. In particular, John von Neumann developed, during the early thirties, a form of quantum theory that brought the physical and mental aspects of nature back together as two aspects of a rationally coherent whole. This theory was elevated, during the forties — by the work of Tomonaga and Schwinger — to a form compatible with the physical requirements of the theory of relativity. (Stapp 2007, 142)
Von Neumann’s theory, unlike the transitional ones, provides a framework for integrating into one coherent idea of reality the empirical data residing in subjective experience with the basic mathematical structure of theoretical physics. Von Neumann’s formulation of quantum theory is the starting point of all efforts by physicists to go beyond the pragmatically satisfactory but ontologically incomplete Copenhagen form of quantum theory. (Stapp 2007, 142)
Von Neumann capitalized upon the key Copenhagen move of bringing human choices into the theory of physical reality. But, whereas the Copenhagen approach excluded the bodies and brains of the human observers from the physical world that they sought to describe, von Neumann demanded logical cohesion and mathematical precision, and was willing to follow where this rational approach led. Being a mathematician, fortified by the rigor and precision of his thought, he seemed less intimidated than his physicist brethren by the sharp contrast between the nature of the world called for by the new mathematics and the nature of the world that the genius of Isaac Newton had concocted. (Stapp 2007, 142-143)
A common core feature of the orthodox (Copenhagen and von Neumann) quantum theory is the incorporation of efficacious conscious human choices into the structure of basic physical theory. How this is done, and how the conception of the human person is thereby radically altered, has been spelled out in lay terms in this book, and is something every well informed person who values the findings of science ought to know about. The conception of self is the basis of values and thence of behavior, and it controls the entire fabric of one’s life. It is irrational, from a scientific perspective, to cling today to false and inadequate nineteenth century concepts about your basic nature, while ignoring the profound impact upon these concepts of the twentieth century revolution in science. (Stapp 2007, 143)
It is curious that some physicists want to improve upon orthodox quantum theory by excluding ‘the observer’, who, by virtue of his subjective nature, must, in their opinion, be excluded from science. That stance is maintained in direct opposition to what would seem to be the most profound advance in physics in three hundred years, namely the overcoming of the most glaring failure of classical physics, its inability to accommodate us, its creators. The most salient philosophical feature of quantum theory is that the mathematics has a causal gap that, by virtue of its intrinsic form, provides a perfect place for Homo sapiens as we know and experience ourselves. (Stapp 2007, 143)
One of the most important tasks of social sciences is to explain the events, processes, and structures that take place and act in society. In a time when scientific relativism (social constructivism, postmodernism, de-constructivism etc.) is expanding, it’s important to guard against reducing science to a pure discursive level [cf. Pålsson Syll 2005]. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality actually looks like. This is after all the object of science.
— Lars Pålsson Syll. On the use and misuse of theories and models in economics (Kindle Locations 113-118). WEA. Kindle Edition.
Conclusions
How can our world of billions of thinkers ever come into general concordance on fundamental issues? How do you, yourself, form opinions on such issues? Do you simply accept the message of some ‘authority’, such as a church, a state, or a social or political group? All of these entities promote concepts about how you as an individual fit into the reality that supports your being. And each has an agenda of its own, and hence its own internal biases. But where can you find an unvarnished truth about your nature, and your place in Nature? (Stapp 2007, 145)
Science rests, in the end, on an authority that lies beyond the pettiness of human ambition. It rests, finally, on stubborn facts. The founders of quantum theory certainly had no desire to bring down the grand structure of classical physics of which they were the inheritors, beneficiaries, and torch bearers. It was stubborn facts that forced their hand, and made them reluctantly abandon the two-hundred-year-old classical ideal of a mechanical universe, and turn to what perhaps should have been seen from the start as a more reasonable endeavor: the creation of an understanding of nature that includes in a rationally coherent way the thoughts by which we know and influence the world around us. The labors of scientists endeavoring merely to understand our inanimate environment produced, from its own internal logic, a rationally coherent framework into which we ourselves fit neatly. What was falsified by twentieth-century science was not the core traditions and intuitions that have sustained societies and civilizations since the dawn of mankind, but rather an historical aberration, an impoverished world view within which philosophers of the past few centuries have tried relentlessly but fruitlessly to find ourselves. The falseness of that deviation of science must be made known, and heralded, because human beings are not likely to endure in a society ruled by a conception of themselves that denies the essence of their being. (Stapp 2007, 145)
Einstein’s principle is relativity, not relativism. The historian of science Gerald Holton reports that Einstein was unhappy with the label ‘relativity theory’ and in his correspondence referred to it as Invariantentheorie…. Consider temporal and spatial measurements. Even if temporal and spatial measurements become frame-dependent, the observers who are attached to their different clock-carrying frames, like the respective observer on the platform and the train, can communicate their results to each other. They can even predict what the other observer will measure. The transparency between the reference frames and the mutual predictability of the measurement is due [to] a mathematical relationship, called the Lorentz transformations. The Lorentz transformations state the mathematical rules, which allow an observer to translate his/her coordinates into those of a different observer.
(….) The appropriate criterion for what is fundamentally real will (…) be what is invariant across all points of view…. The invariant is the real. This is a hypothesis about physical reality: what is frame-dependent is apparently real, what is frame-independent may be fundamentally real. To claim that the invariant is the real is to make an inference from the structure of scientific theories to the structure of the natural world.
— Weinert (2004, 66, 70-71) The Scientist as Philosopher: Philosophical Consequences of Great Scientific Discoveries
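Weinert’s distinction between the frame-dependent and the invariant can be made concrete with a short numerical sketch (a minimal illustration, not drawn from the quoted text; the event coordinates and velocity are chosen arbitrarily): under a Lorentz transformation the coordinates of an event differ between observers, but the spacetime interval computed from them does not.

```python
import math

def lorentz_transform(t, x, v, c=1.0):
    """Transform event coordinates (t, x) into a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def interval(t, x, c=1.0):
    """Spacetime interval s^2 = (ct)^2 - x^2: the frame-independent quantity."""
    return (c * t) ** 2 - x ** 2

# An event at t=3, x=2 (units with c=1), viewed from a frame moving at 0.6c.
t, x = 3.0, 2.0
tp, xp = lorentz_transform(t, x, v=0.6)

# The coordinates differ between frames ("apparently real") ...
print((t, x), (tp, xp))
# ... but the interval agrees to machine precision ("fundamentally real").
print(interval(t, x), interval(tp, xp))
```

The Lorentz transformations are exactly the “mathematical rules” Weinert describes: each observer can translate the other’s coordinates, and what survives the translation unchanged is the candidate for the fundamentally real.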
Reply to Sam Harris on Free Will
Sam Harris’s book “Free Will” is an instructive example of how a spokesman dedicated to being reasonable and rational can have his arguments derailed by a reliance on prejudices and false presuppositions so deep-seated that they block seeing science-based possibilities that lie outside the confines of an outmoded world view that is now known to be incompatible with the empirical facts. (Stapp 2017, 97)
A particular logical error appears repeatedly throughout Harris’s book. Early on, he describes the deeds of two psychopaths who have committed some horrible acts. He asserts: “I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people.” (Stapp 2017, 97)
Harris asserts, here, that there is “no extra part of me” that could decide differently. But that assertion, which he calls an admission, begs the question. What evidence rationally justifies that claim? Clearly it is not empirical evidence. It is, rather, a prejudicial and anti-scientific commitment to the precepts of a known-to-be-false conception of the world called classical mechanics. That older scientific understanding of reality was found during the first decades of the twentieth century to be incompatible with empirical findings, and was replaced during the 1920s and early 1930s by an adequate and successful revised understanding called quantum mechanics. This newer theory, in the rationally coherent and mathematically rigorous formulation offered by John von Neumann, features a separation of the world process into (1) a physically described part composed of atoms and closely connected physical fields; (2) some psychologically described parts lying outside the atom-based part, and identified as our thinking egos; and (3) some psycho-physical actions attributed to nature. Within this empirically adequate conception of reality there is an extra (non-atom-based) part of a person (his thinking ego) that can resist (successfully, if willed with sufficient intensity) the impulse to victimize other people. Harris’s example thus illustrates the fundamental errors that can be caused by identifying honored science with nineteenth-century classical mechanics. (Stapp 2017, 97)
Harris goes on to defend “compatibilism”, the view that claims both that every physical event is determined by what came before in the physical world and also that we possess “free will”. Harris says that “Today the only philosophically respectable way to endorse free will is to be a compatibilist—because we know that determinism, in every sense relevant to human behavior, is true”. (Stapp 2017, 97-98)
But what Harris claims that “We know” to be true is, according to quantum mechanics, not known to be true. (Stapp 2017, 98)
The final clause “in every sense relevant to human behavior” is presumably meant to discount the relevance of quantum mechanical indeterminism, by asserting that quantum indeterminism is not relevant to human behavior—presumably because it washes out at the level of macroscopic brain dynamics. But that idea of what the shift to quantum mechanics achieves is grossly deficient. Quantum indeterminism merely opens the door to a complex dynamical process that not only violates determinism (the condition that the physical past determines the future) at the level of human behavior, but allows mental intentions that are not controlled by the physical past to influence human behavior in the intended way. Thus the shift to quantum mechanics opens the door to a causal efficacy of free will that is ruled out by Harris’s effective embrace of false nineteenth-century science. (Stapp 2017, 98)
Henry Louis Mencken [1917] once wrote that “[t]here is always an easy solution to every human problem — neat, plausible and wrong.” And neoclassical economics has indeed been wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real world target systems. Assuming, for example, perfect knowledge, instant market clearing and approximating aggregate behaviour with unrealistically heroic assumptions of representative actors, just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability and problems of aggregation and coordination between different individuals and groups.
The punch line of this is that most of the problems that neoclassical economics is wrestling with issue from its attempts at formalistic modeling per se of social phenomena. Reducing microeconomics to refinements of hyper-rational Bayesian deductivist models is not a viable way forward. It will only sentence to irrelevance the most interesting real world economic problems. And as someone has so wisely remarked, murder is unfortunately the only way to reduce biology to chemistry — reducing macroeconomics to Walrasian general equilibrium microeconomics basically means committing the same crime.
— Lars Pålsson Syll. On the use and misuse of theories and models in economics.
~ ~ ~
Emergence, some say, is merely a philosophical concept, unfit for scientific consumption. Or, others predict, when subjected to empirical testing it will turn out to be nothing more than shorthand for a whole batch of discrete phenomena involving novelty, which is, if you will, nothing novel. Perhaps science can study emergences, the critics continue, but not emergence as such. (Clayton 2004: 577)
It’s too soon to tell. But certainly there is a place for those, such as the scientist to whom this volume is dedicated, who attempt to look ahead, trying to gauge what are Nature’s broadest patterns and hence where present scientific resources can best be invested. John Archibald Wheeler formulated an important motif of emergence in 1989:
Directly opposite the concept of universe as machine built on law is the vision of a world self-synthesized. On this view, the notes struck out on a piano by the observer-participants of all places and all times, bits though they are, in and by themselves constituted the great wide world of space and time and things.
(Wheeler 1999: 314)
Wheeler summarized his idea — the observer-participant who is both the result of an evolutionary process and, in some sense, the cause of his own emergence — in two ways: in the famous sketch given in Fig. 26.1 and in the maxim “It from bit.” In the attempt to summarize this chapter’s thesis with an equal economy of words I offer the corresponding maxim, “Us from it.” The maxim expresses the bold question that gives rise to the emergentist research program: Does nature, in its matter and its laws, manifest an inbuilt tendency to bring about increasing complexity? Is there an apparently inevitable process of complexification that runs from the periodic table of the elements through the explosive variations of evolutionary history to the unpredictable progress of human cultural history, and perhaps even beyond? (Clayton 2004: 577)
The emergence hypothesis requires that we proceed through at least four stages. The first stage involves rather straightforward physics — say, the emergence of classical phenomena from the quantum world (Zurek 1991, 2002) or the emergence of chemical properties through molecular structure (Earley 1981). In a second stage we move from the obvious cases of emergence in evolutionary history toward what may be the biology of the future: a new, law-based “general biology” (Kauffman 2000) that will uncover the laws of emergence underlying natural history. Stage three of the research program involves the study of “products of the brain” (perception, cognition, awareness), which the program attempts to understand not as unfathomable mysteries but as emergent phenomena that arise as natural products of the complex interactions of brain and central nervous system. Some add a fourth stage to the program, one that is more metaphysical in nature: the suggestion that the ultimate results, or the original causes, of natural emergence transcend or lie beyond Nature as a whole. Those who view stage-four theories with suspicion should note that the present chapter does not appeal to or rely on metaphysical speculations of this sort in making its case. (Clayton 2004: 578-579)
Defining terms and assumptions
The basic concept of emergence is not complicated, even if the empirical details of emergent processes are. We turn to Wheeler, again, for an opening formulation:
When you put enough elementary units together, you get something that is more than the sum of these units. A substance made of a great number of molecules, for instance, has properties such as pressure and temperature that no one molecule possesses. It may be a solid or a liquid or a gas, although no single molecule is solid or liquid or gas. (Wheeler 1998: 341)
Or, in the words of biochemist Arthur Peacocke, emergence takes place when “new forms of matter, and a hierarchy of organization of these forms … appear in the course of time” and “these new forms have new properties, behaviors, and networks of relations” that must be used to describe them (Peacocke 1993: 62).
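Wheeler’s pressure-and-temperature example lends itself to a small numerical sketch (a toy illustration only; the molecule mass and target temperature are chosen for the example): temperature is recovered from the ensemble average m⟨v²⟩/(3k_B), a quantity that no single molecule possesses.

```python
import random

# Velocities (m/s) of N identical gas molecules. No one molecule "has" a
# temperature, but the ensemble does: T = m * <v^2> / (3 * k_B).
k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26         # mass of one N2 molecule, kg (approximate)

rng = random.Random(0)
N = 100_000
# Draw each velocity component from a thermal distribution at ~300 K:
# sigma = sqrt(k_B * T / m) per component (Maxwell-Boltzmann).
sigma = (k_B * 300.0 / m) ** 0.5
v_sq = [sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)) for _ in range(N)]

# Recover the emergent, ensemble-level quantity from the microstate.
T = m * (sum(v_sq) / N) / (3 * k_B)
print(round(T))  # close to 300
```

The point of the sketch is Wheeler’s: `T` is well defined only for the collection; applied to a single entry of `v_sq` the same formula yields a number, but not a temperature.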
Clearly, no one-size-fits-all theory of emergence will be adequate to the wide variety of emergent phenomena in the world. Consider the complex empirical differences that are reflected in these diverse senses of emergence:
• temporal or spatial emergence
• emergence in the progression from simple to complex
• emergence in increasingly complex levels of information processing
• the emergence of new properties (e.g., physical, biological, psychological)
• the emergence of new causal entities (atoms, molecules, cells, central nervous system)
• the emergence of new organizing principles or degrees of inner organization (feedback loops, autocatalysis, “autopoiesis”)
• emergence in the development of “subjectivity” (if one can draw a ladder from perception, through awareness, self-awareness, and self-consciousness, to rational intuition).
Despite the diversity, certain parameters do constrain the scientific study of emergence:
Emergence studies will be scientific only if emergence can be explicated in terms that the relevant sciences can study, check, and incorporate into actual theories.
Explanations concerning such phenomena must thus be given in terms of the structures and functions of stuff in the world. As Christopher Southgate writes, “An emergent property is one describing a higher level of organization of matter, where the description is not epistemologically reducible to lower-level concepts” (Southgate et al. 1999: 158).
It also follows that all forms of dualism are disfavored. For example, only those research programs count as emergentist which refuse to accept an absolute break between neurophysiological properties and mental properties. “Substance dualisms,” such as the Cartesian delineation of reality into “matter” and “mind,” are generally avoided. Instead, research programs in emergence tend to combine sustained research into (in this case) the connections between brain and “mind,” on the one hand, with the expectation that emergent mental phenomena will not be fully explainable in terms of underlying causes on the other.
By definition, emergence transcends any single scientific discipline. At a recent international consultation on emergence theory, each scientist was asked to define emergence, and each offered a definition of the term in his or her own specific field of inquiry: physicists made emergence a product of time-invariant natural laws; biologists presented emergence as a consequence of natural history; neuroscientists spoke primarily of “things that emerge from brains”; and engineers construed emergence in terms of new things that we can build or create. Each of these definitions contributes to, but none can be the sole source for, a genuinely comprehensive theory of emergence. (Clayton 2004: 579-580)
Physics to chemistry
(….) Things emerge in the development of complex physical systems that are understood by observation and cannot be derived from first principles, even given a complete knowledge of the antecedent states. One would not know about conductivity, for example, from a study of individual electrons alone; conductivity is a property that emerges only in complex solid state systems with huge numbers of electrons…. Such examples are convincing: physicists are familiar with a myriad of cases in which physical wholes cannot be predicted based on knowledge of their parts. Intuitions differ, though, on the significance of this unpredictability. (Clayton 2004: 580)
(….) [Such examples are] unpredictable even in principle — if the system-as-a-whole is really more than the sum of its parts.
Simulated Evolutionary Systems
Computer simulations study the processes whereby very simple rules give rise to complex emergent properties. John Conway’s program “Life,” which simulates cellular automata, is already widely known…. Yet even in as simple a system as Conway’s “Life,” predicting the movement of larger structures in terms of the simple parts alone turns out to be extremely complex. Thus in the messy real world of biology, behaviors of complex systems quickly become noncomputable in practice…. As a result — and, it now appears, necessarily — scientists rely on explanations given in terms of the emerging structures and their causal powers. Dreams of a final reduction “downwards” are fundamentally impossible. Recycled lower-level descriptions cannot do justice to the actual emergent complexity of the natural world as it has evolved. (Clayton 2004: 582)
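Conway’s “Life” is easy to state in a few lines, which makes the contrast vivid: the cell-level rules below say nothing about “gliders,” yet a glider — a coherent structure that travels one cell diagonally every four generations — emerges from them. (A minimal sketch; the set-based representation and coordinates are illustrative.)

```python
from collections import Counter

def step(live):
    """One Game of Life step on a set of live (x, y) cells: a dead cell
    with exactly 3 live neighbors is born; a live cell with 2 or 3 survives."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 steps the same shape reappears shifted by (1, 1) --
# a higher-level "moving object" mentioned nowhere in the cell-level rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Describing this system in terms of gliders and their trajectories — rather than individual cell births and deaths — is precisely the “move up a level” that emergentist explanation requires.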
Ant colony behavior
Neural network models of emergent phenomena can model … the emergence of ant colony behavior from simple behavioral “rules” that are genetically programmed into individual ants. (….) Even if the behavior of an ant colony were nothing more than an aggregate of the behaviors of the individual ants, whose behavior follows very simple rules, the result would be remarkable, for the behavior of the ant colony as a whole is extremely complex and highly adaptive to complex changes in its ecosystem. The complex adaptive potentials of the ant colony as a whole are emergent features of the aggregated system. The scientific task is to correctly describe and comprehend such emergent phenomena where the whole is more than the sum of the parts. (Clayton 2004: 586-587)
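The point can be illustrated with a mean-field sketch of the classic “double bridge” experiment (a toy model, not Clayton’s own; the deposit rates and step count are invented for illustration): each ant follows one trivial rule — choose a branch in proportion to its pheromone — yet the colony as a whole converges on the shorter path.

```python
def double_bridge(steps=2000):
    """Mean-field 'double bridge' sketch: ants split between two branches
    in proportion to pheromone, and the short branch, traversed faster,
    accumulates pheromone at a higher rate per ant."""
    pher = {"short": 1.0, "long": 1.0}      # initial pheromone on each branch
    deposit = {"short": 1.0, "long": 0.5}   # quicker round trips deposit more
    for _ in range(steps):
        total = pher["short"] + pher["long"]
        for b in pher:
            # fraction of ants choosing branch b, times its deposit rate
            pher[b] += (pher[b] / total) * deposit[b]
    return pher

pher = double_bridge()
print(pher["short"] > 10 * pher["long"])  # True: a colony-level "decision"
```

No individual ant compares the two paths; the adaptive choice exists only at the level of the aggregated system, which is exactly the sense in which the whole is more than the sum of the parts.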
Biochemistry
So far we have considered models of how nature could build highly complex and adaptive behaviors from relatively simple processing rules. Now we must consider actual cases in which significant order emerges out of (relative) chaos. The big question is how nature obtains order “out of nothing,” that is, when the order is not present in the initial conditions but is produced in the course of a system’s evolution. What are some of the mechanisms that nature in fact uses? We consider four examples. (Clayton 2004: 587)
Fluid convection
The Bénard instability is often cited as an example of a system far from thermodynamic equilibrium, where a stationary state becomes unstable and then manifests spontaneous organization (Peacocke 1994: 153). In the Bénard case, the lower surface of a horizontal layer of liquid is heated. This produces a heat flux from the bottom to the top of the liquid. When the temperature gradient reaches a certain threshold value, conduction no longer suffices to convey the heat upward. At that point convection cells form at right angles to the vertical heat flow. The liquid spontaneously organizes itself into these hexagonal structures or cells. (Clayton 2004: 587-588)
Differential equations describing the heat flow exhibit a bifurcation of the solutions. This bifurcation represents the spontaneous self-organization of large numbers of molecules, formerly in random motion, into convection cells. This represents a particularly clear case of the spontaneous appearance of order in a system. According to the emergence hypothesis, many cases of emergent order in biology are analogous. (Clayton 2004: 588)
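The structure of such a bifurcation can be seen in a one-variable caricature (a hedged sketch, not the actual Bénard hydrodynamics): in the amplitude equation dx/dt = rx − x³, the disordered state x = 0 is stable below the threshold r = 0, while above it any small fluctuation is amplified until a new ordered state x = ±√r is reached.

```python
def settle(r, x0=0.01, dt=0.01, steps=200_000):
    """Integrate dx/dt = r*x - x**3 by forward Euler from a tiny
    fluctuation x0 and return the state it settles into."""
    x = x0
    for _ in range(steps):
        x += dt * (r * x - x ** 3)
    return x

# Below the threshold (r < 0) the fluctuation dies away ...
print(round(settle(-0.5), 3))   # 0.0
# ... above it (r > 0) the same fluctuation grows to the stable
# amplitude sqrt(r): order appears spontaneously.
print(round(settle(0.5), 3))    # 0.707
```

The control parameter `r` plays the role of the temperature gradient: as it crosses its critical value, the solution branch the system sits on changes qualitatively, which is all a bifurcation means.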
Autocatalysis in biochemical metabolism
Autocatalytic processes play a role in some of the most fundamental examples of emergence in the biosphere. These are relatively simple chemical processes with catalytic steps, yet they well express the thermodynamics of the far-from-equilibrium chemical processes that lie at the base of biology. (….) Such loops play an important role in metabolic functions. (Clayton 2004: 588)
Belousov-Zhabotinsky reactions
The role of emergence becomes clearer as one considers more complex examples. Consider the famous Belousov-Zhabotinsky reaction (Prigogine 1984: 152). This reaction consists of the oxidation of an organic acid (malonic acid) by potassium bromate in the presence of a catalyst such as cerium, manganese, or ferroin. From the four inputs into the chemical reactor more than 30 products and intermediaries are produced. The Belousov-Zhabotinsky reaction provides an example of a biochemical process where a high level of disorder settles into a patterned state. (Clayton 2004: 589)
(….) Put into philosophical terms, the data suggest that emergence is not merely epistemological but can also be ontological in nature. That is, it’s not just that we can’t predict emergent behaviors in these systems from a complete knowledge of the structures and energies of the parts. Instead, studying the systems suggests that structural features of the system — which are emergent features of the system as such and not properties pertaining to any of its parts — determine the overall state of the system, and hence as a result the behavior of individual particles within the system. (Clayton 2004: 589-590)
The role of emergent features of systems is increasingly evident as one moves from the very simple systems so far considered to the sorts of systems one actually encounters in the biosphere. (….) (Clayton 2004: 589-590)
The biochemistry of cell aggregation and differentiation
We move finally to processes where a random behavior or fluctuation gives rise to organized behavior between cells based on self-organization mechanisms. Consider the process of cell aggregation and differentiation in cellular slime molds (specifically, in Dictyostelium discoideum). The slime mold cycle begins when the environment becomes poor in nutrients and a population of isolated cells joins into a single mass on the order of 10^4 cells (Prigogine 1984: 156). The aggregate migrates until it finds a higher nutrient source. Differentiation then occurs: a stalk or “foot” forms out of about one-third of the cells and is soon covered with spores. The spores detach and spread, growing when they encounter suitable nutrients and eventually forming a new colony of amoebas. (Clayton 2004: 589-591) [See Levinton 2001: 166]
Note that this aggregation process is randomly initiated. Autocatalysis begins in a random cell within the colony, which then becomes the attractor center. It begins to produce cyclic adenosine monophosphate (cAMP). As cAMP is released in greater quantities into the extracellular medium, it catalyzes the same reaction in the other cells, amplifying the fluctuation and total output. Cells then move up the gradient to the source cell, and other cells in turn follow their cAMP trail toward the attractor center. (Clayton 2004: 589-591)
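A one-dimensional toy model (all numbers and rules invented for illustration) captures the logic: one randomly chosen cell becomes the attractor center, and every other cell simply climbs the local cAMP gradient, yet the colony-level outcome is complete aggregation.

```python
import random

def aggregate(n_cells=30, size=100, steps=400, seed=1):
    """Toy 1-D aggregation sketch: cells climb a cAMP gradient emitted
    from one randomly chosen attractor center."""
    rng = random.Random(seed)
    cells = [rng.randrange(size) for _ in range(n_cells)]
    center = cells[0]  # the random cell where autocatalysis happens to start

    def camp(x):
        # Concentration falls off with distance from the emitting center
        # (a crude stand-in for diffusion from the source).
        return 1.0 / (1.0 + abs(x - center))

    for _ in range(steps):
        for i, x in enumerate(cells):
            # Each cell takes one step toward higher local concentration.
            best = max((x - 1, x, x + 1), key=camp)
            cells[i] = min(max(best, 0), size - 1)
    return cells, center

cells, center = aggregate()
print(all(c == center for c in cells))  # True: the colony has aggregated
```

As in the real system, which cell becomes the center is a random fluctuation; what is lawful is the amplification of that fluctuation into an ordered, colony-wide structure.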
Biology
Ilya Prigogine did not follow the notion of “order out of chaos” up through the entire ladder of biological evolution. Stuart Kauffman (1995, 2000) and others (Gell-Mann 1994; Goodwin 2001; see also Cowan et al. 1994 and other works in the same series) have, however, recently traced the role of the same principles in living systems. Biological processes in general are the result of systems that create and maintain order (stasis) through massive energy input from their environment. In principle these types of processes could be the object of what Kauffman envisions as “a new general biology,” based on sets of still-to-be-determined laws of emergent ordering or self-complexification. Like the biosphere itself, these laws (if they indeed exist) are emergent: they depend on the underlying physical and chemical regularities but are not reducible to them. [Note, there is no place for mind as a causal source.] Kauffman (2000: 35) writes: (Clayton 2004: 592)
I wish to say that life is an expected, emergent property of complex chemical reaction networks. Under rather general conditions, as the diversity of molecular species in a reaction system increases, a phase transition is crossed beyond which the formation of collectively autocatalytic sets of molecules suddenly becomes almost inevitable. (Clayton 2004: 593)
Until a science has been developed that formulates and tests physics-like laws at the level of biology [evo-devo is the closest we have so far come], the “new general biology” remains an as-yet-unverified, though intriguing, hypothesis. Nevertheless recent biology, driven by the genetic revolution on the one side and by the growth of the environmental sciences on the other, has made explosive advances in understanding the role of self-organizing complexity in the biosphere. Four factors in particular play a central role in biological emergence. (Clayton 2004: 593)
The role of scaling
As one moves up the ladder of complexity, macrostructures and macromechanisms emerge. In the formation of new structures, scale matters — or, better put, changes in scale matter. Nature continually evolves new structures and mechanisms as life forms move up the scale of molecules (c. 1 Ångstrom) to neurons (c. 100 micrometers) to the human central nervous system (c. 1 meter). As new structures are developed, new whole-part relations emerge. (Clayton 2004: 593)
John Holland argues that different sciences in the hierarchy of emergent complexity occur at jumps of roughly three orders of magnitude in scale. Once systems have become too complex for predictions to be calculated, one is forced to “move the description ‘up a level’” (Holland 1998: 201). The “microlaws” still constrain outcomes, of course, but additional basic descriptive units must also be added. This pattern of introducing new explanatory levels iterates in a periodic fashion as one moves up the ladder of increasing complexity. To recognize the pattern is to make emergence an explicit feature of biological research. As of now, however, science possesses only a preliminary understanding of the principles underlying this periodicity. (Clayton 2004: 593)
The role of feedback loops
The role of feedback loops, examined above for biochemical processes, becomes increasingly important from the cellular level upwards. (….) (Clayton 2004: 593)
The role of local-global interactions
In complex dynamical systems the interlocked feedback loops can produce an emergent global structure. (….) In these cases, “the global property — [the] emergent behavior — feeds back to influence the behavior of the individuals … that produced it” (Lewin 1999). The global structure may have properties the local particles do not have. (Clayton 2004: 594)
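The circular dependency Lewin describes — individuals produce a global property that then steers those same individuals — can be made concrete in a minimal numerical sketch. This is an invented illustration, not drawn from Clayton: the population mean is the “global property,” and each individual's next state depends on it.

```python
def step(values, coupling=0.5):
    """One update: the global mean (produced by the parts) feeds back on each part."""
    mean = sum(values) / len(values)          # emergent global property
    return [v + coupling * (mean - v) for v in values]

values = [1.0, 5.0, 9.0]
for _ in range(30):
    values = step(values)

spread = max(values) - min(values)
print(round(spread, 6))  # the spread collapses: the global mean entrains the parts
```

Note that the mean itself is conserved by the update, so the global property is stable even as it reshapes the behavior of every individual that produces it — a toy version of the local-global interplay in the passage above.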
(….) In contrast …, Kauffman insists that an ecosystem is in one sense “merely” a complex web of interactions. Yet consider a typical ecosystem of organisms of the sort that Kauffman (2000: 191) analyzes … Depending on one’s research interests, one can focus attention either on holistic features of such systems or on the interactions of the components within them. Thus Langton’s term “global” draws attention to system-level features and properties, whereas Kauffman’s “merely” emphasizes that no mysterious outside forces need to be introduced (such as Rupert Sheldrake’s (1995) “morphic resonance”). Since the two dimensions are complementary, neither alone is scientifically adequate; the explosive complexity manifested in the evolutionary process involves the interplay of both systemic features and component interactions. (Clayton 2004: 595)
The role of nested hierarchies
A final layer of complexity is added in cases where the local-global structure forms a nested hierarchy. Such hierarchies are often represented using nested circles. Nesting is one of the basic forms of combinatorial explosion. Such forms appear extensively in natural biological systems (Wolfram 2002: 357ff.; see his index for dozens of further examples of nesting). Organisms achieve greater structural complexity, and hence increased chances of survival, as they incorporate discrete subsystems. Similarly, ecosystems complex enough to contain a number of discrete subsystems evidence greater plasticity in responding to destabilizing factors. (Clayton 2004: 595-596)
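The claim that nesting drives a combinatorial explosion can be illustrated with a small sketch (parameters are assumptions, not from the source): a uniformly nested hierarchy in which each subsystem contains a fixed number of discrete sub-subsystems, so that the count of components grows geometrically with depth.

```python
def component_count(depth, branching=3):
    """Total components in a uniformly nested hierarchy of the given depth."""
    if depth == 0:
        return 1                                  # a single undivided system
    return 1 + branching * component_count(depth - 1, branching)

for d in range(5):
    print(d, component_count(d))
# depth 0..4 gives 1, 4, 13, 40, 121: each added level of nesting
# multiplies the number of discrete subsystems available to the whole
```

Even this crude count shows why organisms and ecosystems that incorporate discrete subsystems gain structural resources so quickly: the number of parts, and hence of possible part-whole relations, explodes with each added level.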
“Strong” versus “weak” emergence
The resulting interactions between parts and wholes mirror yet exceed the features of emergence that we observed in chemical processes. To the extent that the evolution of organisms and ecosystems evidences a “combinatorial explosion” (Morowitz 2002) based on factors such as the four just summarized, the hope of explaining entire living systems in terms of simple laws appears quixotic. Instead, natural systems made of interacting complex systems form a multileveled network of interdependency (cf. Gregersen 2003), and each level contributes distinct elements to the overall explanation. (Clayton 2004: 596-597)
Systems biology, the Siamese twin of genetics, has established many of the features of life’s “complexity pyramid” (Oltvai and Barabási 2002; cf. Barabási 2002). Construing cells as networks of genes and proteins, systems biologists distinguish four distinct levels: (1) the base functional organization (genome, transcriptome, proteome, and metabolome) [see below, Morowitz on the “dogma of molecular biology.”]; (2) the metabolic pathways built up out of these components; (3) larger functional modules responsible for major cell functions; and (4) the large-scale organization that arises from the nesting of the functional modules. Oltvai and Barabási (2002) conclude that “[the] integration of different organizational levels increasingly forces us to view cellular functions as distributed among groups of heterogeneous components that all interact within large networks.” Milo et al. (2002) have recently shown that a common set of “network motifs” occurs in complex networks in fields as diverse as biochemistry, neurobiology, and ecology. As they note, “similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans.” (Clayton 2004: 598)
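What Milo et al. mean by a “network motif” can be shown in miniature. The sketch below counts one classic motif, the feed-forward loop (A→B, A→C, B→C), in a tiny directed graph; the example edges are invented for illustration, and real motif detection also compares counts against randomized networks, which this sketch omits.

```python
from itertools import permutations

# Hypothetical regulatory-style network: each pair is a directed edge.
edges = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")}
nodes = {n for e in edges for n in e}

def feed_forward_loops(edges, nodes):
    """Count ordered triples (a, b, c) forming the motif a->b, a->c, b->c."""
    count = 0
    for a, b, c in permutations(nodes, 3):
        if (a, b) in edges and (a, c) in edges and (b, c) in edges:
            count += 1
    return count

print(feed_forward_loops(edges, nodes))  # this toy graph contains two such loops
```

The same counting procedure applies unchanged whether the nodes are biomolecules, neurons, or species — which is exactly the cross-field point of the Milo et al. result quoted above.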
Such compounding of complexity — the system-level features of networks, the nodes of which are themselves complex systems — is sometimes said to represent only a quantitative increase in complexity, in which nothing “really new” emerges. This view I have elsewhere labeled “weak emergence.” [This would be a form of philosophical materialism qua philosophical reductionism.] It is the view held by (among others) John Holland (1998) and Stephen Wolfram (2002). But, as Leon Kass (1999: 62) notes in the context of evolutionary biology, “it never occurred to Darwin that certain differences of degree — produced naturally, accumulated gradually (even incrementally), and inherited in an unbroken line of descent — might lead to a difference in kind …” Here Kass nicely formulates the principle involved. As long as nature’s process of compounding complex systems leads to irreducibly complex systems with structures and causal mechanisms of their own, then the natural world evidences not just weak emergence but also a more substantive change that we might label strong emergence. Cases of strong emergence are cases where the “downward causation” emphasized by George Ellis [see p. 607, True complexity and its associated ontology.] … is most in evidence. By contrast, in the relatively rare cases where rules relate the emergent system to its subvening system (in simulated systems, via algorithms; in natural systems, via “bridge laws”), the weak emergence interpretation suffices. In the majority of cases, however, such rules are not available; in these cases, especially where we have reason to think that such lower-level rules are impossible in principle, the strong emergence interpretation is suggested. (Clayton 2004: 597-598)
Neuroscience, qualia, and consciousness
Consciousness, many feel, is the most important instance of a clearly strong form of emergence. Here if anywhere, it seems, nature has produced something irreducible — no matter how strong the biological dependence of mental qualia (i.e., subjective experiences) on antecedent states of the central nervous system may be. To know everything there is to know about the progression of brain states is not to know what it’s like to be you, to experience your joy, your pain, or your insights. No human researcher can know, as Thomas Nagel (1980) so famously argued, “what it’s like to be a bat.” (Clayton 2004: 598)
Unfortunately consciousness, however intimately familiar we may be with it on a personal level, remains an almost total mystery from a scientific perspective. Indeed, as Jerry Fodor (1992) noted, “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness.” (Clayton 2004: 598)
Given our lack of comprehension of the transition from brain states to consciousness, there is virtually no way to talk about the “C” word without sliding into the domain of philosophy. The slide begins if the emergence of consciousness is qualitatively different from other emergences; in fact, it begins even if consciousness is different from the neural correlates of consciousness. Much suggests that both differences obtain. How far can neuroscience go, even in principle, in explaining consciousness? (Clayton 2004: 598-599)
Science’s most powerful ally, I suggest, is emergence. As we’ve seen, emergence allows one to acknowledge the undeniable differences between mental properties and physical properties, while still insisting on the dependence of the entire mental life on the brain states that produce it. Consciousness, the thing to be explained, is different because it represents a new level of emergence; but brain states — understood both globally (as the state of the brain as a whole) and in terms of their microcomponents — are consciousness’s sine qua non. The emergentist framework allows science to identify the strongest possible analogies with complex systems elsewhere in the biosphere. So, for example, other complex adaptive systems also “learn,” as long as one defines learning as “a combination of exploration of the environment and improvement of performance through adaptive change” (Schuster 1994). Obviously, systems from primitive organisms to primate brains record information from their environment and use it to adjust future responses to that environment. (Clayton 2004: 599)
Even the representation of visual images in the brain, a classically mental phenomenon, can be parsed in this way. Consider Max Velmans’s (2000) schema … Here a cat-in-the-world and the neural representation of the cat are both parts of a natural system; no nonscientific mental “things” like ideas or forms are introduced. In principle, then, representation might be construed as merely a more complicated version of the feedback loop between a plant and its environment … Such is the “natural account of phenomenal consciousness” defended by (e.g.) LeDoux (1978). In a physicalist account of mind, no mental causes are introduced. Without emergence, the story of consciousness must be retold such that thoughts and intentions play no causal role. … If one limits the causal interactions to world and brains, mind must appear as a sort of thought-bubble outside the system. Yet it is counter to our empirical experience in the world, to say the least, to leave no causal role to thoughts and intentions. For example, it certainly seems that your intention to read this … is causally related to the physical fact of your presently holding this book [or browsing this web page, etc.] in your hands. (Clayton 2004: 599-600)
Arguments such as this force one to acknowledge the disanalogies between the emergence of consciousness and previous examples of emergence in complex systems. Consciousness confronts us with a “hard problem” different from those already considered (Chalmers 1995: 201):
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
The distinct features of human cognition, it seems, depend on a quantitative increase in brain complexity vis-à-vis other higher primates. Yet, if Chalmers is right (as I fear he is), this particular quantitative increase gives rise to a qualitative change. Even if the development of conscious awareness occurs gradually over the course of primate evolution, the (present) end of that process confronts the scientist with conscious, symbol-using beings clearly distinct from those who preceded them (Deacon 1997). Understanding consciousness even as an emergent phenomenon in the natural world — that is, naturalistically — requires a theory of “felt qualities,” “subjective intentions,” and “states of experience.” This calls for intention-based explanations and, it appears, a new set of sciences: the social or human sciences. By this point emergence has driven us to a level beyond the natural-science-based framework of the present book. New concepts, new testing mechanisms, and perhaps even new standards for knowledge are now required. From the perspective of physics the trail disappears into the clouds; we can follow it no further. (Clayton 2004: 600-601)
The five emergences
In the broader discussion the term “emergence” is used in multiple senses, some of which are incompatible with the scientific project. Clarity is required to avoid equivocation between five distinct levels on which the term may be applied: (Clayton 2004: 601)
• Let emergence-1 refer to occurrences of the term within the context of a specific scientific theory. Here it describes features of a specified physical or biological system of which we have some scientific understanding. Scientists who employ these theories claim that the term (in a theory-specific sense) is currently useful for describing features of the natural world. The preceding pages include various examples of theories in which this term occurs. At the level of emergence-1 alone there is no way to establish whether the term is used analogously across theories, or whether it really means something utterly distinct in each theory in which it appears. (Clayton 2004: 601-602)
• Emergence-2 draws attention to features of the world that may eventually become part of a unified scientific theory. Emergence in this sense expresses postulated connections or laws that may in the future become the basis for one or more branches of science. One thinks, for example, of the role of emergence in Stuart Kauffman’s notion of a new “general biology,” or in certain proposed theories of complexity or complexification. (Clayton 2004: 602)
• Emergence-3 is a meta-scientific term that points out a broad pattern across scientific theories. Used in this sense, the term is not drawn from a particular scientific theory; it is an observation about a significant pattern that connects a range of scientific theories. In the preceding pages I have often employed the term in this fashion. My purpose has been to draw attention to common features of the physical systems under discussion, as in (e.g.) the phenomena of autocatalysis, complexity, and self-organization. Each is scientifically understood, and each shares significant common features. Emergence draws attention to these features, whether or not the individual theories actually use the same label for the phenomena they describe. (Clayton 2004: 602)
Emergence-3 thus serves a heuristic function. It assists in the recognition of common features between theories. Recognizing such patterns can help to extend existing theories, to formulate insightful new hypotheses, or to launch new interdisciplinary research programmes.[4] (Clayton 2004: 602)
• Emergence-4 expresses a feature in the movement between scientific disciplines, including some of the most controversial transition points. Current scientific work is being done, for example, to understand how chemical structures are formed, to reconstruct the biochemical dynamics underlying the origins of life, and to conceive how complicated neural processes produce cognitive phenomena such as memory, language, rationality, and creativity. Each involves efforts to understand diverse phenomena involving levels of self-organization within the natural world. Emergence-4 attempts to express what might be shared in common by these (and other) transition points. (Clayton 2004: 602)
Here, however, a clear limitation arises. A scientific theory that explains how chemical structures are formed is perhaps unlikely to explain the origins of life. Neither theory will explain how self-organizing neural nets encode memories. Thus emergence-4 stands closer to the philosophy of science than it does to actual scientific theory. Nonetheless, it is the sort of philosophy of science that should be helpful to scientists.[5] (Clayton 2004: 602)
• Emergence-5 is a metaphysical theory. It represents the view that the nature of the natural world is such that it produces continually more complex realities in a process of ongoing creativity. The present discussion does not comment on such metaphysical claims about emergence.[6] (Clayton 2004: 603)
Conclusion
(….) Since emergence is used as an integrative ordering concept across scientific fields, it remains, at least in part, a meta-scientific term. (Clayton 2004: 603)
Does the idea of distinct levels then conflict with “standard reductionist science”? No, one can believe that there are levels in Nature and corresponding levels of explanation while at the same time working to explain any given set of higher-order phenomena in terms of underlying laws and systems. In fact, isn’t the first task of science to whittle away at every apparent “break” in Nature, to make it smaller, to eliminate it if possible? Thus, for example, to study the visual perceptual system scientifically is to attempt to explain it fully in terms of the neural structures and electrochemical processes that produce it. The degree to which downward explanation is possible will be determined by long-term empirical research. At present we can only wager on the one outcome or the other based on the evidence before us. (Clayton 2004: 603)
Notes:
[2] Gordon (2000) disputes this claim: “One lesson from ants is that to understand a system like theirs, it is not sufficient to take the system apart. The behavior of each unit is not encapsulated inside that unit but comes from its connections with the rest of the system.” I likewise break strongly with the aggregate model of emergence.
[3] Generally this seems to be a question that makes physicists uncomfortable (“Why, that’s impossible, of course!”), whereas biologists tend to recognize in it one of the core mysteries in the evolution of living systems.
[4] For this reason, emergence-3 stands closer to the philosophy of science than do the previous two senses. Yet it is a kind of philosophy of science that stands rather close to actual science and that seeks to be helpful to it. [The goal of all true “philosophy of science” is to seek critical clarification of ideas, concepts, and theoretical formulations; hence to be “helpful” to science and the quest for human knowledge.] By way of analogy one thinks of the work of philosophers of quantum physics such as Jeremy Butterfield or James Cushing, whose work can be and has actually been helpful to bench physicists. One thinks as well of the analogous work of certain philosophers in astrophysics (John Barrow) or in evolutionary biology (David Hull, Michael Ruse).
[5] This as opposed, for example, to the kind of philosophy of science currently popular in English departments and in journals like Critical Inquiry — the kind of philosophy of science that asserts that science is a text that needs to be deconstructed, or that science and literature are equally subjective, or that the worldview of Native Americans should be taught in science classes.
— Clayton, Philip D. Emergence: us from it. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul W. Davies, and Charles L. Harper, Jr., ed.). Cambridge: Cambridge University Press; 2004; pp. 577-606.