Category Archives: Philosophy of Science

A Universal Science of Man?

The medieval Roman Catholic priesthood conducted its religious preaching and other discussions in Latin, a language no more understandable to ordinary people then than are the mathematical and statistical formulations of economists today. Latin served as a universal language that had the great practical advantage of allowing easy communication within a priestly class transcending national boundaries across Europe. Yet that was not the full story. The use of Latin also separated the priesthood from the ordinary people, one of a number of devices through which the Roman Catholic Church maintained such a separation in the medieval era. It all served to convey an aura of majesty and religious authority—as does the Supreme Court in the United States, still sitting in priestly robes. In employing an arcane language of mathematics and statistics, Samuelson and fellow economists today seek a similar authority in society.

Economics as Religion: From Samuelson to Chicago and Beyond by Robert H. Nelson

This is a book about economics. But it is also a book about human limitations and the difficulty of gaining true insight into the world around us. There is, in truth, no way of separating these two things from one another. To try to discuss economics without understanding the difficulty of applying it to the real world is to consign oneself to dealing with pure makings of our own imaginations. Much of economics at the time of writing is of this sort, although it is unclear whether such modes of thought should be called ‘economics’ and whether future generations will see them as such. There is every chance that the backward-looking eye of posterity will see much of what today’s economics departments produce in the same way as we now see phrenology: a highly technical, but ultimately ridiculous pseudoscience constructed rather unconsciously to serve the political needs of the era. In the era when men claiming to be scientists felt the skull for bumps and used this to determine a man’s character and his disposition, the political discourse of the day needed a justification for the racial superiority of the white man; today our present political discourse needs a Panglossian doctrine that promotes general ignorance, a technocratic language that can be deployed to cover up certain political aspects of governance and tells us that so long as we trust in those in charge everything will work itself out in the long-run. (Pilkington 2016, 1-2)

But the personal motivation of the individual economist today is not primarily political—although it may well be secondarily political, whether that politics turns right or left—the primary motivation of the individual economist today is in search of answers to questions that they can barely formulate. These men and women, perhaps more than any other, are chasing a shadow that has been taunting mankind since the early days of the Enlightenment. This is the shadow of the mathesis universalis, the Universal Science expressed in the abstract language of mathematics. They want to capture Man’s essence and understand what he will do today, tomorrow and the day after that. To some of us more humble human beings that fell once upon a time onto this strange path, this may seem altogether too much to ask of our capacities for knowledge…. Is it a noble cause, this Universal Science of Man? Some might say that if it were not so fanciful, it might be. Others might say that it has roots in extreme totalitarian thinking and were it ever taken truly seriously, it would lead to a tyranny with those who espouse it conveniently at the helm. These are moral and political questions that will not be explored in too much detail in the present book. (Pilkington 2016, 2)

What we seek to do here is more humble again. There is a sense today, nearly six years after an economic catastrophe that few still understand and only a few saw coming, that there is something rotten in economics. Something stinks and people are less inclined than ever to trust the funny little man standing next to the blackboard with his equations and his seemingly otherworldly answers to every social and economic problem that one can imagine. This is a healthy feeling and we as a society should promote and embrace it. A similar movement began over half a millennium ago questioning the men of mystery who dictated how people should live their lives from ivory towers; it was called the Reformation and it changed the world…. We are not so much interested in the practices of the economists themselves, as to whether they engage in simony, in nepotism and—could it ever be thought?—the sale of indulgences to those countries that had or were in the process of committing grave sins. Rather we are interested in how we have gotten to where we are and how we can fix it. (Pilkington 2016, 2-3)

The roots of the problems with contemporary economics run very deep indeed. In order to comprehend them, we must run the gamut from political motivation to questions of philosophy and methodology to the foundations of the underlying structure itself. When these roots have been exposed, we can then begin the process of digging them up so we can plant a new tree. In doing this, we do not hope to provide all the answers but merely a firm grounding, a shrub that can, given time, grow into something far more robust. (Pilkington 2016, 3)

Down with Mathematics?

(….) Economics needs more people who distrust mathematics when applying thought to the social and economic world, not less. Indeed, … the major problems with economics today arose out of the mathematization of the discipline, especially as it proceeded after the Second World War. Mathematics became to economics what Latin was to the stagnant priest-caste that Luther and other reformers attacked during the Reformation: a means not to clarify, but to obscure through intellectual intimidation. It ensured that the common man could not read the Bible and had to consult the priest and, perhaps, pay him alms. (Pilkington 2016, 3)

(….) [M]athematics can, in certain very limited circumstances, be an opportune way of focusing the debate. It can give us a rather clear and precise conception of what we are talking about. Some aspects—by no means all aspects—of macroeconomics are quantifiable. Investments, profits, the interest rate—we can look the statistics for these things up and use this information to promote economic understanding. That these are quantifiable also means that, to a limited extent, we can conceive of them in mathematical form. It cannot be stressed enough, however, the limited extent to which this is the case. There are always … non-quantifiable elements that play absolutely key roles in how the economy works. (Pilkington 2016, 3-4)

(….) The mathematisation of the discipline was perhaps the crucial turning point when economics began to become something entirely other to the study of the actual economy. It started in the late nineteenth century, but at the time many of those who pioneered the approach became ever more distrustful of doing so. They began to think that it would only lead to obscurity of argument and an inability to communicate properly either with other people or with the real world. Formulae would become synonymous with truth and the interrelation between ideas would become foggy and unclear. A false sense of clarity in the form of pristine equations would be substituted for clarity of thought. Alfred Marshall, a pioneer of mathematics in economics who nevertheless always hid it in footnotes, wrote of his distress in his later years in a letter to his friend. (Pilkington 2016, 4)

[I had] a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics: and I went more and more on the rules—(1) Use mathematics as a shorthand language, rather than an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in (4), burn (3). This last I did often. (Pigou ed. 1966 [1906], pp. 427-428)

The controversy around mathematics appears to have broken out in full force surrounding the issue of econometric estimation in the late 1930s and early 1940s. Econometric estimation … is the practice of putting economic theories into mathematical form and then using them to make predictions based on available statistics…. [I]t is a desperately silly practice. Those who championed the econometric and mathematical approach were men whose names are not known today by anyone who is not deeply interested in the field. They were men like Jan Tinbergen, Oskar Lange, Jacob Marschak and Ragnar Frisch (Louçã 2007). Most of these men were social engineers of one form or another; all of them left-wing and some of them communist. The mood of the time, one reflected in the tendency to try to model the economy itself, was that society and the economy should be planned by men in lab coats. By this they often meant not simply broad government intervention but something more like micro-management of the institutions that people inhabit day-to-day from the top down. Despite the fact that many mathematical economic models today seem outwardly to be concerned with ‘free markets’, they all share this streak, especially in how they conceive that people (should?) act. (Pilkington 2016, 4-5)

Most of the economists at the time were vehemently opposed to this. This was not a particularly left-wing or right-wing issue. On the left, John Maynard Keynes was horrified by what he was seeing develop, while, on the right, Friedrich von Hayek was warning that this was not the way forward. But it was probably Keynes who was the most coherent belligerent of the new approach. This is because before he began to write books on economics, Keynes had worked on the philosophy of probability theory, and probability theory was becoming a key component of the mathematical approach (Keynes 1921). Keynes’ extensive investigations into probability theory allowed him to perceive to what extent mathematical formalism could be applied for understanding society and the economy. He found that it was extremely limited in its ability to illuminate social problems. Keynes was not against statistics or anything like that—he was an early champion and expert—but he was very, very cautious about people who claimed that just because economics produces statistics these can be used in the same way as numerical observations from experiments were used in the hard sciences. He was also keenly aware that certain tendencies towards mathematisation lead to a fogging of the mind. In a more diplomatic letter to one of the new mathematical economists (Keynes, as we shall see … could be scathing about these new approaches), he wrote: (Pilkington 2016, 5-6)

Mathematical economics is such risky stuff as compared with nonmathematical economics, because one is deprived of one’s intuition on the one hand, yet there are all kinds of unexpressed unavowed assumptions on the other. Thus I never put much trust in it unless it falls in with my own intuitions; and I am therefore grateful for an author who makes it easier for me to apply this check without too much hard work. (Keynes cited in Louçã 2007, p. 186)

(….) Mathematics, like the high Latin of Luther’s time, is a language. It is a language that facilitates greater precision in some instances and greater obscurity in others. For most issues economic, it promotes obscurity. When a language is used to obscure, it is used as a weapon by those who speak it to repress the voices of those who do not. A good deal of the history of the relationship between mathematics and the other social sciences in the latter half of the twentieth century can be read under this light. If there is anything that this book seeks to do, it is to help people realise that this is not what economics need be or should be. Frankly, we need more of those who speak the languages of the humanities—of philosophy, sociology and psychology—than we do people who speak the language of the engineers but lack the pragmatic spirit of the engineer who can see clearly that his method cannot be deployed to understand those around him. (Pilkington 2016, 6)

Natural selection of algorithms?

If we suppose that the action of the human brain, conscious or otherwise, is merely the acting out of some very complicated algorithm, then we must ask how such an extraordinarily effective algorithm actually came about. The standard answer, of course, would be ‘natural selection’. As creatures with brains evolved, those with more effective algorithms would have a better tendency to survive and therefore, on the whole, had more progeny. These progeny also tended to carry more effective algorithms than their cousins, since they inherited the ingredients of these better algorithms from their parents; so gradually the algorithms improved—not necessarily steadily, since there could have been considerable fits and starts in their evolution—until they reached the remarkable status that we (would apparently) find in the human brain. (Compare Dawkins 1986). (Penrose 1990: 414)

Even according to my own viewpoint, there would have to be some truth in this picture, since I envisage that much of the brain’s action is indeed algorithmic, and as the reader will have inferred from the above discussion I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have. (Penrose 1990: 414)

Imagine an ordinary computer program. How would it have come into being? Clearly not (directly) by natural selection! Some human computer programmer would have conceived of it and would have ascertained that it correctly carries out the actions that it is supposed to. (Actually, most complicated computer programs contain errors—usually minor, but often subtle ones that do not come to light except under unusual circumstances. The presence of such errors does not substantially affect my argument.) Sometimes a computer program might itself have been ‘written’ by another, say a ‘master’ computer program, but then the master program itself would have been the product of human ingenuity and insight; or the program itself might well be pieced together from ingredients some of which were the products of other computer programs. But in all cases the validity and the very conception of the program would have ultimately been the responsibility of (at least) one human consciousness. (Penrose 1990: 414)

One can imagine, of course, that this need not have been the case, and that, given enough time, the computer programs might somehow have evolved spontaneously by some process of natural selection. If one believes that the actions of the computer programmers’ consciousness are themselves simply algorithms, then one must, in effect, believe algorithms have evolved in just this way. However, what worries me about this is that the decision as to the validity of an algorithm is not itself an algorithmic process! … (The question of whether or not a Turing machine will actually stop is not something that can be decided algorithmically.) In order to decide whether or not an algorithm will actually work, one needs insights, not just another algorithm. (Penrose 1990: 414-415)
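Penrose's parenthetical remark refers to the undecidability of the halting problem. A minimal sketch of the standard diagonal argument, in Python-flavoured terms (the names `halts` and `diagonal` are illustrative assumptions, not Penrose's):

```python
# Sketch of the halting-problem diagonal argument (illustrative only).
# Suppose, for contradiction, that a total function halts(prog, arg)
# existed that returns True iff running prog(arg) eventually stops.

def halts(prog, arg):
    """Hypothetical universal halting decider -- cannot actually exist."""
    raise NotImplementedError("no algorithm can decide this for all inputs")

def diagonal(prog):
    # If prog would halt on its own source, loop forever; otherwise stop.
    if halts(prog, prog):
        while True:
            pass
    return "halted"

# Feeding diagonal to itself yields a contradiction:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt. Hence no such halts exists,
# which is Penrose's point that the termination (and so the validity) of an
# algorithm cannot itself be decided by an algorithm.
```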

Nevertheless, one still might imagine some kind of natural selection process being effective for producing approximately valid algorithms. Personally, I find this very difficult to believe, however. Any selection process of this kind could act only on the output of the algorithms and not directly on the ideas underlying the actions of the algorithms. This is not simply extremely inefficient; I believe that it would be totally unworkable. In the first place, it is not easy to ascertain what an algorithm actually is, simply by examining its output. (It would be an easy matter to construct two quite different simple Turing machine actions for which the output tapes did not differ until, say, the 2^65536th place — and this difference could never be spotted in the entire history of the universe!) Moreover, the slightest ‘mutation’ of an algorithm (say a slight change in a Turing machine specification, or in its input tape) would tend to render it totally useless, and it is hard to see how actual improvements in algorithms could ever arise in this random way. (Even deliberate improvements are difficult without ‘meanings’ being available. This inadequately documented and complicated computer program needs to be altered or corrected; and the original programmer has departed or perhaps died. Rather than try to disentangle all the various meanings and intentions that the program implicitly depended upon, it is probably easier just to scrap it and start all over again!) (Penrose 1990: 415)
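Penrose's point that an algorithm cannot be identified from its output alone can be mimicked with a toy example; the threshold below simply borrows his illustrative bound of 2^65536 (a sketch, not from Penrose's text):

```python
# Two different "algorithms" whose outputs agree far beyond any observable
# range: no feasible inspection of output can tell them apart.
THRESHOLD = 2 ** 65536  # Penrose's illustrative bound

def algorithm_a(n: int) -> int:
    return 0  # constant zero everywhere

def algorithm_b(n: int) -> int:
    return 0 if n < THRESHOLD else 1  # differs only astronomically far out

# Any selection process acting on outputs sees identical behaviour:
assert all(algorithm_a(n) == algorithm_b(n) for n in range(10_000))
```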

Perhaps some much more ‘robust’ way of specifying algorithms could be devised, which would not be subject to the above criticisms. In a way, this is what I am saying myself. The ‘robust’ specifications are the ideas that underlie the algorithms. But ideas are things that, as far as we know, need conscious minds for their manifestation. We are back with the problem of what consciousness actually is, and what it can actually do that unconscious objects are incapable of — and how on earth natural selection has been clever enough to evolve that most remarkable of qualities. (Penrose 1990: 415)

(….) To my way of thinking, there is still something mysterious about evolution, with its apparent ‘groping’ towards some future purpose. Things at least seem to organize themselves somewhat better than they ‘ought’ to, just on the basis of blind-chance evolution and natural selection…. There seems to be something about the way that the laws of physics work, which allows natural selection to be a much more effective process than it would be with just arbitrary laws. The resulting apparently ‘intelligent groping’ is an interesting issue. (Penrose 1990: 416)

The non-algorithmic nature of mathematical insight

… [A] good part of the reason for believing that consciousness is able to influence truth-judgements in a non-algorithmic way stems from consideration of Gödel’s theorem. If we can see that the role of consciousness is non-algorithmic when forming mathematical judgements, where calculation and rigorous proof constitute such an important factor, then surely we may be persuaded that such a non-algorithmic ingredient could be crucial also for the role of consciousness in more general (non-mathematical) circumstances. (Penrose 1990: 416)

… Gödel’s theorem and its relation to computability … [has] shown that whatever (sufficiently extensive) algorithm a mathematician might use to establish mathematical truth — or, what amounts to the same thing, whatever formal system he might adopt as providing his criterion of truth — there will always be mathematical propositions, such as the explicit Gödel proposition P(K) of the system …, that his algorithm cannot provide an answer for. If the workings of the mathematician’s mind are entirely algorithmic, then the algorithm (or formal system) that he actually uses to form his judgements is not capable of dealing with the proposition P(K) constructed from his personal algorithm. Nevertheless, we can (in principle) see that P(K) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all! (Penrose 1990: 416-417)
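For reference, the result Penrose is invoking can be stated schematically in its standard textbook form (this is the general first incompleteness theorem, not Penrose's specific P(K) construction):

```latex
% Schematic statement of the first incompleteness theorem:
% if F is a consistent, effectively axiomatized formal system
% containing elementary arithmetic, then there is a sentence G_F with
\[
  F \nvdash G_F \qquad \text{and} \qquad F \nvdash \lnot G_F ,
\]
% where G_F in effect asserts "G_F is not provable in F", and G_F is
% true in the standard model of arithmetic even though F cannot prove it.
```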

(….) The message should be clear. Mathematical truth is not something that we ascertain merely by use of an algorithm. I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must ‘see’ the truth of a mathematical argument to be convinced of its validity. This ‘seeing’ is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only ‘see’ it, but by so doing we reveal the very non-algorithmic nature of the ‘seeing’ process itself. (Penrose 1990: 418)

A Pragmatic View of Truth

[William] James argued at length for a certain conception of what it means for an idea to be true. This conception was, in brief, that an idea is true if it works. (Stapp 2009, 60)

James’s proposal was at first scorned and ridiculed by most philosophers, as might be expected. For most people can plainly see a big difference between whether an idea is true and whether it works. Yet James stoutly defended his idea, claiming that he was misunderstood by his critics.

It is worthwhile to try and see things from James’s point of view.

James accepts, as a matter of course, that the truth of an idea means its agreement with reality. The questions are: What is the “reality” with which a true idea agrees? And what is the relationship “agreement with reality” by virtue of which that idea becomes true?

All human ideas lie, by definition, in the realm of experience. Reality, on the other hand, is usually considered to have parts lying outside this realm. The question thus arises: How can an idea lying inside the realm of experience agree with something that lies outside? How does one conceive of a relationship between an idea, on the one hand, and something of such a fundamentally different sort? What is the structural form of that connection between an idea and a transexperiential reality that goes by the name of “agreement”? How can such a relationship be comprehended by thoughts forever confined to the realm of experience?

So if we want to know what it means for an idea to agree with a reality we must first accept that this reality lies in the realm of experience.

This viewpoint is not in accord with the usual idea of truth. Certain of our ideas are ideas about what lies outside the realm of experience. For example, I may have the idea that the world is made up of tiny objects called particles. According to the usual notion of truth this idea is true or false according to whether or not the world really is made up of such particles. The truth of the idea depends on whether it agrees with something that lies outside the realm of experience. (Stapp 2009, 61)

Now the notion of “agreement” seems to suggest some sort of similarity or congruence of the things that agree. But things that are similar or congruent are generally things of the same kind. Two triangles can be similar or congruent because they are the same kind of thing: the relationships that inhere in one can be mapped in a direct and simple way into the relationships that inhere in the other.

But ideas and external realities are presumably very different kinds of things. Our ideas are intimately associated with certain complex, macroscopic, biological entities—our brains—and the structural forms that can inhere in our ideas would naturally be expected to depend on the structural forms of our brains. External realities, on the other hand, could be structurally very different from human ideas. Hence there is no a priori reason to expect that the relationships that constitute or characterize the essence of external reality can be mapped in any simple or direct fashion into the world of human ideas. Yet if no such mapping exists then the whole idea of “agreement” between ideas and external realities becomes obscure.

The only evidence we have on the question of whether human ideas can be brought into exact correspondence with the essences of the external realities is the success of our ideas in bringing order to our physical experience. Yet success of ideas in this sphere does not ensure the exact correspondence of our ideas to external reality.

On the other hand, the question of whether ideas “agree” with external essences is of no practical importance. What is important is precisely the success of the ideas—if the ideas are successful in bringing order to our experience, then they are useful even if they do not “agree”, in some absolute sense, with the external essences. Moreover, if they are successful in bringing order into our experience, then they do “agree” at least with the aspects of our experience that they successfully order. Furthermore, it is only this agreement with aspects of our experience that can ever really be comprehended by man. That which is not an idea is intrinsically incomprehensible, and so are its relationships to other things. This leads to the pragmatic [critical realist?] viewpoint that ideas must be judged by their success and utility in the world of ideas and experience, rather than on the basis of some intrinsically incomprehensible “agreement” with nonideas.

The significance of this viewpoint for science is its negation of the idea that the aim of science is to construct a mental or mathematical image of the world itself. According to the pragmatic view, the proper goal of science is to augment and order our experience. A scientific theory should be judged on how well it serves to extend the range of our experience and reduce it to order. It need not provide a mental or mathematical image of the world itself, for the structural form of the world itself may be such that it cannot be placed in simple correspondence with the types of structures that our mental processes can form. (Stapp 2009, 62)

James was accused of subjectivism—of denying the existence of objective reality. In defending himself against this charge, which he termed slanderous, he introduced an interesting ontology consisting of three things: (1) private concepts, (2) sense objects, (3) hypersensible realities. The private concepts are subjective experiences. The sense objects are public sense realities, i.e., sense realities that are independent of the individual. The hypersensible realities are realities that exist independently of all human thinkers.

Of hypersensible realities James can talk only obliquely, since he recognizes both that our knowledge of such things is forever uncertain and that we can moreover never even think of such things without replacing them by mental substitutes that lack the defining characteristics of that which they replace, namely the property of existing independently of all human thinkers.

James’s sense objects are curious things. They are sense realities and hence belong to the realm of experience. Yet they are public: they are independent of the individual. They are, in short, objective experiences. The usual idea about experiences is that they are personal or subjective, not public or objective.

This idea of experienced sense objects as public or objective realities runs through James’s writings. The experience “tiger” can appear in the mental histories of many different individuals. “That desk” is something that I can grasp and shake, and you also can grasp and shake. About this desk James says:

But you and I are commutable here; we can exchange places; and as you go bail for my desk, so I can bail yours. This notion of a reality independent of either of us, taken from ordinary experiences, lies at the base of the pragmatic definition of truth.

These words should, I think, be linked with Bohr’s words about classical concepts as the basis of communication between scientists. In both cases the focus is on the concretely experienced sense realities—such as the shaking of the desk—as the foundation of social reality. From this point of view the objective world is not built basically out of such airy abstractions as electrons and protons and “space”. It is founded on the concrete sense realities of social experience, such as a block of concrete held in the hand, a sword forged by a blacksmith, a Geiger counter prepared according to specifications by laboratory technicians and placed in a specified position by experimental physicists. (Stapp 2009, 62-63)

Quantum Mechanics and Human Values

We do have minds, we are conscious, and we can reflect upon our private experiences because we have them. Unlike phlogiston … these phenomena exist and are the most common in human experience.

Daniel Robinson, cited in Edward Fullbrook’s (2016, 33) Narrative Fixation in Economics

Valuations are always with us. Disinterested research there has never been and can never be. Prior to answers there must be questions. There can be no view except from a viewpoint. In the questions raised and the viewpoint chosen, valuations are implied. Our valuations determine our approaches to a problem, the definition of our concepts, the choice of models, the selection of observations, the presentation of our conclusions—in fact the whole pursuit of a study from beginning to end.

— Gunnar Myrdal (1978, 778-779), cited in Söderbaum (2018, 8)

Philosophers have tried doggedly for three centuries to understand the role of mind in the workings of a brain conceived to function according to principles of classical physics. We now know no such brain exists: no brain, body, or anything else in the real world is composed of those tiny bits of matter that Newton imagined the universe to be made of. Hence it is hardly surprising that those philosophical endeavors were beset by enormous difficulties, which led to such positions as that of the ‘eliminative materialists’, who hold that our conscious thoughts must be eliminated from our scientific understanding of nature; or of the ‘epiphenomenalists’, who admit that human experiences do exist, but claim that they play no role in how we behave; or of the ‘identity theorists’, who claim that each conscious feeling is exactly the same thing as a motion of particles that nineteenth century science thought our brains, and everything else in the universe, were made of, but that twentieth century science has found not to exist, at least as they were formerly conceived. The tremendous difficulty in reconciling consciousness, as we know it, with the older physics is dramatized by the fact that for many years the mere mention of ‘consciousness’ was considered evidence of backwardness and bad taste in most of academia, including, incredibly, even psychology and the philosophy of mind. (Stapp 2007, 139)

What you are, and will become, depends largely upon your values. Values arise from self-image: from what you believe yourself to be. Generally one is led by training, teaching, propaganda, or other forms of indoctrination, to expand one’s conception of the self: one is encouraged to perceive oneself as an integral part of some social unit such as family, ethnic or religious group, or nation, and to enlarge one’s self-interest to include the interests of this unit. If this training is successful your enlarged conception of yourself as good parent, or good son or daughter, or good Christian, Muslim, Jew, or whatever, will cause you to give weight to the welfare of the unit as you would your own. In fact, if well conditioned you may give more weight to the interests of the group than to the well-being of your bodily self. (Stapp 2007, 139)

In the present context it is not relevant whether this human tendency to enlarge one’s self-image is a consequence of natural malleability, instinctual tendency, spiritual insight, or something else. What is important is that we human beings do in fact have the capacity to expand our image of ‘self’, and that this enlarged concept can become the basis of a drive so powerful that it becomes the dominant determinant of human conduct, overwhelming every other factor, including even the instinct for bodily survival. (Stapp 2007, 140)

But where reason is honored, belief must be reconciled with empirical evidence. If you seek evidence for your beliefs about what you are, and how you fit into Nature, then science claims jurisdiction, or at least relevance. Physics presents itself as the basic science, and it is to physics that you are told to turn. Thus a radical shift in the physics-based conception of man from that of an isolated mechanical automaton to that of an integral participant in a non-local holistic process that gives form and meaning to the evolving universe is a seismic event of potentially momentous proportions. (Stapp 2007, 140)

The quantum concept of man, being based on objective science equally available to all, rather than arising from special personal circumstances, has the potential to undergird a universal system of basic values suitable to all people, without regard to the accidents of their origins. With the diffusion of this quantum understanding of human beings, science may fulfill itself by adding to the material benefits it has already provided a philosophical insight of perhaps even greater ultimate value. (Stapp 2007, 140)

This issue of the connection of science to values can be put into perspective by seeing it in the context of a thumb-nail sketch of history that stresses the role of science. For this purpose let human intellectual history be divided into five periods: traditional, modern, transitional, post-modern, and contemporary. (Stapp 2007, 140)

During the ‘traditional’ era our understanding of ourselves and our relationship to Nature was based on ‘ancient traditions’ handed down from generation to generation: ‘Traditions’ were the chief source of wisdom about our connection to Nature. The ‘modern’ era began in the seventeenth century with the rise of what is still called ‘modern science’. That approach was based on the ideas of Bacon, Descartes, Galileo and Newton, and it provided a new source of knowledge that came to be regarded by many thinkers as more reliable than tradition. (Stapp 2007, 140)

The basic idea of ‘modern’ science was ‘materialism’: the idea that the physical world is composed basically of tiny bits of matter whose contact interactions with adjacent bits completely control everything that is now happening, and that ever will happen. According to these laws, as they existed in the late nineteenth century, a person’s conscious thoughts and efforts can make no difference at all to what his body/brain does: whatever you do was deemed to be completely fixed by local interactions between tiny mechanical elements, with your thoughts, ideas, feelings, and efforts, being simply locally determined high-level consequences or re-expressions of the low-level mechanical process, and hence basically just elements of a reorganized way of describing the effects of the absolutely and totally controlling microscopic material causes. (Stapp 2007, 140-141)

This materialist conception of reality began to crumble at the beginning of the twentieth century with Max Planck’s discovery of the quantum of action. Planck announced to his son that he had, on that day, made a discovery as important as Newton’s. That assessment was certainly correct: the ramifications of Planck’s discovery were eventually to cause Newton’s materialist conception of physical reality to come crashing down. Planck’s discovery marks the beginning of the ‘transitional’ period. (Stapp 2007, 141)

A second important transitional development soon followed. In 1905 Einstein announced his special theory of relativity. This theory denied the validity of our intuitive idea of the instant of time ‘now’, and promulgated the thesis that even the most basic quantities of physics, such as the length of a steel rod, and the temporal order of two events, had no objective ‘true values’, but were well defined only ‘relative’ to some observer’s point of view. (Stapp 2007, 141)

Planck’s discovery led by the mid-1920s to a complete breakdown, at the fundamental level, of the classical material conception of nature. A new basic physical theory, developed principally by Werner Heisenberg, Niels Bohr, Wolfgang Pauli, and Max Born, brought ‘the observer’ explicitly into physics. The earlier idea that the physical world is composed of tiny particles (and electromagnetic and gravitational fields) was abandoned in favor of a theory of natural phenomena in which the consciousness of the human observer is ascribed an essential role. This successor to classical physical theory is called Copenhagen quantum theory. (Stapp 2007, 141)

This turning away by science itself from the tenets of the objective materialist philosophy gave impetus to, and lent support to, post-modernism. That view, which emerged during the second half of the twentieth century, promulgated, in essence, the idea that all ‘truths’ were relative to one’s point of view, and were mere artifacts of some particular social group’s struggle for power over competing groups. Thus each social movement was entitled to its own ‘truth’, which was viewed simply as a socially created pawn in the power game. (Stapp 2007, 141-142)

The connection of post-modern thought to science is that both Copenhagen quantum theory and relativity theory had retreated from the idea of observer-independent objective truth. Science in the first quarter of the twentieth century had not only eliminated materialism as a possible foundation for objective truth, but seemed to have discredited the very idea of objective truth in science. But if the community of scientists has renounced the idea of objective truth in favor of the pragmatic idea that ‘what is true for us is what works for us’, then every group becomes licensed to do the same, and the hope evaporates that science might provide objective criteria for resolving contentious social issues. (Stapp 2007, 142)

This philosophical shift has had profound social and intellectual ramifications. But the physicists who initiated this mischief were generally too interested in practical developments in their own field to get involved in these philosophical issues. Thus they failed to broadcast an important fact: already by mid-century, a further development in physics had occurred that provides an effective antidote to both the ‘materialism’ of the modern era, and the ‘relativism’ and ‘social constructionism’ of the post-modern period. In particular, John von Neumann developed, during the early thirties, a form of quantum theory that brought the physical and mental aspects of nature back together as two aspects of a rationally coherent whole. This theory was elevated, during the forties — by the work of Tomonaga and Schwinger — to a form compatible with the physical requirements of the theory of relativity. (Stapp 2007, 142)

Von Neumann’s theory, unlike the transitional ones, provides a framework for integrating into one coherent idea of reality the empirical data residing in subjective experience with the basic mathematical structure of theoretical physics. Von Neumann’s formulation of quantum theory is the starting point of all efforts by physicists to go beyond the pragmatically satisfactory but ontologically incomplete Copenhagen form of quantum theory. (Stapp 2007, 142)

Von Neumann capitalized upon the key Copenhagen move of bringing human choices into the theory of physical reality. But, whereas the Copenhagen approach excluded the bodies and brains of the human observers from the physical world that they sought to describe, von Neumann demanded logical cohesion and mathematical precision, and was willing to follow where this rational approach led. Being a mathematician, fortified by the rigor and precision of his thought, he seemed less intimidated than his physicist brethren by the sharp contrast between the nature of the world called for by the new mathematics and the nature of the world that the genius of Isaac Newton had concocted. (Stapp 2007, 142-143)

A common core feature of the orthodox (Copenhagen and von Neumann) quantum theory is the incorporation of efficacious conscious human choices into the structure of basic physical theory. How this is done, and how the conception of the human person is thereby radically altered, has been spelled out in lay terms in this book, and is something every well informed person who values the findings of science ought to know about. The conception of self is the basis of values and thence of behavior, and it controls the entire fabric of one’s life. It is irrational, from a scientific perspective, to cling today to false and inadequate nineteenth century concepts about your basic nature, while ignoring the profound impact upon these concepts of the twentieth century revolution in science. (Stapp 2007, 143)

It is curious that some physicists want to improve upon orthodox quantum theory by excluding ‘the observer’, who, by virtue of his subjective nature, must, in their opinion, be excluded from science. That stance is maintained in direct opposition to what would seem to be the most profound advance in physics in three hundred years, namely the overcoming of the most glaring failure of classical physics, its inability to accommodate us, its creators. The most salient philosophical feature of quantum theory is that the mathematics has a causal gap that, by virtue of its intrinsic form, provides a perfect place for Homo sapiens as we know and experience ourselves. (Stapp 2007, 143)

One of the most important tasks of social sciences is to explain the events, processes, and structures that take place and act in society. In a time when scientific relativism (social constructivism, postmodernism, de-constructivism etc.) is expanding, it’s important to guard against reducing science to a pure discursive level [cf. Pålsson Syll 2005]. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality actually looks like. This is after all the object of science.

— Lars Pålsson Syll. On the use and misuse of theories and models in economics (Kindle Locations 113-118). WEA. Kindle Edition.

Conclusions

How can our world of billions of thinkers ever come into general concordance on fundamental issues? How do you, yourself, form opinions on such issues? Do you simply accept the message of some ‘authority’, such as a church, a state, or a social or political group? All of these entities promote concepts about how you as an individual fit into the reality that supports your being. And each has an agenda of its own, and hence its own internal biases. But where can you find an unvarnished truth about your nature, and your place in Nature? (Stapp 2007, 145)

Science rests, in the end, on an authority that lies beyond the pettiness of human ambition. It rests, finally, on stubborn facts. The founders of quantum theory certainly had no desire to bring down the grand structure of classical physics of which they were the inheritors, beneficiaries, and torch bearers. It was stubborn facts that forced their hand, and made them reluctantly abandon the two-hundred-year-old classical ideal of a mechanical universe, and turn to what perhaps should have been seen from the start as a more reasonable endeavor: the creation of an understanding of nature that includes in a rationally coherent way the thoughts by which we know and influence the world around us. The labors of scientists endeavoring merely to understand our inanimate environment produced, from its own internal logic, a rationally coherent framework into which we ourselves fit neatly. What was falsified by twentieth-century science was not the core traditions and intuitions that have sustained societies and civilizations since the dawn of mankind, but rather an historical aberration, an impoverished world view within which philosophers of the past few centuries have tried relentlessly but fruitlessly to find ourselves. The falseness of that deviation of science must be made known, and heralded, because human beings are not likely to endure in a society ruled by a conception of themselves that denies the essence of their being. (Stapp 2007, 145)

Einstein’s principle is relativity, not relativism. The historian of science Gerald Holton reports that Einstein was unhappy with the label ‘relativity theory’ and in his correspondence referred to it as Invariantentheorie…. Consider temporal and spatial measurements. Even if temporal and spatial measurements become frame-dependent, the observers who are attached to their different clock-carrying frames, like the respective observer on the platform and the train, can communicate their results to each other. They can even predict what the other observer will measure. The transparency between the reference frames and the mutual predictability of the measurement is due [to] a mathematical relationship, called the Lorentz transformations. The Lorentz transformations state the mathematical rules, which allow an observer to translate his/her coordinates into those of a different observer.

(….) The appropriate criterion for what is fundamentally real will (…) be what is invariant across all points of view…. The invariant is the real. This is a hypothesis about physical reality: what is frame-dependent is apparently real, what is frame-independent may be fundamentally real. To claim that the invariant is the real is to make an inference from the structure of scientific theories to the structure of the natural world.

Weinert (2004, 66, 70-71) The Scientist as Philosopher: Philosophical Consequences of Great Scientific Discoveries
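For concreteness, the "mathematical rules" Weinert mentions are the standard Lorentz transformations; in textbook form (standard notation, not Weinert's own), for a frame S′ moving with velocity v along the x-axis of frame S:

```latex
% Lorentz transformation between frames S and S' (relative velocity v along x):
\[
  t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right), \qquad
  x' = \gamma\,(x - v t), \qquad
  y' = y, \qquad
  z' = z,
  \qquad\text{with}\quad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
\]
% The spacetime interval s^2 = c^2 t^2 - x^2 - y^2 - z^2 takes the same value
% in both frames -- an example of the frame-independent ("real") quantity that
% Weinert contrasts with frame-dependent measurements of length and duration.
```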

Reply to Sam Harris on Free Will

Sam Harris’s book “Free Will” is an instructive example of how a spokesman dedicated to being reasonable and rational can have his arguments derailed by a reliance on prejudices and false presuppositions so deep-seated that they block seeing science-based possibilities that lie outside the confines of an outmoded world view that is now known to be incompatible with the empirical facts. (Stapp 2017, 97)

A particular logical error appears repeatedly throughout Harris’s book. Early on, he describes the deeds of two psychopaths who have committed some horrible acts. He asserts: “I have to admit that if I were to trade places with one of these men, atom for atom, I would be him: There is no extra part of me that could decide to see the world differently or to resist the impulse to victimize other people.” (Stapp 2017, 97)

Harris asserts, here, that there is “no extra part of me” that could decide differently. But that assertion, which he calls an admission, begs the question. What evidence rationally justifies that claim? Clearly it is not empirical evidence. It is, rather, a prejudicial and anti-scientific commitment to the precepts of a known-to-be-false conception of the world called classical mechanics. That older scientific understanding of reality was found during the first decades of the twentieth century to be incompatible with empirical findings, and was replaced during the 1920s, and early 1930s, by an adequate and successful revised understanding called quantum mechanics. This newer theory, in the rationally coherent and mathematically rigorous formulation offered by John von Neumann, features a separation of the world process into (1), a physically described part composed of atoms and closely connected physical fields; (2), some psychologically described parts lying outside the atom-based part, and identified as our thinking egos; and (3), some psycho-physical actions attributed to nature. Within this empirically adequate conception of reality there is an extra (non-atom-based) part of a person (his thinking ego) that can resist (successfully, if willed with sufficient intensity) the impulse to victimize other people. Harris’s example thus illustrates the fundamental errors that can be caused by identifying honored science with nineteenth century classical mechanics. (Stapp 2017, 97)

Harris goes on to defend “compatibilism”, the view that claims both that every physical event is determined by what came before in the physical world and also that we possess “free will”. Harris says that “Today the only philosophically respectable way to endorse free will is to be a compatibilist—because we know that determinism, in every sense relevant to human behavior, is true”. (Stapp 2017, 97-98)

But what Harris claims that “We know” to be true is, according to quantum mechanics, not known to be true. (Stapp 2017, 98)

The final clause “in every sense relevant to human behavior” is presumably meant to discount the relevance of quantum mechanical indeterminism, by asserting that quantum indeterminism is not relevant to human behavior—presumably because it washes out at the level of macroscopic brain dynamics. But that idea of what the shift to quantum mechanics achieves is grossly deficient. The quantum indeterminism merely opens the door to a complex dynamical process that not only violates determinism (the condition that the physical past determines the future) at the level of human behavior, but allows mental intentions that are not controlled by the physical past to influence human behavior in the intended way. Thus the shift to quantum mechanics opens the door to a causal efficacy of free will that is ruled out by Harris’s effective embrace of false nineteenth century science. (Stapp 2017, 98)

Computability and Economics

My incompleteness theorem makes it likely that mind is not mechanical, or else mind cannot understand its own mechanism. If my result is taken together with the rationalistic attitude which Hilbert had and which was not refuted by my results, then [we can infer] the sharp result that mind is not mechanical. This is so, because, if the mind were a machine, there would, contrary to this rationalistic attitude, exist number-theoretic questions undecidable for the human mind (Gödel in Wang 1996, 186-187)

(….) However, Wang reported that in 1972, in comments at a meeting to honor von Neumann, Gödel said: “The brain is a computing machine connected with a spirit ” (Wang 1996, 189). In discussion with Wang at about that time, Gödel amplified this remark:

Even if the finite brain cannot store an infinite amount of information, the spirit may be able to. The brain is a computing machine connected with a spirit. If the brain is taken to be physical and as a digital computer, from quantum mechanics there are then only a finite number of states. Only by connecting it to a spirit might it work in some other way. (Gödel in Wang 1996, 193)

Some caution is required in interpreting the remarks recorded by Wang, since the context is not always clear. Nevertheless Wang’s reports create the impression that, by the time of his note about Turing, Gödel was again tending toward a negative answer to the question, “Is the human mind replaceable by a machine?” (Copeland et al. 2013, 21)

(….) The mathematician Jack Good, formerly Turing’s colleague at Bletchley Park, Britain’s wartime code-breaking headquarters, gave a succinct statement of the Mathematical Objection in a 1948 letter to Turing:

Can you pin-point the fallacy in the following argument? “No machine can exist for which there are no problems that we can solve and it can’t. But we are machines: a contradiction.”

At the time of Good’s letter Turing was already deeply interested in the Mathematical Objection. More than eighteen months previously he had given a lecture in London, in which he expounded and criticized an argument flowing from his negative result concerning the Entscheidungsproblem and concluding that “there is a fundamental contradiction in the idea of a machine with intelligence” (1947, 393). (Copeland et al. 2013, 21)

Copeland et al. (2013, 5, 21) Computability: Turing, Gödel, Church, and Beyond. MIT Press.

If the economy is driven by one individual choice after another in response to prices, this should be capable of being modelled on a computer. In the words of an eminent economist, consumer choice can be likened to a computer ‘into whom we “feed” a sequence of market prices and from whom we obtain a corresponding sequence of “solutions” in the form of specified optimum positions’. The ranking of preferences determines the market choices of economic man. Arriving at this ranking can be modelled as a sequence of pairwise comparisons, for example, making a choice between strawberry and vanilla flavours (taking price into account), and comparing likewise all other options in sequence until the budget is exhausted. Such choices can be embodied in an algorithm (a calculation procedure) to run sequentially on a computer and provide a numerical result.  (Offer and Söderberg 2019, 263)
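A minimal sketch of the kind of procedure described above (ranking options by pairwise comparison, then spending a budget down the ranking) might look as follows; the items, prices, and utility numbers are invented for illustration and are not from Offer and Söderberg:

```python
# Toy "consumer as computer": rank options by pairwise comparison
# (taking price into account), then buy down the ranking until the
# budget is exhausted.

def prefer(a, b, utility, prices):
    """Pairwise comparison rule, taking price into account (illustrative)."""
    return utility[a] / prices[a] >= utility[b] / prices[b]

def choose(options, utility, prices, budget):
    # Selection-sort style ranking built purely from pairwise comparisons.
    ranked, remaining = [], list(options)
    while remaining:
        best = remaining[0]
        for cand in remaining[1:]:
            if prefer(cand, best, utility, prices):
                best = cand
        ranked.append(best)
        remaining.remove(best)
    # Spend the budget down the ranking.
    basket = []
    for item in ranked:
        if prices[item] <= budget:
            basket.append(item)
            budget -= prices[item]
    return basket

# Hypothetical example: strawberry vs vanilla vs chocolate.
options = ["strawberry", "vanilla", "chocolate"]
utility = {"strawberry": 8, "vanilla": 5, "chocolate": 7}
prices = {"strawberry": 3.0, "vanilla": 2.0, "chocolate": 4.0}
print(choose(options, utility, prices, budget=5.0))  # ['strawberry', 'vanilla']
```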

But there is a snag: some algorithmic problems cannot be solved by a digital computer. They either take too long to compute, are impossible to compute (that is, are ‘non-computable’), or it is unknown whether they can be computed. For example, the variable of interest may increase exponentially as the algorithm moves sequentially through time. A generic computer (known after its originator as a ‘Turing machine’, which can mimic any computer) fails to complete the algorithm and never comes to a halt. Such problems can arise in deceptively simple tasks, for example, the ‘travelling salesman problem’, which involves calculating the shortest route through several given locations, or designing efficient networks more generally. For every incremental move, the time required by the computer rises by a power: there may be a solution, but it requires an impossible length of time to compute. In a more familiar example, encryption relying on the multiplication of two unknown prime numbers can be broken, but relies on solutions taking too long to complete. (Offer and Söderberg 2019, 263-264)
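The travelling-salesman example can be made concrete: an exact brute-force solver has to examine (n-1)! route orderings, so the work grows faster than any fixed power of n. A small illustrative sketch (the city coordinates are made up):

```python
# Brute-force travelling salesman: exact, but the number of candidate tours
# grows factorially, which is why exact solutions quickly become infeasible
# even though each individual tour is easy to evaluate.
from itertools import permutations
from math import dist, factorial

def shortest_tour(points):
    start, rest = points[0], points[1:]
    best_tour, best_len = None, float("inf")
    for order in permutations(rest):           # (n-1)! orderings to check
        tour = (start, *order, start)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

cities = [(0, 0), (2, 1), (1, 4), (5, 2), (3, 3)]   # hypothetical coordinates
print(shortest_tour(cities))
print("tours to check for 20 cities:", factorial(19))  # already ~1.2e17
```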

The clockwork consumer maximizes her innate preferences in response to market prices. But there is a flaw in the design: the clockwork may not deliver a result. It may have to run forever before making a single choice. This has been demonstrated formally several times. The ordering of individual preferences has been claimed to be ‘non-computable’, and Walrasian general equilibrium may be non-computable as well. Non-computability in economics is little cited by mainstream scholars. On the face of it, it makes a mockery of the neoclassical notions of rationality and rigour, both of which imply finality. Economics however averts its gaze. In practice, since standard microeconomics has never aspired to realism, it may be a reasonable response to say that it has formalisms that work, and that they constitute ‘horses for courses’. But what cannot be claimed for such formalisms is a unique and binding authority in a theoretical, empirical, policy-normative sense, in the way that scientific consensus is binding. (Offer and Söderberg 2019, 264-265)

Computation rears its head several times in Nobel economics. In the second Nobel Lecture, Ragnar Frisch described the task of the economist as validating and executing policy preferences by feeding them into computer models of the economy, and Milton Friedman expressed a similar idea in his Nobel Lecture of 1976. Hayek (NPW, 1974) made his mark in the ‘socialist calculation debate’. Defenders of socialist planning (and of neoclassical economics) in the 1920s and 1930s argued that private ownership was not crucial: socialism could make use of markets, and that the requirements for socialist calculation were no more onerous than the ones assumed for neoclassical general equilibrium. From then onwards, the debate should really be called ‘the neoclassical calculation debate’. Joseph Stiglitz (NPW, 2001) perversely framed a devastating demolition of general equilibrium economics as a criticism of market socialism. Kenneth Arrow (NPW, 1972), an architect of general equilibrium, pointed out (against general equilibrium) that in terms of computability, every person is her own ‘socialist planner’—the task of rationally ordering even private preferences and choices (which Hayek and economics more generally takes for granted) looks too demanding. Under general equilibrium, if even a single person is in a position to set a price (as opposed to taking it as given), ‘the superiority of the market over centralized planning disappears. Each individual agent is in effect using as much information as would be required by a central planner.’ (Offer and Söderberg 2019, 265)

In response to the socialist neoclassical defence, Hayek and his supporters questioned the very possibility of rational calculation. Hayek acknowledged the interdependence of all prices. But the consumer and entrepreneur did not need to be omniscient, just to make use of local price signals and local knowledge to price their goods and choices. The problem was not the static once-and-for-all efficiency of general equilibrium, but coping with change. The prices obtained fell well short of optimality (in the Pareto general equilibrium sense). Hayek implied that this was the best that could be achieved. But how would we know? Joseph Stiglitz (NPW, 2001) does not think it is. Regulation can improve it. Hayek’s position fails as an argument against socialism: if capitalism can do without omniscience, why not a Hayekian market socialism without omniscience? A key part of Mises’s original argument against socialism in 1920 was that entrepreneurs require the motivation of profit, and that private ownership of the means of production was indispensable. But advanced economies are mixed economies: they have large public sectors, in which central banking, social insurance, and infrastructure, typically more than a third of the economy, are managed by governments or not-for-profits. They would be much less efficient to manage any other way. In Britain, for example, with its privatized railways, the biggest investment decisions are still reserved for government: the rails are publicly owned, the trains are commissioned and purchased by government, and a major high-speed line project (HS2) can only be undertaken by government. Despite Hayek, smaller public sectors are not associated with more affluent economies: the expensive Nordic Social Democratic societies demonstrate this. (Offer and Söderberg 2019, 265-266)

Herbert Simon (NPW, 1978) pointed out that individuals could not cope with the computational challenges they faced. They did the best they could with what they had, which he called ‘bounded rationality’. The problem also appears in behavioural economics, where NPWs Allais, Selten, Kahneman, Smith, and Roth have all shown that real people diverge from the norms of rational choice, and that outcomes are therefore unlikely to scale up to ‘efficient’ equilibria. In a letter to the non-computability advocate Vela Velupillai, Simon spelled out the different degrees of cognitive capacity:

There are many levels of complexity in problems, and corresponding boundaries between them. Turing computability is an outer boundary, and as you show, any theory that requires more power than that surely is irrelevant to any useful definition of human rationality. A slightly stricter boundary is posed by computational complexity, especially in its common ‘worst case’ form. We cannot expect people (and/or computers) to find exact solutions for large problems in computationally complex domains. This still leaves us far beyond what people and computers actually CAN do. The next boundary, but one for which we have few results … is computational complexity for the ‘average case’, sometimes with an ‘almost everywhere’ loophole [that is, procedures that do not apply in all cases]. That begins to bring us closer to the realities of real-world and real-time computation. Finally, we get to the empirical boundary, measured by laboratory experiments on humans and by observation, of the level of complexity that humans actually can handle, with and without their computers, and—perhaps more important—what they actually do to solve problems that lie beyond this strict boundary even though they are within some of the broader limits. The latter is an important point for economics, because we humans spend most of our lives making decisions that are far beyond any of the levels of complexity we can handle exactly; and this is where … good-enough decisions take over. (Offer and Söderberg 2019, 266-267)
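Simon's "good-enough decisions" can be illustrated with a minimal sketch of my own (not Simon's): a satisficing chooser that stops at the first option clearing an aspiration level, contrasted with an exhaustive optimizer; the options, the value function, and the aspiration level are all invented for the example.

```python
# A minimal sketch (my own, not Simon's) contrasting exhaustive optimization
# with satisficing: accept the first option that clears an aspiration level
# instead of searching the entire space for the maximum.
import random

def optimize(options, value):
    """Unbounded rationality: examine every option and return the best."""
    return max(options, key=value)

def satisfice(options, value, aspiration):
    """Bounded rationality: stop at the first 'good enough' option; if none
    clears the aspiration level, fall back to the best option seen."""
    best = None
    for option in options:
        if best is None or value(option) > value(best):
            best = option
        if value(option) >= aspiration:
            return option
    return best

if __name__ == "__main__":
    random.seed(0)
    offers = [random.gauss(100, 15) for _ in range(10_000)]
    print("optimal offer:   ", round(optimize(offers, lambda x: x), 2))
    print("satisficed offer:", round(satisfice(offers, lambda x: x, 120), 2))
```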

This problem was also acknowledged by Milton Friedman (NPW, 1976). Surprisingly for a Chicago economist, he conceded that optimizing was difficult. His solution was to proceed ‘as if’ the choice had been optimized, without specifying how (the example he gives is of the billiards player, who implicitly solves complicated problems in physics every time he makes a successful shot). Asymmetric information, at the core of bad faith economics, is partly a matter of inability to monitor even the moves of a collaborator or a counter-party. The new classical NPW economists (Lucas, Prescott, and Sargent) avoid the problem of computational complexity (and the difficulty of scaling up from heterogeneous individuals) by using a ‘representative agent’ to stand for the whole of the demand or supply side of the economy. Going back to where we started, ‘imaginary machines’, the reliance on models (that is, radically simplified mechanisms) arises from the difficulty of dealing with anything more complicated. (Offer and Söderberg 2019, 267)

All this is just another way of saying that on plausible assumptions, the market-clearing procedures at the heart of normative economics (that is, its quest for ‘efficiency’) cannot work like computers. Having failed the test of classical analysis, theory fails the test of computability as well. This suggests that actual human choices are not modelled correctly by economic theory, but are made some other way, with as much calculation as can be mustered, but also with short-cuts, intuitions, and other strategies. This is not far-fetched. Humans do things beyond the reach of computers, like carrying out an everyday conversation. Policy is not made by computers, nor by economists, but by imperfect politicians. Perhaps it is wrong to start with the individual—maybe equilibrium (such as it is) comes from the outside, from the relative stability of social conventions and institutions. This indeterminacy provides an analytical reason why understanding the economy needs to be pragmatic, pluralistic, and open to argument and evidence; an economic historian would say that we should embrace empirical complexity. Policy problems may be intractable to calculation, but most of them get resolved one way or another by the passage of time. History shows how. This may be taken as endorsing the pragmatism of Social Democracy, and of institutional and historical approaches which resemble the actual decision processes. (Offer and Söderberg 2019, 267-268)

If economics is not science, what should we make of it? Economics has to be regarded as being one voice among many, not superior to other sources of authority, but not inferior to them either. In that respect, it is like Social Democracy. It commands an array of techniques, the proverbial ‘toolkit’ which economists use to perform concrete evaluations, including many varieties of cost-benefit analysis. It has other large assets as well: a belief system that commands allegiance, passion, commitment, groupthink, and rhetoric. Its amorality attracts the powerful in business, finance, and politics. It indoctrinates millions every year in universities, and its graduates find ready work in think tanks, in government, and in business. The press is full of its advocates. As an ideology, economics may be resistant to argument and evidence, but it is not entirely immune to them. Its nominal allegiance to scientific procedure ensures that the discipline responds to empirical anomalies, albeit slowly, embracing new approaches and discarding some of those that don’t seem to work. (Offer and Söderberg 2019, 268, emphasis added)

It From Bit

Henry Louis Mencken [1917] once wrote that “[t]here is always an easy solution to every human problem — neat, plausible and wrong.” And neoclassical economics has indeed been wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real-world target systems. Assuming, for example, perfect knowledge, instant market clearing and approximating aggregate behaviour with unrealistically heroic assumptions of representative actors just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability and problems of aggregation and coordination between different individuals and groups.

The punch line of this is that most of the problems that neoclassical economics is wrestling with issue from its attempts at formalistic modeling per se of social phenomena. Reducing microeconomics to refinements of hyper-rational Bayesian deductivist models is not a viable way forward. It will only sentence to irrelevance the most interesting real-world economic problems. And as someone has so wisely remarked, murder is unfortunately the only way to reduce biology to chemistry — reducing macroeconomics to Walrasian general equilibrium microeconomics basically means committing the same crime.

Lars Pålsson Syll. On the use and misuse of theories and models in economics.

~ ~ ~

Emergence, some say, is merely a philosophical concept, unfit for scientific consumption. Or, others predict, when subjected to empirical testing it will turn out to be nothing more than shorthand for a whole batch of discrete phenomena involving novelty, which is, if you will, nothing novel. Perhaps science can study emergences, the critics continue, but not emergence as such. (Clayton 2004: 577)*

It’s too soon to tell. But certainly there is a place for those, such as the scientist to whom this volume is dedicated, who attempt to look ahead, trying to gauge what are Nature’s broadest patterns and hence where present scientific resources can best be invested. John Archibald Wheeler formulated an important motif of emergence in 1989:

Directly opposite the concept of universe as machine built on law is the vision of a world self-synthesized. On this view, the notes struck out on a piano by the observer-participants of all places and all times, bits though they are, in and by themselves constituted the great wide world of space and time and things.

(Wheeler 1999: 314)

Wheeler summarized his idea — the observer-participant who is both the result of an evolutionary process and, in some sense, the cause of his own emergence — in two ways: in the famous sketch given in Fig. 26.1 and in the maxim “It from bit.” In the attempt to summarize this chapter’s thesis with an equal economy of words I offer the corresponding maxim, “Us from it.” The maxim expresses the bold question that gives rise to the emergentist research program: Does nature, in its matter and its laws, manifest an inbuilt tendency to bring about increasing complexity? Is there an apparently inevitable process of complexification that runs from the periodic table of the elements through the explosive variations of evolutionary history to the unpredictable progress of human cultural history, and perhaps even beyond? (Clayton 2004: 577)

The emergence hypothesis requires that we proceed through at least four stages. The first stage involves rather straightforward physics — say, the emergence of classical phenomena from the quantum world (Zurek 1991, 2002) or the emergence of chemical properties through molecular structure (Earley 1981). In a second stage we move from the obvious cases of emergence in evolutionary history toward what may be the biology of the future: a new, law-based “general biology” (Kauffman 2000) that will uncover the laws of emergence underlying natural history. Stage three of the research program involves the study of “products of the brain” (perception, cognition, awareness), which the program attempts to understand not as unfathomable mysteries but as emergent phenomena that arise as natural products of the complex interactions of brain and central nervous system. Some add a fourth stage to the program, one that is more metaphysical in nature: the suggestion that the ultimate results, or the original causes, of natural emergence transcend or lie beyond Nature as a whole. Those who view stage-four theories with suspicion should note that the present chapter does not appeal to or rely on metaphysical speculations of this sort in making its case. (Clayton 2004: 578-579)

Defining terms and assumptions

The basic concept of emergence is not complicated, even if the empirical details of emergent processes are. We turn to Wheeler, again, for an opening formulation:

When you put enough elementary units together, you get something that is more than the sum of these units. A substance made of a great number of molecules, for instance, has properties such as pressure and temperature that no one molecule possesses. It may be a solid or a liquid or a gas, although no single molecule is solid or liquid or gas. (Wheeler 1998: 341)

Or, in the words of biochemist Arthur Peacocke, emergence takes place when “new forms of matter, and a hierarchy of organization of these forms … appear in the course of time” and “these new forms have new properties, behaviors, and networks of relations” that must be used to describe them (Peacocke 1993: 62).

Clearly, no one-size-fits-all theory of emergence will be adequate to the wide variety of emergent phenomena in the world. Consider the complex empirical differences that are reflected in these diverse senses of emergence:

• temporal or spatial emergence
• emergence in the progression from simple to complex
• emergence in increasingly complex levels of information processing
• the emergence of new properties (e.g., physical, biological, psychological)
• the emergence of new causal entities (atoms, molecules, cells, central nervous system)
• the emergence of new organizing principles or degrees of inner organization (feedback loops, autocatalysis, “autopoiesis”)
• emergence in the development of “subjectivity” (if one can draw a ladder from perception, through awareness, self-awareness, and self-consciousness, to rational intuition).

Despite the diversity, certain parameters do constrain the scientific study of emergence:

  1. Emergence studies will be scientific only if emergence can be explicated in terms that the relevant sciences can study, check, and incorporate into actual theories.
  2. Explanations concerning such phenomena must thus be given in terms of the structures and functions of stuff in the world. As Christopher Southgate writes, “An emergent property is one describing a higher level of organization of matter, where the description is not epistemologically reducible to lower-level concepts” (Southgate et al. 1999: 158).
  3. It also follows that all forms of dualism are disfavored. For example, only those research programs count as emergentist which refuse to accept an absolute break between neurophysiological properties and mental properties. “Substance dualisms,” such as the Cartesian delineation of reality into “matter” and “mind,” are generally avoided. Instead, research programs in emergence tend to combine sustained research into (in this case) the connections between brain and “mind,” on the one hand, with the expectation that emergent mental phenomena will not be fully explainable in terms of underlying causes on the other.
  4. By definition, emergence transcends any single scientific discipline. At a recent international consultation on emergence theory, each scientist was asked to define emergence, and each offered a definition of the term in his or her own specific field of inquiry: physicists made emergence a product of time-invariant natural laws; biologists presented emergence as a consequence of natural history; neuroscientists spoke primarily of “things that emerge from brains”; and engineers construed emergence in terms of new things that we can build or create. Each of these definitions contributes to, but none can be the sole source for, a genuinely comprehensive theory of emergence. (Clayton 2004: 579-580)

Physics to chemistry

(….) Things emerge in the development of complex physical systems that are understood by observation and cannot be derived from first principles, even given a complete knowledge of the antecedent states. One would not know about conductivity, for example, from a study of individual electrons alone; conductivity is a property that emerges only in complex solid state systems with huge numbers of electrons…. Such examples are convincing: physicists are familiar with a myriad of cases in which physical wholes cannot be predicted based on knowledge of their parts. Intuitions differ, though, on the significance of this unpredictability. (Clayton 2004: 580)

(….) [Such examples are] unpredictable even in principle — if the system-as-a-whole is really more than the sum of its parts.

Simulated Evolutionary Systems

Computer simulations study the processes whereby very simple rules give rise to complex emergent properties. John Conway’s program “Life,” which simulates cellular automata, is already widely known…. Yet even in as simple a system as Conway’s “Life,” predicting the movement of larger structures in terms of the simple parts alone turns out to be extremely complex. Thus in the messy real world of biology, behaviors of complex systems quickly become noncomputable in practice…. As a result — and, it now appears, necessarily — scientists rely on explanations given in terms of the emerging structures and their causal powers. Dreams of a final reduction “downwards” are fundamentally impossible. Recycled lower-level descriptions cannot do justice to the actual emergent complexity of the natural world as it has evolved. (Clayton 2004: 582)
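Conway's "Life" is easy to reproduce. The following minimal sketch (my own illustration, not Clayton's code) implements the standard birth-and-survival rule on a sparse grid; gliders and oscillators emerge from a rule that only consults each cell's eight neighbours.

```python
# A minimal sketch of Conway's 'Life' on a sparse grid (an illustration,
# not Clayton's code): a cell is alive next generation if it has exactly
# three live neighbours, or two if it is already alive.
from collections import Counter

def step(live_cells):
    """Advance one generation; `live_cells` is a set of (x, y) pairs."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

if __name__ == "__main__":
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
    for _ in range(8):
        cells = step(cells)
    print(sorted(cells))  # the same five-cell glider, translated diagonally
```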

Ant colony behavior

Neural network models of emergent phenomena can model … the emergence of ant colony behavior from simple behavioral “rules” that are genetically programmed into individual ants. (….) Even if the behavior of an ant colony were nothing more than an aggregate of the behaviors of the individual ants, whose behavior follows very simple rules, the result would be remarkable, for the behavior of the ant colony as a whole is extremely complex and highly adaptive to complex changes in its ecosystem. The complex adaptive potentials of the ant colony as a whole are emergent features of the aggregated system. The scientific task is to correctly describe and comprehend such emergent phenomena where the whole is more than the sum of the parts. (Clayton 2004: 586-587)
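A minimal sketch of the aggregate picture (my own illustration; the rules, path lengths, and evaporation rate are invented for the example) shows how a colony-level preference for the shorter of two paths can emerge from ants following a simple pheromone rule that no individual ant "understands":

```python
# A minimal sketch (my own; the rules and parameters are invented for the
# example) of colony-level behaviour emerging from a simple individual rule:
# each ant picks a path with probability proportional to its pheromone, and
# shorter paths are reinforced more strongly per trip.
import random

def run_colony(n_ants=2000, evaporation=0.02, seed=1):
    random.seed(seed)
    pheromone = {"short": 1.0, "long": 1.0}
    length = {"short": 1.0, "long": 2.0}
    for _ in range(n_ants):
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[path] += 1.0 / length[path]   # shorter path, stronger deposit
        for p in pheromone:                      # pheromone slowly evaporates
            pheromone[p] *= 1 - evaporation
    return pheromone

print(run_colony())  # pheromone ends up concentrated on the short path
```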

Biochemistry

So far we have considered models of how nature could build highly complex and adaptive behaviors from relatively simple processing rules. Now we must consider actual cases in which significant order emerges out of (relative) chaos. The big question is how nature obtains order “out of nothing,” that is, when the order is not present in the initial conditions but is produced in the course of a system’s evolution. What are some of the mechanisms that nature in fact uses? We consider four examples. (Clayton 2004: 587)

Fluid convection

The Bénard instability is often cited as an example of a system far from thermodynamic equilibrium, where a stationary state becomes unstable and then manifests spontaneous organization (Peacocke 1994: 153). In the Bénard case, the lower surface of a horizontal layer of liquid is heated. This produces a heat flux from the bottom to the top of the liquid. When the temperature gradient reaches a certain threshold value, conduction no longer suffices to convey the heat upward. At that point convection cells form at right angles to the vertical heat flow. The liquid spontaneously organizes itself into these hexagonal structures or cells. (Clayton 2004: 587-588)

Differential equations describing the heat flow exhibit a bifurcation of the solutions. This bifurcation represents the spontaneous self-organization of large numbers of molecules, formerly in random motion, into convection cells. This represents a particularly clear case of the spontaneous appearance of order in a system. According to the emergence hypothesis, many cases of emergent order in biology are analogous. (Clayton 2004: 588)
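The appearance of order at a threshold can be caricatured with a pitchfork bifurcation, the simplest normal form usually invoked for transitions of this kind. The sketch below is my own illustrative analogy, not a model of the Bénard system itself: below the critical value of the control parameter only the quiescent state exists; above it, two new ordered states appear.

```python
# A minimal sketch (my own analogy, not a model of the Bénard system) of a
# pitchfork bifurcation for dA/dt = r*A - A**3: below the critical value of
# the control parameter r only the quiescent state A = 0 exists; above it,
# two new ordered states appear.
def steady_states(r):
    """Real steady states of dA/dt = r*A - A**3."""
    if r <= 0:
        return [0.0]
    return [-(r ** 0.5), 0.0, r ** 0.5]

for r in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(f"r = {r:5.2f} -> steady states: {steady_states(r)}")
```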

Autocatalysis in biochemical metabolism

Autocatalytic processes play a role in some of the most fundamental examples of emergence in the biosphere. These are relatively simple chemical processes with catalytic steps, yet they well express the thermodynamics of the far-from-equilibrium chemical processes that lie at the base of biology. (….) Such loops play an important role in metabolic functions. (Clayton 2004: 588)
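A minimal sketch of an autocatalytic step (my own toy model, with an invented rate constant and concentrations) shows the characteristic signature: the product accelerates its own formation and then saturates as the substrate is consumed.

```python
# A minimal sketch (my own toy model, with invented rate constant and
# concentrations) of an autocatalytic step A + X -> 2X: the product X
# accelerates its own formation, then growth saturates as A runs out.
def simulate(a=10.0, x=0.01, k=0.05, dt=0.01, steps=2000):
    history = []
    for _ in range(steps):
        rate = k * a * x              # the rate depends on X itself
        a, x = a - rate * dt, x + rate * dt
        history.append(x)
    return history

xs = simulate()
print("X at t = 5, 10, 20:", round(xs[499], 3), round(xs[999], 3), round(xs[1999], 3))
```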

Belousov-Zhabotinsky reactions

The role of emergence becomes clearer as one considers more complex examples. Consider the famous Belousov-Zhabotinsky reaction (Prigogine 1984: 152). This reaction consists of the oxidation of an organic acid (malonic acid) by potassium bromate in the presence of a catalyst such as cerium, manganese, or ferroin. From the four inputs into the chemical reactor more than 30 products and intermediaries are produced. The Belousov-Zhabotinsky reaction provides an example of a biochemical process where a high level of disorder settles into a patterned state. (Clayton 2004: 589)
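The oscillatory behaviour can be imitated with the Brusselator, a standard textbook toy model of oscillating chemistry. It is a stand-in I am supplying here, not the actual Belousov-Zhabotinsky mechanism described in the quoted text, and the parameters are arbitrary choices.

```python
# A minimal sketch of the Brusselator, a standard textbook caricature of
# oscillating chemistry that I am supplying as a stand-in for the real
# Belousov-Zhabotinsky mechanism; the parameters a and b are arbitrary.
# Sustained oscillations appear when b > 1 + a**2.
def simulate(a=1.0, b=3.0, x=1.0, y=1.0, dt=0.01, steps=5000):
    trajectory = []
    for _ in range(steps):
        dx = a + x * x * y - (b + 1) * x   # dX/dt
        dy = b * x - x * x * y             # dY/dt
        x, y = x + dx * dt, y + dy * dt    # explicit Euler step
        trajectory.append((x, y))
    return trajectory

xs = [x for x, _ in simulate()[2000:]]     # discard the transient
print("X oscillates between", round(min(xs), 2), "and", round(max(xs), 2))
```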

(….) Put into philosophical terms, the data suggest that emergence is not merely epistemological but can also be ontological in nature. That is, it’s not just that we can’t predict emergent behaviors in these systems from a complete knowledge of the structures and energies of the parts. Instead, studying the systems suggests that structural features of the system — which are emergent features of the system as such and not properties pertaining to any of its parts — determine the overall state of the system, and hence as a result the behavior of individual particles within the system. (Clayton 2004: 589-590)

The role of emergent features of systems is increasingly evident as one moves from the very simple systems so far considered to the sorts of systems one actually encounters in the biosphere. (….) (Clayton 2004: 589-590)

The biochemistry of cell aggregation and differentiation

We move finally to processes where a random behavior or fluctuation gives rise to organized behavior between cells based on self-organization mechanisms. Consider the process of cell aggregation and differentiation in cellular slime molds (specifically, in Dictyostelium discoideum). The slime mold cycle begins when the environment becomes poor in nutrients and a population of isolated cells joins into a single mass on the order of 10^4 cells (Prigogine 1984: 156). The aggregate migrates until it finds a higher nutrient source. Differentiation then occurs: a stalk or “foot” forms out of about one-third of the cells and is soon covered with spores. The spores detach and spread, growing when they encounter suitable nutrients and eventually forming a new colony of amoebas. (Clayton 2004: 589-591) [See Levinton 2001: 166.]

Note that this aggregation process is randomly initiated. Autocatalysis begins in a random cell within the colony, which then becomes the attractor center. It begins to produce cyclic adenosine monophosphate (cAMP). As cAMP is released in greater quantities into the extracellular medium, it catalyzes the same reaction in the other cells, amplifying the fluctuation and total output. Cells then move up the gradient to the source cell, and other cells in turn follow their cAMP trail toward the attractor center. (Clayton 2004: 589-591)
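A minimal sketch (my own illustration, with invented numbers) of the aggregation step: a randomly chosen cell becomes the signalling centre and every other cell simply climbs the gradient toward it, so a scattered population collapses into a single aggregate.

```python
# A minimal sketch (my own illustration, with invented numbers) of the
# aggregation step: a randomly chosen cell becomes the signalling centre
# and every other cell climbs the gradient toward it.
import random

random.seed(3)
positions = [random.uniform(0, 100) for _ in range(50)]   # scattered cells
centre = random.choice(positions)        # a random cell becomes the attractor

for _ in range(300):
    new_positions = []
    for x in positions:
        step_size = min(0.5, abs(centre - x))             # do not overshoot
        direction = 1.0 if x < centre else -1.0 if x > centre else 0.0
        new_positions.append(x + direction * step_size)
    positions = new_positions

print("spread after aggregation:", round(max(positions) - min(positions), 4))
```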

Biology

Ilya Prigogine did not follow the notion of “order out of chaos” up through the entire ladder of biological evolution. Stuart Kauffman (1995, 2000) and others (Gell-Mann 1994; Goodwin 2001; see also Cowan et al. 1994 and other works in the same series) have however recently traced the role of the same principles in living systems. Biological processes in general are the result of systems that create and maintain order (stasis) through massive energy input from their environment. In principle these types of processes could be the object of what Kauffman envisions as “a new general biology,” based on sets of still-to-be-determined laws of emergent ordering or self-complexification. Like the biosphere itself, these laws (if they indeed exist) are emergent: they depend on the underlying physical and chemical regularities but are not reducible to them. [Note, there is no place for mind as a causal source.] Kauffman (2000: 35) writes: (Clayton 2004: 592)

I wish to say that life is an expected, emergent property of complex chemical reaction networks. Under rather general conditions, as the diversity of molecular species in a reaction system increases, a phase transition is crossed beyond which the formation of collectively autocatalytic sets of molecules suddenly becomes almost inevitable. (Clayton 2004: 593)

Until a science has been developed that formulates and tests physics-like laws at the level of biology [evo-devo is the closest we have so far come], the “new general biology” remains an as-yet-unverified, though intriguing, hypothesis. Nevertheless, recent biology, driven by the genetic revolution on the one side and by the growth of the environmental sciences on the other, has made explosive advances in understanding the role of self-organizing complexity in the biosphere. Four factors in particular play a central role in biological emergence. (Clayton 2004: 593)

The role of scaling

As one moves up the ladder of complexity, macrostructures and macromechanisms emerge. In the formation of new structures, scale matters — or, better put, changes in scale matter. Nature continually evolves new structures and mechanisms as life forms move up the scale from molecules (c. 1 Ångstrom) to neurons (c. 100 micrometers) to the human central nervous system (c. 1 meter). As new structures are developed, new whole-part relations emerge. (Clayton 2004: 593)

John Holland argues that different sciences in the hierarchy of emergent complexity occur at jumps of roughly three orders of magnitude in scale. By the point at which systems have become too complex for predictions to be calculated, one is forced to “move the description ‘up a level’” (Holland 1998: 201). The “microlaws” still constrain outcomes, of course, but additional basic descriptive units must also be added. This pattern of introducing new explanatory levels iterates in a periodic fashion as one moves up the ladder of increasing complexity. To recognize the pattern is to make emergence an explicit feature of biological research. As of now, however, science possesses only a preliminary understanding of the principles underlying this periodicity. (Clayton 2004: 593)

The role of feedback loops

The role of feedback loops, examined above for biochemical processes, becomes increasingly important from the cellular level upwards. (….) (Clayton 2004: 593)

The role of local-global interactions

In complex dynamical systems the interlocked feedback loops can produce an emergent global structure. (….) In these cases, “the global property — [the] emergent behavior — feeds back to influence the behavior of the individuals … that produced it” (Lewin 1999). The global structure may have properties the local particles do not have. (Clayton 2004: 594)

(….) In contrast …, Kauffman insists that an ecosystem is in one sense “merely” a complex web of interactions. Yet consider a typical ecosystem of organisms of the sort that Kauffman (2000: 191) analyzes … Depending on one’s research interests, one can focus attention either on holistic features of such systems or on the interactions of the components within them. Thus Langton’s term “global” draws attention to system-level features and properties, whereas Kauffman’s “merely” emphasizes that no mysterious outside forces need to be introduced (such as, e.g., Rupert Sheldrake’s (1995) “morphic resonance”). Since the two dimensions are complementary, neither alone is scientifically adequate; the explosive complexity manifested in the evolutionary process involves the interplay of both systemic features and component interactions. (Clayton 2004: 595)

The role of nested hierarchies

A final layer of complexity is added in cases where the local-global structure forms a nested hierarchy. Such hierarchies are often represented using nested circles. Nesting is one of the basic forms of combinatorial explosion. Such forms appear extensively in natural biological systems (Wolfram 2002: 357ff.; see his index for dozens of further examples of nesting). Organisms achieve greater structural complexity, and hence increased chances of survival, as they incorporate discrete subsystems. Similarly, ecosystems complex enough to contain a number of discrete subsystems evidence greater plasticity in responding to destabilizing factors. (Clayton 2004: 595-596)

“Strong” versus “weak” emergence

The resulting interactions between parts and wholes mirror yet exceed the features of emergence that we observed in chemical processes. To the extent that the evolution of organisms and ecosystems evidences a “combinatorial explosion” (Morowitz 2002) based on factors such as the four just summarized, the hope of explaining entire living systems in terms of simple laws appears quixotic. Instead, natural systems made of interacting complex systems form a multileveled network of interdependency (cf. Gregersen 2003), and each level contributes distinct elements to the overall explanation. (Clayton 2004: 596-597)

Systems biology, the Siamese twin of genetics, has established many of the features of life’s “complexity pyramid” (Oltvai and Barabási 2002; cf. Barabási 2002). Construing cells as networks of genes and proteins, systems biologists distinguish four distinct levels: (1) the base functional organization (genome, transcriptome, proteome, and metabolome) [see below, Morowitz on the “dogma of molecular biology.”]; (2) the metabolic pathways built up out of these components; (3) larger functional modules responsible for major cell functions; and (4) the large-scale organization that arises from the nesting of the functional modules. Oltvai and Barabási (2002) conclude that “[the] integration of different organizational levels increasingly forces us to view cellular functions as distributed among groups of heterogeneous components that all interact within large networks.” Milo et al. (2002) have recently shown that a common set of “network motifs” occurs in complex networks in fields as diverse as biochemistry, neurobiology, and ecology. As they note, “similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans.” (Clayton 2004: 598)
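The "network motifs" result can be made concrete with a minimal sketch (my own illustration; the graph is invented) that counts feed-forward loops, one of the motifs Milo et al. report, in a small directed network.

```python
# A minimal sketch (my own; the graph is invented) that counts feed-forward
# loops, one of the recurring network motifs reported by Milo et al.:
# x -> y, y -> z, and a shortcut x -> z.
from itertools import permutations

def feed_forward_loops(edges):
    nodes = {n for edge in edges for n in edge}
    return [
        (x, y, z)
        for x, y, z in permutations(nodes, 3)
        if (x, y) in edges and (y, z) in edges and (x, z) in edges
    ]

edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
print(feed_forward_loops(edges))  # [('A', 'B', 'C')]
```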

Such compounding of complexity — the system-level features of networks, the nodes of which are themselves complex systems — is sometimes said to represent only a quantitative increase in complexity, in which nothing “really new” emerges. This view I have elsewhere labeled “weak emergence.” [This would be a form of philosophical materialism qua philosophical reductionism.] It is the view held by (among others) John Holland (1998) and Stephen Wolfram (2002). But, as Leon Kass (1999: 62) notes in the context of evolutionary biology, “it never occurred to Darwin that certain differences of degree — produced naturally, accumulated gradually (even incrementally), and inherited in an unbroken line of descent — might lead to a difference in kind …” Here Kass nicely formulates the principle involved. As long as nature’s process of compounding complex systems leads to irreducibly complex systems with structures and causal mechanisms of their own, then the natural world evidences not just weak emergence but also a more substantive change that we might label strong emergence. Cases of strong emergence are cases where the “downward causation” emphasized by George Ellis [see p. 607, True complexity and its associated ontology.] … is most in evidence. By contrast, in the relatively rare cases where rules relate the emergent system to its subvening system (in simulated systems, via algorithms; in natural systems, via “bridge laws”) weak emergence interpretation suffices. In the majority of cases, however, such rules are not available; in these cases, especially where we have reason to think that such lower-level rules are impossible in principle, the strong emergence interpretation is suggested. (Clayton 2004: 597-598)

Neuroscience, qualia, and consciousness

Consciousness, many feel, is the most important instance of a clearly strong form of emergence. Here if anywhere, it seems, nature has produced something irreducible — no matter how strong the biological dependence of mental qualia (i.e., subjective experiences) on antecedent states of the central nervous system may be. To know everything there is to know about the progression of brain states is not to know what it’s like to be you, to experience your joy, your pain, or your insights. No human researcher can know, as Thomas Nagel (1980) so famously argued, “what it’s like to be a bat.” (Clayton 2004: 598)

Unfortunately consciousness, however intimately familiar we may be with it on a personal level, remains an almost total mystery from a scientific perspective. Indeed, as Jerry Fodor (1992) noted, “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness.” (Clayton 2004: 598)

Given our lack of comprehension of the transition from brain states to consciousness, there is virtually no way to talk about the “C” word without sliding into the domain of philosophy. The slide begins if the emergence of consciousness is qualitatively different from other emergences; in fact, it begins even if consciousness is different from the neural correlates of consciousness. Much suggests that both differences obtain. How far can neuroscience go, even in principle, in explaining consciousness? (Clayton 2004: 598-599)

Science’s most powerful ally, I suggest, is emergence. As we’ve seen, emergence allows one to acknowledge the undeniable differences between mental properties and physical properties, while still insisting on the dependence of the entire mental life on the brain states that produce it. Consciousness, the thing to be explained, is different because it represents a new level of emergence; but brain states — understood both globally (as the state of the brain as a whole) and in terms of their microcomponents — are consciousness’s sine qua non. The emergentist framework allows science to identify the strongest possible analogies with complex systems elsewhere in the biosphere. So, for example, other complex adaptive systems also “learn,” as long as one defines learning as “a combination of exploration of the environment and improvement of performance through adaptive change” (Schuster 1994). Obviously, systems from primitive organisms to primate brains record information from their environment and use it to adjust future responses to that environment. (Clayton 2004: 599)

Even the representation of visual images in the brain, a classically mental phenomenon, can be parsed in this way. Consider Max Velmans’s (2000) schema … Here a cat-in-the-world and the neural representation of the cat are both parts of a natural system; no nonscientific mental “things” like ideas or forms are introduced. In principle, then, representation might be construed as merely a more complicated version of the feedback loop between a plant and its environment … Such is the “natural account of phenomenal consciousness” defended by (e.g.) Le Doux (1978). In a physicalist account of mind, no mental causes are introduced. Without emergence, the story of consciousness must be retold such that thoughts and intentions play no causal role. … If one limits the causal interactions to world and brains, mind must appear as a sort of thought-bubble outside the system. Yet it is counter to our empirical experience in the world, to say the least, to leave no causal role to thoughts and intentions. For example, it certainly seems that your intention to read this … is causally related to the physical fact of your presently holding this book [or browsing this web page, etc.,] in your hands. (Clayton 2004: 599-600)

Arguments such as this force one to acknowledge the disanalogies between the emergence of consciousness and previous examples of emergence in complex systems. Consciousness confronts us with a “hard problem” different from those already considered (Chalmers 1995: 201):

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

The distinct features of human cognition, it seems, depend on a quantitative increase in brain complexity vis-à-vis other higher primates. Yet, if Chalmers is right (as I fear he is), this particular quantitative increase gives rise to a qualitative change. Even if the development of conscious awareness occurs gradually over the course of primate evolution, the (present) end of that process confronts the scientist with conscious, symbol-using beings clearly distinct from those who preceded them (Deacon 1997). Understanding consciousness even as an emergent phenomenon in the natural world — that is, naturalistically — requires a theory of “felt qualities,” “subjective intentions,” and “states of experience.” It requires intention-based explanations and, it appears, a new set of sciences: the social or human sciences. By this point emergence has driven us to a level beyond the natural-science-based framework of the present book. New concepts, new testing mechanisms, and perhaps even new standards for knowledge are now required. From the perspective of physics the trail disappears into the clouds; we can follow it no further. (Clayton 2004: 600-601)

The five emergences

In the broader discussion the term “emergence” is used in multiple and incompatible senses, some of which are incompatible with the scientific project. Clarity is required to avoid equivocation between five distinct levels on which the term may be applied: (Clayton 2004: 601)

• Let emergence-1 refer to occurrences of the term within the context of a specific scientific theory. Here it describes features of a specified physical or biological system of which we have some scientific understanding. Scientists who employ these theories claim that the term (in a theory-specific sense) is currently useful for describing features of the natural world. The preceding pages include various examples of theories in which this term occurs. At the level of emergence-1 alone there is no way to establish whether the term is used analogously across theories, or whether it really means something utterly distinct in each theory in which it appears. (Clayton 2004: 601-602)

• Emergence-2 draws attention to features of the world that may eventually become part of a unified scientific theory. Emergence in this sense expresses postulated connections or laws that may in the future become the basis for one or more branches of science. One thinks, for example, of the role of emergence in Stuart Kauffman’s notion of a new “general biology,” or in certain proposed theories of complexity or complexification. (Clayton 2004: 602)

• Emergence-3 is a meta-scientific term that points out a broad pattern across scientific theories. Used in this sense, the term is not drawn from a particular scientific theory; it is an observation about a significant pattern that connects a range of scientific theories. In the preceding pages I have often employed the term in this fashion. My purpose has been to draw attention to common features of the physical systems under discussion, as in (e.g.) the phenomena of autocatalysis, complexity, and self-organization. Each is scientifically understood, and each shares common features that are significant. Emergence draws attention to these features, whether or not the individual theories actually use the same label for the phenomena they describe. (Clayton 2004: 602)

Emergence-3 thus serves a heuristic function. It assists in the recognition of common features between theories. Recognizing such patterns can help to extend existing theories, to formulate insightful new hypotheses, or to launch new interdisciplinary research programs.[4] (Clayton 2004: 602)

• Emergence-4 expresses a feature in the movement between scientific disciplines, including some of the most controversial transition points. Current scientific work is being done, for example, to understand how chemical structures are formed, to reconstruct the biochemical dynamics underlying the origins of life, and to conceive how complicated neural processes produce cognitive phenomena such as memory, language, rationality, and creativity. Each involves efforts to understand diverse phenomena involving levels of self-organization within the natural world. Emergence-4 attempts to express what might be shared in common by these (and other) transition points. (Clayton 2004: 602)

Here, however, a clear limitation arises. A scientific theory that explains how chemical structures are formed is perhaps unlikely to explain the origins of life. Neither theory will explain how self-organizing neural nets encode memories. Thus emergence-4 stands closer to the philosophy of science than it does to actual scientific theory. Nonetheless, it is the sort of philosophy of science that should be helpful to scientists.[5] (Clayton 2004: 602)

• Emergence-5 is a metaphysical theory. It represents the view that the nature of the natural world is such that it produces continually more complex realities in a process of ongoing creativity. The present chapter does not comment on such metaphysical claims about emergence.[6] (Clayton 2004: 603)

Conclusion

(….) Since emergence is used as an integrative ordering concept across scientific fields …, it remains, at least in part, a meta-scientific term. (Clayton 2004: 603)

Does the idea of distinct levels then conflict with “standard reductionist science?” No, one can believe that there are levels in Nature and corresponding levels of explanation while at the same time working to explain any given set of higher-order phenomena in terms of underlying laws and systems. In fact, isn’t the first task of science to whittle away at every apparent “break” in Nature, to make it smaller, to eliminate it if possible? Thus, for example, to study the visual perceptual system scientifically is to attempt to explain it fully in terms of the neural structures and electrochemical processes that produce it. The degree to which downward explanation is possible will be determined by long-term empirical research. At present we can only wager on the one outcome or the other based on the evidence before us. (Clayton 2004: 603)

Notes:

[2] Gordon (2000) disputes this claim: “One lesson from ants is that to understand a system like theirs, it is not sufficient to take the system apart. The behavior of each unit is not encapsulated inside that unit but comes from its connections with the rest of the system.” I likewise break strongly with the aggregate model of emergence.

[3] Generally this seems to be a question that makes physicists uncomfortable (“Why, that’s impossible, of course!”), whereas biologists tend to recognize in it one of the core mysteries in the evolution of living systems.

[4] For this reason, emergence-3 stands closer to the philosophy of science than do the previous two senses. Yet it is a kind of philosophy of science that stands rather close to actual science and that seeks to be helpful to it. [The goal of all true “philosophy of science” is to seek critical clarification of ideas, concepts, and theoretical formulations; hence to be “helpful” to science and the quest for human knowledge.] By way of analogy one thinks of the work of philosophers of quantum physics such as Jeremy Butterfield or James Cushing, whose work can be and has actually been helpful to bench physicists. One thinks as well of the analogous work of certain philosophers in astrophysics (John Barrow) or in evolutionary biology (David Hull, Michael Ruse).

[5] This as opposed, for example, to the kind of philosophy of science currently popular in English departments and in journals like Critical Inquiry — the kind of philosophy of science that asserts that science is a text that needs to be deconstructed, or that science and literature are equally subjective, or that the worldview of Native Americans should be taught in science classes.

— Clayton, Philip D. Emergence: us from it. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul W. Davies, and Charles L. Harper, Jr., ed.). Cambridge: Cambridge University Press; 2004; pp. 577-606.

~ ~ ~

* Emergence: us from it. In Science and Ultimate Reality: Quantum Theory, Cosmology and Complexity (John D. Barrow, Paul W. Davies, and Charles L. Harper, Jr., ed.)