Our discussion of the nature of physical concepts has shown that a main reason for formulating concepts is to use them in connection with mathematically stated laws. It is tempting to go one step further and to demand that practicing scientists deal only with ideas corresponding to strict measurables, that they formulate only concepts reducible to the least ambiguous of all data: numbers and measurements. The history of science would indeed furnish examples to show the great advances that followed from the formation of strictly quantitative concepts. (Holton and Brush 2001, 170)
(….) The nineteenth-century physicist Lord Kelvin commended this attitude in the famous statement:
I often say that when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of Science, whatever the matter may be. (“Electrical Units of Measurement”)
Useful though this trend is within its limits, there is an entirely different aspect to scientific concepts: indeed it is probable that science would stop if every scientist were to avoid anything other than strictly quantitative concepts. We shall find that a position like Lord Kelvin’s (which is similar to that held at present by some thinkers in the social sciences) does justice neither to the complexity and fertility of the human mind nor to the needs of contemporary physical science itself—not to scientists nor to science. Quite apart from the practical impossibility of demanding of one’s mind that at all times it identify such concepts as electron only with the measurable aspects of that construct, there are specifically two main objections: First, this position misunderstands how scientists as individuals do their work, and second, it misunderstands how science as a system grows out of the contribution of individuals. (Holton and Brush 2001, 170-171)
(….) While a scientist struggles with a problem, there can be little conscious limitation on his free and at times audacious constructions. Depending on his field, his problem, his training, and his temperament, he may allow himself to be guided by a logical sequence based on more or less provisional hypotheses, or equally likely by “feelings for things,” by likely analogy, by some promising guess, or he may follow a judicious trial-and-error procedure.
The well-planned experiment is, of course, by far the most frequent one in modern science and generally has the best chance of success; but some men and women in science have often not even mapped out a tentative plan of attack on the problems, but have instead let their enthusiasms, their hunches, and their sheer joy of discovery suggest the line of work. Sometimes, therefore, the discovery of a new effect or tool or technique is followed by a period of trying out one or another application in a manner that superficially almost seems playful. Even the philosophical orientation of scientists is far less rigidly prescribed than might be supposed. (Holton and Brush 2001, 170-171)
‘Tis a dangerous thing to ingage the authority of Scripture in disputes about the Natural World, in opposition to Reason, lest Time, which brings all things to light, should discover that to be false which we had made Scripture to assert.
— Thomas Burnet, Archaeologiae Philosophicae, 1692
In the late nineteenth century intellectuals assumed that truth had spiritual, moral, and cognitive dimensions. By 1930, however, intellectuals had abandoned this broad conception of truth. They embraced, instead, a view of knowledge that drew a sharp distinction between “facts” and “values.” They associated cognitive truth with empirically verified knowledge and maintained that by this standard, moral values could not be validated as “true.” In the nomenclature of the twentieth century, only “science” constituted true knowledge. Moral and spiritual values could be “true” in an emotional or nonliteral sense, but not in terms of cognitively verifiable knowledge. The term truth no longer comfortably encompassed factual knowledge and moral values.
— Julie A. A. Reuben (1996) The Making of the Modern University: Intellectual Transformation and the Marginalization of Morality
Certain people have different standards for recognizing “truth.” Given access to the same facts, two individuals can look at an issue and reach utterly different conclusions, to the point where they believe those with a different opinion belong somewhere on a spectrum from stupid to perverse…. (Asher 2012: xiv)
(….) The creationist has something at stake, some worldview or allegiance, that makes a fair, honest view of the data behind Darwinian evolutionary biology impossible. Why?
(….) [T]here is an obvious explanation for antipathy toward Charles Darwin among various anti-evolutionist groups of the last 150 years, groups that are often connected to one kind of intense religious creed or another: they think Darwin threatens their worldview. Contributing to this conviction are those biologists who portray evolution as tied to atheism, who help convince the devout that a natural connection of humanity with other organisms is incompatible with their religion. Compounding things further is the fact that adherence to many religious worldviews is not flexible, and any scientific theory or philosophy that seems to threaten certain beliefs must be wrong, whatever some scientist may say about evidence. (Asher 2012: xvi)
Coyne says there is one way to be rational, and any of this stuff about alternative “truth” is relativist nonsense not worth the flatscreen monitor on which it is written:
What, then, is the nature of “religious truth” that supposedly complements “scientific truth”?… Anything touted as a “truth” must come with a method for being disproved—a method that does not depend on personal revelation. … It would appear, then, that one cannot be coherently religious and scientific at the same time. That alleged synthesis requires that with one part of your brain you accept only those things that are tested and supported by agreed-upon evidence, logic, and reason, while with the other part of your brain you accept things that are unsupportable or even falsified.
I disagree, and would argue that there are many things in life that deserve the descriptor “truth” but are not amenable to rational disproof. Coyne is absolutely correct to say that coddling the irrational—those for whom “religious truth” means stoning adulterers or drinking poisoned Kool-Aid—is incompatible with science and, more generally, civil society. However, while science is a-religious, it is not anti-religious, at least in the important sense that it does not (indeed, cannot) concern itself with phenomena beyond what we rationally perceive. It is not only possible to portray science as lacking fatal consequences for those religious tenets that concern things we cannot empirically observe (such as purpose or agency in life), but it is precisely what scientists have got to do to make a compelling case to the public. Coyne tosses “religion” into the same dumpster as any passing superstition, and actively encourages the perception that science is corrosive to any religious sentiment. Yes, there are religious claims that are demonstrably wrong in an empirical sense. … However, such specific claims do not do justice to the religion integrally tied into the identity of many lay-people and scientists alike, an identity that by any meaningful definition is worthy of the name “truth.” (Asher 2012: xvii-xviii)
— Asher, Robert J. Evolution and Belief [Confessions of a Religious Paleontologist]. Cambridge: Cambridge University Press; 2012; p. xiv.
When we reflect on science—its aims, its values, its limits—we are doing philosophy, not science. This may be bad news for the high priests of scientism, who reject philosophy, but there is no escaping it.
(….) There is a general agreement that science concentrates on aspects of the world that can be studied through theories that can be tested by doing experiments. Those aspects relate to spatiotemporal patterns in nature, for this is what makes experiments possible. If other dimensions of reality exist, they simply cannot be studied using the methods of the empirical sciences.
(….) Modern science is an enormously wonderful and powerful achievement of our species, a culturally transcendent, universal method for studying the natural world. It should never be used as an ideological weapon. Scientific progress demands a respect for truth, rigor, and objectivity, three ethical values implied in the ethos of science. We can nevertheless draw different conclusions from our analyses of science, but we should always present them carefully, distinguishing what can be said in the name of science from personal interpretations that must be supported by independent reasons, or acknowledged simply as personal opinions. Our analysis shows that the Oracles differ in important points and are not consistently fighting for a common cause. When they go beyond their science, they use different arguments and arrive at different conclusions.
We conclude with one final insight. Science is compatible with a broad cross section of very different views on the deepest human problems. Weinberg, an agnostic Jew from New York, shared his Nobel Prize with Abdus Salam, a devout Muslim from Pakistan. They spoke different languages and had very different views on many important topics. But these differences were of no consequence when they came together to do science. Modern science can be embraced by any religion, any culture, any tribe, and brought to bear on whatever problems are considered most urgent, whether it be tracing their origins, curing their diseases, or cleaning up their water. Science should never be fashioned into a weapon for the promotion of an ideological agenda. Nevertheless, as history has shown, science is all too frequently enlisted in the service of propaganda; and, as we have argued in this book, we must be on guard against intellectual nonsense masquerading as science.
— Karl Giberson and Mariano Artigas (2007) in Oracles of Science: Celebrity Scientists versus God and Religion.
Darwinism as an ideology
One of the most interesting developments of the twentieth century has been the growing trend to regard Darwinian theory as transcending the category of provisional scientific theories, and constituting a “worldview.” Darwinism is here regarded as establishing a coherent worldview through its evolutionary narrative, which embraces such issues as the fundamental nature of reality, the physical universe, human origins, human nature, society, psychology, values, and destinies. While being welcomed by some, others have expressed alarm at this apparent failure to distinguish between good, sober, and restrained science on the one hand, and non-empirical metaphysics, fantasy, myth and ideology on the other. In the view of some, this transition has led to Darwinism becoming a religion or atheist faith tradition in its own right.
— Denis R. Alexander and Ronald L. Numbers (2010) in Biology and Ideology: From Descartes to Dawkins.
It is difficult to overestimate the importance of Darwinian thinking to American economic reform in the Gilded Age and Progressive Era. Evolutionary thought was American economic reform’s scientific touchstone and a vital source of ideas and conceptual support. The Wharton School’s Simon Nelson Patten, writing in 1894, observed that the century was closing with a bias for biological reasoning and analogy, just as the prior century had closed with a bias for the methods of physics and astronomy. The great scientific victories of the nineteenth century, Patten believed, were “in the field of biology.”
SOMETHING IN DARWIN FOR EVERYONE
To understand the influence of evolutionary thought on American economic reform, we must first appreciate that evolutionary thought in the Gilded Age and Progressive Era in no way dictated a conservative, pessimistic, Social Darwinist politics. On the contrary, evolutionary thought was protean, plural, and contested.
It could license, of course, arguments that explained and justified the economic status quo as survival of the fittest, so-called Social Darwinism. But evolutionary thought was no less useful to economic reformers, who found in it justification for optimism rather than pessimism, for intervention rather than fatalism, for vigorous rather than weak government, and for progress rather than drift. Evolution, as Irving Fisher insisted in National Vitality, did not teach a “fatalistic creed.” Evolution, rather, awakened the world to “the fact of its own improvability.”
In the thirty years bracketing 1900, there seems to have been something in Darwin for everyone. Karl Pearson, English eugenicist and founding father of modern statistical theory, found a case for socialism in Darwin, as did the co-discoverer of the theory of evolution by natural selection, Alfred Russel Wallace. Herbert Spencer, in contrast, famously used natural selection, which he called “survival of the fittest,” to defend limited government.
Warmongers borrowed the notion of survival of the fittest to justify imperial conquest, as when Josiah Strong asserted that the Anglo-Saxon race was “divinely commissioned” to conquer the backward races abroad. Opponents of war also found sustenance in evolutionary thought. Pyotr Kropotkin argued that the struggle for existence need not involve conflict, much less violence. Cooperation could well be the fittest strategy. David Starr Jordan, president of Stanford from 1891 to 1913 and a leader of the American Peace Movement during World War I, opposed war because it selected for the unfit. The fittest men died in battle, while the weaklings stayed home to reproduce.
Darwin seems to have been pro-natalist, on the grounds that more births increased the variation available for natural selection. Margaret Sanger argued that restricting births was the best way to select the fittest. Darwin’s self-appointed “bulldog,” T. H. Huxley, thought natural selection justified agnosticism, whereas devout American interpreters, such as botanist Asa Gray, found room in Darwinism for a deity.
It is a tribute to the influence of Darwinism that Darwin inspired exegetes of nearly every ideology: capitalist and socialist, individualist and collectivist, pacifist and militarist, pro-natalist and birth-controlling, as well as agnostic and devout.
Darwinism was itself plural, and Progressive Era evolutionary thought was more plural still. The ideas of other prominent evolutionists (notably, Herbert Spencer and Alfred Russel Wallace) were also influential in the Progressive Era, both when they accorded with Darwin and when they didn’t.
— Thomas C. Leonard (2016) in Illiberal Reformers: Race, Eugenics, and American Economics in the Progressive Era.
[L]iberal theology reconceptualizes the meaning of Christianity in the light of modern knowledge and ethical values. It is reformist in spirit and substance, not revolutionary. Specifically it is defined by its openness to the verdicts of modern intellectual inquiry, especially historical criticism and the natural sciences; its commitment to the authority of individual reason and experience; its conception of Christianity as an ethical way of life; its advocacy of moral concepts of atonement or reconciliation; and its commitments to make Christianity credible and socially relevant to contemporary people. In the nineteenth century, liberal theologians denied that God created the world in six days, commanded the genocidal extermination of Israel’s ancient enemies, demanded the literal sacrifice of his Son as a substitutionary legal payment for sin [see Laughing Buddha], and verbally inspired the Bible. Most importantly, they denied that religious arguments should be settled by appeals to an infallible text or ecclesial authority. Putting it positively, nineteenth-century liberals accepted Darwinian evolution, biblical criticism, a moral influence view of the cross, an idea of God as the personal and eternal Spirit of love, and a view of Scripture as authoritative only within Christian experience. Nineteenth- and early-twentieth-century liberals expected these views to prevail in Christianity as a whole, but in the twenty-first century they remain contested beliefs.
— Gary Dorrien. The Making of American Liberal Theology: Crisis, Irony, and Postmodernity: 1950-2005 (Kindle Locations 155-157). Kindle Edition.
Unless the moral insight and the spiritual attainment of mankind are proportionately augmented, the unlimited advancement of a purely materialistic culture may eventually become a menace to civilization. A purely materialistic science harbors within itself the potential seed of the destruction of all scientific striving, for this very attitude presages the ultimate collapse of a civilization which has abandoned its sense of moral values and has repudiated its spiritual goal of attainment.
The materialistic scientist and the extreme idealist are destined always to be at loggerheads. This is not true of those scientists and idealists who are in possession of a common standard of high moral values and spiritual test levels. In every age scientists and religionists must recognize that they are on trial before the bar of human need. They must eschew all warfare between themselves while they strive valiantly to justify their continued survival by enhanced devotion to the service of human progress. If the so-called science or religion of any age is false, then must it either purify its activities or pass away before the emergence of a material science or spiritual religion of a truer and more worthy order.
What both developing science and religion need is more searching and fearless self-criticism, a greater awareness of incompleteness in evolutionary status. The teachers of both science and religion are often altogether too self-confident and dogmatic. Science and religion can only be self-critical of their facts. The moment departure is made from the stage of facts, reason abdicates or else rapidly degenerates into a consort of false logic.
~ ~ ~
By the mid-nineteenth century, there were really only three ways in which natural theologians could deal with the growing evidence that the earth was very old, that it was recycling inexorably beneath their feet, and that life on earth had constantly changed over millions of years. They could ignore it, they could accommodate it to the biblical accounts of history by more or less denying the literal truth of Genesis, or they could explain it all away. The later natural theologians largely ignored it. The sacred theorists tried unsuccessfully to reconcile geology with the Bible. And one man above all others tried to explain it away. He was Philip Henry Gosse (1810-1888), a writer on natural history whose books caught the imagination of generations of Victorians and whose life became a tortured tale of religion contesting with science…. (Thomson 2007: 223)
Gosse’s dilemma was that of all natural theologians, especially after the publication in 1844 of an anonymously authored, thrillingly dangerous, and wildly successful book on evolution…. The book’s title, with an allusion to James Hutton that nobody could miss, was Vestiges of Creation. Chambers’ theory was largely derived from Lamarck’s, which, like Erasmus Darwin’s, depended upon organisms being subject to change as a direct result of environmental pressures and exigencies [which today is known to be possible via epigenetics]. Chambers probably set Charles Darwin back fifteen years — much to the benefit of all. In many ways he blazed the trail that Darwin could more cautiously follow with an even more convincing theory in hand. Darwin must have realized, with the example of Chambers in front of him (and approval of the political left and censure from both the religious and scientific right) that he would have to ensure his theory would have a better reception. (Thomson 2007: 224)
Gosse knew that various versions of what we now call evolution had been around for more than a hundred years. By the mid-1850s, most scientists in Britain knew which way the wind was blowing. Darwin had been hard at work in private since 1842, preparing the ground for his idea of natural selection, and knowing how popular a scientist Gosse was, he tried to enlist him to support his theory. Darwin’s self-designated ‘bull dogs’, including Thomas Huxley, were steadily persuading the sceptics — Huxley had been lecturing formally on an evolutionary relationship between men and apes as early as 1858. This growing evolutionary movement offered a new way of explaining the evidence of organic changes, but only at the expense of much accepted religious belief. It threatened to change radically the whole frame of intellectual reference and to produce a new explanation of cause. For a huge number of theologians, clerics, philosophers and ordinary people, evolution was changing the metaphysical balance of power. Among those who felt this most keenly was Gosse. (Thomson 2007: 224)
One’s heart has to ache for Gosse, one of the most sympathetic characters of the evolutionary saga, a man weighed down by the burdens of fundamentalist Christianity and at the same time a brilliant naturalist…. He was the first to introduce to a popular audience the life of the seashore, the fragile world of exquisite beauty and strength that lies just a few inches beneath the surface of the sea and in the rocky pools of the coast. Before Gosse, all this was largely unseen. Gosse single-handedly created marine biology and home aquaria, and became one of the great chroniclers of the intricate worlds revealed by the microscope. (Thomson 2007: 225)
(….) Once Lamarck and Chambers had made it possible (even necessary) to take evolution seriously, and after his meeting with Charles Darwin had shown how powerful was the extent of the challenge to his fundamentalist beliefs, Gosse felt called to respond; as a Plymouth Brother and as a scientist, it was his responsibility, just as it had been Paley’s and before Paley John Ray’s or Thomas Burnet’s. Gosse’s dilemma was to try to find a way to reconcile his science and his faith. He chose to challenge the rapidly growing support for evolutionists from the geological record. (Thomson 2007: 226)
(….) Huxley had a favourite lecture — a ‘Lay Sermon’ — entitled Essay on a Piece of Chalk. He would stand before an eager crowd and take a piece of common chalk from his pocket, asking the audience what it could possibly tell them about the history of the cosmos and of life on earth. The answer is that chalk (in those days, before blackboard chalk was an artificial, hypo-allergenic substance) represents the accumulation on an ancient sea bottom of the skeletons of countless billions of microscopic planktonic organisms that once inhabited vast tropical oceans that extended across the earth, from Europe and the Middle East to Australia and North America. (Thomson 2007: 227)
(….) Philip Gosse knew only too well what a piece of chalk looked like under a microscope and that the earth’s crust consisted of thousands of feet of different rocks, some bearing fossils, others the remains of ancient lava flows, dust storms, water-borne sediments, and even ancient coral reefs just like those he had seen in Jamaica…. How could Gosse explain away this all-too-solid evidence of the ancient history of the earth and its denizens? What did it have to say about the biblical account of creation in six days? (Thomson 2007: 228)
(….) Gosse’s answer cost him dearly. The dilemma figuratively tore him — scientist and fundamentalist Christian — in half. In a classic example of ad hoc reasoning, he explained away all this appearance of change in a book entitled Omphalos, the Greek for ‘navel’, and in that one word is contained the core of Gosse’s argument. It is the old conundrum: did Adam have a navel? If God created Adam as the first man out of nothing, Adam would have no need for a navel, since he had never been connected by an umbilical cord to a mother. Nor indeed had Eve, of whose origin Genesis gives two accounts. Nor indeed (remembering that the Bible tells us that God made man in his own image) would God physiologically have needed a navel. (Thomson 2007: 229)
Gosse simply asserted that at the moment of creation, just as God made Adam with a navel, he also made the earth with all its complex layers, its faults, every one of its fossils, volcanoes in mid-eruption and rivers in full spate carrying a load of sediment that had never been eroded from mountains that had never been uplifted. Similarly, at that instant, every tree that had never grown nevertheless had internal growth rings; every mammal already had partially worn teeth. He created rotting logs on the forest floor, the rain in mid-fall, the light from distant stars in mid-stream, the planets part-way around their orbits … the whole universe up and running at the moment of creation, no further assembly required. (Thomson 2007: 229)
Such an argument, of course, can never be beaten. It says that God has created all the evidence that supports his existence and (shades of Hume) all the evidence that appears to cast doubt on it. Equally, of course, a theory that explains everything explains nothing. Omphalos is untestable and therefore one cannot concur rationally with its argument; you must simply close your eyes and believe. Or smile. (Thomson 2007: 229-230)
Over the years, Gosse’s argument has been bowdlerised to the slightly unworthy proposition that God set out the geological record, with all its evidence of change, in order to test man’s faith. It was, therefore, the ultimate celestial jest and cruel hoax. This was about as far from Gosse’s pious intention as Darwin’s impious theory. As for what Paley would have made of Omphalos, I like to think he would have rejected it, but kindly, for he was a kind man. Victorian England not only rejected it, they laughed at it cruelly. Gosse became overnight a broken man, his reputation as a scientist in tatters. (Thomson 2007: 230)
But nothing is as simple as it ought to be. A community that mocked Omphalos and had no problem in coming to terms with the even more difficult issue of cosmology, still could not come to terms with geology. In fact, whether in Paley’s time or in Darwin’s, or indeed our own, one of the oddities in the history of interplay between science and religion is that cosmology never seems to have become as serious a threat to revealed religion as natural science. When pressed, people often revert to believing two things at once. The evidence that the universe is huge and ancient can be assimilated seemingly without shaking the conviction that the earth itself is 6,000 years old and that all living creatures were created over a two-day period. For example: ‘The school books of the present day, while they teach the child that the earth moves, yet assure him that it is a little less than six thousand years old, and that it was made in six days. On the other hand, geologists of all religious creeds are agreed that the earth has existed for an immense series of years.’ These last words were written in 1860 and appear in a work that arguably presented a greater threat to the Established Church than the evolutionism of Erasmus Darwin, Lamarck, Robert Chambers or even Charles Darwin. Essays and Reviews was an example of the enemy within, a compilation of extremely liberal theological views by noted churchmen and academics. Among their targets was the unnecessary and outmoded belief in miracles and the biblical account of the days of creation. The battle is still being fought. (Thomson 2007: 230-231)
The answer, therefore, which the seventeenth century gave to the ancient question … “What is the world made of?” was that the world is a succession of instantaneous configurations of matter — or material, if you wish to include stuff more subtle than ordinary matter…. Thus the configurations determined their own changes, so that the circle of scientific thought was completely closed. This is the famous mechanistic theory of nature, which has reigned supreme ever since the seventeenth century. It is the orthodox creed of physical science…. There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what I will call the ‘Fallacy of Misplaced Concreteness.’ This fallacy is the occasion of great confusion in philosophy. (Whitehead 1967: 50-51)
(….) This conception of the universe is surely framed in terms of high abstractions, and the paradox only arises because we have mistaken our abstractions for concrete realities…. The seventeenth century had finally produced a scheme of scientific thought framed by mathematics, for the use of mathematics. The great characteristic of the mathematical mind is its capacity for dealing with abstractions; and for eliciting from them clear-cut demonstrative trains of reasoning, entirely satisfactory so long as it is those abstractions which you want to think about. The enormous success of the scientific abstractions, yielding on the one hand matter with its simple location in space and time, on the other hand mind, perceiving, suffering, reasoning, but not interfering, has foisted onto philosophy the task of accepting them as the most concrete rendering of fact. (Whitehead 1967: 54-55)
Thereby, modern philosophy has been ruined. It has oscillated in a complex manner between three extremes. These are the dualists, who accept matter and mind as on an equal basis, and the two varieties of monists, those who put mind inside matter, and those who put matter inside mind. But this juggling with abstractions can never overcome the inherent confusion introduced by the ascription of misplaced concreteness to the scientific scheme of the seventeenth century. (Whitehead 1967: 55)
— Alfred North Whitehead in Science and the Modern World
In the UK, for example, 97 percent of money is created by commercial banks and its character takes the form of debt-based, interest-bearing loans. As for its intended use? In the 10 years running up to the 2008 financial crash, over 75 percent of those loans were granted for buying stocks or houses—so fuelling the house-price bubble—while a mere 13 percent went to small businesses engaged in productive enterprise. When such debt increases, a growing share of a nation’s income is siphoned off as payments to those with interest-earning investments and as profit for the banking sector, leaving less income available for spending on products and services made by people working in the productive economy. ‘Just as landlords were the archetypal rentiers of their agricultural societies,’ writes economist Michael Hudson, ‘so investors, financiers and bankers are in the largest rentier sector of today’s financialized economies.’ (Raworth 2017, 155)
Once the current design of money is spelled out this way—its creation, its character and its use—it becomes clear that there are many options for redesigning it, involving the state and the commons along with the market. What’s more, many different kinds of money can coexist, with the potential to turn a monetary monoculture into a financial ecosystem. (Raworth 2017, 155)
Imagine, for starters, if central banks were to take back the power to create money and then issue it to commercial banks, while simultaneously requiring them to hold 100 percent reserves for the loans that they make—meaning that every loan would be backed by someone else’s savings, or the bank’s own capital. It would certainly separate the role of providing money from the role of providing credit, so helping to prevent the build-up of debt-fuelled credit bubbles that burst with such deep social costs. That idea may sound outlandish, but it is neither a new nor a fringe suggestion. First proposed during the 1930s Great Depression by influential economists of the day such as Irving Fisher and Milton Friedman, it gained renewed support after the 2008 crash, gaining the backing of mainstream financial experts at the International Monetary Fund and Martin Wolf of the UK’s Financial Times. (Raworth 2017, 155-156)
— Kate Raworth in Doughnut Economics
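The arithmetic behind Raworth's contrast between fractional and full reserves can be sketched in a few lines. This is the deliberately simplified textbook deposit multiplier, assuming banks always re-lend everything above the required reserve and every loan returns to the system as a deposit; real regimes involve capital requirements and interbank flows:

```python
def money_created(initial_deposit: float, reserve_ratio: float) -> float:
    """Total deposits supported by an initial deposit when banks
    re-lend everything except the required reserve fraction.
    Geometric series: D * (1 + (1-r) + (1-r)**2 + ...) = D / r."""
    if not 0 < reserve_ratio <= 1:
        raise ValueError("reserve ratio must be in (0, 1]")
    return initial_deposit / reserve_ratio

# Fractional reserves: a 10% requirement lets 100 of base money
# support up to 1,000 of deposits -- lending creates most of the money.
print(money_created(100, 0.10))   # 1000.0

# Full (100%) reserves: every loan is backed one-for-one by existing
# money, so the banking system cannot multiply the money stock at all.
print(money_created(100, 1.00))   # 100.0
```

This is what "separating the role of providing money from the role of providing credit" means mechanically: with `reserve_ratio = 1` the multiplier collapses to one, and only the central bank can expand the money stock.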
Suggestions are anchored in neoclassical theory
Despite growing diversity in research, the current of economic theory often referred to as neoclassical continues to dominate teaching and politics. It developed in the 19th century as an attempt to apply the methods of the natural sciences, especially physics, to social phenomena. In the search for an “exact” social science, social relationships are abstracted to such an extent that calculations become possible. Neoclassical economics primarily asks one question: How do rational actors optimize under given constraints? There is nothing inherently wrong with this approach. In view of the ecological crisis, however, society faces entirely different questions: How can planetary collapse be prevented? What might an economic system look like that is social, fair and ecological?
The dematerialization of the value concept boded ill for the tangible world of stable time and concrete motion (Kern 1983). Again, the writer Jorge Luis Borges (1962, p. 159) captured the mood of the metaphor: (Mirowski 1989, 134. Kindle Location 2875-2877)
I reflected there is nothing less material than money, since any coin whatsoever (let us say a coin worth twenty centavos) is, strictly speaking, a repertory of possible futures. Money is abstract, I repeated; money is the future tense. It can be an evening in the suburbs, or music by Brahms; it can be maps, or chess, or coffee; it can be the words of Epictetus teaching us to despise gold; it is a Proteus more versatile than the one on the isle of Pharos. It is unforeseeable time, Bergsonian time . . . (Mirowski 1989, 134-135. Kindle Location 2877-2881)
It was not solely in art that the reconceptualization of value gripped the imagination. Because the energy concept depended upon the value metaphor in part for its credibility, physics was prodded to reinterpret the meaning of its conservation principles. In an earlier, simpler era Clerk Maxwell could say that conservation principles gave the physical molecules “the stamp of the manufactured article” (Barrow and Tipler 1986, p. 88), but as manufacture gave way to finance, seeing conservation principles in nature gave way to seeing them more as contingencies, imposed by our accountants in order to keep confusion at bay. Nowhere is this more evident than in the popular writings of the physicist Arthur Eddington, the Stephen Jay Gould of early twentieth century physics: (Mirowski 1989, 135. Kindle Location 2881-2887)
The famous laws of conservation of energy . . . are mathematical identities. Violation of them is unthinkable. Perhaps I can best indicate their nature by an analogy. An aged college Bursar once dwelt secluded in his rooms devoting himself entirely to accounts. He realised the intellectual and other activities of the college only as they presented themselves in the bills. He vaguely conjectured an objective reality at the back of it all — some sort of parallel to the real college — though he could only picture it in terms of the pounds, shillings and pence which made up what he would call “the commonsense college of everyday experience.” The method of account-keeping had become inveterate habit handed down from generations of hermit-like bursars; he accepted the form of the accounts as being part of the nature of things. But he was of a scientific turn and he wanted to learn more about the college. One day in looking over the books he discovered a remarkable law. For every item on the credit side an equal item appeared somewhere else on the debit side. “Ha!” said the Bursar, “I have discovered one of the great laws controlling the college. It is a perfect and exact law of the real world. Credit must be called plus and debit minus; and so we have the law of conservation of £. s. d. This is the true way to find out things, and there is no limit to what may ultimately be discovered by this scientific method . . .” (Mirowski 1989, 135. Kindle Location 2887-2898)
I have no quarrel with the Bursar for believing that scientific investigation of the accounts is a road to exact (though necessarily partial) knowledge of the reality behind them . . . But I would point out to him that a discovery of the overlapping of the different aspects in which the realities of the college present themselves in the world of accounts, is not a discovery of the laws controlling the college; that he has not even begun to find the controlling laws. The college may totter but the Bursar’s accounts still balance . . . (Mirowski 1989, 135-136. Kindle Location 2898-2902)
Perhaps a better way of expressing this selective influence of the mind on the laws of Nature is to say that values are created by the mind [Eddington 1930, pp. 237–8, 243]. (Mirowski 1989, 136. Kindle Location 2903-2904)
Once physicists had become inured to entertaining the idea that value is not natural, then it was a foregone conclusion that the stable Laplacean dreamworld of a fixed and conserved energy and a single super-variational principle was doomed. Again, Eddington stated it better than I could hope to: (Mirowski 1989, 136. Kindle Location 2904-2907)
[Classical determinism] was the gold standard in the vaults; [statistical laws were] the paper currency actually used. But everyone still adhered to the traditional view that paper currency needs to be backed by gold. As physics progressed the occasions when the gold was actually produced became rarer until they ceased altogether. Then it occurred to some of us to question whether there still was a hoard of gold in the vaults or whether its existence was a mythical tradition. The dramatic ending of the story would be that the vaults were opened and found to be empty. The actual ending is not quite so simple. It turns out that the key has been lost, and no one can say for certain whether there is any gold in the vaults or not. But I think it is clear that, with either termination, present-day physics is off the gold standard [Eddington 1935, p. 81]. (Mirowski 1989, 136. Kindle Location 2907-2913)
The denaturalization of value presaged the dissolution of the energy concept into a mere set of accounts, which, like national currencies, were not convertible at any naturally fixed rates of exchange. Quantum mechanical energy was not exactly the same thing as relativistic energy or thermodynamic energy. Yet this did not mean that physics had regressed to a state of fragmented autarkies. Trade was still conducted between nations; mathematical structure could bridge subdisciplines of physics. It was just that everyone was coming to acknowledge that money was provisional, and that symmetries expressed by conservation principles were contingent upon the purposes of the theory in which they were embedded. (Mirowski 1989, 136. Kindle Location 2913-2918)
Increasingly, this contingent status was expressed by recourse to economic metaphors. The variability of metrics of space-time in general relativity was compared to the habit of describing inflation in such torturous language as: “The pound is now only worth seven and sixpence” (Eddington 1930, p. 26). The fundamentally stochastic character of the energy quantum was said to allow nuclear particles to “borrow” sufficient energy so that they could “tunnel” their way out of the nucleus. And, inevitably, if we live with a banking system wherein money is created by means of loans granted on the basis of near-zero fractional reserves, then this process of borrowing energy could cascade, building upon itself until the entire universe is conceptualized as a “free lunch.” The nineteenth century would have recoiled in horror from this idea, they who believed that banks merely ratified the underlying real transactions with their loans. (Mirowski 1989, 136-137. Kindle Location 2918-2925)
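The energy "borrowing" in this passage is not merely rhetorical. In the standard heuristic reading, it is licensed by the time–energy uncertainty relation, which fixes the terms of the loan:

```latex
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
\qquad\Longrightarrow\qquad
\Delta t \;\lesssim\; \frac{\hbar}{2\,\Delta E}
```

The larger the energy fluctuation a particle "borrows", the shorter the time before it must be "repaid": a credit system whose term shrinks in proportion to the size of the loan.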
I suppose this book started when I first heard the story of Sergey Aleynikov, the Russian computer programmer who had worked for Goldman Sachs and then, in the summer of 2009, after he’d quit his job, was arrested by the FBI and charged by the United States government with stealing Goldman Sachs’s computer code. I’d thought it strange, after the financial crisis, in which Goldman had played such an important role, that the only Goldman Sachs employee who had been charged with any sort of crime was the employee who had taken something from Goldman Sachs. I’d thought it even stranger that government prosecutors had argued that the Russian shouldn’t be freed on bail because the Goldman Sachs computer code, in the wrong hands, could be used to “manipulate markets in unfair ways.” (Goldman’s were the right hands? If Goldman Sachs was able to manipulate markets, could other banks do it, too?) But maybe the strangest aspect of the case was how difficult it appeared to be—for the few who attempted—to explain what the Russian had done. I don’t mean only what he had done wrong: I mean what he had done. His job. He was usually described as a “high-frequency trading programmer,” but that wasn’t an explanation. That was a term of art that, in the summer of 2009, most people, even on Wall Street, had never before heard. What was high-frequency trading? Why was the code that enabled Goldman Sachs to do it so important that, when it was discovered to have been copied by some employee, Goldman Sachs needed to call the FBI? If this code was at once so incredibly valuable and so dangerous to financial markets, how did a Russian who had worked for Goldman Sachs for a mere two years get his hands on it? (Lewis 2014, 40-53)
[I]n a room looking out at the World Trade Center site, at One Liberty Plaza … gathered a small army of shockingly well-informed people from every corner of Wall Street—big banks, the major stock exchanges, and high-frequency trading firms. Many of them had left high-paying jobs to declare war on Wall Street, which meant, among other things, attacking the very problem that the Russian computer programmer had been hired by Goldman Sachs to create. (Lewis 2014, 53-56)
(….) One moment all is well; the next, the value of the entire U.S. stock market has fallen 22.61 percent, and no one knows why. During the crash, some Wall Street brokers, to avoid the orders their customers wanted to place to sell stocks, simply declined to pick up their phones. It wasn’t the first time that Wall Street people had discredited themselves, but this time the authorities responded by changing the rules—making it easier for computers to do the jobs done by those imperfect people. The 1987 stock market crash set in motion a process—weak at first, stronger over the years—that has ended with computers entirely replacing the people. (Lewis 2014, 62-67)
Over the past decade, the financial markets have changed too rapidly for our mental picture of them to remain true to life. (Lewis 2014, 67)
(….) The U.S. stock market now trades inside black boxes, in heavily guarded buildings in New Jersey and Chicago. What goes on inside those black boxes is hard to say—the ticker tape that runs across the bottom of cable TV screens captures only the tiniest fraction of what occurs in the stock markets. The public reports of what happens inside the black boxes are fuzzy and unreliable—even an expert cannot say what exactly happens inside them, or when it happens, or why. The average investor has no hope of knowing, of course, even the little he needs to know. He logs onto his TD Ameritrade or E*Trade or Schwab account, enters a ticker symbol of some stock, and clicks an icon that says “Buy”: Then what? He may think he knows what happens after he presses the key on his computer keyboard, but, trust me, he does not. If he did, he’d think twice before he pressed it. (Lewis 2014, 72-78)
The world clings to its old mental picture of the stock market because it’s comforting; because it’s so hard to draw a picture of what has replaced it; and because the few people able to draw it for you have no [economic] interest in doing so. (Lewis 2014, 78-80)
Emily Northrop (2000) questions whether the fundamental cause of scarcity — unlimited wants — is really innate, and argues that it may be merely constructed [see Diamonds are Bullshit]. She notes that some people manage to resist consumerism and choose different lifestyles embodying simplicity, balance or connection (to the earth and to others). The fact that some are able to do this suggests unlimited wants aren’t innate. In arguing that our wants are constructed, she emphasizes the power of social norms and the power of advertising: some of society’s cleverest people and billions of dollars a year are spent creating and maintaining our wants. (Hill and Myatt 2010, 16)
Northrop also points out that the notion of unlimited wants puts all wants on an equal footing: one person’s want for a subsistence diet is no more important than a millionaire’s want for precious jewellery. This equality of wants reflects the market value system that no goods are intrinsically more worthy than others — just as no preferences are more worthy than others. This is clearly a value judgement and one that many people reject. Yet economics, which unquestioningly adopts this approach, claims to be an objective social science that avoids making value judgements! (Hill and Myatt 2010, 16)
It is noteworthy that Keynes disagreed that ‘all wants have equal merit’. Rather than identify the economic problem with scarcity, he identified it with the satisfaction of what he called absolute needs: food, clothing, shelter and healthcare (Keynes 1963 : 365). This definition of the economic problem puts equity and the distribution of income front and centre. It contrasts with the textbook approach of treating equity as a political issue outside the scope of economic analysis. (Hill and Myatt 2010, 16)
Another economist who rejects the ‘innate unlimited wants’ idea is Stephen Marglin (2008). Unlike Northrop, he doesn’t blame advertising or social norms. Rather, he sees the fundamental cause to be the destruction of community ties, which creates an existential vacuum: all that’s left is stuff. Goods and services substitute for meaningful relationships with family, friends and community. His conclusion: as long as goods are a primary means of solving existential problems, we will always want more. But what or who is responsible for undermining community ties and bonds? Marglin argues that the assumptions of textbook economics, and the resulting policy recommendations of economists, undermine community…. (Hill and Myatt 2010, 16-17)
According to Marglin, the textbook focus on individuals makes the community invisible to economists’ eyes. But it is our friendships and deep connections with others which give our lives meaning. So community ties, built on mutual trust and common purpose, have a value — a value that economists ignore when recommending policy.
Furthermore, Marglin argues that rational choice theory — emphasized in the mainstream textbooks — reduces ethical judgements and values to mere preferences. Are you working for the benefit of your community? That’s your preference. Are you cooking the books to get rich quick and devil take the hindmost? That’s your preference. Being selfish is no worse than being altruistic, they are just different preferences. (Hill and Myatt 2010, 16)
Indeed, according to mainstream textbook economics it is smart to be selfish. It not only maximizes your own material well-being, but through the invisible hand of the market it also produces the greatest good for the greatest possible number. This view influences the cultural norms of society and indirectly erodes community. This influence of economics on attitudes isn’t mere speculation. Marwell and Ames (1981) document that exposure to economics generates less cooperative, less other-regarding, behaviour. Frank et al. (1993) show that uncooperative behaviour increases the more individuals are exposed to economics. (Hill and Myatt 2010, 17-18)
(….) Marglin argues that the textbook focus on individuals is problematic. John Kenneth Galbraith went farther. He thought the textbook focus on individuals was a source of grave error and bias because in the real world the individual is not the agent that matters most. The corporation is. By having the wrong focus, economics is able to deny the importance of power and political interests. (Hill and Myatt 2010, 18)
Further, textbooks assume that the state is subordinate to individuals through the ballot box. At the very least, government is assumed to be neutral, intervening to correct market failure as best it can, and to redistribute income so as to make market outcomes more equitable. (Hill and Myatt 2010, 18-19)
But this idealized world is so far removed from the real world that it is little more than a myth, or ‘perhaps even a fraud’ (John K. Galbraith 2004). The power of the largest corporations rivals that of the state; indeed, they often hijack the state’s power for their own purposes. In reality, we see the management of the consumer by corporations; and we see the subordination of the state to corporate interest. (Hill and Myatt 2010, 19)
(….) Galbraith argues that the biggest corporations have power over markets, power in the community, power over the state, and power over belief. As such, the corporation is a political instrument, different in form and degree but not in kind from the state itself. Textbook economics, in denying that power, is part of the problem. It stops us from seeing how we are governed. As such it becomes an ‘ally of those whose exercise of power depends on an acquiescent public’ (John K. Galbraith 1973a: 11). (Hill and Myatt 2010, 19-20)
I came to think of humans as a kind of Turing machine. I searched for stories which reinforced the parable. There were many of them. However, Uexküll’s tick story was the most impressive (Kindle Locations 884-887). (….) Uexküll’s tick and the Turing machine parable all fitted together in one idea (Kindle Locations 900-907). (….) We find an astonishing coincidence with my Turing machine parable of animal and human behaviors…. This is the most primitive case of the definition of the situation.
According to this view, individuals within an economy follow simple rules of thumb to determine their course of action. However, they adapt to their environment by changing the rules they use when these prove to be less successful. They are not irrational in that they do not act against their own interests, but they have neither the information nor the calculating capacity to ‘optimise’. Indeed, they are assumed to have limited and largely local information, and they modify their behaviour to improve their situation. Individuals in complexity models are neither assumed to understand how the economy works nor to consciously look for the ‘best choice’. The main preoccupation is not with whether aggregate outcomes are efficient but rather with how all of these different individuals interacting with each other come to coordinate their behaviour. Giving individuals in a model simple rules to follow and allowing them to change them as they interact with others means thinking of them much more like particles or social insects. Mainstream economists often object to this approach, arguing that humans have intentions and aims which cannot be found in either inanimate particles or lower forms of life.
— Kirman et al. (2018, 95) in Rethinking Economics: An Introduction to Pluralist Economics, Routledge.
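The kind of agent Kirman and co-authors describe, boundedly rational, locally informed, and rule-switching, can be sketched in a toy simulation. Everything here (the two rules, their payoff probabilities, and the imitation scheme) is invented for illustration; it is not a model from the literature:

```python
import random

random.seed(1)

# Two hypothetical rules of thumb an agent might follow.
RULES = ["imitate_neighbour", "stick_with_last_choice"]

class Agent:
    def __init__(self):
        self.rule = random.choice(RULES)
        self.payoff = 0.0

    def maybe_switch(self, other):
        # Boundedly rational adaptation: no optimisation, no global
        # knowledge -- just copy the rule of an agent doing better.
        if other.payoff > self.payoff:
            self.rule = other.rule

agents = [Agent() for _ in range(50)]
for step in range(100):
    for a in agents:
        # Stand-in payoffs: the rules succeed with different
        # probabilities, unknown to the agents themselves.
        p = 0.6 if a.rule == "imitate_neighbour" else 0.4
        a.payoff += 1.0 if random.random() < p else 0.0
    for a in agents:
        a.maybe_switch(random.choice(agents))

# Local imitation alone drifts the population toward the
# better-performing rule, with no one "optimising" anything.
share = sum(a.rule == "imitate_neighbour" for a in agents) / len(agents)
print(f"{share:.0%}")
```

The point of the exercise matches the quoted passage: coordination emerges from interaction among rule-followers, not from any individual's understanding of how the whole economy works.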
Even such purely academic theories as interpretations of human nature have profound practical consequences if disseminated widely enough. If we impress upon people that science has discovered that human beings are motivated only by the desire for material advantage, they will tend to live up to this expectation, and we shall have undermined their readiness to be moved by impersonal ideals. By propagating the opposite view we might succeed in producing a larger number of idealists, but also help cynical exploiters to find easy victims. This specific issue, incidentally, is of immense actual importance, because it seems that the moral disorientation and fanatic nihilism which afflict modern youth have been stimulated by the popular brands of sociology and psychology [and economics] with their bias for overlooking the more inspiring achievements and focusing on the dismal average or even the subnormal. When, fraudulently basking in the glory of the exact sciences, the psychologists [, theoretical economists, etc.,] refuse to study anything but the most mechanical forms of behavior—often so mechanical that even rats have no chance to show their higher faculties—and then present their mostly trivial findings as the true picture of the human mind, they prompt people to regard themselves and others as automata, devoid of responsibility or worth, which can hardly remain without effect upon the tenor of social life. (….) Abstrusiveness need not impair a doctrine’s aptness for inducing or fortifying certain attitudes, as it may in fact help to inspire awe and obedience by ‘blinding people with science’.
— Andreski (1973, 33-35) in Social Sciences as Sorcery. Emphasis added.
Complexity theory comes with its own problems of over-reach and tractability. Context counts; any theory taken too far stretches credulity. The art is in spotting the spoof. It is true irony to watch the pot calling the kettle black: mainstream economists question complexity theory’s use of greedy reductionism — often adopted for the sole sake of mathematical tractability — when applied to human beings. Yet the fact that mainstream economists rely on equally unrealistic assumptions (i.e., homo economicus) that overly simplify human behavior and capabilities does not invalidate their critique. Just because the pot calls the kettle black doesn’t mean the kettle and the pot are not black. Building models of human behavior solely on rational expectations and/or “social insects” qua fitness-climbing ticks means we are either Gods or Idiots. Neither Gödel nor Turing reduced creatively thinking human beings to mere Turing machines.
~ ~ ~
The best dialogues take place when each interlocutor speaks from her best self, without pretending to be something she is not. In their recent book Phishing for Phools: The Economics of Manipulation and Deception, Nobel Prize–winning economists George Akerlof and Robert Shiller expand the standard definition of “phishing.” In their usage, it goes beyond committing fraud on the Internet to indicate something older and more general: “getting people to do things that are in the interest of the phisherman” rather than their own. In much the same spirit, we would like to expand the meaning of another recent computer term, “spoofing,” which normally means impersonating someone else’s email name and address to deceive the recipient—a friend or family member of the person whose name is stolen—into doing something no one would do at the behest of a stranger. Spoofing in our usage also means something more general: pretending to represent one discipline or school when actually acting according to the norms of another. Like phishing, spoofing is meant to deceive, and so it is always useful to spot the spoof.
Students who take an English course under the impression they will be taught literature, and wind up being given lessons in politics that a political scientist would scoff at or in sociology that would mystify a sociologist, are being spoofed. Other forms of the humanities—or dehumanities, as we prefer to call them—spoof various scientific disciplines, from computer science to evolutionary biology and neurology. The longer the spoof deceives, the more disillusioned the student will be with what she takes to be the “humanities.” (Morson, Gary Saul. Cents and Sensibility (pp. 1-2). Princeton University Press. Kindle Edition.)
By the same token, when economists pretend to solve problems in ethics, culture, and social values in purely economic terms, they are spoofing other disciplines, although in this case the people most readily deceived are the economists themselves. We will examine various ways in which this happens and how, understandably enough, it earns economists a bad name among those who spot the spoof.
But many do not spot it. Gary Becker won a Nobel Prize largely for extending economics to the furthest reaches of human behavior, and the best-selling Freakonomics series popularizes this approach. What seems to many an economist to be a sincere effort to reach out to other disciplines strikes many practitioners of those fields as nothing short of imperialism, since economists expropriate topics rather than treat existing literatures and methods with the respect they deserve. Too often the economic approach to interdisciplinary work is that other fields have the questions and economics has the answers. (Morson, Gary Saul. Cents and Sensibility (pp. 2-3). Princeton University Press. Kindle Edition.)
As with the dehumanities, these efforts are not valueless. There is, after all, an economic aspect to many activities, including those we don’t usually think of in economic terms. People make choices about many things, and the rational choice model presumed by economists can help us understand how they do so, at least when they behave rationally—and even the worst curmudgeon acknowledges that people are sometimes rational! We have never seen anyone deliberately get into a longer line at a bank. (Morson, Gary Saul. Cents and Sensibility (p. 3). Princeton University Press. Kindle Edition.)
Even regarding ethics, economic models can help in one way, by indicating what is the most efficient allocation of resources. To be sure, one can question the usual economic definition of efficiency—in terms of maximizing the “economic surplus”—and one can question the establishment of goals in purely economic terms, but regardless of which goals one chooses, it pays to choose an efficient way, one that expends the least resources, to reach them. Wasting resources is never a good thing to do, because the resources wasted could have been put to some ethical purpose. The problem is that efficiency does not exhaust ethical questions, and the economic aspect of many problems is not the most important one. By pretending to solve ethical questions, economists wind up spoofing philosophers, theologians, and other ethicists. Economic rationality is indeed part of human nature, but by no means all of it.
For the rest of human nature, we need the humanities (and the humanistic social sciences). In our view, numerous aspects of life are best understood in terms of a dialogue between economics and the humanities—not the spoofs, but real economics and real humanities. (Morson, Gary Saul. Cents and Sensibility (pp. 3-4). Princeton University Press. Kindle Edition.)
There are many examples in the modern world showing how this doctrine of the free market—the pursuit of self-interest—has worked out to the disadvantage of society.
— CAMBRIDGE PROFESSOR JOAN ROBINSON, 1977, cited in Buddhist Economics.
The approach used here concentrates on a factual basis that differentiates it from more traditional practical ethics and economic policy analysis, such as the “economic” concentration on the primacy of income and wealth (rather than on the characteristics of human lives and substantive freedoms).
— NOBEL LAUREATE AMARTYA SEN, DEVELOPMENT AS FREEDOM, cited in Buddhist Economics
In Buddhist economics, people are interdependent with one another and with Nature, so each person’s well-being is measured by how well everyone and the environment are functioning with the goal of minimizing suffering for people and the planet. Everyone is assumed to have the right to a comfortable life with access to basic nutrition, health care, education, and the assurance of safety and human rights. A country’s well-being is measured by the aggregation of the well-being of all residents and the health of the ecosystem.
— Brown (2017, 2), in Buddhist Economics
As Toyota President Akio Toyoda recently commented, Toyota’s renewed commitment to society extends from putting customers first to “putting people first” and aiming to serve society as a whole. This mission statement stems from Toyota’s earliest values and explains why the company is closely aligned to the Sustainable Development Goals as inspiration for its long-term global sustainability strategy. At a European level, the company is following this lead by contributing to society through its social and employment practices, such as its focus on diversity and inclusion.
(….) “As we transform from an automotive to mobility company, and to produce mass happiness, we need to make more than cars, vans and trucks. We need to align with the Sustainable Development Goals, Green Deal and a better future”.
—Automotive World, Toyota’s mission to produce “happiness for all” with its business transformation programme, December 7, 2020
We live in the age of kikikan (危機感). Civilizational crisis is everywhere to be seen for those who are awake. The way forward is gapponshugi, a vision embodying a new motive for economic striving.
~ ~ ~
In the most dramatic moments of Italy’s debt crisis, the newly installed “technical” government, led by Mario Monti, appealed to trade unions to accept salary cuts in the name of national solidarity. Monti urged them to participate in a collective effort to increase the competitiveness of the Italian economy (or at least to show that efforts were being made in that direction) in order to calm international investors and “the market” and, hopefully, reduce the spread between the interest rates of Italian and German bonds (at the time around 500 points, meaning that the Italian government had to refinance its ten-year debt at the excruciating rate of 7.3 percent). Commenting on this appeal in an editorial in the left-leaning journal Il Manifesto, the journalist Loris Campetti wondered how it could be at all possible to demand solidarity from a Fiat worker when the CEO of his company earned about 500 times what the worker did. And such figures are not unique to Italy. In the United States, the average CEO earned about 30 times what the average worker earned in the mid-1970s (1973 being the year in which income inequality in the United States was at its historically lowest point). Today the multiplier lies around 400. Similarly, the income of the top 1 percent (or even more striking, the top 0.1 percent) of the U.S. population has skyrocketed in relation to that of the remaining 99 percent, bringing income inequality back to levels not seen since the Roaring Twenties. (Arvidsson et al. 2013, 1-2)
The problem is not, or at least not only, that such income discrepancies exist, but that there is no way to legitimate them. At present there is no way to rationally explain why a corporate CEO (or a top-level investment banker or any other member of the 1 percent) should be worth 400 times as much as the rest of us. And consequently there is no way to legitimately appeal to solidarity or to rationally argue that a factory worker (or any of us in the 99 percent) should take a pay cut in the name of a system that permits such discrepancies in wealth. What we have is a value crisis. There are huge differentials in the monetary rewards that individuals receive, but there is no way in which those differentials can be explained and legitimated in terms of any common understanding of how such monetary rewards should be determined. There is no common understanding of value to back up the prices that markets assign, to put it in simple terms. (We will discuss the thorny relation between the concepts of “value” and “price” along with the role of markets farther on in this chapter.) (Arvidsson et al. 2013, 2)
This value crisis concerns more than the distribution of income and private wealth. It is also difficult to rationalize how asset prices are set. In the wake of the 2008 financial crisis a steady stream of books, articles, and documentaries has highlighted the irrational practices, sometimes bordering on the fraudulent, by means of which mortgage-backed securities were revalued from junk to investment grade, credit default swaps were emitted without adequate underlying assets, and the big actors of Wall Street colluded with each other and with political actors to protect against transparency and rational scrutiny and in the end to have the taxpayers foot the bill. Neither was this irrationality just a temporary expression of a period of exceptional “irrational exuberance”; rather, irrationality has become a systemic feature of the financial system. As Amar Bhidé argues, the reliance on mathematical formulas embodied in computerized calculating devices at all levels of the financial system has meant that the setting of values on financial markets has been rendered ever more disconnected from judgments that can be rationally reconstructed and argued through. Instead, decisions that range from whether to grant a mortgage to an individual, to how to make split-second investment decisions on stock and currency markets, to how to grade or rate the performance of a company or even a nation have been automated, relegated to the discretion of computers and algorithms. While there is nothing wrong with computers and algorithms per se, the problem is that the complexity of these devices has rendered the underlying methods of calculation and their assumptions incomprehensible and opaque even to the people who use them on a daily basis (and imagine the rest of us!). To cite Richard Sennett’s interviews with the back-office Wall Street technicians who actually develop such algorithms: (Arvidsson et al. 2013, 2-3)
“I asked him to outline the algo [algorithm] for me,” one junior accountant remarked about her derivatives-trading Porsche driving superior, “and he couldn’t, he just took it on faith.” “Most kids have computer skills in their genes … but just up to a point … when you try to show them how to generate the numbers they see on screen, they get impatient, they just want the numbers and leave where these came from to the main-frame.” (Arvidsson et. al. 2013, 3)
The problem here is not ignorance alone, but that the makeup of the algorithms and automated trading devices that execute the majority of trades on financial markets today (about 70 percent are executed by “bots,” or automatic trading agents) is considered a purely technical question, beyond rational discussion, judgment, and scrutiny. Actors tend to take the numbers on faith without knowing, or perhaps even bothering about, where they came from. Consequently these devices can often contain flawed assumptions that, never scrutinized, remain accepted as almost natural “facts.” During the dot-com boom, for example, Internet analysts valued dot-coms by looking at a multiplier of visitors to the dot-com’s Web site without considering how these numbers translated into monetary revenues; during the pre-2008 boom investors assigned the same default risks to subprime mortgages, or mortgages taken out by people who were highly likely to default, as they did to ordinary mortgages. And there are few ways in which the nature of such assumptions, flawed or not, can be discussed, scrutinized, or even questioned. Worse, there are few ways of even knowing what those assumptions are. The assumptions that stand behind the important practice of brand valuation are generally secret. Consequently, there is no way of explaining how or discussing why valuations of the same brand by different brand-valuation companies can differ as much as 450 percent. A similar argument can be applied to Fitch, Moody’s, Standard & Poor’s, and other ratings agencies that are acquiring political importance in determining the economic prospects of nations like Italy and France. (Arvidsson et al. 2013, 3)
This irrationality goes even deeper than financial markets. Investments in corporate social responsibility are increasing massively, both in the West and in Asia, as companies claim to want to go beyond profits to make a genuine contribution to society. But even though there is a growing body of academic literature indicating that a good reputation for social responsibility is beneficial for corporate performance in a wide variety of ways—from financial outcomes to ease in generating customer loyalty and attracting talented employees—there is no way of determining exactly how beneficial these investments are and, consequently, how many resources should be allocated to them. Indeed, perhaps it would be better to simply tax corporations and let the state or some other actor distribute the resources to some “responsible” causes. The fact that we have no way of knowing leads to a number of irrationalities. Sometimes companies invest more money in communicating their efforts at “being good” than they do in actually promoting socially responsible causes. (In 2001, for example, the tobacco company Philip Morris spent $75 million on what it defined as “good deeds” and then spent $100 million telling the public about those good deeds.) At other times such efforts can be downright contradictory, for example when tobacco companies sponsor antismoking campaigns aimed at young people in countries like Malaysia while at the same time targeting most of their ad spending to the very same segment. Other companies make genuine efforts to behave responsibly, but those efforts reflect poorly on their reputation. Apple, for example, has done close to nothing in promoting corporate responsibility, and has a consistently poor record when it comes to labor conditions among its Chinese subcontractors (like Foxconn). 
Yet the company benefits from a powerful brand that is to no small degree premised on the fact that consumers perceive it to be somehow more benign than Microsoft, which actually does devote considerable resources to good causes (or at least the Bill and Melinda Gates Foundation does so). (Arvidsson et. al. 2013, 3-4)
Similar irrationalities exist throughout the contemporary economy, ranging from how to measure productivity and determine rewards for knowledge workers to how to arrive at a realistic estimate of value for a number of “intangible” assets, from creativity and capacity for innovation to brand. (We will come back to these questions below as well as in the chapters that follow.) Throughout the contemporary economy, from the heights of finance down to the concrete realities of everyday work, particularly in knowledge work, great insecurities arise with regard to what things are actually worth and the extent to which the prices assigned to them actually reflect their value. (Indeed, in academic managerial thought, the very concept of “value” is presently without any clear definition; it means widely different things in different contexts.) (Arvidsson et. al. 2013, 4)
But this is not merely an accounting problem. The very question of how you determine worth, and consequently what value is, has been rendered problematic by the proliferation of a number of value criteria (or “orders of worth,” to use sociologist David Stark’s term) that are poorly reflected in established economic models. A growing number of people value the ethical impact of consumer goods. But there are no clear ways of determining the relative value of different forms of “ethical impact,” nor even a clear definition of what “ethical impact” means. Therefore there is no way of determining whether it is actually more socially useful or desirable for a company to invest in these pursuits than to concentrate on getting basic goods to consumers as cheaply and conveniently as possible. Consequently, ethical consumerism, while a growing reality, tends to be more efficient at addressing the existential concerns of wealthy consumers than at systematically addressing issues like poverty or empowerment. Similarly, more and more people understand the necessity for more sustainable forms of development. And while the definition of “sustainability” is clearer than that of “ethics,” there are no coherent ways of making concerns for sustainability count in practices of asset valuation (although some efforts have been made in that direction, which we will discuss) or of rationally determining the trade-off between efforts toward sustainability and standard economic pursuits. Thus the new values that are acquiring a stronger presence in our society—popular demand for a more sustainable economy and a more just and equal global society—have only very weak and unreliable ways of influencing the actual conduct of corporations and other important economic actors, and can affect economic decisions in only a tenuous way. 
More generally, we have no way of arriving at what orders of worth “count” in general and how much, and even if we were able to make such decisions, we have no channels by means of which to effect the setting of economic values. So the value crisis is not only economic; it is also ethical and political. (Arvidsson et. al. 2013, 4-5, emphasis added)
It is ethical in the sense that the relative value of the different orders of worth that are emerging in contemporary society (economic prosperity, “ethical conduct,” “social responsibility,” sustainability, global justice and empowerment) is simply indeterminable. As a consequence, ethics becomes a matter of personal choice and “standpoint” and the ethical perspectives of different individuals become incommensurate with one another. Ethics degenerates into “postmodern” relativism. (Arvidsson et. al. 2013, 5, emphasis added)
It is political because since we have no way of rationally arriving at what orders of worth we should privilege and how much, we have no common cause in the name of which we could legitimately appeal to people or companies (or force them) to do what they otherwise might not want to do. (The emphasis here is on legitimately; of course people are asked and forced to do things all the time, but if they inquire as to why, it becomes very difficult to say what should motivate them.) In the absence of legitimacy, politics is reduced to either more or less corrupt bargaining between particular interest groups or the naked exercise of raw power. In either case there can be no raison d’état. In such a context, appeals to solidarity, like that of the Monti government in Italy, remain impossible. (Arvidsson et. al. 2013, 5-6)
There have of course always been debates and conflicts, often violent, around what the common good should be. The point is that today we do not even have a language, or less metaphorically, a method for conducting such debates. (Modern ethical debates are interminable, as philosopher Alasdair MacIntyre wrote in the late 1970s.) This is what we mean by a value crisis. Not that there might be disagreement on how to value social responsibility or sustainability in relation to economic growth, or how much a CEO should be paid in relation to a worker, but that there is no common method to resolve such issues, or even to define specifically what they are about. We have no common “value regime,” no common understanding of what the values are and how to make evaluative decisions, even contested and conflict-ridden ones. (Arvidsson et. al. 2013, 6)
This has not always been the case. Industrial society—that old model that we still remember as the textbook example of how economics and social systems are supposed to work—was built around a common way of connecting economic value creation to overall social values, an imaginary social contract. In this arrangement, business would generate economic growth, which would be distributed by the welfare state in such a way that it contributed to the well-being of everyone. And even though there were intense conflicts about how this contract should apply, everyone agreed on its basic values. More importantly, these basic values were institutionalized in a wide range of practices and devices, from accounting methods to procedures of policy decisions to methods for calculating the financial value of companies and assets. Again, this did not mean that there was no conflict or discussion, but it did mean that there was a common ground on which such conflict and discussion could be acted out. There was a common value regime. (Arvidsson et. al. 2013, 6)
We are not arguing for a comeback of the value regime of industrial society. That would be impossible, and probably undesirable even if it were possible. However, neither do we accept the “postmodernist” argument (less popular now, perhaps, than it was two decades ago) that the end of values (and of ethics or even politics) would be somehow liberating and emancipatory. Instead we argue that the foundations for a different kind of value regime—an ethical economy—are actually emerging as we speak. (Arvidsson et al. 2013, 6)
The growth of economic knowledge over the past 200 years compares quite favourably with the growth of physical science in any arbitrary 200-year stretch of the dark ages or medieval period. But one is reminded of Mark Twain: “it ain’t what people don’t know that’s the problem; it’s what they know that just ain’t so.” Along with the accumulation of knowledge there has been a proliferation of abstract theorizing that is only too easy to misapply, or to apply to situations where it is inappropriate. The low power of empirical tests, and the indifference of too many people to empirical testing, have also allowed useless models to persist. Ideology, too, plays a bigger part than it does in most sciences, especially in macroeconomics. So it is easy to point to cases where economists offered terrible advice. That is no reason to despair. Smith, Marx, Keynes, Kalecki, Simon and Minsky all advanced understanding somewhat, while Marshall, Hicks and others clarified and formalized concepts. Macroeconomics took a wrong path and a sharp turn for the worse in the 1970s, and we are only now barely emerging. Still, what is 50 years in the eye of history?
The modern forecasting field, which emerged in the early twentieth century, had many points of origin in the previous century: in the field credit rating agencies, in the financial press, and in the blossoming fields of science—including meteorology, thermodynamics, and physics. The possibilities of scientific discovery and invention generated unbounded optimism among Victorian-era Americans. Scientific discoveries of all sorts, from the invention of the internal combustion engine to the insights of Darwin and Freud, seemed to promise a new and illuminating age just out of reach. (Friedman 2014, ix)
But forecasting also had deeper roots in the inherent wish of human beings to find certainty in life by knowing the future: What will I be when I grow up? Where will I live? What kind of work will I do? Will it be fulfilling? Will I marry? What will happen to my parents and other family members? To my country, to my job? To the economy in which I live? Forecasting addresses not just business issues but the deep-seated human wish to divine the future. It is the story of the near universal compulsion to avoid ambiguity and doubt and the refusal of the realities of life to satisfy that impulse. (Friedman 2014, ix)
Economic forecasting arose when it did because while the effort to introduce rationality—in the form of the scientific method—was emerging, the insatiable human longing for predictability persisted in the industrializing economy. Indeed, the early twentieth century saw a curious enlistment of science in a range of efforts to temper the uncertainty of the future. Reform movements, including good, bad, and ugly ones (like labor laws, Prohibition, and eugenics), envisioned a future improved through the application of science. So, too, forecasting attracted a spectrum of visionaries. Here were “seers,” such as the popular prophet Roger Babson, Wall Street entrepreneurs, like John Moody, and genuine academic scientists, such as Irving Fisher of Yale and Charles Jesse Bullock and Warren Persons of Harvard. (Friedman 2014, ix)
Customers of the new forecasting services often took these statistics-based predictions on faith. They wanted forecasts, John Moody noted, not discourses on the methods that produced them. Readers did not seek out detailed information on the accuracy of economic predictions, as long as forecasters proved to be right at least a portion of the time. The desire for any information that would illuminate the future was overwhelming, and subscribers to forecasting newsletters were willing to suspend reasoned judgment to gain comfort. This blend of rationality and anxiety, measurement and intuition, optimism and fear is the broad frame of the story and, not incidentally, why forecasters who were repeatedly proved mistaken, as all ultimately must be given enough time, still commanded attention and fee-paying clients. (Friedman 2014, x)
(….) Forecasters’ reliance on science and statistics as methods for accessing the future aligns their story with conventional narratives of modernity. The German sociologist Max Weber, for instance, argued that a key component of the modern worldview was a marked “disenchantment of the world,” as scientific rationality displaced older, magical, and “irrational” ways of understanding. Indeed, the forecasters … certainly saw themselves as systematic empiricists and logicians who promised to rescue the science of prediction from quacks and psychics. They sought, in the words of historian Jackson Lears, to “stabilize the sorcery of the market.” (Friedman 2014, 5)
The relationship between the forecasting industry and modernity was an ambivalent one, though. On the one hand, the early forecasters helped build key institutions (including Moody’s Investors Service and the National Bureau of Economic Research) and popularize new statistical tools, like leading indicators and indexes of industrial production. On the other hand, though all forecasters dressed their predictions in the garb of rationality (with graphs, numbers, and equations), their predictive accuracy was no more certain than a crystal ball. Moreover, despite efforts of forecasters to distance themselves from astrologers and popular conjurers, the emergence of scientific forecasting went hand in hand with rising popular interest in all manner of prediction. The general public, anxious for insights into an uncertain future, consumed forecasts indiscriminately. (Friedman 2014, 5)
Nineteenth-century economists liked to illustrate the importance of scarcity to value by using the water and diamond paradox. Why is water cheap, even though it is necessary for human life, and diamonds are expensive and therefore of high value, even though humans can quite easily get by without them? Marx’s labour theory of value–naïvely applied–would argue that diamonds simply take a lot more time and effort to produce. But the new utility theory of value, as the marginalists defined it, explained the difference in price through the scarcity of diamonds. Where there is an abundance of water, it is cheap. Where there is a scarcity (as in a desert), its value can become very high. For the marginalists, this scarcity theory of value became the rationale for the price of everything, from diamonds, to water, to workers’ wages.
The idea of scarcity became so important to economists that in the early 1930s it prompted one influential British economist, Lionel Robbins (1898–1984), Professor of Economics at the London School of Economics, to define the study of economics itself in terms of scarcity; his description of it as ‘the study of the allocation of resources, under conditions of scarcity’ is still widely used. The emergence of marginalism was a pivotal moment in the history of economic thought, one that laid the foundations for today’s dominant economic theory.
— Mariana Mazzucato (2018, 64-65) The Value of Everything
The Manufacturing of Scarcity qua Market Manipulation
American males enter adulthood through a peculiar rite of passage: they spend most of their savings on a shiny piece of rock. They could invest the money in assets that will compound over time and someday provide a nest egg. Instead, they trade that money for a diamond ring, which isn’t much of an asset at all. As soon as a diamond leaves a jeweler, it loses over 50% of its value. (Priceonomics 2014, 3)
We exchange diamond rings as part of the engagement process because the diamond company De Beers decided in 1938 that it would like us to. Prior to a stunningly successful marketing campaign, Americans occasionally exchanged engagement rings, but it wasn’t pervasive. Not only is the demand for diamonds a marketing invention, but diamonds aren’t actually that rare. Only by carefully restricting the supply has De Beers kept the price of a diamond high. (Priceonomics 2014, 3)
Countless American dudes will attest that the societal obligation to furnish a diamond engagement ring is both stressful and expensive. But this obligation only exists because the company that stands to profit from it willed it into existence. (Priceonomics 2014, 3)
So here is a modest proposal: Let’s agree that diamonds are bullshit and reject their role in the marriage process. Let’s admit that we as a society were tricked for about a century into coveting sparkling pieces of carbon, but it’s time to end the nonsense. (Priceonomics 2014, 3-4)
The Concept of Intrinsic Value
In finance, there is a concept called intrinsic value. An asset’s value is essentially driven by the (discounted) value of the future cash that asset will generate. For example, when Hertz buys a car, its value is the profit Hertz will earn from renting it out and selling the car at the end of its life (the “terminal value”). For Hertz, a car is an investment. When you buy a car, unless you make money from it somehow, its value corresponds to its resale value. Since a car is a depreciating asset, the amount of value that the car loses over its lifetime is a very real expense you pay. (Priceonomics 2014, 4)
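The discounted-cash-flow arithmetic behind “intrinsic value” can be sketched in a few lines. The figures below — rental profits, terminal resale price, discount rate — are invented for illustration, not taken from Hertz or from the Priceonomics piece.

```python
# A minimal sketch of intrinsic value as discounted cash flow.
# All figures are hypothetical, chosen only to illustrate the arithmetic.

def intrinsic_value(cash_flows, terminal_value, discount_rate):
    """Present value of a stream of yearly cash flows plus a terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # The terminal value arrives at the end of the final year.
    pv += terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv

# e.g. a rental car: $6,000/year of rental profit for 4 years,
# then sold for $8,000, all discounted at 8% per year.
value = intrinsic_value([6000, 6000, 6000, 6000], 8000, 0.08)
print(f"Intrinsic value: ${value:,.0f}")
```

A car that earns nothing, by the same formula, is worth only its discounted resale value — which is the sense in which depreciation is a real expense the owner pays.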
A diamond is a depreciating asset masquerading as an investment. There is a common misconception that jewelry and precious metals are assets that can store value, appreciate, and hedge against inflation. That’s not wholly untrue. (Priceonomics 2014, 4)
Gold and silver are commodities that can be purchased on financial markets. They can appreciate and hold value in times of inflation. You can even hoard gold under your bed and buy gold coins and bullion (albeit at approximately a 10% premium to market rates). If you want to hoard gold jewelry, however, there is typically a 100-400% retail markup. So jewelry is not a wise investment. (Priceonomics 2014, 4)
But with that caveat in mind, the market for gold is fairly liquid and gold is fungible — you can trade one large piece of gold for ten small ones like you can trade a ten dollar bill for ten one dollar bills. These characteristics make it a feasible investment. (Priceonomics 2014, 4)
Diamonds, however, are not an investment. The market for them is not liquid, and diamonds are not fungible. (Priceonomics 2014, 4-5)
The first test of a liquid market is whether you can resell a diamond. In a famous piece published by The Atlantic in 1982, Edward Epstein explains why you can’t sell used diamonds for anything but a pittance:
“Retail jewelers, especially the prestigious Fifth Avenue stores, prefer not to buy back diamonds from customers, because the offer they would make would most likely be considered ridiculously low. The ‘keystone,’ or markup, on a diamond and its setting may range from 100 to 200 percent, depending on the policy of the store; if it bought diamonds back from customers, it would have to buy them back at wholesale prices. Most jewelers would prefer not to make a customer an offer that might be deemed insulting and also might undercut the widely held notion that diamonds go up in value. Moreover, since retailers generally receive their diamonds from wholesalers on consignment, and need not pay for them until they are sold, they would not readily risk their own cash to buy diamonds from customers.” (Priceonomics 2014, 5)
When you buy a diamond, you buy it at retail, which is a 100% to 200% markup. If you want to resell it, you have to pay less than wholesale to incent a diamond buyer to risk her own capital on the purchase. Given the large markup, this will mean a substantial loss on your part. The same article puts some numbers around the dilemma: (Priceonomics 2014, 5-6)
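Epstein’s point can be made concrete with a little arithmetic. The wholesale price and the buyer’s discount below wholesale are assumed numbers, chosen only to show how a 100% “keystone” markup translates into a loss on resale.

```python
# Hypothetical numbers illustrating the retail-markup squeeze described above.
wholesale = 5000.0        # what the jeweler pays (assumed)
retail_markup = 1.0       # 100% "keystone" markup, the low end of the range
retail_price = wholesale * (1 + retail_markup)  # what the customer pays

# A used-diamond buyer risks her own capital, so she offers below
# wholesale -- say 20% under it (an assumed figure).
resale_price = wholesale * 0.8

loss = retail_price - resale_price
loss_pct = loss / retail_price * 100
print(f"Paid ${retail_price:,.0f}, resold for ${resale_price:,.0f}: "
      f"a {loss_pct:.0f}% loss")
```

Even at the gentlest end of Epstein’s markup range, the buyer surrenders more than half the purchase price the moment the ring leaves the store — consistent with the “over 50%” figure cited earlier.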
(….) We like diamonds because Gerold M. Lauck told us to. Until the mid 20th century, diamond engagement rings were a small and dying industry in America, and the concept had not really taken hold in Europe. (Priceonomics 2014, 7)
Not surprisingly, the American market for diamond engagement rings began to shrink during the Great Depression. Sales volume declined and the buyers that remained purchased increasingly smaller stones. But the U.S. market for engagement rings was still 75% of De Beers’ sales. With Europe on the verge of war, it didn’t seem like a promising place to invest. If De Beers was going to grow, it had to reverse the trend. (Priceonomics 2014, 7)
And so, in 1938, De Beers turned to Madison Avenue for help. The company hired Gerold Lauck and the N. W. Ayer advertising agency, which commissioned a study with some astute observations. Namely, men were the key to the market. As Epstein wrote of the findings:
“Since ‘young men buy over 90% of all engagement rings’ it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship” (Priceonomics 2014, 7)
(….) The next time you look at a diamond, consider this: nearly every American marriage begins with a diamond because a bunch of rich white men in the 1940s convinced everyone that its size determines a man’s self worth. They created this convention — that unless a man purchases (an intrinsically useless) diamond, his life is a failure — while sitting in a room, racking their brains on how to sell diamonds that no one wanted. (Priceonomics 2014, 8)
A History of Market Manipulation
(….) What, you might ask, could top institutionalizing demand for a useless product out of thin air? Monopolizing the supply of diamonds for over a century to make that useless product extremely expensive. You see, diamonds aren’t really even that rare. (Priceonomics 2014, 10)
Before 1870, diamonds were very rare. They typically ended up in a Maharaja’s crown or a royal necklace. In 1870, enormous deposits of diamonds were discovered in Kimberley, South Africa. As diamonds flooded the market, the financiers of the mines realized they were making their own investments worthless. As they mined more and more diamonds, they became less scarce and their price dropped. (Priceonomics 2014, 10)
The diamond market may have bottomed out were it not for an enterprising individual by the name of Cecil Rhodes. He began buying up mines in order to control the output and keep the price of diamonds high. By 1888, Rhodes controlled the entire South African diamond supply, and in turn, essentially the entire world supply. One of the companies he acquired was eponymously named after its founders, the De Beers brothers. (Priceonomics 2014, 10)
Building a diamond monopoly isn’t easy work. It requires a balance of ruthlessly punishing and cooperating with competitors, as well as a very long term view. For example, in 1902, prospectors discovered a massive mine in South Africa that contained as many diamonds as all of De Beers’ mines combined. The owners initially refused to join the De Beers cartel, and only joined three years later after new owner Ernest Oppenheimer recognized that a competitive market for diamonds would be disastrous for the industry. In Oppenheimer’s words: (Priceonomics 2014, 10-11)
“Common sense tells us that the only way to increase the value of diamonds is to make them scarce, that is to reduce production.” (Priceonomics 2014, 11)
(….) We covet diamonds in America for a simple reason: the company that stands to profit from diamond sales decided that we should. De Beers’ marketing campaign single handedly made diamond rings the measure of one’s success in America. Despite diamonds’ complete lack of inherent value, the company manufactured an image of diamonds as a status symbol. And to keep the price of diamonds high, despite the abundance of new diamond finds, De Beers executed the most effective monopoly of the 20th century. (Priceonomics 2014, 13)
~ ~ ~
The history of De Beers’ ruthless behavior in its drive to maintain its monopoly is well documented. They were so successful at cornering the market that eventually such a monstrosity as blood diamonds could exist. But that is another story. The moral of the story is that when it comes to capitalism there is really no such thing as intrinsic value or a “free market,” and that slick marketing can make a turd sell for the price of a diamond.
Upon this market manipulation economists built a house of cards that overlooked the monopolist’s manipulations and instead claimed diamonds are expensive because they are rare. Diamonds are bullshit, and by extension so, in large part, is modern economics’ scarcity theory of value.
In 1994 Paul Ormerod published a book called The Death of Economics. He argued economists don’t know what they’re talking about. In 2001 Steve Keen published a book called Debunking Economics: the naked emperor of the social sciences, with a second edition in 2011 subtitled The naked emperor dethroned?. Keen also argued economists don’t know what they’re talking about. (Davies 2015, 1)
Neither of these books, nor quite a few others, has had the desired effect. Mainstream economics has sailed serenely on its way, declaiming, advising, berating, sternly lecturing, deciding, teaching, pontificating. Meanwhile half of Europe and many regions and groups in the United States are in depression, and fascism is making a comeback. The last big depression spawned Hitler. This one is promoting Golden Dawn in Greece and similar extremist movements elsewhere. In the anglophone world a fundamentalist right-wing ideology is enforcing an increasingly narrow political correctness centred on “free” markets and the right of the rich to do and say whatever they like. “Freedom”, but only for some, and without responsibility. (Davies 2015, 1-2)
Evidently Ormerod and Keen were too subtle. It’s true their books also get a bit technical at times, especially Keen’s, but then they were addressing the profession, trying to bring it to its senses, to reform it from the inside. That seems to have been their other mistake. They produced example after example of how mainstream ideas fail, but still they had no effect. I think the message was addressed to the wrong audience, and was just too subtle. Economics is naked and dead, but never mind the stink, just prop up the corpse and carry on. (Davies 2015, 2)
Oh, but look! The corpse is moving. It’s getting up and walking. Time to call in John Quiggin, author of Zombie Economics: how dead ideas still walk among us. Perhaps he’ll show us how to shoot it in the head, or whatever it takes to finally stop a zombie. (Davies 2015, 2)
Well, I think it’s clear we can’t be too subtle. We need to speak in plain English, to everyone, and get straight to the point. Economists don’t know what they’re talking about. We should remove economists from positions of power and influence. Get them out of treasuries, central banks, media, universities, wherever they spread their baleful ignorance. (Davies 2015, 2)
Economists don’t know how businesses work, they don’t know how financial markets work, they can’t begin to do elementary accounting, they don’t know where money comes from nor how banks work, they think private debt has no effect on the economy, their favourite theory is a laughably irrelevant abstraction and they never learnt that mathematics on its own is not science. They ignore well-known evidence that clearly contradicts their theories. (Davies 2015, 2-3)
Other academics should look into this discipline called economics that lurks in their midst. Practitioners of proper academic rigour, like historians, ecologists, physicists, psychologists, systems scientists, engineers, even lawyers, will be shocked. Academic economics is an incoherent grab bag of mathematical abstraction, assertion, failure to heed observations, misrepresentation of history and sources, rationalisation of archaic money-lending practices, and wishful thinking. It missed the computational boat that liberated other fields from old analytical mathematics and overly-restrictive assumptions. It is ignorant of major fields of modern knowledge in biology, ecology, psychology, anthropology, physics and systems science. (Davies 2015, 3)
Though many economists themselves may not realise it, economics is an ideology rationalised by a dog’s breakfast of superficial arguments and defended by dense thickets of jargon and arcane mathematics. The ideology is an old one: the rich and powerful know best, the rest of us are here to serve them. (Davies 2015, 3)