I suppose this book started when I first heard the story of Sergey Aleynikov, the Russian computer programmer who had worked for Goldman Sachs and then, in the summer of 2009, after he’d quit his job, was arrested by the FBI and charged by the United States government with stealing Goldman Sachs’s computer code. I’d thought it strange, after the financial crisis, in which Goldman had played such an important role, that the only Goldman Sachs employee who had been charged with any sort of crime was the employee who had taken something from Goldman Sachs. I’d thought it even stranger that government prosecutors had argued that the Russian shouldn’t be freed on bail because the Goldman Sachs computer code, in the wrong hands, could be used to “manipulate markets in unfair ways.” (Goldman’s were the right hands? If Goldman Sachs was able to manipulate markets, could other banks do it, too?) But maybe the strangest aspect of the case was how difficult it appeared to be—for the few who attempted—to explain what the Russian had done. I don’t mean only what he had done wrong: I mean what he had done. His job. He was usually described as a “high-frequency trading programmer,” but that wasn’t an explanation. That was a term of art that, in the summer of 2009, most people, even on Wall Street, had never before heard. What was high-frequency trading? Why was the code that enabled Goldman Sachs to do it so important that, when it was discovered to have been copied by some employee, Goldman Sachs needed to call the FBI? If this code was at once so incredibly valuable and so dangerous to financial markets, how did a Russian who had worked for Goldman Sachs for a mere two years get his hands on it? (Lewis 2014, 40-53)
[I]n a room looking out at the World Trade Center site, at One Liberty Plaza … gathered a small army of shockingly well-informed people from every corner of Wall Street—big banks, the major stock exchanges, and high-frequency trading firms. Many of them had left high-paying jobs to declare war on Wall Street, which meant, among other things, attacking the very problem that the Russian computer programmer had been hired by Goldman Sachs to create. (Lewis 2014, 53-56)
(….) One moment all is well; the next, the value of the entire U.S. stock market has fallen 22.61 percent, and no one knows why. During the crash, some Wall Street brokers, to avoid the orders their customers wanted to place to sell stocks, simply declined to pick up their phones. It wasn’t the first time that Wall Street people had discredited themselves, but this time the authorities responded by changing the rules—making it easier for computers to do the jobs done by those imperfect people. The 1987 stock market crash set in motion a process—weak at first, stronger over the years—that has ended with computers entirely replacing the people. (Lewis 2014, 62-67)
Over the past decade, the financial markets have changed too rapidly for our mental picture of them to remain true to life. (Lewis 2014, 67)
(….) The U.S. stock market now trades inside black boxes, in heavily guarded buildings in New Jersey and Chicago. What goes on inside those black boxes is hard to say—the ticker tape that runs across the bottom of cable TV screens captures only the tiniest fraction of what occurs in the stock markets. The public reports of what happens inside the black boxes are fuzzy and unreliable—even an expert cannot say what exactly happens inside them, or when it happens, or why. The average investor has no hope of knowing, of course, even the little he needs to know. He logs onto his TD Ameritrade or E*Trade or Schwab account, enters a ticker symbol of some stock, and clicks an icon that says “Buy”: Then what? He may think he knows what happens after he presses the key on his computer keyboard, but, trust me, he does not. If he did, he’d think twice before he pressed it. (Lewis 2014, 72-78)
The world clings to its old mental picture of the stock market because it’s comforting; because it’s so hard to draw a picture of what has replaced it; and because the few people able to draw it for you have no [economic] interest in doing so. (Lewis 2014, 78-80)
Emily Northrop (2000) questions whether the fundamental cause of scarcity — unlimited wants — is really innate, and argues that it may be merely constructed [see Diamonds are Bullshit]. She notes that some people manage to resist consumerism and choose different lifestyles embodying simplicity, balance or connection (to the earth and to others). The fact that some are able to do this suggests unlimited wants aren’t innate. In arguing that our wants are constructed, she emphasizes the power of social norms and the power of advertising: some of society’s cleverest people and billions of dollars a year are spent creating and maintaining our wants. (Hill and Myatt 2010, 16)
Northrop also points out that the notion of unlimited wants puts all wants on an equal footing: one person’s want for a subsistence diet is no more important than a millionaire’s want for precious jewellery. This equality of wants reflects the market value system that no goods are intrinsically more worthy than others — just as no preferences are more worthy than others. This is clearly a value judgement and one that many people reject. Yet economics, which unquestioningly adopts this approach, claims to be an objective social science that avoids making value judgements! (Hill and Myatt 2010, 16)
It is noteworthy that Keynes disagreed that ‘all wants have equal merit’. Rather than identify the economic problem with scarcity, he identified it with the satisfaction of what he called absolute needs: food, clothing, shelter and healthcare (Keynes 1963: 365). This definition of the economic problem puts equity and the distribution of income front and centre. It contrasts with the textbook approach of treating equity as a political issue outside the scope of economic analysis. (Hill and Myatt 2010, 16)
Another economist who rejects the ‘innate unlimited wants’ idea is Stephen Marglin (2008). Unlike Northrop, he doesn’t blame advertising or social norms. Rather, he sees the fundamental cause to be the destruction of community ties, which creates an existential vacuum: all that’s left is stuff. Goods and services substitute for meaningful relationships with family, friends and community. His conclusion: as long as goods are a primary means of solving existential problems, we will always want more. But what or who is responsible for undermining community ties and bonds? Marglin argues that the assumptions of textbook economics, and the resulting policy recommendations of economists, undermine community…. (Hill and Myatt 2010, 16-17)
According to Marglin, the textbook focus on individuals makes the community invisible to economists’ eyes. But it is our friendships and deep connections with others which give our lives meaning. So community ties, built on mutual trust and common purpose, have a value — a value that economists ignore when recommending policy.
Furthermore, Marglin argues that rational choice theory — emphasized in the mainstream textbooks — reduces ethical judgements and values to mere preferences. Are you working for the benefit of your community? That’s your preference. Are you cooking the books to get rich quick and devil take the hindmost? That’s your preference. Being selfish is no worse than being altruistic, they are just different preferences. (Hill and Myatt 2010, 16)
Indeed, according to mainstream textbook economics it is smart to be selfish. It not only maximizes your own material well-being, but through the invisible hand of the market it also produces the greatest good for the greatest possible number. This view influences the cultural norms of society and indirectly erodes community. This influence of economics on attitudes isn’t mere speculation. Marwell and Ames (1981) document that exposure to economics generates less cooperative, less other-regarding, behaviour. Frank et al. (1993) show that uncooperative behaviour increases the more individuals are exposed to economics. (Hill and Myatt 2010, 17-18)
(….) Marglin argues that the textbook focus on individuals is problematic. John Kenneth Galbraith went farther. He thought the textbook focus on individuals was a source of grave error and bias because in the real world the individual is not the agent that matters most. The corporation is. By having the wrong focus, economics is able to deny the importance of power and political interests. (Hill and Myatt 2010, 18)
Further, textbooks assume that the state is subordinate to individuals through the ballot box. At the very least, government is assumed to be neutral, intervening to correct market failure as best it can, and to redistribute income so as to make market outcomes more equitable. (Hill and Myatt 2010, 18-19)
But this idealized world is so far removed from the real world that it is little more than a myth, or ‘perhaps even a fraud’ (John K. Galbraith 2004). The power of the largest corporations rivals that of the state; indeed, they often hijack the state’s power for their own purposes. In reality, we see the management of the consumer by corporations; and we see the subordination of the state to corporate interest. (Hill and Myatt 2010, 19)
(….) Galbraith argues that the biggest corporations have power over markets, power in the community, power over the state, and power over belief. As such, the corporation is a political instrument, different in form and degree but not in kind from the state itself. Textbook economics, in denying that power, is part of the problem. It stops us from seeing how we are governed. As such it becomes an ‘ally of those whose exercise of power depends on an acquiescent public’ (John K. Galbraith 1973a: 11). (Hill and Myatt 2010, 19-20)
According to this view, individuals within an economy follow simple rules of thumb to determine their course of action. However, they adapt to their environment by changing the rules they use when these prove to be less successful. They are not irrational in that they do not act against their own interests, but they have neither the information nor the calculating capacity to ‘optimise’. Indeed, they are assumed to have limited and largely local information, and they modify their behaviour to improve their situation. Individuals in complexity models are neither assumed to understand how the economy works nor to consciously look for the ‘best choice’. The main preoccupation is not whether aggregate outcomes are efficient or not but rather with how all of these different individuals interacting with each other come to coordinate their behaviour. Giving individuals in a model simple rules to follow and allowing them to change them as they interact with others means thinking of them much more like particles or social insects. Mainstream economists often object to this approach, arguing that humans have intentions and aims which cannot be found in either inanimate particles or lower forms of life.
— Kirman et al. (2018, 95) in Rethinking Economics: An Introduction to Pluralist Economics, Routledge.
Even such purely academic theories as interpretations of human nature have profound practical consequences if disseminated widely enough. If we impress upon people that science has discovered that human beings are motivated only by the desire for material advantage, they will tend to live up to this expectation, and we shall have undermined their readiness to be moved by impersonal ideals. By propagating the opposite view we might succeed in producing a larger number of idealists, but also help cynical exploiters to find easy victims. This specific issue, incidentally, is of immense actual importance, because it seems that the moral disorientation and fanatic nihilism which afflict modern youth have been stimulated by the popular brands of sociology and psychology [and economics] with their bias for overlooking the more inspiring achievements and focusing on the dismal average or even the subnormal. When, fraudulently basking in the glory of the exact sciences, the psychologists [, theoretical economists, etc.,] refuse to study anything but the most mechanical forms of behavior—often so mechanical that even rats have no chance to show their higher faculties—and then present their mostly trivial findings as the true picture of the human mind, they prompt people to regard themselves and others as automata, devoid of responsibility or worth, which can hardly remain without effect upon the tenor of social life. (….) Abstrusiveness need not impair a doctrine’s aptness for inducing or fortifying certain attitudes, as it may in fact help to inspire awe and obedience by ‘blinding people with science’.
— Andreski (1973, 33-35) in Social Sciences as Sorcery. Emphasis added.
Complexity theory comes with its own problems of over-reach and tractability. Context counts; any theory taken too far stretches credulity. The art is in spotting the spoof. It is true irony to watch the pot calling the kettle black: mainstream economists question complexity theory’s use of greedy reductionism (often adopted for the sole purpose of mathematical tractability) when it is applied to human beings. Yet the fact that mainstream economists rely on their own unrealistic assumptions (i.e., homo economicus) that oversimplify human behavior and capabilities does not invalidate the critique. Just because the pot calls the kettle black doesn’t mean the kettle and the pot are not black. Building models of human behavior solely on rational expectations and/or “social insects” qua fitness-climbing ticks means we are either Gods or Idiots. Neither Gödel nor Turing reduced creatively thinking human beings to mere Turing machines.
~ ~ ~
The best dialogues take place when each interlocutor speaks from her best self, without pretending to be something she is not. In their recent book Phishing for Phools: The Economics of Manipulation and Deception, Nobel Prize–winning economists George Akerlof and Robert Shiller expand the standard definition of “phishing.” In their usage, it goes beyond committing fraud on the Internet to indicate something older and more general: “getting people to do things that are in the interest of the phisherman” rather than their own. In much the same spirit, we would like to expand the meaning of another recent computer term, “spoofing,” which normally means impersonating someone else’s email name and address to deceive the recipient—a friend or family member of the person whose name is stolen—into doing something no one would do at the behest of a stranger. Spoofing in our usage also means something more general: pretending to represent one discipline or school when actually acting according to the norms of another. Like phishing, spoofing is meant to deceive, and so it is always useful to spot the spoof.
Students who take an English course under the impression they will be taught literature, and wind up being given lessons in politics that a political scientist would scoff at or in sociology that would mystify a sociologist, are being spoofed. Other forms of the humanities—or dehumanities, as we prefer to call them—spoof various scientific disciplines, from computer science to evolutionary biology and neurology. The longer the spoof deceives, the more disillusioned the student will be with what she takes to be the “humanities.” (Morson, Gary Saul. Cents and Sensibility (pp. 1-2). Princeton University Press. Kindle Edition.)
By the same token, when economists pretend to solve problems in ethics, culture, and social values in purely economic terms, they are spoofing other disciplines, although in this case the people most readily deceived are the economists themselves. We will examine various ways in which this happens and how, understandably enough, it earns economists a bad name among those who spot the spoof.
But many do not spot it. Gary Becker won a Nobel Prize largely for extending economics to the furthest reaches of human behavior, and the best-selling Freakonomics series popularizes this approach. What seems to many an economist to be a sincere effort to reach out to other disciplines strikes many practitioners of those fields as nothing short of imperialism, since economists expropriate topics rather than treat existing literatures and methods with the respect they deserve. Too often the economic approach to interdisciplinary work is that other fields have the questions and economics has the answers. (Morson, Gary Saul. Cents and Sensibility (pp. 2-3). Princeton University Press. Kindle Edition.)
As with the dehumanities, these efforts are not valueless. There is, after all, an economic aspect to many activities, including those we don’t usually think of in economic terms. People make choices about many things, and the rational choice model presumed by economists can help us understand how they do so, at least when they behave rationally—and even the worst curmudgeon acknowledges that people are sometimes rational! We have never seen anyone deliberately get into a longer line at a bank. (Morson, Gary Saul. Cents and Sensibility (p. 3). Princeton University Press. Kindle Edition.)
Even regarding ethics, economic models can help in one way, by indicating what is the most efficient allocation of resources. To be sure, one can question the usual economic definition of efficiency—in terms of maximizing the “economic surplus”—and one can question the establishment of goals in purely economic terms, but regardless of which goals one chooses, it pays to choose an efficient way, one that expends the least resources, to reach them. Wasting resources is never a good thing to do, because the resources wasted could have been put to some ethical purpose. The problem is that efficiency does not exhaust ethical questions, and the economic aspect of many problems is not the most important one. By pretending to solve ethical questions, economists wind up spoofing philosophers, theologians, and other ethicists. Economic rationality is indeed part of human nature, but by no means all of it.
For the rest of human nature, we need the humanities (and the humanistic social sciences). In our view, numerous aspects of life are best understood in terms of a dialogue between economics and the humanities—not the spoofs, but real economics and real humanities. (Morson, Gary Saul. Cents and Sensibility (pp. 3-4). Princeton University Press. Kindle Edition.)
There are many examples in the modern world showing how this doctrine of the free market—the pursuit of self-interest—has worked out to the disadvantage of society.
— CAMBRIDGE PROFESSOR JOAN ROBINSON, 1977, cited in Buddhist Economics.
The approach used here concentrates on a factual basis that differentiates it from more traditional practical ethics and economic policy analysis, such as the “economic” concentration on the primacy of income and wealth (rather than on the characteristics of human lives and substantive freedoms).
— NOBEL LAUREATE AMARTYA SEN, DEVELOPMENT AS FREEDOM, cited in Buddhist Economics
In Buddhist economics, people are interdependent with one another and with Nature, so each person’s well-being is measured by how well everyone and the environment are functioning with the goal of minimizing suffering for people and the planet. Everyone is assumed to have the right to a comfortable life with access to basic nutrition, health care, education, and the assurance of safety and human rights. A country’s well-being is measured by the aggregation of the well-being of all residents and the health of the ecosystem.
— Brown (2017, 2), in Buddhist Economics
~ ~ ~
In the most dramatic moments of Italy’s debt crisis, the newly installed “technical” government, led by Mario Monti, appealed to trade unions to accept salary cuts in the name of national solidarity. Monti urged them to participate in a collective effort to increase the competitiveness of the Italian economy (or at least to show that efforts were being made in that direction) in order to calm international investors and “the market” and, hopefully, reduce the spread between the interest rates of Italian and German bonds (at the time around 500 points, meaning that the Italian government had to refinance its ten-year debt at the excruciating rate of 7.3 percent). Commenting on this appeal in an editorial in the left-leaning journal Il Manifesto, the journalist Loris Campetti wondered how it could be at all possible to demand solidarity from a Fiat worker when the CEO of his company earned about 500 times what the worker did. And such figures are not unique to Italy. In the United States, the average CEO earned about 30 times what the average worker earned in the mid-1970s (1973 being the year in which income inequality in the United States was at its historically lowest point). Today the multiplier lies around 400. Similarly, the income of the top 1 percent (or even more striking, the top 0.1 percent) of the U.S. population has skyrocketed in relation to that of the remaining 99 percent, bringing income inequality back to levels not seen since the Roaring Twenties. (Arvidsson et al. 2013, 1-2)
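The arithmetic behind the passage above is easy to check. A minimal sketch in Python: the 500-point (basis-point) spread, the 7.3 percent Italian yield, and the 30x/400x pay ratios come from the passage; the implied German benchmark yield is derived here, not quoted.

```python
# Bond-spread arithmetic from the Italian debt-crisis passage above.
# A "point" here is a basis point: 1/100 of a percentage point.

def implied_benchmark_yield(yield_pct: float, spread_bp: float) -> float:
    """Given a bond's yield (percent) and its spread over a benchmark
    (in basis points), return the implied benchmark yield in percent."""
    return yield_pct - spread_bp / 100.0

italian_10y = 7.3   # percent, from the passage
spread_bp = 500     # basis points over German bonds, from the passage
german_10y = implied_benchmark_yield(italian_10y, spread_bp)
print(f"Implied German ten-year yield: {german_10y:.1f}%")

# The U.S. CEO-to-worker pay ratios quoted in the passage:
ceo_multiple_mid1970s = 30
ceo_multiple_today = 400
growth = ceo_multiple_today / ceo_multiple_mid1970s
print(f"The pay-ratio multiplier grew roughly {growth:.1f}x")
```

Nothing here is controversial finance; the point is only that the quoted spread implies a benchmark yield near 2.3 percent, which is what made the 7.3 percent Italian refinancing rate "excruciating."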
The problem is not, or at least not only, that such income discrepancies exist, but that there is no way to legitimate them. At present there is no way to rationally explain why a corporate CEO (or a top-level investment banker or any other member of the 1 percent) should be worth 400 times as much as the rest of us. And consequently there is no way to legitimately appeal to solidarity or to rationally argue that a factory worker (or any of us in the 99 percent) should take a pay cut in the name of a system that permits such discrepancies in wealth. What we have is a value crisis. There are huge differentials in the monetary rewards that individuals receive, but there is no way in which those differentials can be explained and legitimated in terms of any common understanding of how such monetary rewards should be determined. There is no common understanding of value to back up the prices that markets assign, to put it in simple terms. (We will discuss the thorny relation between the concepts of “value” and “price” along with the role of markets farther on in this chapter.) (Arvidsson et al. 2013, 2)
This value crisis concerns more than the distribution of income and private wealth. It is also difficult to rationalize how asset prices are set. In the wake of the 2008 financial crisis a steady stream of books, articles, and documentaries has highlighted the irrational practices, sometimes bordering on the fraudulent, by means of which mortgage-backed securities were revalued from junk to investment grade, credit default swaps were emitted without adequate underlying assets, and the big actors of Wall Street colluded with each other and with political actors to protect against transparency and rational scrutiny and in the end to have the taxpayers foot the bill. Neither was this irrationality just a temporary expression of a period of exceptional “irrational exuberance”; rather, irrationality has become a systemic feature of the financial system. As Amar Bhidé argues, the reliance on mathematical formulas embodied in computerized calculating devices at all levels of the financial system has meant that the setting of values on financial markets has been rendered ever more disconnected from judgments that can be rationally reconstructed and argued through. Instead, decisions that range from whether to grant a mortgage to an individual, to how to make split-second investment decisions on stock and currency markets, to how to grade or rate the performance of a company or even a nation have been automated, relegated to the discretion of computers and algorithms. While there is nothing wrong with computers and algorithms per se, the problem is that the complexity of these devices has rendered the underlying methods of calculation and their assumptions incomprehensible and opaque even to the people who use them on a daily basis (and imagine the rest of us!). To cite Richard Sennett’s interviews with the back-office Wall Street technicians who actually develop such algorithms: (Arvidsson et al. 2013, 2-3)
“I asked him to outline the algo [algorithm] for me,” one junior accountant remarked about her derivatives-trading, Porsche-driving superior, “and he couldn’t, he just took it on faith.” “Most kids have computer skills in their genes … but just up to a point … when you try to show them how to generate the numbers they see on screen, they get impatient, they just want the numbers and leave where these came from to the main-frame.” (Arvidsson et al. 2013, 3)
The problem here is not ignorance alone, but that the makeup of the algorithms and automated trading devices that execute the majority of trades on financial markets today (about 70 percent are executed by “bots,” or automatic trading agents) is considered a purely technical question, beyond rational discussion, judgment, and scrutiny. Actors tend to take the numbers on faith without knowing, or perhaps even bothering about, where they came from. Consequently these devices can often contain flawed assumptions that, never scrutinized, remain accepted as almost natural “facts.” During the dot-com boom, for example, Internet analysts valued dot-coms by looking at a multiplier of visitors to the dot-com’s Web site without considering how these numbers translated into monetary revenues; during the pre-2008 boom investors assigned the same default risks to subprime mortgages, or mortgages taken out by people who were highly likely to default, as they did to ordinary mortgages. And there are few ways in which the nature of such assumptions, flawed or not, can be discussed, scrutinized, or even questioned. Worse, there are few ways of even knowing what those assumptions are. The assumptions that stand behind the important practice of brand valuation are generally secret. Consequently, there is no way of explaining how or discussing why valuations of the same brand by different brand-valuation companies can differ as much as 450 percent. A similar argument can be applied to Fitch, Moody’s, Standard & Poor’s, and other ratings agencies that are acquiring political importance in determining the economic prospects of nations like Italy and France. (Arvidsson et al. 2013, 3)
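The dot-com example above can be made concrete with a toy sketch. All the numbers below are hypothetical, chosen only to illustrate the mechanism: a visitor-multiplier heuristic and a conventional revenue-based check applied to the same imaginary firm can diverge wildly, and the gap is exactly the unexamined assumption the passage describes.

```python
# Toy illustration (hypothetical numbers) of the dot-com-era flaw described
# above: valuing a firm by a multiplier on site visitors, with no check on
# whether those visitors ever translate into revenue.

def eyeball_valuation(monthly_visitors: int, dollars_per_visitor: float) -> float:
    """Dot-com-era heuristic: value = visitors x a per-visitor multiplier."""
    return monthly_visitors * dollars_per_visitor

def revenue_valuation(annual_revenue: float, price_to_sales: float) -> float:
    """A conventional check: value = revenue x a price-to-sales multiple."""
    return annual_revenue * price_to_sales

# A hypothetical dot-com: lots of traffic, almost no paying customers.
visitors = 2_000_000    # monthly visitors (invented)
revenue = 500_000.0     # annual revenue in dollars (invented)

v_eyeballs = eyeball_valuation(visitors, dollars_per_visitor=50.0)
v_revenue = revenue_valuation(revenue, price_to_sales=4.0)

print(f"Visitor-multiplier value: ${v_eyeballs:,.0f}")  # $100,000,000
print(f"Revenue-based value:      ${v_revenue:,.0f}")   # $2,000,000
```

The same firm comes out fifty times more valuable under the visitor heuristic; unless someone scrutinizes the per-visitor multiplier, the gap simply becomes an accepted “fact.”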
This irrationality goes even deeper than financial markets. Investments in corporate social responsibility are increasing massively, both in the West and in Asia, as companies claim to want to go beyond profits to make a genuine contribution to society. But even though there is a growing body of academic literature indicating that a good reputation for social responsibility is beneficial for corporate performance in a wide variety of ways—from financial outcomes to ease in generating customer loyalty and attracting talented employees—there is no way of determining exactly how beneficial these investments are and, consequently, how many resources should be allocated to them. Indeed, perhaps it would be better to simply tax corporations and let the state or some other actor distribute the resources to some “responsible” causes. The fact that we have no way of knowing leads to a number of irrationalities. Sometimes companies invest more money in communicating their efforts at “being good” than they do in actually promoting socially responsible causes. (In 2001, for example, the tobacco company Philip Morris spent $75 million on what it defined as “good deeds” and then spent $100 million telling the public about those good deeds.) At other times such efforts can be downright contradictory, for example when tobacco companies sponsor antismoking campaigns aimed at young people in countries like Malaysia while at the same time targeting most of their ad spending to the very same segment. Other companies make genuine efforts to behave responsibly, but those efforts reflect poorly on their reputation. Apple, for example, has done close to nothing in promoting corporate responsibility, and has a consistently poor record when it comes to labor conditions among its Chinese subcontractors (like Foxconn). 
Yet the company benefits from a powerful brand that is to no small degree premised on the fact that consumers perceive it to be somehow more benign than Microsoft, which actually does devote considerable resources to good causes (or at least the Bill and Melinda Gates Foundation does so). (Arvidsson et al. 2013, 3-4)
Similar irrationalities exist throughout the contemporary economy, ranging from how to measure productivity and determine rewards for knowledge workers to how to arrive at a realistic estimate of value for a number of “intangible” assets, from creativity and capacity for innovation to brand. (We will come back to these questions below as well as in the chapters that follow.) Throughout the contemporary economy, from the heights of finance down to the concrete realities of everyday work, particularly in knowledge work, great insecurities arise with regard to what things are actually worth and the extent to which the prices assigned to them actually reflect their value. (Indeed, in academic managerial thought, the very concept of “value” is presently without any clear definition; it means widely different things in different contexts.) (Arvidsson et al. 2013, 4)
But this is not merely an accounting problem. The very question of how you determine worth, and consequently what value is, has been rendered problematic by the proliferation of a number of value criteria (or “orders of worth,” to use sociologist David Stark’s term) that are poorly reflected in established economic models. A growing number of people value the ethical impact of consumer goods. But there are no clear ways of determining the relative value of different forms of “ethical impact,” nor even a clear definition of what “ethical impact” means. Therefore there is no way of determining whether it is actually more socially useful or desirable for a company to invest in these pursuits than to concentrate on getting basic goods to consumers as cheaply and conveniently as possible. Consequently, ethical consumerism, while a growing reality, tends to be more efficient at addressing the existential concerns of wealthy consumers than at systematically addressing issues like poverty or empowerment. Similarly, more and more people understand the necessity for more sustainable forms of development. And while the definition of “sustainability” is clearer than that of “ethics,” there are no coherent ways of making concerns for sustainability count in practices of asset valuation (although some efforts have been made in that direction, which we will discuss) or of rationally determining the trade-off between efforts toward sustainability and standard economic pursuits. Thus the new values that are acquiring a stronger presence in our society—popular demand for a more sustainable economy and a more just and equal global society—have only very weak and unreliable ways of influencing the actual conduct of corporations and other important economic actors, and can affect economic decisions in only a tenuous way. 
More generally, we have no way of arriving at what orders of worth “count” in general and how much, and even if we were able to make such decisions, we have no channels by means of which to effect the setting of economic values. So the value crisis is not only economic; it is also ethical and political. (Arvidsson et al. 2013, 4-5, emphasis added)
It is ethical in the sense that the relative value of the different orders of worth that are emerging in contemporary society (economic prosperity, “ethical conduct,” “social responsibility,” sustainability, global justice and empowerment) is simply indeterminable. As a consequence, ethics becomes a matter of personal choice and “standpoint” and the ethical perspectives of different individuals become incommensurate with one another. Ethics degenerates into “postmodern” relativism. (Arvidsson et al. 2013, 5, emphasis added)
It is political because, since we have no way of rationally arriving at what orders of worth we should privilege and how much, we have no common cause in the name of which we could legitimately appeal to people or companies (or force them) to do what they otherwise might not want to do. (The emphasis here is on legitimately; of course people are asked and forced to do things all the time, but if they inquire as to why, it becomes very difficult to say what should motivate them.) In the absence of legitimacy, politics is reduced to either more or less corrupt bargaining between particular interest groups or the naked exercise of raw power. In either case there can be no raison d’état. In such a context, appeals to solidarity, like that of the Monti government in Italy, remain impossible. (Arvidsson et al. 2013, 5-6)
There have of course always been debates and conflicts, often violent, around what the common good should be. The point is that today we do not even have a language, or less metaphorically, a method for conducting such debates. (Modern ethical debates are interminable, as philosopher Alasdair MacIntyre wrote in the late 1970s.) This is what we mean by a value crisis. Not that there might be disagreement on how to value social responsibility or sustainability in relation to economic growth, or how much a CEO should be paid in relation to a worker, but that there is no common method to resolve such issues, or even to define specifically what they are about. We have no common “value regime,” no common understanding of what the values are and how to make evaluative decisions, even contested and conflict-ridden ones. (Arvidsson et al. 2013, 6)
This has not always been the case. Industrial society—that old model that we still remember as the textbook example of how economics and social systems are supposed to work—was built around a common way of connecting economic value creation to overall social values, an imaginary social contract. In this arrangement, business would generate economic growth, which would be distributed by the welfare state in such a way that it contributed to the well-being of everyone. And even though there were intense conflicts about how this contract should apply, everyone agreed on its basic values. More importantly, these basic values were institutionalized in a wide range of practices and devices, from accounting methods to procedures of policy decisions to methods for calculating the financial value of companies and assets. Again, this did not mean that there was no conflict or discussion, but it did mean that there was a common ground on which such conflict and discussion could be acted out. There was a common value regime. (Arvidsson et al. 2013, 6)
We are not arguing for a comeback of the value regime of industrial society. That would be impossible, and probably undesirable even if it were possible. However, neither do we accept the “postmodernist” argument (less popular now, perhaps, than it was two decades ago) that the end of values (and of ethics or even politics) would be somehow liberating and emancipatory. Instead we argue that the foundations for a different kind of value regime—an ethical economy—are actually emerging as we speak. (Arvidsson et al. 2013, 6)
The growth of economic knowledge over the past 200 years compares quite favourably with the growth of physical science in any arbitrary 200-year stretch of the dark ages or medieval period. But one is reminded of Mark Twain: “it ain’t what people don’t know that’s the problem; it’s what they know that just ain’t so.” Along with the accumulation of knowledge there has been a proliferation of abstract theorizing that is only too easy to misapply or to apply to situations where it is inappropriate. The low power of empirical tests, and the indifference of too many people to empirical testing, have allowed useless models to persist as well. Ideology also plays a bigger part than it does in most sciences, especially in macroeconomics. So it is easy to point to cases where economists offered terrible advice. Yet there is no reason to despair. Smith, Marx, Keynes, Kalecki, Simon, and Minsky all advanced understanding somewhat, while Marshall, Hicks, and others clarified and formalized concepts. Macroeconomics took a wrong path and a sharp turn for the worse in the 1970s, and we are barely emerging now. Still, what is 50 years in the eye of history?
The modern forecasting field, which emerged in the early twentieth century, had many points of origin in the previous century: in the field of credit rating agencies, in the financial press, and in the blossoming fields of science—including meteorology, thermodynamics, and physics. The possibilities of scientific discovery and invention generated unbounded optimism among Victorian-era Americans. Scientific discoveries of all sorts, from the invention of the internal combustion engine to the insights of Darwin and Freud, seemed to promise a new and illuminating age just out of reach. (Friedman 2014, ix)
But forecasting also had deeper roots in the inherent wish of human beings to find certainty in life by knowing the future: What will I be when I grow up? Where will I live? What kind of work will I do? Will it be fulfilling? Will I marry? What will happen to my parents and other family members? To my country, to my job? To the economy in which I live? Forecasting addresses not just business issues but the deep-seated human wish to divine the future. It is the story of the near universal compulsion to avoid ambiguity and doubt and the refusal of the realities of life to satisfy that impulse. (Friedman 2014, ix)
Economic forecasting arose when it did because while the effort to introduce rationality—in the form of the scientific method—was emerging, the insatiable human longing for predictability persisted in the industrializing economy. Indeed, the early twentieth century saw a curious enlistment of science in a range of efforts to temper the uncertainty of the future. Reform movements, including good, bad, and ugly ones (like labor laws, Prohibition, and eugenics), envisioned a future improved through the application of science. So, too, forecasting attracted a spectrum of visionaries. Here were “seers,” such as the popular prophet Roger Babson, Wall Street entrepreneurs, like John Moody, and genuine academic scientists, such as Irving Fisher of Yale and Charles Jesse Bullock and Warren Persons of Harvard. (Friedman 2014, ix)
Customers of the new forecasting services often took these statistics-based predictions on faith. They wanted forecasts, John Moody noted, not discourses on the methods that produced them. Readers did not seek out detailed information on the accuracy of economic predictions, as long as forecasters proved to be right at least a portion of the time. The desire for any information that would illuminate the future was overwhelming, and subscribers to forecasting newsletters were willing to suspend reasoned judgment to gain comfort. This blend of rationality and anxiety, measurement and intuition, optimism and fear is the broad frame of the story and, not incidentally, why forecasters who were repeatedly proved mistaken, as all ultimately must be given enough time, still commanded attention and fee-paying clients. (Friedman 2014, x)
(….) Forecasters’ reliance on science and statistics as methods for accessing the future aligns their story with conventional narratives of modernity. The German sociologist Max Weber, for instance, argued that a key component of the modern worldview was a marked “disenchantment of the world,” as scientific rationality displaced older, magical, and “irrational” ways of understanding. Indeed, the forecasters … certainly saw themselves as systematic empiricists and logicians who promised to rescue the science of prediction from quacks and psychics. They sought, in the words of historian Jackson Lears, to “stabilize the sorcery of the market.” (Friedman 2014, 5)
The relationship between the forecasting industry and modernity was an ambivalent one, though. On the one hand, the early forecasters helped build key institutions (including Moody’s Investors Service and the National Bureau of Economic Research) and popularize new statistical tools, like leading indicators and indexes of industrial production. On the other hand, though all forecasters dressed their predictions in the garb of rationality (with graphs, numbers, and equations), their predictive accuracy was no more certain than a crystal ball. Moreover, despite efforts of forecasters to distance themselves from astrologers and popular conjurers, the emergence of scientific forecasting went hand in hand with rising popular interest in all manner of prediction. The general public, anxious for insights into an uncertain future, consumed forecasts indiscriminately. (Friedman 2014, 5)
One of my most memorable adventures as a cultural intermediary occurred about twelve years ago when I translated for a Christian colleague who was visiting the monastery in southern India where I was living. He was there working on a translation of a Buddhist text, and I volunteered my services as interpreter. One day, in the course of his conversations with one of the senior scholars of the monastery, it came up that he was a Christian, and my teacher asked him to share some of his beliefs. My friend chose to focus on Jesus’ identity as messiah. As I finished translating the words of my colleague, my teacher broke out in a fit of laughter, much to my embarrassment. He then proceeded to question his interlocutor in a kind of pointed and unabashedly adversarial way that is typical of the Tibetan monastic debate courtyard. There ensued a lively exchange, but when all was said and done, my teacher’s basic question was this: How can the death of one individual act as the direct and substantive cause for the salvation of others?
Behind this interreligious impasse there are of course operative several Buddhist doctrinal presuppositions that are in marked contrast (at times even in opposition) to those of traditional Christianity, not the least of which is the Buddhist vision of what constitutes liberation.
Several corollaries to the Buddhist view of liberation are especially relevant as responses to the Christian confession of Jesus as messiah. (1) Each of us is responsible for our own lot in life. We each cause our own suffering, and each of us is ultimately responsible for our own liberation. (2) Our salvation is not dependent on any one historical event. Specifically, our salvation is not dependent upon the appearance of any one personage in history. True, the actions of others can help us or hinder us on the way, but no action (or lack of action) on the part of another individual—whether human or divine—can seal our fate, either as regards salvation or damnation. (3) Soteriologically, there is no end to time, no point after which sentient beings will cease to suffer, and so long as beings suffer, so long will there be the possibility of their liberation. (4) No being has the capacity to decide whether or not we will be saved. Salvation is not granted to us, or withheld from us, by some external force. It is self-earned. (5) No single action on our part can instantaneously cause our liberation. What brings about salvation is not mere belief or faith, even a faith that is sustained throughout an entire life. Certainly, it is not the instantaneous belief in something (e.g., the belief that Jesus is Lord) that brings about salvation, but the long and arduous process of radical mental transformation, which requires more than simply belief.22
Together these various tenets make it impossible for Buddhists to accept a messianic creed of the traditional Christian sort. Jesus may have been an extraordinary human being, a sage, an effective and charismatic teacher, and even the manifestation of a deity, but he cannot have been the messiah that most Christians believe him to have been.23 (Gross et al. 2000, 27-28, José Ignacio Cabezón, A God, but Not a Savior, Iliff School of Theology.)
22 I am not unaware of the fact that in the history of Buddhism there have been movements that challenge this notion of the nature and path to salvation. Especially important to mention in this regard are certain schools of Japanese Pure Land Buddhism. But again, I remind my readers that I am speaking here principally from an Indo-Tibetan Buddhist doctrinal perspective.
23 Of course, if the Jesus Seminar is right, then Jesus did not make this claim of himself. See Funk et al., The Five Gospels, pp. 32-34.
~ ~ ~
It is well known within the field of comparative religious studies that there exists a phenomenon whereby a founder reveals a teaching born of experience, and after the founder’s passing another teaching, about the founder, develops in the minds of those tasked with creating the social institutions that perpetuate the founder’s message. It is the teachings of the founder, as distinct from the teachings about the founder, that are important; they are often lost in historical time until recovered through critical religious scholarship. This of-versus-about distinction is important. The teachings of Jesus are distinct and separate from the teachings about Jesus that developed after his death. The atonement doctrine — which in light of modernity is nothing more than divine child abuse — was developed after Jesus lived, taught, and died, and is incompatible with the teachings Jesus revealed through his life and his many parables. The same pattern is found in the life experience of Siddhārtha Gautama (Sanskrit/Devanagari: सिद्धार्थ गौतम Siddhārtha Gautama, c. 563/480 – c. 483/400 BCE) and many other religious teachers. Similarly, the teachings of Hōnen Shōnin were modified by Shinran Shōnin’s teachings, which were in turn adapted by Rennyo Shōnin’s teachings, and so on it goes.
It may well seem to you that the gospel of Jesus did not include all that is high and holy in the Christian gospel as we know it. All those magnificent, transcendent, Christian beliefs seem absent from the original gospel of Jesus — his “gospel” may seem minimal by comparison with the gospel! Missing from his gospel are not only where he came from (“conceived by the Holy Spirit, born of the Virgin Mary”), but also what he came to do. Where, after all, is “the saving work of Christ”: dying for our sins, rising on the third day, appearing to the apostles resurrected from the dead? These are, after all, the gospel about Jesus, which you, understandably enough, believe and cherish. But if you really are committed to Jesus, then you should be committed to the gospel of Jesus, which is what I have written this book to try to help you see and understand: the “good news” Jesus offered people during his public ministry. (Robinson 2005: 225)
— Robinson, James M. The Gospel of Jesus: In Search of the Original Good News. New York: HarperCollins; 2005; p. 225.
Nineteenth-century economists liked to illustrate the importance of scarcity to value by using the water and diamond paradox. Why is water cheap, even though it is necessary for human life, and diamonds are expensive and therefore of high value, even though humans can quite easily get by without them? Marx’s labour theory of value–naïvely applied–would argue that diamonds simply take a lot more time and effort to produce. But the new utility theory of value, as the marginalists defined it, explained the difference in price through the scarcity of diamonds. Where there is an abundance of water, it is cheap. Where there is a scarcity (as in a desert), its value can become very high. For the marginalists, this scarcity theory of value became the rationale for the price of everything, from diamonds, to water, to workers’ wages.
The idea of scarcity became so important to economists that in the early 1930s it prompted one influential British economist, Lionel Robbins (1898–1984), Professor of Economics at the London School of Economics, to define the study of economics itself in terms of scarcity; his description of it as ‘the study of the allocation of resources, under conditions of scarcity’ is still widely used.8 The emergence of marginalism was a pivotal moment in the history of economic thought, one that laid the foundations for today’s dominant economic theory.
— Mariana Mazzucato (2018, 64-65) The Value of Everything
The Manufacturing of Scarcity qua Market Manipulation
American males enter adulthood through a peculiar rite of passage: they spend most of their savings on a shiny piece of rock. They could invest the money in assets that will compound over time and someday provide a nest egg. Instead, they trade that money for a diamond ring, which isn’t much of an asset at all. As soon as a diamond leaves a jeweler, it loses over 50% of its value. (Priceonomics 2014, 3)
We exchange diamond rings as part of the engagement process because the diamond company De Beers decided in 1938 that it would like us to. Prior to a stunningly successful marketing campaign, Americans occasionally exchanged engagement rings, but it wasn’t pervasive. Not only is the demand for diamonds a marketing invention, but diamonds aren’t actually that rare. Only by carefully restricting the supply has De Beers kept the price of a diamond high. (Priceonomics 2014, 3)
Countless American dudes will attest that the societal obligation to furnish a diamond engagement ring is both stressful and expensive. But this obligation only exists because the company that stands to profit from it willed it into existence. (Priceonomics 2014, 3)
So here is a modest proposal: Let’s agree that diamonds are bullshit and reject their role in the marriage process. Let’s admit that we as a society were tricked for about a century into coveting sparkling pieces of carbon, but it’s time to end the nonsense. (Priceonomics 2014, 3-4)
The Concept of Intrinsic Value
In finance, there is a concept called intrinsic value. An asset’s value is essentially driven by the (discounted) value of the future cash that asset will generate. For example, when Hertz buys a car, its value is the profit Hertz will earn from renting it out and selling the car at the end of its life (the “terminal value”). For Hertz, a car is an investment. When you buy a car, unless you make money from it somehow, its value corresponds to its resale value. Since a car is a depreciating asset, the amount of value that the car loses over its lifetime is a very real expense you pay. (Priceonomics 2014, 4)
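The discounted-cash-flow idea in the paragraph above can be sketched in a few lines of Python. The numbers here — a rental car earning $3,000 a year for five years and sold for $8,000 at the end, discounted at 8% — are hypothetical figures chosen for illustration, not data from the source:

```python
def intrinsic_value(cash_flows, terminal_value, discount_rate):
    """Sum each future cash flow discounted back to present value,
    plus the discounted terminal value received in the final year."""
    pv = 0.0
    for year, cash in enumerate(cash_flows, start=1):
        pv += cash / (1 + discount_rate) ** year
    # the asset is sold (terminal value) at the end of the last year
    pv += terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv

# Hypothetical rental car: $3,000/year profit for 5 years, $8,000 resale
value = intrinsic_value([3000] * 5, terminal_value=8000, discount_rate=0.08)
print(round(value, 2))  # ≈ 17422.80
```

The point of the sketch is simply that every dollar of value is traced to future cash: take the cash flows away (as with a diamond, which generates none) and the “intrinsic value” collapses to whatever someone will pay on resale.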
A diamond is a depreciating asset masquerading as an investment. There is a common misconception that jewelry and precious metals are assets that can store value, appreciate, and hedge against inflation. That’s not wholly untrue. (Priceonomics 2014, 4)
Gold and silver are commodities that can be purchased on financial markets. They can appreciate and hold value in times of inflation. You can even hoard gold under your bed and buy gold coins and bullion (albeit at approximately a 10% premium to market rates). If you want to hoard gold jewelry, however, there is typically a 100-400% retail markup. So jewelry is not a wise investment. (Priceonomics 2014, 4)
But with that caveat in mind, the market for gold is fairly liquid and gold is fungible — you can trade one large piece of gold for ten small ones like you can trade a ten dollar bill for ten one dollar bills. These characteristics make it a feasible investment. (Priceonomics 2014, 4)
Diamonds, however, are not an investment. The market for them is not liquid, and diamonds are not fungible. (Priceonomics 2014, 4-5)
The first test of a liquid market is whether you can resell a diamond. In a famous piece published by The Atlantic in 1982, Edward Epstein explains why you can’t sell used diamonds for anything but a pittance:
“Retail jewelers, especially the prestigious Fifth Avenue stores, prefer not to buy back diamonds from customers, because the offer they would make would most likely be considered ridiculously low. The ‘keystone,’ or markup, on a diamond and its setting may range from 100 to 200 percent, depending on the policy of the store; if it bought diamonds back from customers, it would have to buy them back at wholesale prices. Most jewelers would prefer not to make a customer an offer that might be deemed insulting and also might undercut the widely held notion that diamonds go up in value. Moreover, since retailers generally receive their diamonds from wholesalers on consignment, and need not pay for them until they are sold, they would not readily risk their own cash to buy diamonds from customers.” (Priceonomics 2014, 5)
When you buy a diamond, you buy it at retail, which is a 100% to 200% markup. If you want to resell it, you have to pay less than wholesale to incent a diamond buyer to risk her own capital on the purchase. Given the large markup, this will mean a substantial loss on your part. The same article puts some numbers around the dilemma: (Priceonomics 2014, 5-6)
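The article’s own figures are elided here, but the markup-and-resale mechanics just described can be sketched with hypothetical numbers — the wholesale price, the 100% retail markup, and the 20% resale discount below wholesale are all assumptions for illustration:

```python
# Hypothetical example of the buy-at-retail, sell-below-wholesale squeeze.
wholesale = 5000.0                          # what the jeweler pays (assumed)
markup = 1.00                               # 100% retail markup (low end of range)
retail = wholesale * (1 + markup)           # you pay $10,000 at the counter

resale_discount = 0.20                      # buyer wants 20% below wholesale (assumed)
resale = wholesale * (1 - resale_discount)  # you are offered $4,000

loss_pct = (retail - resale) / retail       # fraction of your money gone
print(retail, resale, round(loss_pct, 2))   # 10000.0 4000.0 0.6
```

Even at the low end of the quoted markup range, the owner loses more than half the purchase price the moment the ring leaves the store — which is the sense in which a diamond is a depreciating asset masquerading as an investment.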
(….) We like diamonds because Gerold M. Lauck told us to. Until the mid 20th century, diamond engagement rings were a small and dying industry in America, and the concept had not really taken hold in Europe. (Priceonomics 2014, 7)
Not surprisingly, the American market for diamond engagement rings began to shrink during the Great Depression. Sales volume declined and the buyers that remained purchased increasingly smaller stones. But the U.S. market for engagement rings was still 75% of De Beers’ sales. With Europe on the verge of war, it didn’t seem like a promising place to invest. If De Beers was going to grow, it had to reverse the trend. (Priceonomics 2014, 7)
And so, in 1938, De Beers turned to Madison Avenue for help. The company hired Gerold Lauck and the N. W. Ayer advertising agency, which commissioned a study with some astute observations. Namely, men were the key to the market. As Epstein wrote of the findings:
“Since ‘young men buy over 90% of all engagement rings’ it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship” (Priceonomics 2014, 7)
(….) The next time you look at a diamond, consider this: nearly every American marriage begins with a diamond because a bunch of rich white men in the 1940s convinced everyone that its size determines a man’s self worth. They created this convention — that unless a man purchases (an intrinsically useless) diamond, his life is a failure — while sitting in a room, racking their brains on how to sell diamonds that no one wanted. (Priceonomics 2014, 8)
A History of Market Manipulation
(….) What, you might ask, could top institutionalizing demand for a useless product out of thin air? Monopolizing the supply of diamonds for over a century to make that useless product extremely expensive. You see, diamonds aren’t really even that rare. (Priceonomics 2014, 10)
Before 1870, diamonds were very rare. They typically ended up in a Maharaja’s crown or a royal necklace. In 1870, enormous deposits of diamonds were discovered in Kimberley, South Africa. As diamonds flooded the market, the financiers of the mines realized they were making their own investments worthless. As they mined more and more diamonds, they became less scarce and their price dropped. (Priceonomics 2014, 10)
The diamond market may have bottomed out were it not for an enterprising individual by the name of Cecil Rhodes. He began buying up mines in order to control the output and keep the price of diamonds high. By 1888, Rhodes controlled the entire South African diamond supply, and in turn, essentially the entire world supply. One of the companies he acquired was eponymously named after its founders, the De Beers brothers. (Priceonomics 2014, 10)
Building a diamond monopoly isn’t easy work. It requires a balance of ruthlessly punishing and cooperating with competitors, as well as a very long term view. For example, in 1902, prospectors discovered a massive mine in South Africa that contained as many diamonds as all of De Beers’ mines combined. The owners initially refused to join the De Beers cartel, and only joined three years later after new owner Ernest Oppenheimer recognized that a competitive market for diamonds would be disastrous for the industry. In Oppenheimer’s words: (Priceonomics 2014, 10-11)
“Common sense tells us that the only way to increase the value of diamonds is to make them scarce, that is to reduce production.” (Priceonomics 2014, 11)
(….) We covet diamonds in America for a simple reason: the company that stands to profit from diamond sales decided that we should. De Beers’ marketing campaign single handedly made diamond rings the measure of one’s success in America. Despite diamonds’ complete lack of inherent value, the company manufactured an image of diamonds as a status symbol. And to keep the price of diamonds high, despite the abundance of new diamond finds, De Beers executed the most effective monopoly of the 20th century. (Priceonomics 2014, 13)
~ ~ ~
The history of De Beers’ ruthless behavior in its drive to maintain its monopoly is well documented. The company was so successful at creating a market through monopoly that eventually such a monstrosity as blood diamonds could exist. But that is another story. The moral of this one is that when it comes to capitalism there is really no such thing as intrinsic value or a “free market,” and that slick marketing can make a turd sell for the price of a diamond.
Upon this market manipulation economists built a house of cards, one that overlooked the monopolist’s machinations and instead claimed that diamonds are expensive because they are rare. Diamonds are bullshit, and by extension so, too, is much of modern economics’ scarcity theory of value.