
Spotting the Spoof

According to this view, individuals within an economy follow simple rules of thumb to determine their course of action. However, they adapt to their environment by changing the rules they use when these prove to be less successful. They are not irrational in that they do not act against their own interests, but they have neither the information nor the calculating capacity to ‘optimise’. Indeed, they are assumed to have limited and largely local information, and they modify their behaviour to improve their situation. Individuals in complexity models are neither assumed to understand how the economy works nor to consciously look for the ‘best choice’. The main preoccupation is not whether aggregate outcomes are efficient or not but rather with how all of these different individuals interacting with each other come to coordinate their behaviour. Giving individuals in a model simple rules to follow and allowing them to change them as they interact with others means thinking of them much more like particles or social insects. Mainstream economists often object to this approach, arguing that humans have intentions and aims which cannot be found in either inanimate particles or lower forms of life.

Kirman et al. (2018, 95) in Rethinking Economics: An Introduction to Pluralist Economics, Routledge.
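The adaptive, rule-following agents Kirman describes are easy to make concrete in code. Below is a minimal, illustrative Python sketch, not anything from the book: the two rules, the payoffs, and the imitation scheme are all invented for illustration. Each agent follows one of two rules of thumb, observes only one randomly chosen “neighbor” (local information), and copies that neighbor’s rule when it paid better.

```python
import random

# Minimal agent-based sketch of rule-following, adaptive agents.
# The rules ("A"/"B"), payoffs, and imitation scheme are invented
# for illustration; nothing here comes from Kirman et al.

N_AGENTS, N_ROUNDS = 100, 200

def payoff(rule, share_a):
    # Hypothetical payoffs: rule "A" pays more when few agents use it,
    # rule "B" pays more when many do, so no rule is best in isolation.
    return 1.0 - share_a if rule == "A" else share_a

rules = [random.choice(["A", "B"]) for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    share_a = rules.count("A") / N_AGENTS
    payoffs = [payoff(r, share_a) for r in rules]
    new_rules = list(rules)
    for i in range(N_AGENTS):
        j = random.randrange(N_AGENTS)   # one random "neighbor": local info only
        if payoffs[j] > payoffs[i]:      # imitate rules that did better
            new_rules[i] = rules[j]
    rules = new_rules

print("share using rule A:", rules.count("A") / N_AGENTS)
```

Run repeatedly, the population drifts toward the mix at which neither rule pays better: coordination emerges from local imitation, with no agent understanding the whole system or optimizing anything.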

Even such purely academic theories as interpretations of human nature have profound practical consequences if disseminated widely enough. If we impress upon people that science has discovered that human beings are motivated only by the desire for material advantage, they will tend to live up to this expectation, and we shall have undermined their readiness to be moved by impersonal ideals. By propagating the opposite view we might succeed in producing a larger number of idealists, but also help cynical exploiters to find easy victims. This specific issue, incidentally, is of immense actual importance, because it seems that the moral disorientation and fanatic nihilism which afflict modern youth have been stimulated by the popular brands of sociology and psychology [and economics] with their bias for overlooking the more inspiring achievements and focusing on the dismal average or even the subnormal. When, fraudulently basking in the glory of the exact sciences, the psychologists [, theoretical economists, etc.,] refuse to study anything but the most mechanical forms of behavior—often so mechanical that even rats have no chance to show their higher faculties—and then present their mostly trivial findings as the true picture of the human mind, they prompt people to regard themselves and others as automata, devoid of responsibility or worth, which can hardly remain without effect upon the tenor of social life. (….) Abstrusiveness need not impair a doctrine’s aptness for inducing or fortifying certain attitudes, as it may in fact help to inspire awe and obedience by ‘blinding people with science’.

— Andreski (1973, 33-35) in Social Sciences as Sorcery. Emphasis added.

Complexity theory comes with its own problems of overreach and tractability. Context counts; any theory taken too far stretches credulity. The art is in spotting the spoof. It is true irony to watch the pot calling the kettle black! To wit, mainstream economists question the validity of complexity theory’s greedy reductionism — often adopted for the sole purpose of mathematical tractability — when applied to human beings; the fact that mainstream economists rely on their own unrealistic assumptions (i.e., homo economicus) that overly simplify human behavior and capabilities doesn’t invalidate such a critique. Just because the pot calls the kettle black doesn’t mean the kettle and the pot are not black. Building models of human behavior solely on rational expectations and/or “social insects” qua fitness-climbing ticks means we are either Gods or Idiots. Neither Gödel nor Turing reduced creatively thinking human beings to mere Turing machines.

~ ~ ~

The best dialogues take place when each interlocutor speaks from her best self, without pretending to be something she is not. In their recent book Phishing for Phools: The Economics of Manipulation and Deception, Nobel Prize–winning economists George Akerlof and Robert Shiller expand the standard definition of “phishing.” In their usage, it goes beyond committing fraud on the Internet to indicate something older and more general: “getting people to do things that are in the interest of the phisherman” rather than their own. In much the same spirit, we would like to expand the meaning of another recent computer term, “spoofing,” which normally means impersonating someone else’s email name and address to deceive the recipient—a friend or family member of the person whose name is stolen—into doing something no one would do at the behest of a stranger. Spoofing in our usage also means something more general: pretending to represent one discipline or school when actually acting according to the norms of another. Like phishing, spoofing is meant to deceive, and so it is always useful to spot the spoof.

Students who take an English course under the impression they will be taught literature, and wind up being given lessons in politics that a political scientist would scoff at or in sociology that would mystify a sociologist, are being spoofed. Other forms of the humanities—or dehumanities, as we prefer to call them—spoof various scientific disciplines, from computer science to evolutionary biology and neurology. The longer the spoof deceives, the more disillusioned the student will be with what she takes to be the “humanities.” (Morson, Gary Saul. Cents and Sensibility (pp. 1-2). Princeton University Press. Kindle Edition.)

By the same token, when economists pretend to solve problems in ethics, culture, and social values in purely economic terms, they are spoofing other disciplines, although in this case the people most readily deceived are the economists themselves. We will examine various ways in which this happens and how, understandably enough, it earns economists a bad name among those who spot the spoof.

But many do not spot it. Gary Becker won a Nobel Prize largely for extending economics to the furthest reaches of human behavior, and the best-selling Freakonomics series popularizes this approach. What seems to many an economist to be a sincere effort to reach out to other disciplines strikes many practitioners of those fields as nothing short of imperialism, since economists expropriate topics rather than treat existing literatures and methods with the respect they deserve. Too often the economic approach to interdisciplinary work is that other fields have the questions and economics has the answers. (Morson, Gary Saul. Cents and Sensibility (pp. 2-3). Princeton University Press. Kindle Edition.)

As with the dehumanities, these efforts are not valueless. There is, after all, an economic aspect to many activities, including those we don’t usually think of in economic terms. People make choices about many things, and the rational choice model presumed by economists can help us understand how they do so, at least when they behave rationally—and even the worst curmudgeon acknowledges that people are sometimes rational! We have never seen anyone deliberately get into a longer line at a bank. (Morson, Gary Saul. Cents and Sensibility (p. 3). Princeton University Press. Kindle Edition.)

Even regarding ethics, economic models can help in one way, by indicating what is the most efficient allocation of resources. To be sure, one can question the usual economic definition of efficiency—in terms of maximizing the “economic surplus”—and one can question the establishment of goals in purely economic terms, but regardless of which goals one chooses, it pays to choose an efficient way, one that expends the least resources, to reach them. Wasting resources is never a good thing to do, because the resources wasted could have been put to some ethical purpose. The problem is that efficiency does not exhaust ethical questions, and the economic aspect of many problems is not the most important one. By pretending to solve ethical questions, economists wind up spoofing philosophers, theologians, and other ethicists. Economic rationality is indeed part of human nature, but by no means all of it.

For the rest of human nature, we need the humanities (and the humanistic social sciences). In our view, numerous aspects of life are best understood in terms of a dialogue between economics and the humanities—not the spoofs, but real economics and real humanities. (Morson, Gary Saul. Cents and Sensibility (pp. 3-4). Princeton University Press. Kindle Edition.)

Value Crisis of Modernity

There are many examples in the modern world showing how this doctrine of the free market—the pursuit of self-interest—has worked out to the disadvantage of society.

— CAMBRIDGE PROFESSOR JOAN ROBINSON, 1977, cited in Buddhist Economics.

The approach used here concentrates on a factual basis that differentiates it from more traditional practical ethics and economic policy analysis, such as the “economic” concentration on the primacy of income and wealth (rather than on the characteristics of human lives and substantive freedoms).

— NOBEL LAUREATE AMARTYA SEN, DEVELOPMENT AS FREEDOM, cited in Buddhist Economics

In Buddhist economics, people are interdependent with one another and with Nature, so each person’s well-being is measured by how well everyone and the environment are functioning with the goal of minimizing suffering for people and the planet. Everyone is assumed to have the right to a comfortable life with access to basic nutrition, health care, education, and the assurance of safety and human rights. A country’s well-being is measured by the aggregation of the well-being of all residents and the health of the ecosystem.

Brown (2017, 2), in Buddhist Economics

~ ~ ~

In the most dramatic moments of Italy’s debt crisis, the newly installed “technical” government, led by Mario Monti, appealed to trade unions to accept salary cuts in the name of national solidarity. Monti urged them to participate in a collective effort to increase the competitiveness of the Italian economy (or at least to show that efforts were being made in that direction) in order to calm international investors and “the market” and, hopefully, reduce the spread between the interest rates of Italian and German bonds (at the time around 500 points, meaning that the Italian government had to refinance its ten-year debt at the excruciating rate of 7.3 percent). Commenting on this appeal in an editorial in the left-leaning journal Il Manifesto, the journalist Loris Campetti wondered how it could be at all possible to demand solidarity from a Fiat worker when the CEO of his company earned about 500 times what the worker did.1 And such figures are not unique to Italy. In the United States, the average CEO earned about 30 times what the average worker earned in the mid-1970s (1973 being the year in which income inequality in the United States was at its historically lowest point). Today the multiplier lies around 400. Similarly, the income of the top 1 percent (or even more striking, the top 0.1 percent) of the U.S. population has skyrocketed in relation to that of the remaining 99 percent, bringing income inequality back to levels not seen since the Roaring Twenties. (Arvidsson et al. 2013, 1-2)

The problem is not, or at least not only, that such income discrepancies exist, but that there is no way to legitimate them. At present there is no way to rationally explain why a corporate CEO (or a top-level investment banker or any other member of the 1 percent) should be worth 400 times as much as the rest of us. And consequently there is no way to legitimately appeal to solidarity or to rationally argue that a factory worker (or any of us in the 99 percent) should take a pay cut in the name of a system that permits such discrepancies in wealth. What we have is a value crisis. There are huge differentials in the monetary rewards that individuals receive, but there is no way in which those differentials can be explained and legitimated in terms of any common understanding of how such monetary rewards should be determined. There is no common understanding of value to back up the prices that markets assign, to put it in simple terms. (We will discuss the thorny relation between the concepts of “value” and “price” along with the role of markets farther on in this chapter.) (Arvidsson et al. 2013, 2)

This value crisis concerns more than the distribution of income and private wealth. It is also difficult to rationalize how asset prices are set. In the wake of the 2008 financial crisis a steady stream of books, articles, and documentaries has highlighted the irrational practices, sometimes bordering on the fraudulent, by means of which mortgage-backed securities were revalued from junk to investment grade, credit default swaps were emitted without adequate underlying assets, and the big actors of Wall Street colluded with each other and with political actors to protect against transparency and rational scrutiny and in the end to have the taxpayers foot the bill. Neither was this irrationality just a temporary expression of a period of exceptional “irrational exuberance”; rather, irrationality has become a systemic feature of the financial system. As Amar Bhidé argues, the reliance on mathematical formulas embodied in computerized calculating devices at all levels of the financial system has meant that the setting of values on financial markets has been rendered ever more disconnected from judgments that can be rationally reconstructed and argued through.5 Instead, decisions that range from whether to grant a mortgage to an individual, to how to make split-second investment decisions on stock and currency markets, to how to grade or rate the performance of a company or even a nation have been automated, relegated to the discretion of computers and algorithms. While there is nothing wrong with computers and algorithms per se, the problem is that the complexity of these devices has rendered the underlying methods of calculation and their assumptions incomprehensible and opaque even to the people who use them on a daily basis (and imagine the rest of us!). To cite Richard Sennett’s interviews with the back-office Wall Street technicians who actually develop such algorithms: (Arvidsson et al. 2013, 2-3)

“I asked him to outline the algo [algorithm] for me,” one junior accountant remarked about her derivatives-trading, Porsche-driving superior, “and he couldn’t, he just took it on faith.” “Most kids have computer skills in their genes … but just up to a point … when you try to show them how to generate the numbers they see on screen, they get impatient, they just want the numbers and leave where these came from to the main-frame.” (Arvidsson et al. 2013, 3)

The problem here is not ignorance alone, but that the makeup of the algorithms and automated trading devices that execute the majority of trades on financial markets today (about 70 percent are executed by “bots,” or automatic trading agents) is considered a purely technical question, beyond rational discussion, judgment, and scrutiny. Actors tend to take the numbers on faith without knowing, or perhaps even bothering about, where they came from. Consequently these devices can often contain flawed assumptions that, never scrutinized, remain accepted as almost natural “facts.” During the dot-com boom, for example, Internet analysts valued dot-coms by looking at a multiplier of visitors to the dot-com’s Web site without considering how these numbers translated into monetary revenues; during the pre-2008 boom investors assigned the same default risks to subprime mortgages, or mortgages taken out by people who were highly likely to default, as they did to ordinary mortgages.8 And there are few ways in which the nature of such assumptions, flawed or not, can be discussed, scrutinized, or even questioned. Worse, there are few ways of even knowing what those assumptions are. The assumptions that stand behind the important practice of brand valuation are generally secret. Consequently, there is no way of explaining how or discussing why valuations of the same brand by different brand-valuation companies can differ by as much as 450 percent. A similar argument can be applied to Fitch, Moody’s, Standard & Poor’s, and other ratings agencies that are acquiring political importance in determining the economic prospects of nations like Italy and France. (Arvidsson et al. 2013, 3)
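A toy example makes it concrete how a flawed assumption can hide inside a valuation “device.” The sketch below is mine, not Arvidsson’s, and every number in it is invented: the naive dot-com-era formula prices a firm off raw visitor counts through an unexamined multiplier, while the alternative forces the same inputs through actual money flows.

```python
# Toy illustration (invented numbers): a hidden assumption in a valuation device.

def naive_dotcom_valuation(monthly_visitors, multiplier=100.0):
    # The multiplier IS the unscrutinized assumption: it silently encodes
    # the belief that every visitor is worth $100 of firm value.
    return monthly_visitors * multiplier

def revenue_based_valuation(monthly_visitors, conversion_rate,
                            revenue_per_customer, revenue_multiple=3.0):
    # Same firm, but the valuation must pass through monetary revenues.
    annual_revenue = monthly_visitors * 12 * conversion_rate * revenue_per_customer
    return annual_revenue * revenue_multiple

visitors = 1_000_000
print(naive_dotcom_valuation(visitors))               # ~100,000,000
print(revenue_based_valuation(visitors, 0.002, 5.0))  # ~360,000
```

The two functions disagree by a factor of nearly 300, and nothing in the first one invites the question of where its multiplier came from; that is the opacity the passage describes.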

This irrationality goes even deeper than financial markets. Investments in corporate social responsibility are increasing massively, both in the West and in Asia, as companies claim to want to go beyond profits to make a genuine contribution to society. But even though there is a growing body of academic literature indicating that a good reputation for social responsibility is beneficial for corporate performance in a wide variety of ways—from financial outcomes to ease in generating customer loyalty and attracting talented employees—there is no way of determining exactly how beneficial these investments are and, consequently, how many resources should be allocated to them. Indeed, perhaps it would be better to simply tax corporations and let the state or some other actor distribute the resources to some “responsible” causes. The fact that we have no way of knowing leads to a number of irrationalities. Sometimes companies invest more money in communicating their efforts at “being good” than they do in actually promoting socially responsible causes. (In 2001, for example, the tobacco company Philip Morris spent $75 million on what it defined as “good deeds” and then spent $100 million telling the public about those good deeds.) At other times such efforts can be downright contradictory, for example when tobacco companies sponsor antismoking campaigns aimed at young people in countries like Malaysia while at the same time targeting most of their ad spending to the very same segment. Other companies make genuine efforts to behave responsibly, but those efforts are poorly reflected in their reputation. Apple, for example, has done close to nothing in promoting corporate responsibility, and has a consistently poor record when it comes to labor conditions among its Chinese subcontractors (like Foxconn). Yet the company benefits from a powerful brand that is to no small degree premised on the fact that consumers perceive it to be somehow more benign than Microsoft, which actually does devote considerable resources to good causes (or at least the Bill and Melinda Gates Foundation does so). (Arvidsson et al. 2013, 3-4)

Similar irrationalities exist throughout the contemporary economy, ranging from how to measure productivity and determine rewards for knowledge workers to how to arrive at a realistic estimate of value for a number of “intangible” assets, from creativity and capacity for innovation to brand. (We will come back to these questions below as well as in the chapters that follow.) Throughout the contemporary economy, from the heights of finance down to the concrete realities of everyday work, particularly in knowledge work, great insecurities arise with regard to what things are actually worth and the extent to which the prices assigned to them actually reflect their value. (Indeed, in academic managerial thought, the very concept of “value” is presently without any clear definition; it means widely different things in different contexts.) (Arvidsson et al. 2013, 4)

But this is not merely an accounting problem. The very question of how you determine worth, and consequently what value is, has been rendered problematic by the proliferation of a number of value criteria (or “orders of worth,” to use sociologist David Stark’s term) that are poorly reflected in established economic models. A growing number of people value the ethical impact of consumer goods. But there are no clear ways of determining the relative value of different forms of “ethical impact,” nor even a clear definition of what “ethical impact” means. Therefore there is no way of determining whether it is actually more socially useful or desirable for a company to invest in these pursuits than to concentrate on getting basic goods to consumers as cheaply and conveniently as possible. Consequently, ethical consumerism, while a growing reality, tends to be more efficient at addressing the existential concerns of wealthy consumers than at systematically addressing issues like poverty or empowerment. Similarly, more and more people understand the necessity for more sustainable forms of development. And while the definition of “sustainability” is clearer than that of “ethics,” there are no coherent ways of making concerns for sustainability count in practices of asset valuation (although some efforts have been made in that direction, which we will discuss) or of rationally determining the trade-off between efforts toward sustainability and standard economic pursuits. Thus the new values that are acquiring a stronger presence in our society—popular demand for a more sustainable economy and a more just and equal global society—have only very weak and unreliable ways of influencing the actual conduct of corporations and other important economic actors, and can affect economic decisions in only a tenuous way. More generally, we have no way of arriving at what orders of worth “count” in general and how much, and even if we were able to make such decisions, we have no channels by means of which to effect the setting of economic values. So the value crisis is not only economic; it is also ethical and political. (Arvidsson et al. 2013, 4-5, emphasis added)

It is ethical in the sense that the relative value of the different orders of worth that are emerging in contemporary society (economic prosperity, “ethical conduct,” “social responsibility,” sustainability, global justice and empowerment) is simply indeterminable. As a consequence, ethics becomes a matter of personal choice and “standpoint” and the ethical perspectives of different individuals become incommensurate with one another. Ethics degenerates into “postmodern” relativism. (Arvidsson et al. 2013, 5, emphasis added)

It is political because, since we have no way of rationally arriving at which orders of worth we should privilege and how much, we have no common cause in the name of which we could legitimately appeal to people or companies (or force them) to do what they otherwise might not want to do. (The emphasis here is on legitimately; of course people are asked and forced to do things all the time, but if they inquire as to why, it becomes very difficult to say what should motivate them.) In the absence of legitimacy, politics is reduced to either more or less corrupt bargaining between particular interest groups or the naked exercise of raw power. In either case there can be no raison d’état. In such a context, appeals to solidarity, like that of the Monti government in Italy, remain impossible. (Arvidsson et al. 2013, 5-6)

There have of course always been debates and conflicts, often violent, around what the common good should be. The point is that today we do not even have a language, or, less metaphorically, a method for conducting such debates. (Modern ethical debates are interminable, as philosopher Alasdair MacIntyre wrote in the late 1970s.) This is what we mean by a value crisis. Not that there might be disagreement on how to value social responsibility or sustainability in relation to economic growth, or how much a CEO should be paid in relation to a worker, but that there is no common method to resolve such issues, or even to define specifically what they are about. We have no common “value regime,” no common understanding of what the values are and how to make evaluative decisions, even contested and conflict-ridden ones. (Arvidsson et al. 2013, 6)

This has not always been the case. Industrial society—that old model that we still remember as the textbook example of how economics and social systems are supposed to work—was built around a common way of connecting economic value creation to overall social values, an imaginary social contract. In this arrangement, business would generate economic growth, which would be distributed by the welfare state in such a way that it contributed to the well-being of everyone. And even though there were intense conflicts about how this contract should apply, everyone agreed on its basic values. More importantly, these basic values were institutionalized in a wide range of practices and devices, from accounting methods to procedures of policy decisions to methods for calculating the financial value of companies and assets. Again, this did not mean that there was no conflict or discussion, but it did mean that there was a common ground on which such conflict and discussion could be acted out. There was a common value regime. (Arvidsson et al. 2013, 6)

We are not arguing for a comeback of the value regime of industrial society. That would be impossible, and probably undesirable even if it were possible. However, neither do we accept the “postmodernist” argument (less popular now, perhaps, than it was two decades ago) that the end of values (and of ethics or even politics) would be somehow liberating and emancipatory. Instead we argue that the foundations for a different kind of value regime—an ethical economy—are actually emerging as we speak. (Arvidsson et al. 2013, 6)

Crystal Balls and Econometrics

The modern forecasting field, which emerged in the early twentieth century, had many points of origin in the previous century: in the field of credit rating agencies, in the financial press, and in the blossoming fields of science—including meteorology, thermodynamics, and physics. The possibilities of scientific discovery and invention generated unbounded optimism among Victorian-era Americans. Scientific discoveries of all sorts, from the invention of the internal combustion engine to the insights of Darwin and Freud, seemed to promise a new and illuminating age just out of reach. (Friedman 2014, ix)

But forecasting also had deeper roots in the inherent wish of human beings to find certainty in life by knowing the future: What will I be when I grow up? Where will I live? What kind of work will I do? Will it be fulfilling? Will I marry? What will happen to my parents and other family members? To my country, to my job? To the economy in which I live? Forecasting addresses not just business issues but the deep-seated human wish to divine the future. It is the story of the near universal compulsion to avoid ambiguity and doubt and the refusal of the realities of life to satisfy that impulse. (Friedman 2014, ix)

Economic forecasting arose when it did because while the effort to introduce rationality—in the form of the scientific method—was emerging, the insatiable human longing for predictability persisted in the industrializing economy. Indeed, the early twentieth century saw a curious enlistment of science in a range of efforts to temper the uncertainty of the future. Reform movements, including good, bad, and ugly ones (like labor laws, Prohibition, and eugenics), envisioned a future improved through the application of science. So, too, forecasting attracted a spectrum of visionaries. Here were “seers,” such as the popular prophet Roger Babson, Wall Street entrepreneurs, like John Moody, and genuine academic scientists, such as Irving Fisher of Yale and Charles Jesse Bullock and Warren Persons of Harvard. (Friedman 2014, ix)

Customers of the new forecasting services often took these statistics-based predictions on faith. They wanted forecasts, John Moody noted, not discourses on the methods that produced them. Readers did not seek out detailed information on the accuracy of economic predictions, as long as forecasters proved to be right at least a portion of the time. The desire for any information that would illuminate the future was overwhelming, and subscribers to forecasting newsletters were willing to suspend reasoned judgment to gain comfort. This blend of rationality and anxiety, measurement and intuition, optimism and fear is the broad frame of the story and, not incidentally, why forecasters who were repeatedly proved mistaken, as all ultimately must be given enough time, still commanded attention and fee-paying clients. (Friedman 2014, x)

(….) Forecasters’ reliance on science and statistics as methods for accessing the future aligns their story with conventional narratives of modernity. The German sociologist Max Weber, for instance, argued that a key component of the modern worldview was a marked “disenchantment of the world,” as scientific rationality displaced older, magical, and “irrational” ways of understanding. Indeed, the forecasters … certainly saw themselves as systematic empiricists and logicians who promised to rescue the science of prediction from quacks and psychics. They sought, in the words of historian Jackson Lears, to “stabilize the sorcery of the market.” (Friedman 2014, 5)

The relationship between the forecasting industry and modernity was an ambivalent one, though. On the one hand, the early forecasters helped build key institutions (including Moody’s Investors Service and the National Bureau of Economic Research) and popularize new statistical tools, like leading indicators and indexes of industrial production. On the other hand, though all forecasters dressed their predictions in the garb of rationality (with graphs, numbers, and equations), their predictive accuracy was no more certain than a crystal ball. Moreover, despite efforts of forecasters to distance themselves from astrologers and popular conjurers, the emergence of scientific forecasting went hand in hand with rising popular interest in all manner of prediction. The general public, anxious for insights into an uncertain future, consumed forecasts indiscriminately. (Friedman 2014, 5)

Diamonds are Bullshit

Nineteenth-century economists liked to illustrate the importance of scarcity to value by using the water and diamond paradox. Why is water cheap, even though it is necessary for human life, and diamonds are expensive and therefore of high value, even though humans can quite easily get by without them? Marx’s labour theory of value–naïvely applied–would argue that diamonds simply take a lot more time and effort to produce. But the new utility theory of value, as the marginalists defined it, explained the difference in price through the scarcity of diamonds. Where there is an abundance of water, it is cheap. Where there is a scarcity (as in a desert), its value can become very high. For the marginalists, this scarcity theory of value became the rationale for the price of everything, from diamonds, to water, to workers’ wages.

The idea of scarcity became so important to economists that in the early 1930s it prompted one influential British economist, Lionel Robbins (1898–1984), Professor of Economics at the London School of Economics, to define the study of economics itself in terms of scarcity; his description of it as ‘the study of the allocation of resources, under conditions of scarcity’ is still widely used.8 The emergence of marginalism was a pivotal moment in the history of economic thought, one that laid the foundations for today’s dominant economic theory.

Mariana Mazzucato (2018, 64-65) The Value of Everything
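In the marginalists’ own terms the paradox dissolves because price tracks marginal, not total, utility. A standard textbook sketch (the notation is the usual one, not Mazzucato’s):

```latex
% Consumer equilibrium equates marginal utility per dollar across goods:
\[
  \frac{u'_{w}(q_{w})}{p_{w}} \;=\; \frac{u'_{d}(q_{d})}{p_{d}},
  \qquad u''(q) < 0 \ \text{(diminishing marginal utility)}.
\]
% Water is consumed in huge quantities, far down its marginal utility curve,
% so u'_w is tiny and p_w is low even though total utility from water is
% enormous; scarce diamonds sit high on theirs, so p_d is high.
```

This is the sense in which scarcity entered the theory: not as a property of things in themselves, but as the position a good occupies on a diminishing marginal utility curve.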

The Manufacturing of Scarcity qua Market Manipulation

American males enter adulthood through a peculiar rite of passage: they spend most of their savings on a shiny piece of rock. They could invest the money in assets that will compound over time and someday provide a nest egg. Instead, they trade that money for a diamond ring, which isn’t much of an asset at all. As soon as a diamond leaves a jeweler, it loses over 50% of its value. (Priceonomics 2014, 3)

We exchange diamond rings as part of the engagement process because the diamond company De Beers decided in 1938 that it would like us to. Prior to a stunningly successful marketing campaign, Americans occasionally exchanged engagement rings, but it wasn’t pervasive. Not only is the demand for diamonds a marketing invention, but diamonds aren’t actually that rare. Only by carefully restricting the supply has De Beers kept the price of a diamond high. (Priceonomics 2014, 3)

Countless American dudes will attest that the societal obligation to furnish a diamond engagement ring is both stressful and expensive. But this obligation only exists because the company that stands to profit from it willed it into existence. (Priceonomics 2014, 3)

So here is a modest proposal: Let’s agree that diamonds are bullshit and reject their role in the marriage process. Let’s admit that we as a society were tricked for about a century into coveting sparkling pieces of carbon, but it’s time to end the nonsense. (Priceonomics 2014, 3-4)

The Concept of Intrinsic Value

In finance, there is a concept called intrinsic value. An asset’s value is essentially driven by the (discounted) value of the future cash that asset will generate. For example, when Hertz buys a car, its value is the profit Hertz will earn from renting it out and selling the car at the end of its life (the “terminal value”). For Hertz, a car is an investment. When you buy a car, unless you make money from it somehow, its value corresponds to its resale value. Since a car is a depreciating asset, the amount of value that the car loses over its lifetime is a very real expense you pay. (Priceonomics 2014, 4)
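The “intrinsic value” described here is just a discounted-cash-flow calculation, and it is short enough to write out. A minimal sketch in Python; the rental-car numbers are invented for illustration and are not Hertz’s actual economics.

```python
# Minimal discounted-cash-flow (DCF) sketch of "intrinsic value":
# an asset is worth its discounted future cash flows plus the
# discounted terminal (resale) value. All numbers are invented.

def intrinsic_value(cash_flows, terminal_value, discount_rate):
    value = sum(cf / (1 + discount_rate) ** t
                for t, cf in enumerate(cash_flows, start=1))
    return value + terminal_value / (1 + discount_rate) ** len(cash_flows)

rental_profit = [6000, 5500, 5000, 4500]  # net cash per year from renting
resale = 8000                             # "terminal value" after year 4
print(round(intrinsic_value(rental_profit, resale, 0.08)))  # ~23428
```

For the consumer in the passage the cash flows are zero or negative, so the whole calculation collapses to the discounted resale value, which is why depreciation is a very real expense.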

A diamond is a depreciating asset masquerading as an investment. There is a common misconception that jewelry and precious metals are assets that can store value, appreciate, and hedge against inflation. That’s not wholly untrue. (Priceonomics 2014, 4)

Gold and silver are commodities that can be purchased on financial markets. They can appreciate and hold value in times of inflation. You can even hoard gold under your bed and buy gold coins and bullion (albeit at approximately a 10% premium to market rates). If you want to hoard gold jewelry, however, there is typically a 100-400% retail markup. So jewelry is not a wise investment. (Priceonomics 2014, 4)

But with that caveat in mind, the market for gold is fairly liquid and gold is fungible — you can trade one large piece of gold for ten small ones like you can trade a ten-dollar bill for ten one-dollar bills. These characteristics make it a feasible investment. (Priceonomics 2014, 4)

Diamonds, however, are not an investment. The market for them is not liquid, and diamonds are not fungible. (Priceonomics 2014, 4-5)

The first test of a liquid market is whether you can resell a diamond. In a famous piece published by The Atlantic in 1982, Edward Epstein explains why you can’t sell used diamonds for anything but a pittance:

“Retail jewelers, especially the prestigious Fifth Avenue stores, prefer not to buy back diamonds from customers, because the offer they would make would most likely be considered ridiculously low. The ‘keystone,’ or markup, on a diamond and its setting may range from 100 to 200 percent, depending on the policy of the store; if it bought diamonds back from customers, it would have to buy them back at wholesale prices. Most jewelers would prefer not to make a customer an offer that might be deemed insulting and also might undercut the widely held notion that diamonds go up in value. Moreover, since retailers generally receive their diamonds from wholesalers on consignment, and need not pay for them until they are sold, they would not readily risk their own cash to buy diamonds from customers.” (Priceonomics 2014, 5)

When you buy a diamond, you buy it at retail, which is a 100% to 200% markup. If you want to resell it, you have to pay less than wholesale to incent a diamond buyer to risk her own capital on the purchase. Given the large markup, this will mean a substantial loss on your part. The same article puts some numbers around the dilemma: (Priceonomics 2014, 5-6)
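The article’s own figures are elided above, but the basic arithmetic is easy to reconstruct. The sketch below uses the 100-200 percent “keystone” range Epstein cites; the wholesale price and the resale discount are my invented placeholders.

```python
# Why reselling a diamond means a large loss (illustrative numbers).
wholesale = 2000.0
keystone_markup = 1.5                       # 150%: mid-range of 100-200% cited
retail = wholesale * (1 + keystone_markup)  # you pay 5000 at the store
resale = wholesale * 0.8                    # buyers offer below wholesale
loss = retail - resale
print(f"paid {retail:.0f}, recovered {resale:.0f}, "
      f"lost {loss / retail:.0%}")          # lost 68%
```

Any plausible numbers in these ranges give the same qualitative answer: the seller surrenders half or more of the purchase price at the door.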

(….) We like diamonds because Gerold M. Lauck told us to. Until the mid 20th century, diamond engagement rings were a small and dying industry in America, and the concept had not really taken hold in Europe. (Priceonomics 2014, 7)

Not surprisingly, the American market for diamond engagement rings began to shrink during the Great Depression. Sales volume declined and the buyers that remained purchased increasingly smaller stones. But the U.S. market for engagement rings was still 75% of De Beers’ sales. With Europe on the verge of war, it didn’t seem like a promising place to invest. If De Beers was going to grow, it had to reverse the trend. (Priceonomics 2014, 7)

And so, in 1938, De Beers turned to Madison Avenue for help. The company hired Gerold Lauck and the N. W. Ayer advertising agency, which commissioned a study with some astute observations. Namely, men were the key to the market. As Epstein wrote of the findings:

“Since ‘young men buy over 90% of all engagement rings’ it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship” (Priceonomics 2014, 7)

(….) The next time you look at a diamond, consider this: nearly every American marriage begins with a diamond because a bunch of rich white men in the 1940s convinced everyone that its size determines a man’s self worth. They created this convention — that unless a man purchases (an intrinsically useless) diamond, his life is a failure — while sitting in a room, racking their brains on how to sell diamonds that no one wanted. (Priceonomics 2014, 8)

A History of Market Manipulation

(….) What, you might ask, could top institutionalizing demand for a useless product out of thin air? Monopolizing the supply of diamonds for over a century to make that useless product extremely expensive. You see, diamonds aren’t really even that rare. (Priceonomics 2014, 10)

Before 1870, diamonds were very rare. They typically ended up in a Maharaja’s crown or a royal necklace. In 1870, enormous deposits of diamonds were discovered in Kimberley, South Africa. As diamonds flooded the market, the financiers of the mines realized they were making their own investments worthless. As they mined more and more diamonds, they became less scarce and their price dropped. (Priceonomics 2014, 10)

The diamond market may have bottomed out were it not for an enterprising individual by the name of Cecil Rhodes. He began buying up mines in order to control the output and keep the price of diamonds high. By 1888, Rhodes controlled the entire South African diamond supply, and in turn, essentially the entire world supply. One of the companies he acquired was eponymously named after its founders, the De Beers brothers. (Priceonomics 2014, 10)

Building a diamond monopoly isn’t easy work. It requires a balance of ruthlessly punishing and cooperating with competitors, as well as a very long term view. For example, in 1902, prospectors discovered a massive mine in South Africa that contained as many diamonds as all of De Beers’ mines combined. The owners initially refused to join the De Beers cartel, and only joined three years later after new owner Ernest Oppenheimer recognized that a competitive market for diamonds would be disastrous for the industry. In Oppenheimer’s words: (Priceonomics 2014, 10-11)

“Common sense tells us that the only way to increase the value of diamonds is to make them scarce, that is to reduce production.” (Priceonomics 2014, 11)

(….) We covet diamonds in America for a simple reason: the company that stands to profit from diamond sales decided that we should. De Beers’ marketing campaign single handedly made diamond rings the measure of one’s success in America. Despite diamonds’ complete lack of inherent value, the company manufactured an image of diamonds as a status symbol. And to keep the price of diamonds high, despite the abundance of new diamond finds, De Beers executed the most effective monopoly of the 20th century. (Priceonomics 2014, 13)

~ ~ ~

The history of De Beers’ ruthless behavior in its drive to maintain its monopoly is well documented. They were so successful at creating a monopolized market that eventually such a monstrosity as blood diamonds could exist. But that is another story. The moral of the story is that when it comes to capitalism there is really no such thing as intrinsic value or a “free market,” and that slick marketing can make a turd sell for the price of a diamond.

Upon this market manipulation economists built a house of cards that overlooked the monopolist’s manipulations and instead claimed diamonds are expensive because they are rare. Diamonds are bullshit, and by extension so too, largely, is modern economics’ scarcity theory of value.

Sack the Economists

And Disband the Departments of The Walking Dead

In 1994 Paul Ormerod published a book called The Death of Economics. He argued economists don’t know what they’re talking about. In 2001 Steve Keen published a book called Debunking Economics: the naked emperor of the social sciences, with a second edition in 2011 subtitled The naked emperor dethroned?. Keen also argued economists don’t know what they’re talking about. (Davies 2015, 1)

Neither of these books, nor quite a few others, has had the desired effect. Mainstream economics has sailed serenely on its way, declaiming, advising, berating, sternly lecturing, deciding, teaching, pontificating. Meanwhile half of Europe and many regions and groups in the United States are in depression, and fascism is making a comeback. The last big depression spawned Hitler. This one is promoting Golden Dawn in Greece and similar extremist movements elsewhere. In the anglophone world a fundamentalist right-wing ideology is enforcing an increasingly narrow political correctness centred on “free” markets and the right of the rich to do and say whatever they like. “Freedom”, but only for some, and without responsibility. (Davies 2015, 1-2)

Evidently Ormerod and Keen were too subtle. It’s true their books also get a bit technical at times, especially Keen’s, but then they were addressing the profession, trying to bring it to its senses, to reform it from the inside. That seems to have been their other mistake. They produced example after example of how mainstream ideas fail, but still they had no effect. I think the message was addressed to the wrong audience, and was just too subtle. Economics is naked and dead, but never mind the stink, just prop up the corpse and carry on. (Davies 2015, 2)

Oh, but look! The corpse is moving. It’s getting up and walking. Time to call in John Quiggin, author of Zombie Economics: how dead ideas still walk among us. Perhaps he’ll show us how to shoot it in the head, or whatever it takes to finally stop a zombie. (Davies 2015, 2)

Well, I think it’s clear we can’t be too subtle. We need to speak in plain English, to everyone, and get straight to the point. Economists don’t know what they’re talking about. We should remove economists from positions of power and influence. Get them out of treasuries, central banks, media, universities, wherever they spread their baleful ignorance. (Davies 2015, 2)

Economists don’t know how businesses work, they don’t know how financial markets work, they can’t begin to do elementary accounting, they don’t know where money comes from nor how banks work, they think private debt has no effect on the economy, their favourite theory is a laughably irrelevant abstraction and they never learnt that mathematics on its own is not science. They ignore well-known evidence that clearly contradicts their theories. (Davies 2015, 2-3)

Other academics should look into this discipline called economics that lurks in their midst. Practitioners of proper academic rigour, like historians, ecologists, physicists, psychologists, systems scientists, engineers, even lawyers, will be shocked. Academic economics is an incoherent grab bag of mathematical abstraction, assertion, failure to heed observations, misrepresentation of history and sources, rationalisation of archaic money-lending practices, and wishful thinking. It missed the computational boat that liberated other fields from old analytical mathematics and overly-restrictive assumptions. It is ignorant of major fields of modern knowledge in biology, ecology, psychology, anthropology, physics and systems science. (Davies 2015, 3)

Though many economists themselves may not realise it, economics is an ideology rationalised by a dog’s breakfast of superficial arguments and defended by dense thickets of jargon and arcane mathematics. The ideology is an old one: the rich and powerful know best, the rest of us are here to serve them. (Davies 2015, 3)

Power to Choose the Mismeasure of Humanity

If you push enough oats into a horse some will spill out and feed the sparrows.

Horse and Sparrow Economic Theory

The rich man may feast on caviar and champagne, while the poor woman starves at his gate. And she may not even take the crumbs from his table, if that would deprive him of his pleasure in feeding them to his birds.

Gauthier 1986, 218, Morals by Agreement, Oxford University Press

If the rich could hire other people to die for them, the poor could make a wonderful living.

Yiddish Proverb

The power to choose the measure of success

The successful campaign to eliminate distributional issues from the core of the economic discipline has its mirror image in the popularity of GDP as the measure of economic success of a nation. While the pioneer of national accounting (i.e., GDP), Simon Kuznets, explicitly said that GDP should not be used as a measure of welfare, and few economists would explicitly advocate such use, it is also true that economists as a group have done precious little to counter the popular opinion that growth, in the sense of maximization of GDP, should be the main goal of economic policy.

GDP is the money value of final goods and services that an economy produces in a quarter or a year (i.e., not including those goods and services used as inputs in production of other goods and services). This definition makes it … a reasonable yardstick of how much money moved around in a quarter or a year, and therefore captures to some extent how much economic activity in money terms there was in that period. It is a poor measure of actual activity in absolute terms due to using money rather than physically measuring human activity or indicators of human activity (e.g., how many tons of material were moving around in a year, or how many bits of information were exchanged in a year). Some activity that commands a large premium in money terms for institutional reasons, like investment banking, even if it is only one powerful person doing a moderate amount of work, will count the same as activities of hundreds of factory workers and much more than the activity of millions of housewives. Societal changes like providing more institutional childcare or reining in the market power of investment banks can make a huge difference in terms of measured GDP, without significantly changing the actual activities performed. Because of this reliance on using money valuations, GDP has severe issues with accurately measuring technological progress. (Häring et al. 2012, 28-29)

This method of measuring economic activity has two things going for it. It makes the mathematics a lot easier than measuring in a sensible way. And it conforms with the implicit assumption of mainstream economics that an extra dollar is worth the same to a poor person as it is to a rich person, just as it makes no differentiation between types of activity, for instance whether they are good (i.e., charitable work) or bad (i.e., criminal activity). If a hedge fund manager makes five billion dollars in a good year, as John Paulson reportedly did in 2010 (Burton and Kishan 2011), this is just as good in GDP terms as 13.7 million people living on a dollar a day doubling their incomes. (Häring et al. 2012, 29)
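The comparison in that last sentence is simple arithmetic, worth making explicit since it carries the whole point about GDP’s indifference to distribution:

```python
# GDP counts a dollar the same wherever it lands.
paulson_2010 = 5_000_000_000      # reported hedge fund payday, in dollars
extra_per_person = 1 * 365        # doubling a $1/day income, per year
print(paulson_2010 / extra_per_person)  # ~13.7 million people
```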

Policies that treat human beings as social creatures and try to reach the best results in the most important dimensions of human goals cannot flag their success with equally prominent and simple statistical measures like a single number where higher is “better.” The rich and wealthy benefit most from this way of measuring the economic success of a nation, since it de-emphasizes the gains of the mass of low-income people relative to those of a minority of rich people. As far as nations are concerned, it benefits nations that champion the policies favored by this approach, with the US being foremost among these. (Häring, Norbert and Niall Douglas. Economists and the Powerful [Convenient Theories, Distorted Facts, Ample Rewards]. New York: Anthem Press; 2012; pp. 28-29.)

~ ~ ~

LET’S STOP PRETENDING UNEMPLOYMENT IS VOLUNTARY

Unless you have a PhD in economics, you probably think it uncontroversial to argue that we should be concerned about the unemployment rate. Those of you who lost a job, or who have struggled to find a job on leaving school, college, or a university, are well aware that unemployment is a painful and dehumanizing experience. You may be surprised to learn that, for the past thirty-five years, the models used by academic economists and central bankers to understand how the economy works have not included unemployment as a separate category. In almost every macroeconomic seminar I attended, from 1980 through 2007, it was accepted that all unemployment is voluntary. (Farmer 2017, 47)

In 1960, almost all macroeconomists talked about involuntary unemployment and they assumed, following Keynes, the quantity of labor demanded is not equal to the quantity of labor supplied. That view of economics was turned on its head, almost single-handedly, by Robert Lucas. Lucas persuaded macroeconomists that it makes no sense to talk about disequilibrium in any market and he initiated a revolution in macroeconomics that reformulated the discipline using pre-Keynesian classical assumptions. (Farmer 2017, 47)

The idea that all unemployment is voluntary is called the equilibrium approach to labor markets. Lucas wrote his first article on this idea in 1969 in a coauthored paper with Leonard Rapping. His ideas received a big boost during the 1980s when Finn Kydland, Edward C. Prescott, John Long, and Charles Plosser persuaded macroeconomists to use a mathematical approach, called the Ramsey growth model, as a new paradigm for business cycle theory. The theory of real business cycles, or RBCs, was born. According to this theory, we should think about consumption, investment, and employment “as if” they were the optimal choices of a single representative agent with superhuman perception of the probabilities of future events. (Farmer 2017, 47-48)
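The Ramsey growth model Farmer mentions can be stated compactly; the generic textbook form (my notation, not Farmer’s) shows exactly where involuntary unemployment disappears:

```latex
% Representative-agent (Ramsey) planning problem behind RBC theory:
\[
  \max_{\{c_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t)
  \quad \text{s.t.} \quad
  k_{t+1} = (1-\delta)\,k_t + z_t f(k_t) - c_t ,
\]
% beta: discount factor; delta: depreciation rate; f: production function;
% z_t: stochastic productivity shock. Employment, where modeled, enters as
% the agent's optimal labor-leisure choice, so by construction no one in
% the model is ever unemployed against their will.
```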

Mismeasure of Homo Economicus

Of the total employment growth in the US between 2005 and 2015, insecure employment in the categories of independent contractors, on-call workers and workers provided by contracting companies or temp agencies accounted for fully 94 percent.3a Outsourcing of employment plays a big role in what David Weil describes as the “fissuring” of the workplace — depressing wages, magnifying income and wealth inequality, and generating a pervasive sense on the part of those at the wrong end of the fissuring that the world is cheating them, making them angry in return.4 On top of this, many Trump voters are angry that the government is giving handouts to “shirkers”, and sticking them with the tax bill. (Fullbrook et al. 2017, 65-66. Is Trump wrong on trade? A partial defense based on production and employment. In Trumponomics: Causes and Consequences.)

(….) [P]romotion of the low-bar temporary contract or part-time “gig” jobs which comprised over 90% of Obama’s boasted job creation.20 (Fullbrook et al. 2017, 210. Donald Trump, American political economy and the “terrible simplificateurs.” In Trumponomics: Causes and Consequences.)

The US might be less rich than official statistics make us believe…. After all, measuring GDP is an art as much as a science. What is usually portrayed as a straightforward act of objective measurement involves value judgments and much guesswork.

— Häring et al. 2012, 33-34, in Economists and the Powerful

Power to measure success…

(….) These conventional metrics [i.e., GDP, misleading and deceptive unemployment metrics, etc.], however, ignored the fact that the QUALITY of the jobs was poor…. And the unemployment data ignores the quality of the types of jobs being created. Recent research by Professors Lawrence Katz of Harvard and Alan Krueger of Princeton based on non-labor force survey data (private sampling) suggests that “all of the net employment growth in the U.S. economy from 2005 to 2015 appears to have occurred in alternative work arrangements.”3 That is, standard jobs with predictable income, pension benefits and health care coverage have disappeared and are being replaced by more precarious contract work and other types of alternative working arrangements. Quantifying this trend, the authors conclude the following:

“The increase in the share of workers in alternative work arrangements from 10.1 percent in 2005 to 15.8 percent in 2015 implies that the number of workers employed in alternative arrangements increased by 9.4 million (66.5 percent), from 14.2 million in February 2005 to 23.6 million in November 2015.”

Thus, these figures imply that employment in traditional jobs (standard employment arrangements) slightly declined by 0.4 million (0.3 percent) from 126.2 million in February 2005 to 125.8 million in November 2015. Unfortunately, we cannot determine the extent to which the replacement of traditional jobs with alternative work arrangements occurred before, during or after the Great Recession. (Fullbrook et al. 2017, 326-327. Explaining the rise of Donald Trump. In Trumponomics: Causes and Consequences.)
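The quoted shares and levels are mutually consistent, which is easy to back-check (the figures are from the quote; the implied totals are my back-calculation, with small rounding differences):

```python
# Back-checking the Katz-Krueger figures quoted above.
alt_2005, alt_2015 = 14.2e6, 23.6e6    # workers in alternative arrangements
share_2005, share_2015 = 0.101, 0.158  # their share of total employment

total_2005 = alt_2005 / share_2005     # ~140.6 million employed
total_2015 = alt_2015 / share_2015     # ~149.4 million employed

trad_2005 = total_2005 - alt_2005      # ~126.4 million traditional jobs
trad_2015 = total_2015 - alt_2015      # ~125.8 million traditional jobs
print(round(trad_2005 / 1e6, 1), round(trad_2015 / 1e6, 1))  # 126.4 125.8
```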

(….) The final change I want to draw attention to is the increasing precarity of the U.S. working-class. They’re increasingly employed in part-time jobs … and in “alternative” work arrangements. As Lawrence Katz and Alan Krueger (2016) have shown, just in the past decade, the percentage of American workers engaged in alternative work arrangements — defined as temporary help agency workers, on-call workers, contract workers, and independent contractors or freelancers — rose from 10.1 percent (in February 2005) to 15.8 percent (in late 2015). And it turns out, the so-called gig economy is characterized by the same unequalizing, capital-labor dynamics as the rest of the U.S. economy.

What is clear from this brief survey of the changes in the condition of the U.S. working-class in recent decades is that, while American workers have created enormous additional income and wealth, most of the increase has been captured by their employers and a tiny group at the top as workers have been forced to compete with one another for new kinds of jobs, with fewer protections, at lower wages, and with less security than they once expected. And the period of recovery from the Second Great Depression has done nothing to change that fundamental dynamic. (Fullbrook et al. 2017, 350-351. Class and Trumponomics. In Trumponomics: Causes and Consequences.)

Notes

3a Lawrence Katz and Alan Krueger, 2016, “The rise and nature of alternative work arrangements in the US, 1995-2015”, March 29. By the end of 2015, workers in the authors’ ‘alternative’ employment constituted 16 percent of total workers. (Fullbrook et al., 2017, 66)
3b https://krueger.princeton.edu/sites/default/files/akrueger/files/katz_krueger_cws__march_29_20165.pdf
4 David Weil, 2014, The Fissured Workplace: Why Work Became So Bad For So Many and What Can Be Done To Improve It, Harvard University Press.
20 “Nearly 95% of New Jobs During Obama Era were Contract, or Part Time.” Investing.com, 21 December 2016. Accessed at https://www.investing.com/news/economy-news/nearly-95-of-all-job-growth-during-obama-era-part-time,-contract-work-449057

~ ~ ~

[T]he jobs shifted away to be done by separate employers pay low wages; provide limited or often no health care, pension, or other benefits; and offer tenuous job security. Moreover, workers in each case received pay or faced workplace conditions that violated one or more workplace laws…. In the late 1980s and early 1990s, many companies, facing increasingly restive capital markets, shed activities deemed peripheral to their core business models: out went janitors, security guards, payroll administrators, and information technology specialists…. Even lawyers who handle our business transactions and consultants who work for well-known accounting companies may now have an arm’s-length relationship with those by whom we think they are employed. By shedding direct employment, lead business enterprises select from among multiple providers [i.e., ‘preferred vendors’ as MSFT calls them] of those activities and services formerly done inside the organization, thereby substantially reducing costs [they play vendors off of one another based on cost and create what is known in the recruiting/staffing industry as ‘the death of the middle man’ race to the bottom] and dispatching the many responsibilities connected to being the employer of record [saving as much as ~30% in employee benefits no longer paid]. Information and communication technologies have enabled this hidden transformation of work…. By shedding employment to other parties, lead companies change a wage-setting problem into a contracting [and price] decision. The result is stagnation of real wages [and loss of employee benefits] for many of the jobs formerly done inside.

Weil 2014, 3-4

David Weil’s book The Fissured Workplace sheds light on the extent and nature of this “shedding” of employees by corporations. The evidence shows that increasingly employers are forcing workers into temporary, contract positions, or part-time “gig” jobs in a variety of fields. Female workers suffer most heavily in this new fissured economy, as work in traditionally feminine fields like education and medicine has been declining and shifting to the use of contract workers. The disappearance of conventional full-time, 9 a.m. to 5 p.m. work has hit every demographic. Krueger, a former chairman of the White House Council of Economic Advisers, was surprised by the finding. “Workers seeking full-time, steady work have lost,” said Krueger.

But it would be a mistake to believe that the highly skilled and highly educated technology/knowledge workers are immune from this kind of fissuring, for it is continuing apace within the major global technology corporations — known as “lead companies” — like Microsoft, Google, Facebook, Wells Fargo (and banks in general), etc., which continue to lay off entire divisions and groups only to rehire them back as “contingent” workers employed by one of the lead firm’s designated “third party vendors,” or “preferred vendors,” or “partners.” The worker/employer power balance of entire industries can be shifted decisively in corporations’ favor by the use of opaque global supply chains that employ technological smoke screens and employer-delegated deception to hide the real nature of these relationships, which are meant to disadvantage the workers economically.

It is now possible to do to white-collar high-skilled, high-education workers what has already been done to blue-collar low-skilled, low-education workers, except now it is no longer necessary to “export” those jobs overseas when such technology workers can be shed by lead corporations and forced to work locally for “third party vendors” (aka staffing companies), sometimes at a 50% to 60% reduction in family income and sometimes with little or no employee benefits. Yahya, under the section “Statement of the Problem,” writes:

The emergence of knowledge-based economies (KBEs) in developing countries has the potential to leapfrog these economies to compete in the globalized services sector (Rooney et al., 2003). While reducing labour costs is a main reason for outsourcing, it is not the only driver: other determinants include the need to improve quality of service and providing new services for customers (Kaplan, 2002). The rise of the KBEs also illustrates that the distinction between white collar and blue collar workers is an archaic concept because both categories are subjected to the same conditionalities of business cost reduction and profit maximization. The advance of technological developments increased their commonalities, which made white collar service employment just as vulnerable as blue collar work. The convergence of the Information and Communications Technology (ICT) sector has fuelled economic growth but has increased the displacement of service jobs from developed to developing economies (Rooney et al., 2003). The rise of the global IT industry and the outsourcing of various services to lower-cost developing countries are performed through the spatial unbundling of tasks and relocating them to the most productive locations (Wilson, 1998).

(Yahya 2011, 621, emphasis added)

Yahya is mistaken in his claim that this form of the “fissured workplace” improves quality of service; the evidence shows it actually reduces it. When family wage earners are forced to become “contingent” workers in the “gig” economy they are effectively turned into precarious workers who have far less social security in terms of job stability, wages, and benefits. Typically they are forced to work longer hours for less pay and fewer employee benefits, or sometimes none at all. The twin objectives of “fissuring” — reducing costs while simultaneously improving quality of service — turn out in reality to be a chimera, leading to overstressed and underpaid precarious “contingent” workers suffering from increased socio-economic anxiety and workload exhaustion.

This has an overall destabilizing social impact on family wage earners — especially single women with children — and on society in general. These “external” costs to families and society are rarely considered within economics, typically being treated as “externalities” exogenous to econometric analysis. With the decline of the power of unions, workers — in all classes and domains, from blue-collar to white-collar — are being subjected to increasing wage-suppression tactics, much of which is hidden behind an intentional lack of transparency and technological smoke screens that give major corporations an asymmetric information advantage over workers in setting wages and compensation in the so-called “free market,” which is in reality a highly rigged market.