Diamonds are Bullshit

Nineteenth-century economists liked to illustrate the importance of scarcity to value by using the water and diamond paradox. Why is water cheap, even though it is necessary for human life, and diamonds are expensive and therefore of high value, even though humans can quite easily get by without them? Marx’s labour theory of value–naïvely applied–would argue that diamonds simply take a lot more time and effort to produce. But the new utility theory of value, as the marginalists defined it, explained the difference in price through the scarcity of diamonds. Where there is an abundance of water, it is cheap. Where there is a scarcity (as in a desert), its value can become very high. For the marginalists, this scarcity theory of value became the rationale for the price of everything, from diamonds, to water, to workers’ wages.

The idea of scarcity became so important to economists that in the early 1930s it prompted one influential British economist, Lionel Robbins (1898–1984), Professor of Economics at the London School of Economics, to define the study of economics itself in terms of scarcity; his description of it as ‘the study of the allocation of resources, under conditions of scarcity’ is still widely used.8 The emergence of marginalism was a pivotal moment in the history of economic thought, one that laid the foundations for today’s dominant economic theory.

Mariana Mazzucato (2018, 64-65) The Value of Everything

The Manufacturing of Scarcity qua Market Manipulation

American males enter adulthood through a peculiar rite of passage: they spend most of their savings on a shiny piece of rock. They could invest the money in assets that will compound over time and someday provide a nest egg. Instead, they trade that money for a diamond ring, which isn’t much of an asset at all. As soon as a diamond leaves a jeweler, it loses over 50% of its value. (Priceonomics 2014, 3)

We exchange diamond rings as part of the engagement process because the diamond company De Beers decided in 1938 that it would like us to. Prior to a stunningly successful marketing campaign, Americans occasionally exchanged engagement rings, but it wasn’t pervasive. Not only is the demand for diamonds a marketing invention, but diamonds aren’t actually that rare. Only by carefully restricting the supply has De Beers kept the price of a diamond high. (Priceonomics 2014, 3)

Countless American dudes will attest that the societal obligation to furnish a diamond engagement ring is both stressful and expensive. But this obligation only exists because the company that stands to profit from it willed it into existence. (Priceonomics 2014, 3)

So here is a modest proposal: Let’s agree that diamonds are bullshit and reject their role in the marriage process. Let’s admit that we as a society were tricked for about a century into coveting sparkling pieces of carbon, but it’s time to end the nonsense. (Priceonomics 2014, 3-4)

The Concept of Intrinsic Value

In finance, there is a concept called intrinsic value. An asset’s value is essentially driven by the (discounted) value of the future cash that asset will generate. For example, when Hertz buys a car, its value is the profit Hertz will earn from renting it out and selling the car at the end of its life (the “terminal value”). For Hertz, a car is an investment. When you buy a car, unless you make money from it somehow, its value corresponds to its resale value. Since a car is a depreciating asset, the amount of value that the car loses over its lifetime is a very real expense you pay. (Priceonomics 2014, 4)
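To make the discounted-cash-flow idea concrete, here is a minimal Python sketch; the rental profits, resale value, and discount rate are hypothetical figures chosen purely for illustration, not numbers from the Priceonomics piece.

# Hypothetical illustration of intrinsic (discounted cash flow) value:
# a rental car earning $6,000 a year for four years, then sold for $8,000,
# discounted at 8% per year. All figures are invented for illustration.

def present_value(cash_flows, terminal_value, rate):
    """Discount yearly cash flows plus a terminal value back to today."""
    pv = sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))
    pv += terminal_value / (1 + rate) ** len(cash_flows)
    return pv

rentals = [6000, 6000, 6000, 6000]   # yearly rental profit
terminal = 8000                      # resale ("terminal") value
print(round(present_value(rentals, terminal, 0.08)))   # about 25,753

On these made-up numbers the car is “worth” about $25,753 to Hertz today; for a diamond that generates no cash and resells at a steep discount, the same calculation collapses to its (low) resale value.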

A diamond is a depreciating asset masquerading as an investment. There is a common misconception that jewelry and precious metals are assets that can store value, appreciate, and hedge against inflation. That’s not wholly untrue. (Priceonomics 2014, 4)

Gold and silver are commodities that can be purchased on financial markets. They can appreciate and hold value in times of inflation. You can even hoard gold under your bed and buy gold coins and bullion (albeit at approximately a 10% premium to market rates). If you want to hoard gold jewelry, however, there is typically a 100-400% retail markup. So jewelry is not a wise investment. (Priceonomics 2014, 4)

But with that caveat in mind, the market for gold is fairly liquid and gold is fungible — you can trade one large piece of gold for ten small ones like you can trade a ten dollar bill for ten one dollar bills. These characteristics make it a feasible investment. (Priceonomics 2014, 4)

Diamonds, however, are not an investment. The market for them is not liquid, and diamonds are not fungible. (Priceonomics 2014, 4-5)

The first test of a liquid market is whether you can resell a diamond. In a famous piece published by The Atlantic in 1982, Edward Epstein explains why you can’t sell used diamonds for anything but a pittance:

“Retail jewelers, especially the prestigious Fifth Avenue stores, prefer not to buy back diamonds from customers, because the offer they would make would most likely be considered ridiculously low. The ‘keystone,’ or markup, on a diamond and its setting may range from 100 to 200 percent, depending on the policy of the store; if it bought diamonds back from customers, it would have to buy them back at wholesale prices. Most jewelers would prefer not to make a customer an offer that might be deemed insulting and also might undercut the widely held notion that diamonds go up in value. Moreover, since retailers generally receive their diamonds from wholesalers on consignment, and need not pay for them until they are sold, they would not readily risk their own cash to buy diamonds from customers.” (Priceonomics 2014, 5)

When you buy a diamond, you buy it at retail, which is a 100% to 200% markup. If you want to resell it, you have to pay less than wholesale to incent a diamond buyer to risk her own capital on the purchase. Given the large markup, this will mean a substantial loss on your part. The same article puts some numbers around the dilemma: (Priceonomics 2014, 5-6)

(….) We like diamonds because Gerold M. Lauck told us to. Until the mid 20th century, diamond engagement rings were a small and dying industry in America, and the concept had not really taken hold in Europe. (Priceonomics 2014, 7)

Not surprisingly, the American market for diamond engagement rings began to shrink during the Great Depression. Sales volume declined and the buyers that remained purchased increasingly smaller stones. But the U.S. market for engagement rings was still 75% of De Beers’ sales. With Europe on the verge of war, it didn’t seem like a promising place to invest. If De Beers was going to grow, it had to reverse the trend. (Priceonomics 2014, 7)

And so, in 1938, De Beers turned to Madison Avenue for help. The company hired Gerold Lauck and the N. W. Ayer advertising agency, which commissioned a study with some astute observations. Namely, men were the key to the market. As Epstein wrote of the findings:

“Since ‘young men buy over 90% of all engagement rings’ it would be crucial to inculcate in them the idea that diamonds were a gift of love: the larger and finer the diamond, the greater the expression of love. Similarly, young women had to be encouraged to view diamonds as an integral part of any romantic courtship” (Priceonomics 2014, 7)

(….) The next time you look at a diamond, consider this: nearly every American marriage begins with a diamond because a bunch of rich white men in the 1940s convinced everyone that its size determines a man’s self worth. They created this convention — that unless a man purchases (an intrinsically useless) diamond, his life is a failure — while sitting in a room, racking their brains on how to sell diamonds that no one wanted. (Priceonomics 2014, 8)

A History of Market Manipulation

(….) What, you might ask, could top institutionalizing demand for a useless product out of thin air? Monopolizing the supply of diamonds for over a century to make that useless product extremely expensive. You see, diamonds aren’t really even that rare. (Priceonomics 2014, 10)

Before 1870, diamonds were very rare. They typically ended up in a Maharaja’s crown or a royal necklace. In 1870, enormous deposits of diamonds were discovered in Kimberley, South Africa. As diamonds flooded the market, the financiers of the mines realized they were making their own investments worthless. As they mined more and more diamonds, they became less scarce and their price dropped. (Priceonomics 2014, 10)

The diamond market may have bottomed out were it not for an enterprising individual by the name of Cecil Rhodes. He began buying up mines in order to control the output and keep the price of diamonds high. By 1888, Rhodes controlled the entire South African diamond supply, and in turn, essentially the entire world supply. One of the companies he acquired was eponymously named after its founders, the De Beers brothers. (Priceonomics 2014, 10)

Building a diamond monopoly isn’t easy work. It requires a balance of ruthlessly punishing and cooperating with competitors, as well as a very long term view. For example, in 1902, prospectors discovered a massive mine in South Africa that contained as many diamonds as all of De Beers’ mines combined. The owners initially refused to join the De Beers cartel, and only joined three years later after new owner Ernest Oppenheimer recognized that a competitive market for diamonds would be disastrous for the industry. In Oppenheimer’s words: (Priceonomics 2014, 10-11)

“Common sense tells us that the only way to increase the value of diamonds is to make them scarce, that is to reduce production.” (Priceonomics 2014, 11)

(….) We covet diamonds in America for a simple reason: the company that stands to profit from diamond sales decided that we should. De Beers’ marketing campaign single handedly made diamond rings the measure of one’s success in America. Despite diamonds’ complete lack of inherent value, the company manufactured an image of diamonds as a status symbol. And to keep the price of diamonds high, despite the abundance of new diamond finds, De Beers executed the most effective monopoly of the 20th century. (Priceonomics 2014, 13)

~ ~ ~

The history of De Beers’ ruthless behavior in its drive to maintain its monopoly is well documented. They were so successful at creating and cornering the market that eventually such a monstrosity as blood diamonds could exist. But that is another story. The moral of this story is that when it comes to capitalism there is really no such thing as intrinsic value or a “free market,” and that slick marketing can make a turd sell for the price of a diamond.

Upon this market manipulation economists built a house of cards that overlooked the monopolist’s manipulations and instead claimed diamonds are expensive because they are rare. Diamonds are bullshit, and by extension modern economics’ scarcity theory of value is largely bullshit too.

Sack the Economists

And Disband the Departments of The Walking Dead

In 1994 Paul Ormerod published a book called The Death of Economics. He argued economists don’t know what they’re talking about. In 2001 Steve Keen published a book called Debunking Economics: the naked emperor of the social sciences, with a second edition in 2011 subtitled The naked emperor dethroned?. Keen also argued economists don’t know what they’re talking about. (Davies 2015, 1)

Neither of these books, nor quite a few others, has had the desired effect. Mainstream economics has sailed serenely on its way, declaiming, advising, berating, sternly lecturing, deciding, teaching, pontificating. Meanwhile half of Europe and many regions and groups in the United States are in depression, and fascism is making a comeback. The last big depression spawned Hitler. This one is promoting Golden Dawn in Greece and similar extremist movements elsewhere. In the anglophone world a fundamentalist right-wing ideology is enforcing an increasingly narrow political correctness centred on “free” markets and the right of the rich to do and say whatever they like. “Freedom”, but only for some, and without responsibility. (Davies 2015, 1-2)

Evidently Ormerod and Keen were too subtle. It’s true their books also get a bit technical at times, especially Keen’s, but then they were addressing the profession, trying to bring it to its senses, to reform it from the inside. That seems to have been their other mistake. They produced example after example of how mainstream ideas fail, but still they had no effect. I think the message was addressed to the wrong audience, and was just too subtle. Economics is naked and dead, but never mind the stink, just prop up the corpse and carry on. (Davies 2015, 2)

Oh, but look! The corpse is moving. It’s getting up and walking. Time to call in John Quiggin, author of Zombie Economics: how dead ideas still walk among us. Perhaps he’ll show us how to shoot it in the head, or whatever it takes to finally stop a zombie. (Davies 2015, 2)

Well, I think it’s clear we can’t be too subtle. We need to speak in plain English, to everyone, and get straight to the point. Economists don’t know what they’re talking about. We should remove economists from positions of power and influence. Get them out of treasuries, central banks, media, universities, where ever they spread their baleful ignorance. (Davies 2015, 2)

Economists don’t know how businesses work, they don’t know how financial markets work, they can’t begin to do elementary accounting, they don’t know where money comes from nor how banks work, they think private debt has no effect on the economy, their favourite theory is a laughably irrelevant abstraction and they never learnt that mathematics on its own is not science. They ignore well-known evidence that clearly contradicts their theories. (Davies 2015, 2-3)

Other academics should look into this discipline called economics that lurks in their midst. Practitioners of proper academic rigour, like historians, ecologists, physicists, psychologists, systems scientists, engineers, even lawyers, will be shocked. Academic economics is an incoherent grab bag of mathematical abstraction, assertion, failure to heed observations, misrepresentation of history and sources, rationalisation of archaic money-lending practices, and wishful thinking. It missed the computational boat that liberated other fields from old analytical mathematics and overly-restrictive assumptions. It is ignorant of major fields of modern knowledge in biology, ecology, psychology, anthropology, physics and systems science. (Davies 2015, 3)

Though many economists themselves may not realise it, economics is an ideology rationalised by a dog’s breakfast of superficial arguments and defended by dense thickets of jargon and arcane mathematics. The ideology is an old one: the rich and powerful know best, the rest of us are here to serve them. (Davies 2015, 3)

Power to Choose the Mismeasure of Humanity

If you push enough oats into a horse some will spill out and feed the sparrows.

Horse and Sparrow Economic Theory

The rich man may feast on caviar and champagne, while the poor woman starves at his gate. And she may not even take the crumbs from his table, if that would deprive him of his pleasure in feeding them to his birds.

Gauthier 1986, 218, Morals by Agreement, Oxford University Press

The power to choose the measure of success

The successful campaign to eliminate distributional issues from the core of the economic discipline has its mirror image in the popularity of GDP as the measure of economic success of a nation. While the pioneer of national accounting (i.e., GDP), Simon Kuznets, explicitly said that GDP should not be used as a measure of welfare, and few economists would explicitly advocate such use, it is also true that economists as a group have done precious little to counter the popular opinion that growth, in the sense of maximization of GDP, should be the main goal of economic policy.

GDP is the money value of final goods and services that an economy produces in a quarter or a year (i.e., not including those goods and services used as inputs in production of other goods and services). This definition makes it … a reasonable yardstick of how much money moved around in a quarter or a year, and therefore captures to some extent how much economic activity in money terms there was in that period. It is a poor measure of actual activity in absolute terms due to using money rather than physically measuring human activity or indicators of human activity (e.g., how many tons of material were moving around in a year, or how many bits of information were exchanged in a year). Some activity that commands a large premium in money terms for institutional reasons, like investment banking, even if it is only one powerful person doing a moderate amount of work, will count the same as activities of hundreds of factory workers and much more than the activity of millions of housewives. Societal changes like providing more institutional childcare or reining in the market power of investment banks can make a huge difference in terms of measured GDP, without significantly changing the actual activities performed. Because of this reliance on using money valuations, GDP has severe issues with accurately measuring technological progress. (Häring et al. 2012, 28-29)

This method of measuring economic activity has two things going for it. It makes the mathematics a lot easier than measuring in a sensible way. And it conforms with the implicit assumption of mainstream economics that an extra dollar is worth the same to a poor person as it is to a rich person, just as it makes no differentiation between types of activity, for instance whether they are good (e.g., charitable work) or bad (e.g., criminal activity). If a hedge fund manager makes five billion dollars in a good year, as John Paulson reportedly did in 2010 (Burton and Kishan 2011), this is just as good in GDP terms as 13.7 million people living on a dollar a day doubling their incomes. (Häring et al. 2012, 29)
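A quick back-of-the-envelope check of that comparison, using only the figures given in the quote (a sketch, not a calculation from the book):

# 13.7 million people gaining an extra dollar a day adds about as much
# to GDP as one $5 billion hedge fund payout (figures from the quote above).
people = 13_700_000
extra_per_day = 1.0          # doubling a $1/day income adds $1 per day
yearly_gain = people * extra_per_day * 365
print(f"${yearly_gain / 1e9:.1f} billion per year")   # prints about $5.0 billion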

Policies that treat human beings as social creatures and try to reach the best results in the most important dimensions of human goals cannot flag their success with equally prominent and simple statistical measures like a single number where higher is “better.” The rich and wealthy benefit most from this way of measuring the economic success of a nation, since it de-emphasizes the gains of the mass of low-income people relative to those of a minority of rich people. As far as nations are concerned, it benefits nations that champion the policies favored by this approach, with the US being foremost among these. (Häring, Norbert and Niall Douglas. Economists and the Powerful: Convenient Theories, Distorted Facts, Ample Rewards. New York: Anthem Press, 2012, pp. 28-29.)

~ ~ ~

LET’S STOP PRETENDING UNEMPLOYMENT IS VOLUNTARY

Unless you have a PhD in economics, you probably think it uncontroversial to argue that we should be concerned about the unemployment rate. Those of you who lost a job, or who have struggled to find a job on leaving school, college, or a university, are well aware that unemployment is a painful and dehumanizing experience. You may be surprised to learn that, for the past thirty-five years, the models used by academic economists and central bankers to understand how the economy works have not included unemployment as a separate category. In almost every macroeconomic seminar I attended, from 1980 through 2007, it was accepted that all unemployment is voluntary. (Farmer 2017, 47)

In 1960, almost all macroeconomists talked about involuntary unemployment and they assumed, following Keynes, the quantity of labor demanded is not equal to the quantity of labor supplied. That view of economics was turned on its head, almost single-handedly, by Robert Lucas. Lucas persuaded macroeconomists that it makes no sense to talk about disequilibrium in any market and he initiated a revolution in macroeconomics that reformulated the discipline using pre-Keynesian classical assumptions. (Farmer 2017, 47)

The idea that all unemployment is voluntary is called the equilibrium approach to labor markets. Lucas wrote his first article on this idea in 1969 in a coauthored paper with Leonard Rapping. His ideas received a big boost during the 1980s when Finn Kydland, Edward C. Prescott, John Long, and Charles Plosser persuaded macroeconomists to use a mathematical approach, called the Ramsey growth model, as a new paradigm for business cycle theory. The theory of real business cycles, or RBCs, was born. According to this theory, we should think about consumption, investment, and employment “as if” they were the optimal choices of a single representative agent with superhuman perception of the probabilities of future events. (Farmer 2017, 47-48)
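For readers who have not met it, the representative-agent problem Farmer is describing is usually written in something like the following textbook form (a standard statement of the Ramsey/RBC setup, not Farmer's own notation):

\max_{\{c_t,\, n_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t,\, 1 - n_t)
\quad \text{subject to} \quad
k_{t+1} = (1 - \delta)\, k_t + z_t\, f(k_t, n_t) - c_t

Here β is the discount factor, δ the depreciation rate, n_t hours worked, and z_t a productivity shock. Because the single agent freely chooses n_t each period, any fall in employment shows up in the model as a voluntary choice of extra leisure, which is exactly the point Farmer is criticizing.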

Mismeasure of Homo Economicus

Of the total employment growth in the US between 2005 and 2015, insecure employment in the categories of independent contractors, on-call workers and workers provided by contracting companies or temp agencies accounted for fully 94 percent.3a Outsourcing of employment plays a big role in what David Weil describes as the “fissuring” of the workplace — depressing wages, magnifying income and wealth inequality, and generating a pervasive sense on the part of those at the wrong end of the fissuring that the world is cheating them, making them angry in return.4 On top of this, many Trump voters are angry that the government is giving handouts to “shirkers”, and sticking them with the tax bill. (Fullbrook et al. 2017, 65-66. Is Trump wrong on trade? A partial defense based on production and employment. In Trumponomics: Causes and Consequences.)

(….) [P]romotion of the low bar temporary contract or part-time “gig” jobs which comprised over 90% of Obama’s boasted job creation.20 (Fullbrook et al. 2017, 210. Donald Trump, American political economy and the “terrible simplificateurs.” In Trumponomics: Causes and Consequences.)

The US might be less rich than official statistics make us believe…. After all, measuring GDP is an art as much as a science. What is usually portrayed as a straightforward act of objective measurement involves value judgments and much guesswork.

— Häring et al. 2012, 33-34, in Economists and the Powerful

Power to measure success …

(….) These conventional metrics [i.e., GDP, misleading and deceptive unemployment metrics, etc.], however, ignored the fact that the QUALITY of the jobs was poor…. And the unemployment data ignores the quality of the types of jobs being created. Recent research by Professors Lawrence Katz of Harvard and Alan Krueger of Princeton based on non-labor force survey data (private sampling) suggests that “all of the net employment growth in the U.S. economy from 2005 to 2015 appears to have occurred in alternative work arrangements.”3 That is, standard jobs with predictable income, pension benefits, and health care coverage have disappeared and are being replaced by more precarious contract work and other types of alternative working arrangements. Quantifying this trend, the authors conclude the following:

“The increase in the share of workers in alternative work arrangements from 10.1 percent in 2005 to 15.8 percent in 2015 implies that the number of workers employed in alternative arrangement increased by 9.4 million (66.5 percent), from 14.2 million in February 2005 to 23.6 million in November 2015.”

Thus, these figures imply that employment in traditional jobs (standard employment arrangements) slightly declined by 0.4 million (0.3 percent) from 126.2 million in February 2005 to 125.8 million in November 2015. Unfortunately, we cannot determine the extent to which the replacement of traditional jobs with alternative work arrangements occurred before, during or after the Great Recession. (Fullbrook et al. 2017, 326-327. Explaining the rise of Donald Trump. In Trumponomics: Causes and Consequences.)
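The arithmetic behind those implied figures can be reconstructed from the quoted shares and counts (a rough check; the small differences from the 126.2 and 0.4 million cited above come from rounding in the published numbers):

# Back out total and "traditional" employment from the quoted figures.
alt_2005, alt_2015 = 14.2e6, 23.6e6      # workers in alternative arrangements
share_2005, share_2015 = 0.101, 0.158    # their share of all workers

total_2005 = alt_2005 / share_2005       # about 140.6 million workers
total_2015 = alt_2015 / share_2015       # about 149.4 million workers

trad_2005 = total_2005 - alt_2005        # about 126.4 million traditional jobs
trad_2015 = total_2015 - alt_2015        # about 125.8 million traditional jobs
print(f"{(trad_2015 - trad_2005) / 1e6:.1f} million")   # prints about -0.6 million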

(….) The final change I want to draw attention to is the increasing precarity of the U.S. working-class. They’re increasingly employed in part-time jobs … and in “alternative” work arrangements. As Lawrence Katz and Alan Krueger (2016) have shown, just in the past decade, the percentage of American workers engaged in alternative work arrangements — defined as temporary help agency workers, on-call workers, contract workers, and independent contractors or freelancers — rose from 10.1 percent (in February 2005) to 15.8 percent (in late 2015). And it turns out, the so-called gig economy is characterized by the same unequalizing, capital-labor dynamics as the rest of the U.S. economy.

What is clear from this brief survey of the changes in the condition of the U.S. working-class in recent decades is that, while American workers have created enormous additional income and wealth, most of the increase has been captured by their employers and a tiny group at the top as workers have been forced to compete with one another for new kinds of jobs, with fewer protections, at lower wages, and with less security than they once expected. And the period of recovery from the Second Great Depression has done nothing to change that fundamental dynamic. (Fullbrook et al. 2017, 350-351. Class and Trumponomics. In Trumponomics: Causes and Consequences.)

Notes

3a Lawrence Katz and Alan Krueger, 2016, “The rise and nature of alternative work arrangements in the US, 1995-2015”, March 29. By the end of 2015, workers in the authors’ ‘alternative’ employment constituted 16 percent of total workers. (Fullbrook et al., 2017, 66)
3b https://krueger.princeton.edu/sites/default/files/akrueger/files/katz_krueger_cws__march_29_20165.pdf
4 David Weil, 2014, The Fissured Workplace: Why Work Became So Bad For So Many and What Can Be Done To Improve It, Harvard University Press.
20 “Nearly 95% of New Jobs During Obama Era were Contract, or Part Time.” Investing.com, 21 December 2016. Accessed at https://www.investing.com/news/economy-news/nearly-95-of-all-job-growth-during-obama-era-part-time,-contract-work-449057

~ ~ ~

[T]he jobs shifted away to be done by separate employers pay low wages; provide limited or often no health care, pension, or other benefits; and offer tenuous job security. Moreover, workers in each case received pay or faced workplace conditions that violated one or more workplace laws…. In the late 1980s and early 1990s, many companies, facing increasingly restive capital markets, shed activities deemed peripheral to their core business models: out went janitors, security guards, payroll administrators, and information technology specialists…. Even lawyers who handle our business transactions and consultants who work for well-known accounting companies may now have an arm’s-length relationship with those by whom we think they are employed. By shedding direct employment, lead business enterprises select from among multiple providers [i.e., ‘preferred vendors’ as MSFT calls them] of those activities and services formerly done inside the organization, thereby substantially reducing costs [they play vendors off of one another based on cost and create what is known in the recruiting/staffing industry as ‘the death of the middle man’ race to the bottom] and dispatching the many responsibilities connected to being the employer of record [saving as much as ~30% in employee benefits no longer paid]. Information and communication technologies have enabled this hidden transformation of work…. By shedding employment to other parties, lead companies change a wage-setting problem into a contracting [and price] decision. The result is stagnation of real wages [and loss of employee benefits] for many of the jobs formerly done inside.

Weil 2014, 3-4

David Weil’s book The Fissured Workplace sheds light on the extent and nature of this “shedding” of employees by corporations. The evidence shows that increasingly employers are forcing workers into temporary, contract positions, or part-time “gig” jobs in a variety of fields. Female workers suffer most heavily in this new fissured economy, as work in traditionally feminine fields like education and medicine has been declining and shifting to the use of contract workers. The disappearance of conventional full-time, 9 a.m. to 5 p.m. work has hit every demographic. Krueger, a former chairman of the White House Council of Economic Advisers, was surprised by the finding. “Workers seeking full-time, steady work have lost,” said Krueger.

But it would be a mistake to believe that highly skilled and highly educated technology/knowledge workers are immune from this kind of fissuring, for it is continuing apace within the major global technology corporations — known as “lead companies” — like Microsoft, Google, Facebook, Wells Fargo (and banks in general), etc., which continue to lay off entire divisions and groups only to rehire them as “contingent” workers employed by one of the lead firm’s designated “third party vendors,” “preferred vendors,” or “partners.” The worker/employer power balance of entire industries can thereby be shifted in the corporations’ favor through opaque global supply chains, technological smoke screens, and employer-delegated deception that hide the real nature of relationships designed to disadvantage workers economically.

It is now possible to do to white-collar, high-skilled, highly educated workers what has already been done to blue-collar, low-skilled, less-educated workers, except that it is no longer necessary to “export” those jobs overseas when such technology workers can be shed by lead corporations and forced to work locally for “third party vendors” (aka staffing companies), sometimes at a 50% to 60% reduction in family income and sometimes with little or no employee benefits (e.g., healthcare, sick days, vacation days). Yahya, under the section “Statement of the Problem,” writes:

The emergence of knowledge-based economies (KBEs) in developing countries has the potential to leapfrog these economies to compete in the globalized services sector (Rooney et al., 2003). While reducing labour costs is a main reason for outsourcing, it is not the only driver: other determinants include the need to improve quality of service and providing new services for customers (Kaplan, 2002). The rise of the KBEs also illustrates that the distinction between white collar and blue collar workers is an archaic concept because both categories are subjected to the same conditionalities of business cost reduction and profit maximization. The advance of technological developments increased their commonalities, which made white collar service employment just as vulnerable as blue collar work. The convergence of the Information and Communications Technology (ICT) sector has fuelled economic growth but has increased the displacement of service jobs from developed to developing economies (Rooney et al., 2003). The rise of the global IT industry and the outsourcing of various services to lower-cost developing countries are performed through the spatially unbundling of tasks and relocating them to the most productive locations (Wilson, 1998).

(Yahya 2011, 621, emphasis added)

Yahya is mistaken in his claim that this form of the “fissured workplace” improves quality of service, for as the evidence shows it actually reduces it. When family wage earners are forced to become “contingent” workers in the “gig” economy, they are effectively turned into precarious workers who have far less social security in terms of job stability, wages, and benefits. Typically they are forced to work longer hours for less pay and fewer employee benefits, or sometimes none at all. The twin objectives of “fissuring” — reducing costs while simultaneously improving quality of service — turn out in reality to be a chimera, producing overstressed and underpaid precarious “contingent” workers who suffer increased socio-economic anxiety and workload exhaustion.

This has an overall destabilizing social impact on family wage earners — especially single women with children — and on society in general. These “external” costs to families and society are rarely considered within economics, typically being treated as “externalities” exogenous to econometric analysis. With the decline of the power of unions, workers — in all classes and domains, from blue-collar to white-collar — are being subjected to increasing wage-suppression tactics, much of which is hidden behind an intentional lack of transparency and technological smoke screens that give major corporations an asymmetrical information advantage over workers in setting wages and compensation in the so-called “free market,” which is in reality a highly rigged market.

A Work in Progress

[E]volutionary economics is a work in progress…. The term “evolutionary economics” has been used to denote a wide range of economic research and writing…. [T]he authors, believe that the value of a broad theoretical perspective, such as that of evolutionary economics, should be judged in terms of the strength and quality of the understanding of empirical phenomena and the illumination of policy questions provided by research oriented by that perspective. We believe that the research done over the last thirty years oriented by evolutionary economic theory has amply demonstrated the value of that theory, and we want to increase the number of scholars who appreciate that. (Nelson et. al. 2018)

(….) At the root of the difference between evolutionary economics and economics of the sort presented in today’s standard textbooks is the conviction of evolutionary economists that continuing change, largely driven by innovation, is a central characteristic of modern capitalist economies, and that this fact ought to be built into the core of basic economic theory. Economies are always changing, new elements are always being introduced and old ones disappearing. Of course economic activities and economic sectors differ in the pace and character of change. In many parts of the economy innovation is rapid and continuing, and the context for economic action taking is almost always shifting and providing new opportunities and challenges. And while in some activities and sectors the rate of innovation is more limited, attempts at doing something new are going on almost everywhere in the economy, and so too change that can make obsolete old ways of doing things. Neoclassical theory, which is a significant influence on how most professionally trained economists think, represses this. (Nelson et. al. 2018)

[To be continued … dd]

Literature Only Economics vs. Practical Problem Solving Economics

This was a paper hard to read. It does not mean that the paper was badly written. The difficulty of the task that the author sought enforced him to write this difficult paper. After struggling a week in reading the paper, I am rather sympathetic with Delorme. In a sense, he was unfortunate, because he came to be interested in complexity problems by encountering two problems: (1) road safety problem and (2) the Regime of Interactions between the State and the Economy (RISE). I say “unfortunate,” because these are not good problems with which to start the general discussion on complexity in economics, as I will explain later. Of course, one cannot choose the first problems one encounters and we cannot blame the author on this point, but in my opinion the good starting problems are crucial to further development of the argument of complexity in economics.

Let us take the example of the beginning of modern physics. Do not think of Newton. It is a final accomplishment of the first phase of modern physics. There will be no many people who object that modern physics started by two (almost simultaneous) discoveries: (1) Kepler’s laws of orbital movements and (2) Galileo’s law of falling bodies and others. The case of Galilei can be explained by a gradual rise of the spirit of experiments. Kepler’s case is more interesting. One of crucial data for him was Tycho Brahe’s observations. He improved the accuracy of observation about 1 digit. Before Brahe for more than one thousand years, accuracy of astronomical observations was about 1 tenth of a degree (i.e. 6 minutes in angular unit system). Brahe improved this up to an accuracy of 1/2 minute to 1 minute. With this data, Kepler was confident that 8 minutes of error he detected in Copernican system was clear evidence that refutes Copernican and Ptolemaic systems. Kepler declared that these 8 minutes revolutionize whole astronomy. After many years of trials and errors, he came to discover that Mars follows an elliptic orbit. Newton’s great achievement was only possible because he knew these two results (of Galilei and Kepler). For example, Newton’s law of gravitation was not a simple result of induction or abduction. The law of square-inverse was a result of halflogical deduction from Kepler’s third law.
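[Editorial aside: the deduction from Kepler's third law alluded to here can be reconstructed in a few lines for the simplified case of a circular orbit (the standard textbook sketch, not Newton's own derivation):

a = \frac{v^{2}}{r} = \frac{4\pi^{2} r}{T^{2}}, \qquad T^{2} = k\, r^{3} \;\Longrightarrow\; a = \frac{4\pi^{2}}{k}\, \frac{1}{r^{2}}

That is, Kepler's third law combined with circular-orbit kinematics already forces the attracting force to fall off as the inverse square of the distance.]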

I cite this example, because this explains in which conditions a science can emerge. In the same vein, the economics of complexity (or more correctly economics) can be a good science when we find this good starting point. (Science should not be interpreted in a conventional meaning. I mean by science as a generic term for a good framework and system of knowledge). For example, imagine that solar system was composed of two binary stars and earth is orbiting with a substantial relative weight. It is easy to see that this system has to be solved as three-body problem and it would be very difficult for a Kepler to find any law of orbital movement. Then the history of modern physics would have been very different. This simple example shows us that any science is conditioned by complexity problems, or by tractable and intractable problem of the subject matter or objects we want to study.

The lesson we should draw form the history of modern physics is a science is most likely to start from more tractable problems and evolve to a state that can incorporate more complex and intractable phenomena. I am afraid that Delorme is forgetting this precious lesson. Isn’t he imagining that an economic science (and social science in general) can be well constructed if we gain a good philosophy and methodology of complex phenomena?

I do not object that many (or most) of economic phenomena are deeply complex ones. What I propose as a different approach is to climb the complexity hill by taking a more easy route or track than to attack directly the summit of complexity. Finding this track should be the main part of research program but I could not find any such arguments in Delorme’s paper. (Yoshinori Shiozawa, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 10/10/2017.)

1) My paper can be viewed as an exercise in problem solving in a context of empirical intractability in social science. It was triggered by the empirical discovery of complex phenomena raising questions that are not amenable to available tools of analysis, i.e., are intractable. Then the problem is to devise a model and tools of analysis enabling to cope with these questions. Then, unless someone comes with a complex system analysis or whatever tool that solves the problem at stake, a thing I would welcome, I can’t think of any other way to proceed than focusing on the very cognitive process of knowledge creation and portraying it as a reflective, open-ended, problem-first cognitive behavioral endeavour. It is an approach giving primacy both to looking and discovering rather than to assuming and deducing, and to complexity addressed in its own right rather than to complex systems in which complexity is often viewed tautologically as the behavior of complex systems. The outcome is a new tool of analysis named Deep Complexity in short. I believe that the availability of this tool provides a means to take more seriously the limitations of knowledge in a discipline like economics in which inconclusive and non demonstrative developments are not scarce when sizeable issues are involved.

2) Yoshinori Shiozawa raises the question of where to start from, from tractable problems or from the intractable? He advocates the former and suggests to “evolve to a state that can incorporate more complex and intractable phenomena”. But then, with what tools of analysis for intractable phenomena? And I would have never addressed intractability if I had not bumped into unresolved empirical obstacles. Non commutative complementarity is at work here: starting with the tractable in a discipline dominated by non conclusive and non demonstrative debates doesn’t create any incentive to explore thoroughly the intractable. It is even quite intimidating for those who engage in it. This sociology of the profession excludes de facto intractability from legitimate investigation. Then starting from the possibility of intractability incorporates establishing a dividing line and entails a procedural theorizing in which classical analysis can be developed for tractable problems when they are identified, otherwise the deep complexity tool is appropriate, before a substantive theorizing can be initiated. It is a counterintuitive process: complexification comes first, before a further necessary simplification or reduction. (Robert Delorme, (WEA Conference), 11/30/2017.)

In my first comment in this paper, I have promised to argue the track I propose. I could not satisfy my promise. Please read my second post for the general comments in discussion forum. I have given a short description on the working of an economy that can be as big as world economy. It explains how an economy works. The working of economy (not economics) is simple but general equilibrium theory disfigured it. The track I propose for economics is to start from these simple observations.

As I wrote in my first post, modern science started from Galileo Galilei’s physics and Johannes Kepler’s astronomy. We should not imagine that we can solve a really difficult problem (Delorme’s deep complexity) in a simple way. It is not a wise way to try to attack deep complexity unless we have succeeded to develop a sufficient apparatus by which to treat it. (Yoshinori Shiozawa, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 11/30/2017.)

Dear Dr Shiozawa, it seems that we are not addressing the same objects of inquiry. Yours seems to stand at an abstract level of modern science in general. Mine is much less ambitious: it is grounded in research on how to deal with particular, empirically experienced problems in real economic and social life, that appear intractable, and subject to scientific practice. Deep Complexity is the tool that is manufactured to address this particular problem. It may have wider implications in social science, but that is another story. (Robert Delorme, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 11/30/2017.)

You are attacking concrete social problems. I am rather a general theorist. That may be the reason for our differences of stance toward your problem.

Our situation reminds me the history of medicine. This is one of the oldest science and yet as the organism is highly complex system, many therapies remained symptomatic. Even though, they were to some extent useful and practical. I do not deny this fact. However, modern medicine is now changing its features, because biophysical theories and discoveries are changing medical research. Researchers are investigating the molecular level mechanism why a disease emerges. Using this knowledge, they can now design drugs at the molecular level. Without having a real science, this is not possible.

[Note Shiozawa’s implicit claim that medical science was not real science until the advent of molecular biology. No doubt molecular biology has opened up new domains of knowledge, but it is simply ludicrous to claim medicine wasn’t real science before it; many perfectly valid scientific discoveries made prior to, or without, molecular biology prove the assertion false. As Delorme states plainly below, this is scientism, not to mention an abysmal use of revisionist history for purely rhetorical purposes. For more examples of Shiozawa’s scientism and sophistry see Semantic Negligence, and for a description of literature-only economics see Payson 2017. For a good description of the kind of scientism Shiozawa is parroting see Pilkington 2016. And to cite one of Shiozawa’s own favorite go-to authorities (whose memory apparently fails him, since Andreski contradicts his claim on RWER), see Stanislav Andreski’s Social Sciences as Sorcery (1973, 22-23).]

Economics is still in the age of pre-Copernican stage. It would be hard to find a truth mechanism why one of your examples occurs. I understand your intention, if you want say by the word of “deep complexity” a set of problems that are still beyond our ability of cognition or analysis. We may take a method very different from the regular science and probably similar to symptomatology and diagnostics. If you have argue in this way, it would have made a great contribution to our forum on complexities in economics. This is what I wanted to argue as the third aspect of complexity, i.e. complexity that conditions the development of economics as science.

To accumulate symptomatic and diagnostic knowledge in economics is quite important but most neglected part of the present day economics. (Yoshinori Shiozawa, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 12/1/2017, italics added.)

It is interesting to learn that, as an economist and social scientist, I must be in a “pre-Copernican” stage. Although what this means is not totally clear to me, I take it as revealing that our presuppositions about scientific practice differ. You claim to know what is the most appropriate way of investigating the subject I address, and that this way is the methods and tools of natural science. I claim to have devised a way which works, without knowing if it is the most appropriate, a thing whose decidability would seem to be quite problematic. And the way I have devised meets the conditions of a reflective epistemology of scientific practice, in natural science as well as in social science. Your presupposition is that the application of the methods of natural science is the yardstick for social science. This is scientism.

My presupposition is that there may be a difference between them, and that one cannot think of an appropriate method in social science without having first investigated and formulated the problem that is presented by the subject. As a “general theorist”, your position is enjoyable. May I recall what Keynes told Harrod: “Do not be reluctant to soil your hands”. I am ready to welcome any effective alternative provided it works on the object of inquiry that is at stake. It is sad that you don’t bring such an alternative. As Herb Simon wrote, ”You can’t beat something with nothing”. I borrow from your own sentence that “if you had argued this way, it would have made a great contribution to our forum…” (Robert Delorme, A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science. In Economic Philosophy: Complexity in Economics (WEA Conference), 12/1/2017, italics added.)

False Apostles of Rationality

In April 1998, I traveled from London to the United States to interview several economics and finance professors. It was during this trip that I learned how derivatives had broken down the wall of skepticism between Wall Street and academia. My trip started at the University of Chicago, whose economists had become famous for their theories about market rationality. They argued that markets were supposed to reach equilibrium, which means that everyone makes an informed judgment about the risk associated with different assets, and the market adjusts so that the risk is correctly compensated for by returns. Also, markets are supposed to be efficient—all pertinent information about a security, such as a stock, is already factored into its price. (Dunbar 2011, 36-37)

At the university’s Quadrangle Club, I enjoyed a pleasant lunch with Merton Miller, a professor whose work with Franco Modigliani in the 1950s had won him a Nobel Prize for showing that companies could not create value by changing their mix of debt and equity. A key aspect of Miller-Modigliani (as economists call the theory) was that if a change in the debt-equity mix did influence stock prices, traders could build a money machine by buying and shorting (borrowing a stock or bond to sell it and then buying it back later) in order to gain a free lunch. Although the theory was plagued with unrealistic assumptions, the idea that traders might build a mechanism like this was prescient. (Dunbar 2011, 37)

Miller had a profound impact on the current financial world in three ways. He:

  1. Mentored academics who further developed his theoretical mechanism, called arbitrage.
  2. Created the tools that made the mechanism feasible.
  3. Trained many of the people who went to Wall Street and implemented it.

One of the MBA students who studied under Miller in the 1970s was John Meriwether, who went to work for the Wall Street firm Salomon Brothers. By the end of that decade, he had put into practice what Miller only theorized about, creating a trading desk at Salomon specifically aimed at profiting from arbitrage opportunities in the bond markets. Meriwether and his Salomon traders, together with a handful of other market-making firms, used the new futures contracts to find a mattress in securities markets that otherwise would have been too dangerous to trade in. Meanwhile, Miller and other academics associated with the University of Chicago had been advising that city’s long-established futures exchanges on creating new contracts linked to interest rates, stock market indexes, and foreign exchange markets. (Dunbar 2011, 37)

The idea of arbitrage is an old one, dating back to the nineteenth century, when disparities in the price of gold in different cities motivated some speculators (including Nathan Rothschild, founder of the Rothschild financial dynasty) to buy it where it was cheap and then ship it and sell it where it was more expensive. But in the volatile markets of the late 1970s, futures seemed to provide something genuinely different and exciting, bringing together temporally and geographically disparate aspects of buying and selling into bundles of transactions. Buy a basket of stocks reflecting an index, and sell an index future. Buy a Treasury bond, and sell a Treasury bond future. It was only the difference between the fundamental asset (called an underlying asset) and its derivative that mattered, not the statistics or economic theories that supposedly provided a benchmark for market prices. (Dunbar 2011, 38)

In the world Merton Miller lived in, the world of the futures exchanges (he was chairman emeritus of the Chicago Mercantile Exchange when I met him), they knew they needed speculators like Meriwether. Spotting arbitrage opportunities between underlying markets and derivatives enticed the likes of Salomon to come in and trade on that exchange. That provided liquidity to risk-averse people who wanted to use the exchange for hedging purposes. And if markets were efficient—in other words, if people like Meriwether did their job—then the prices of futures contracts should be mathematically related to the underlying asset using “no-arbitrage” principles. (Dunbar 2011, 38)
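For reference, the no-arbitrage relation Dunbar is alluding to is, for a simple index or bond future, the standard cost-of-carry formula (a textbook statement, not Dunbar's):

F = S\, e^{(r - q)\, T}

where S is the spot price, r the financing rate, q any income yield on the underlying (dividends or coupons), and T the time to expiry. If the traded future drifts away from this level, an arbitrageur can buy the cheap leg, sell the dear leg, and lock in the gap, which is precisely the business a desk like Meriwether's was built to do.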

Bending Reality to Match the Textbook

The next leg of my U.S. trip took me to Boston and Connecticut. There I met two more Nobel-winning finance professors—Robert Merton and Myron Scholes—who took Miller’s idea to its logical conclusion at a hedge fund called Long-Term Capital Management (LTCM). Scholes had benefited directly from Miller’s mentorship as a University of Chicago PhD candidate, while Merton had studied under Paul Samuelson at MIT. What made Merton and Scholes famous (with the late Fischer Black) was their contemporaneous discovery of a formula for pricing options on stocks and other securities. (Dunbar 2011, 38)

Again, the key idea was based on arbitrage, but this time the formula was much more complicated. The premise: A future or forward contract is very similar (although not identical) to the underlying security, which is why one can be used to synthesize exposure to the other. An option contract, on the other hand, is asymmetrical. It lops off the upside or downside of the security’s performance—it is “nonlinear” in mathematical terms. Think about selling options in the same way as manufacturing a product, like a car. How many components do you need? To manufacture a stock option using a single purchase of underlying stock is impossible because the linearity of the latter can’t keep up with the nonlinearity of the former. Finding the answer to the manufacturing problem meant breaking up the lifetime of an option into lots of little bits, in the same way that calculus helps people work out the trajectory of a tennis ball in flight. The difference is that stock prices zigzag in a way that looks random, requiring a special kind of calculus that Merton was particularly good at. The math gave a recipe for smoothly tracking the option by buying and selling varying amounts of the underlying stock over time. Because the replication recipe played catch-up with the moves in the underlying market (Black, Scholes, and Merton didn’t claim to be fortune-tellers), it cost money to execute. In other words you can safely manufacture this nonlinear financial product called an option, but you have to spend a certain amount of money trading in the market in order to do so. But why believe the math? (Dunbar 2011, 38-39)
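A minimal Python sketch of that replication recipe, assuming the standard Black-Scholes setting (a lognormal stock with constant volatility); the parameter values are invented for illustration. It rebalances a stock-plus-cash portfolio at discrete steps and shows the hedging proceeds landing close to the option's payoff:

import math, random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price and delta of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2), norm_cdf(d1)

# Invented parameters: $100 stock, at-the-money 1-year call, 5% rate, 20% vol.
S, K, T, r, sigma, steps = 100.0, 100.0, 1.0, 0.05, 0.2, 252
dt = T / steps
random.seed(1)

price0, delta = bs_call(S, K, T, r, sigma)
cash = price0 - delta * S              # sell the option, buy delta shares
for i in range(1, steps):
    # the stock takes a lognormal (geometric Brownian motion) step
    S *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
    cash *= math.exp(r * dt)           # cash earns the risk-free rate
    _, new_delta = bs_call(S, K, T - i * dt, r, sigma)
    cash -= (new_delta - delta) * S    # rebalance the share holding
    delta = new_delta

S *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
cash *= math.exp(r * dt)
print(f"option payoff {max(S - K, 0.0):.2f}  vs  replicating portfolio {delta * S + cash:.2f}")

With finer rebalancing the two numbers converge; that convergence is the precise sense in which the formula lets a trading desk "manufacture" the option out of stock and cash.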

The breakthrough came next. Imagine that the option factory is up and running and selling its products in the market. By assuming that smart, aggressive traders like Meriwether would snap up any mispriced options and build their own factory to pick them apart again using the mathematical recipe, Black, Scholes, and Merton followed in Miller’s footsteps with a no-arbitrage rule. In other words, you’d better believe the math because, otherwise, traders will use it against you. That was how the famous Black-Scholes formula entered finance. (Dunbar 2011, 39, emphasis added)

When the formula was first published in the Journal of Political Economy in 1973, it was far from obvious that anyone would actually try to use its hedging recipe to extract money from arbitrage, although the Chicago Board Options Exchange (CBOE) did start offering equity option contracts that year. However, there was now an added incentive to play the arbitrage game because Black, Scholes, and Merton had shown that (subject to some assumptions) their formula exorcised the uncertainty in the returns on underlying assets. (Dunbar 2011, 39)

Over the following twenty-five years, the outside world would catch up with the eggheads in the ivory tower. Finance academics who had clustered around Merton at MIT (and elsewhere) moved to Wall Street. Trained to spot and replicate mispriced options across all financial markets, they became trading superstars. By the time Meriwether left Salomon in 1992, its proprietary trading group was bringing in revenues of over $1 billion a year. He set up his own highly lucrative hedge fund, LTCM, which made $5 billion from 1994 to 1997, earning annual returns of over 40 percent. By April 1998, Merton and Scholes were partners at LTCM and making millions of dollars per year, a nice bump from a professor’s salary. (Dunbar 2011, 40)

(….) It is hard to overemphasize the impact of this financial revolution. The neoclassical economic paradigm of equilibrium, efficiency, and rational expectations may have reeled under the weight of unrealistic assumptions and assaults of behavioral economics. But here was the classic “show me the money” riposte. A race of superhumans had emerged at hedge funds and investment banks whose rational self-interest made the theory come true and earned them billions in the process. (Dunbar 2011, 40)

If there was a high priest behind this, it had to be Merton, who in a 1990 speech talked about “blueprints” and “production technologies” that could be used for “synthesizing an otherwise nonexistent derivative security.” He wrote of a “spiral of innovation,” wherein the existence of markets in simpler derivatives would serve as a platform for the invention of new ones. As he saw his prescience validated, Merton would increasingly adopt a utopian tone, arguing that derivatives contracts created by large financial institutions could solve the risk management needs of both families and emerging market nations. To see the spiral in action, consider an over-the-counter derivative offered by investment banks from 2005 onward: an option on the VIX index. If for some reason you were financially exposed to the fear gauge, such a contract would protect you against it. The new option would be dynamically hedged by the bank, using VIX futures, providing liquidity to the CBOE contract. In turn, that would prompt arbitrage between the VIX and the S&P 500 options used to calculate it, ultimately leading to trading in the S&P 500 index itself. (Dunbar 2011, 40-41)

As this example demonstrates, Merton’s spiral was profitable in the sense that every time a new derivative product was created, an attendant retinue of simpler derivatives or underlying securities needed to be traded in order to replicate it. Remember, for market makers, volume normally equates to profit. For the people whose job it was to trade the simpler building blocks—the “flow” derivatives or cash products used to manufacture more complex products—this amounted to a safe opportunity to make money—or in other words, a mattress. In some markets, the replication recipe book would create more volume than the fundamental sources of supply and demand in that market. (Dunbar 2011, 41)

The banks started aggressively recruiting talent that could handle the arcane, complicated mathematical formulas needed to identify and evaluate these financial replication opportunities. Many of these quantitative analysts—quants—were refugees from academic physics. During the 1990s, research in fundamental physics was beset by cutbacks in government funding and a feeling that after the heroic age of unified theories and successful particle experiments, the field was entering a barren period. Wall Street and its remunerative rewards were just too tempting to pass up. Because the real-world uncertainty was supposedly eliminated by replication, quants did not need to make the qualitative judgments required of traditional securities analysts. What they were paid to get right was the industrial problem of derivative production: working out the optimal replication recipe that would pass the no-arbitrage test. Solving these problems was an ample test of PhD-level math skills. (Dunbar 2011, 41)

On the final leg of my trip in April 1998, I went to New York, where I had brunch with Nassim Taleb, an option trader at the French bank Paribas (now part of BNP Paribas). Not yet the fiery, best-selling intellectual he subsequently became (author of 2007’s The Black Swan), Taleb had already attacked VAR in a 1997 magazine interview as “charlatanism,” but he was in no doubt about how options theory had changed the world. “Merton had the premonition,” Taleb said admiringly. “One needs arbitrageurs to make markets efficient, and option markets provide attractive opportunities for replicators. We are indeed lucky . . . the world of finance has agreed to resemble the textbook, in order to operate better.” (Dunbar 2011, 42)

Although Taleb would subsequently change his views about how well the world matched up with Merton’s textbook, the tidal wave of money churned up by derivatives in free market economics carried most people along in its wake.9 People in the regulatory community found it hard to resist this intellectual juggernaut. After all, many of them had studied economics or business, where equilibrium and efficiency were at the heart of the syllabus. Confronted with the evidence of derivatives market efficiency and informational advantages, why should they stand in the way? (Dunbar 2011, 42)

Arrangers as Market Makers

It is easy to view investment banks and other arrangers as mechanics who simply operated the machinery that linked lenders to capital markets. In reality, arrangers orchestrated subprime lending behind the scenes. Drawing on his experience as a former derivatives trader, Frank Partnoy wrote, “The driving force behind the explosion of subprime mortgage lending in the U.S. was neither lenders nor borrowers. It was the arrangers of CDOs. They were the ones supplying the cocaine. The lenders and borrowers were just mice pushing the button.”

Behind the scenes, arrangers were the real ones pulling the strings of subprime lending, but their role received scant attention. One explanation for this omission is that the relationships between arrangers and lenders were opaque and difficult to dissect. Furthermore, many of the lenders who could have “talked” went out of business. On the investment banking side, the threat of personal liability may well have discouraged people from coming forward with information.

The evidence that does exist comes from public documents and the few people who chose to spill the beans. One of these is William Dallas, the founder and former chief executive officer of a lender, Ownit. According to the New York Times, Dallas said that investment banks pressured his firm to make questionable loans for packaging into securities. Merrill Lynch explicitly told Dallas to increase the number of stated-income loans Ownit was producing. The message, Dallas said, was obvious: “You are leaving money on the table—do more [low-doc loans].”

Publicly available documents echo this depiction. An annual report from Fremont General portrayed how Fremont changed its mix of loan products to satisfy demand from Wall Street:

The company [sought] to maximize the premiums on whole loan sales and securitizations by closely monitoring the requirements of the various institutional purchasers, investors and rating agencies, and focusing on originating the types of loans that met their criteria and for which higher premiums were more likely to be realized. (Kathleen C. Engel and Patricia A. McCoy, The Subprime Virus: Reckless Credit, Regulatory Failure, and Next Steps, 2011, 56-57)