Category Archives: Mathematics

Genuinely Creative Thought

2.2 The evolution of the mind: consciousness, creativity, psychological indeterminacy

If consciousness is accepted as real, it seems reasonable that one would allow for an active consciousness, for us to be aware of the experience of thinking and to engage in that experience. If we didn’t allow for engaged and active thought in consciousness, then consciousness would seem to be a passive “ghost in the machine” sort of consciousness. Siegel (2016) would appear to be in agreement with this notion insofar as he sees the mind as a conscious regulator of energy and information flow. But if we allow consciousness to be real in this manner, we allow the possibility of thoughts which exist for no reason other than “we” (the phenomenological “I” (Luijpen, 1969)) think them consciously and actively. The existence of such a thought does not itself break the principle of sufficient reason (Melamed and Lin, 2015), but the “I” thinking them might. That the “I” brings into being a conscious thought might be the terminus of a particular chain of causation. (Markey-Towler 2018, 8)

We call such thoughts “genuinely creative thought”: they are thoughts which exist for no reason other than that they are created by the phenomenological “I”. The capability to imagine new things is endowed by the conscious mind. This poses a difficulty for mathematical models which by their nature (consisting always of statements A ⇒ B) require the principle of sufficient reason to hold. Active conscious thought, insofar as it may be genuinely creative is indeterminate until it exists. However, that we might not be able to determine the existence of such thoughts before they are extant does not preclude us from representing them once their existence is determined. Koestler (1964) taught that all acts of creation are ultimately acts of “bisociation”, that is, of linking two things together in a manner hitherto not the case. Acts of creation, bisociations made by the conscious mind, are indeterminate before they exist, but once they exist they can be represented as relations Rhh’ between two objects of reality h,h’. We may think of such acts of creation as akin to the a priori synthetic statements of which Kant (1781) spoke. (Markey-Towler 2018, 8)

This is no matter of mere assertion. Roger Penrose (1989) holds, and it is difficult to dismiss him, that the famous theorems of Kurt Gödel imply something unique exists in the human consciousness. The human mind can “do” something no machine can. Gödel demonstrated that within certain logical systems there would be true statements which could not be verified within the confines of the logical system but would require verification by the human consciousness. The consciousness realises connections, in this case truth-values, which cannot be realised by the machinations of mathematical logic alone. It creates. The human mind can therefore (since we have seen those connections made) create connections in the creation of mathematical systems irreducible to machination alone. There are certain connections which consciousness alone can make. (Markey-Towler 2018, 9)

The problem of conscious thought goes a little further though. New relations may be presented to the consciousness either by genuinely creative thought or otherwise, but they must be actually incorporated into the mind, Rhh’ ∈ g(H) ⊂ μ, and take their place alongside others in the totality of thought g(H) ⊂ μ. Being a matter of conscious thought by the phenomenological “I”, the acceptance or rejection of such relations is something we cannot determine until the “I” has determined the matter. As Cardinal Newman demonstrated in his Grammar of Assent (1870), connections may be presented to the phenomenological “I”, but they are merely presented to the “I” and therefore inert until the “I” assents to them, accepts and incorporates them into that individual’s worldview. The question of assent to various connections presented to the “I” is an either/or question which Newman recognises is ultimately free of the delimitations of reason and a matter for resolution by the “I” alone. (Markey-Towler 2018, 9)

There are thus two indeterminacies introduced to any psychological theory by the existence of consciousness:

  1. Indeterminacy born of the possibility of imagining new relations Rhh’ in genuinely creative thought.
  2. Indeterminacy born of the acceptance or rejection by conscious thought of any new relation Rhh’ and its incorporation or not into the mind μ = {H, g(H)}. (Markey-Towler 2018, 9)

The reality of consciousness thus places a natural limit on the degree to which we can determine the processes of the mind, determine those thoughts which will exist prior to their existence. For psychology, this indeterminacy of future thought until its passage and observance is the (rough) equivalent of the indeterminacy introduced to the physical world by Heisenberg’s principle, the principle underlying the concept of the “wave function” upon which an indeterminate quantum mechanics operates (under certain interpretations (Kent, 2012; Popper, 1934, Ch.9)). (Markey-Towler 2018, 9-10)
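For reference, the principle alluded to here can be written in standard notation (my addition, not part of the quoted text): position x and momentum p cannot both be determined with arbitrary precision, since

    \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

and it is this bound that motivates the indeterminist readings of quantum mechanics cited above.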

2.3 Philosophical conclusions

We hold to the following philosophical notions in this work. The mind is that element of our being which experiences our place in the world and relation to it. We are conscious when we are aware of our place in and relation to the world. We hold to a mix of the “weak Artificial Intelligence” and mystic philosophies that mind is emergent from the brain and that mind, brain and body constitute the individual existing in a monist reality. The mind is a network structure μ = {H, g(H)} expressing the connections g(H) the individual construes between the objects and events in the world H, an architecture within which and upon which the psychological process operates. The reality of consciousness introduces an indeterminacy into that architecture which imposes a limit on our ability to determine the psychological process. (Markey-Towler 2018, 10)
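To make the notation concrete, here is a toy sketch in Python of the network structure just described (the objects and connections are my own invented examples, not Markey-Towler's):

    # The mind mu = {H, g(H)}: H is the set of objects/events the individual
    # construes, g(H) the set of connections R_hh' drawn between them.
    H = {"clouds", "rain", "umbrella"}
    g = {("clouds", "rain"), ("rain", "umbrella")}  # relations R_hh'
    mu = (H, g)

    # A "genuinely creative thought" adds a relation not previously present:
    g.add(("clouds", "umbrella"))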

~ ~ ~

My own philosophical views differ from the assumptions underlying Markey-Towler. To say that “mind is emergent from the brain and that mind, brain and body constitute the individual existing in a monist reality” is essentially to assert a form of physical monism that claims mind “emerged” from matter, which really explains nothing. If the universe (and humans) are merely mechanisms and mind is reducible to matter, we would never be able to be aware of our place in and relation to the universe, nor would there ever be two differing philosophical interpretations of our place in the universe. The hard problem (the mind-brain question) in neuroscience remains a debated and unsettled question. There are serious philosophical weaknesses in mechanistic materialism as a philosophical position, as is discussed in Quantum Mechanics and Human Values (Stapp 2007 and 2017).

False Apostles of Rationality

In April 1998, I traveled from London to the United States to interview several economics and finance professors. It was during this trip that I learned how derivatives had broken down the wall of skepticism between Wall Street and academia. My trip started at the University of Chicago, whose economists had become famous for their theories about market rationality. They argued that markets were supposed to reach equilibrium, which means that everyone makes an informed judgment about the risk associated with different assets, and the market adjusts so that the risk is correctly compensated for by returns. Also, markets are supposed to be efficient—all pertinent information about a security, such as a stock, is already factored into its price. (Dunbar 2011, 36-37)

At the university’s Quadrangle Club, I enjoyed a pleasant lunch with Merton Miller, a professor whose work with Franco Modigliani in the 1950s had won him a Nobel Prize for showing that companies could not create value by changing their mix of debt and equity. A key aspect of Miller-Modigliani (as economists call the theory) was that if a change in the debt-equity mix did influence stock prices, traders could build a money machine by buying and shorting (borrowing a stock or bond to sell it and then buying it back later) in order to gain a free lunch. Although the theory was plagued with unrealistic assumptions, the idea that traders might build a mechanism like this was prescient. (Dunbar 2011, 37)

Miller had a profound impact on the current financial world in three ways. He:

  1. Mentored academics who further developed his theoretical mechanism, called arbitrage.
  2. Created the tools that made the mechanism feasible.
  3. Trained many of the people who went to Wall Street and implemented it.

One of the MBA students who studied under Miller in the 1970s was John Meriwether, who went to work for the Wall Street firm Salomon Brothers. By the end of that decade, he had put into practice what Miller only theorized about, creating a trading desk at Salomon specifically aimed at profiting from arbitrage opportunities in the bond markets. Meriwether and his Salomon traders, together with a handful of other market-making firms, used the new futures contracts to find a mattress in securities markets that otherwise would have been too dangerous to trade in. Meanwhile, Miller and other academics associated with the University of Chicago had been advising that city’s long-established futures exchanges on creating new contracts linked to interest rates, stock market indexes, and foreign exchange markets. (Dunbar 2011, 37)

The idea of arbitrage is an old one, dating back to the nineteenth century, when disparities in the price of gold in different cities motivated some speculators (including Nathan Rothschild, founder of the Rothschild financial dynasty) to buy it where it was cheap and then ship it and sell it where it was more expensive. But in the volatile markets of the late 1970s, futures seemed to provide something genuinely different and exciting, bringing together temporally and geographically disparate aspects of buying and selling into bundles of transactions. Buy a basket of stocks reflecting an index, and sell an index future. Buy a Treasury bond, and sell a Treasury bond future. It was only the difference between the fundamental asset (called an underlying asset) and its derivative that mattered, not the statistics or economic theories that supposedly provided a benchmark for market prices. (Dunbar 2011, 38)

In the world Merton Miller lived in, the world of the futures exchanges (he was chairman emeritus of the Chicago Mercantile Exchange when I met him), they knew they needed speculators like Meriwether. Spotting arbitrage opportunities between underlying markets and derivatives enticed the likes of Salomon to come in and trade on that exchange. That provided liquidity to risk-averse people who wanted to use the exchange for hedging purposes. And if markets were efficient—in other words, if people like Meriwether did their job—then the prices of futures contracts should be mathematically related to the underlying asset using “no-arbitrage” principles. (Dunbar 2011, 38)

Bending Reality to Match the Textbook

The next leg of my U.S. trip took me to Boston and Connecticut. There I met two more Nobel-winning finance professors—Robert Merton and Myron Scholes—who took Miller’s idea to its logical conclusion at a hedge fund called Long-Term Capital Management (LTCM). Scholes had benefited directly from Miller’s mentorship as a University of Chicago PhD candidate, while Merton had studied under Paul Samuelson at MIT. What made Merton and Scholes famous (with the late Fischer Black) was their contemporaneous discovery of a formula for pricing options on stocks and other securities. (Dunbar 2011, 38)

Again, the key idea was based on arbitrage, but this time the formula was much more complicated. The premise: A future or forward contract is very similar (although not identical) to the underlying security, which is why one can be used to synthesize exposure to the other. An option contract, on the other hand, is asymmetrical. It lops off the upside or downside of the security’s performance—it is “nonlinear” in mathematical terms. Think about selling options in the same way as manufacturing a product, like a car. How many components do you need? To manufacture a stock option using a single purchase of underlying stock is impossible because the linearity of the latter can’t keep up with the nonlinearity of the former. Finding the answer to the manufacturing problem meant breaking up the lifetime of an option into lots of little bits, in the same way that calculus helps people work out the trajectory of a tennis ball in flight. The difference is that stock prices zigzag in a way that looks random, requiring a special kind of calculus that Merton was particularly good at. The math gave a recipe for smoothly tracking the option by buying and selling varying amounts of the underlying stock over time. Because the replication recipe played catch-up with the moves in the underlying market (Black, Scholes, and Merton didn’t claim to be fortune-tellers), it cost money to execute. In other words you can safely manufacture this nonlinear financial product called an option, but you have to spend a certain amount of money trading in the market in order to do so. But why believe the math? (Dunbar 2011, 38-39)
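Dunbar's "manufacturing" metaphor can be made concrete with a minimal Python sketch of the replication recipe (all parameter values are illustrative assumptions, and the stock is assumed to follow the geometric Brownian motion of the Black-Scholes setup): rebalance a stock-plus-cash portfolio to the model's hedge ratio at each step, and the discounted shortfall against the option payoff approximates the option's manufacturing cost.

    import math
    import random
    from statistics import NormalDist

    norm_cdf = NormalDist().cdf

    def call_delta(S, K, r, sigma, T):
        """Black-Scholes hedge ratio ("delta") of a European call."""
        d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        return norm_cdf(d1)

    def replicate_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=0.25,
                       steps=63, seed=1):
        """Manufacture a call by trading only the stock and cash."""
        random.seed(seed)
        dt = T / steps
        S, shares, cash = S0, 0.0, 0.0
        for i in range(steps):
            target = call_delta(S, K, r, sigma, T - i * dt)
            cash -= (target - shares) * S          # rebalance the stock position
            shares = target
            cash *= math.exp(r * dt)               # cash accrues interest
            z = random.gauss(0.0, 1.0)             # random zigzag of the price
            S *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        return shares * S + cash, max(S - K, 0.0)  # portfolio vs. option payoff

    portfolio, payoff = replicate_call()
    # Discounted shortfall: roughly what it costs to manufacture the option.
    print(math.exp(-0.05 * 0.25) * (payoff - portfolio))

Because the rebalancing here is discrete rather than continuous, the replication is only approximate, which is exactly the "playing catch-up" cost the passage describes.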

The breakthrough came next. Imagine that the option factory is up and running and selling its products in the market. By assuming that smart, aggressive traders like Meriwether would snap up any mispriced options and build their own factory to pick them apart again using the mathematical recipe, Black, Scholes, and Merton followed in Miller’s footsteps with a no-arbitrage rule. In other words, you’d better believe the math because, otherwise, traders will use it against you. That was how the famous Black-Scholes formula entered finance. (Dunbar 2011, 39, emphasis added)

When the formula was first published in the Journal of Political Economy in 1973, it was far from obvious that anyone would actually try to use its hedging recipe to extract money from arbitrage, although the Chicago Board Options Exchange (CBOE) did start offering equity option contracts that year. However, there was now an added incentive to play the arbitrage game because Black, Scholes, and Merton had shown that (subject to some assumptions) their formula exorcised the uncertainty in the returns on underlying assets. (Dunbar 2011, 39)
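For the record, the formula itself prices a European call, in standard notation (my transcription, not Dunbar's text), as

    C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad
    d_{1,2} = \frac{\ln(S/K) + (r \pm \sigma^2/2)\,T}{\sigma\sqrt{T}},

where S is the underlying price, K the strike, r the risk-free rate, σ the volatility, T the time to expiry, and N the standard normal distribution function.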

Over the following twenty-five years, the outside world would catch up with the eggheads in the ivory tower. Finance academics who had clustered around Merton at MIT (and elsewhere) moved to Wall Street. Trained to spot and replicate mispriced options across all financial markets, they became trading superstars. By the time Meriwether left Salomon in 1992, its proprietary trading group was bringing in revenues of over $1 billion a year. He set up his own highly lucrative hedge fund, LTCM, which made $5 billion from 1994 to 1997, earning annual returns of over 40 percent. By April 1998, Merton and Scholes were partners at LTCM and making millions of dollars per year, a nice bump from a professor’s salary. (Dunbar 2011, 40)

(….) It is hard to overemphasize the impact of this financial revolution. The neoclassical economic paradigm of equilibrium, efficiency, and rational expectations may have reeled under the weight of unrealistic assumptions and assaults of behavioral economics. But here was the classic “show me the money” riposte. A race of superhumans had emerged at hedge funds and investment banks whose rational self-interest made the theory come true and earned them billions in the process. (Dunbar 2011, 40)

If there was a high priest behind this, it had to be Merton, who in a 1990 speech talked about “blueprints” and “production technologies” that could be used for “synthesizing an otherwise nonexistent derivative security.” He wrote of a “spiral of innovation,” wherein the existence of markets in simpler derivatives would serve as a platform for the invention of new ones. As he saw his prescience validated, Merton would increasingly adopt a utopian tone, arguing that derivatives contracts created by large financial institutions could solve the risk management needs of both families and emerging market nations. To see the spiral in action, consider an over-the-counter derivative offered by investment banks from 2005 onward: an option on the VIX index. If for some reason you were financially exposed to the fear gauge, such a contract would protect you against it. The new option would be dynamically hedged by the bank, using VIX futures, providing liquidity to the CBOE contract. In turn, that would prompt arbitrage between the VIX and the S&P 500 options used to calculate it, ultimately leading to trading in the S&P 500 index itself. (Dunbar 2011, 40-41)

As this example demonstrates, Merton’s spiral was profitable in the sense that every time a new derivative product was created, an attendant retinue of simpler derivatives or underlying securities needed to be traded in order to replicate it. Remember, for market makers, volume normally equates to profit. For the people whose job it was to trade the simpler building blocks—the “flow” derivatives or cash products used to manufacture more complex products—this amounted to a safe opportunity to make money—or in other words, a mattress. In some markets, the replication recipe book would create more volume than the fundamental sources of supply and demand in that market. (Dunbar 2011, 41)

The banks started aggressively recruiting talent that could handle the arcane, complicated mathematical formulas needed to identify and evaluate these financial replication opportunities. Many of these quantitative analysts—quants—were refugees from academic physics. During the 1990s, research in fundamental physics was beset by cutbacks in government funding and a feeling that after the heroic age of unified theories and successful particle experiments, the field was entering a barren period. Wall Street and its remunerative rewards were just too tempting to pass up. Because the real-world uncertainty was supposedly eliminated by replication, quants did not need to make the qualitative judgments required of traditional securities analysts. What they were paid to get right was the industrial problem of derivative production: working out the optimal replication recipe that would pass the no-arbitrage test. Solving these problems was an ample test of PhD-level math skills. (Dunbar 2011, 41)

On the final leg of my trip in April 1998, I went to New York, where I had brunch with Nassim Taleb, an option trader at the French bank Paribas (now part of BNP Paribas). Not yet the fiery, best-selling intellectual he subsequently became (author of 2007’s The Black Swan), Taleb had already attacked VAR in a 1997 magazine interview as “charlatanism,” but he was in no doubt about how options theory had changed the world. “Merton had the premonition,” Taleb said admiringly. “One needs arbitrageurs to make markets efficient, and option markets provide attractive opportunities for replicators. We are indeed lucky . . . the world of finance has agreed to resemble the textbook, in order to operate better.” (Dunbar 2011, 42)

Although Taleb would subsequently change his views about how well the world matched up with Merton’s textbook, the tidal wave of money churned up by derivatives in free market economics carried most people along in its wake. People in the regulatory community found it hard to resist this intellectual juggernaut. After all, many of them had studied economics or business, where equilibrium and efficiency were at the heart of the syllabus. Confronted with the evidence of derivatives market efficiency and informational advantages, why should they stand in the way? (Dunbar 2011, 42)

Arrangers as Market Makers

It is easy to view investment banks and other arrangers as mechanics who simply operated the machinery that linked lenders to capital markets. In reality, arrangers orchestrated subprime lending behind the scenes. Drawing on his experience as a former derivatives trader, Frank Partnoy wrote, “The driving force behind the explosion of subprime mortgage lending in the U.S. was neither lenders nor borrowers. It was the arrangers of CDOs. They were the ones supplying the cocaine. The lenders and borrowers were just mice pushing the button.”

Behind the scenes, arrangers were the real ones pulling the strings of subprime lending, but their role received scant attention. One explanation for this omission is that the relationships between arrangers and lenders were opaque and difficult to dissect. Furthermore, many of the lenders who could have “talked” went out of business. On the investment banking side, the threat of personal liability may well have discouraged people from coming forward with information.

The evidence that does exist comes from public documents and the few people who chose to spill the beans. One of these is William Dallas, the founder and former chief executive officer of a lender, Ownit. According to the New York Times, Dallas said that investment banks pressured his firm to make questionable loans for packaging into securities. Merrill Lynch explicitly told Dallas to increase the number of stated-income loans Ownit was producing. The message, Dallas said, was obvious: “You are leaving money on the table—do more [low-doc loans].”

Publicly available documents echo this depiction. An annual report from Fremont General portrayed how Fremont changed its mix of loan products to satisfy demand from Wall Street:

The company [sought] to maximize the premiums on whole loan sales and securitizations by closely monitoring the requirements of the various institutional purchasers, investors and rating agencies, and focusing on originating the types of loans that met their criteria and for which higher premiums were more likely to be realized. (The Subprime Virus: Reckless Credit, Regulatory Failure, and Next Steps by Kathleen C. Engel, Patricia A. McCoy, 2011, 56-57)

A Universal Science of Man?

The medieval Roman Catholic priesthood conducted its religious preaching and other discussions in Latin, a language no more understandable to ordinary people than are the mathematical and statistical formulations of economists today. Latin served as a universal language that had the great practical advantage of allowing easy communication within a priestly class transcending national boundaries across Europe. Yet that was not the full story. The use of Latin also separated the priesthood from the ordinary people, one of a number of devices through which the Roman Catholic Church maintained such a separation in the medieval era. It all served to convey an aura of majesty and religious authority—as does the Supreme Court in the United States, still sitting in priestly robes. In employing an arcane language of mathematics and statistics, Samuelson and fellow economists today seek a similar authority in society.

Economics as Religion: From Samuelson to Chicago and Beyond by Robert H. Nelson

This is a book about economics. But it is also a book about human limitations and the difficulty of gaining true insight into the world around us. There is, in truth, no way of separating these two things from one another. To try to discuss economics without understanding the difficulty of applying it to the real world is to consign oneself to dealing with pure makings of our own imaginations. Much of economics at the time of writing is of this sort, although it is unclear whether such modes of thought should be called ‘economics’ and whether future generations will see them as such. There is every chance that the backward-looking eye of posterity will see much of what today’s economics departments produce in the same way as we now see phrenology: a highly technical, but ultimately ridiculous pseudoscience constructed rather unconsciously to serve the political needs of the era. In the era when men claiming to be scientists felt the skull for bumps and used this to determine a man’s character and his disposition, the political discourse of the day needed a justification for the racial superiority of the white man; today our present political discourse needs a Panglossian doctrine that promotes general ignorance, a technocratic language that can be deployed to cover up certain political aspects of governance and tells us that so long as we trust in those in charge everything will work itself out in the long-run. (Pilkington 2016, 1-2)

But the personal motivation of the individual economist today is not primarily political—although it may well be secondarily political, whether that politics turns right or left—the primary motivation of the individual economist today is the search for answers to questions that they can barely formulate. These men and women, perhaps more than any other, are chasing a shadow that has been taunting mankind since the early days of the Enlightenment. This is the shadow of the mathesis universalis, the Universal Science expressed in the abstract language of mathematics. They want to capture Man’s essence and understand what he will do today, tomorrow and the day after that. To some of us more humble human beings that fell once upon a time onto this strange path, this may seem altogether too much to ask of our capacities for knowledge…. Is it a noble cause, this Universal Science of Man? Some might say that if it were not so fanciful, it might be. Others might say that it has roots in extreme totalitarian thinking and were it ever taken truly seriously, it would lead to a tyranny with those who espouse it conveniently at the helm. These are moral and political questions that will not be explored in too much detail in the present book. (Pilkington 2016, 2)

What we seek to do here is more humble again. There is a sense today, nearly six years after an economic catastrophe that few still understand and only a few saw coming, that there is something rotten in economics. Something stinks and people are less inclined than ever to trust the funny little man standing next to the blackboard with his equations and his seemingly otherworldly answers to every social and economic problem that one can imagine. This is a healthy feeling and we as a society should promote and embrace it. A similar movement began over half a millennium ago questioning the men of mystery who dictated how people should live their lives from ivory towers; it was called the Reformation and it changed the world…. We are not so much interested in the practices of the economists themselves, in whether they engage in simony, in nepotism and—could it ever be thought?—the sale of indulgences to those countries that had or were in the process of committing grave sins. Rather we are interested in how we have gotten to where we are and how we can fix it. (Pilkington 2016, 2-3)

The roots of the problems with contemporary economics run very deep indeed. In order to comprehend them, we must run the gamut from political motivation to questions of philosophy and methodology to the foundations of the underlying structure itself. When these roots have been exposed, we can then begin the process of digging them up so we can plant a new tree. In doing this, we do not hope to provide all the answers but merely a firm grounding, a shrub that can, given time, grow into something far more robust. (Pilkington 2016, 3)

Down with Mathematics?

(….) Economics needs more people who distrust mathematics when applying thought to the social and economic world, not fewer. Indeed, … the major problems with economics today arose out of the mathematization of the discipline, especially as it proceeded after the Second World War. Mathematics became to economics what Latin was to the stagnant priest-caste that Luther and other reformers attacked during the Reformation: a means not to clarify, but to obscure through intellectual intimidation. It ensured that the common man could not read the Bible and had to consult the priest and, perhaps, pay him alms. (Pilkington 2016, 3)

(….) [M]athematics can, in certain very limited circumstances, be an opportune way of focusing the debate. It can give us a rather clear and precise conception of what we are talking about. Some aspects—by no means all aspects—of macroeconomics are quantifiable. Investments, profits, the interest rate—we can look the statistics for these things up and use this information to promote economic understanding. That these are quantifiable also means that, to a limited extent, we can conceive of them in mathematical form. It cannot be stressed enough, however, the limited extent to which this is the case. There are always … non-quantifiable elements that play absolutely key roles in how the economy works. (Pilkington 2016, 3-4)

(….) The mathematisation of the discipline was perhaps the crucial turning point when economics began to become something entirely other to the study of the actual economy. It started in the late nineteenth century, but at the time many of those who pioneered the approach became ever more distrustful of doing so. They began to think that it would only lead to obscurity of argument and an inability to communicate properly either with other people or with the real world. Formulae would become synonymous with truth and the interrelation between ideas would become foggy and unclear. A false sense of clarity in the form of pristine equations would be substituted for clarity of thought. Alfred Marshall, a pioneer of mathematics in economics who nevertheless always hid it in footnotes, wrote of his distress in his later years in a letter to his friend. (Pilkington 2016, 4)

[I had] a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics: and I went more and more on the rules—(1) Use mathematics as a shorthand language, rather than an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in (4), burn (3). This last I did often. (Pigou ed. 1966 [1906], pp. 427-428)

The controversy around mathematics appears to have broken out in full force surrounding the issue of econometric estimation in the late 1930s and early 1940s. Econometric estimation … is the practice of putting economic theories into mathematical form and then using them to make predictions based on available statistics…. [I]t is a desperately silly practice. Those who championed the econometric and mathematical approach were men whose names are not known today by anyone who is not deeply interested in the field. These were men like Jan Tinbergen, Oskar Lange, Jacob Marschak and Ragnar Frisch (Louçã 2007). Most of these men were social engineers of one form or another; all of them left-wing and some of them communist. The mood of the time, one reflected in the tendency to try to model the economy itself, was that society and the economy should be planned by men in lab coats. By this they often meant not simply broad government intervention but something more like micro-management of the institutions that people inhabit day-to-day from the top down. Despite the fact that many mathematical economic models today seem outwardly to be concerned with ‘free markets’, they all share this streak, especially in how they conceive that people (should?) act. (Pilkington 2016, 4-5)
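For readers who have not met it, econometric estimation in its very simplest form looks like the following minimal sketch (the numbers are invented for illustration; this shows the practice Pilkington is criticizing, not an endorsement of it):

    import numpy as np

    # Invented annual data: national income Y and consumption C.
    income      = np.array([100.0, 105.0, 112.0, 118.0, 126.0])
    consumption = np.array([ 80.0,  84.0,  89.0,  93.0,  99.0])

    # Theory in mathematical form: C = a + b*Y. Estimate a, b by least squares.
    X = np.column_stack([np.ones_like(income), income])
    (a, b), *_ = np.linalg.lstsq(X, consumption, rcond=None)

    # "Prediction" from available statistics: extrapolate to Y = 135.
    print(f"estimated marginal propensity to consume: {b:.2f}")
    print(f"predicted consumption at Y = 135: {a + b * 135:.1f}")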

Most of the economists at the time were vehemently opposed to this. This was not a particularly left-wing or right-wing issue. On the left, John Maynard Keynes was horrified by what he was seeing develop, while, on the right, Friedrich von Hayek was warning that this was not the way forward. But it was probably Keynes who was the most coherent belligerent of the new approach. This is because before he began to write books on economics, Keynes had worked on the philosophy of probability theory, and probability theory was becoming a key component of the mathematical approach (Keynes 1921). Keynes’ extensive investigations into probability theory allowed him to perceive to what extent mathematical formalism could be applied for understanding society and the economy. He found that it was extremely limited in its ability to illuminate social problems. Keynes was not against statistics or anything like that—he was an early champion and expert—but he was very, very cautious about people who claimed that just because economics produces statistics these can be used in the same way as numerical observations from experiments were used in the hard sciences. He was also keenly aware that certain tendencies towards mathematisation lead to a fogging of the mind. In a more diplomatic letter to one of the new mathematical economists (Keynes, as we shall see … could be scathing about these new approaches), he wrote: (Pilkington 2016, 5-6)

Mathematical economics is such risky stuff as compared with nonmathematical economics, because one is deprived of one’s intuition on the one hand, yet there are all kinds of unexpressed unavowed assumptions on the other. Thus I never put much trust in it unless it falls in with my own intuitions; and I am therefore grateful for an author who makes it easier for me to apply this check without too much hard work. (Keynes cited in Louçã 2007, p. 186)

(….) Mathematics, like the high Latin of Luther’s time, is a language. It is a language that facilitates greater precision in some instances and greater obscurity in others. For most issues economic, it promotes obscurity. When a language is used to obscure, it is used as a weapon by those who speak it to repress the voices of those who do not. A good deal of the history of the relationship between mathematics and the other social sciences in the latter half of the twentieth century can be read under this light. If there is anything that this book seeks to do, it is to help people realise that this is not what economics need be or should be. Frankly, we need more of those who speak the languages of the humanities—of philosophy, sociology and psychology—than we do people who speak the language of the engineers but lack the pragmatic spirit of the engineer who can see clearly that his method cannot be deployed to understand those around him. (Pilkington 2016, 6)

Natural selection of algorithms?

If we suppose that the action of the human brain, conscious or otherwise, is merely the acting out of some very complicated algorithm, then we must ask how such an extraordinarily effective algorithm actually came about. The standard answer, of course, would be ‘natural selection’. As creatures with brains evolved, those with more effective algorithms would have a better tendency to survive and therefore, on the whole, had more progeny. These progeny also tended to carry more effective algorithms than their cousins, since they inherited the ingredients of these better algorithms from their parents; so gradually the algorithms improved (not necessarily steadily, since there could have been considerable fits and starts in their evolution) until they reached the remarkable status that we (would apparently) find in the human brain. (Compare Dawkins 1986.) (Penrose 1990: 414)

Even according to my own viewpoint, there would have to be some truth in this picture, since I envisage that much of the brain’s action is indeed algorithmic, and, as the reader will have inferred from the above discussion, I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have. (Penrose 1990: 414)

Imagine an ordinary computer program. How would it have come into being? Clearly not (directly) by natural selection! Some human computer programmer would have conceived of it and would have ascertained that it correctly carries out the actions that it is supposed to. (Actually, most complicated computer programs contain errors, usually minor but often subtle ones, that do not come to light except under unusual circumstances. The presence of such errors does not substantially affect my argument.) Sometimes a computer program might itself have been ‘written’ by another, say a ‘master’ computer program, but then the master program itself would have been the product of human ingenuity and insight; or the program itself might well be pieced together from ingredients some of which were the products of other computer programs. But in all cases the validity and the very conception of the program would have ultimately been the responsibility of (at least) one human consciousness. (Penrose 1990: 414)

One can imagine, of course, that this need not have been the case, and that, given enough time, the computer programs might somehow have evolved spontaneously by some process of natural selection. If one believes that the actions of the computer programmers’ consciousness are themselves simply algorithms, then one must, in effect, believe algorithms have evolved in just this way. However, what worries me about this is that the decision as to the validity of an algorithm is not itself an algorithmic process! … (The question of whether or not a Turing machine will actually stop is not something that can be decided algorithmically.) In order to decide whether or not an algorithm will actually work, one needs insights, not just another algorithm. (Penrose 1990: 414-415)

Nevertheless, one still might imagine some kind of natural selection process being effective for producing approximately valid algorithms. Personally, I find this very difficult to believe, however. Any selection process of this kind could act only on the output of the algorithms and not directly on the ideas underlying the actions of the algorithms. This is not simply extremely inefficient; I believe that it would be totally unworkable. In the first place, it is not easy to ascertain what an algorithm actually is, simply by examining its output. (It would be an easy matter to construct two quite different simple Turing machine actions for which the output tapes did not differ until, say, the 2^65536th place — and this difference could never be spotted in the entire history of the universe!) Moreover, the slightest ‘mutation’ of an algorithm (say a slight change in a Turing machine specification, or in its input tape) would tend to render it totally useless, and it is hard to see how actual improvements in algorithms could ever arise in this random way. (Even deliberate improvements are difficult without ‘meanings’ being available. Suppose an inadequately documented and complicated computer program needs to be altered or corrected, and the original programmer has departed or perhaps died. Rather than try to disentangle all the various meanings and intentions that the program implicitly depended upon, it is probably easier just to scrap it and start all over again!) (Penrose 1990: 415)
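Penrose's point about outputs underdetermining algorithms is easy to make concrete (a toy sketch of my own; the two generators stand in for his two Turing machines):

    from itertools import islice

    def machine_a():
        while True:
            yield 0                        # outputs 0 forever

    def machine_b(cutoff=2**65536):
        n = 0
        while True:
            yield 0 if n < cutoff else 1   # identical to machine_a until the cutoff
            n += 1

    # No feasible examination of output can distinguish the two programs,
    # since the first difference lies astronomically far down the tape.
    print(list(islice(machine_a(), 10)) == list(islice(machine_b(), 10)))  # True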

Perhaps some much more ‘robust’ way of specifying algorithms could be devised, which would not be subject to the above criticisms. In a way, this is what I am saying myself. The ‘robust’ specifications are the ideas that underlie the algorithms. But ideas are things that, as far as we know, need conscious minds for their manifestation. We are back with the problem of what consciousness actually is, and what it can actually do that unconscious objects are incapable of — and how on earth natural selection has been clever enough to evolve that most remarkable of qualities. (Penrose 1990: 415)

(….) To my way of thinking, there is still something mysterious about evolution, with its apparent ‘groping’ towards some future purpose. Things at least seem to organize themselves somewhat better than they ‘ought’ to, just on the basis of blind-chance evolution and natural selection…. There seems to be something about the way that the laws of physics work, which allows natural selection to be a much more effective process than it would be with just arbitrary laws. The resulting apparently ‘intelligent groping’ is an interesting issue. (Penrose 1990: 416)

The non-algorithmic nature of mathematical insight

… [A] good part of the reason for believing that consciousness is able to influence truth-judgements in a non-algorithmic way stems from consideration of Gödel’s theorem. If we can see that the role of consciousness is non-algorithmic when forming mathematical judgements, where calculation and rigorous proof constitute such an important factor, then surely we may be persuaded that such a non-algorithmic ingredient could be crucial also for the role of consciousness in more general (non-mathematical) circumstances. (Penrose 1990: 416)

… Gödel’s theorem and its relation to computability … [has] shown that whatever (sufficiently extensive) algorithm a mathematician might use to establish mathematical truth — or, what amounts to the same thing, whatever formal system he might adopt as providing his criterion of truth — there will always be mathematical propositions, such as the explicit Gödel proposition P(K) of the system …, that his algorithm cannot provide an answer for. If the workings of the mathematician’s mind are entirely algorithmic, then the algorithm (or formal system) that he actually uses to form his judgements is not capable of dealing with the proposition P(K) constructed from his personal algorithm. Nevertheless, we can (in principle) see that P(K) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all! (Penrose 1990: 416-417)
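For reference, a standard modern statement of the theorem being invoked (my paraphrase in symbols, not Penrose's own notation) is:

    \text{If } F \text{ is consistent, effectively axiomatized, and interprets basic arithmetic,}
    \text{then there is a sentence } G_F \text{ with } F \nvdash G_F \text{ and } F \nvdash \lnot G_F,
    \text{yet } G_F \text{ is true in } \mathbb{N}.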

(….) The message should be clear. Mathematical truth is not something that we ascertain merely by use of an algorithm. I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must ‘see’ the truth of a mathematical argument to be convinced of its validity. This ‘seeing’ is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only ‘see’ it, but by so doing we reveal the very non-algorithmic nature of the ‘seeing’ process itself. (Penrose 1990: 418)

Charmed by Dimensional Analysis

I was charmed when as a young student I watched one of my physics professors, the late Harold Daw, work a problem with dimensional analysis. The result appeared as if by magic without the effort of constructing a model, solving a differential equation, or applying boundary conditions. But the inspiration of the moment did not, until many years later, bear fruit. In the meantime my acquaintance with this important tool remained partial and superficial. Dimensional analysis seemed to promise more than it could deliver. (Lemons 2017, ix, emphasis added)

Dimensional analysis has charmed and disappointed others as well…. The problem for teachers and students is that … [t]he mathematics required for its application is quite elementary — of the kind one learns in a good high school course — and its foundational principle is essentially a more precise version of the rule against “adding apples and oranges.” Yet the successful application of dimensional analysis requires physical intuition — an intuition that develops only slowly with the experience of modeling and manipulating physical variables. (Lemons 2017, Preface ix, emphasis added)

A Mistake to Avoid

A model of a state or process incorporates certain idealizations and simplifications. Skill and judgement are required to decide which quantities are needed to describe the state or process and what idealizations and simplifications should be incorporated. Similar skill and judgement are required in dimensional analysis, for the analysis in dimensional analysis is the analysis of a model. And the model we adopt in a dimensional analysis is determined by the dimensional analysis variables and constants we adopt and the dimensions in terms of which they are expressed. (….) While a certain part of dimensional analysis reduces to the algorithmic, no algorithm helps us answer [certain physical questions]. Rather, our answers define the state or process we describe and the model we adopt. We will, on occasion, make mistakes. (Lemons 2017, 11, emphasis original)

Dimensional analysis makes it possible to analyze in a systematic way dimensional relationships between physical quantities defining a model (Higham 2015, 90-91, emphasis added). Dimensional analysis is a clever strategy for extracting knowledge from a remarkably simple idea, nicely stated by Richardson[,] “… that phenomena go their way independently of the units whereby we measure them.” Within its limits, it works excellently, and makes possible astonishing economies in effort. The limits are soon reached, and beyond them it cannot help. In that it is like a specialized tool in carpentry or cooking or agriculture, like the water-driven husking mill … which husks rice elegantly and admirably but cannot do anything else. (Palmer 2015, v, emphasis added)
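The "algorithmic part" that Lemons and Palmer circumscribe can be shown in a few lines (my example, the classic pendulum, not one from the quoted texts): the skill lies in choosing the variables m, l, g; the linear algebra then does the rest.

    import numpy as np

    # Rows are base dimensions (M, L, T); columns are the exponents of the
    # chosen dimensional variables: mass m, length l, gravity g.
    dims = np.array([[1, 0,  0],   # M
                     [0, 1,  1],   # L
                     [0, 0, -2]])  # T
    target = np.array([0, 0, 1])   # the period has dimensions M^0 L^0 T^1

    # Solve for the exponents (a, b, c) in m^a * l^b * g^c ~ period.
    print(np.linalg.solve(dims, target))  # [0. 0.5 -0.5] -> period ~ sqrt(l/g)

Note what the algorithm cannot do: it cannot tell us that m, l, and g were the right variables to start from; that judgment is the modeling skill both authors stress.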

Physical (material) things have quantitative relationships that are measurable. A dimensional model uses a number of dimensional variables (physical variables) and constants that describe the model. Dimensional analysis is not a straightforward task, for it requires skill and judgment—the same kind of skill and judgment needed to construct a model of a physical state or process. Add the complexity of open social systems and this requires even more skill and judgment.

So a legitimate question arises when we confront human social systems—and economics is one by its very nature: to what extent can mathematical models capture the true underlying causes of changes in economic behaviour?

When we become charmed by our mathematical tools and fail to recognize their limitations, their range of validity, we become slaves to our tools rather than masters of them.

What is Applied Mathematics?

The Big Picture

Applied mathematics is a large subject that interfaces with many other fields. Trying to define it is problematic, as noted by William Prager and Richard Courant, who set up two of the first centers of applied mathematics in the United States in the first half of the twentieth century, at Brown University and New York University, respectively. They explained that:

Precisely to define applied mathematics is next to impossible. It cannot be done in terms of subject matter: the borderline between theory and application is highly subjective and shifts with time. Nor can it be done in terms of motivation: to study a mathematical problem for its own sake is surely not the exclusive privilege of pure mathematicians. Perhaps the best I can do within the framework of this talk is to describe applied mathematics as the bridge connecting pure mathematics with science and technology.

Prager (1972)

Applied mathematics is not a definable scientific field but a human attitude. The attitude of the applied scientist is directed towards finding clear cut answers which can stand the test of empirical observation. To obtain the answers to theoretically often insuperably difficult problems, he must be willing to make compromises regarding rigorous mathematical completeness; he must supplement theoretical reasoning by numerical work, plausibility considerations and so on.

Courant (1965)

Garrett Birkhoff offered the following view in 1977, with reference to the mathematician and physicist Lord Rayleigh (John William Strutt, 1842-1919):

Essentially, mathematics becomes “applied” when it is used to solve real-world problems “neither seeking nor avoiding mathematical difficulties” (Rayleigh).

Rather than define what applied mathematics is, one can describe the methods used in it. Peter Lax stated of these methods, in 1989, that:

Some of them are organic parts of pure mathematics: rigorous proofs of precisely stated theorems. But for the greatest part the applied mathematician must rely on other weapons: special solutions, asymptotic description, simplified equations, experimentation both in the laboratory and on the computer.

Here, instead of attempting to give our own definition of applied mathematics we describe the various facets of the subject, as organized around solving a problem. The main steps are described in figure 1. Let us go through each of these steps in turn. (Higham 2015, 1)

Modeling a problem. Modeling is about taking a physical problem and developing equations—differential, difference, integral, or algebraic—that capture the essential features of the problem and so can be used to obtain a qualitative or quantitative understanding of its behavior. Here, “physical problem” might refer to a vibrating string, the spread of an infectious disease, or the influence of people participating in a social network. Modeling is necessarily imperfect and requires simplifying assumptions. One needs to retain enough aspects of the system being studied that the model reproduces the most important behavior but not so many that the model is too hard to analyze. Different types of models might be feasible (continuous, discrete, stochastic), and for a given type there can be many possibilities. Not all applied mathematicians carry out modeling; in fact, most join the process at the next step. (Higham 2015, 2)
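For instance, modeling "the spread of an infectious disease" mentioned above might produce the classic SIR equations (my example of the modeling step, not Higham's own; β is the contact rate, γ the recovery rate, and S, I, R the susceptible, infected, and recovered fractions of the population):

    \frac{dS}{dt} = -\beta S I, \qquad
    \frac{dI}{dt} = \beta S I - \gamma I, \qquad
    \frac{dR}{dt} = \gamma I.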

Analyzing the mathematical problem. The questions formulated in the previous step are now analyzed and, ideally, solved. In practice, an explicit, easily evaluated solution usually cannot be obtained, so approximations may have to be made, e.g., by discretizing a differential equation, producing a reduced problem. The techniques necessary for the analysis of the equations or reduced problem may not exist, so this step may involve developing appropriate new techniques. If analytic or perturbation methods have been used then the process may jump from here directly to validation of the model.
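Continuing the SIR example, discretizing the differential equations yields a "reduced problem" that a computer can march forward in time (a minimal sketch using Euler's method; the step size and parameter values are illustrative assumptions):

    def simulate_sir(s=0.99, i=0.01, r=0.0, beta=0.3, gamma=0.1,
                     dt=0.1, steps=1000):
        """Euler discretization of dS/dt=-bSI, dI/dt=bSI-gI, dR/dt=gI."""
        for _ in range(steps):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            dr = gamma * i
            s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        return s, i, r

    print(simulate_sir())  # fractions susceptible, infected, recovered at t = 100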

Developing algorithms. It may be possible to solve the reduced problem using an existing algorithm—a sequence of steps that can be followed mechanically without the need for ingenuity. Even if a suitable algorithm exists it may not be fast or accurate enough, may not exploit available structure or other problem features, or may not fully exploit the architecture of the computer on which it is to be run. It is therefore often necessary to develop new or improved algorithms.

Writing software. In order to use algorithms on a computer it is necessary to implement them in software. Writing reliable, efficient software is not easy, and depending on the computer environment being targeted it can be a highly specialized task. The necessary software may already be available, perhaps in a package or program library. If it is not, software is ideally developed and documented to a high standard and made available to others. In many cases the software stage consists simply of writing short programs, scripts, or notebooks that carry out the necessary computations and summarize the results, perhaps graphically.

Computational experiments. The software is now run on problem instances and solutions obtained. The computations could be numeric or symbolic, or a mixture of the two.

Validation of the model. The final step is to take the results from the experiments (or from the analysis, if the previous three steps were not needed), interpret them (which may be a nontrivial task), and see if they agree with the observed behavior of the original system. If the agreement is not sufficiently good then the model can be modified and the loop through the steps repeated. The validation step may be impossible, as the system in question may not yet have been built (e.g., a bridge or a building).

Other important tasks for some problems, which are not explicitly shown in our outline, are to calibrate parameters in a model, to quantify the uncertainty in these parameters, and to analyze the effect of that uncertainty on the solution of the problem. These steps fall under the heading of UNCERTAINTY QUANTIFICATION [II.34].
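A minimal sketch of what uncertainty quantification can look like (my illustration, not from the text): propagate an uncertain measured parameter through a model by Monte Carlo sampling and report the spread of predictions, here for a pendulum of imperfectly measured length l with period T = 2π√(l/g).

    import math
    import random

    random.seed(0)
    g = 9.81                                   # m/s^2
    periods = sorted(2 * math.pi * math.sqrt(random.gauss(1.0, 0.02) / g)
                     for _ in range(1000))     # length l ~ 1.00 m +/- 2 cm
    # Rough 2.5%, 50%, and 97.5% quantiles of the predicted period:
    print(periods[25], periods[500], periods[975])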

Once all the steps have been successfully completed the mathematical model can be used to make predictions, compare competing hypotheses, and so on. A key aim is that the mathematical analysis gives new insights into the physical problem, even though the mathematical model may be a simplification of it.

A particular applied mathematician is most likely to work on just some of the steps; indeed, except for relatively simple problems it is rare for one person to have the skills to carry out the whole process from modeling to computer solution and validation.

In some cases the original problem may have been communicated by a scientist in a different field. A significant effort can be required to understand what the mathematical problem is and, when it is eventually solved, to translate the findings back into the language of the relevant field. Being able to talk to people outside mathematics is therefore a valuable skill for the applied mathematician. (Higham 2015, 2)

Breaking Mathematical Sense

“I asked him to outline the algo [algorithm] for me,” one junior accountant remarked about her derivatives-trading, Porsche-driving superior, “and he couldn’t, he just took it on faith.” “Most kids have computer skills in their genes … but just up to a point … when you try to show them how to generate the numbers they see on screen, they get impatient, they just want the numbers and leave where these came from to the mainframe.”

Arvidsson, Adam. The Ethical Economy (p. 3). Columbia University Press. Kindle Edition.

Introduction

Mathematicians, as far as I can see, are not terribly interested in the philosophy of mathematics. They often have philosophical views, but they are usually not very keen on challenging or developing them—they don’t usually consider this as worthy of too much effort. They’re also very suspicious of philosophers. Indeed, mathematicians know better than anyone else what it is that they’re doing. The idea of having a philosopher lecture them about it feels kind of silly, or even intrusive. (Roi 2017, 3)

So we turn to people who have something to do with mathematics in their professional or daily lives, but are not focused on mathematics. Such people often have some sort of vague, sometimes naïve, conceptions of mathematics. One of the most striking manifestations of these folk views is the following: If I say something philosophical that people don’t understand, the default assumption is that I use big pretentious words to cover small ideas. If I say something mathematical that people don’t understand, the default assumption is that I’m saying something so smart and deep that they just can’t get it. (Roi 2017, 3-4)

There’s an overwhelming respect for mathematics in academia and wider circles. So much so that bad, trivial, and pointless forms of mathematization are often mistaken for important achievements in the social sciences, and sometimes in the humanities as well. It is often assumed that all ambiguities in our vague verbal communication disappear once we switch to mathematics, which is supposed to be purely univocal and absolutely true. But a mirror image of this approach is also common. According to this view, mathematics is a purely mechanical, inhuman, and irrelevantly abstract form of knowledge. (Roi 2017, 4)

I believe that the philosophy of mathematics should try to confront such naïve views. To do that, one doesn’t need to reconstruct a rational scheme underlying the way we speak of mathematics, but rather paint a richer picture of mathematics, which tries to affirm, rather than dispel, its ambiguities, humanity, and historicity. (Roi 2017, 4)

(….) The uncritical idolizing of mathematics as the best model of knowledge, just like the opposite trend of disparaging mathematics as mindless drudgery, are both detrimental to the organization and evaluation of contemporary academic knowledge. Instead, mathematics should be appreciated and judged as one among many practices of shaping knowledge. (Roi 2017, 4-5)


A Vignette: Option Pricing and the Black-Scholes Formula

Be a market maker—try to buy and sell very quickly, and take benefits from the spread between the bid and offer.

— Senior Morgan Stanley Trader cited in Nicholas Dunbar’s The Devil’s Derivatives.

The point of the following vignette is to give a concrete example of how mathematics relates to its wider scientific and practical context. It will show that mathematics has force, and that its force applies even when actual mathematical claims do not quite work as descriptions of reality…. The context of this vignette is option pricing. An “option” is the right (but not the obligation) to make a certain transaction at a certain cost at a certain time. For example, I could own the option to buy 100 British pounds for 150 US dollars three months from today. If I own the option, and three months from today 100 pounds are worth more than 150 dollars, I will most probably exercise it; if they are worth less, I will simply discard it. Such options could be used as insurance. The preceding option, for example, would insure me against a rise in the dollar price of the pound, if I needed such insurance. It could also serve as a simple bet for financial gamblers. But what price should one put on this kind of insurance or bet? There are two narratives to answer this question. The first says that until 1973, no one really knew how to price such options, and prices were determined by supply, demand, and guesswork. More precisely, there existed some reasoned means to price options, but they all involved putting a price on the risk one was willing to take, which is a rather subjective issue. (Roi 2017, 6)
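To make the exercise-or-discard logic concrete, here is a minimal Python sketch (mine, not the book’s) of the payoff just described; the function name and figures are illustrative assumptions:

```python
# Illustrative sketch, not from the text: payoff at expiry of the option
# to buy 100 British pounds for 150 US dollars.

def call_option_payoff(value_of_pounds_usd, strike_usd=150.0):
    """If the 100 pounds are worth more than the 150-dollar strike at
    expiry, exercise the option and pocket the difference; otherwise
    discard it for a payoff of zero."""
    return max(value_of_pounds_usd - strike_usd, 0.0)

print(call_option_payoff(170.0))  # pounds worth 170 dollars: exercise, payoff 20.0
print(call_option_payoff(140.0))  # pounds worth 140 dollars: discard, payoff 0.0
```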

In two papers published in 1973, Fischer Black and Myron Scholes, followed by Robert Merton, came up with a reasoned formula for pricing options that did not require putting a price on risk. This feat was deemed so important that in 1997 Scholes and Merton were awarded the Nobel Prize in economics [see The Nobel Factor] for their formula (Black had died two years earlier). Indeed, “Black, Merton and Scholes thus laid the foundation for the rapid growth of markets for derivatives in the last ten years”—at least according to the Royal Swedish Academy press release (1997). (Roi 2017, 6-7)

But there’s another way to tell the story. This other way claims that options go back as far as antiquity, and that option pricing has been studied since as early as the seventeenth century. Option pricing formulas were established well before Black and Scholes, and so were various means to factor out putting a price on risk (based on something called put-call parity rather than the Nobel-winning method of dynamic hedging, but we can’t go into details here). Moreover, according to this narrative, the Black-Scholes formula simply doesn’t work and isn’t used (Derman and Taleb 2005; Haug and Taleb 2011).

If we wanted to strike a compromise between the two narratives, we could say that the Black-Scholes model was a new and original addition to existing models and that it works under suitable ideal conditions, which are not always approximated by reality. But let’s try to be more specific. (Roi 2017, 7)

The idea behind the Black-Scholes model is to reconstruct the option by a dynamic process of buying and selling the underlying assets (in our preceding example, pounds and dollars). It provides an initial cost and a recipe that tells you how to continuously buy and sell these dollars and pounds as their exchange rate fluctuates over time, in order to guarantee that by the time of the transaction, the money one has accumulated, together with the 150 dollars dictated by the option, would be enough to buy the 100 pounds. This recipe depends on some clever, deep, and elegant mathematics. (Roi 2017, 7)

This recipe is also risk free and will necessarily work, provided some conditions hold. These conditions include, among others, the capacity to always instantaneously buy and sell as many pounds/dollars as I want and a specific probabilistic model for the behavior of the exchange rate (Brownian motion with a fixed and known future volatility, where volatility is a measure of the fluctuations of the exchange rate). (Roi 2017, 7)
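For concreteness, the closed-form call price that falls out of these idealized conditions is the standard Black-Scholes formula. Below is a minimal Python sketch of it (parameter names are mine); it presupposes exactly what the passage lists: frictionless continuous trading and a Brownian price process with fixed, known volatility.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Cumulative distribution function of the standard normal, via erf.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, time_to_expiry, rate, volatility):
    """Standard Black-Scholes price of a European call, valid only under
    the idealized assumptions above (continuous frictionless hedging,
    lognormal prices with fixed, known volatility)."""
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * time_to_expiry) / (
        volatility * sqrt(time_to_expiry))
    d2 = d1 - volatility * sqrt(time_to_expiry)
    return spot * norm_cdf(d1) - strike * exp(-rate * time_to_expiry) * norm_cdf(d2)

# Illustrative numbers only: 100 pounds currently worth 155 dollars, strike 150,
# three months (0.25 years) to expiry, 5% interest, 20% annualized volatility.
print(black_scholes_call(155.0, 150.0, 0.25, 0.05, 0.20))
```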

The preceding two conditions do not hold in reality. First, buying and selling is never really unlimited and instantaneous. Second, exchange rates do not adhere precisely to the specific probabilistic model. But if we can buy and sell fast enough, and the Brownian model is a good enough approximation, the pricing formula should work well enough. Unfortunately, prices sometimes follow other probabilistic models (with some infinite moments), where the Black-Scholes formula may fail to be even approximately true. The latter flaw is sometimes cited as an explanation for some of the recent market crashes—but this is a highly debated interpretation. (Roi 2017, 7-8)

Another problem is that the future volatility (a measure of cost fluctuations from now until the option expires) of whatever the option buys and sells has to be known for the model to work. One could rely on past volatility, but when comparing actual option prices and the Black-Scholes formula, this doesn’t quite work. The volatility that is required to fit the Black-Scholes formula to actual market option pricing is not simply past volatility. (Roi 2017, 8)

In fact, if one compares actual option prices to the Black-Scholes formula, and tries to calculate the volatility that would make them fit, it turns out that there’s no single volatility for a given commodity at a given time. The cost of wilder options (for selling or buying at a price far removed from the present price) reflects higher volatility than that of tamer options. So something is clearly empirically wrong with the Black-Scholes model, which assumes a fixed (rather than a stochastic) future volatility for whatever the option deals with, regardless of the terms of the option. (Roi 2017, 8)
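To make “implied volatility” concrete: continuing the black_scholes_call sketch above, one can invert the formula numerically, asking which volatility reproduces an observed market price. Bisection is one common choice (not necessarily what any exchange uses), and it works here because the call price increases monotonically in volatility. Running this across strikes on real quotes is exactly what exposes the smile: different strikes imply different volatilities.

```python
def implied_volatility(market_price, spot, strike, time_to_expiry, rate,
                       lo=1e-4, hi=5.0, tol=1e-8):
    """Volatility that makes black_scholes_call (defined above) match an
    observed market price, found by bisection; the call price is
    monotonically increasing in volatility, so this converges."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(spot, strike, time_to_expiry, rate, mid) < market_price:
            lo = mid  # model price too low: volatility must be higher
        else:
            hi = mid  # model price too high: volatility must be lower
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# With real market quotes (hypothetical here), strikes far from the current
# price typically imply higher volatilities than at-the-money strikes do:
# the "smile" that contradicts the model's single fixed volatility.
```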

So the Black-Scholes formula is nice in theory, but needn’t work in practice. Haug and Taleb (2011) even argue that practitioners simply don’t use it, and have simpler practical alternatives. They go as far as to say that the Black-Scholes formula is like “scientists lecturing birds on how to fly, and taking credit for their subsequent performance—except that here it would be lecturing them the wrong way” (101, n. 13). So why did the formula deserve a Nobel Prize? (Roi 2017, 8)

Looking at some informal exchanges between practitioners, one can find some interesting answers. The discussion I quote from the online forum Quora was headed by the question “Is the Black-Scholes Formula Just Plain Wrong?” (2014). All practitioners agree that the formula is not used as such. Many of them don’t quite see it as an approximation either. But this does not mean they think it is useless. One practitioner (John Hwang) writes:

Where Black-Scholes really shines, however, is as a common language between options traders. It’s the oldest, simplest, and the most intuitive option pricing model around. Every option trader understands it, and it is easy to calculate, so it makes sense to communicate implied volatility [the volatility that would make the formula fit the actual price] in terms of Black-Scholes…. As proof, the exchanges disseminate [Black-Scholes] implied volatility in addition to data.

Another practitioner (Rohit Gupta) adds that this “is done because traders have better intuition in terms of volatilities instead of quoting various prices.” In the same vein, yet another practitioner (Joseph Wang) adds:

One other way of looking at this is that Black-Scholes provides something of a baseline that lets you compare the real world to a nonexistent ideal world…. Since we don’t live in an ideal world, the numbers are different, but the Black-Scholes framework tells us *how different* the real world is from the idealized world.

So the model earned its renown by providing a common language that practitioners understand well, and allowing them to understand actual contingent circumstances in relation to a sturdy ideal. (Roi 2017, 9)

Now recall that practitioners extrapolate the implied volatility by comparing the Black-Scholes formula to actual prices, rather than plug a given volatility into the formula to get a price. This may sound like data fitting. Indeed, one practitioner (Ron Ginn) states that “if the common denominator of the crowd’s opinion is more or less Black-Scholes … smells like a self fulfilling prophecy could materialize,” or, put in a more elaborate manner (Luca Parlamento):

I just want to add that CBOE [Chicago Board Options Exchange] in [the] early ’70[s] was looking to market a new product: something called “options.” Their issue was: how [can] you market something that no one [can] evaluate? You can’t! You need a model that helps people exchange stuff, turn[s] out that the BS formula … did the job. You have a way to make people easily agree on prices, create a liquid market and … “why not” generate commissions.

The tone here is more sinister: the formula is useful because it’s there, because it’s a reference point that allows a market to grow around it. (Roi 2017, 9)

But why did this specific formula attract the market, and become a common reference point, possibly even a self-fulfilling prophecy? Why not any of the other older or contemporary pricing practices, which are no worse? Why was this specific pricing model deemed Nobel-worthy? (Roi 2017, 10)

The answer, I believe, lies in the mathematics. The formula depends on a sound and elegant argument. The mathematics it uses is sophisticated, and enjoys a record of good service in physics, which imparts a halo of scientific prestige. Moreover, it is expressed in the language of an expressive mathematical domain that makes sense to practitioners (and, of course, it also came at the right time).

This is the force of mathematics. It’s a language that the practitioners of the relevant niches understand and value. It feels well founded and at least ideally true. If it is sophisticated and comes with a good track record in other scientific contexts, it is assumed to be deep and somehow true. All this helps build rich practical networks around mathematical ideas, even when these ideas do not reflect empirical reality very well. (Roi 2017, 10)

(….) [I]f we want to understand the surprising force of mathematics demonstrated in this vignette, we need to engage in a more careful analysis of mathematical practice. (Roi 2017, 10)