
A Universal Science of Man?

The medieval Roman Catholic priesthood conducted its religious preaching and other discussions in Latin, a language no more understandable to ordinary people than are the mathematical and statistical formulations of economists today. Latin served as a universal language that had the great practical advantage of allowing easy communication within a priestly class transcending national boundaries across Europe. Yet that was not the full story. The use of Latin also separated the priesthood from the ordinary people, one of a number of devices through which the Roman Catholic Church maintained such a separation in the medieval era. It all served to convey an aura of majesty and religious authority—as does the Supreme Court in the United States, still sitting in priestly robes. In employing an arcane language of mathematics and statistics, Samuelson and fellow economists today seek a similar authority in society.

Economics as Religion: From Samuelson to Chicago and Beyond by Robert H. Nelson

This is a book about economics. But it is also a book about human limitations and the difficulty of gaining true insight into the world around us. There is, in truth, no way of separating these two things from one another. To try to discuss economics without understanding the difficulty of applying it to the real world is to consign oneself to dealing with pure makings of our own imaginations. Much of economics at the time of writing is of this sort, although it is unclear whether such modes of thought should be called ‘economics’ and whether future generations will see them as such. There is every chance that the backward-looking eye of posterity will see much of what today’s economics departments produce in the same way as we now see phrenology: a highly technical, but ultimately ridiculous pseudoscience constructed rather unconsciously to serve the political needs of the era. In the era when men claiming to be scientists felt the skull for bumps and used this to determine a man’s character and his disposition, the political discourse of the day needed a justification for the racial superiority of the white man; today our present political discourse needs a Panglossian doctrine that promotes general ignorance, a technocratic language that can be deployed to cover up certain political aspects of governance and tells us that so long as we trust in those in charge everything will work itself out in the long-run. (Pilkington 2016, 1-2)

But the personal motivations of the individual economist today are not primarily political—although they may well be secondarily political, whether that politics turns right or left—the primary motivation of the individual economist today is the search for answers to questions that they can barely formulate. These men and women, perhaps more than any other, are chasing a shadow that has been taunting mankind since the early days of the Enlightenment. This is the shadow of the mathesis universalis, the Universal Science expressed in the abstract language of mathematics. They want to capture Man’s essence and understand what he will do today, tomorrow and the day after that. To some of us more humble human beings that fell once upon a time onto this strange path, this may seem altogether too much to ask of our capacities for knowledge…. Is it a noble cause, this Universal Science of Man? Some might say that if it were not so fanciful, it might be. Others might say that it has roots in extreme totalitarian thinking and were it ever taken truly seriously, it would lead to a tyranny with those who espouse it conveniently at the helm. These are moral and political questions that will not be explored in too much detail in the present book. (Pilkington 2016, 2)

What we seek to do here is more humble again. There is a sense today, nearly six years after an economic catastrophe that few still understand and only a few saw coming, that there is something rotten in economics. Something stinks and people are less inclined than ever to trust the funny little man standing next to the blackboard with his equations and his seemingly otherworldly answers to every social and economic problem that one can imagine. This is a healthy feeling and we as a society should promote and embrace it. A similar movement began over half a millennium ago questioning the men of mystery who dictated how people should live their lives from ivory towers; it was called the Reformation and it changed the world…. We are not so much interested in the practices of the economists themselves, as to whether they engage in simony, in nepotism and—could it ever be thought?—the sale of indulgences to those countries that had or were in the process of committing grave sins. Rather we are interested in how we have gotten to where we are and how we can fix it. (Pilkington 2016, 2-3)

The roots of the problems with contemporary economics run very deep indeed. In order to comprehend them, we must run the gamut from political motivation to questions of philosophy and methodology to the foundations of the underlying structure itself. When these roots have been exposed, we can then begin the process of digging them up so we can plant a new tree. In doing this, we do not hope to provide all the answers but merely a firm grounding, a shrub that can, given time, grow into something far more robust. (Pilkington 2016, 3)

Down with Mathematics?

(….) Economics needs more people who distrust mathematics when applying thought to the social and economic world, not fewer. Indeed, … the major problems with economics today arose out of the mathematization of the discipline, especially as it proceeded after the Second World War. Mathematics became to economics what Latin was to the stagnant priest-caste that Luther and other reformers attacked during the Reformation: a means not to clarify, but to obscure through intellectual intimidation. It ensured that the common man could not read the Bible and had to consult the priest and, perhaps, pay him alms. (Pilkington 2016, 3)

(….) [M]athematics can, in certain very limited circumstances, be an opportune way of focusing the debate. It can give us a rather clear and precise conception of what we are talking about. Some aspects—by no means all aspects—of macroeconomics are quantifiable. Investments, profits, the interest rate—we can look the statistics for these things up and use this information to promote economic understanding. That these are quantifiable also means that, to a limited extent, we can conceive of them in mathematical form. It cannot be stressed enough, however, the limited extent to which this is the case. There are always … non-quantifiable elements that play absolutely key roles in how the economy works. (Pilkington 2016, 3-4)

(….) The mathematisation of the discipline was perhaps the crucial turning point when economics began to become something entirely other to the study of the actual economy. It started in the late nineteenth century, but at the time many of those who pioneered the approach became ever more distrustful of doing so. They began to think that it would only lead to obscurity of argument and an inability to communicate properly either with other people or with the real world. Formulae would become synonymous with truth and the interrelation between ideas would become foggy and unclear. A false sense of clarity in the form of pristine equations would be substituted for clarity of thought. Alfred Marshall, a pioneer of mathematics in economics who nevertheless always hid it in footnotes, wrote of his distress in his later years in a letter to his friend. (Pilkington 2016, 4)

[I had] a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics: and I went more and more on the rules—(1) Use mathematics as a shorthand language, rather than an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in (4), burn (3). This last I did often. (Pigou ed. 1966 [1906], pp. 427-428)

The controversy around mathematics appears to have broken out in full force surrounding the issue of econometric estimation in the late 1930s and early 1940s. Econometric estimation … is the practice of putting economic theories into mathematical form and then using them to make predictions based on available statistics…. [I]t is a desperately silly practice. Those who championed the econometric and mathematical approach were men whose names are not known today by anyone who is not deeply interested in the field. They were men like Jan Tinbergen, Oskar Lange, Jacob Marschak and Ragnar Frisch (Louçã 2007). Most of these men were social engineers of one form or another; all of them left-wing and some of them communist. The mood of the time, one reflected in the tendency to try to model the economy itself, was that society and the economy should be planned by men in lab coats. By this they often meant not simply broad government intervention but something more like micro-management of the institutions that people inhabit day-to-day from the top down. Despite the fact that many mathematical economic models today seem outwardly to be concerned with ‘free markets’, they all share this streak, especially in how they conceive that people (should?) act. (Pilkington 2016, 4-5)

Most of the economists at the time were vehemently opposed to this. This was not a particularly left-wing or right-wing issue. On the left, John Maynard Keynes was horrified by what he was seeing develop, while, on the right, Friedrich von Hayek was warning that this was not the way forward. But it was probably Keynes who was the most coherent belligerent of the new approach. This is because before he began to write books on economics, Keynes had worked on the philosophy of probability theory, and probability theory was becoming a key component of the mathematical approach (Keynes 1921). Keynes’ extensive investigations into probability theory allowed him to perceive to what extent mathematical formalism could be applied for understanding society and the economy. He found that it was extremely limited in its ability to illuminate social problems. Keynes was not against statistics or anything like that—he was an early champion and expert—but he was very, very cautious about people who claimed that just because economics produces statistics these can be used in the same way as numerical observations from experiments were used in the hard sciences. He was also keenly aware that certain tendencies towards mathematisation lead to a fogging of the mind. In a more diplomatic letter to one of the new mathematical economists (Keynes, as we shall see … could be scathing about these new approaches), he wrote: (Pilkington 2016, 5-6)

Mathematical economics is such risky stuff as compared with nonmathematical economics, because one is deprived of one’s intuition on the one hand, yet there are all kinds of unexpressed unavowed assumptions on the other. Thus I never put much trust in it unless it falls in with my own intuitions; and I am therefore grateful for an author who makes it easier for me to apply this check without too much hard work. (Keynes cited in Louçã 2007, p. 186)

(….) Mathematics, like the high Latin of Luther’s time, is a language. It is a language that facilitates greater precision in some instances and greater obscurity in others. For most issues economic, it promotes obscurity. When a language is used to obscure, it is used as a weapon by those who speak it to repress the voices of those who do not. A good deal of the history of the relationship between mathematics and the other social sciences in the latter half of the twentieth century can be read under this light. If there is anything that this book seeks to do, it is to help people realise that this is not what economics need be or should be. Frankly, we need more of those who speak the languages of the humanities—of philosophy, sociology and psychology—than we do people who speak the language of the engineers but lack the pragmatic spirit of the engineer who can see clearly that his method cannot be deployed to understand those around him. (Pilkington 2016, 6)

Natural selection of algorithms?

If we suppose that the action of the human brain, conscious or otherwise, is merely the acting out of some very complicated algorithm, then we must ask how such an extraordinarily effective algorithm actually came about. The standard answer, of course, would be ‘natural selection’. As creatures with brains evolved, those with more effective algorithms would have a better tendency to survive and therefore, on the whole, had more progeny. These progeny also tended to carry more effective algorithms than their cousins, since they inherited the ingredients of these better algorithms from their parents; so gradually the algorithms improved (not necessarily steadily, since there could have been considerable fits and starts in their evolution) until they reached the remarkable status that we (would apparently) find in the human brain. (Compare Dawkins 1986). (Penrose 1990: 414)

Even according to my own viewpoint, there would have to be some truth in this picture, since I envisage that much of the brain’s action is indeed algorithmic, and as the reader will have inferred from the above discussion I am a strong believer in the power of natural selection. But I do not see how natural selection, in itself, can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have. (Penrose 1990: 414)

Imagine an ordinary computer program. How would it have come into being? Clearly not (directly) by natural selection! Some human computer programmer would have conceived of it and would have ascertained that it correctly carries out the actions that it is supposed to. (Actually, most complicated computer programs contain errors, usually minor but often subtle, that do not come to light except under unusual circumstances. The presence of such errors does not substantially affect my argument.) Sometimes a computer program might itself have been ‘written’ by another, say a ‘master’ computer program, but then the master program itself would have been the product of human ingenuity and insight; or the program itself might well be pieced together from ingredients some of which were the products of other computer programs. But in all cases the validity and the very conception of the program would have ultimately been the responsibility of (at least) one human consciousness. (Penrose 1990: 414)

One can imagine, of course, that this need not have been the case, and that, given enough time, the computer programs might somehow have evolved spontaneously by some process of natural selection. If one believes that the actions of the computer programmers’ consciousness are themselves simply algorithms, then one must, in effect, believe algorithms have evolved in just this way. However, what worries me about this is that the decision as to the validity of an algorithm is not itself an algorithmic process! … (The question of whether or not a Turing machine will actually stop is not something that can be decided algorithmically.) In order to decide whether or not an algorithm will actually work, one needs insights, not just another algorithm. (Penrose 414-415)
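Penrose’s parenthetical refers to Turing’s classic result on the halting problem. As an illustration (our own sketch, not Penrose’s), the standard diagonal argument can be written out in a few lines of Python: any claimed halting decider `halts` is refuted by a program built to do the opposite of whatever `halts` predicts about it.

```python
# Sketch of Turing's diagonal argument: no total, always-correct
# function halts(prog, arg) can exist that decides whether prog
# halts when run on arg.

def make_paradox(halts):
    """Build the self-referential program used in the diagonal argument."""
    def paradox(prog):
        # Do the opposite of what `halts` predicts for prog run on itself.
        if halts(prog, prog):
            while True:      # halts() said "halts", so loop forever
                pass
        return "halted"      # halts() said "loops", so halt immediately
    return paradox

# If halts(paradox, paradox) is True, paradox(paradox) loops forever;
# if it is False, paradox(paradox) halts.  Either way `halts` is wrong
# about at least one input, so no such decider can exist.

# Exhibit the contradiction concretely with a toy "decider" that
# guesses False for everything:
guess_false = lambda prog, arg: False
paradox = make_paradox(guess_false)
print(paradox(paradox))  # halts, contradicting the guess of False
```

The point matches Penrose’s: deciding what an algorithm will do is not itself an algorithmic task.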

Nevertheless, one still might imagine some kind of natural selection process being effective for producing approximately valid algorithms. Personally, I find this very difficult to believe, however. Any selection process of this kind could act only on the output of the algorithms and not directly on the ideas underlying the actions of the algorithms. This is not simply extremely inefficient; I believe that it would be totally unworkable. In the first place, it is not easy to ascertain what an algorithm actually is, simply by examining its output. (It would be an easy matter to construct two quite different simple Turing machine actions for which the output tapes did not differ until, say, the 2^65536th place — and this difference could never be spotted in the entire history of the universe!) Moreover, the slightest ‘mutation’ of an algorithm (say a slight change in a Turing machine specification, or in its input tape) would tend to render it totally useless, and it is hard to see how actual improvements in algorithms could ever arise in this random way. (Even deliberate improvements are difficult without ‘meanings’ being available. Suppose an inadequately documented and complicated computer program needs to be altered or corrected, and the original programmer has departed or perhaps died. Rather than try to disentangle all the various meanings and intentions that the program implicitly depended upon, it is probably easier just to scrap it and start all over again!) (Penrose 1990: 415)

Perhaps some much more ‘robust’ way of specifying algorithms could be devised, which would not be subject to the above criticisms. In a way, this is what I am saying myself. The ‘robust’ specifications are the ideas that underlie the algorithms. But ideas are things that, as far as we know, need conscious minds for their manifestation. We are back with the problem of what consciousness actually is, and what it can actually do that unconscious objects are incapable of — and how on earth natural selection has been clever enough to evolve that most remarkable of qualities. (Penrose 1990: 415)

(….) To my way of thinking, there is still something mysterious about evolution, with its apparent ‘groping’ towards some future purpose. Things at least seem to organize themselves somewhat better than they ‘ought’ to, just on the basis of blind-chance evolution and natural selection…. There seems to be something about the way that the laws of physics work, which allows natural selection to be a much more effective process than it would be with just arbitrary laws. The resulting apparently ‘intelligent groping’ is an interesting issue. (Penrose 1990: 416)

The non-algorithmic nature of mathematical insight

… [A] good part of the reason for believing that consciousness is able to influence truth-judgements in a non-algorithmic way stems from consideration of Gödel’s theorem. If we can see that the role of consciousness is non-algorithmic when forming mathematical judgements, where calculation and rigorous proof constitute such an important factor, then surely we may be persuaded that such a non-algorithmic ingredient could be crucial also for the role of consciousness in more general (non-mathematical) circumstances. (Penrose 1990: 416)

… Gödel’s theorem and its relation to computability … [has] shown that whatever (sufficiently extensive) algorithm a mathematician might use to establish mathematical truth — or, what amounts to the same thing, whatever formal system he might adopt as providing his criterion of truth — there will always be mathematical propositions, such as the explicit Gödel proposition P(K) of the system …, that his algorithm cannot provide an answer for. If the workings of the mathematician’s mind are entirely algorithmic, then the algorithm (or formal system) that he actually uses to form his judgements is not capable of dealing with the proposition P(K) constructed from his personal algorithm. Nevertheless, we can (in principle) see that P(K) is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all! (Penrose 1990: 416-417)

(….) The message should be clear. Mathematical truth is not something that we ascertain merely by use of an algorithm. I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must ‘see’ the truth of a mathematical argument to be convinced of its validity. This ‘seeing’ is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only ‘see’ it, but by so doing we reveal the very non-algorithmic nature of the ‘seeing’ process itself. (Penrose 1990: 418)

Charmed by Dimensional Analysis

I was charmed when as a young student I watched one of my physics professors, the late Harold Daw, work a problem with dimensional analysis. The result appeared as if by magic without the effort of constructing a model, solving a differential equation, or applying boundary conditions. But the inspiration of the moment did not, until many years later, bear fruit. In the meantime my acquaintance with this important tool remained partial and superficial. Dimensional analysis seemed to promise more than it could deliver. (Lemons 2017, ix, emphasis added)

Dimensional analysis has charmed and disappointed others as well…. The problem for teachers and students is that … [t]he mathematics required for its application is quite elementary — of the kind one learns in a good high school course — and its foundational principle is essentially a more precise version of the rule against “adding apples and oranges.” Yet the successful application of dimensional analysis requires physical intuition — an intuition that develops only slowly with the experience of modeling and manipulating physical variables. (Lemons 2017, Preface ix, emphasis added)

A Mistake to Avoid

A model of a state or process incorporates certain idealizations and simplifications. Skill and judgement are required to decide which quantities are needed to describe the state or process and what idealizations and simplifications should be incorporated. Similar skill and judgement are required in dimensional analysis, for the analysis in dimensional analysis is the analysis of a model. And the model we adopt in a dimensional analysis is determined by the dimensional analysis variables and constants we adopt and the dimensions in terms of which they are expressed. (….) While a certain part of dimensional analysis reduces to the algorithmic, no algorithm helps us answer [certain physical questions]. Rather, our answers define the state or process we describe and the model we adopt. We will, on occasion, make mistakes. (Lemons 2017, 11, emphasis original)

Dimensional analysis makes it possible to analyze in a systematic way dimensional relationships between physical quantities defining a model (Higham 2015, 90-91, emphasis added). Dimensional analysis is a clever strategy for extracting knowledge from a remarkably simple idea, nicely stated by Richardson[,] “… that phenomena go their way independently of the units whereby we measure them.” Within its limits, it works excellently, and makes possible astonishing economies in effort. The limits are soon reached, and beyond them it cannot help. In that it is like a specialized tool in carpentry or cooking or agriculture, like the water-driven husking mill … which husks rice elegantly and admirably but cannot do anything else. (Palmer 2015, v, emphasis added)

Physical (material) things have quantitative relationships that are measurable. A dimensional model uses a number of dimensional variables (physical variables) and constants that describe the model. Dimensional analysis is not a straightforward task for it requires skill and judgment — the same kind of skill and judgment needed to construct a model of a physical state or process. Add the complexity of open social systems and this requires even more skill and judgment.
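To make the flavor of the method concrete, here is the standard textbook pendulum example (our own illustration; the pendulum and the exponent-matching below are not taken from the quoted sources). We ask for exponents a and b such that length^a × gravity^b has the dimension of time, and solve the resulting exponent-matching equations exactly:

```python
from fractions import Fraction

# Dimensional analysis of the simple pendulum.  We seek exponents
# a, b such that  L^a * g^b  has the dimension of time T, where
#   [L] = L^1 T^0   (pendulum length)
#   [g] = L^1 T^-2  (gravitational acceleration)
# Matching exponents of L and T gives the linear system
#   a + b = 0      (exponents of L)
#   -2b   = 1      (exponents of T)

b = Fraction(1, -2)   # from -2b = 1
a = -b                # from a + b = 0

print(a, b)           # 1/2 -1/2  ->  period is proportional to sqrt(L/g)
```

Dimensional analysis alone cannot supply the dimensionless constant (2π, in this case); as Lemons stresses, that step requires a model, not an algorithm.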

So a legitimate question arises when we confront human social systems, which is what economics by its very nature must do: to what extent can mathematical models capture the true underlying causes of changes in economic behaviour?

When we become charmed by our mathematical tools and fail to recognize their limitations, their range of validity, we become slaves to our tools rather than masters of them.

What is Applied Mathematics?

The Big Picture

Applied mathematics is a large subject that interfaces with many other fields. Trying to define it is problematic, as noted by William Prager and Richard Courant, who set up two of the first centers of applied mathematics in the United States in the first half of the twentieth century, at Brown University and New York University, respectively. They explained that:

Precisely to define applied mathematics is next to impossible. It cannot be done in terms of subject matter: the borderline between theory and application is highly subjective and shifts with time. Nor can it be done in terms of motivation: to study a mathematical problem for its own sake is surely not the exclusive privilege of pure mathematicians. Perhaps the best I can do within the framework of this talk is to describe applied mathematics as the bridge connecting pure mathematics with science and technology.

Prager (1972)

Applied mathematics is not a definable scientific field but a human attitude. The attitude of the applied scientist is directed towards finding clear cut answers which can stand the test of empirical observation. To obtain the answers to theoretically often insuperably difficult problems, he must be willing to make compromises regarding rigorous mathematical completeness; he must supplement theoretical reasoning by numerical work, plausibility considerations and so on.

Courant (1965)

Garrett Birkhoff offered the following view in 1977, with reference to the mathematician and physicist Lord Rayleigh (John William Strutt, 1842-1919):

Essentially, mathematics becomes “applied” when it is used to solve real-world problems “neither seeking nor avoiding mathematical difficulties” (Rayleigh).

Rather than define what applied mathematics is, one can describe the methods used in it. Peter Lax stated of these methods, in 1989, that:

Some of them are organic parts of pure mathematics: rigorous proofs of precisely stated theorems. But for the greatest part the applied mathematician must rely on other weapons: special solutions, asymptotic description, simplified equations, experimentation both in the laboratory and on the computer.

Here, instead of attempting to give our own definition of applied mathematics we describe the various facets of the subject, as organized around solving a problem. The main steps are described in figure 1. Let us go through each of these steps in turn. (Higham 2015, 1)

Modeling a problem. Modeling is about taking a physical problem and developing equations—differential, difference, integral, or algebraic—that capture the essential features of the problem and so can be used to obtain a qualitative or quantitative understanding of its behavior. Here, “physical problem” might refer to a vibrating string, the spread of an infectious disease, or the influence of people participating in a social network. Modeling is necessarily imperfect and requires simplifying assumptions. One needs to retain enough aspects of the system being studied that the model reproduces the most important behavior but not so many that the model is too hard to analyze. Different types of models might be feasible (continuous, discrete, stochastic), and for a given type there can be many possibilities. Not all applied mathematicians carry out modeling; in fact, most join the process at the next step. (Higham 2015, 2)
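Higham’s infectious-disease example can be made concrete with a small sketch. The discrete-time SIR model below is our own minimal illustration of the modeling step; the parameter values, step size, and initial conditions are assumptions chosen for demonstration, not taken from the source.

```python
# A minimal discrete-time SIR model of infectious-disease spread.
# beta (transmission rate), gamma (recovery rate), dt, and the
# initial conditions are illustrative assumptions.

def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """Advance susceptible/infected/recovered fractions by one time step."""
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days=160):
    s, i, r = 0.99, 0.01, 0.0   # initial fractions of the population
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
    return s, i, r

s, i, r = simulate()
print(round(s + i + r, 9))  # the three fractions always sum to 1.0
```

Note the simplifying assumptions the model encodes (homogeneous mixing, constant rates, no births or deaths): exactly the kind of idealization Higham says one must choose with judgment.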

Analyzing the mathematical problem. The questions formulated in the previous step are now analyzed and, ideally, solved. In practice, an explicit, easily evaluated solution usually cannot be obtained, so approximations may have to be made, e.g., by discretizing a differential equation, producing a reduced problem. The techniques necessary for the analysis of the equations or reduced problem may not exist, so this step may involve developing appropriate new techniques. If analytic or perturbation methods have been used then the process may jump from here directly to validation of the model.
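To illustrate the discretization just mentioned, here is a minimal sketch (our own toy problem, not from the source): forward Euler applied to y′ = −y, whose exact solution e^(−t) lets us check that halving the step size roughly halves the error, as expected of a first-order method.

```python
import math

# Discretizing a differential equation: forward Euler for
#   y' = -y,  y(0) = 1,  exact solution y(t) = exp(-t).

def euler(f, y0, t_end, n_steps):
    """Forward Euler with n_steps uniform steps on [0, t_end]."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += h * f(y)
    return y

f = lambda y: -y
exact = math.exp(-1.0)

# Halving the step size roughly halves the error (first-order method).
err_coarse = abs(euler(f, 1.0, 1.0, 50) - exact)
err_fine = abs(euler(f, 1.0, 1.0, 100) - exact)
print(err_coarse / err_fine)  # close to 2
```

Comparing the reduced (discretized) problem against a known exact solution in this way is one simple form of the validation loop described below in Higham’s outline of the steps.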

Developing algorithms. It may be possible to solve the reduced problem using an existing algorithm—a sequence of steps that can be followed mechanically without the need for ingenuity. Even if a suitable algorithm exists it may not be fast or accurate enough, may not exploit available structure or other problem features, or may not fully exploit architecture of the computer on which it is to be run. It is therefore often necessary to develop new or improved algorithms.

Writing software. In order to use algorithms on a computer it is necessary to implement them in software. Writing reliable, efficient software is not easy, and depending on the computer environment being targeted it can be a highly specialized task. The necessary software may already be available, perhaps in a package or program library. If it is not, software is ideally developed and documented to a high standard and made available to others. In many cases the software stage consists simply of writing short programs, scripts, or notebooks that carry out the necessary computations and summarize the results, perhaps graphically.

Computational experiments. The software is now run on problem instances and solutions obtained. The computations could be numeric or symbolic, or a mixture of the two.

Validation of the model. The final step is to take the results from the experiments (or from the analysis, if the previous three steps were not needed), interpret them (which may be a nontrivial task), and see if they agree with the observed behavior of the original system. If the agreement is not sufficiently good then the model can be modified and the loop through the steps repeated. The validation step may be impossible, as the system in question may not yet have been built (e.g., a bridge or a building).

Other important tasks for some problems, which are not explicitly shown in our outline, are to calibrate parameters in a model, to quantify the uncertainty in these parameters, and to analyze the effect of that uncertainty on the solution of the problem. These steps fall under the heading of UNCERTAINTY QUANTIFICATION [II.34].

Once all the steps have been successfully completed the mathematical model can be used to make predictions, compare competing hypotheses, and so on. A key aim is that the mathematical analysis gives new insights into the physical problem, even though the mathematical model may be a simplification of it.

A particular applied mathematician is most likely to work on just some of the steps; indeed, except for relatively simple problems it is rare for one person to have the skills to carry out the whole process from modeling to computer solution and validation.

In some cases the original problem may have been communicated by a scientist in a different field. A significant effort can be required to understand what the mathematical problem is and, when it is eventually solved, to translate the findings back into the language of the relevant field. Being able to talk to people outside mathematics is therefore a valuable skill for the applied mathematician. (Higham 2015, 2)

Breaking Mathematical Sense

“I asked him to outline the algo [algorithm] for me,” one junior accountant remarked about her derivatives-trading, Porsche-driving superior, “and he couldn’t; he just took it on faith.” “Most kids have computer skills in their genes … but just up to a point … when you try to show them how to generate the numbers they see on screen, they get impatient; they just want the numbers and leave where these came from to the mainframe.”

Arvidsson, Adam. The Ethical Economy (p. 3). Columbia University Press. Kindle Edition.

Introduction

Mathematicians, as far as I can see, are not terribly interested in the philosophy of mathematics. They often have philosophical views, but they are usually not very keen on challenging or developing them—they don’t usually consider this as worthy of too much effort. They’re also very suspicious of philosophers. Indeed, mathematicians know better than anyone else what it is that they’re doing. The idea of having a philosopher lecture them about it feels kind of silly, or even intrusive. (Roi 2017, 3)

So we turn to people who have something to do with mathematics in their professional or daily lives, but are not focused on mathematics. Such people often have some sort of vague, sometimes naïve, conceptions of mathematics. One of the most striking manifestations of these folk views is the following: If I say something philosophical that people don’t understand, the default assumption is that I use big pretentious words to cover small ideas. If I say something mathematical that people don’t understand, the default assumption is that I’m saying something so smart and deep that they just can’t get it. (Roi 2017, 3-4)

There’s an overwhelming respect for mathematics in academia and wider circles. So much so that bad, trivial, and pointless forms of mathematization are often mistaken for important achievements in the social sciences, and sometimes in the humanities as well. It is often assumed that all ambiguities in our vague verbal communication disappear once we switch to mathematics, which is supposed to be purely univocal and absolutely true. But a mirror image of this approach is also common. According to this view, mathematics is a purely mechanical, inhuman, and irrelevantly abstract form of knowledge. (Roi 2017, 4)

I believe that the philosophy of mathematics should try to confront such naïve views. To do that, one doesn’t need to reconstruct a rational scheme underlying the way we speak of mathematics, but rather paint a richer picture of mathematics, which tries to affirm, rather than dispel, its ambiguities, humanity, and historicity. (Roi 2017, 4)

(….) The uncritical idolizing of mathematics as the best model of knowledge, just like the opposite trend of disparaging mathematics as mindless drudgery, are both detrimental to the organization and evaluation of contemporary academic knowledge. Instead, mathematics should be appreciated and judged as one among many practices of shaping knowledge. (Roi 2017, 4-5)


A Vignette: Option Pricing and the Black-Scholes Formula

The point of the following vignette is to give a concrete example of how mathematics relates to its wider scientific and practical context. It will show that mathematics has force, and that its force applies even when actual mathematical claims do not quite work as descriptions of reality…. The context of this vignette is option pricing. An “option” is the right (but not the obligation) to make a certain transaction at a certain cost at a certain time. For example, I could own the option to buy 100 British pounds for 150 US dollars three months from today. If I own the option, and three months from today 100 pounds are worth more than 150 dollars, I will exercise it; if they are worth less, I will most probably simply discard it. Such options could be used as insurance. The preceding option, for example, would insure me against a drop in the dollar-pound exchange rate, if I needed such insurance. It could also serve as a simple bet for financial gamblers. But what price should one put on this kind of insurance or bet? There are two narratives to answer this question. The first says that until 1973, no one really knew how to price such options, and prices were determined by supply, demand, and guesswork. More precisely, there existed some reasoned means to price options, but they all involved putting a price on the risk one was willing to take, which is a rather subjective issue. (Roi 2017, 6)
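The payoff logic of the example above can be sketched in a few lines of Python (a minimal illustration using the vignette's own numbers, an option to buy 100 pounds for 150 dollars; the function name is ours, not the author's):

```python
def option_payoff(spot_value_of_pounds, strike_dollars=150.0):
    """Payoff at expiry of the option to buy 100 pounds for 150 dollars.

    If the 100 pounds are worth more than the strike, exercise and pocket
    the difference; otherwise discard the option (payoff zero).
    """
    return max(spot_value_of_pounds - strike_dollars, 0.0)

# Three months later, 100 pounds are worth 160 dollars: exercise.
print(option_payoff(160.0))  # 10.0
# 100 pounds are worth only 140 dollars: discard.
print(option_payoff(140.0))  # 0.0
```

The asymmetry of this payoff (unlimited upside, downside capped at zero) is exactly what makes the option work as insurance, and what makes pricing it nontrivial.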

In two papers published in 1973, Fischer Black and Myron Scholes, followed by Robert Merton, came up with a reasoned formula for pricing options that did not require putting a price on risk. This feat was deemed so important that in 1997 Scholes and Merton were awarded the Nobel Prize in economics [see The Nobel Factor] for their formula (Black had died two years earlier). Indeed, “Black, Merton and Scholes thus laid the foundation for the rapid growth of markets for derivatives in the last ten years”—at least according to the Royal Swedish Academy press release (1997). (Roi 2017, 6-7)

But there’s another way to tell the story. This other way claims that options go back as far as antiquity, and option pricing has been studied as early as the seventeenth century. Option pricing formulas were established well before Black and Scholes, and so were various means to factor out putting a price on risk (based on something called put-call parity rather than the Nobel-winning method of dynamic hedging, but we can’t go into details here). Moreover, according to this narrative, the Black-Scholes formula simply doesn’t work and isn’t used (Derman and Taleb 2005; Haug and Taleb 2011).

If we wanted to strike a compromise between the two narratives, we could say that the Black-Scholes model was a new and original addition to existing models and that it works under suitable ideal conditions, which are not always approximated by reality. But let’s try to be more specific. (Roi 2017, 7)

The idea behind the Black-Scholes model is to reconstruct the option by a dynamic process of buying and selling the underlying assets (in our preceding example, pounds and dollars). It provides an initial cost and a recipe that tells you how to continuously buy and sell these dollars and pounds as their exchange rate fluctuates over time in order to guarantee that by the time of the transaction, the money one has accumulated, together with the 150 dollars dictated by the option, would be enough to buy 100 pounds. This recipe depends on some clever, deep, and elegant mathematics. (Roi 2017, 7)
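The initial cost produced by this dynamic-hedging argument is the standard Black-Scholes price for a European call option. A minimal sketch of the formula (conventional parameter names; an illustration, not a production pricer):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: current price of the underlying
    K: strike price
    T: time to expiry in years
    r: risk-free interest rate
    sigma: future volatility, assumed fixed and known (see below)
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# A textbook example: at-the-money call, one year out, 5% rate, 20% volatility.
print(round(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.2), 2))  # 10.45
```

Note that no parameter prices the buyer's risk appetite: everything on the right-hand side is, in principle, observable except the future volatility, which is exactly where the trouble discussed below comes in.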

This recipe is also risk free and will necessarily work, provided some conditions hold. These conditions include, among others, the capacity to always instantaneously buy and sell as many pounds/dollars as I want and a specific probabilistic model for the behavior of the exchange rate (Brownian motion with a fixed and known future volatility, where volatility is a measure of the fluctuations of the exchange rate). (Roi 2017, 7)
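The probabilistic model assumed here can be illustrated with a short simulation of geometric Brownian motion with fixed volatility, the exchange-rate behavior the argument requires (a sketch with illustrative parameter values of our choosing):

```python
import math
import random

def simulate_exchange_rate(S0, mu, sigma, T, steps, seed=0):
    """One sample path of geometric Brownian motion.

    S0: starting exchange rate; mu: drift; sigma: the fixed, known
    volatility the Black-Scholes derivation assumes; T: horizon in years.
    """
    rng = random.Random(seed)
    dt = T / steps
    path = [S0]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)  # normally distributed shocks: no fat tails
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# Three months of daily moves in a dollar-pound rate starting at 1.5.
path = simulate_exchange_rate(S0=1.5, mu=0.0, sigma=0.1, T=0.25, steps=90)
```

The model's two signature restrictions are visible in the code: the shocks are Gaussian (so extreme moves are vanishingly rare) and `sigma` is a single constant for the whole horizon. Both restrictions are what the next paragraph calls into question.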

The preceding two conditions do not hold in reality. First, buying and selling is never really unlimited and instantaneous. Second, exchange rates do not adhere precisely to the specific probabilistic model. But if we can buy and sell fast enough, and the Brownian model is a good enough approximation, the pricing formula should work well enough. Unfortunately, prices sometimes follow other probabilistic models (with some infinite moments), where the Black and Scholes formula may fail to be even approximately true. The latter flaw is sometimes cited as an explanation for some of the recent market crashes—but this is a highly debated interpretation. (Roi 2017, 7-8)

Another problem is that the future volatility (a measure of cost fluctuations from now until the option expires) of whatever the option buys and sells has to be known for the model to work. One could rely on past volatility, but when comparing actual option prices and the Black-Scholes formula, this doesn’t quite work. The volatility rate that is required to fit the Black-Scholes formula to actual market option pricing is not simply past volatility. (Roi 2017, 8)

In fact, if one compares actual option prices to the Black-Scholes formula, and tries to calculate the volatility that would make them fit, it turns out that there’s no single volatility for a given commodity at a given time. The cost of wilder options (for selling or buying at a price far removed from the present price) reflects higher volatility than the more tame options. So something is clearly empirically wrong with the Black-Scholes model, which assumes a fixed (rather than a stochastic) future volatility for whatever the option deals with, regardless of the terms of the option. (Roi 2017, 8)
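Running the formula "backward" in this way amounts to a one-dimensional root search: given a market price, find the volatility that makes the formula reproduce it. A minimal self-contained sketch using bisection (helper names are ours; the call pricer is the standard Black-Scholes expression):

```python
import math

def _ncdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * _ncdf(d1) - K * math.exp(-r * T) * _ncdf(d1 - sigma * math.sqrt(T))

def implied_vol(market_price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection for the volatility that makes bs_call match market_price.

    Works because the call price is monotone increasing in sigma.
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < market_price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Sanity check: recover the volatility from a model-generated price.
price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
print(round(implied_vol(price, 100.0, 100.0, 1.0, 0.05), 4))  # 0.2
```

Applied to real market prices across strikes, this search returns a different implied volatility for each strike: the "smile" that the passage above describes, and direct evidence against the model's fixed-volatility assumption.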

So the Black-Scholes formula is nice in theory, but needn’t work in practice. Haug and Taleb (2011) even argue that practitioners simply don’t use it, and have simpler practical alternatives. They go as far as to say that the Black-Scholes formula is like “scientists lecturing birds on how to fly, and taking credit for their subsequent performance—except that here it would be lecturing them the wrong way” (101, n. 13). So why did the formula deserve a Nobel prize? (Roi 2017, 8)

Looking at some informal exchanges between practitioners, one can find some interesting answers. The discussion I quote from the online forum Quora was headed by the question “Is the Black-Scholes Formula Just Plain Wrong?” (2014). All practitioners agree that the formula is not used as such. Many of them don’t quite see it as an approximation either. But this does not mean they think it is useless. One practitioner (John Hwang) writes:

Where Black-Scholes really shines, however, is as a common language between options traders. It’s the oldest, simplest, and the most intuitive option pricing model around. Every option trader understands it, and it is easy to calculate, so it makes sense to communicate implied volatility [the volatility that would make the formula fit the actual price] in terms of Black-Scholes…. As proof, the exchanges disseminate [Black-Scholes] implied volatility in addition to data.

Another practitioner (Rohit Gupta) adds that this “is done because traders have better intuition in terms of volatilities instead of quoting various prices.” In the same vein, yet another practitioner (Joseph Wang) added:

One other way of looking at this is that Black-Scholes provides something of a baseline that lets you compare the real world to a nonexistent ideal world…. Since we don’t live in an ideal world, the numbers are different, but the Black-Scholes framework tells us *how different* the real world is from the idealized world.

So the model earned its renown by providing a common language that practitioners understand well, and allowing them to understand actual contingent circumstances in relation to a sturdy ideal. (Roi 2017, 9)

Now recall that practitioners extrapolate the implied volatility by comparing the Black-Scholes formula to actual prices, rather than plug a given volatility into the formula to get a price. This may sound like data fitting. Indeed, one practitioner (Ron Ginn) states that “if the common denominator of the crowd’s opinion is more or less Black-Scholes … smells like a self fulfilling prophecy could materialize,” or, put in a more elaborate manner (Luca Parlamento):

I just want to add that CBOE [Chicago Board Options Exchange] in early ’70 was looking to market a new product: something called “options.” Their issue was that how you can market something that no one evaluate? You can’t! You need a model that helps people exchange stuff, turn[s] out that the BS formula … did the job. You have a way to make people easily agree on prices, create a liquid market and … “why not” generate commissions.

The tone here is more sinister: the formula is useful because it’s there, because it’s a reference point that allows a market to grow around it. (Roi 2017, 9)

But why did this specific formula attract the market, and become a common reference point, possibly even a self-fulfilling prophecy? Why not any of the other older or contemporary pricing practices, which are no worse? Why was this specific pricing model deemed Nobel worthy? (Roi 2017, 10)

The answer, I believe, lies in the mathematics. The formula depends on a sound and elegant argument. The mathematics it uses is sophisticated, and enjoys a record of good service in physics, which imparts a halo of scientific prestige. Moreover, it is expressed in the language of an expressive mathematical domain that makes sense to practitioners (and, of course, it also came at the right time).

This is the force of mathematics. It’s a language that the practitioners of the relevant niches understand and value. It feels well founded and at least ideally true. If it is sophisticated and comes with a good track record in other scientific contexts, it is assumed to be deep and somehow true. All this helps build rich practical networks around mathematical ideas, even when these ideas do not reflect empirical reality very well. (Roi 2017, 10)

(….) [I]f we want to understand the surprising force of mathematics demonstrated in this vignette, we need to engage in a more careful analysis of mathematical practice. (Roi 2017, 10)