Computability and Economics

My incompleteness theorem makes it likely that mind is not mechanical, or else mind cannot understand its own mechanism. If my result is taken together with the rationalistic attitude which Hilbert had and which was not refuted by my results, then [we can infer] the sharp result that mind is not mechanical. This is so, because, if the mind were a machine, there would, contrary to this rationalistic attitude, exist number-theoretic questions undecidable for the human mind. (Gödel in Wang 1996, 186-187)

(….) However, Wang reported that in 1972, in comments at a meeting to honor von Neumann, Gödel said: “The brain is a computing machine connected with a spirit” (Wang 1996, 189). In discussion with Wang at about that time, Gödel amplified this remark:

Even if the finite brain cannot store an infinite amount of information, the spirit may be able to. The brain is a computing machine connected with a spirit. If the brain is taken to be physical and as a digital computer, from quantum mechanics there are then only a finite number of states. Only by connecting it to a spirit might it work in some other way. (Gödel in Wang 1996, 193)

Some caution is required in interpreting the remarks recorded by Wang, since the context is not always clear. Nevertheless Wang’s reports create the impression that, by the time of his note about Turing, Gödel was again tending toward a negative answer to the question, “Is the human mind replaceable by a machine?” (Copeland et al. 2013, 21)

(….) The mathematician Jack Good, formerly Turing’s colleague at Bletchley Park, Britain’s wartime code-breaking headquarters, gave a succinct statement of the Mathematical Objection in a 1948 letter to Turing:

Can you pin-point the fallacy in the following argument? “No machine can exist for which there are no problems that we can solve and it can’t. But we are machines: a contradiction.”

At the time of Good’s letter Turing was already deeply interested in the Mathematical Objection. More than eighteen months previously he had given a lecture, in London, in which he expounded and criticized an argument flowing from his negative result concerning the Entscheidungsproblem and concluding that “there is a fundamental contradiction in the idea of a machine with intelligence” (1947, 393). (Copeland et al. 2013, 21)

Copeland et al. (2013, 5, 21) Computability: Turing, Gödel, Church, and Beyond. MIT Press.

If the economy is driven by one individual choice after another in response to prices, this should be capable of being modelled on a computer. In the words of an eminent economist, consumer choice can be likened to a computer ‘into whom we “feed” a sequence of market prices and from whom we obtain a corresponding sequence of “solutions” in the form of specified optimum positions’. The ranking of preferences determines the market choices of economic man. Arriving at this ranking can be modelled as a sequence of pairwise comparisons, for example, making a choice between strawberry and vanilla flavours (taking price into account), and comparing likewise all other options in sequence until the budget is exhausted. Such choices can be embodied in an algorithm (a calculation procedure) to run sequentially on a computer and provide a numerical result.  (Offer and Söderberg 2019, 263)
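The sequential-choice picture described here can be made concrete. The following Python fragment is only an illustrative sketch, not anything from Offer and Söderberg: the goods, prices, and preference scores are invented, and the pairwise rule simply prefers whichever option gives more preference per unit of price, repeating until the budget is exhausted.

```python
# Illustrative sketch only: the goods, prices, and preference scores are invented.
goods = {              # price per unit
    "strawberry": 2.50,
    "vanilla": 2.00,
    "chocolate": 3.00,
}
preference = {         # higher score = more preferred (the consumer's innate ranking)
    "strawberry": 8,
    "vanilla": 5,
    "chocolate": 7,
}

def prefer(a, b):
    """Pairwise comparison, taking price into account: more preference per unit of price wins."""
    return a if preference[a] / goods[a] >= preference[b] / goods[b] else b

def choose(budget):
    """Compare the affordable options pairwise, buy the winner, repeat until the budget runs out."""
    basket = []
    while True:
        affordable = [g for g in goods if goods[g] <= budget]
        if not affordable:
            return basket
        best = affordable[0]
        for other in affordable[1:]:
            best = prefer(best, other)
        basket.append(best)
        budget -= goods[best]

print(choose(10.0))  # -> ['strawberry', 'strawberry', 'strawberry', 'strawberry']
```

Fed a sequence of prices, such an algorithm does return a sequence of “solutions”, which is exactly what makes the next paragraph’s objection bite: not every choice problem reduces to so tidy a procedure.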

But there is a snag: some algorithmic problems cannot be solved by a digital computer. They either take too long to compute, are impossible to compute (that is, are ‘non-computable’), or it is unknown whether they can be computed. For example, the variable of interest may increase exponentially as the algorithm moves sequentially through time. A generic computer (known after its originator as a ‘Turing machine’, which can mimic any computer) fails to complete the algorithm and never comes to a halt. Such problems can arise in deceptively simple tasks, for example, the ‘travelling salesman problem’, which involves calculating the shortest route through several given locations, or designing efficient networks more generally. For every incremental move, the time required by the computer rises by a power: there may be a solution, but it requires an impossible length of time to compute. In a more familiar example, encryption relying on the multiplication of two unknown prime numbers can be broken, but relies on solutions taking too long to complete. (Offer and Söderberg 2019, 263-264)
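To see why such problems blow up, consider a brute-force attack on the travelling salesman problem. The sketch below is illustrative only (the city coordinates are invented): it fixes the first city and tries every ordering of the rest, so the number of candidate routes grows factorially with the number of locations, and even a modest instance is already far out of reach for exhaustive search.

```python
from itertools import permutations
from math import dist, factorial

# Invented coordinates for a handful of locations.
cities = [(0, 0), (2, 1), (5, 3), (1, 4), (6, 0), (3, 5)]

def tour_length(order):
    """Total length of the closed tour visiting the cities in the given order."""
    return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

def brute_force_tsp(points):
    """Fix the first city and try every ordering of the rest: (n-1)! candidate routes."""
    first, rest = points[0], points[1:]
    return min(((first,) + p for p in permutations(rest)), key=tour_length)

best = brute_force_tsp(cities)
print(f"shortest tour found: {tour_length(best):.2f}")
print(f"routes examined for {len(cities)} cities: {factorial(len(cities) - 1)}")
print(f"routes for 20 cities: {factorial(19):,}")  # about 1.2e17 -- far beyond practical reach
```

Cleverer exact algorithms exist, but for problems of this kind the known ones still scale explosively in the worst case; the point of the passage is that a solution may exist in principle yet be unreachable in any feasible amount of time.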

The clockwork consumer maximizes her innate preferences in response to market prices. But there is a flaw in the design: the clockwork may not deliver a result. It may have to run forever before making a single choice. This has been demonstrated formally several times. The ordering of individual preferences has been claimed to be ‘non-computable’, and Walrasian general equilibrium may be non-computable as well. Non-computability in economics is little cited by mainstream scholars. On the face of it, it makes a mockery of the neoclassical notions of rationality and rigour, both of which imply finality. Economics however averts its gaze. In practice, since standard microeconomics has never aspired to realism, it may be a reasonable response to say that it has formalisms that work, and that they constitute ‘horses for courses’. But what cannot be claimed for such formalisms is a unique and binding authority in a theoretical, empirical, policy-normative sense, in the way that scientific consensus is binding. (Offer and Söderberg 2019, 264-265)

Computation rears its head several times in Nobel economics. In the second Nobel Lecture, Ragnar Frisch described the task of the economist as validating and executing policy preferences by feeding them into computer models of the economy, and Milton Friedman expressed a similar idea in his Nobel Lecture of 1976. Hayek (NPW, 1974) made his mark in the ‘socialist calculation debate’. Defenders of socialist planning (and of neoclassical economics) in the 1920s and 1930s argued that private ownership was not crucial: socialism could make use of markets, and that the requirements for socialist calculation were no more onerous than the ones assumed for neoclassical general equilibrium. From then onwards, the debate should really be called ‘the neoclassical calculation debate’. Joseph Stiglitz (NPW, 2001) perversely framed a devastating demolition of general equilibrium economics as a criticism of market socialism. Kenneth Arrow (NPW, 1972), an architect of general equilibrium, pointed out (against general equilibrium) that in terms of computability, every person is her own ‘socialist planner’—the task of rationally ordering even private preferences and choices (which Hayek and economics more generally takes for granted) looks too demanding. Under general equilibrium, if even a single person is in a position to set a price (as opposed to taking it as given), ‘the superiority of the market over centralized planning disappears. Each individual agent is in effect using as much information as would be required by a central planner.’ (Offer and Söderberg 2019, 265)

In response to the socialist neoclassical defence, Hayek and his supporters questioned the very possibility of rational calculation. Hayek acknowledged the interdependence of all prices. But the consumer and entrepreneur did not need to be omniscient, just to make use of local price signals and local knowledge to price their goods and choices. The problem was not the static once-and-for-all efficiency of general equilibrium, but coping with change. The prices obtained fell well short of optimality (in the Pareto general equilibrium sense). Hayek implied that this was the best that could be achieved. But how would we know? Joseph Stiglitz (NPW, 2001) does not think it is. Regulation can improve it. Hayek’s position fails as an argument against socialism: if capitalism can do without omniscience, why not a Hayekian market socialism without omniscience? A key part of Mises’s original argument against socialism in 1920 was that entrepreneurs require the motivation of profit, and that private ownership of the means of production was indispensable. But advanced economies are mixed economies: they have large public sectors, in which central banking, social insurance, and infrastructure, typically more than a third of the economy, are managed by governments or not-for-profits. They would be much less efficient to manage any other way. In Britain, for example, with its privatized railways, the biggest investment decisions are still reserved for government: the rails are publicly owned, the trains are commissioned and purchased by government, and a major high speed line project (HS2) can only be undertaken by government. Despite Hayek, smaller public sectors are not associated with more affluent economies: the expensive Nordic Social Democratic societies demonstrate this. (Offer and Söderberg 2019, 265-266)

Herbert Simon (NPW, 1978) pointed out that individuals could not cope with the computational challenges they faced. They did the best they could with what they had, which he called ‘bounded rationality’. The problem also appears in behavioural economics, where NPWs Allais, Selten, Kahneman, Smith, and Roth have all shown that real people diverge from the norms of rational choice, and that outcomes are therefore unlikely to scale up to ‘efficient’ equilibria. In a letter to the non-computability advocate Vela Velupillai, Simon spelled out the different degrees of cognitive capacity:

There are many levels of complexity in problems, and corresponding boundaries between them. Turing computability is an outer boundary, and as you show, any theory that requires more power than that surely is irrelevant to any useful definition of human rationality. A slightly stricter boundary is posed by computational complexity, especially in its common ‘worst case’ form. We cannot expect people (and/or computers) to find exact solutions for large problems in computationally complex domains. This still leaves us far beyond what people and computers actually CAN do. The next boundary, but one for which we have few results … is computational complexity for the ‘average case’, sometimes with an ‘almost everywhere’ loophole [that is, procedures that do not apply in all cases]. That begins to bring us closer to the realities of real-world and real-time computation. Finally, we get to the empirical boundary, measured by laboratory experiments on humans and by observation, of the level of complexity that humans actually can handle, with and without their computers, and—perhaps more important—what they actually do to solve problems that lie beyond this strict boundary even though they are within some of the broader limits. The latter is an important point for economics, because we humans spend most of our lives making decisions that are far beyond any of the levels of complexity we can handle exactly; and this is where … good-enough decisions take over. (Offer and Söderberg 2019, 266-267)
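The contrast Simon draws between exact optimization and ‘good-enough’ decision-making can be illustrated with a toy budget-allocation (knapsack-style) problem. The sketch below is not drawn from Simon or from Offer and Söderberg; the items and numbers are invented. The exhaustive search stands in for exact optimization, whose effort grows exponentially with the number of items, while the greedy rule stands in for a satisficing short-cut that is fast and usually close, but not guaranteed optimal.

```python
from itertools import combinations

# Invented items: (name, cost, value) -- a stylized budget-allocation problem.
items = [("a", 3, 7), ("b", 4, 9), ("c", 5, 12), ("d", 2, 3), ("e", 6, 14)]
BUDGET = 10

def exact(items, budget):
    """Exhaustive search over all 2^n subsets: optimal, but infeasible as n grows."""
    best, best_value = (), 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            cost = sum(c for _, c, _ in subset)
            value = sum(v for _, _, v in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return [name for name, _, _ in best], best_value

def good_enough(items, budget):
    """Greedy rule of thumb: take the best value-per-cost items that still fit."""
    chosen, spent, value = [], 0, 0
    for name, cost, val in sorted(items, key=lambda x: x[2] / x[1], reverse=True):
        if spent + cost <= budget:
            chosen, spent, value = chosen + [name], spent + cost, value + val
    return chosen, value

print(exact(items, BUDGET))        # (['b', 'e'], 23) -- optimal, exponential effort
print(good_enough(items, BUDGET))  # (['c', 'a', 'd'], 22) -- close, computed in a flash
```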

This problem was also acknowledged by Milton Friedman (NPW, 1976). Surprisingly for a Chicago economist, he conceded that optimizing was difficult. His solution was to proceed ‘as if’ the choice had been optimized, without specifying how (the example he gives is of the billiards player, who implicitly solves complicated problems in physics every time he makes a successful shot). Asymmetric information, at the core of bad faith economics, is partly a matter of inability to monitor even the moves of a collaborator or a counter-party. The new classical NPW economists (Lucas, Prescott, and Sargent) avoid the problem of computational complexity (and the difficulty of scaling up from heterogeneous individuals) by using a ‘representative agent’ to stand for the whole of the demand or supply side of the economy. Going back to where we started, ‘imaginary machines’, the reliance on models (that is, radically simplified mechanisms) arises from the difficulty of dealing with anything more complicated. (Offer and Söderberg 2019, 267)

All this is just another way of saying that on plausible assumptions, the market-clearing procedures at the heart of normative economics (that is, its quest for ‘efficiency’) cannot work like computers. Having failed in the test of classic analysis, theory fails the test of computability as well. This suggests that actual human choices are not modelled correctly by economic theory, but are made some other way, with as much calculation as can be mustered, but also with short-cuts, intuitions, and other strategies. This is not far-fetched. Humans do things beyond the reach of computers, like carry out an everyday conversation. Policy is not made by computers, nor by economists, but by imperfect politicians. Perhaps it is wrong to start with the individual—maybe equilibrium (such as it is) comes from the outside, from the relative stability of social conventions and institutions. This indeterminacy provides an analytical reason why understanding the economy needs to be pragmatic, pluralistic, and open to argument and evidence; an economic historian would say that we should embrace empirical complexity. Policy problems may be intractable to calculation, but most of them get resolved one way or another by the passage of time. History shows how. This may be taken as endorsing the pragmatism of Social Democracy, and of institutional and historical approaches which resemble the actual decision processes. (Offer and Söderberg 2019, 267-268)

If economics is not science, what should we make of it? Economics has to be regarded as being one voice among many, not superior to other sources of authority, but not inferior to them either. In that respect, it is like Social Democracy. It commands an array of techniques, the proverbial ‘toolkit’ which economists use to perform concrete evaluations, including many varieties of cost-benefit analysis. It has other large assets as well: a belief system that commands allegiance, passion, commitment, groupthink, and rhetoric. Its amorality attracts the powerful in business, finance, and politics. It indoctrinates millions every year in universities, and its graduates find ready work in think tanks, in government, and in business. The press is full of its advocates. As an ideology, economics may be resistant to argument and evidence, but it is not entirely immune to them. Its nominal allegiance to scientific procedure ensures that the discipline responds to empirical anomalies, albeit slowly, embracing new approaches and discarding some of those that don’t seem to work. (Offer and Söderberg 2019, 268, emphasis added)
