Philip Mirowski (University of Notre Dame)
Machines who Think vs. Machines that Sell
 
 

Chapter 8 of the book:

"Machine Dreams: Economics becomes a Cyborg Science"
 Cambridge University Press, forthcoming



first draft  - September 1999

We may trust ‘mechanical’ means of calculating or counting more than our memories. Why? – Need it be like this? I may have miscounted, but the machine, once constructed by us in such-and-such a way, cannot have miscounted. Must I adopt this point of view? – “Well, experience has taught us that calculating by machine is more trustworthy than by memory. It has taught us that our life goes smoother when we calculate by machines.” But must smoothness necessarily be our ideal (must it be our ideal to have everything wrapped in cellophane)? – (Wittgenstein, 1978, pp.212-3)

Once upon a time, a small cadre of dreamers came to share an aspiration to render the operations of the economy visible and comprehensible by comparing its configuration to that of rational mechanics. It was a simple and appealing vision of motions in a closed world of commodity space propelled towards an equilibrium of forces, with the forces representing the wants and desires of individual selves. Each and every agent was portrayed as a pinball wizard, deaf, dumb and blind to everyone else. Not everyone who sought to comprehend and tame the economy harbored this particular vision; nor was it uniformly dispersed throughout diverse cultures; but the more people were progressively trained in the natural sciences, the more this dream came to seem like second nature. Economics was therefore recast as the theory of a particularly simple kind of machine. This is very nicely illustrated by two exhibits on the same floor of the Science Museum in London -- in the center of the floor is A.W. Phillips' contraption of Perspex hydraulic pipes, with different colored liquids sloshing around a 'national economy'; and off in a side gallery is Charles Babbage's 'difference engine', a mill for dividing the labor of producing numbers. It is true that the 19th century neoclassical theorists of the so-called 'Marginalist revolution' fostered a more abstract and mathematical version of the machine dream, although certain individuals such as William Stanley Jevons and Irving Fisher were not averse to actually erecting their vision in ivory and steel and wood as well.

In this volume, it has been argued that something subtly profound and irreversible has happened to machine dreams sometime in the middle of the 20th century. Instigated by John von Neumann, and lavishly encouraged by the American military, the new generation of machine dreamers were weaned away from classical mechanics and made their acquaintance with a newer species of machine, the computer. Subsequently, the protagonists in economists’ dramas tended to look less like pinball wizards and came increasingly to resemble Duke Nukem instead. There were many twists and turns in how this transubstantiation was wrought; and one recurrent theme of this account has been the tension between an unwillingness on the part of economists to relinquish their prior fascination with classical mechanics, and the imperative to come to terms with the newer computer. After all, aren’t computers still made of metal and mineral, still subject to the same old rules of equilibria of classical forces as well? Must we really leave our familiar old rational mechanics behind? Gears grind and circuits flash, tradition and innovation clash, then become indistinct as software materializes out of hardware.

Indeed, one might suggest that by the end of the century, the embrace of the sharp-edged computer by the machine dreamers has nowhere yet been altogether wholehearted, indulgent, or complete. Rather, wave upon wave of computer metaphors keep welling up out of cybernetics, operations research, computer science, artificial intelligence, cognitive science and artificial life, and washing up and over the economics profession with varying periodicities, lags and phase shifts. The situation has been exacerbated by the historical fact that 'the computer' refers to no particular stable entity over our time frame. What had started out as a souped-up calculator-cum-tabulator grew under military imperatives to something closer to a real-time command-and-control device, complementary to the discipline of operations research. Yet, there was simultaneously the consideration that the business world pressed its own agenda upon the computer, and therefore we observe the machine being reconfigured as a search-and-sort symbol processor, monitoring time-sharing, surreptitiously collecting reams of data and supporting Web commerce. Under the demands of mass commercialization, the PC familiar to almost everyone has rather resembled the offspring of miscegenation between the typewriter and the television. And in the not-so-distant future, the computer threatens to become indistinguishable from the biological organism, with DNA performing various of the functions identified above, as well as many others more closely related to physiological processes. The biggest obstacle to answering the question, What is the impact of the computer upon economics?, is that the computer has not sat still long enough for us to draw a bead on the culprit.

Nevertheless, the sheer shape-shifting character of the New Model Machine has not altogether hampered our historical investigations. The saga of the computer has unexpectedly provided us with the scaffolding for an account of the constitution of the postwar economic orthodoxy. It is true, however, that we have only managed to cover a small subset of all the ways that the computer made itself felt within the postwar neoclassical economics profession. We have mostly hewn to the precept to keep our attention focused upon the ways in which the computer has recast the economic agent by enhancing his cyborg quotient, which has meant in practice paying close attention to the ways in which neoclassical microeconomics and mathematical practice in America have been transformed in the shift to a command-control-communications-information orientation, the C3I paradigm. There were many other manifestations of computer influence upon postwar economics which have suffered neglect in the present account: one might identify the all-too-obvious alliance between econometrics, Keynesianism and the spread of the computer in the immediate postwar period, rendering tractable all manner of national models and elaborate statistical calculations previously deemed inaccessible; likewise, the field of financial economics found its conditions of existence in the computer-enabled ability to manipulate vast reams of real-time data and use them to construct new synthetic financial instruments. The efflorescence of experimental economics starting in the 1970s could never have happened without the large-scale computerization of experimental protocols and the attendant standardization of practices and data-collection capabilities, which in turn made it available for export to a broad array of aspiring laboratories. The impact of Soviet cybernetics upon Soviet-era Marxism is a vast terra incognita. Regarded prosaically as a technology, the computer conjured up all sorts of novel activities and functions which could be brought fruitfully under the ambit of economic expertise.

However, in this final chapter, we shall reconsider why it is woefully insufficient to treat the computer merely as a technology, just another gadget in that bodacious 'box of tools' which the notoriously unhandy economists love to invoke. Economists, at least when they are not dreaming, still think that they live in a world in which inanimate objects are readily and obediently bent to the purposes of their makers; but their history discloses a different situation: their tools are amazingly lively, whereas their profiles of the human actors are distressingly reactive, if not inert. This encounter of the irresistible force with the immovable object need not inevitably result in standoff. As recourse to the computer as an amazingly flexible and adaptable prosthesis increases, the computer inadvertently feeds back upon the very definition of what it means to be rational, not to mention what it means to be human. With increased dependence upon the computer to carry out all manner of economic activities, it has redounded, and will continue to redound, upon the very meaning and referent of the term "economic" as well.
 
 

I. Where is the Computer Taking Us?

The core doctrines of the orthodoxy of neoclassical economics in the second half of the 20th century were never as stable as they have appeared to those credulous souls gleaning their economics from textbooks (or, sad to say, from standard works in the history of economics). The dominance of the Cowles version of Walrasian general equilibrium promoting an agent who looked like nothing so much as a structural econometrician of the 1960s gave way to the "rational expectations" approximation to information processing in the 1970s (Sent, 1998); and this, in turn, gave way to a "strategic revolution" in the 1980s, consisting primarily of dispensing with passive optimization for the rigors of the hermeneutics of suspicion after the manner of Nash game theory; coextensively, econometrics became increasingly displaced by experimental economics as the empirical procedure of choice for the avant garde; and in the 1990s, dissatisfaction with much of the accelerated obsolescence sweeping economic theory induced the appearance of a nouvelle vague associated with the Santa Fe Institute (see Mirowski, 1996) and often retailed under the rubric of "computational economics." One can imagine many alternative ways to account for these shifts in enthusiasm amongst the cognoscenti: some seek to preserve a maximum of continuity inherent in doctrines from one shift to the next, insisting upon some untouched hard core of neoclassical commitments; some simply revel in the pop-cultural ethos of new toys for new boys; others greet each new development as finally promising release from the conceptual tangle that strangled the previous neoclassical tradition. While perfectly understandable as the kind of spin that accompanies any promotional campaign in a consumer culture, these accounts are all irredeemably short-sighted.

This book takes a different tack. It seeks to frame each of these transformations as halting, incomplete accommodations to a larger complex of cyborg innovations which already appear nearly inexorable, extending well into the next millennium. To frame the thesis with maximum irony, the serried ranks of the microeconomic orthodoxy have been imperfectly shadowing the trajectory of John von Neumann's own ideas about the most promising prospects for the development of formal economics, from their early fascination with fixed points and the linear expanding economy model, through game theory (and the red herring of expected utility), and finally (as we have been foreshadowing in chapter 7) coming to invest its greatest hopes in the theory of automata. This scientific titan who could only spare a vanishing fraction of his intellectual efforts upon a 'science' he regarded as pitifully weak and underdeveloped, has somehow ended up as the single most important figure in the history of 20th century economics. This mathematician who held neoclassical theory in utter contempt throughout his own life has nonetheless so bewitched the neoclassical economists that they find themselves dreaming his formal models, and imperiously claiming them for their own. This polymath who prognosticated that "science and technology would shift from a past emphasis on subjects of motion, force and energy to a future emphasis on subjects of communications, organization, programming and control" was spot on the money. The days of neoclassical economics as proto-energetics (Mirowski, 1989) are indeed numbered. Any lingering resemblances should be chalked up to nostalgia, not nomology.

But I would leave myself vulnerable to a well-deserved charge of inconsistency if I opted to frame this narrative in such a purely personalized manner. It was not the person of John von Neumann that could mesmerize generations of neoclassical economists to mince about like marionettes. He may at some points have resembled Darth Vader, but he could never have been mistaken for Geppetto. (The intervention bringing together Cowles with the military and RAND, however, stands out in the history as a singular exception.) Everything written in the previous chapter about the deliquescence of individual selves as self-sufficient protagonists in our postmodern world would argue against that story line. Rather, it is the computer (or more correctly, the computer-plus-human cyborg) which stalks the dreams of each succeeding generation of economic sleepwalkers; and it is the computer which continues to exercise dormitive sway over economists so swelled with drowsy pride over their innovative accomplishments. Without that protean machine, it would have been highly unlikely that the history of neoclassical economic theory would ever have taken the course that it did after WWII. In the absence of that thing-without-an-essence, cyborgs would not have been able to infiltrate economics and the other social sciences in successive waves. Without the computer, it would still be obligatory to believe the mantra that economics really was about "the allocation of scarce resources to given ends" and not, as it now stands, obsessed with the conceptualization of the economic entity as an information processor.

Previous chapters have been almost entirely concerned with relating the narrative history of this epoch-making departure; now the opportunity has arrived in this final chapter to briefly face forward into the next century, and to speculate about the ways in which this overall trend might be extrapolated. The most concise way to do this is to ask: What does it mean to claim there either now exists or shortly will materialize a coherent “computational economics”? The task is neither as straightforward nor as technocratically transparent as it might initially seem. After all, von Neumann is dead; long live von Neumann?

Admittedly, there are now many journals published with titles like Computational Economics and Netnomics, and there are courses with these titles offered at many universities. There is a Society for Computational Economics and a "Society for Social Interaction, Economics and Computation" with annual conclaves, awards, honorific offices and all the other trappings of academic professionalism. There is also a vast array of survey articles and books from many different perspectives. There exist the conferences and summer schools associated with the Santa Fe Institute, which is blessed with a history that is truly fascinating, but which we must regrettably pass by in this venue. Yet, amidst this beehive of cyberactivity, what seems altogether notable by its absence is any comprehensive map or outline of what 'computational economics' now is or could ultimately aspire to become. Indeed, in my experience, many of those engaged in activities falling under this rubric rather enjoy their (sometimes undeserved) reputation as 'nerds' who can be distinguished from the more general populace only by their high-tech obsessions. The last thing they would ever be caught dead doing would be dabbling in something as un-geek and un-chic as history or philosophy. Yet, one shouldn't chalk this up simply to bland self-confidence or techno-hubris: there persists the problem that the 'computer' as an artifact has been changing so rapidly that to engage in speculation about the impact of the computer on economics is to compound the ineluctable with the ineffable.

Thus, it falls to the present author to proffer some suggestions about what might just be the once and future impact of the computer upon how we think about the economy. Think of it, if you will, as what a cyborg does after a long hard day of information processing: time to go into sleep mode, and access a few machine dreams.
 
 

II. Five Alternative Scenarios for the Future of Computational Economics

Just as all really good science fiction is never more than a tentative and inadequately disguised extrapolation of what is recognizably present experience, prognostications about the future of economics should possess a firm basis in what is already recognizably ensconced on the horizon. William Gibson’s Neuromancer, far from describing an alien posthuman world, bore the unmistakable shock of recognition to such a degree in 1984 that it has lately begun to curl with age, if only around the edges. To aspire to a similar degree of prognosis, I shall proceed to describe four versions of computational economics that are firmly grounded in the existing literature, and then give some reasons for thinking that they have not yet been adequately thought through, much less provided with coherent justifications. The fifth version will prove an epsilon more insubstantial, but will nevertheless be firmly grounded in the previous narrative, since it most closely resembles the ambitions of John von Neumann for economics at the end of his career. Once delineated and distinguished, it will be left for you, the reader, to assess the odds and place your bets on the next millennium’s Cyborg Hambletonian.
 
 

1] Judd’s Revenge

There is a particular breed of economist afoot in America that thinks the economics they learned back in graduate school really is a Theory of Everything, and can compensate for any deficiency of concrete knowledge of history, other disciplines, other people, or indeed anything else about the world they putatively live in. For them, the computer, like everything from falling birth rates to the fall of the Wall, therefore stands as just one more confirmation of their poverty-stricken world view:

Being economists, we believe the evolution of practice in economics will follow the laws of economics and their implications for the allocation of scarce resources... In the recent past, the theorem-proving mode of theoretical analysis was the efficient method; computers were far less powerful and computational methods far less efficient. That is all changing rapidly. In many cases the cost of computation is dropping rapidly relative to the human cost of theorem-proving.... The clear implication of standard economic theory is that the computational modes of theoretical analysis will become more common, dominating theorem-proving in many cases. (Judd, 1997, p.939)

Here the computer isn't really capable of transforming anything, since it is just a glorified calculator. Ever-cheaper number crunchers will only diminish the number of theorems proved in the AER and JET at the margin. Who ever thought that a mere monkeywrench could derail the magic of the market in its ongoing revelation of the Rational? We will call this narrow vision of the future "Judd's Revenge".

This version of the future of computational economics is most congenial for a neoclassical economist to imagine, mainly because it combines the absolute minimum of concessions to the transformative power of the computer upon economic thought with the maximum commitment to the maxim that tools, not ideas, are the coin of the realm in economics. In brief, this position is built around the premise that whatever sanctioned neoclassical model one might choose to work with, its prognostications will be marginally improved by rendering it explicit as a computer program and then having the computer help the analyst calculate specific numerical results of interest dictated by the prior model, be they 'equilibria' or boundary conditions or time paths of state variables. One must be very careful not to automatically equate the activities of this school with the building and calibration of so-called "computable general equilibrium models" (see below), if only because these latter models tend to migrate rather indiscriminately between the first three categories of computational economics herein enumerated. (The content/tool distinction turns out to be hard to maintain in practice, even in this instance.) Nevertheless, in rhetoric if not in technical practice, a computable Walrasian general equilibrium model exemplifies the style of theorizing most commonly favored by this cadre. The standard justification is that most formal neoclassical models are either too large (in principle), too nonlinear or too 'complex' to be solved analytically. The computer is conveniently trundled onto the stage as the tool which will unproblematically help us to do more of what it was that economists were doing at Cowles in the 1960s and 1970s -- mostly optimization of one flavor or another -- but not really to do it any differently. The computer is closer kin to the improved buggy whip than to the horseless carriage. This is economics as if John von Neumann (and most of the events recounted in this book) had never happened. The premier exponent of this version is Kenneth Judd (1997, 1998).
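To make concrete what this "computational mode" amounts to in practice, consider the following minimal sketch (my own illustration with assumed parameter values, and emphatically not Judd's code): a stock textbook growth model is written out as a program, and the machine grinds out the steady state by bisection rather than by a blackboard derivation.

```python
# A minimal sketch of numerically solving a textbook model: the steady-state
# capital stock of a deterministic growth model, where the Euler equation at
# the steady state reduces to beta * (alpha * k**(alpha - 1) + 1 - delta) = 1.
# Parameter values are illustrative assumptions.
alpha, beta, delta = 0.36, 0.96, 0.08

def euler_gap(k):
    """Residual of the steady-state Euler equation; zero at the solution."""
    return beta * (alpha * k ** (alpha - 1) + 1 - delta) - 1

lo, hi = 1e-6, 100.0          # bracket known to contain the root
while hi - lo > 1e-10:        # plain bisection: halve the bracket each pass
    mid = 0.5 * (lo + hi)
    if euler_gap(mid) > 0:    # marginal return still too high: raise k
        lo = mid
    else:
        hi = mid

k_star = 0.5 * (lo + hi)
print("steady-state capital:", round(k_star, 6))
# Closed form for comparison: k* = (alpha / (1/beta - 1 + delta)) ** (1/(1-alpha))
print("closed form:         ", round((alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha)), 6))
```

Nothing in the exercise disturbs the prior model; the computer merely supplies the arithmetic, which is exactly the point of this scenario.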

One can quite readily appreciate the attractions of such a position for those whose primary concern is to maintain the appearance of continuity at any cost. The problem persists, however, that this version of computational economics must repress so much that has happened that it is not at all clear why anyone should think they could make some sort of interesting intellectual career out of such paltry materials. Not only are most of the postwar changes that have happened in the microeconomic orthodoxy rendered so featureless as to be invisible, so that any residual idea of cumulative progress is left high and dry, but furthermore, the only source of intellectual excitement available to this sort of economist is the thrill of coming up with a niftier (faster, cheaper, cleaner) algorithm than their peers. This, it should go without saying, is software engineering and not what most economists think of as doing economics. Other sorts of ancillary functions in the history of economics – here one thinks of national income accounts generation, or enterprise accounting, or the development of econometric software – have rapidly been relegated to subordinate positions and farmed out as the basis for distinct separate professional identities. Advocates such as Judd have never seen fit to explain why one should not, at minimum, expect the same to happen with the objects of their enthusiasm. It is difficult to regard this version of computational economics as anything more than a briefly passing phase.
 
 

2] Lewis Redux

Kenneth Judd betrays no outward sign of being aware that the theory of computation could be used to make theoretical criticisms of the practices and concepts of neoclassical microeconomics. Others (in increasing order of skepticism), from John Rust to Marcel Richter to Kumaraswamy Velupillai, are vastly more sensitive to this possibility. This awareness, as we observed in chapter 6 above, dates back to Rabin's (1957) critique of game theory, but reached a new level of sophistication with the work of Alain Lewis. At various junctures, Kenneth Arrow has sounded as though he endorsed this version of computational economics: "The next step in analysis... is a more consistent assumption of computability in the formulation of economic hypotheses. This is likely to have its own difficulties because, of course, not everything is computable, and there will be in this sense an inherently unpredictable element in rational behavior" (1986, p.S398). The touchstone of this version of computational economics is the thesis that the non-recursiveness of most neoclassical models does stand as a fundamental flaw in their construction, and that the flaw is serious enough to warrant reconstruction of the Walrasian approach from the ground up. Lewis' letter to Debreu, quoted above in chapter 6, remains the most elegant summary of this position:

The cost of such an effectively realizable model is the poverty of combinatorial mathematics in its ability to express relative orders of magnitude between those sorts of variables, that, for the sake of paradigm, one would like to assume to be continuous. In exact analogy to the nonstandard models of arithmetic, the continuous models of Walrasian general equilibrium pay for the use of continuity, and the 'smooth' concepts formulated therein, with a certain non-effectiveness, that can be made precise recursion-theoretically, in the realization of the prescriptions of the consequences of such models. By the way, if ever you are interested in an analysis of the effective computability of the rational expectations models that are all the rage in some circles, it would not be hard to make the case that such models are irrational computationally... When I first obtained the result for choice functions, I thought my next task would be the reformulation of The Theory of Value in the framework of recursive analysis. I now have second thoughts about the use of recursive analysis, but still feel that a reformulation of the foundations of neoclassical mathematical economics in terms that are purely combinatorial in nature – i.e. totally finite models, would be a useful exercize [sic] model-theoretically. If successful, then one could 'add on' more structure to just the point where the effectiveness goes away from the models. Thus we ourselves could effectively locate the point of demarcation between those models that are realizable recursively and those which are not.

Whereas there have been some isolated attempts to recast the Walrasian system (or individual versions of Nash games) in recursive format, it must be said that, by and large, they have not been accorded much in the way of serious attention within the economics profession; moreover, Lewis's suggestion of isolating the exact threshold where one passes over from recursivity to non-recursiveness has attracted no interest whatsoever. The situation in which this version of computational economics found itself at the end of the 1990s is all the more curious for that.

First, what we tend to observe is that most of those in the forefront of this movement – one thinks of Arrow, for instance – never actually delve very deeply into the uncomputable bedrock of rational choice theory. While they might acknowledge some undecidability results at a relatively higher level of the theory – say, at the level of collective choice and the aggregation of preferences – they never concede that it may be deeply impossible for anyone even to possess the capacity to construct their own neoclassical preference function. Instead, orthodox figures such as Arrow or Rust tend to speculate in vague ways about how some future developments – maybe technological, maybe evolutionary – elsewhere in the sciences will someday break the dreaded deadlock of noncomputability for orthodox economists. Beyond the rather ineffectual expedient of wishing for pie in the sky, this literature is itself misleading, for as we noted above in chapter 2, Turing noncomputability is a logical proposition and not predicated upon the unavailability of some scarce resource, and therefore it cannot be offset by any future technological developments, no matter how unforeseeable or unanticipated. In a phrase, someday in an advanced technological future it is conceivable that many computational problems will become less intractable, but it is far less likely that the formally uncomputable will be rendered computable. The careless conflation of intractability (NP-complete and NP-hard problems) with noncomputability (undecidability on a Turing machine) under some generic rubric of computational complexity is one of the very bad habits prevalent in this literature.
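The distinction deserves a concrete illustration. The following sketch (my own, purely illustrative) places a problem that is computable but expensive alongside one that no machine of any speed can decide:

```python
# A minimal sketch of the distinction drawn above: intractability is a matter
# of cost; noncomputability is a matter of impossibility.
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability of a CNF formula by exhaustive search.
    Clauses are lists of signed integers, e.g. [1, -2] means (x1 OR NOT x2).
    This always terminates, but in the worst case takes 2**n_vars steps:
    computable, yet (as far as anyone knows) intractable for large n."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

print(brute_force_sat([[1, -2], [-1, 2], [2]], 2))  # True: x1 = x2 = True works

# By contrast, no amount of hardware buys a general halting decider: a function
# halts(program, input) returning a correct verdict in every case cannot exist
# at all (Turing, 1936). Faster machines shrink the first problem; they leave
# the second untouched.
```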

Second, it seems that another bad habit of this literature is to blithely refer to Herbert Scarf's (1973) algorithm for the computation of Walrasian general equilibria as if it had already successfully carried out the project outlined by Lewis above. This is where the word "computable" in the phrase "computable general equilibria" has fostered all manner of unfortunate confusions. What Scarf actually did was to devise an algorithm for the approximation of a fixed point of a continuous mapping of the simplex S^n into itself under certain limited circumstances. Appeal is then commonly made to Uzawa's (1962) result on the equivalence between the theorem of existence of equilibrium prices for an Arrow-Debreu economy and Brouwer's fixed point theorem on a vector field of excess demands. There are numerous slippages between this limited mathematical result and the empirical uses to which any such algorithm is put. First, there is the insuperable problem that the Sonnenschein-Mantel-Debreu results suggest that the fully general Walrasian system imposes almost no restrictions upon observable excess demand functions (other than Walras' Law and homogeneity of degree zero of demand functions), so the fact that you can construct some correspondence between a given set of data and some arbitrary Walrasian model chosen from a vast array of candidates is not too surprising. But more to the point, Scarf's algorithm does not confront the pervasiveness of noncomputability of the Walrasian model so much as simply work around it. Suppose the fixed point we search for turns out to be a non-recursive real number – that is, it falls afoul of Lewis' proof. We might, for instance, try to find a recursive real approximating the original point and construct an exact algorithm guaranteed to calculate it; or alternatively, we might restrict the vector field of excess demands to be defined over the computable reals, and then use an algorithm to determine an exact solution. Scarf's algorithm does neither: instead, it restricts the admissible excess demand functions a priori to one class of functions for which there are exact algorithms to compute equilibrium solutions. In effect, Scarf packs away computability considerations into unmotivated restrictions upon the specifications of the supposedly 'general' functions allowed by Walrasian theory and the arbitrary decisions involved in the specification of a "good enough" approximation. It is as if one were given the task of locating any given point in California to within an inch, and responded proudly that one could get within ten blocks of any place in Fresno. Although it is never openly discussed, Scarf must achieve his objective by means of indirection and circumlocution, because both of the other routes suggested above run into significant undecidability obstacles. Scarf, like Cowles in general, opted to deal with fundamental paradoxes of Bourbakism by looking in the other direction. In saying this, I make no attempt to indict him for duplicity: indeed, he was just importing some standard OR programming techniques back into neoclassical economics: another Cowles predilection which had paid off handsomely in the past.
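To see how modest the achievement is, consider a minimal sketch of what "computing" an equilibrium comes down to in practice (my own illustration; the toy economy, the fixed-point map and the tolerance are assumptions, and the procedure is a naive iteration rather than Scarf's simplicial algorithm). Everything hinges on an arbitrarily stipulated stopping rule:

```python
# A minimal sketch: iterate a continuous self-map of the price simplex whose
# fixed points are Walrasian equilibria of a two-good, two-consumer
# Cobb-Douglas exchange economy, stopping at a stipulated tolerance. The
# iteration only ever delivers an approximate fixed point, never the exact
# (possibly non-recursive) real.
import numpy as np

shares = np.array([[0.6, 0.4],    # consumer 1's Cobb-Douglas expenditure shares
                   [0.3, 0.7]])   # consumer 2's
endow = np.array([[1.0, 0.0],     # consumer 1's endowment of goods 1 and 2
                  [0.0, 1.0]])    # consumer 2's

def excess_demand(p):
    income = endow @ p                         # value of each endowment at prices p
    demand = (shares * income[:, None]) / p    # Cobb-Douglas demands
    return demand.sum(axis=0) - endow.sum(axis=0)

def brouwer_map(p):
    """Continuous self-map of the simplex whose fixed points are equilibria."""
    z = np.maximum(excess_demand(p), 0.0)
    return (p + z) / (1.0 + z.sum())

p = np.array([0.5, 0.5])
for _ in range(10_000):                        # convergence is not guaranteed in
    p_next = brouwer_map(p)                    # general; Scarf's simplicial method
    if np.max(np.abs(p_next - p)) < 1e-12:     # exists precisely to supply a
        break                                  # guaranteed approximation
    p = p_next

print("approximate equilibrium prices:", p)    # roughly proportional to (3, 4)
```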
 
 

3] Simulatin’ Simon

The previous two scenarios are relatively easy to summarize because they take for granted that nothing substantive in the neoclassical orthodoxy need be relinquished in order for economists to find an accommodation with the computer. In a sense, they assume the position that, because classical mechanics was good enough for dear old Dad, it's good enough for me. By contrast, from here on out, the world should be divided into those who still take 19th century physics for their exemplar of a supremely successful scientific method, and those who believe that profound changes in the relative standings of the natural sciences in the later 20th century make it imperative to look to biology, and in particular, the theory of evolution, for some cues as to how to come to a rapprochement with the computer. These latter three versions of computational economics all find it salutary to make reference in one way or another to evolution as part of their program of reconciliation, although each individual case may hew to a version of evolution which would be regarded as idiosyncratic and unreliable by its economic competitors, not to mention by real biologists.

For many economists, the fugleman figure of Herbert Simon best represents the progressive future of computational economics. As we saw in the previous chapter, Simon's quest is to avoid explicit consideration of the formal theory of computation, and instead to build computer simulations of economic and mental phenomena, largely avoiding prior neoclassical models. This cannot be chalked up to convenience or incapacity on his part, since it is the outcome of a principled stance predicated upon his belief in bounded rationality: there is only a certain "middle range" of observed phenomena of a particular complexity which it is even possible for us mere mortals to comprehend; and since reality is modular, we might as well simulate these accessible discrete subsystems. Because the computer is, in his estimation, first and foremost a symbol processor, Simon believes it constitutes the paradigmatic simulation machine, both capturing our own limitations and showing us a more efficient means for doing what we have done all along. Some day, these modular algorithms may be loosely coupled together to form a massive theory of the economy (or the brain) on some giant megacomputer; but in the interim, this simulation activity really is the best we can do, and is an end in itself. This humble prescription dovetails with Simon's own conception of biological evolution as piecemeal engineering, since he thinks that is what happens in Nature as well as in the world of intellectual discourse.

A moment's meditation will reveal just how influential Simon's vision has been for computational economics (although individual economists may be unfamiliar with the philosophical underpinnings). Computer simulations account for the bulk of all articles making reference to computation appearing in postwar economics journals. Simon's own "behavioral theory of the firm" (Cyert & March, 1963) was an early attempt to link computer simulations to empirical study of firm activities. When Richard Nelson and Sidney Winter (1982) heralded their novel "evolutionary economics", it consisted primarily of computer simulations of firms not so very far removed from those of Simon. Whenever economists make reference to cellular automata exercises, as in the case of Schelling (1969), they are in fact engaging in wholesale simulation, and not addressing the formal theory of automata. Simulation is a technique with a long-established history in operations research and organizational studies (Prietula et al, 1998). The popularity of the Santa Fe Institute with its advocacy of the nascent field of Artificial Life has only enhanced the credibility of various economic simulation exercises emanating from that quarter. Even the staid Brookings Institution felt it had to jump aboard the simulation bandwagon, or else be left behind in the slums of cyberspace (Epstein & Axtell, 1996).
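For a flavor of what such exercises involve, here is a minimal sketch of a Schelling-style neighborhood simulation (my own toy illustration; the grid size, tolerance threshold and movement rule are assumptions, not Schelling's 1969 specification):

```python
# A minimal sketch of a Schelling-style segregation simulation on a toroidal
# grid: two kinds of agents plus empty cells; an unhappy agent moves to a
# random empty cell. Parameters are illustrative.
import random

SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 20, 0.2, 0.5, 30
random.seed(0)

cells = (['A', 'B'] * (SIZE * SIZE))[:int(SIZE * SIZE * (1 - EMPTY_FRAC))]
cells += [None] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """An occupied cell is unhappy if fewer than THRESHOLD of its occupied
    neighbours share its type."""
    kind = grid[r][c]
    neigh = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neigh if n is not None]
    return occupied and sum(n == kind for n in occupied) / len(occupied) < THRESHOLD

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for (r, c) in movers:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))

contented = sum(not unhappy(r, c) for r in range(SIZE) for c in range(SIZE)
                if grid[r][c] is not None)
print("contented agents after", STEPS, "rounds:", contented)
```

Nothing in the code makes contact with the formal theory of automata; it is simulation pure and simple, which is precisely Simon's point and precisely the limitation discussed below.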

Computers do foster a certain cultural atmosphere in which simulations become much more common and therefore come to seem unremarkable, a case made by thinkers from Baudrillard (1994) to Edwards (1996) to Webster (1995). But once one gets over the frisson of unreality, there still remains the nagging problem of the evaluation of the efficacy and quality of simulation in the conduct of scientific inquiry. Does the computer simulation differ in some substantial way from the process of mathematical abstraction when one constructs a 'model' of a phenomenon? For all Simon's evocative appeal to his prospective "sciences of the artificial", he does display a predisposition to conflate the two distinct activities in order to justify computer simulation. An interesting alternative to Simon's own justification can be found in Galison (1996). In that paper, he argues that early computer simulations in the construction of the atomic and hydrogen bombs were first motivated as bearing a close family resemblance to actual experiments, but experiments where controls could be more thoroughly imposed. Over time, however, neither the mathematicians nor the bench experimentalists fully accorded computer simulations complete legitimacy within their own traditions, regarding them as too far-fetched, so a third category of specialization grew up, with its own lore and its own expertise, serving as a species of "trading zone" (in Galison's terminology) which mediated some research interactions between mathematical theorists and particle experimentalists.

Whatever the relevance and aptness of Galison's story for physics, it does appear dubious when applied to economics. First, contemporary advocates of economic simulation don't seem to mediate much of any interaction between theorists and empiricists, at least in orthodox precincts. Not only is the division of labor substantially less pronounced in economics than in physics, but the lines of communication between diverse specialists are more sharply attenuated. Furthermore, simulation practitioners in economics have a lot to learn when it comes to protocols and commitments to reporting the range of simulations conducted, as well as following standard procedures developed in physics and elsewhere for evaluating the robustness and uncertainty attached to any given simulation. Forty years on, the first complaint of a member of the audience for an economic simulation is: Why didn't you report that variant run of your simulations? Where is the sensitivity analysis? How many simulations does it take to make an argument? One finds that such specific criticisms are often parried by loose instrumentalist notions, such as, "we interpret the question, can you explain it? as asking, can you grow it?" (Epstein & Axtell, 1996, p.177). On those grounds, there would never have been any pressing societal need for molecular biology, much less athlete's foot ointment.
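What such reporting might look like, at a bare minimum, is sketched below (a toy of my own devising; the "market" inside it is deliberately trivial and serves only to show replication over seeds and over a swept parameter, with the spread reported alongside the mean):

```python
# A minimal sketch of the reporting practice demanded above: replicate a
# stochastic simulation over many seeds and a swept parameter, and report the
# spread of outcomes rather than a single run.
import random, statistics

def toy_market_efficiency(n_traders, noise, seed):
    """One run of a toy random-valuation market: buyers and sellers trade
    whenever noisy reported values still show a gain. Returns the share of
    potential surplus actually realized."""
    rng = random.Random(seed)
    buyers = sorted(rng.random() for _ in range(n_traders))
    sellers = sorted(rng.random() for _ in range(n_traders))
    potential = sum(max(b - s, 0) for b, s in zip(reversed(buyers), sellers))
    realized = 0.0
    for b, s in zip(reversed(buyers), sellers):
        if b * (1 - rng.uniform(0, noise)) > s * (1 + rng.uniform(0, noise)):
            realized += max(b - s, 0)
    return realized / potential if potential else 1.0

for noise in (0.0, 0.1, 0.3):
    runs = [toy_market_efficiency(50, noise, seed) for seed in range(200)]
    print(f"noise={noise:.1f}: mean efficiency {statistics.mean(runs):.3f} "
          f"+/- {statistics.stdev(runs):.3f} over {len(runs)} runs")
```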

I would like to suggest a different set of criteria for the evaluation of simulations in economics. To set the record straight, computer simulations cannot and will not be banished from economics. Simulations will always be an important accompaniment to the spread of the cyborg sciences. Indeed, as we have argued, they are an indispensable component of the program, since they serve to blur the boundaries between digital and worldly phenomena. However, simulations will only be dependably productive in economics when they have been developed and transformed from the status of representations to the status of technologies. We have already witnessed this sequence of events more than once in this volume. For instance, early simulations of the staffing of air defense stations were transformed into training protocols for enlisted recruits; and then, they became testbeds upon which further technological scenarios could be played out in preparation for choice of prospective R&D trajectories. Simulated control exercises became templates for further automation of actual command and control functions. They also provided the inspiration for an entirely new class of technical developments later dubbed 'artificial intelligence' in Simon's own research. Or again, in the previous chapter, we observed simulated play of games by automata transmute into the design of 'autonomous artificial agents' concocted to conduct real transactions over the Internet. The lesson of these examples is that simulations become fruitful when they are developed and criticized to the point that they can become attached in a subordinate capacity to some other activity which is not itself a simulation. To put it bluntly, outside of some future prospective consumer markets for Sony Playstations and Sega virtual reality boxes, simulations do not stand on their own as intellectual exercises or compelling catechesis, or at least not without a well-developed history of professional specialization, such as that found in particle physics. This is where Simon and his minions have regrettably stopped well short of realizing the full potential of computational economics.
 
 

4] Dennett’s Dangerous Idea

The most common popular conception of computers at the end of the century (as Turing so accurately predicted) is of a machine who thinks. Because the computer so readily trespasses upon the self-image of man as the thinking animal, it has become equally commonplace to believe that the mind is nothing more than a machine; that is, that it operates like the computer. Such deep convictions cannot be adequately excavated and dissected in this venue; but one must concede that this ubiquitous package of cybernetic preconceptions has profound implications for what a 'computational economics' may come to signify in the near future. Although Herbert Simon is considered one of the progenitors of the field of Artificial Intelligence, it is of utmost importance to understand that he neither promotes a global unified computational model of the mind, nor does he regard the neoclassical economic model as a serious or fit candidate for such a mental model. Others, of course, have rushed in to fill this particular void. There is a whole raft of self-styled 'theorists' – although scant few empirical cognitive scientists among them – who proclaim that it is possible to appropriate some algorithms from Artificial Intelligence, combine them with a particularly tendentious understanding of the theory of evolution, and arrive at a grand Theory of Everything, all to the ultimate purpose of maintaining that all human endeavor is constrained maximization 'all the way down'. One infallible earmark of this predilection is an unaccountable enthusiasm for the writings of Richard Dawkins. The theory of rational choice (perhaps simple optimization, perhaps game theory) is unselfconsciously treated as the very paradigm of information processing for biological organisms and machines; consequently both computers and humans are just a meme's way of making another meme. Although one can find this hyperphysical sociobiologized version of economics in the works of a broad range of economists from Jack Hirshleifer (1977; 1978) to Kenneth Binmore (1998b), perhaps the most comprehensive statement of the approach can be found in the popular book by the philosopher Daniel Dennett, Darwin's Dangerous Idea (1995).

Dennett may not himself become embroiled in much in the way of explicit economic prognostications, but that does not preclude him from retailing a certain specific conception of the economic as Natural common sense. “So there turn out to be general principles of practical reasoning (including, in more modern dress, cost-benefit analysis), that can be relied upon to impose themselves on all life forms anywhere” (1995, p.132). “The perspective of game-playing is ubiquitous in adaptationism, where mathematical game theory has played a growing role” (1995, p.252). “Replay the tape a thousand times, and Good Tricks will be found again and again, by one lineage or another” (p.308). [All original italics.] However, once one takes the death-defying leap with Dennett and stipulates that evolution is everywhere and always algorithmic, and that memes can soar beyond the surly bounds of bodies, then one can readily extrapolate that the designated task of the economist is to explore how the neoclassical instantiation of rational economic man ‘solves’ all the various optimization problems which confront him in everyday economic experience. Neoclassical economics is cozily reabsorbed into a Unified Science which would warm the cockles of a Viennese logical positivist.

The reader may enter a demurrer: Isn't this version of computational economics really just the same as option [2] above? Indeed not: the drama is in the details. In Lewis Redux, the analyst sets out from a standard neoclassical model and subjects it to an 'effectiveness' audit using the theory of computation. Here, by contrast, the analyst starts out with what she considers to be a plausible characterization of the cognitive states of the agent, usually co-opted from some recent enthusiasm in a trendy corner of Artificial Intelligence, and rejoices to find that neoclassical results can be obtained from a machine-like elaboration of agent states, perhaps with a dollop of 'evolution' thrown into the pot. The literature on finite automata playing repeated games was one manifestation of this trend in economics (Kalai, 1990); and the recent enthusiasm about what has been dubbed "evolutionary game theory" (Mailath, 1998; Samuelson, 1997; Weibull, 1997) is another. Much of what has come to be called "behavioral economics", in either its experimental (Camerer, 1997) or analytical (Rubinstein, 1998) variants, also qualifies. It is becoming the technique of choice at the economics program at the Santa Fe Institute (Lane etc). More baroque manifestations opt for appropriation of some new strain of artificial intelligence in order to construct economic models of agents, be it genetic algorithms (Marimon et al, 1990; Arifovic), neural nets, or fuzzy sets. The key to understanding this literature is to note that once 'algorithmic reasoning' attains the enviable state of ontological promiscuity, then any arbitrary configuration of computers is fair game for economic appropriation, as long as it eventually arrives at what is deemed to be the 'right' answer. The distinctive move in this tradition is to make numerous references to agent mental operations as being roughly similar to some aspect of what computers are thought to do, while simultaneously and studiously avoiding any reference to computational theories: Computers, computers everywhere, but never a stop to think. The rationale behind this awkward configuration of discourse should by now have become abundantly apparent: no one wants to openly confront the noncomputability of basic neoclassical concepts.
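What the finite-automata strand actually looks like in the small can be conveyed by a minimal sketch (my own illustration, in the spirit of the Moore-machine literature cited above, with arbitrary payoffs): strategies in a repeated Prisoner's Dilemma are represented as finite-state machines and played against one another.

```python
# A minimal sketch of finite automata playing a repeated Prisoner's Dilemma.
# Payoffs are the usual illustrative values; strategies are Moore machines.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# A machine is (initial state, {state: action}, {(state, opponent action): next state}).
TIT_FOR_TAT = ('c', {'c': 'C', 'd': 'D'},
               {('c', 'C'): 'c', ('c', 'D'): 'd', ('d', 'C'): 'c', ('d', 'D'): 'd'})
GRIM        = ('c', {'c': 'C', 'd': 'D'},
               {('c', 'C'): 'c', ('c', 'D'): 'd', ('d', 'C'): 'd', ('d', 'D'): 'd'})
ALWAYS_D    = ('d', {'d': 'D'}, {('d', 'C'): 'd', ('d', 'D'): 'd'})

def play(m1, m2, rounds=100):
    """Run two Moore machines against each other; return average payoffs."""
    (s1, out1, tr1), (s2, out2, tr2) = m1, m2
    total1 = total2 = 0
    for _ in range(rounds):
        a1, a2 = out1[s1], out2[s2]
        p1, p2 = PAYOFF[(a1, a2)]
        total1, total2 = total1 + p1, total2 + p2
        s1, s2 = tr1[(s1, a2)], tr2[(s2, a1)]
    return total1 / rounds, total2 / rounds

print("TFT vs GRIM:    ", play(TIT_FOR_TAT, GRIM))      # mutual cooperation
print("TFT vs ALWAYS_D:", play(TIT_FOR_TAT, ALWAYS_D))  # one exploitation, then (1,1)
```

Note how the theory of computation enters here only as a source of convenient agent architectures, never as a constraint on the solution concepts being rationalized; that is the bait and switch at issue.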

There are one or two reasons, over and above crude recourse to bait-and-switch tactics, for thinking that this brand of computational economics probably does not possess a bright future or real staying power. One drawback is the palpable ambivalence on the part of most economists about forging an alliance with Artificial Intelligence in the era of its chastened retreat from prior extremes of shameless hubris and undelivered promises. AI has lost the knack of blinding people with science, at least for now. Economists have not historically been notably willing to ally themselves with crippled or flagging research programs in the natural sciences; they have been predictably dedicated followers of fashion. This must be compounded with the fact that prominent figures in Artificial Intelligence, such as Simon and Minsky, have not been all that favorably inclined towards neoclassical economics. Another drawback is that most orthodox economists' level of faithfulness to the formal requirements of a theory of evolution is downright louche. For instance: To what economic phenomenon does the indispensable 'replicator dynamics' refer in evolutionary game theory? In exercises with genetic algorithms such as those found in Sargent (1993), do the individual strings of code refer to different ideas in the mind of a single agent, or is the pruning and winnowing and recombination happening in some kind of 'group mind'? (Neither seems entirely correct.) Is anyone really willing to attest to the existence of any specific 'meme,' so that we would know one when and if we saw it? And then there is the dour observation that exercises in slavish imitation of AI and ALife have been produced at Santa Fe and elsewhere for more than two decades, and nothing much has come of them. But the bedrock objection can be stated with brutal simplicity: How likely is it that any economists will ever make any real or lasting contribution to cognitive science? And let's be clear about this: we are talking here about people trained in the standard graduate economics curriculum. I have repeatedly posed this question to all types of audiences, running the gamut of all proportions of neoclassical skeptics and true believers, and not once have I ever encountered someone who was willing to testify in favor of the brave prospect of economists as the budding cognitive scientists of tomorrow. As they say in the math biz, QED.
 
 

5] Vending von Neumann

There remains one final possibility, albeit one for which there is very little tangible evidence in the economics literature, that economic research could attempt to conform to von Neumann’s original vision for a computational economics. The outlines of this automata-based theory were broached above in chapter 3. Von Neumann pioneered (but did not fully achieve) a logical theory of automata as abstract information processing entities exhibiting self-regulation in interaction with their environment, a mathematical theory framed expressly to address the following questions:

Von Neumann’s Theory of Automata

As one can observe from the absence of answers to some of the above questions, von Neumann did not manage to bequeath us a complete and fully articulated theory of automata. For instance, he himself was not able to provide a single general abstract index of computational complexity; while some advances (described above in chapter 2) have been made in the interim, there is still no uniform agreement as to the 'correct' measure of complexity (cf. Cowan et al., 1994). Further, some aspects of his own answers would seem to us today to be arbitrary. For instance, many modern computer scientists do not now believe that the best way to approach questions of evolution is through a sequential von Neumann architecture, opting instead to explore distributed connectionist architectures (Barbosa, 1993). Other researchers have sought to extend complexity hierarchies to such architectures (Greenlaw et al, 1995). Others speculate upon the existence of computational capacities 'beyond' the Turing Machine (Casti, 1997b). Nevertheless, the broad outlines of the theory of automata persist and remain clear: from Rabin and Scott (1959) onwards, computer science has been structured around a hierarchy of progressively more powerful computational automata to undergird "a machine theory of mathematics, rather than a mathematical theory of machines" (Mahoney, 1997, p.628). Computational capacity has been arrayed along the so-called 'Chomsky hierarchy' of language recognition: finite automata, pushdown automata, linear bounded automata, and at the top of the hierarchy, the Turing machine.
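The hierarchy is easiest to grasp by example. The sketch below (my own, purely illustrative) contrasts a pattern a finite automaton can recognize with one that demands the next rung up, a pushdown store:

```python
# A minimal sketch of the capacity hierarchy just described: a finite automaton
# suffices for a regular pattern, but a language such as a^n b^n needs the
# pushdown (stack) level above it.
def finite_automaton_accepts(word):
    """A two-state finite automaton recognizing 'all a's then all b's'
    (the regular language a*b*): it needs no memory beyond its current state."""
    state = 'A'
    for ch in word:
        if state == 'A' and ch == 'a':
            state = 'A'
        elif ch == 'b':
            state = 'B'
        else:                       # an 'a' after a 'b', or a stray symbol
            return False
    return True

def pushdown_accepts(word):
    """Recognizes a^n b^n, which no finite automaton can: counting the a's
    requires unbounded memory, here a simple stack (counter)."""
    stack, seen_b = 0, False
    for ch in word:
        if ch == 'a':
            if seen_b:
                return False
            stack += 1
        elif ch == 'b':
            seen_b = True
            stack -= 1
            if stack < 0:
                return False
        else:
            return False
    return stack == 0

print(finite_automaton_accepts('aaabb'), pushdown_accepts('aaabb'))  # True False
print(finite_automaton_accepts('aabb'),  pushdown_accepts('aabb'))   # True True
```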

The problem facing economists seeking to come to grips with the theory of computation has been to work out the relationship of this doctrine to a viable economic theory. Due to parochial training or stubbornness or historical amnesia, they have been oblivious to the possibility that John von Neumann did not intend his theory to be applied to explicate the psychological capacities of the rational economic agent, or even to be appended to Nash non-cooperative game theory to shore up the salience of the solution concept. This would misconstrue the whole cyborg fascination with prosthesis. Instead of economic theorists preening and strutting their own putative mathematical prowess, what is desperately needed in economics is a "machine theory of mathematics," or at least an answer to the question: Why is the economy quantitative, given the irreducibly diverse and limited mathematical competence in any population? As we have argued throughout this volume, von Neumann consistently maintained that his theory of automata should be deployed to assist in the explanation of social institutions. If social relations could be conceptualized as algorithmic in some salient aspects, then it would stand to reason that institutions should occupy the same ontological plane as computers: namely, as prostheses to aid and augment the pursuit of rational economic behavior. However, the theory of automata would further instruct us that these were prostheses of an entirely different character than hammers, and more Promethean than fire: they have the capacity to reconstruct themselves and to evolve. In the same sense that there could exist a formal theory of evolution abstracted from its biological constituent components (DNA, RNA, etc.), there could likewise exist a formal theory of institutions abstracted from their fundamental intentional constituents (namely: the psychological makeup of their participants).

Thus von Neumann sought to distill out of the formal logic of evolution a theory of change and growth of sweeping generality. At base, very simple micro-level rule structures interact in mechanical, and possibly even random, ways. Diversity of micro-level entities stands as a desideratum for this interaction to produce something other than stasis. Out of their interactions arise higher-level regularities generating behaviors more complex than anything observed at lower micro-levels. The index of 'complexity' is here explicitly linked to the information-processing capacities formally demonstrable at each level of the macrostructure; in the first instance, and although von Neumann did not propose it, this means the Chomsky hierarchy. Von Neumann justified the central dependence upon computational metaphor to structure his theory of evolution because, "of all automata of high complexity, computing machines are the ones we have the best chance of understanding. In the case of computing machines the complications can be very high, and yet they pertain to an object which is primarily mathematical and we can understand better than most natural objects" (1966, p.32). Instead of repeating the dreary proleptic Western mistake of idolizing the latest and greatest technological manifestation of the Natural Machine as the ultimate paradigm of all law-governed rationality for all of human history, it might be more prudent to acknowledge computers as transient metaphorical spurs to our ongoing understanding of the worlds that we ourselves have made. Since the computer refuses to sit still, so too will the evolving theory of automata.

So this clarifies von Neumann's envisioned role for the computer in a modern formal social theory; but it still does not illuminate the question of which entities the overworked term "institutions" should refer to in this brand of economics. What is it that economics should be about? Note well that whatever they are, these should be entities that can grow, reproduce and evolve. Unfortunately, this is a question on which von Neumann left little or no guidance; in options [1-4] above, the answer has always been the rational economic agent, something his heritage would rule out of consideration. It thus falls to the present author to suggest that the appropriate way to round out von Neumann's vision for economics is to construe markets (and, at least provisionally, not memes, not brains, not conventions, not technologies, not firms, and not states) as formal automata. In other words, the logical apotheosis of all the various cyborg incursions into economics recounted in this book resides in a formal institutional economics which portrays markets as evolving computational entities.

The alien sci-fi cyborg character of this research program may take some getting used to for the average economist. Comparisons of markets to machines are actually thick on the ground in the history of economics, but the notion that one should treat this comparison as a heuristic for mathematical formalization seems mostly repressed or absent. Markets do indeed resemble computers, in that they take various quantitative and symbolic information as inputs, and produce prices and other symbolic information as outputs. In the market automata approach, the point of departure should be that there is no single generic 'market,' but rather an array of various market algorithms differentiated along many alternative attributes – imagine, if you will, a posted offer market, a double auction market, a clearinghouse market, a sealed bid market, a personalized direct allocation device – and moreover, each is distinguished and subdivided further according to the types of bids, asks and messages accepted, the methods by which transactors are identified and queued, the protocols by which contracts are closed and recorded, and so on. On this approach, it is deemed possible (to a first approximation) to code these algorithmic aspects of the particular market forms in such a manner that they can be classified as automata of standard computational capacities and complexity classes. The initial objective of this exercise is to array the capacities of individual market types in achieving any one component of a vector of possible goals or end-states, ultimately to acknowledge that market automata are plural rather than singular because no single market algorithm is best configured to attain all (or even most) of the posited goals. This stands in stark contrast to the neoclassical approach, which has the cognitive agent commensurate and mediate a range of diverse goals through the instrumentality of a single unique allocation device called "the market". Diversity in markets is the watchword of the automata approach, for both theoretical and empirical reasons. The bedrock empirical premise is that markets are and always have been structurally and functionally diverse in their manifestations in the world; it was only the neoclassical tradition which found itself forced to imagine a single generic market ever present throughout human history. The guiding theoretical watchword is that there can be no evolution without variability of the entities deemed to undergo descent with modification.
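As a first, deliberately crude illustration of what "coding a market format" might mean, consider the following sketch of a continuous double auction (my own toy; real exchanges add queue priority rules, tick sizes, and much else besides):

```python
# A minimal sketch of one market algorithm from the list above: a continuous
# double auction that keeps an order book and trades whenever bid and ask cross.
import heapq

def continuous_double_auction(orders):
    """orders: sequence of ('bid'|'ask', price). Returns list of trade prices.
    Bids rest in a max-heap, asks in a min-heap; an incoming order trades
    against the best resting order on the other side if prices cross."""
    bids, asks, trades = [], [], []              # heaps of resting orders
    for side, price in orders:
        if side == 'bid':
            if asks and price >= asks[0]:
                trades.append(heapq.heappop(asks))   # trade at the resting ask
            else:
                heapq.heappush(bids, -price)         # rest (negated for max-heap)
        else:  # ask
            if bids and price <= -bids[0]:
                trades.append(-heapq.heappop(bids))  # trade at the resting bid
            else:
                heapq.heappush(asks, price)
    return trades

stream = [('bid', 10), ('ask', 12), ('ask', 9), ('bid', 13), ('bid', 8), ('ask', 8)]
print(continuous_double_auction(stream))   # [10, 12, 8]
```

A posted-offer market, a sealed-bid auction or a clearinghouse would differ precisely in which messages they accept, when they clear, and how much memory the resting order book requires -- and it is just these differences that the automata classification is meant to capture.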

Once one can manage the gestalt switch to a plurality of markets of varying capacities and complexities, attention immediately turns to the project of taxonomizing and organizing the categories. It gives one pause to realize how little attention has been accorded to the taxa of market forms in the history of economic thought. Under the impetus of some recent developments recounted in the next section, groundbreaking work in the differentiation of market algorithms has begun to appear, sometimes under the finance rubric of “market microstructure theory,” and in other instances under the banners of experimental economics or evolutionary economics. It is entertaining to read there the nascent attempts to produce “family trees” of various formats of interest, such as that reproduced below as Figure 1.
 
 

Figure 1 goes here.

[Friedman & Rust, 1993, p. 8]

Although its author makes nothing of it, the resemblance of the diagram in Figure 1 to a phylogenetic tree – the device commonly used to represent descent with modification in evolutionary biology – is undeniable. Yet the diagram does not capture any such phylogeny; indeed, its orthodox provenance has left the tradition that spawned it bereft of any means of judging whether one abstract market form could descend from another, much less of the historical curiosity to inquire whether any such descent actually occurred.

It is my contention that the market automata approach does supply the wherewithal to prosecute this inquiry. Once the particular algorithm which characterizes a particular market format is suitably identified and represented as a specific automaton, it becomes feasible to bring von Neumann’s project back into economics. The agenda would look something like this. Starting from scrutiny of the algorithmic representation, the analyst would inquire whether and under what conditions the algorithm halts. This encompasses questions concerning whether the algorithm arrives at a ‘correct’ or appropriate response to its inputs. Is the primary desideratum to ‘clear’ the market within a certain time frame, or is it simply to provide a public order book in which information about outstanding bids and asks is freely available? Or is the algorithm instead predicated upon a simple quantifiable halting condition, such as the exhaustion of arbitrage possibilities within a given interval? Is it configured to produce prices of a certain stochastic characterization? Some would insist instead upon the attainment of some posited welfare criterion. The mix of objectives will be geographically and temporally variable: the hallmark of an evolutionary process.
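As a purely illustrative sketch of this first step (the state variables, the predicates, and the toy adjustment rule are hypothetical, not drawn from the literature discussed here), one might separate a market algorithm’s iterative step from the halting condition imposed upon it, and then ask which of several candidate conditions ever fires:

    from typing import Callable, Dict

    def run_until_halt(step: Callable[[Dict], Dict],
                       halted: Callable[[Dict], bool],
                       state: Dict,
                       max_rounds: int = 1000) -> Dict:
        """Iterate a market algorithm's step function until its halting predicate
        fires; give up after max_rounds, since halting need not be guaranteed."""
        for _ in range(max_rounds):
            if halted(state):
                return state
            state = step(state)
        raise RuntimeError("no halt within the allotted rounds")

    # Candidate halting conditions corresponding to the objectives discussed above:
    def cleared(s): return s["excess_demand"] == 0              # the market 'clears'
    def no_arbitrage(s): return s["best_bid"] < s["best_ask"]   # arbitrage exhausted
    def book_posted(s): return s["order_book_published"]        # a public order book provided

    # Toy usage: a caricature adjustment rule that halts under the clearing condition.
    def shrink(s): return {"excess_demand": max(0, s["excess_demand"] - 1)}
    print(run_until_halt(shrink, cleared, {"excess_demand": 5}))   # {'excess_demand': 0}

The substance of the question lies, of course, in whether a given real market algorithm provably halts under a given condition at all; the bound on rounds in the sketch merely acknowledges that this need not be decidable in general.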

Next, the analyst would gauge the basic computational capacity of the specific market format relative to its identified objective or objectives. Is the market a simple finite automaton, or perhaps something more powerful, approaching the capacity of a Turing machine? If it attains such power, can it be further classified according to the computational complexity of the inputs it is prepared to handle? One might then proceed to compare and contrast market automata of the same complexity class according to their computational ‘efficiency,’ invoking standard measures of time or space requirements (Garey & Johnson, 1979). Once this process of categorization is accomplished, the way is prepared to tackle von Neumann’s ultimate question: namely, under what circumstances could a market of a posited level of complexity give rise to another market format of equal or greater complexity? In other words, in what formal sense is market evolution possible?
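Again purely for illustration (the class assignments and cost functions below are placeholders, not established results), the bookkeeping envisioned here might amount to tagging each market format with the weakest machine class sufficient to execute it and a worst-case resource requirement, with efficiency comparisons confined to formats of the same class:

    from dataclasses import dataclass
    from enum import IntEnum
    from typing import Callable

    class Capacity(IntEnum):             # the standard hierarchy of machine classes
        FINITE_AUTOMATON = 1
        PUSHDOWN = 2
        LINEAR_BOUNDED = 3
        TURING_MACHINE = 4

    @dataclass
    class MarketFormat:
        name: str
        capacity: Capacity               # weakest machine class sufficient to run the format
        time_cost: Callable[[int], int]  # worst-case steps as a function of n submitted orders

    # Placeholder assignments for illustration only, not established results:
    posted = MarketFormat("posted offer", Capacity.FINITE_AUTOMATON, lambda n: n)
    double = MarketFormat("double auction", Capacity.PUSHDOWN, lambda n: n * n)

    def more_efficient(a: MarketFormat, b: MarketFormat, n: int = 10_000) -> bool:
        """Crude comparison of time requirements, meaningful only within one capacity class."""
        return a.capacity == b.capacity and a.time_cost(n) < b.time_cost(n)

    print(posted.capacity < double.capacity)   # True: the double auction is the stronger machine
    print(more_efficient(posted, double))      # False: different classes, no efficiency ranking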

It may be here, at the image of one market automaton giving rise to another, that economic intuition falters, or that revulsion at posthuman cyborgs undergoing parthenogenesis across the landscape stultifies analysis. What could it mean for a market automaton to ‘reproduce’? This is where the abstract essence of the computational approach comes into play. Concrete market institutions spread extensively by simple replication of their rules, say, at a different geographical location; but this does not qualify as von Neumann reproduction, since it is not the market algorithm itself that is responsible for producing the copy. Rather, market automata ‘reproduce’ in the technical sense when they are able to simulate the abstract operation of other markets as a subset of their own operation.

An intuitive understanding of this process of simulation-as-assimilation can be gleaned from a familiar market for financial derivatives. When agents trade in a futures market for grain contracts, they expect the collection of their activities to simulate the (future) outputs of a distinct market, namely, the spot market for the actual grain. It is pertinent to note that the spot market (say, an English auction) frequently operates according to a certifiably different algorithm than does the futures market (say, a double auction or dealer market); here, in the language of von Neumann, one automaton is ‘reproducing’ an altogether different automaton. It may also be germane to note that an automaton may be simulating the activity of another market automaton of the same type, or, more intriguingly, an automaton of higher complexity may be simulating a market of lower complexity, as may be the case with the futures emulation of the spot market for grain. It is this very self-referential aspect of market automata which suggests the relevance of machine logic for market operations, conjuring the possibility of a hierarchy of computational complexity, and opening up the prospect of a theory of evolutionary change. For while it may be formally possible for a market automaton of higher complexity to emulate a market of lower complexity, it is not in general possible for the reverse to take place, as the sketch below suggests.
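A stylized sketch may make the asymmetry concrete (the toy order book and the restriction rule are illustrative inventions, not an analysis of any actual exchange): a double-auction book can run a posted-offer market as a restricted subset of its own message set, whereas a posted-offer machine, keeping no two-sided book, has no comparable subset of its operation in which a double auction could be executed.

    class DoubleAuctionBook:
        """A toy continuous double auction: a two-sided book that trades when bid meets ask."""

        def __init__(self):
            self.bids, self.asks, self.trades = [], [], []

        def submit(self, side: str, trader: str, price: float) -> None:
            (self.bids if side == "bid" else self.asks).append((price, trader))
            if self.bids and self.asks and max(self.bids)[0] >= min(self.asks)[0]:
                bid, ask = max(self.bids), min(self.asks)
                self.bids.remove(bid)
                self.asks.remove(ask)
                self.trades.append((bid[1], ask[1], ask[0]))   # buyer, seller, trade price

        def emulate_posted_offer(self, posted_price: float, seller: str, buyers: list) -> None:
            """Run a posted-offer market as a restricted subset of the double auction's
            own message set: the seller re-posts one ask per unit at the posted price,
            and each buyer may only accept at that price or abstain."""
            for buyer in buyers:
                self.submit("ask", seller, posted_price)
                self.submit("bid", buyer, posted_price)

    book = DoubleAuctionBook()
    book.emulate_posted_offer(10.0, "dealer", ["alice", "bob"])
    print(book.trades)   # [('alice', 'dealer', 10.0), ('bob', 'dealer', 10.0)]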

In the theory of market automata, many economic terms of necessity undergo profound redefinition and revalorization. “The market” no longer refers to a featureless flat environment within which agents operate; rather, there is posited an ecosystem of multiform diversity of agents and cultures in which markets ply their trade of calculation and evolve. Moreover, perhaps for the first time in the history of economics, there exists a theory of markets which actually displays all three basic components of a modern theory of evolution: [a] a mechanism of inheritance; [b] explicit sources of variation; and [c] one or more processes of selection. In barest outline: [a] Market automata ‘reproduce’ as algorithms through the emulation and subsumption of the algorithms of other market automata; they are physically spread and replicated by the initiatives of humans propagating their structures at various spatio-temporal locations. [b] Market automata are irreducibly diverse owing to (b1) their differing computational capacities, and (b2) the diversity of the environments – viz., the goals and objectives of humans – in which they subsist. Sources of variation can be traced to the vagaries of recombination – glitches arising when one market automaton emulates an automaton of a different type – and to a rarer kind of mutation, in which humans consciously tinker with existing market rules to produce new variants. [c] Market automata are ‘selected’ by their environments – viz., ultimately by their human participants – according to their differential capacity to ‘reproduce’ in the von Neumann sense, which means performing calculations that dependably halt and displaying the capacity to emulate other relevant market calculations emanating from other market automata. It is a presumption of the theory that selection is biased in the direction of enhanced computational complexity, although here, as in biology, the jury is still out on that thesis. Because there is no unique function or purpose which a market can be said to be ‘for’ across the board in this schema, there is no privileged direction to evolution in this economics.
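In the schematic spirit of this outline, the three components can be strung together as a toy simulation loop. Everything here – the numerical ‘capacity’ attribute, the task environment, the mutation rate – is a placeholder for illustration, and inheritance is compressed to the copying of rule sets rather than the emulation mechanism described above.

    import random

    def reliability(rules: dict, tasks: list) -> float:
        """Share of the environment's tasks on which this rule set computes acceptably;
        the numerical 'capacity' attribute is a crude stand-in for computational power."""
        return sum(1 for t in tasks if rules["capacity"] >= t["needs"]) / len(tasks)

    def mutate(rules: dict) -> dict:
        """Rare, deliberate tinkering with existing rules by human participants."""
        child = dict(rules)
        if random.random() < 0.1:
            child["capacity"] = max(1, child["capacity"] + random.choice([-1, 1]))
        return child

    def evolve(population: list, tasks: list, generations: int = 50) -> list:
        for _ in range(generations):
            # [c] selection: rule sets that compute reliably for their users persist
            population.sort(key=lambda r: reliability(r, tasks), reverse=True)
            survivors = population[: max(1, len(population) // 2)]
            # [a] inheritance (compressed here to copying) with [b] occasional variation
            population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        population.sort(key=lambda r: reliability(r, tasks), reverse=True)
        return population

    markets = [{"capacity": random.randint(1, 3)} for _ in range(10)]
    tasks = [{"needs": random.randint(1, 4)} for _ in range(5)]
    print(evolve(markets, tasks)[0])   # the most reliable surviving rule set

Nothing in the toy biases selection toward greater complexity; whether real market evolution does so is precisely the open question flagged above.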
