PART 2

This sketch of the von Neumann alternative for computational economics has occupied slightly
greater space than the prior alternatives, if only because of its relative paucity in the postwar
economics literature. This ephemeral quality might be read in two ways. In the first, one might
opine that its relative underdevelopment was deserved, citing all the rationales tendered when
one tends to ignore inchoate research programs: it is vague, poorly linked to empirical inquiry,
lacks resonance with the concerns of modern social theorists, is outlandish in its metaphorical
inspiration, sports a dispiriting air about the human prospect, and so on. The reason that this
book is written as an history is to preclude such bromides. Rather, its purpose has been to argue
that von Neumann’s machinic hum has haunted the American economics profession in the
postwar period, and that proponents of the neoclassical orthodoxy have repeatedly shaped their
doctrines in reaction against the cyborg imperatives that they have detected in von Neumann’s
work. The automata approach is not the rodomontade of a few marginalized souls easily
drowned out by the strident sermons of the neoclassical elect; it is the characteristic allocution
of cyborg culture, found throughout fin de siècle science. The remainder of this chapter reveals
that, if one listens carefully enough, its basso profundo can be heard echoing throughout the
long corridors of some modern developments within the orthodox economics profession.
Markets as automata are coming to a workstation near you...
 
 

III. The Hayek Hypothesis and Experimental Economics

The folkways of cyborgs often elude the commonplace understandings of men. We saw in
chapter 6 that one version of the experimental approach to economics was pioneered at RAND
by Merrill Flood and his ‘Robot Sociology’. Indeed, one could easily make the case, although
we shall regrettably decline to do so here, that von Neumann was also indirectly responsible for
the genesis of what passes as ‘experimental economics’ in our turn-of-the-century economics
profession. This is not the way the story is told amongst the experimental economists, and in
particular, by their premier spokesman, Vernon Smith. Our aim in the present venue is not to
challenge Smith’s history head on, but instead to recount how it was that Robot Sociology was
eventually readmitted into good graces in experimental economics after a protracted period of
neglect, and how this has had far-reaching consequences for the market automata approach to
computational economics.

Smith tells us that he was not originally sold on the virtues of laboratory experimentation in
economics; rather, he was put off by a classroom exercise in a simulated market conducted by
Edward Chamberlin while Smith was a graduate student at Harvard. Chamberlin had assigned costs
and reservation prices on cards to individual subjects and had them mill about the room making
trades, only to triumphantly post the results on the blackboard, which revealed that the students
had not converged to the predicted Marshallian price and quantity (see Chamberlin, 1948). This
‘disconfirmation’ nagged away at Smith, who was convinced that the deck had (literally) been
stacked against neoclassical theory; he pondered how it might be possible to reduce the number
of intramarginal trades and induce more price stability in the outcome. From our present vantage
point, the relevant lesson to glean is that the Harvard experience had planted in Smith the
suspicion that not all markets are alike, and that it would be prudent to pay closer attention to
the specific rule structures which made up the market experiment.

In the late 1950s, Smith ran a number of what he originally dubbed ‘market simulations’ in his
economics classes, all with the purpose of disconfirming Chamberlin’s disconfirmation.
Eventually he lit upon a structure, essentially a double-sided sequential auction with a price
improvement rule, which did manage to converge to the predicted Marshallian price and
quantity in the majority of cases; and moreover, Smith asserted that its operation resulted in
greater ‘efficiency’ of outcomes than that of other formats, in the importantly limited
Marshallian sense of realized producer plus consumer surplus (crudely, the area between the
left-hand blade-plus-handle of the supply and demand ‘scissors’). Smith had appreciated early
on that experimental design bore more than a passing resemblance to computer programming:
"To write the instructions for an experimental task is to define a trading institution in all its
mechanical detail" (1991a, p.109). He parlayed this insight into an actual computer code when
he hooked up with Arlington Williams in the mid-1970s to computerize his market experiments.
The results then proceeded to display a machinelike dependability in two senses: one, the
protocol could be programmed onto a computer to automate many aspects of the experiment and
export it to other laboratories; and two, when conducted with real money, the theory would be
corroborated whatever the target population – experienced, inexperienced, economics majors,
comp lit majors, math wizards, numerically challenged, whatever. Experiments with other rule
structures did not turn out nearly so reliably or display such efficiency (Plott & Smith, 1978;
Smith, 1991a, pp.107 et seq.). Nevertheless, first Smith, and then others were so impressed with
the robust character of the double auction experimental protocol that they sought to subject it to
all sorts of ‘stress tests’, and its persistent vaunted reliability prompted Smith to proclaim in the
1980s that this was one of the first and most solid lessons of the burgeoning field of
experimental economics. To that end, in 1982 Smith announced something he called “the Hayek
Hypothesis”: [Experimental protocols of] “Strict privacy [regarding agents’ valuation and cost
data] together with the trading rules of a market institution are sufficient to produce competitive
market outcomes at or near 100% efficiency” (in Smith, 1991, p.223). Elsewhere (1991b) he
has drawn out what he considers to be the morals of the hypothesis: markets ‘work’ because of
an ‘economic rationality’ attributed to ‘institutions’, economic rationality is not reducible to
individual rationality, economic rationality is neither cognitively nor computationally intensive,
and therefore, experiments which test economic rationality on an individual level are nugatory.
“What is imperfectly understood is the precise manner in which institutions serve as social tools
that reinforce, even [NB-PM] induce individual rationality” (1991b, p.881).

Since this assertion that “experiments have shown” that markets work has become hallowed
folklore amongst many economists in the absence of any criticism or philosophical reflection, it
might behoove us to briefly clarify what the Hayek Hypothesis does and does not state. First off,
there is almost no textual support in Hayek for much of what is being asserted by Smith about his
experiments, beyond some generic political endorsements that the market does indeed ‘work’,
and that social and legal institutions are necessary prerequisites for its reliable operation.
Hayek displayed progressively diminishing interest in defending the neoclassical research
program over the course of his career, and in any event, would not have endorsed the explicit
Marshallian devices (such as consumer’s surplus as welfare index) which are so central to
Smith’s case. Of course, Smith may just be evoking Hayek in a merely ceremonial manner.
Nevertheless, harkening back to chapter 5, there is something apposite, something exquisitely
right about taking Hayek’s name in vain in this context. For Smith’s proclamation is merely the
continuation of the socialist calculation controversy by other means, those means still largely
dictated by the same set of scientific influences bearing down upon Hayek in the 1950s. Fifty
years on, the underlying motivation is still: where is the best location to project the metaphor of
the computer, and what does that imply for the possibility of planning the operation of the
economy? Only now, it would appear that Smith believes the best way to defend what he
conceives to be the pith of the neoclassical program from the depredations of those who bemoan
either the cognitive limitations of the real-world economic agent or the computational
limitations to the operation of the Land of Cockaigne ensconced in Walrasian theory, is to
imagine the market institution more explicitly as a computer, but the same sort of computer that
runs the market experiments in the lab.

However, here is where the neoclassical program comes a cropper of the muted appeal to
cyborgs of Smith and his minions. For the “defense” of free markets, if that is in fact what is
intended, is much more narrowly based and strewn with paradoxes than Smith or anyone else
has yet openly conceded. Begin with an insistence upon some precision concerning the theory
which it is claimed is being supported. We have argued in this volume and elsewhere (Hands &
Mirowski, 1998) that it was wholesale rejection of the Marshallian program which provided the
impetus for the Walrasian neoclassical ascendancy in the postwar US; nothing repelled that
earlier generation of mathematical economists more than (in their view) the putatively sloppy
way in which Marshallians made all sorts of outlandish claims for the significance of their
partial equilibrium exercises; and worse, the way the Marshallians wore their cavalier
disregard for obvious objections to their confident welfare evaluations as badges of honor. It is
not at all clear that Smith’s championing of that very same Marshallian position amounts to
something other than beating an unseemly retreat away from everything which had fostered the
conditions for the neoclassical orthodoxy to displace its rivals in the first place. To put this
differently: it is a rare neoclassical economist these days who would recommend a
controversial policy change on the basis of some expected improvement in Marshallian
consumer surplus. The aura of orthodoxy of Smith’s ‘defense’ of neoclassical economics comes
from confusing the Marshallian theory in introductory economics textbooks with what a
graduate-trained economist is supposed to believe at this late hour in the century.

There is an even deeper sense in which this defense is incoherent. In the laboratory, Smith takes
great precautions to render his subjects as close to neoclassical agents as inhumanly possible:
reservation prices and costs are imposed; controls are exerted so that the subjects both know
and concentrate upon the data that the experimenter deems relevant; ‘strict privacy’ means that
social interaction and communicative interchange are reduced to the stark minimum; if the rules
are violated the data is discarded; subjects are schooled in the protocols of machinelike
behavior by being run through preparatory conditioning sessions on the workstations. Once you
understand everything that is involved in the imposition of Smith’s experimental protocols, it
seems more than a little disingenuous then to bemoan that the ways in which these experimental
markets “induce individual rationality” remain “imperfectly understood.” These experiments
don’t test neoclassical theory (as Smith sometimes will admit); they try to impose mechanical
rationality upon subjects who might not otherwise display it. They try, but never entirely
succeed, because full neoclassical rationality is unattainable on a Turing machine. Yet the bold
assertion that the market works takes it for granted that the full panoply of decision-theoretic
algorithms is firmly in place. If they are not really all there, if people are just a little
inconsistent or confused or fickle, then most of the criteria used by Smith to evaluate
equilibrium and efficiency in his markets are left without foundations. Hence, his statements
about the absence of any need for individual rationality to make neoclassical stories work are
simply groundless. One can’t help but suspect that the existence of the computer in the lab has
served to distract attention from paradoxes arising from the imperfect cognitive instantiation of
the computer in the head.

Next, even if we ignore the previous caveats and concede that every empirical generalization
which Smith reports for his lab subjects holds true (a tall order), it would still be the case that
his Hayek Hypothesis is caught in a cyborg conundrum. For his vaunted demonstration of
economic rationality at the machine/institution level only holds true for a very restricted version
of the double auction procedure – an institutional protocol with extremely circumscribed
prevalence in the world outside the laboratory, however popular it may be within its confines.
Smith’s program seems to be trapped in a rhetorical bind: experiments suggest that ‘markets’
really are diverse and different, but the need to preserve his bona fides dictates that the generic
mantra that ‘The Market’ works go unchallenged. It seems he can’t decide if he wants to identify
markets with institutions as normally understood by economists, or with their older
supra-institutional status. Hence what can only charitably be described as a local regularity is
being blown all out of proportion into a lawlike generalization. The lack of self-awareness of
this problem can be thrown into sharp relief by considering what is perhaps the second-most
popular market format amongst the experimental economists after the double auction, namely,
the so-called “posted offer” market.

The posted offer market (unlike the double auction) is familiar to most readers as what they
encounter every week at their local grocery store. Sellers post a fixed price for each item, and
buyers can take it or leave it. If Piggly Wiggly won’t provide a good price, perhaps the Tesco or
the Carrefour down the lane will. Although prevalent throughout the developed world,
experimentalists can barely repress their disdain for the posted offer market. In their view it is
‘inefficient’ relative to the double auction, it outputs prices that are ‘too high’ for competitive
equilibrium, it inordinately slows convergence to competitive equilibrium, and if it has a
competitive equilibrium, it unfortunately does not coincide with the Nash equilibrium (Smith,
1991a, p.300). Less frequently admitted is the fact that in the laboratory, playing the role of the
‘buyer’ in a posted offer setting is dead boring: so much so, that early on, most experimentalists
decided to save some money by zeroing out the human buyers in their posted offer experiments
(Kagel & Roth, 1995, p.381; Davis & Holt, 1993, p.177). That is, they replaced all the buyers
with computer programs! Sometimes they tell their experimental subjects that they are playing
against simulated buyers, and sometimes not. The point here is not that there is something illegitimate
about introducing simulations into experimental situations – this is indeed the trademark cyborg
innovation – but rather that the Hayek Hypothesis is a house built on sand. If the posted offer is
so inferior, so boring, so utterly trivial and yet so ubiquitous, shouldn’t those interested in
singing hosannas to The Market buckle down and explain it? Furthermore, shouldn’t any
comparisons of the efficacies of different small-m market formats be expressly carried out in the
2×2 experimental design of human double auctions with/out computer participants vs. human
posted offer with/out computer participants? Although all the cells of that research design have
yet to be sufficiently filled in, it was a foregone conclusion that the Hayek Hypothesis would
eventually summon up a revival of Robot Sociology; and that is precisely what happened.
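
As for the simulated buyers mentioned above, they need amount to no more than a few lines of code. One plausible rule, and the one sketched below, is full demand revelation: take another unit whenever the posted price does not exceed the next redemption value. The rule and the numbers are illustrative assumptions, not a transcription of any particular laboratory's software.

    # Sketch of a simulated (robot) buyer in a posted-offer market: it purchases units as long
    # as the posted price does not exceed its next redemption value. This demand-revealing rule
    # and the numbers below are illustrative assumptions.

    def simulated_buyer_purchases(posted_price, redemption_values):
        """Units a robot buyer takes at a posted price, given redemption values for successive units."""
        units = 0
        for value in sorted(redemption_values, reverse=True):
            if posted_price <= value:
                units += 1
            else:
                break
        return units

    print(simulated_buyer_purchases(posted_price=72, redemption_values=[90, 75, 60]))   # -> 2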
 

IV. Gode and Sunder Go Roboshopping

One research project highlighted by Vernon Smith’s championing of his Hayek Hypothesis was
to try and explain the observed superior efficacy of the double auction in the vast array of
market experiments, auction and otherwise, which had been performed over more than two
decades. Smith, as we saw, was predisposed to attribute their clockwork efficacy to the market
institution as such, and not to concede much to the cognitive abilities of the experimental
subjects. But if economists were avoiding cognitive science like castor oil, how would this
question ever get resolved? It would turn out to be a stroke of genius to realize that the practical
means to separate out the two classes of causal factors and subject them to individual test had
already been pioneered within the experimental tradition, and in particular, by the use of
simulated buyers to run posted offer experiments. The experimental economists who first came
to this realization were Dhananjay Gode and Shyam Sunder; and for this reason alone their 1993
paper, “Allocative Efficiency of Markets with Zero-Intelligence Traders: Markets as a Partial
Substitute for Individual Rationality” will certainly go down in the annals of cyborg economics
as a late-20th century classic.

The paper was perhaps all the more effective because Gode and Sunder [henceforth: G&S]
were transparently not wholehearted converts to the cyborg cause. Beginning with the very title
(“Partial Substitute”), their paper is studded with all sorts of qualifications and reassurances
customized for the neoclassical audience of the JPE, all to the effect that their findings
would not derange the stolid complacency of the neoclassical economist. Indeed, a number of
those assertions made therein are demonstrably false, but ultimately irrelevant to their argument,
serving merely as window-dressing to lull the reader into a state of placid calm, as a prelude to
ushering them into the topsy-turvy world of cyborg roboshopping. For what G&S came to
appreciate was that Vernon Smith’s experimental protocols were really attempts to discipline
his subjects to act like machines, but that there would always be some residuum that would
escape control – “It is not possible to control the trading behavior of individuals” (1993, p.120)
– so why not take the logic of Smith’s method to the limit, and have robots – or more correctly,
algorithms – conduct the trades in the double auction experimental setup? Other than exhilarating
in the sheer cyborg delight in blurring the boundaries between man and machine, their exercise
had the compelling virtue of more directly addressing Smith’s contention that it was the market
‘institution’ which was primarily responsible for the vaunted efficiency of double auction
outputs.

To that end, G&S concocted what they called “Zero-Intelligence Agents” to participate in
double auctions; for convenience, in this volume we will call them “zoids”. These agents,
stripped of their agency, were just bits of software, whose sole purpose in Alife was to submit
to the market-experiment software the messages that normally came from human subjects.
What a zoid does, in its own inimitable fashion, is come up with a bid price or ask price for a
single unit of a nondescript unspecific ‘commodity’ (what else would a zoid want to buy,
anyway?). Because it is a little challenged in the intelligence department, the zoid draws this bid
or ask randomly from a uniform distribution of prices, which are bounded for the buyer zoid by
the numbers 1 and its pre-assigned reservation price, and for the seller zoid by the pre-assigned
cost and an arbitrary upper bound. Since the prior imposition of reservation prices and cost
assignments is standard experimental protocol from Smith onwards, in this respect, at least,
zoids are being treated symmetrically with human subjects in experimental economics. The
difference comes in what happens next. G&S then further differentiate two classes of zoids,
which we will dub ‘A-zoids’ and ‘B-zoids’. A-zoids (or affectless zoids) have no further
function in Alife other than to mechanically spit out bids or asks, with absolutely no further
consequences for their own behavior. B-zoids (budgeted zoids), by contrast, come equipped
with the further capacity to stay within their prior specified budget constraint; this is
accomplished essentially by shrinking the support of their probability distribution as a
consequence of their actually being chosen within the auction setup to complete a transaction.
It’s not much of a brain, but computational theory alerts us that the augmentation of memory
capacity, however rudimentary, betokens a serious promotion up the scale of
complexity. Once the marketplace was populated with zoids, G&S had the inspired idea of
letting the zoids trade amongst themselves on a computerized double auction, having human
subjects trade on the exact same computerized double auction market, and then comparing the
outputs.
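
Before turning to the results, it may help to make the zoid mechanics concrete. Here is a minimal sketch of budget-respecting zoids submitting random bids and asks to a stripped-down continuous double auction, with realized Marshallian efficiency computed at the end. The single-unit-per-trader setup (so the budget constraint reduces to never bidding above value or asking below cost), the price ceiling of 200, the price-improvement rule, and the convention that a crossing order trades at the standing quote are simplifying assumptions of the sketch, not a reconstruction of the actual G&S code.

    import random

    # A minimal sketch of budget-respecting zoids ("B-zoids") each trading a single unit in a
    # bare-bones continuous double auction with a price-improvement rule. The price ceiling,
    # the rule that a crossing order trades at the standing quote, and the clearing of the
    # order book after each trade are simplifying assumptions of this sketch.

    PRICE_CEILING = 200

    def run_zoid_double_auction(buyer_values, seller_costs, steps=2000, seed=0):
        rng = random.Random(seed)
        buyers = list(buyer_values)      # remaining buyers' redemption values
        sellers = list(seller_costs)     # remaining sellers' costs
        best_bid = best_ask = None       # standing quotes: (price, trader index)
        trades = []                      # executed (buyer_value, seller_cost, price) triples
        for _ in range(steps):
            if not buyers or not sellers:
                break
            if rng.random() < 0.5:                               # a randomly chosen buyer zoid acts
                i = rng.randrange(len(buyers))
                bid = rng.randint(1, buyers[i])                  # never bid above own value
                if best_ask is not None and bid >= best_ask[0]:
                    price = best_ask[0]                          # trade at the standing ask
                    trades.append((buyers[i], sellers[best_ask[1]], price))
                    del sellers[best_ask[1]]
                    del buyers[i]
                    best_bid = best_ask = None                   # clear the book after a trade
                elif best_bid is None or bid > best_bid[0]:      # price-improvement rule
                    best_bid = (bid, i)
            else:                                                # a randomly chosen seller zoid acts
                j = rng.randrange(len(sellers))
                ask = rng.randint(sellers[j], PRICE_CEILING)     # never ask below own cost
                if best_bid is not None and ask <= best_bid[0]:
                    price = best_bid[0]                          # trade at the standing bid
                    trades.append((buyers[best_bid[1]], sellers[j], price))
                    del buyers[best_bid[1]]
                    del sellers[j]
                    best_bid = best_ask = None
                elif best_ask is None or ask < best_ask[0]:
                    best_ask = (ask, j)
        return trades

    def efficiency(trades, buyer_values, seller_costs):
        """Realized surplus as a fraction of the maximum attainable (Marshallian) surplus."""
        realized = sum(v - c for v, c, _price in trades)
        maximum = sum(max(v - c, 0) for v, c in zip(sorted(buyer_values, reverse=True),
                                                    sorted(seller_costs)))
        return realized / maximum

    values = [130, 120, 110, 100, 90, 80]
    costs = [60, 70, 80, 90, 100, 110]
    trades = run_zoid_double_auction(values, costs, seed=42)
    print(len(trades), round(efficiency(trades, values, costs), 2))   # typically high, not always 1.0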

The results for A-zoids were essentially what you would expect: braindead and affectless,
A-zoids provide yet another illustration of the old computer adage, ‘garbage in, garbage out.’
Double auctions with A-zoids did not converge to predicted price and quantity, but only emitted
noise. The shock came with the B-zoids. In that setup, price and quantity did converge to
Marshallian predictions; more unexpectedly, measures of Marshallian efficiency also
approached the high levels observed in human experiments! In some setups, the human outputs
and B-zoid outputs were more or less indistinguishable in broad statistical terms. Realizing that
this verged upon the assertion that there is no detectable difference between zoids and humans,
G&S recoiled in horror from the implications of their experiments: “Note that we do not wish to
argue that human traders behaved like [zoids] in the market. fn. They obviously did not. Human
markets exhibit a pattern of lower efficiency in the first period, followed by higher efficiency in
later periods” (1993, p.133). No elaborate quantitative statistical evidence was provided to
back up this claim, however. The terror before which G&S stood transfixed was not that
economists would begin to confuse human beings with their bits of code (was it?), but rather that
it was all too easy to draw some devastating morals from their exercise about the utter futility of
more than a century of neoclassical economic theory. While in 1993 they tended to echo Vernon
Smith’s assertion that the double auction framework had managed to induce “aggregate
rationality not only from individual rationality but also from individual irrationality” (p.136),
the prognosis could just as easily have been that ‘aggregate rationality’ had no relationship to
anything that the neoclassicals had been trumpeting as economic rationality for all these years. If
anything, they backpedaled more awkwardly over time as the bad news began to sink in: “We
assume simple trader behavior for tractability, not to challenge or criticize utility maximization”
(1997, p.604). This even extended to the prospect that the rational economic man threatened to
disappear into a welter of cyborg machines and institutions: “Market rules are a consequence of
individual rationality because they evolve out of individual choices over time” (1997, p.606).
And yes, Virginia, economic valuations can all be resolved into embodied labor time since
some human was required to initiate all economic activity.

If Vernon Smith lifted the lid just a crack, G&S have pried the top off Pandora’s Box, and no
amount of posturing will disguise the fact that cyborgs and zoids and all manner of other hybrids
now prowl the marketplace. Machines of middling capacities can produce market regularities,
something rendered salient through agent simulations. The question now should be: what is the
real lasting significance of the demonstrated result that the specific market format identified as
the most efficient, the most efficacious, the most powerful embodiment of the neoclassical ideal
of the Market, produces its hallowed results in experimental settings with severely impaired
robots? Whatever could the overworked term ‘rationality’ mean in this most artificial of
environments? We might venture the observation that, whereas G&S have demonstrated that they
possess prodigious instincts for concocting the most perspicacious of thought experiments for
provoking wonder and delighting the mind, their capacities for drawing out the implications of
those exercises for the practice of economics nowhere rise to the same level of mastery. For
those of us who still believe in the virtues of the division of labor, this should not cause anyone
to lose any sleep. What is important to take into account is that their ceremonial homage to the
neoclassical program has stood as an insuperable impediment to construction of a satisfactory
theoretical account of this very intriguing finding. Zoids do not and cannot tell us anything about
human cognitive capacities; nor do they illuminate awkward neoclassical notions of
‘institutions’. Zoids are automata, interacting through algorithms running on other automata. The
ghost of von Neumann has been whispering through G&S, whether they were aware of it or not.

The attempt to elevate the double auction setup as the best instantiation of the generic
neoclassical Market commits the fallacy of a logical conflation of the part with the whole; the
attempt to isolate realization of Marshallian consumer/producer surplus as the ne plus ultra of
all market activity only misrepresents the telos of the neoclassical model. These are incidental
mistakes, artifacts of their origins in Smith’s experimental economics, and once they are
dispensed with, it becomes possible to recast G&S’s research program as fitting snugly within
the framework of markets as evolving automata. For once we reorient attention to the fact that
markets are distinctly plural automata, a thesis already implicit in the experimental economics
literature, then we can come to entertain the notion that there are multiple, possibly
incommensurate, quantitative objectives which individual market formats exist in order to
satisfy. The masterful stroke of separating out the computational capacities of markets from the
cognitive attributes of their human participants, so crucial to the procedures of G&S, only
serves to open up the possibility from within the orthodox neoclassical literature that the market
algorithms themselves can be sorted, graded and analytically distinguished by their
computational regularities. Something approaching this option clearly has occurred to G&S,
since in their next major paper (1997), they do precisely that. There they seek to attribute the
capture of what they call “allocative efficiency” – really, Marshallian surplus – by B-zoids in
their roboshopping activities to different configurations of market rule structures.

One endearing attribute of the G&S research program is that it encourages novel perspectives on
age-old economic questions by making use of the essential modularity of programming in the
von Neumann architecture. Was it the people, or was it the market which accounted for the
regularities of the experimentalists’ double auction? Answer: Introduce zoids, and separate out
pure market effects. But the double auction is itself an agglomeration of a number of component
rules in the form of algorithmic specifications. What exactly is it about the double auction that
interacts so efficiently with zoids? Answer: Break the market software down into relatively
separable subroutines, let the zoids interact on each, and factor out their distinct effects. But
then, what are the relevant building blocks which comprise the double auction, and into which
the rules should be decomposed? Answer: Other similarly coded market formats. Significantly,
G&S concede that the latter two questions cannot be conveniently asked within any cognitive
framework, and therefore (unwittingly?) find themselves venturing further down the road
towards von Neumann’s version of market automata. In their (1997), they posit a sequence of
nested subroutines – a “Null market” which randomly allocates zoid asks to zoid buyers; a
Sealed Bid auction, which comprises the Null plus price priority and binding contract
rules; the preceding augmented with a call provision which aggregates bids and asks over a
fixed time frame; a Double auction with one round; and a multiple round Double Auction. What
turns out to be fascinating about this exercise is that G&S had to decide where the boundaries of
the various subroutines/rules should be located, and which subsets of rules could be abstracted
away from for the purposes of their inquiry. Nothing in the neoclassical microtheory tradition
would begin to inform their choices in this regard; and therefore, much of what they did decide
was driven more by the exigencies of the experimental software than by any empirical
description of any real-life market “in the wild” (as experimentalists are wont to say).
Nevertheless, resorting to both simulation and some probabilistic analysis, G&S claim to
demonstrate that: the Sealed Bid algorithm results in a gain of Marshallian efficiency over the
Null market; the call provision itself can build in a modicum of efficiency; and the repeated
Double Auction routine guarantees an even higher level of efficiency. As they summarize:

    Efficiency is lower if (1) traders indulge in unprofitable trades; (2) traders fail to negotiate
    profitable trades, and (3) extramarginal traders displace intramarginal traders. Voluntary
    exchanges among traders who have the judgment to avoid losses eliminate the first source
    of inefficiency. Multiple rounds of bids and asks... reduce inefficiency due to the second
    source... the binding contract rule and the price priority rule... discriminates against
    extramarginal bidders because their redemption values are low, and their lower bids are
    given lower priority (1997, pp.622-3).
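
To see the kind of decomposition at issue in that summary, consider a minimal sketch of two of the nested formats: a ‘Null’ market that randomly pairs bids with asks, and a one-shot sealed-bid call that imposes price priority, both fed identical zoid-style random orders and both scored by realized surplus. The matching conventions and the no-loss trading rule are assumptions of the sketch, not a transcription of the G&S simulations; the point is only that the sealed-bid call captures more surplus on average than the random allocator, in line with the first step of their ranking.

    import random

    # Sketch of two nested market formats fed identical zoid-style random orders:
    #   null_market     -- randomly pairs bids with asks (a trade occurs only if bid >= ask)
    #   sealed_bid_call -- one call: price priority, highest bids matched with lowest asks
    # The matching conventions are assumptions of this sketch, not a transcription of G&S (1997).

    def make_orders(values, costs, rng):
        bids = [(rng.randint(1, v), v) for v in values]       # (bid, buyer value)
        asks = [(rng.randint(c, 200), c) for c in costs]      # (ask, seller cost)
        return bids, asks

    def null_market(bids, asks, rng):
        bids, asks = bids[:], asks[:]
        rng.shuffle(bids)
        rng.shuffle(asks)
        return [(v, c) for (b, v), (a, c) in zip(bids, asks) if b >= a]

    def sealed_bid_call(bids, asks):
        bids = sorted(bids, reverse=True)                     # price priority on both sides
        asks = sorted(asks)
        return [(v, c) for (b, v), (a, c) in zip(bids, asks) if b >= a]

    def surplus(trades):
        return sum(v - c for v, c in trades)

    rng = random.Random(1)
    values = [130, 120, 110, 100, 90, 80]
    costs = [60, 70, 80, 90, 100, 110]
    null_score = call_score = 0
    runs = 500
    for _ in range(runs):
        bids, asks = make_orders(values, costs, rng)
        null_score += surplus(null_market(bids, asks, rng))
        call_score += surplus(sealed_bid_call(bids, asks))
    print("average surplus, Null market:", null_score / runs)
    print("average surplus, Sealed Bid call:", call_score / runs)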
 
 

It will prove imperative to recast their summary with an enhanced modicum of precision and
specificity. What G&S have shown is that there exists an implicit hierarchy of market rule
structures which can be ranked according to one (rather tendentious) quantitative index of
market ‘success’; furthermore, resorting to simulations with zoids, one can attribute relative
proportions of improvement in performance in this single index to sequential augmentation of the
market algorithm with more powerful rules, primarily involving the enforcement of
consequences of zoid losses through memory calls, the consolidation and amplification of price
comparisons, and imposition of price improvement and priority rules as a prerequisite of
participation. (Given some rather stringent restrictions they felt they were impelled to make
upon the versions of market algorithms considered, G&S had very little to say about market
rules and their impact upon quantity variables.) Hence, theirs is ultimately a very limited
exercise, but no less instructive because of that fact. For, witting or no, what G&S have
achieved in this paper is to pioneer an entirely novel approach to the kinds of phenomena that
economics might aspire to explain. If different market formats (or, using Neumannesque
language, different market automata) such as the Random Allocator (Null), the Sealed Bid and
the Double Auction can be effectively ranked according to their efficacy in achieving some
specific quantitative indicator of success, then would it not be possible likewise to conduct such
a comparative exercise with respect to other quantitative indicators? There is no overwhelming
need to be held hostage to Marshallian notions of surplus; therefore, it becomes conceivable
instead to resort to other benchmarks: Is the objective to ‘clear’ the market in a timely fashion?
Or does it simply serve as a public order archive where outstanding bids and offers are
smoothly and accurately transmitted to all participants? Is the prevalence of the particular
market format predicated upon some other straightforward halting condition, such as the
exhaustion of targeted arbitrage possibilities within a specified time frame? Or is it perhaps
geared to produce a somewhat more complicated state of affairs, ranging from imposition of
some cultural notion of ‘orderly’ price dynamics (no quick drastic price movements, low
volatility, low autocorrelation), to maintenance of a situation where firms might more readily
acclimatize (if not more directly control) their client base to changes in product life cycles,
technological developments, or even scheduled style changes? Indeed, we might subject each
market automaton to an effectiveness audit, running down the prospective list of all the things we
think it is that markets do, and evaluating its computational capacity to attain each objective.
Each of the above proposed aspects of markets, and many others, has been discussed at one
time or another in the history of postwar economics; the problem encountered most frequently
was that, in each case, they were considered in splendid isolation, bereft of the insight that
different market formats might exist to facilitate differing objectives, and thus blocked from
consideration of the thesis that no ‘generic’ market exists which could attain them all equally or
adequately. The beauty of G&S’s dependence upon zoids to provide a baseline from which to
measure and highlight the effects of differences in market formats is that we can effortlessly
dispense with the orthodox presumption that the raison d’être of markets stands or falls on what
their idealized unified individual ‘feels’ or ‘thinks’ about them.
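
An ‘effectiveness audit’ of the sort just proposed could be organized as nothing more elaborate than a battery of scoring functions applied to the record of quotes and trades that a given market format generates. The three criteria sketched below (periods to clear, realized surplus, price volatility) are merely illustrative stand-ins for the longer list enumerated above, and the trade log is hypothetical.

    # Illustrative sketch of an "effectiveness audit": score one market format's output
    # (a log of trades) against several distinct objectives rather than a single welfare index.
    # The three metrics and the sample log are hypothetical stand-ins.

    def time_to_clear(trade_log):
        """Period in which the last trade occurred (a crude clearing-speed measure)."""
        return max(period for period, _price, _v, _c in trade_log)

    def realized_surplus(trade_log):
        """Marshallian surplus captured across all executed trades."""
        return sum(v - c for _period, _price, v, c in trade_log)

    def price_volatility(trade_log):
        """Mean absolute change in successive transaction prices (a proxy for 'orderly' prices)."""
        prices = [price for _period, price, _v, _c in trade_log]
        return sum(abs(b - a) for a, b in zip(prices, prices[1:])) / max(len(prices) - 1, 1)

    AUDIT = {"periods to clear": time_to_clear,
             "realized surplus": realized_surplus,
             "price volatility": price_volatility}

    # Hypothetical log: (period, transaction price, buyer value, seller cost)
    log = [(1, 95, 130, 60), (1, 88, 120, 70), (2, 92, 110, 80), (3, 90, 100, 90)]
    for criterion, metric in AUDIT.items():
        print(criterion, "->", metric(log))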

The research program envisioned herein, taking its inspiration from von Neumann, is both
extensive and ambitious; if anything, it threatens to appear infinite in all directions, lacking any
concrete specificity or paradigm instances of the kinds of analysis which might be conducted
under its imprimatur. To counter that hasty impression, and cement the case that modern
economists have been propagating cyborg economics without being aware of it, I provide an
exercise in the appendix to this chapter which explicitly links the G&S (1997) exercise to the
treatment of markets as automata. G&S have shown that, in the presence of zoids and relative to
their Marshallian conception of ‘welfare’, the Null market is inferior to the Sealed Bid market,
which in turn is inferior to the Double Auction. Hence they propose a nested hierarchy of market
algorithms of greater ‘power’ to achieve their specified objective. What I and my co-author
Koye Somefun have argued in our (1998) is that the abstract G&S characterizations of each of
those three formats can be restated in the idiom of the theory of automata and ranked along the
Chomsky computational hierarchy. In the appendix, we simplify the exercise to portray both the
Sealed Bid and Double Auction as pushdown automata with multiple one-way read-only heads;
each market is imagined to be an abstract language recognizer which accepts certain stylized
messages in the form of bids and asks. The appendix demonstrates that the G&S style 2-head
pushdown automaton can accept messages characteristic of the Sealed Bid, but not the Double
Auction; whereas the G&S style 4-head pushdown automaton can accept messages from both the
Sealed Bid and Double Auction. Because the Sealed Bid can be encoded on an automaton of
lesser computational capacity than the Double Auction, we interpret this as suggesting the
Double Auction can emulate the Sealed Bid, but not vice versa. In other words, we have
demonstrated the existence of a computational hierarchy of market automata which maps
directly into G&S’s welfare hierarchy of market formats. It would thus seem fairly
uncontroversial to conclude that the reason the Double Auction is more efficacious in this
instance is that it exhibits greater computational power than the Sealed Bid. Because it can
handle greater complexity in the differentiation and coordination of extramarginal traders, it can
regularly realize greater Marshallian surplus. It is this general class of comparative argument,
although not an endorsement of its idiosyncratic welfare commitments, which we anticipate will
become increasingly familiar in a future computational economics hewing to von Neumann’s
tradition.

This brings us back to questions of evolution, so central to von Neumann’s vision, and so absent
in G&S. The breathtaking lack of curiosity so characteristic of American economics extends not
only to their inability to detect the mapping of their exercise into computational theory, but also
to their failure to ask what it means to maintain that there exist whole classes of markets which rate as inferior
by their own standards. The siren song of the Hayek Hypothesis has suppressed contemplation
of the bald conundrum presented to this research community by the proliferation of market forms
in the world which do not appear to make the grade by their lights. The notion of a nested
hierarchy of market forms bears no theoretical salience whatsoever for G&S; and they don’t
know what to make of its presence in their own work. In the von Neumann program, by
contrast, its significance is elevated to prime importance. The fact that a market of lesser
computational capacity can be subsumed within the code of another, more powerful market
automaton as a subroutine, is the most important clue that markets evolve through emulation and
accretion of algorithms. Markets which may appear to be rudimentary and of low computational
capacity relative to a given quantitative objective manage to persist because they continue to
occupy an effective niche in the larger economic ecology, be it through having proven themselves
robust to a local combination of diverse environmental demands (i.e., plans and objectives of
humans), or else through symbiosis with market forms higher up the hierarchy. The arrow of
time is oriented in the direction of the appearance of ever-more computationally powerful
market automata, but this says very little about the relative prevalence of earlier and simpler
market formats over time. Indeed, one might expect to observe some restricted class of
specialized markets – and here financial markets spring to mind – outstrip the larger population
of market automata in terms of computational improvement and innovation. There may be many
sensible explanations of this phenomenon, but none of them will dictate that the great bulk of
markets for consumer goods are therefore obsolete. The great delusion of the neoclassical
project from the time of Walras has been to conflate the complexity of financial markets with the
complexity of markets as a whole.
 
 

V. Contingency, Irony and Computation

I am repeatedly surprised at the capacity of the narrative related in this volume to provoke the
most overwrought reactions: outrage, perfervid denial, disgust, but most frequently, the
desperate question -- Are the cyborg sciences a good thing or a bad thing? Is economics
ascending to the Valhalla of “hard science”, or is it skidding wildly down the slippery slope to
intellectual decrepitude? Do cyborgs herald a bright future of liberation, or are they just the next
phalanx of dull footsoldiers of entrenched power? For a long time, I used to plead agnosticism:
Isn’t that your job to judge, dear reader? Of course, it is not as though I had studiously avoided
all philosophical investigations in prior work. In my previous writings I have tried to explain
that recurrent aspirations to the status of science in the modern world are fundamentally
underdetermined, and fraught with all sorts of historical contingencies and strange ironies which
are rapidly repressed upon the triumph of a research program over its rivals. Appeals to richly
evocative metaphors or vague references to grand schemes of conceptual unification have
played as important a part as bald appropriation of mathematical formalisms or conformity to
putative rules of ‘scientific method’; and the reasons why a particular version of economics
gains adherents and societal support may have little or nothing to do with the immediate
motivations of the thinkers who claim to have been its progenitors. But I find time and again that
it is the more expansive notions of the motive forces of science which languish unheeded. The
historical ‘accident’ of a grand shift in American science funding and organization to military
dominance can have profound repercussions upon the very content of selected sciences for
decades to come, long after their imperatives recede in funding agendas and perceived
importance. Some protagonists whose encounter with economics was only fleeting may
eventually have greater consequence for its future than those who devote their entire lives to it.
Programs which proudly sport impregnable intellectual confidence and self-assurance can
crumble overnight, given the right set of circumstances; other programs can blithely limp along
in conceptual disarray for generations. All of these things and more are illustrated by events
recounted herein.

Nevertheless, the raw emotions encountered on the road have convinced me that a few parting
thoughts on the possible significance of this history are in order, especially given the fact that
this history hits home in the way a narrative safely ensconced a century or more in the past
cannot. What can it mean for our ideas of economic value to have once again – that is, after the
last major shift from substance to field theories of value in the 19th century recounted in
(Mirowski, 1989) – undergone utter transubstantiation, this time from the previous neoclassical
field definition of economics as “the allocation of scarce resources to given ends” to the cyborg
definition of the economy as a giant information processor? What will it mean to have the formal
definition of value in economics undergo further dematerialization and abstraction relative to
the early physicalist and materialist predispositions of classical political economy?

There are some indications that these sorts of questions are already getting an airing in the
popular culture. When the Mitchell Kapors of the world intone, “It’s not a problem of the haves
and have-nots; it is the divide between the knows and know-nots,” they may just be doing
something more than vying for laurels as the emperor’s new drycleaner, although irony cannot
be altogether ruled out. We have not come close to dispensing with manufacturing and raw
materials extraction, at least not yet. People have not left their lumbering meat machines behind
for effortless frolic in cyberspace, nor will they do so anytime soon; but they do presently find
themselves increasingly dealing with novel conceptual images and role models of what it means
to earn a living and generate economic value in our brave new millennium. Since half-interred
conceptions of value tend to lie at the root of what most people believe about justice, politics,
community and responsibility, the impact of the history of science upon the history of economic
value reverberates far beyond the closed community of professional economists. Hot-button
issues like the superiority of market organization, the inviolability of individual freedom, the
legitimacy of affluence, the efficacy of democracy and the grounding of the social in the natural
all cannot escape being transformed by such revisions in the deep structure of valuation. But
there persists much uncertainty about how far down the fault lines run. Is it in fact the purpose of
this volume to intone, “The handmill gives us society with the feudal lord; the steam-mill,
society with the industrial capitalist; the computer, society with the military analyst”?

While the role of the American military is one of the major overlooked factors in the
stabilization of the American economic orthodoxy in the postwar period, and goes a long way to
explain the fin-de-siècle fascination with game theory, one should not draw stark morals from
that fact. It does not follow, for instance, that economics at Cowles harbored inherently sinister
connotations; nor does it follow that the C3I orientation that informs so much of modern
economics is unavoidably inimical to the freedom of citizens of a democratic society. The
American military, as we have tried to stress in this volume, was never monolithic in its goals
and intentions; and the scientists who accepted its largesse were often themselves consciously
at odds with it. Yet, acknowledging the parameters of freedom within which they moved, it did
make a difference that it was the military (especially the Air Force and the ONR), and not, say,
the U.S. Commerce Department, or (after 1945) the Rockefeller Foundation, or even the
Catholic Church, that acted as the executive science manager in charge of economics in the
postwar period. The beneficiaries may protest that they were left alone to do their research as
they wished with little or no interference; but we should take their testimony with the same grain
of salt that we should imbibe when a child of the affluent suburbs testifies that there was no
racial prejudice in postwar America when he was growing up.

Likewise, one should not presume that, because the computer has been so central to the story
related here, there is something pernicious or perverse about the fact that our own
self-understanding has been informed, if not driven, by what is, after all, just a machine. For a
person of a certain temperament, the mere suggestion that human inquiry might be conditioned by
something beyond its narrowly sanctioned subject matter and its nominal objectives is a
symptom of the decline of the West and the breakdown of civilized discipline. Economists
especially like to imagine themselves as rational self-made men (with a few self-made women
thrown in for good measure), transparently conscious and aware of every ‘choice’ they have
ever made in their careers, in intimate control of their every motive; no self-delusions allowed!
The merest whisper of the urgings of a collective unconscious risks toppling them over into a
frenzy of disdain for the ‘soft sciences’. To those imperious souls, I might just interject that there
is nothing demonstrably fallacious about using machines to pursue our own self-understanding,
even if it be in a less than fully self-conscious manner, since resort to metonymy, metaphor,
analogy, and all the other props of reasoning are just what one would expect from limited
cognizers and easily distracted bricoleurs such as ourselves. Far from regarding computer
metaphors as a debased pollution of pure economic thought, I would suggest that they
unpretentiously reprise a set of intellectual practices which have been dominant in Western
social thought ever since its inception. Machines, it seems, are good to think with.

There may be those who feel the appropriate moral to our little tale is to reject science and
machines altogether as inspirations for economics. To them I would put the question: Where
will you turn at this late date for your economics innocent of all scientism? Where will you find
your economic Erewhon bereft of all machines? Scholars can seek inspiration wherever they
will, but as an historian, I think it would be unconscionable not to point out that every school of
economics that has ever mustered even a spare modicum of support and a tiny coterie of
developers in the West has done so by accessing direct inspiration from the natural sciences of
their own era, and in particular, from machines. The problem for those with the courage to
acknowledge this fact becomes rather to understand the specific ways in which lighting upon the
computer instead of the steam engine or the mechanical clock has reconfigured the options for
social theory in the immediate past and the prospective future.

One of the consequences of the philosophical stance sketched in this chapter is that much of the
Methodenstreit over the meaning of the computer will come to a head in the impending battle to
decide where it is that valuation takes place for the purposes of economics: will it be portrayed
as happening within the recesses of the idealized computer situated between the ears of the
representative agent? Or will it be staged as the intermediate output of a population of automata
called ‘markets’ scattered throughout the landscape of diverse and cognitively opaque human
beings? Many will find the former option sweetly resonant with their “humanism”, while the
latter would be so “posthuman” as to be dispiriting and dehumanizing. I would ask these
interlocutors to stop for a minute and reconsider this position. Is it really the case that further
conflation of our somatic selves with machines is the last best hope for a humanist
economics? Haven’t we already caught a glimmer of where that road leads in chapter 7? Once
the computer metaphor goes cognitive, it seems to be as corrosive to the individual human as
anything in the wildest fantasies of postmodernism —selfish genes, memes, downloadable
souls. The deliquescence of the homo economicus is the dirty little secret of neoclassical
economics, one which only becomes more apparent with every subsequent game theoretic
model. Nothing seems poised to prevent the hollowing out of human beings into hulking
mechanical shells: not experimental economics, not evolutionary game theory, not Herb Simon,
not the Santa Fe Institute, nothing.

Perhaps this prospect is just a foregone conclusion in our culture, the terminus of the Information
Age. After all, there is a certain species of scientist who revels in finding meaninglessness in
the cosmos; they hold this truth as self-evident.


                        Appendix to Chapter 8
 
 

                                 Appendix 8.1

               Double Auction and Sealed Bid encoded onto Automata
 
 

In this appendix we show that the algorithm underlying a sealed bid is of lower computational
complexity than the double auction [DA], thus formally illustrating the claims made above in
chapter 8 concerning the relationship of the theory of automata to the work of Gode and Sunder
(1997). This result is obtained by showing that a sealed bid auction can be encoded onto an
automaton of less computational capacity than the DA. The class of machines which we access is
that of pushdown automata with multiple one-way read-only heads that move from left to right
scanning the input tape. We might regard such an automaton as a language acceptor, where a DA
automaton and a sealed bid automaton are constructed so that they only accept the languages
underlying a DA and a sealed bid market, respectively. “Language” in this context just refers to
the characteristics of the algorithm representing a market institution. The language underlying the
sealed bid algorithm (with accumulation rule) is any sequence of bids preceded by an ask (the
minimum ask) where there is a designated bid that is at least as large in magnitude as all other
bids and which is at least as large as the minimum ask. Similarly, we can describe the language
underlying a DA market institution as any sequence of bids and asks (with at least one active bid
and ask) where there is a designated bid and ask. The designated bid is at least as large in
magnitude as the designated ask, the designated bid is at least as large as all other bids, and the
designated ask is at least as small as all other asks. We use the notion of a language to abstract
out the essential computational features of a certain market algorithm. The introduction of this
form of abstraction allows us to more rigorously compare and contrast the essential features of
different market institutions. Moreover, it enables the analyst to categorize markets with respect
to their computational capacity, where the notion of computational capacity is very precisely
defined.

Before we can move to the formal analysis it is necessary to introduce some notions taken from
the theory of computation (see Lewis & Papadimitriou, 1981, p.29). We start with the notion of
an alphabet, which is a finite set of symbols. An obvious example of an alphabet is the Roman
alphabet {a, b,...,z}. We follow the convention of having Σ denote an alphabet. A string over an
alphabet is a finite sequence of symbols from the alphabet. A string may have no symbols, in which
case it is referred to as the empty string and is denoted by e. The set of all strings --including the
empty string-- over an alphabet Σ is denoted by Σ*.

In this appendix we use the alphabet Σ = {I, a, b} to represent orders. The string S represents an
individual order, where S = xI^m for x ∈ {a, b}, and I^m abbreviates that S contains m I's. The length of
S minus 1 gives the size of the submitted order, and |S| = m+1 denotes the length of S. We write
S(i) to emphasize that S is the ith submitted order, and we write S(i)(a) (S(i)(b)) to emphasize that the
ith order is an ask (bid). The string π represents the concatenation of all orders submitted in a
certain trading period; i.e., π = S(1)...S(n). π_i denotes the tail of π, which consists of all orders
following the ith order. S(j) ∈ π_i means that S(j) is an order in π_i, so that by definition of π_i we
have S(i) ∉ π_i. A multiple-headed pushdown automaton scans the input tape on which π is encoded.
The heads are one-way, which means that once a head has scanned up to π_i it can no longer scan any
S(j) for j ≤ i. In addition to the multiple heads, the pushdown automaton possesses a restricted
storage facility in the form of a stack of arbitrary size. The stack is organized according to the
last-in, first-out principle. This means that the symbols stored last onto the stack are deleted first.
More formally, we can describe a pushdown automaton with multiple heads as follows.

Definition.

A k-headed pushdown automaton (henceforth k-PDA) is a sextuple M = (K, Σ, Γ, Δ, s, F),
where K is a finite set of states; Σ is an alphabet (the input symbols); Γ is an alphabet (the stack
symbols); s ∈ K is the initial state; F ⊆ K is the set of final states; and Δ, the transition relation, is a
mapping from K × (Σ ∪ {e})^k × (Γ ∪ {e}) to finite subsets of K × Γ*, where (Σ ∪ {e})^k abbreviates
(Σ ∪ {e}) × ... × (Σ ∪ {e}), the k symbols (possibly the empty string e) read by the k heads.

Let M be a k-PDA, p and q two states in M, u an input symbol, γ, β stack symbols, and
((p, u, e,..., e, β), (q, γ)) ∈ Δ. Then M, whenever it is in state p with β on top of the stack, may
read u from the input tape with head 1, read nothing with the other heads, replace β by γ on top
of the stack, and enter state q. A symbol is deleted whenever it is popped from the stack and a
symbol is stored whenever it is pushed onto the stack. M, for example, pops (deletes) β from the
stack with the transition ((p, u, e,..., e, β), (q, e)) and pushes (stores) β with the transition
((p, u, e,..., e, e), (q, β)). M is said to accept a string π ∈ Σ* if and only if (s, π, e) yields (p, e, e) for
some state p ∈ F and some sequence of transitions. The language accepted by M, denoted L(M), is the
set of all strings accepted by M.
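
For readers who prefer to see the formalism operationalized, here is a minimal sketch of a k-PDA as a data structure together with a bounded, breadth-first nondeterministic acceptance check. The example machine (a 1-PDA accepting a^n b^n) and the step bound are illustrative devices of the sketch, not part of the construction pursued in this appendix.

    from collections import deque

    # Sketch of the k-PDA formalism: a configuration is (state, head positions, stack), and
    # acceptance requires a final state, every head at the end of the tape, and an empty stack,
    # as in the definition above. The example machine and the search bound are illustrative.

    def accepts(pda, tape, max_steps=10_000):
        states, start, finals, delta, k = pda     # delta: list of ((p, reads, pop), (q, push))
        start_cfg = (start, (0,) * k, "")
        seen, queue, steps = {start_cfg}, deque([start_cfg]), 0
        while queue and steps < max_steps:
            state, heads, stack = queue.popleft()
            steps += 1
            if state in finals and stack == "" and all(h == len(tape) for h in heads):
                return True
            for (p, reads, pop), (q, push) in delta:
                if p != state or not stack.endswith(pop):
                    continue
                new_heads, ok = list(heads), True
                for i, sym in enumerate(reads):            # each head reads sym ("" = no move)
                    if sym == "":
                        continue
                    if new_heads[i] < len(tape) and tape[new_heads[i]] == sym:
                        new_heads[i] += 1
                    else:
                        ok = False
                        break
                if not ok:
                    continue
                cfg = (q, tuple(new_heads), stack[:len(stack) - len(pop)] + push)
                if cfg not in seen:
                    seen.add(cfg)
                    queue.append(cfg)
        return False

    # Example: a 1-PDA accepting {a^n b^n : n >= 1}, written in the sextuple style of the text.
    ANBN = ({"s", "t"}, "s", {"t"},
            [(("s", ("a",), ""), ("s", "A")),    # push an A for each a read
             (("s", ("b",), "A"), ("t", "")),    # the first b pops an A and switches state
             (("t", ("b",), "A"), ("t", ""))],   # each further b pops an A
            1)
    print(accepts(ANBN, "aaabbb"), accepts(ANBN, "aabbb"))   # -> True False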
 

With the above notation in hand, we can more explicitly define the languages underlying a DA
market and a sealed bid market institution. Let L(DA) and L(SB) denote the languages underlying
a DA and a sealed bid, respectively. Then we can define L(DA) and L(SB) as follows.

Definition.

L(SB) = {π = S(1)(a)S(2)(b)...S(n)(b) ∈ Σ* : there is a designated bid S(j)(b) such that S(j)(b) ≥ S(m)(b) for every bid S(m)(b) in π, and S(j)(b) ≥ S(1)(a)}.

L(DA) = {π = S(1)...S(n) ∈ Σ* : π contains at least one bid and one ask, and there are a designated bid S(j)(b) and a designated ask S(i)(a) such that S(j)(b) ≥ S(m)(b) for every bid S(m)(b) in π, S(i)(a) ≤ S(m)(a) for every ask S(m)(a) in π, and S(j)(b) ≥ S(i)(a)}.

(Comparisons between orders here are comparisons of their magnitudes, i.e., of the number of I's in their encodings.)

An automaton M that accepts L(SB) has the ability to scan the input tape π for the largest bid,
henceforth denoted by S(j)(b), and to determine whether S(j)(b) ≥ S(1)(a), where only the first order should
be an ask. M should reach a final state only if S(j)(b) is identified and it is verified that S(j)(b) ≥
S(1)(a) holds. Similarly, an automaton M that accepts L(DA) has the ability to scan the input tape
π for the largest bid S(j)(b) and the lowest ask, henceforth denoted by S(i)(a). Moreover, M
checks whether S(j)(b) ≥ S(i)(a) holds. M should reach a final state only if S(j)(b) and S(i)(a) are
identified and it is verified that S(j)(b) ≥ S(i)(a) holds. An essential feature of any machine
accepting either L(DA) or L(SB) is that it can compare the length of a particular string S(j) with
an arbitrary number of strings in π_j. A machine accepting L(DA) requires the additional feature
of doing this for an additional string S(i), and the machine also needs to compare the two designated
strings S(i) and S(j) with each other. In the following we will make use of these observations to
show that a 1-PDA cannot accept either a DA or a sealed bid market, and that neither a 2- nor a
3-PDA can accept a DA market. In the next appendix we show that a 2-PDA and a 4-PDA can accept
the languages underlying a sealed bid and a DA market, respectively. This completes the proof that
the algorithm underlying the DA market is of a higher computational capacity than the algorithm
underlying the sealed bid auction.
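
Before turning to the lemmas, it may help to see the string conventions and the two languages in use. The following sketch parses an input tape π into orders and checks membership in L(SB) and L(DA) with unrestricted memory; it makes no attempt to respect the head and stack limitations analyzed below, and the example tapes are arbitrary.

    # Orders are strings x + "I"*m with x in {"a", "b"}; pi is their concatenation. Membership
    # in L(SB) and L(DA) is checked here with unrestricted memory -- no attempt is made to
    # respect the head and stack limitations analyzed in the lemmas below.

    def parse_orders(pi):
        """Split pi into orders; each order's magnitude is its number of I's."""
        orders = []
        for chunk in pi.replace("a", "|a").replace("b", "|b").split("|"):
            if chunk:
                orders.append((chunk[0], len(chunk) - 1))   # (side, magnitude)
        return orders

    def in_L_SB(pi):
        """One ask first, bids thereafter, and the largest bid at least matches the ask."""
        orders = parse_orders(pi)
        if len(orders) < 2 or orders[0][0] != "a" or any(s != "b" for s, _ in orders[1:]):
            return False
        return max(m for _s, m in orders[1:]) >= orders[0][1]

    def in_L_DA(pi):
        """At least one bid and one ask, and the largest bid at least matches the smallest ask."""
        orders = parse_orders(pi)
        bids = [m for s, m in orders if s == "b"]
        asks = [m for s, m in orders if s == "a"]
        return bool(bids) and bool(asks) and max(bids) >= min(asks)

    print(in_L_SB("aIIIbIIbIIII"))     # ask of 3 followed by bids of 2 and 4 -> True
    print(in_L_DA("bIIaIIIbIIIIaII"))  # largest bid 4 >= smallest ask 2 -> True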
 
 

Lemma I.

Let M denote a k-PDA that needs to compare the length of n ≥ 1 substrings in π_i with |S(i)|, where S(i) ∈ π and k ≤ n. Then M can perform this task only if S(i) is stored onto the stack at some point of the computation.

Proof.

(Proof by contradiction.) Suppose S(i) is never stored onto the stack. Then comparing S(i) with a string S(m) ∈ π_i requires at least two heads: one to scan S(i) and one to scan S(m). Thus comparing S(i) with n substrings of π_i requires at least n+1 heads: n heads are needed to scan S(i) n times, and the first time the length of S(i) is compared with a string in π_i an additional (n+1)th head scans the other string simultaneously. But by assumption only k ≤ n heads are available, a contradiction.

                                                                       QED.

The implication of lemma I is that every k-PDA that successfully performs the task of comparing the length of S(i) with an arbitrary number of substrings of π_i will always have to store S(i) onto the stack. If the machine does not always store S(i) onto the stack, it may have too few heads to perform the task. We can therefore assume henceforth that S(i) is always stored onto the stack.
 
 

Lemma II.

A 1-PDA cannot perform the task of comparing the length of S(i) with an arbitrary number of substrings of π_i.

Proof.

Pushing (storing) S(i) onto the stack enables the machine to compare |S(i)| with the length of a substring of π_i, denoted by S(m). Comparing |S(i)| with |S(m)| requires popping (deleting) parts of S(i) at the same time S(m) is scanned. After S(m) is scanned, only π_m remains, where π_m ⊂ π_i. Thus the machine cannot recover S(i), because it is not a substring of π_m; with a single head there is no way to rebuild S(i) on the stack, so no further comparisons are possible.

                                                                       QED.
 
 

Lemma III.

Let M denote a k-PDA that compares the length of an arbitrary number of substrings in π_i with a string S(i) ∈ π. Then M must have at least two heads to perform this task.

Proof.

It follows from lemma II that a 1-PDA cannot compare the length of S(i) with an arbitrary number of substrings. Thus it suffices to show that a 2-PDA can perform this task. Without loss of generality we can assume that S(i) is pushed onto the stack and that both heads are positioned so that π_(m-1) remains to be read by both heads, where S(m) is the next string whose length M compares with |S(i)|. Furthermore, we can assume without loss of generality that head 1 scans S(m). While S(m) is scanned, symbols of S(i) are popped (deleted) until either head 1 has completed scanning S(m) or S(i) is completely popped (deleted) from the stack. This procedure enables M to determine which string is longer. Next, M restores S(i) by having head 2 scan S(m) and push it onto the stack, after which head 1 scans the remainder of S(m) while one stack symbol is popped for every scanned symbol.

                                                                       QED.

Observe that the procedure described in lemma III is the only procedure that can successfully compare the length of S(i) with that of S(m) using only two heads. Lemma I already establishes that S(i) has to be stored onto the stack to permit an arbitrary number of comparisons with a k-PDA. Furthermore, comparing S(i) and S(m) implies popping S(i) from the stack. Thus any successful strategy --one that uses only two heads-- needs to be able to restore S(i) onto the stack using the head that was not used to scan S(m) the first time. The only possible way to extract this information from π_i is the procedure described in lemma III.
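The bookkeeping in the proof of lemma III may be easier to follow in executable form. The Python sketch below tracks only lengths, which suffices for unary prices: the stack initially holds S(i); head 1 consumes S(m) while one stack symbol is popped per scanned symbol; head 2 then re-pushes S(m); and head 1 consumes the remainder of S(m) while popping, leaving exactly |S(i)| symbols on the stack. The function name and return values are our own illustrative choices, not the author's notation.

    def compare_and_restore(stack_len, s_m_len):
        """Two-head comparison of lemma III on unary strings.

        stack_len is |S(i)|, currently stored on the stack; s_m_len is |S(m)|,
        the next string on the tape.  Returns (verdict, new_stack_len); the
        second component shows that S(i) ends up restored on the stack."""
        # Head 1 scans S(m), and one stack symbol is popped per scanned symbol,
        # until S(m) is exhausted or the stack is empty.
        scanned_by_head1 = min(s_m_len, stack_len)
        stack = stack_len - scanned_by_head1
        if scanned_by_head1 == s_m_len and stack > 0:
            verdict = "S(i) is longer than S(m)"
        elif scanned_by_head1 == s_m_len:
            verdict = "S(i) and S(m) have equal length"
        else:
            verdict = "S(m) is longer than S(i)"
        # Head 2 now scans S(m) from the start, pushing every scanned symbol...
        stack += s_m_len
        # ...after which head 1 scans the remainder of S(m), popping one stack
        # symbol per scanned symbol.
        stack -= (s_m_len - scanned_by_head1)
        assert stack == stack_len       # exactly |S(i)| symbols remain: S(i) restored
        return verdict, stack

    # e.g. compare_and_restore(5, 3) -> ("S(i) is longer than S(m)", 5)
    #      compare_and_restore(2, 6) -> ("S(m) is longer than S(i)", 2)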
 

Lemma IV.

Let M denote a 2-PDA. Then M cannot perform the task of comparing both |S(i)| and |S(j)| with an arbitrary number of strings in π_i and π_j, respectively.

Proof

From lemma III it follows that at least two heads are needed to restore a string onto the stack. This implies that M must compare |S(i)| and |S(j)| with strings in π_i and π_j simultaneously. Furthermore, it follows from lemma I that S(i) and S(j) have to be stored onto the stack to allow an arbitrary number of comparisons with other strings. But comparing |S(i)| and |S(j)| simultaneously implies that at some time during the computation the string stored first --say S(i)-- has to be compared with a string currently read from the input tape, which implies that the string stored last --S(j) in this case-- has to be deleted. After S(j) is deleted it is impossible to restore it onto the stack, because both heads are occupied comparing and restoring S(i).

                                                                       QED.

Proposition I.

Neither the language underlying the sealed bid market nor the language underlying the DA market can be accepted by a 1-PDA.

Proof

From the definitions of L(SB) and L(DA) it follows that any machine M accepting L(SB) or L(DA) must have the capacity to compare the length of a string S(i) with an arbitrary number of strings in π_i. It follows directly from lemma II that a 1-PDA cannot perform this task.

                                                                       QED.
 
 

Proposition II.

The language underlying a DA market cannot be accepted by a k-PDA for k ≤ 3.

Proof

Let M be a 3-PDA that accepts L(DA). From lemma III it follows that at least two heads are needed to compare |S(i)| with the length of an arbitrary number of strings in π_i. This means that M cannot first scan π for S(i)(a) (the lowest ask) and then scan π for S(j)(b) (the highest bid), because this would require at least four heads. Thus M has to scan π for S(i)(a) and S(j)(b) simultaneously. It follows from lemma I that the intermediate values of both S(i)(a) and S(j)(b) have to be stored onto the stack. Furthermore, it follows from lemma IV that two heads are not sufficient to scan π for S(i)(a) and S(j)(b) and store the intermediate values. Therefore we can assume that all three heads are occupied discovering S(i)(a) and S(j)(b). Once M completes this task, both S(i)(a) and S(j)(b) are stored onto the stack. But now M cannot check whether |S(i)(a)| ≤ |S(j)(b)|, since one of the two strings lies buried beneath the other on the stack and the one-way heads have already passed both strings on the tape, so that no 3-PDA accepts L(DA).

                                                                       QED.
 

                                 Appendix 8.2

                     Sealed Bid Auction with accumulation rule

In this appendix we give the pseudo code for a 2-PDA that accepts a sealed bid auction (with accumulation rule). For simplicity we assume that the input string π already has the correct format; that is, the first order is the minimum ask and all following orders are bids, each order having the format S = x1^m for x ∈ {a, b}.

Step [I]. Push the minimum ask S(1)(a) onto the stack, using head I.

Step [II]. Compare the stored ask with the next submitted bid, using head I.

        If ask > bid, then recover the ask by adding the bid to the remainder of the ask, using head II.

        Otherwise, push bid onto stack, using head II.

Step [III]. Repeat step [II] until ask ≤ bid or the end of the input tape is reached. When the latter happens
the automaton terminates without reaching a final state.

Step [IV]. Compare stored bid with bid currently scanned by head I.

        If stored bid ≥ scanned bid, recover the stored bid by adding the scanned bid to the remainder of the
        stored bid, using head II.

        Otherwise, pop remainder of stored bid and push currently scanned bid onto stack,
        using head II.

Step [V]. Repeat step IV until the end of the input tape is reached, after which the machine
reaches a final state.

Constructing a 4-PDA that accepts a DA is not fundamentally different from the above machine. Such a machine first determines the lowest ask using an approach similar to the one above. Next, the above machine is used as a subroutine to determine the maximum bid and to check whether it is at least the minimum ask.
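As a cross-check on the pseudo code, the following Python routine simulates steps [I] to [V] on a unary-encoded input string, tracking only the length of the stack, which is all that matters for unary prices. The function name, the parsing helper, and the length-only bookkeeping are our own simplifications of the machine described above, not a transcription of it.

    import re

    def sealed_bid_2pda(pi):
        """Simulate the 2-PDA of Appendix 8.2 on a unary-encoded input such as
        'a111b11b1111' (one ask followed by bids); only the stack length is
        tracked.  Returns True exactly when the machine reaches its final state."""
        orders = [(m.group(1), len(m.group(2)))
                  for m in re.finditer(r"([ab])(1+)", pi)]
        if not orders or orders[0][0] != "a" or any(s != "b" for s, _ in orders[1:]):
            return False                 # input not in the assumed format

        ask = orders[0][1]
        bids = [p for _, p in orders[1:]]
        stack = ask                      # Step [I]: push the minimum ask (head I)

        # Steps [II]-[III]: scan bids with head I until some bid >= ask
        i = 0
        while i < len(bids) and ask > bids[i]:
            stack = ask                  # net effect when ask > bid: head II restores the ask
            i += 1
        if i == len(bids):
            return False                 # end of tape reached: no final state

        stack = bids[i]                  # otherwise: the bid is pushed (head II)
        i += 1

        # Steps [IV]-[V]: keep the running maximum bid on the stack
        while i < len(bids):
            if stack < bids[i]:
                stack = bids[i]          # pop remainder of stored bid, push scanned bid
            # else: head II restores the stored bid, leaving the stack unchanged
            i += 1
        return True                      # end of tape: final state reached

    # Examples (hypothetical inputs):
    #   sealed_bid_2pda("a111b11b1111")  -> True   (best bid 4 >= ask 3)
    #   sealed_bid_2pda("a111b1b11")     -> False  (no bid reaches the ask)

The 4-PDA for the DA market described in the paragraph above would precede this routine with an analogous two-head pass that isolates the minimum ask before the bids are processed.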

                                    Envoi
 

    Try to imagine the virtual worlds that will be made possible by the power of a shared
    parallel computer. Imagine a world that has the complexity and subtlety of an aircraft
    simulation, the accessibility of a video game, the economic importance of the stock market,
    and the sensory richness of the flight simulator, all of this with the vividness of
    computer-generated Hollywood special effects. This may be the kind of world in which
    your children meet their friends and earn their living... Whatever you imagine virtual
    worlds will be like, or whatever I imagine, is likely to be wrong. (Hillis, 1992, p.14)
 

    Welcome my son

    Welcome to the machine.

    What did you dream?

    That’s all right – we told you what to dream.
 
 

    Pink Floyd, "Welcome to the Machine"

    Wish You Were Here


