Critical reflections on Elster's arguments about the impossibility of rational collection of information: Infinite regress and problems of estimation
By Hans O. Melberg (draft version, not for citation, comments are welcome e-mail: hansom@online.no) Oslo, January 1999
Preliminary note: The first half of this paper is reasonably well structured, the second half is more "thinking aloud." Some of the arguments are elaborated in previous papers available at www.oocities.org/hmelberg/papers/papers.htm
Introduction
How much information should you collect before making a decision? The short answer is that you should go on collecting information as long as the expected value of spending more time gathering information is larger than the expected costs of doing so. But how do you know the expected value and cost of spending more time collecting information? In several articles and books Jon Elster argues that this question represents a serious problem for rational choice theory (see Table 1). The purpose of this essay is to discuss his views.
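The stopping rule in the short answer can be written down as a simple procedure. The following sketch only illustrates the rule itself; the functional form and the numbers for the value and cost of information are assumptions of mine, not anything Elster specifies:

    # Sketch of the textbook stopping rule: keep collecting information as long
    # as the expected value of one more unit exceeds its expected cost.
    # The functional form and the numbers are illustrative assumptions.

    def expected_value_of_next_unit(n_collected):
        # Assume diminishing returns: each extra unit of information is worth less.
        return 100 / (n_collected + 1)

    COST_PER_UNIT = 20  # assumed constant cost of collecting one more unit

    n = 0
    while expected_value_of_next_unit(n) > COST_PER_UNIT:
        n += 1          # "collect" one more unit of information

    print("Stop after collecting", n, "units of information")
    # Stops at n = 4: the fifth unit would be worth 100/5 = 20, which no longer
    # exceeds its cost.

The problem Elster points to is, of course, that the value function in such a sketch is assumed to be known, which is precisely what the agent typically does not know.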
Arguments
My tentative arguments are as follows: first, the infinite regress argument does not show that it is logically impossible to collect an optimal amount of information; second, the problems of estimation are real, but they are empirical problems and they do not support Elster's suggestion that coin-tossing or maximin is no less rational than acting on weak subjective beliefs; third, the two lines of argument are at least in tension with each other.
What are Elster's arguments?
To enable the reader to follow the discussion, I have made the following table, which summarizes Elster's writings on the impossibility of collecting an optimal amount of information.
Table 1: An overview of Elster's discussions of the problem of collecting information
Reflecting on the quotations
First of all, the table indicates that there has been a shift in Elster's emphasis. From 1978 to 1983 the argument against the possibility of collecting an optimal amount of information was based on "the infinite regress argument." After the important article from 1985, the focus turned to the empirical problems of estimating the value of information when we are in novel situations and when the environment is fast-changing. I shall label the first argument "the infinite regress problem" and the second "problems of estimation." Both arguments are used by Elster to argue that it is impossible to collect an optimal amount of information.
Briefly on the infinite regress argument
The infinite regress argument is clearly inspired by S.G. Winter. The key quotation is: the "choice of a profit maximizing information structure itself requires information, and it is not apparent how the aspiring profit maximizer acquires this information, or what guarantees that he does not pay an excessive price for it" (Winter 1964, quoted from Elster 1983, p. 139-140, my emphasis; the full quotation is in the footnotes). Three things should be noted about this quotation. First, Winter does not use the term "infinite regress" in the quotation. In fact, in Winter's article from 1975 the problem is said to be "self-reference", not "infinite regress." Second, the focus is on how somebody can acquire information about the value of more information. The term "not apparent" indicates some reservation about whether the argument is purely logical (it is impossible) or empirical (it is not apparent). Third, there is something odd about the last sentence ("what guarantees that he does not pay an excessive price for it"). It seems to me that rationality does not demand that we never pay more than the true value of something. The question is whether we were justified in believing that the information was worth the costs when the decision was made. It may turn out that the information was less valuable than we believed, but - as Elster argues in Sour Grapes (p. 15-19) - it is possible to make rational mistakes.

The contrast between the infinite regress argument and the estimation problem
There is, at the very least, a tension between Elster's argument on the infinite regress problem and the estimation problem. When discussing the estimation problem, Elster admits that it is sometimes possible to choose what approximates the optimal amount of information (see, e.g., 1985, p. 70). This is a problem because it is inconsistent to argue that it is logically impossible to collect an optimal amount of information and at the same time argue that the problem is sometimes solved empirically. To what extent is this a problem in Elster's writings? First of all, I am hesitant about using the label "contradiction", because Elster himself does not explicitly write that the problem of infinite regress represents a "logical problem" in the theory of optimization. On the other hand, consider the following quotations:

"The demand for an optimal amount of evidence immediately leads to an infinite regress." (SG, 1983, p. 18)

"firms cannot optimise ex ante, since they do not have and cannot get the information that would be required. Specifically, they would need an optimal amount information, but this leads to a new optimisation problem and hence into an infinite regress." (Crisis, 1983 article, my emphasis)

"One of his [S.G. Winter] contributions is of particular interest and importance: the demonstration that the neoclassical notion of maximizing involves an infinite regress and should be replaced by that of satisficing." (ETC, 1983, p. 139)

As for the strength of these arguments, Elster writes that the argument "appears to me unassailable" (ETC, 1983, p. 139). He also thinks that S.G. Winter has provided a sketch of an "impossibility theorem" (US, p. 59) and that the infinite regress problem represents an argument "against the very possibility of planned profit-maximizing" (1982, p. 112, my emphasis). The tendency of the argument seems to be that it is logically impossible to choose an optimal amount of information. To the extent this is true, Elster's early argument is in tension with his later emphasis on the empirical nature of the problem, i.e. that it is usually impossible to form a reliable estimate of the value of information.
What is the infinite regress argument?
Imagine that you have to make a choice between acting now and collecting more information. The situation could be visualized as in Figure 1. At time 0 you want to make a "rational" decision about what to do. Rational is here (preliminarily) defined as choosing the option that maximizes your expected utility.

Figure 1: A decision tree starting at time 0. At each node the agent can either act or collect more information, and each "collect" branch leads to the same choice (act or collect) at the next node.
In theory this branching could go on forever, but in practice there is little reason to expect an infinite regress problem in the collection of information. The main reason for the infinite regress problem (in this figure) would be that it is impossible to calculate the expected value of collecting more information at time 0 since the possibility of collecting more information goes on forever. Yet, many decisions simply cannot be postponed forever since they are time-limited. In the words of Holly Smith (1987), many decisions are non-deferrable. In fact, as she also notes, all decisions are non-deferrable since all humans eventually die. As long as this is the case, it seems rational to me simply to start at time 0 and base your choice of whether to collect more information on your beliefs about the net value of collecting more information. We avoid the infinite regress since it would not be rational to include the options after your death (or after the time limit) in the calculation. Second, even if the decision could be postponed forever, the benefits of collecting more information might decrease, and as such the problem has a solution in the limit.

Of course, the real question is not only whether the problem has a solution in the limit, but whether it is possible for the agent to know this and the precise trade-off that enables him to make the rational choice about whether to collect more information or not. It seems to me that the answer is simply to base your decision on the best possible beliefs about the value of more information at time 0. Should I collect more information? Yes, if my beliefs (based on all my past experiences up to t=0) tell me that more information has a higher expected utility than acting now. Is it logically possible to estimate the expected net value of more information? Yes, but there may be large practical problems - as Elster points out. In theory there is no problem: you simply estimate the value of information based on historical experience. Of course, this is easier said than done, but sometimes you may compare with similar situations in the past (the classical view of probability), sometimes you may use a theory developed from past data to predict the value of more information, and, finally, some would argue that it is rational to base your decision simply on your subjective beliefs regarding the value of more information.

So, if the visualization in Figure 1 is correct, the only way the infinite regress in the collection of information could get off the ground would be if we had an agent who is immortal (or acting on behalf of something which is immortal), the decision could be postponed forever, and the value of information did not eventually converge. Problems based on these assumptions do not appear very significant in the real world.
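The point about non-deferrable decisions can be made concrete with a small backward-induction sketch. The deadline, the cost per period of search, and the assumed value of deciding with more information are all illustrative assumptions of mine, not Elster's or Smith's; the only point is that a finite deadline makes the recursion bottom out, so no infinite regress arises:

    # Backward induction over a finite horizon: at each period the agent either
    # acts now or collects one more period of information. The hard deadline T
    # makes the recursion terminate.
    # All functional forms and numbers are illustrative assumptions.

    T = 5        # deadline: the decision cannot be deferred past period T
    COST = 3     # cost of one more period of information gathering

    def value_of_acting(info):
        # Assumed expected value of deciding with `info` units of information
        # (diminishing returns).
        return 50 - 30 / (info + 1)

    def best(t, info):
        """Return (optimal expected value, optimal action) at period t."""
        act_now = value_of_acting(info)
        if t == T:                                  # at the deadline, acting is the only option
            return act_now, "act"
        collect = best(t + 1, info + 1)[0] - COST   # value of deferring one period
        return max((act_now, "act"), (collect, "collect"))

    value, action = best(0, 0)
    print("At t=0 the best choice is to", action, "with expected value", round(value, 2))

With these particular numbers the agent collects information for a couple of periods and then acts, but the interesting feature is simply that the calculation is finite.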
An alternative visualization of the problem is presented in Figure 2. [This interpretation is inspired by, but not equal to, Lipman (1991).] Here the problem at t=0 is that the set of possible actions is infinite. One option is to act right away; another is to collect information; a third option is to collect information about how much information you should collect. Fourth, you may collect information about how much information to collect before you decide how much information you are going to collect. And so we could go on forever.

Figure 2: Possible alternatives at time 0:
1. Act
2. Collect information
3. Collect information about how much information you should collect
4. Collect information about how much information you should collect in order to decide how much information to collect
... and so on forever
If the problem is visualized in this way, it is less obvious that the non-deferrability of decisions can solve the problem. Among all the feasible alternatives at t=0 we want to choose the one that has the highest expected utility. If the set of feasible actions is not well defined (i.e. it is infinite), then we do not know for certain whether some alternative "far down" would be of higher expected value. Several comments are possible.

First, given the limitations of the human mind it is simply not feasible to go very deep. Most people are not able to go beyond three or at most four levels. Even experts in strategic thinking cannot go further than about seven. (I need to think more about this, because information about information about information may not be comparable to "I know that you know that I know.") Consider, for instance, the case of buying a house. Most people want to collect information about the house. Some also collect information about what kind of information (and how much?) they should collect (e.g. books about how to collect information before buying a house). We could easily imagine information about this information, e.g. a magazine that reviews several books about how to collect information (but can we find information about how many books to read before determining how much information to collect?). This is three levels deep and it is still not too difficult to imagine. Maybe information about information about information is easier to imagine than "I know that he knows that I know"? On the other hand, we seldom find that the regress goes beyond three or four levels (empirically speaking). This might indicate either that this information is not so valuable, or that it is difficult to utilize it given our inherent cognitive limitations (which in turn makes the information less valuable). In any case, being a perfectly calculating smart robot with no limitations is not the definition of rationality used in this paper. Rationality is doing the best we can within the set of feasible options.

Second, the same argument that applied to the first visualization (Figure 1) applies here. That is, if the decision is non-deferrable, the set of alternatives is constrained. True, one could always choose to collect information at some very deep level at time 0, but as long as we know that time is limited the value of doing so is zero, since after collecting this information we have to go through all the other levels before we finally make a decision. That is, after collecting information about how much information to collect we still have to go out and collect the information (though the deeper level might tell us to collect zero at the more immediate level). Since this process is time-consuming, time constraints limit the depth of the feasible set that needs to be considered.
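The time-constraint point can also be illustrated with a small sketch. The deadline, the time cost per level and the diminishing value of deeper levels are my own illustrative assumptions: once every level of "information about information" takes time and the decision has a deadline, only the first few levels are feasible, and the choice among them is an ordinary finite choice:

    # Sketch: with a deadline, only finitely many levels of "information about
    # information" are feasible, so the choice of how deep to go is finite.
    # All numbers are illustrative assumptions.

    DEADLINE = 10        # time available before the decision must be made
    TIME_PER_LEVEL = 3   # assumed time needed to work through one level

    def net_value(depth):
        # Assumed net expected value of deliberating `depth` levels deep:
        # noticeable gains from the first levels, little from deeper ones.
        gross = 10 * (1 - 0.5 ** depth)
        return gross - 2 * depth    # assumed effort cost per level

    feasible_depths = list(range(DEADLINE // TIME_PER_LEVEL + 1))
    best_depth = max(feasible_depths, key=net_value)
    print("Feasible depths:", feasible_depths)   # [0, 1, 2, 3]
    print("Best feasible depth:", best_depth)    # 2: deeper levels cost more than they add

Nothing in the sketch rescues an agent who has no idea what the deeper levels are worth; it only shows that the set of options to be considered is finite.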
Let me try a third interpretation of the infinite regress argument, which is closer to Elster and Winter's arguments. The problem is presented as follows by Elster in his 1982 article: In order to maximize, say, profits, you need information. As information is costly, it would be inefficient to gather all the potentially available information. One should rather settle for the optimal amount of information. But this means that the original maximization problem has not been solved, only replaced by a new one, that immediately raises the same problem (p. 112). Visualized, the argument may look like this:

Figure 3:
1. Collect an optimal amount of information (the first maximization problem)
2. To do (1) we must first collect information about how much information it would be optimal to collect (the second maximization problem)
3. To do (2) we have to collect an optimal amount of information about how much information we should collect before we decide how much information to collect (the third maximization problem)
... and so on forever
Since the chain goes on forever, the argument is that it has no rational solution. But is it really true that we have to collect information before we decide how much information to collect? Is this not to demand that the agent should always know something that he does not know? (Maybe this is precisely the point: Elster and Winter believe rational choice theory cannot yield a determinate solution because it demands the impossible.) Imagine the following reply: at any point in time you simply have to base your decision on what you know at that time. This includes the decision about how much information to gather. Previous experience in making decisions and gathering information may give you some basis for estimating how much information to collect (or it may not, but that is an empirical question). In any case, the rational decision is simply to choose the alternative - act, or collect more information at whatever level - that gives you the highest expected utility given your beliefs at time zero. I fail to see how Elster and Winter's argument makes this choice impossible.
Sub-conclusion
I have so far tried to understand Elster's argument that it is impossible to collect an optimal amount of information because of the infinite regress problem. My conclusion is that the argument fails if it is meant to point to a significant problem in rational choice theory. Empirically, the conditions under which it may arise are very restrictive, and I do not think it constitutes a logical proof against the very possibility of choosing an optimal amount of information. I want to note, however, that I have only discussed this problem in the context of how much information to gather. There are at least two other categories of infinite regress problems that I have not discussed: first, deciding how to decide; and, second, forming beliefs using a fixed set of information (for instance, the problems involved in reasoning like "I know that you know that I know ..."). There may be significant problems here for rational choice theory, but that was not the topic of the section above. Assuming these two problems have been solved, I asked whether there was a problem of infinite regress in the collection of information, and my conclusion was negative.
Elster on the problems of estimation
The argument that there is no logical infinite regress problem that makes it impossible to collect an optimal amount of information does not imply that collecting an optimal amount of information is empirically possible. For instance, when we are in a unique situation we cannot determine the value of information from historical experience of similar situations, and hence there is (on the classical view of probability) no rational basis for estimating the value of information. I have labeled these problems the estimation problem, and I have characterized this as Elster's second main argument against the possibility of collecting an optimal amount of information. The purpose of this section is to examine this view more closely.

What is the argument? The following two quotations should illustrate Elster's arguments:

Deciding how much evidence to collect can be tricky. If the situation is highly stereotyped, as medical diagnosis is, we know pretty well the costs and benefits of additional information. In situations that are unique, novel and urgent, like fighting a battle or helping the victim in a car accident, both costs and benefits are highly uncertain (Nuts and Bolts, p. 35, my emphasis)

In many everyday decisions, however, not to speak of military or business decisions, a combination of factors conspire to pull the lower and upper bounds [on how much information it would be rational to collect] apart from one another. The situation is novel, so that past experience is of limited help. It is changing rapidly, so that information runs the risk of becoming obsolete. If the decision is urgent and important, one may expect both the benefits and the opportunity costs of information-collecting to be high, but this is not to say that one can estimate the relevant marginal equalities. (Elster 1985, p. 70)
Elster here lists some factors that make it difficult to estimate reliable probabilities: uniqueness, rapidly changing environments, novel situations, important and urgent decisions. There is no reason to disagree with the view that we are often very uncertain about, say, the expected net value of more information. There are, however, several reasons to question the implications Elster draws from this and the internal consistency of the argument.

Is it irrational to base your decisions on weak beliefs? Elster writes:

In my ignorance about the first decimal - whether my life will go better as a lawyer or as a forester - I look to the second decimal. Perhaps I opt for law school because that will make it easier for me to visit my parents. This way of deciding is as good as any - but it is not one that can be underwritten by rational choice as superior to, say, just tossing a coin. (SJ, p. 10)

I think the argument is weak. Assume you have to choose between the following two alternatives:

A: 10 000 USD with an estimated probability of 50.1% (and 0 with probability 49.9%)
B: 10 000 USD with an estimated probability of 49.9% (and 0 with probability 50.1%)

It seems to me that I would choose A even if I knew that the variance of my estimated probability was high. True, I have no strong preference between the alternatives, but why toss coins as long as I have an option that gives a higher expected payoff? Elster might reply that this choice is an example of hyperrationality ("defined as the failure to recognize the failure of rational choice theory to yield unique prescriptions or predictions," SJ, p. 17). I agree that it would be irrational to spend much time and money trying to estimate the second decimal if we were ignorant about the first in the case above, but that is not the question. The point is this: for a given set of information, I believe it is rational to act on the beliefs, however weak they are. Why? True, the difference is small in the isolated example above, but in the real world we encounter many choices in which we may rely on probabilities of varying reliability. Sometimes we are very uncertain, sometimes we are more certain. Let us compare the following two rules for choosing what to do:

A: If our beliefs are very weak, we might as well toss a coin to decide; if the beliefs are reliable, we should choose the alternative with the highest expected utility (Elster's strategy)
B: Choose the action with the highest expected utility in situations with weak as well as strong beliefs (the Bayesian strategy)

First of all, the fact that we have to make many choices means that the many small differences become large in the aggregate. As a Bayesian says in response to why we should choose B: "... life is full of uncertainties - in a given week, you may buy insurance, bet on a football game, make a guess on an exam question, and so forth. As you add up the uncertainties of the events, the law of large numbers come into play, and the expected value determine your long-run gains" (Gelman 1998, p. 168). Not only do we have to make many choices; some of them are very important. Elster argues that important decisions often are clouded by uncertainty, i.e. that we are then in a situation in which coin-tossing is as good as any other method of choosing what to do. But the larger the payoff, the more significant a small difference in probability becomes.
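The aggregation point can be illustrated with a back-of-the-envelope calculation using the numbers from the example above; the number of decisions is an arbitrary assumption of mine:

    # Sketch: how a tiny edge in estimated probability adds up over many decisions.
    # The prize and the probabilities come from the example in the text; the
    # number of decisions is an arbitrary assumption.

    PRIZE = 10_000       # USD at stake in each decision
    P_BEST = 0.501       # estimated success probability of the better option
    P_COIN = 0.500       # effective success probability when tossing a coin
    N_DECISIONS = 1_000  # decisions of roughly this kind faced over time

    edge_per_decision = (P_BEST - P_COIN) * PRIZE
    total_edge = edge_per_decision * N_DECISIONS
    print("Expected edge per decision:", round(edge_per_decision, 2), "USD")   # 10.0 USD
    print("Expected edge over all decisions:", round(total_edge), "USD")       # 10000 USD

By the law of large numbers the realized average gain will tend towards this expected edge, which is Gelman's point about long-run gains.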
The point about large payoffs seems to reveal a tension in Elster's argument: he argues that both factors (weak probabilities and large payoffs) pull in the direction of coin-tossing, but it seems to me that the factors (at least in my example) pull in opposite directions. Another problem with Elster's decision rule is that before we make a decision we have to determine whether the situation is one of "enough" certainty to choose the action that maximizes expected utility, or whether we are so uncertain that we should randomize (or do something else, like use maximin). Where is the limit, and is it not costly to examine the circumstances in this way every time we have to make a decision? Of course, we could go all the way and say that our knowledge is always so weak that we should always toss coins. In this way we could avoid the choice problem that arises when using Elster's strategy. Sometimes Elster is attracted to this argument, but at other times he seems to want to "have the cake and eat it". For instance, he is sympathetic to Descartes when he claims that our knowledge is limited in a way that can be compared to being lost in the forest. Yet, when discussing child custody after a divorce he does not want to go all the way and argue that it might as well always be decided by randomization. In some "obvious" cases the court should not toss a coin. But then the court first has to examine whether the case is obvious, and this process is costly in the same way (though maybe not to the same extent) that a trial about child custody would be. In short, either decision rule A has a problem in terms of deciding when to toss a coin, or one has to believe that we are so lost that we might as well always toss coins.

What are weak beliefs? Elster argues that uniqueness, novelty and fast-changing environments are problematic for expected utility theory because we cannot use previous experience of similar situations to estimate the relevant probabilities. One possible counterargument is that this view is heavily dependent on the classical view of probability as relative frequency. One alternative is the Bayesian view that probability reflects subjective uncertainty. On this view it is fully possible to speak about probability in a unique situation. In this sub-section I want to investigate this more closely, and I start with Elster's own arguments.

The main source of Elster's views on probability is the appendix to the book Explaining Technical Change (esp. pp. 196-203). In addition, there is a very important sub-chapter on subjective probability in Ulysses and the Sirens (pp. 128-133). In ETC Elster argues that "it is possible to distinguish between different kinds of sources for probabilistic knowledge: objective frequencies, theoretical calculation, and subjective calibrations" (ETC, p. 195). Objective frequencies are calculated from past examples of similar events. Theoretical probabilities are calculated using theories (e.g. weather forecasts). Subjective probability is simply the degree to which a person is uncertain about a statement (when asked to state his uncertainty in terms of probability). Elster then asks "whether these sources are equally reliable" (ETC, p. 195). First of all, it is interesting to note that Elster includes a third alternative, in addition to the usual objective vs. subjective distinction (see e.g. Iversen 199?). In fact, the inclusion of theoretical probabilities may undermine his arguments about the impossibility of forming estimates in novel and unique situations.
Even if we cannot derive the probability from similar cases, we might be able to use theory to derive a reliable probability of the value of information. More interesting, however, are Elster's arguments against subjective probability. The arguments are (rather crudely) summarized in the following list:

1. Subjective probabilities are unreliable because our beliefs are distorted by hot and cold cognitive biases (wishful thinking, self-deception and so on).
2. Subjective probabilities are not intersubjectively valid.
3. Subjective probability denies the existence of genuine uncertainty, i.e. situations in which we cannot assign probabilities at all.
On the first, it is possible to argue that this is a question of degree. Moreover, it applies to the use of relative frequencies as well, since there is no neutral criterion for what makes cases "similar" enough to be compared. There might also be a practical problem here. Since Elster argues that we sometimes have reliable probabilities, it follows that we have to decide whether to use maximin or to maximize expected utility. If the argument against the use of expected utility is that we tend to deceive ourselves so we cannot rely on our subjective probabilities, then one might also suspect that the agent deceives himself when making the choice about which procedure to use. To say that we sometimes should use maximin because we are biased is not very helpful if the same bias makes us exaggerate the reliability of the probabilities so that we will not choose maximin. This is another instance of the problem already mentioned: it arises as long as one does not go all the way and say that we should always use the maximin strategy. There is also a small problem in saying that a person knows that his or her own beliefs are biased: if you knew this, you would not hold that belief! Of course, one might argue that the interpretation should be more general: I know that in general my beliefs are biased, but I do not know exactly which beliefs are biased. Yet, even if we agree with this, it does not follow that maximin is the best strategy. For instance, I might leave the decision to somebody who is more neutral than I am; I may try to adjust my optimistic beliefs (correcting for the general knowledge that I am biased); or I may use some method of precommitment to prevent bias (such as hiring an independent consultant to evaluate the question). In sum, I do not know the degree to which cold and hot cognitive biases distort our beliefs. I tend to agree with Elster that these mechanisms are important. However, it does not follow, I think, that we should use maximin and/or rely only on objective probabilities. Maximin may be subject to the same bias; even if it is not, there are other alternatives to maximin that may be better (I do not know how true this is); and - finally - objective probabilities are also subjective in the sense that there is no neutral criterion for "similarity."

The last comment also applies to Elster's second argument, that subjective probabilities are not intersubjectively valid. In addition to the response that objective probabilities are also "subjective", I think there is a tension between Elster's insistence on the subjective nature of rationality and the objective nature of probability. Consider the following quotation:

It is not possible, however, to give general optimality criteria for the gathering of information. One frequently made proposal - to collect information up to the point where the expected value of more evidence equals marginal cost of collecting it - fails because it does not respect the subjective character of rational choice (RC, p. 14, my emphasis)

The argument here is that an outside observer might be able to assess the value of information, but this does not help the person who tries to act rationally as long as he cannot estimate that value himself. The information has to be available to the person who is making the decision. This is true, but it also suggests that probability is an inherently subjective notion. Different persons have different information, and as such it is possible that they both rationally estimate probabilities that differ.
To demand that probabilities be intersubjectively valid (if one by this means that everybody should arrive at the same estimate) is to impose an objective standard on something that is inherently subjective. [On reflection I am not sure that this is what Elster means by the phrase "intersubjectively valid."]

I now move on to the third argument, that subjective probability denies genuine uncertainty. That is, is it always possible to translate my uncertainty into probability statements about the world that can be acted upon? For instance, Knight (1921) made the well-known distinction between risk (measurable probabilities) and uncertainty (situations in which we cannot even calculate the probabilities). Is this a weakness in the Bayesian approach? A naive Bayesian might argue that the problem could easily be solved by saying that total ignorance can simply be translated into the statement that "all outcomes are equally likely to happen." Unfortunately this is both conceptually and logically problematic. Conceptually, as Iversen (1984, p. 61) admits, "saying that each value is equally likely is to say something about the parameter and represents one step up from complete ignorance." As for the logical problem, imagine that you have to guess the value of X, and all you know is that X is somewhere between (and including) 0 and 5 (the example is from Iversen 1984, p. 61). If you assume that complete ignorance means that all outcomes between 0 and 5 are equally likely, then the probability that X is less than 2.5 is 0.5: P(X < 2.5) = 0.5. But if you are ignorant about the value of X, you are also ignorant about the value of X^2. The possible range of X^2 is from 0 to 25 (since X goes from 0 to 5). This means that the probability that X^2 is less than 12.5 should also be 0.5 (being ignorant about the value of X^2 we simply say that all outcomes between 0 and 25 are equally likely). In other words: P(X^2 < 12.5) = 0.5. By taking the square root of both sides of the inequality, we get: P(X < 3.54) = 0.5. But this clearly contradicts the first statement that P(X < 2.5) = 0.5. I am not sure how to respond to this problem. It certainly shows that complete ignorance is not the same as a uniform probability distribution. It does not show, however, that complete ignorance is something that really exists.
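The contradiction can also be checked numerically. The following sketch simply restates Iversen's example by simulation: a uniform distribution for X on the interval from 0 to 5 implies that the probability of X^2 being below 12.5 is about 0.71, not the 0.5 that a uniform distribution for X^2 on the interval from 0 to 25 would give:

    # Numerical restatement of Iversen's example: a uniform distribution for X
    # on [0, 5] is not the same state of "ignorance" as a uniform distribution
    # for X^2 on [0, 25].
    import random

    random.seed(1)
    N = 1_000_000
    draws = [random.uniform(0, 5) for _ in range(N)]

    p_x_below = sum(x < 2.5 for x in draws) / N          # about 0.50 under uniform X
    p_x2_below = sum(x * x < 12.5 for x in draws) / N    # about 0.71, not 0.50

    print("P(X < 2.5)    =", round(p_x_below, 3))
    print("P(X^2 < 12.5) =", round(p_x2_below, 3))
    # Under a uniform distribution for X^2 the second probability would be 0.50,
    # so the two ways of expressing "complete ignorance" contradict each other.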
Further comments
I have three rather ad hoc comments towards the end. First, I do not see how Elster can argue that some people have good judgment (see e.g. SG, p. 16, ETC, p. 87) and at the same time argue that we live in a world in which most things are so uncertain that it often is a waste of time to seek information (it would only give information about the "second decimal", as he writes). If some people are able to form reliable probabilities (and this is not by luck), then we must admit that rational choice is not indeterminate in principle. In practice, of course, many people may be so strongly affected by bias that they would do better not to spend too many resources on information. But this is a much weaker objection against rational choice theory. Thus, the second comment concerns the correctness of the label "indeterminacy" in rational choice theory. It seems to me that Elster does not show that rational choice theory is indeterminate, only that it may be invalid as a true description of how people actually act. Finally, I wonder whether Elster tends to mix (but not confuse!) two issues. The first, which I agree with, is that there is a tendency for people to spend too many resources trying to make the best possible decision (collecting information and so on). The second is whether it is rational to act on weak beliefs, or whether it would be no less rational to toss a coin. The two points are both evident in the following quotation:

I shall argue that the notion of subjective probability is less useful for a theory of rational decision-making than is argued in the Bayesian literature, and that it is often more rational to admit ignorance than to strive for a numerical quasi-precision in the measurement of belief. (US, p. 128)

To make things clearer:

A: It is irrational to use resources to collect information about the second decimal of a probability.
B: It is no less rational to toss coins or use maximin strategies when we face great uncertainty than to base the decision on maximization of expected utility using subjective beliefs.

I agree with A, but not with B.
Conclusion
Rational choice theory can be attacked for many reasons. However, after reviewing Elster's arguments I do not think it is a significant objection to claim that rational decisions are logically impossible because of an infinite regress in the collection of information. As for the problems of estimation, I agree that these are significant, but they do not prove the impossibility of making a rational choice, and I am uncertain about the implications Elster draws from them (i.e. his recommendation of randomization and maximin). Finally, I believe some of Elster's arguments on the issue are, if not contradictory, then at least "in tension" with each other.
References
(All the references to Elster are available from the Jon Elster Page at www.oocities.org/hmelberg/elster.htm)

Gelman, Andrew (1998): "Some Class-Participation Demonstrations for Decision Theory and Bayesian Statistics," The American Statistician 52(2): 167-174.

Iversen, Gudmund R. (1984): Bayesian Statistical Inference, Sage Publications, Beverly Hills, CA. (Series: Quantitative Applications in the Social Sciences, no. 43 (07-043).)

Lipman, B. (1991): "How to decide how to decide how to...: Modeling limited rationality," Econometrica 59(4): 1105-1125.

Mongin, Philippe and Bernard Walliser (1986): "Infinite regression in the optimizing theory of decision," in Bertrand E. Munier (ed.): Risk, Decision and Rationality, pp. 435-457.

Smith, Holly (1987): "Deciding how to decide: Is there a regress problem?" in Michael Bacharach and Susan Hurley (eds.), Foundations of Decision Theory, Oxford: Blackwell, pp. 194-219.

Winter, S. G. (1975): "Optimization and Evolution in the Theory of the Firm," in Adaptive Economic Models, ed. by R.G. Day and T. Groves, New York: Academic Press, pp. 73-118.