[Hubin, Donald C. (1986), Of bindings and by-products: Elster on rationality, Philosophy and Public Affairs 15(1):82-95]



Of Bindings and By-products: Elster on Rationality

[start of page 82]

DONALD C. HUBIN

The following books are discussed in this review essay: Jon Elster, Ulysses and the Sirens (Cambridge: Cambridge University Press, 1979); Jon Elster, Sour Grapes (Cambridge: Cambridge University Press, 1983). Citations from the first will be indicated by US and those from the second by SG.

Rationality is on the rise. Not as a property of people, of course, but as a subject of philosophical discussion. Long a favorite with philosophers, it is enjoying even greater popularity as battles are waged over whether it consists in the maximization of objective value or merely of subjective value, or whether it requires maximization of anything at all; whether rationality in situations of risk requires acting on objective probabilities, subjective probabilities, or some other probabilities; how one should act when ignorant of the relevant probabilities; and whether such conditions of ignorance ever really obtain.

Into this battlefield, Jon Elster introduces two salvos, Ulysses and the Sirens and Sour Grapes. These books take the reader on a sometimes dizzying tour of selected topics in economics, evolutionary theory, literature, sociology, historiography, and philosophy. In an age of overspecialization, these works are refreshingly interdisciplinary; they will entertain and educate people with widely varied backgrounds and interests. Perhaps they will even foster dialogue between students of diverse disciplines.

Because both books are collections of essays only loosely connected by various recurrent themes, it isn't feasible to give a complete summary of the contributions they make. Elster thinks that one significant contribution is providing a challenge to the received view of what an action is. He writes:

[end of page 82, start of page 83]
An action is the outcome of a choice within constraints. The choice, according to the orthodox view, embodies an element of freedom, the constraints one of necessity. In non-standard cases, however, these equations do not hold. The title of an earlier book on rational and irrational behavior, Ulysses and the Sirens, is a reminder that men are sometimes free to choose their own constraints. Sour Grapes conversely reflects the idea that the preferences underlying a choice may be shaped by constraints. Considered together, these two non-standard phenomena are sufficiently important to suggest that the orthodox theory is due for fundamental revision. (SG, p. vii)
The phenomena mentioned seem, pace Elster, to pose no real challenge to the orthodoxy he describes. But there is much more in the books than is reflected in this passage. And much of it is a challenge to orthodoxy - though the orthodoxy is that of the theory of rational action and not of action generally. For example, Elster argues that rationality is not a maximizing but a "satisficing" concept (US, pp. 133-37 and SG, pp. 2-26).1 He claims, contra the Bayesians, that there are cases of decision under ignorance (US, pp. 128-33). He asserts that the standard decision-theoretic (parametric) conception of rationality is "strange or even contradictory" compared with strategic rationality (US, p. 117). And he takes steps toward developing and defending a conception of both individual and collective rationality that places substantive constraints on preferences and beliefs (SG, Chapter 1).

In what follows I shall, for the most part, avoid those aspects of Elster's work that question the received view of rational action. I shall focus instead on two issues that are recurrent topics in both of these books and that Elster believes have interesting implications for moral and political philosophy: precommitment and self-defeating pursuits. Precommitting, or binding, oneself is an indirect strategy for achieving an end; it can loosely be described as 'self-coercion'. Like other indirect strategies, it allows one to achieve an end that might otherwise be attainable only with difficulty, if at all - a sort of rational "tacking into the wind." Elster discusses self-defeating pursuits because, he believes, there are some states that resist even indirect rational strategies. Though they can be brought about by action, they cannot be brought about as the intended result of

[end of page 83, start of page 84]

action. It may be possible to get "there" from "here," but not by trying. These states are, in Elster's terminology, "essentially by-products"; their pursuit is doomed to failure by the pursuit itself. These two issues are of general interest because they concern the power and the limitations of rational strategies. They are of special interest to us because of the moral and political applications to which Elster puts them.

PRECOMMITMENT

Thomas Schelling has demonstrated the virtues of precommitment in strategic interaction.2 By binding yourself - making certain courses of action impossible or very costly - you can manipulate an opponent or prevent him from manipulating you. If, for example, a union leader stakes his reputation on his refusal to approve a contract that includes a reduction in wages, he might thereby gain the upper hand in wage negotiations. Similarly, a government might adopt a firm policy of never paying ransom, making this a matter of principle, in order to deter extortionist threats. Such are the Orwellian oddities of strategic interaction: weakness can be strength.

In the title essay of Ulysses and the Sirens, Elster reminds us that strategy begins at home, for we may wish to manipulate (and avoid manipulation by) ourselves as much as others. More precisely, our present selves may wish to manipulate our future selves and to prevent our near future selves from manipulating our further future selves.

The strategy of precommitment is one very useful way of dealing with the practical problem of weakness of will. If you don't want to drink this weekend, lock up the liquor and put the key in the safe-deposit box until Monday.

Precommitment has other functions as well, largely ignored by Elster. It is not only our future irrationality we might wish to protect ourselves from, but our future rationality as well. Indeed, even the example that gives the book its title may not involve weakness of will. We are, I suppose, inclined to believe that the song of the Sirens would entice Ulysses to order his ship into the rocks without making it rational (either subjectively or objectively) for him to do so. But this may not be true. The story is

[end of page 84, start of page 85]

vague about the effect of the song of the Sirens. If it were to cause him to believe, falsely but reasonably, that steering into the rocks would not wreck his ship, his action may maximize his expected utility given his actual preferences and beliefs. If the song were to cause his preferences to change so that getting closer to the Sirens were the only thing important to Ulysses, his action may maximize his expected utility given his actual preferences and true beliefs. In either case, Ulysses may not be worried about weakness of will, but about the harm he will do quite rationally (either subjectively or objectively).
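
To make the worry concrete, it may help to state it in the standard decision-theoretic form the orthodox view presupposes (the notation below is merely an illustrative sketch, not Elster's):

\[
EU(a) \;=\; \sum_{s} p(s)\, u\bigl(o(a,s)\bigr)
\]

where p gives Ulysses' degrees of belief over the possible states s, u his utilities over outcomes, and o(a, s) the outcome of performing act a in state s. An act is subjectively rational if it maximizes EU so defined. The Sirens may corrupt p (inducing false but reasonable beliefs) or u (altering his preferences); on either reading, steering toward the rocks can come out as the EU-maximizing act, which is just to say that what Ulysses guards against need not be weakness of will.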

The further uses of precommitment only serve to make the strategy more interesting and an understanding of it more important. It is not as simple a concept as it might appear. But Elster offers a "tentative definition" (US, pp. 37-47) that is adequate for our purposes, although problems remain.3 Slightly paraphrased, his account is this:
An agent, A, binds (precommits) himself if:
(i) A carries out a certain decision at time t1 in order to increase the probability that A will carry out another decision at time t2;
(ii) the act at t1 does not have the effect of inducing a change in the set of options that will be available at the later time in such a way that the new feasible set includes the old one;
(iii) the effect of carrying out the decision at t1 is to set up some causal process in the external world;
(iv) the resistance against carrying out the decision at t1 is smaller than the resistance that would have opposed the carrying out of the decision at t2 had the decision at t1 not intervened; and,
(v) the act at t1 is an act of commission, not omission.

Elster applies the concept of precommitment toward the understanding of Pascal's Wager, Descartes' critique of instant rationality, approaches to consistent planning, manipulation of people through endogenous preference changes, and the explanation of various historical and political phenomena. I shall focus on the last two applications because they are

[end of page 85, start of page 86]

most likely to be of interest to readers of this journal. What Elster has to say about these issues is interesting in its own right and, though I shall argue that neither manipulation through endogenous preference change nor the political phenomena that interest Elster are correctly analyzed as cases of precommitment, it is instructive to see why. Take first his discussion of manipulation through endogenous preference change. It is obvious that our preferences often change in light of our experience and that our experience is influenced by our choices. In this way our choices may affect our preferences. This raises the possibility that a person who would not choose to go directly from state1 to state3, deeming the latter to be less valuable than the former, might nonetheless willingly go from state1 to state3 indirectly. This is because he may prefer some other state, state2, to state1 and move to it, and state2 may influence his preferences in such a way that he comes to prefer state3 to either alternative.

One rather obvious example of this would be the case of a problem social drinker - Larry the Lush. Larry is, he regrets, sober. He would much prefer to have a slight buzz. However, he certainly does not want to get really drunk and embarrass himself as he has so often in the past. Unfortunately, he knows that once he has a few drinks to loosen up, he will want to get roaring drunk and have no concern for his self-respect.
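
Put schematically, Larry's predicament involves two preference orderings over the three states (the reconstruction below is mine, offered only to fix ideas, and is not Elster's notation):

\[
\text{at } s_1 \text{ (sober)}:\quad s_2 \succ s_1 \succ s_3
\qquad\qquad
\text{at } s_2 \text{ (a slight buzz)}:\quad s_3 \succ s_2 \succ s_1
\]

Judged by the ordering he holds while sober, the direct move to s3 (roaring drunk) is ruled out; but the move to s2 is welcome, and it is that very move which installs the second ordering, from which the further move to s3 follows willingly.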

This scenario raises the possibility of manipulation: creating state2 simply to lure another from state1 to state3. Someone who wishes to see Larry put the lampshade on his head and sing his out-of-key rendition of "Feelings" can offer him the first drink and let nature take its course.

Is such manipulation permissible? Elster thinks not. Indeed, he sees "no essential difference between" this sort of manipulation and coercion. "Coercion takes place," Elster says, "when an individual prefers x over y, and continues to do so even when someone (physically) coerces him into doing y" (US, p. 82). This account is both unhelpful and counter-intuitive. The fact that Elster uses 'coerces' to define 'coercion' and does not analyze the former notion makes the analysis unilluminating. And it is an inaccurate account of our ordinary concept of coercion because most of us would consider some acts coercive even if the agent did not maintain his earlier preferences. Perhaps the clearest example of this is when a person is forced to kill himself. What Elster would say of this example I don't know; of a more mundane example - one in which an agent comes to prefer the act he was coerced into doing - Elster says that

[end of page 86, start of page 87]

the agent has been "seduced," not subjected to "coercion" (in his sense). "Seduction occurs when an individual initially prefers x over y, but comes to prefer y over x once he has been coerced into doing y" (US, p. 82). This is even further from ordinary usage; but ordinary usage is not our concern here.

Elster finds both seduction and coercion to be intrinsically morally objectionable. Manipulation of a person through endogenous preference changes is, he thinks, conceptually distinct from either of these but morally on a par with them. He calls this kind of manipulation 'persuasion'. It occurs when "an individual is led by a sequence of short-term improvements into preferring y over x, even if initially he preferred x over y" (US, p. 83). Though this may appear to be merely a sequence of voluntary choices, Elster thinks this appearance hides a moral problem. Morally, "persuasion" is not at all like voluntary choice, which occurs when an "individual initially prefers y over x, and does y for that reason" (US, p. 82).
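
Since all four of Elster's categories are characterized by the agent's preferences before and after the intervention, it is worth setting them side by side (the tabulation is my own; the defining clauses are Elster's, US, pp. 82-83). Writing x ≻_before y for "the agent prefers x over y before the intervention," and similarly for ≻_after:

\[
\begin{array}{lll}
\text{voluntary choice:} & y \succ_{\mathrm{before}} x & \text{and the agent does } y \text{ for that reason;}\\
\text{coercion:} & x \succ_{\mathrm{before}} y,\; x \succ_{\mathrm{after}} y & \text{and the agent is (physically) made to do } y;\\
\text{seduction:} & x \succ_{\mathrm{before}} y,\; y \succ_{\mathrm{after}} x & \text{and the agent is made to do } y;\\
\text{persuasion:} & x \succ_{\mathrm{before}} y,\; y \succ_{\mathrm{after}} x & \text{reached by a sequence of short-term improvements.}
\end{array}
\]

Laid out this way, Elster's contention is that the route by which the later preference displaces the earlier one - outside force in the one case, a string of individually welcome steps in the other - makes no essential moral difference.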

This issue is of great importance and interest. Elster's view suggests that government programs of inducement are just as morally objectionable as alternative coercive plans. He discusses at some length a plan suggested by C. C. von Weizsäcker to entice farmers away from the land and into the city by paying them to move. Once in the city, the hypothesis goes, the former farmers will come to prefer city life and the financial inducements can be dropped. A variety of other examples come to mind: meeting the gasoline shortage by subsidizing the purchase of small, fuel-efficient cars for a short period of time with the expectation that many will come to prefer such cars even without the subsidy; undermining consumer brand loyalty by offering free samples or below-cost discounts to lure potential customers into using one's product with the expectation that they will come to prefer it even at full price; using financial inducements to encourage one's children to work for good grades in the hope that once they become used to the self-satisfaction accompanying such success, that will be reward enough to sustain their efforts. Such "persuasions" seem common enough and not, in and of themselves, morally wrong.

Elster dissents. "My contention is that persuasion is more similar to seduction than to voluntary choice," he says (US, p. 83) - a claim that seems plausible until one remembers that by 'seduction' Elster means what most of us would call 'coercion'. He goes on to say that "there is

[end of page 87, start of page 88]

no essential difference between coercion and seduction, nor between seduction and this form of persuasion" (US, p. 83). He concludes from this that Robert Nozick is wrong when he claims that inducements are never coercive.4 Because of the obvious importance of this and the initial implausibility of Elster's position, this claim merits persuasive argumentation (provided that is morally permissible). Unfortunately, it gets none. He contents himself with asserting that such persuasion is never justified unless you inform the object of it that you are about to manipulate him, because "[e]xploiting intrapsychic mechanisms that are unknown to the individual can never be justified" (US, p. 83). Though I take this literally to say that such actions are absolutely wrong, a more charitable (but, I think, equally false) interpretation of Elster takes the claim to assert merely that such actions are always prima facie wrong because of the manipulation involved. To establish even this weaker conclusion, Elster must show that the subsidies, discounts, and inducements suggested above are prima facie morally wrong. This seems not to be the case.

But let us ask what all of this about endogenous preference change has to do with precommitment. Elster believes that precommitment can take at least two forms. First, an agent can limit the set of feasible actions - more precisely, he can alter the world so that what might otherwise have been attractive and feasible becomes either unattractive, difficult, or impossible. Second, an agent may influence the mechanism by which he singles out a member of the feasible set of actions. This latter, Elster apparently believes, can be carried out either by deliberately bringing about a change in one's preferences or by refusing to allow such a change to take place. Hence, moving from S1 to S2 in order then to find S3 attractive and move to it counts as precommitment. Similarly, refusing to move from S1 to S2 in order not to choose S3 later counts as precommitment.

Both of these last two claims seem wrong. I do not question whether one can bind oneself by altering one's selection mechanism; to use Schelling's example, one might take a drug to make oneself irrational and avoid the possibility of being extorted - extortion threats being ineffective on

[end of page 88, start of page 89]

an agent who is irrational in that particular way. But the cases Elster describes, though they are instances of self-conscious character management, are not cases of precommitment.5

In the first case, the agent chooses not to move from S1 to S2 for one of two reasons: either he fears that he will then voluntarily move to S3 - a prospect that he now dislikes - or he simply does not want to be the sort of person who prefers S3 to other alternatives available. In either case, he has judged the desirability of S2 to be less than that of remaining in S1. In the first case it is the indirect effect of S2 on his later states that bothers him; in the second case it is the effect of S2 on his character to which he objects. If the agent fears that he might overlook or underweigh these subtle effects of S2, he may well bind himself to prevent his moving to S2 out of weakness of will. But the refusal to move to S2 itself is not an act of precommitment for it does not render the move to S3 more difficult or less desirable.

The second case is one in which the agent intentionally moves from S1 to S2 in order to make himself the sort of person who would prefer S3 to the other two options. This is an odd case. Presumably, the agent does not prefer S3 from the outset. If he did, there would be no need to move to S2 in order to acquire this preference. (Perhaps he already prefers S3 to the other options but doesn't have the strength of character to act on his preferences. If so, it might be that moving to S2 would strengthen his resolve to move to S3. But then this would not be a case of binding oneself to prevent the undesirable consequences of weakness of will; it would be a case of overcoming the weakness of will in the first place.) Thus, we are confronted with a case in which a person has a preference for a preference for S3 but no preference for S3. What are we to say of such a case? I think that we should follow Richard Jeffrey6 and say that if a person moves to S2 in order to have these preferences, then his preference for preferring S3 was stronger than his preference for remaining out of S3. If, on the other hand, he refuses to move to S2, then his preference for staying out of S3 was stronger than his preference for having the preference for S3. In neither case should his action be seen as precommitment.

[end of page 89, start of page 90]

Again, this is because it is not a case of an agent manipulating his choice situation or his deliberative capacities so that a decision that would otherwise be possible or desirable is no longer so.

The two cases offered by Elster seem to be cases of precommitment only if we view an agent's preferences as binding him. But, of course, preferences do not generally bind an agent, even when they determine his action. The entire issue of endogenous preference change has no special relevance to the notion of precommitment.

The second of Elster's applications of the concept of precommitment that I will discuss here is that of a democracy binding itself through its constitution. There is no doubt that the populace of a democracy may be constrained by its constitution. At least two problems arise, however, when one attempts to interpret these constraints as arising from acts of precommitment. The first is pointed out by Elster and is, in fact, one of his recurrent themes: one cannot infer from the fact that some structure has an effect - even a desirable one - that it was designed in order to produce that effect. The second is that it is quite unclear that there is a single agent binding itself through an external mechanism as is required by the concept of precommitment. Since we may assume that constitutional constraints are designed to limit the actions of the majority in a democracy, the second problem is more serious.

Precommitment requires that there be a single agent who performs an action at one time with the intention of constraining itself at a later time. It is not at all clear that the act of imposing constitutional constraints will fit this model - especially in the case of intergenerational constraints.

Perhaps we can treat the agent as a population over time having the intention to bind itself. The problems with this approach are great and readily apparent. Even if they are overcome, this understanding of the agent would not allow the case of constitutional constraints to fit Elster's analysis. This is because the binding would then not be effected through an external causal mechanism. Instead, it would be analogous to a private side bet, which Elster intends the third clause of his definition to rule out as an instance of precommitment. Our single agent has, in effect, an agreement with itself not to undertake certain action without more than a majority vote. If, being convinced of the virtue of absolute pacifism but fearful of our own atavistic tendencies, we were to beat our swords into plowshares in order to prevent ourselves from retaliating for an attack,

[end of page 90, start of page 91]

this might plausibly be seen as social precommitment. But if we merely promise ourselves that we won't do something that is in our power to do, we have not bound ourselves in the relevant sense.

It seems premature, then, to conclude with Elster that "the analysis of democracy has offered some convincing examples of political precommitment" (US, p. 103). It thus seems premature to conclude that the analysis of precommitment can give us a tool for understanding the constitutional constraints of a democracy.

SELF-DEFEATING PURSUITS

Precommitment is an indirect strategy for achieving an end. It is often reasonable to employ it when a direct approach is less certain or more costly. There are other sorts of indirect strategies - for example, altering one's character in order to affect one's future choices. Taken together, indirect strategies provide a powerful, and often overlooked, technique for achieving our ends.

But the technique can also be overestimated. Elster's discussion of precommitment and the employment of indirect strategies for achieving one's ends in Ulysses and the Sirens leaves one with the impression that virtually any effect that can be brought about by human action can be brought about intentionally, either directly or indirectly.

This is a misconception, Elster believes, and he sets about to correct it in Sour Grapes. There are, he claims, states that are essentially by-products - that is, roughly, states that can be produced by action but cannot be produced as the intended effect of the action. When pursued, such states recede; they are attainable only by one who does not seek them. Failure to appreciate this fact leads to two related fallacies: the moral fallacy of by-products and the intellectual fallacy of by-products. The first occurs when one tries to bring about a state that is essentially a by-product; the second, when one attempts to explain such a state by reference to an agent's intention to bring it about.

Elster discusses at some length "willing what cannot be willed" or, less paradoxically, willing what cannot be brought about by a mere act of will. Examples discussed by Elster include being natural, sleeping, and forgetting. (Insomniacs will find his phenomenology of insomnia particularly interesting.) He sums up the discussion as follows: "[T]he absence of

[end of page 91, start of page 92]

consciousness of something cannot be brought about by an act of consciousness, since this privative state is essentially a by-product" (SG, p. 50).

But this seems to be the wrong summation. Though most of us cannot bring about the states with which Elster is concerned merely by willing them, we can surely bring them about by "an act of consciousness." The point is especially clear if one considers sleep and forgetfulness. There are any number of conscious acts one can perform to induce these states. For inducing sleep, warm milk, hot baths, sex, a glass of wine, a few sleeping pills, or a beginning logic lecture have been known to do the trick. For forgetfulness, getting involved in an attention-consuming activity, sex, a few glasses of wine, or the sleeping pills again, can accomplish the end.

Elster's point cannot be that there are some states of ourselves that we cannot bring about at will. This is too mundane to mention. If his point is that there are states of ourselves that, by their very essence, cannot be brought about by design, it is an exciting and bold hypothesis. Unfortunately, it achieves its excitement and bravado by its appearance of blatant falsity.

Elster does nothing to reduce this appearance - not that he doesn't try. He offers three "responses." "First, even if a certain state can be achieved by indirect means, it may still be a fallacy to believe it can be achieved at will" (SG, p. 56). It certainly may be false to believe this, and it would be a fallacy to infer it from the fact that the state could be achieved indirectly. But it is an easy trick to point out fallacies that no one is tempted to commit. We should not allow this intellectual sleight of hand to distract our attention from the fact that Elster's remark is no reply to the objection that has been made.

Second, Elster says, "[e]ven assuming the technical feasibility of bringing about the states in question by indirect means, there may be a cost-benefit problem that stops us from doing so. Not everything that is technically possible is also economically rational" (SG, p. 56). True, but irrelevant. Elster's thesis is not that there are some states that, even if desirable, ought not to be brought about even by indirect means. No one disputes this. His claim is that there are some states that, "for conceptual and not only for empirical reasons," cannot be brought about both intentionally and intelligently. His second point does nothing to support this claim.

[end of page 92, start of page 93]

Finally, Elster claims he will argue, "There are states which resist the indirect as well as the direct attempts to bring them about" (SG, p. 57). 'Resist' is a crucial word here. For Elster's purposes, it must mean 'cannot be effected by'. But Elster argues no such thing. Instead, he gives a couple of examples of difficulties that might arise in employing certain indirect means to produce certain states that, he claims, are essentially by-products. The most memorable is the "hammock problem." Rocking oneself to sleep in a hammock may be impossible for some because just before one falls asleep, one has become too sleepy to continue the rocking. As the rocking stops, one awakens enough to resume the rocking - again, almost to the point of sleep.

For Elster's thesis to be true, there must be some states that we cannot produce intelligently and intentionally by any technology. (This is Elster's revised account of a state that is essentially a by-product.) To generalize from the problem of rocking oneself to sleep in a hammock would be rather hasty - all the more so since the state in question, sleep, is clearly attainable by indirect means. So much for Elster's arguments. What about his thesis?

Setting aside trivial cases, Elster's thesis seems doomed to falsity. All the states considered by Elster are, presumably, achievable in principle by sufficiently sophisticated methods of psychosurgery - methods that may be used to produce the state both intentionally and intelligently. Let us proceed even further into science fiction long enough to imagine a brain-state replicator. It works like this: if I desire to have the same brain state as another, I place the cap of the replicator on my head and aim the pickup at the person currently in the desired brain state. That state is instantly replicated in me. With such a technology at our disposal, there seems to be no brain state inherently beyond our ability to produce intentionally and intelligently.

If we had to rely on such science-fiction examples to refute Elster, this would show something interesting about the limitations on the indirect strategies available to us. It would not, though, support Elster's thesis, which takes seriously the notion of a state being essentially a by-product. And although I think that my brain-state replicator example is convenient in that it handles Elster's examples in one fell swoop, I don't regard such far-fetched examples as necessary. I see no reason to believe that even without such exotic technology there are some states that cannot be produced except as by-products.

[end of page 93, start of page 94]

If we have no good reason to believe that there are such states, then most of what Elster says in his essay on by-products is unduly pessimistic. By employing indirect strategies, we can achieve far more than Elster concedes. And, more interestingly, the only political application to which Elster puts this thesis seems unwarranted. Let us turn finally to this issue.

Some have claimed that one of the main purposes of certain participatory political institutions is their educative function for those who are involved in them. Elster objects. "This would be to turn into the main purpose of politics something that can only be a by-product" (SG, p. 91). Although he has not argued that the enlightenment in question is essentially a by-product, it may still be true that if political institutions are created with the aim of producing greater awareness in the participants, they cannot achieve this.

Elster's real concern is whether it is possible to achieve this aim if one makes it the public justification of the institution. Some, like Kant and Rawls, insist that an aim can justify an institution only if making that aim the public justification of that institution is consistent with achieving that aim. If they are correct, then those aims the publication of which would preclude their attainment could not serve as the justification of political institutions.

Elster believes that if I involve myself in a political process with self-development being my sole or primary aim, I am sure to fail. This is because the end I have in mind is essentially a by-product and hence cannot be produced both intelligently and intentionally. For argument's sake, let us grant him this. He concludes from this that the achievement of such states cannot be the public justification of the political institutions. Herein lies the fallacy.

There are at least two lacunae in the inference. The first is in his assumption that the public justification of an institution must be the aim of individuals in setting up the institution. This need not be so. Each individual may strive to set up an institution for purposes of pure economic self-interest. But each may recognize that this sort of consideration does not constitute a satisfactory public moral justification of the institution. Each may then search for some effect of the institution that would serve this function. They may find it in the institution's ability to promote personal development or class consciousness. Each takes this to be the public moral justification of the institution, but no one establishes the institution with this as his aim.

[end of page 94, start of page 95]

The second flaw in the argument is more interesting for our present purposes. In order to see this fault clearly, let us grant Elster what we challenged above: that if there is an aim that justifies a political institution, then those who set up the institution do so in order to achieve this aim. The argument is fallacious, even allowing this, because Elster confounds two different aims. He fails to distinguish our aim in establishing an institution from our aim in participating in that institution. But these are quite distinct concepts and often distinct aims. One can imagine, for example, a follower of Adam Smith promoting the free market for the sole reason that it maximizes overall efficiency. Nevertheless, his reason for participating in the market once established is, of course, quite different. Given this crucial distinction, there seems to be no reason why some aim like personal development or class consciousness cannot be the public justification of an institution and the only aim of those who establish that institution and still be achieved by the institution. The motivation for participating in the institution need not be the same as that for establishing it.

This last point suggests an idea that should not have been lost on anyone as fascinated with precommitment as Elster is: one can create political institutions so as to bind oneself to become involved and achieve personal development. We might imagine members of a society that is subject to a benign dictator who choose to bind themselves to political action by establishing a democracy. Once it is established, self-interest may provide sufficient motivation to participate fully. The public justification of the institution is personal development, as is the aim of the populace in establishing it. Once established, the people do not aim at such development. Indeed, they may rue the day they forced themselves to achieve it, even as Ulysses regretted (for a time) his order to be bound to the mast. Their motivation for participation is of quite a different sort. Hence, they can achieve personal growth even if this growth cannot be achieved by a person who participates for the purpose of achieving it.

Surprisingly, Elster has underestimated the power of precommitment. "Where there's a will, there's a way," is surely false - but not for conceptual reasons. The interesting limitations on what can be achieved rationally are empirical.

For helpful discussions of issues I have focused on in this article, I am indebted to Brad Armendt, Daniel M. Farrell, Robert Kraut, and George Schumm. My debt to Daniel Farrell extends beyond this; his careful reading of an earlier draft of this paper has done much to improve it. I am also grateful to the Editors of Philosophy & Public Affairs for many helpful suggestions.

[end of page 95]

NOTES
[collected from their respective pages]

1. He never, though, reconciles this claim with his position that humans, in contrast to animals and evolutionary processes, are "globally maximizing machines" (US, pp. 9-18).

2. Thomas C. Schelling, The Strategy of Conflict (Cambridge, MA: Harvard University Press, 1960).

3. For example, clause (i) would be better phrased as "A carries out a decision at t1 in order to decrease the probability that he will carry out another decision he believes he may make later." Ulysses' actions while tied to the mast can hardly be termed a "carrying out" of his decision not to steer the ship into the rocks. To avoid practical vacuity, condition (iii) should be rephrased to require that the effect of carrying out the decision at t1 on the probability of future actions is through an external causal mechanism. And neither clause (iv) nor (v) seems to be a necessary condition for precommitment.

4. He ought not to put his point in this way since he does not deny the conceptual distinction between persuasion and coercion; even on his analysis of 'coercion' he is forced to agree with Nozick that the inducements in question are not coercive. Elster's point is really a normative one, disguised as a conceptual one.

5. It is worth noting that neither case appears to fit Elster's own explicit, if provisional, account of 'precommitment'.

6. Richard C. Jeffrey, "Preference among Preferences," Journal of Philosophy 71 (1974):377-91. A revised version appears in The Logic of Decision, 2d ed. (Chicago: University of Chicago Press, 1983), pp. 214-27.



