Killing for Trivial Gains

Robert Bass
Department of Philosophy
Coastal Carolina University
Conway, SC 29528
rhbass@gmail.com 





Egoism, I have claimed, is not an acceptable option for moral theory, but some have urged that whether egoism is acceptable or not depends upon the content assigned to interests. I don’t think the egoist has an entirely free hand in that regard. There are limits on what content he can assign if his position is plausibly to be described as egoistic.



One way to bring that out is by way of the following principle, which I think is a necessary (though not sufficient) condition for an egoist theory:



(E)  If an agent were confronted by exactly two options, A and B, and if A were better in terms of his interests than B, then it would not be wrong for him to select A.



It is clear that (E) is a necessary condition for any egoist theory because its denial would be that it may sometimes be wrong for an agent to select an option that serves his interests better than the alternative. Surely, no egoist could accept that denial. But from (E), we can get (M) by instantiation:


(M)    If an agent were confronted with a pair of options such that he could achieve a trivial gain – say, a dollar after all costs and consequences are taken into account – by committing murder, then it would not be wrong for him to commit murder.
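The step can be made fully explicit. As a minimal regimentation (the abbreviations here are mine, not standard notation), let Opts(x, A, B) say that agent x faces exactly the two options A and B, and let Btr(A, B, x) say that A is better than B in terms of x's interests. Then (E) is:

(E*)  ∀x ∀A ∀B [(Opts(x, A, B) ∧ Btr(A, B, x)) → ¬Wrong(x, A)]

(M) results by letting A be an act of murder, B the refraining from it, and noting that Btr can hold in virtue of a merely trivial net gain – a dollar, all costs and consequences counted. No further premise is involved, so anyone who accepts (E) is thereby committed to (M).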



Short of accepting (M), I think the egoist has exactly three theoretical options. First, he may reject the principle (E) from which it is derived. Second, he may say that, though such a case is possible in principle, such cases never in fact come up (or can never be known to come up), given the way the world is. Third, he may say that such cases are not possible in principle: When the content of interests is correctly construed, there is in principle never even a trivial gain to be had by committing murder.



1.     If he adopts the first option, he rejects (E); since (E) is a necessary condition for egoism, that amounts to rejecting egoism.



2.     The second is a weak response, because it misunderstands the argument. The argument is not about cases that actually come up. For a parallel, compare anti-utilitarian arguments that seem to show that utilitarians are committed to thinking that redistributing the organs of a (non-consenting) healthy person is acceptable if it will save more lives than it will cost. Those arguments need not hold that such cases come up or can be known to come up. They are instead arguments about the principles to which a utilitarian is committed. Like those arguments, the argument that egoism implies that murder for trivial gains is not wrong is an argument about the principles to which the egoist is committed. Just as the anti-utilitarian arguments bear on the counterfactual implications of utilitarianism (on what would be right, by utilitarian standards, if certain circumstances were to obtain), so this argument bears on the counterfactual implications of egoism (on what would be right, by egoist standards, if certain circumstances were to obtain).



Now, rejecting arguments that rely upon counterfactuals is not a reasonable option, for any moral principle has counterfactual implications – implications for what would be right or wrong in situations that do not occur or have not yet occurred. In fact, it must, if it is to guide choice, for it tells an agent what to do in circumstances he has never faced before. Even something so mundane as “don’t cheat your employer” implies things like “if you were to be employed by IBM, then you shouldn’t cheat IBM” – and that holds true whether or not there is any prospect that you ever will be employed by IBM. There would be something wrong with a principle that (ceteris paribus) directed you not to cheat your actual employer but implied that there would be nothing wrong with cheating IBM, were you to be employed by IBM. It is perfectly legitimate, in general, to test a moral principle by asking whether its counterfactual implications are acceptable. If they are not, that’s a good reason for rejecting the principle.

 

The person who adopts the second response is really accepting that it would be permissible to commit murder for trivial gains if the occasion arose. Saying that the case doesn’t come up is just not responsive to the question, ‘What if it were to come up?’



3.     The third response says in effect that the case cannot come up if interests are correctly construed. This amounts to abandoning egoism in a different way, by trivializing it, but seeing why will take a bit of explanation.



It can be agreed at the outset that if a theorist has a completely free hand in specifying the content of interests, then there will be no problem saying that any particular plausible moral claim can be squared with action that is in the interests of the agent. But that is because any claim about how one should act, whether plausible or not, can be squared with acting in one’s interests, if there are no prior restrictions upon what can count as the content of interests. Knowingly and deliberately going to one’s own ruin to prevent a cabbage from being eaten could count as being in one’s interests if all that need be done is to say that ruin for the sake of cabbage-saving is in one’s interests. No theory that deserves to be called a version of egoism can be so permissive about the content of interests. Restrictions must be imposed somehow.



Whence are the restrictions to come? The short answer is that we need a theory of self-interest. The first step is simple and uncontroversial. Suppose an agent is faced with a pair of options, A and B. In virtue of what is it true that A or B (or neither) is more in her interests? The answer, I think, has to be something like this: The egoist must, on one hand, identify a class of basic or core interests, and on the other, say that other things count as being in the agent’s interests by virtue of standing in the right relation to the basic or core interests. More generally, she has to say that actions (and, mutatis mutandis, dispositions, states of affairs, etc.) are required, favored, permitted, disfavored or prohibited by virtue of their relation to the core interests. Let us call the specification of basic or core interests a List.



Now, the question faces us: Why do the items on the List count as being in the agent’s interests? How do items get on the List? Plainly, it will not do, for items on the List, to say that they are in the agent’s interests because they stand in the right relation to items on the List. Provided something else is on the List, we may be able to rule out cabbage-saving at the cost of slow death by torture as being in an agent’s interests, on the ground that it does not stand in the right relation to the items that are there – but how are we to rule out the possibility that cabbage-saving (even at high cost) is itself an item on the List?



There must, on one hand, be some constraint on what items can appear on the List; on the other, they cannot be constrained by their relation to something else taken to be in the agent’s interests. The theory needs further articulation if it is to be a theory of self-interest. Without some further constraint, the structure in which there is a List of basic concerns and in which other things are recommended (or not) in terms of their relation to the basic concerns might amount to a theory of value, but not to a theory of self-interest.



Before saying more, positively, about what qualifies some item to appear on the List, I think we can identify two relevant features that the List or items on it must have. The first is a global property of the List: It needs to be relatively short. This is for both theoretical and pragmatic reasons. The theoretical reason is that the explanatory power of a theory with the structure we are considering is increasingly compromised the longer the List is. The theory is supposed to explain what we have reason to do. The more items that are placed on the List, the less work there is for items on the List to do, for there is less that is not on the List to be explained. The pragmatic reason is that the longer the List is, the more difficult it is to make use of the items on the List in order to evaluate other things. It may be clear that some option for action is favored by its relation to some one or a few items on the List, but not that it is favored by the List, taken as a whole. (It is an interesting question whether, for a List containing more than one item, there must also be some weighting function or priority rule to arbitrate between them in case of conflict and whether that function or rule must itself be an item on the List.)



The second important feature is that items on the List specifying basic interests must not be moralized. What I mean is that it must not be the case that an item appears on the List because it is deemed good, right or morally desirable. (On this point, I have sometimes been careless in characterizing an egoist as someone who takes his own interests to be of ultimate value [to him]. The qualification should be added that what is of ultimate value to the egoist is his non-moralized interests.) The order of argument for an egoist is from interests to conclusions about what ought to be done. A theorist who holds that some action counts as self-interested because it is right is not really an egoist at all. Any moral theory that admits a legitimate role for self-interest – which is just about any moral theory – can be characterized as a version of egoism if it is allowed that whether something counts as self-interested is partly or wholly a function of whether it is morally right, good or desirable. A theory that lets moralized interests onto the List specifying basic interests trivializes egoism and robs it of any features that distinguish it from a multitude of other moral theories. (For more on moralized-interest theories and why egoists should avoid them, see Who is an egoist? What are interests?)§



So, the egoist’s core values must consist of a short List of non-moralized interests, and the items on the List cannot be present because of their relation to some further interests. What could satisfy these conditions? I think only one possibility will work, which I shall call the test of immediate plausibility: For any item on the List, it must be immediately plausible that it is in the interests of the agent. Credible candidates for the List may include items such as longevity, happiness, pleasure, health, wealth, exemption from pain or suffering and the like. (Suppose there is something that belongs on the List but that is not immediately plausible to regard as being in the agent’s interests. Then, some case will have to be presented as to why it counts as being in the agent’s interests. How will that argument proceed except by linking it to something that is immediately plausible to view as being in the interests of the agent? But then, if some linkage must be effected to immediately plausible objects of interest, the new item will not be needed on the List of basic interests.)



This is, in a sense, untidy. Why do we find these things – and not others, such as cabbage-saving – immediately plausible? We would like to have some unifying theoretical account of what it is in virtue of which some things count as being in an agent’s interests and others do not. If we had it, we might be able to better articulate what does and does not belong on the List and why. But I do not see that we have any such unifying theoretical account. (More precisely, I know of several attempts to provide one, none of which is successful.)



Now, if candidates for the List of basic interests must pass the test of immediate plausibility, then the third strategy suggested above for avoiding the conclusion that it is not wrong to commit murder for trivial gains is bankrupt. For it is plain that in terms of immediately plausible candidates for basic interests, it is possible for there to be trivial gains from committing murder – under readily imaginable circumstances, it could make a person (slightly) happier, wealthier, longer-lived, etc. to commit the murder. What that means is that the egoist – who does not cheat by importing moralized interests into his account – is indeed committed to saying that it is not wrong to commit murder for trivial gains.



Now, the reasonable thing to do when one must choose between two beliefs, one of which is more certain than the other, is to keep the more certain and give up the less certain of the two. Since it is more certain that it is wrong to kill for trivial gains than that egoism is correct, egoism is the one to give up.
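Schematically, the whole argument is a modus tollens (the numbered layout is mine):

(1)  If egoism is correct, then (E) is true.
(2)  If (E) is true, then (M) is true (by instantiation, as above).
(3)  It is wrong to kill for trivial gains – that is, (M) is false.
(4)  Therefore, egoism is not correct. (from 1–3)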

Comments? I'd love to hear.



§ It is, I think, only in the sense of having a moralized-interest theory that it is plausible to say that Aristotle is an egoist – which, by my lights, means he is not one. See, for example, the discussion in Nicomachean Ethics 9.8 of good and bad self-love. The good man is said there to be a lover of self and to want the best things for himself, but Aristotle is quite explicit that what is best counts as such because it is or involves noble activity – that is, because of a moral quality: “In all the actions, therefore, that men are praised for, the good man is seen to assign to himself the greater share in what is noble. In this sense, then, as has been said, a man should be a lover of self; but in the sense in which most men are so, he ought not.” (1169a 33-1169b 2) Further, he is clear that noble activity is compatible with and may require any manner of what would ordinarily be called sacrifice for the sake of others (1169a 18-32).