The Netzercise: A Variation on the Prisoners' Dilemma
The Prisoner's Dilemma

To cooperate or to defect? The prisoner's dilemma was conceived at the RAND Corporation in the early 1950s and given its name by Princeton mathematician Albert Tucker. In the basic scenario, from which the game takes its name, two prisoners who, the police know, have committed crime A, but whom they also wish to convict of the more serious crime B, are held in separate cells and offered a deal: if you confess to crime B, implicating your partner, and she stays silent, you go free and she serves three years; if you both confess, you each serve two years; and if you both stay silent, you can be convicted only of the lesser crime A.
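In tabular form, the deal looks like this (entries are years in prison for you / your partner; the one-year term for mutual silence is an assumed figure -- only the 0-, 2- and 3-year outcomes are stated above):

                          Partner stays silent   Partner confesses
    You stay silent             1 / 1                 3 / 0
    You confess                 0 / 3                 2 / 2

Whichever column your partner ends up in, the "confess" row leaves you with fewer years -- the dominance argument developed next.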
Clearly, there are two choices. The first is to remain silent -- to cooperate (with your partner, not the police). The second is to confess -- to defect. Can you trust your partner? Held in a separate cell, with no means of communicating with her, you cannot know. Most rational players will therefore choose to defect: confess, implicate your partner in the more serious crime, and hope to go free. Defecting maximizes the upside (0 years) and limits the downside (2 years instead of 3). Yet the outcome is consistently better for two cooperating players than for two defecting players.

What's the rational choice?

The exasperating conclusion any rational prisoner faces is that there is really no choice but to defect. Whatever the other person does, your best option (less time in jail) is to defect. Of course, your partner comes to the same conclusion, and the net result is inferior to what you would both get by cooperating. Although simple, the prisoner's dilemma is a powerful idea with applicability to every walk of life; it has been used to analyze problems in nuclear warfare, anthropology, business, political science, economics, biology and evolution.

Do things change if the partners can communicate?

As the prisoner's dilemma is most often described, the prisoners are not allowed to communicate with each other. The problem gets even more interesting if they do, and our Netzercise was one such example. During a break in the cafeteria, all teams in a Set may have talked one another into playing "Y". Later, while preparing to e-mail their submissions, the members of each team more than likely had misgivings: "What if the other teams don't adhere to our mutual agreement?" or "What if they do adhere to their promise to play 'Y'? If so, why don't we gain an advantage by playing 'X'?" In such scenarios, the parties have a genuine opportunity to influence one another's choices.

What's the role of power in determining choices?

From an organizational perspective, we could speculate about the role of power in these scenarios. What are the chances of defecting given differing amounts of power in the relationship? Take force or coercion as a simple example. As a big guy, Seth looks as if he could do serious damage to anyone who crossed him. Suppose he had let the others in his Set know that he would be very displeased if anyone defected; that might have had an important effect on the actions of the other teams. We could make similar arguments for the other bases of power (see French and Raven).

Where's the rationality in "tit-for-tat"?

Let's punish and reward. In a sequence of games (an "iterated prisoner's dilemma" such as our Netzercise) something different may happen: one or both players may fall into a pattern called "Tit for Tat", in which cooperation is rewarded and defection punished. Effectively, this means doing on this move whatever your partner did on the last. In a computer tournament held by political scientist Robert Axelrod in 1980, in which programs played the prisoner's dilemma against one another, a four-line program playing "Tit for Tat" beat out much more complex and sophisticated programs.
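To make the Tit for Tat rule concrete, here is a minimal Python sketch (mine, not part of the exercise). It scores strategies by years served, using the sentences from the deal above; the one-year term for mutual silence is again an assumption, and "C"/"D" stand for cooperate (the Netzercise "Y") and defect ("X"):

    # A minimal sketch of an iterated prisoner's dilemma (illustrative only).
    # Payoffs are years in prison, so lower totals are better; the 1-year
    # term for mutual silence is an assumption not stated in the text.
    YEARS = {
        ("C", "C"): 1,  # both stay silent
        ("C", "D"): 3,  # I stay silent, partner confesses
        ("D", "C"): 0,  # I confess, partner stays silent
        ("D", "D"): 2,  # both confess
    }

    def tit_for_tat(partner_history):
        # Cooperate on the first move, then copy the partner's last move.
        return partner_history[-1] if partner_history else "C"

    def always_defect(partner_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        # Returns total years served by each side over `rounds` moves.
        hist_a, hist_b = [], []
        total_a = total_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_b)  # each side sees the other's past moves
            move_b = strategy_b(hist_a)
            total_a += YEARS[(move_a, move_b)]
            total_b += YEARS[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return total_a, total_b

    print(play(tit_for_tat, always_defect))  # (21, 18): one sucker loss, then mutual defection
    print(play(tit_for_tat, tit_for_tat))    # (10, 10): cooperation locks in

Against a pure defector, Tit for Tat loses only the opening round; against itself, cooperation locks in from the first move -- essentially the property that let Axelrod's four-line entry beat far fancier programs.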
Does the length of the relationship determine how the partners treat one another?

If the game is played only once -- if the relationship between the two parties is a short-term one -- there is no incentive for either player to do anything but defect. In fact, if the game is to be played a known number of rounds, there is a good chance all parties will defect on the last move, since no retaliation can follow it. But if everyone will defect on the last move anyway, there is nothing to protect on the next-to-last move, so you may as well defect there too -- and the same logic unravels the move before that, and so on back to the start. If, however, the game is to be played an indefinite number of times, cooperation may evolve as the best policy. From a prisoner's dilemma perspective, the temptation to betray (defect) is lower when the interactions are perceived to be ongoing rather than coming to an end. This was certainly the case in most of the Netzercise iterations; some students said exactly that to me at various points ("We wouldn't dare double-cross the other teams; we've got to live with them for an entire term after this one!"). Thus, the longer the perceived continuity of the relationship, the stronger the pull toward cooperation. An important lesson of the prisoner's dilemma is that the better I know you, and the more entwined our destinies are, the more likely we are to learn to cooperate with each other. Someone who knows he will never meet you again may have nothing to lose by betraying you; someone who will have to deal with you many times more may be deterred, for fear of retaliation. The future casts a longer shadow in the second case, and I sense that in many cases of the Netzercise this was exactly what happened.

What role does history play in the relationship between partners?

Additionally, it is assumed that each team or player remembers the past history of interactions with each of the other players, and that this history influences current decisions. The same is true in organizational behavior: our relationships with coworkers are affected by our histories with them. The Expectancy Theory of Motivation is a case in point. It posits that instrumentality plays a role in determining our motivation -- our expectation that good work will be instrumental in gaining us some kind of outcome (reward). Only our history with our boss or organization can provide information about that expectancy. In the Netzercise, likewise, teams used history to establish what the expected payoff of a particular "play" would be.

Do cultures, values and norms determine behaviors?

People develop strategies for behaving that are based on their expectations of what other people will do. This strategy generalizes beyond expectations of a single opponent's behavior to expectations of coworkers' behaviors in an organization. Organizational culture, defined as a form of social control that clarifies which behaviors and attitudes are more or less appropriate, may help individuals anticipate other members' likely reactions to their attitudes and behaviors. Again, we can point to participants in the Netzercise who successfully played the cooperative strategy, and we can speculate that in some Sets the prevailing culture was such that defecting was simply not a viable option. Furthermore, social values and norms in most cultures deem being trustworthy a positive trait.
Being trusted is seen not only as beneficial to the trustee at a personal and operational level; it is also perceived as critical for building and maintaining close relationships (see the business example above). However, we know from our Netzercise (at least in several of the Sets) that, group norms notwithstanding, some teams chose the "defect" option. Cooperation would have meant getting all teams to play "Y"; in some Sets, several teams consistently opted for the "X" play.

Is defecting just another name for social loafing?

The problem of cooperation arises when the pursuit of self-interest by all parties produces a bad outcome for all. This ties in nicely with our discussions of social loafing, or free-riding. In our Netzercise, an example of the "many-person prisoner's dilemma", it was in the best interests of all to contribute to the group goal; however, the way the payoffs were structured, any one team earned even greater rewards by free-riding. The prisoner's dilemma provides insight into organizational and team behavior because it shows how any one team becomes "helpless" against the actions of others. No matter how good your intentions, others, acting out of self-interest, can jeopardize the "greater good".

What's the downside of being on the high moral ground?

We saw, in our Netzercise, examples of teams who, out of moral obligation or naivete, cooperated on every move (see BUSA 1A, Team DIVA). Such teams will be -- and were -- ignominiously defeated. The other teams had no incentive to cooperate: in light of Team DIVA's consistently cooperative strategy, they opted to defect (confess, play "X") and earn the greater payoff on every move. Although their own sense of rationality probably prevented DIVA from doing the same, they would have done more for their own cause by adopting the "Tit for Tat" strategy mentioned earlier, in which cooperation is rewarded and defection punished. So long as the teams playing "X" could do so with impunity, they continued to do so. Punishment -- a "Tit for Tat" response from DIVA -- might have communicated the seriousness of DIVA's intentions.

Cooperation, accommodation, and conflict resolution strategies.

Again, we can draw lessons for organizational behavior from DIVA's strategy of cooperation. In light of the other teams' consistent "X" play, DIVA's "Y" looks like a classic case of accommodation. In our discussion of conflict resolution, we touched on resolution styles as a function of the extent to which one shows concern for others versus the extent to which one shows concern for oneself. By refusing to punish the other teams' consistently self-interested plays, DIVA communicated a message of accommodation -- high concern for others' needs (in this case, the group's needs) and low concern for its own. Studies of conflict resolution indicate that reliance on accommodation can cause the accommodating individual (or team) to be seen as weak and willing to be exploited. So it was with DIVA: they held the moral high ground, but in terms of organizational politics, they lost. This is consistent with prisoner's dilemma research (Axelrod, 1984) and social dilemma research (Komorita & Parks, 1995; Messick & Brewer, 1983), which defines cooperation as an act that maximizes the interest of the other (as an individual or as a collective) and defection as an act that maximizes self-interest -- confrontation, in the terminology of conflict resolution.
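For reference, that dual-concern framing is usually drawn as a two-by-two grid; the style labels below follow the standard organizational-behavior treatment rather than anything specific to the Netzercise:

                               Low concern for others    High concern for others
    High concern for self      Confrontation/competing   Collaboration
    Low concern for self       Avoidance                 Accommodation (DIVA's "Y")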
Must we punish defectors?

The moral of the BUSA 1A scenario is that cooperation is best, but only if defection is promptly punished. Axelrod coined the phrase "shadow of the future" to describe the force that keeps a player cooperating. The other teams in BUSA 1A felt no such force, and group or societal norms obviously had no effect on them.

Are some people just natural defectors?

Given the actions of the teams in BUSA 1A and COMP 3A who played "X", it is interesting that in Kelley and Stahelski's (1970) research, subjects with individualistic motives simply did not consider the possibility that other people could (or would) behave cooperatively, and thus always behaved and responded individualistically. Similarly, economists have been found significantly less likely to contribute to charities, and more likely to defect in prisoner's dilemma games, than noneconomists, despite their clear understanding of the benefits of cooperation (Frank, Gilovich, and Regan, 1993). This suggests that managers attempting to encourage cooperation should realize that individualists may require greater persuasion and may never actually adapt to collectivistic demands. We can only speculate whether the teams playing "X" had difficulty conceiving of another team opting for the cooperative alternative. The dilemma of everyday organizational life, just as in the prisoner's dilemma, is that the greater payoff goes to the individual who defects -- yet if everyone defects, the organization collapses.

What's so tragic about the commons: why can't I fish whenever and wherever I want?

The "tragedy of the commons" is a well-known variant of the prisoner's dilemma, widely invoked in discussions of natural resource management. In a hypothetical common pasture, each herder in the community follows the same logic: "I will receive the short-run benefit from increasing my herd by one animal; everyone will share any eventual cost of diminished pasture; therefore I will add another animal to my herd." Overstocking is thus inevitable. Applying this notion to fish stocks on Canada's east and west coasts gives a fairly clear indication of what went wrong. Framed in the terms of our Netzercise: why would a fisher play "Y" and risk being suckered by others playing "X"? So all fishers play "X", and the stocks are depleted. Another variation on this theme: suppose a person, instead of just selfishly taking from the common pasture, actually contributed to it. Say this cooperative person concluded that with proper management, restraint in use and a little fertilizer, the pasture would reward everyone with a return at least four times the amount invested -- that is, if everyone contributed fertilizer, everyone would get back four times what they put in. Say there are 20 users, and no one else contributes anything. Then the return to the one community-spirited individual is four times his own investment divided by 20 -- a losing proposition! (A worked version of this arithmetic appears below.) Contribution pays only where there is mutual trust and cooperation, or where regulations govern individuals' behavior. From an organizational perspective, how do we encourage employees to act in a manner that benefits the entire organization if the reward structure rewards only self-interested behavior?
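The fertilizer arithmetic works out as follows in a short Python sketch; the dollar figure is an illustrative assumption, not a number from the text:

    # The commons arithmetic from the passage above; the $100 contribution
    # is an illustrative assumption.
    USERS = 20
    MULTIPLIER = 4          # each dollar invested returns four dollars to the pool
    contribution = 100      # what one community-spirited user puts in

    # If everyone contributes, each user's share of the pooled return:
    everyone = USERS * contribution * MULTIPLIER / USERS   # 400 back on 100 in
    # If only one person contributes, the return is still split 20 ways:
    alone = contribution * MULTIPLIER / USERS              # 20 back on 100 in

    print(everyone - contribution)   # 300.0 -- mutual cooperation pays
    print(alone - contribution)      # -80.0 -- the lone cooperator loses

This is the same sucker's payoff that drives defection in the two-person game, scaled up to twenty players.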
Again, we are faced with the dilemma of the social loafer: if the reward structure rewards individualistic, self-interested behavior, how do we encourage behaviors that contribute to the overall well-being of the organization? Do piece rates, for example, encourage an emphasis on quality?
Is the Prisoner's Dilemma relevant to "real" life?

Business life is rife with prisoner's dilemmas, including the employer-employee relationship and the one between vendor and customer.