Managing Five Paradoxes of 360-Degree Feedback
Jai Ghorpade

Executive Overview

The performance feedback method known as 360-degree feedback has gained wide popularity in the corporate world to the point of being nearly universal among Fortune 500 companies. A 360-degree feedback program enables organizational members to receive feedback on their performance, usually anonymously, from all the major constituencies they serve. Unlike the traditional approach to performance counseling, the 360-degree feedback concept does not rely solely on the supervisor as the source of information. Instead, it enlists superiors, peers, subordinates, suppliers, and customers in providing individuals with feedback on different aspects of their performance. Feedback recipients can also rate their own performance and compare it with feedback provided by others. Although the 360-degree method is widely used, its application is filled with paradoxes. While it delivers valuable feedback, the 360-degree concept has serious problems relating to privacy, validity, and effectiveness. This article identifies five paradoxes of 360-degree feedback programs and offers suggestions for managing them.

Significant management problems, according to Charles Handy, come in the form of paradoxes.1 [Note: these numbers appearing at the end of sentences refer to the list of references at the end of this article] A clean solution to a problem, without any accompanying dysfunctional effects, is possible only in theory. In the practical, dynamic world of organizations, solutions to significant managerial problems inevitably bring with them consequences that are contradictory and inconsistent.2 The art of management consists of accepting paradoxes as endemic to organizational life and finding ways of coping with them to attain desired ends. Rather than shy away from a particular idea, theory, or method because it is accompanied by an undesirable consequence, the manager needs to find ways of extracting the positive and minimizing or controlling the negative.

This advice from one of the leading philosophers of management is well worth heeding with regard to 360-degree feedback, a performance improvement method that is rapidly gaining acceptance in the corporate world but that is fraught with paradoxes. The rapidly expanding list of 360-degree feedback users currently includes leaders of the corporate sector such as AT&T, Exxon, GE, Amoco, IBM, Caterpillar, Levi Strauss, and Shell Oil. Twenty-two of Fortune's 32 most admired companies were using upward or 360-degree feedback as of 1994. By 1996, 360-degree programs had become nearly universal among Fortune 500 companies, which spend hundreds of millions of dollars annually to support them.3

The attraction of the 360-degree concept for American industry is easy to understand. During the past two decades, U. S. corporations have been involved in a massive program of restructuring to cope with the demands of the emerging global marketplace. Old products are being discarded and new ones are being created to service the needs of increasingly discriminating consumers. Internally, there is a move from the old bureaucratic style of management toward a flat organization that requires active participation from the rank and file and that is more suited to the rapid response industry now has to make to changes in product demand and supply markets.4

In this atmosphere, the 360-degree feedback concept has much to offer. Unlike the traditional performance appraisal model, in which superiors evaluate subordinates, the 360-degree approach does not rely solely on the superior to provide feedback to the employee. Instead, it enlists multiple constituencies to provide feedback to selected organizational members. These constituencies include superiors, peers, coworkers in support areas, subordinates, internal customers of the unit's work, and external customers of the organization's products. In this process, the feedback recipient is expected to evaluate his or her own performance on the selected behavioral dimensions. This self-evaluation is then compared with that provided by the other feedback providers. The recipient is encouraged to use the feedback to improve performance and to make a greater effort to blend his or her contributions with the needs of the group. This linking of individual performance with feedback from all relevant constituencies fits well into the emerging team-based workplace.5

Another difference from traditional performance appraisal is that 360-degree feedback is supposed to be given anonymously. Research has demonstrated that anonymous feedback is more honest and closer to what raters actually feel about the feedback recipients. Appraisers whose identity is known to the feedback recipients give higher ratings than those who are anonymous.6

Although any organizational member whose job interacts with others can participate in feedback exercises, 360-degree feedback is typically provided to managers, the only members with a full circle of 360-degree feedback providers. A survey of the members of the Society for Human Resource Management found that 35 percent of the organizations used 360-degree feedback primarily for executives, and 37 percent for upper middle managers. Middle and first-level managers also were included, but to a lesser degree (23 percent and 18 percent, respectively).7

At this stage, no single corporate role is typically in charge of the 360-degree process. The person in charge could be the feedback receiver's superior, the human resources department, or even a committee.8

While the 360-degree concept has much to offer and many successes have been documented,9 there are also stories of confusion and disappointment. Many 360-degree programs are carried out in the absence of a strategic context, and fail to focus on contributions that they can make to a firm's competitive advantage.10 There is little consistency in what is being done: 360-degree feedback programs range from minor deviations from the traditional vertical form of performance appraisal to highly sophisticated feedback systems that systematically gather, analyze, and disseminate behavior data to managers, professionals, and even rank-and-file workers functioning in teams.11

Many organizations adopt 360-degree feedback without clearly defining the mission and the scope of the program. Consequently, employees who receive the feedback are left to figure out for themselves how to cope with the results and tend not to develop goals and action plans following 360-degree applications.12 One study concluded that while such programs are popular, in many cases little more than lip service is paid to them.13 Furthermore, there is discouraging evidence regarding the effectiveness of feedback-intervention programs as tools in bringing about improvements in performance. A review of over 600 feedback studies found that only one-third reported improvements in performance. Another third reported negative changes in performance, while the final third reported no impact.14

In their haste to gain the advertised benefits of 360-degree feedback, organizations may not be sufficiently aware of the problems that often accompany its adoption.15 Failure to recognize the paradoxes that often occur can lead to disillusionment, reduce the value of the exercise, and confirm the lip service that tends to be paid to 360-degree results.

Five paradoxes are discussed in this article. They are organized around four issues involved in the adoption of 360-degree feedback: Objectives, sources of behavior and performance data, methods of data gathering and feedback, and selection of a program administrator. In each case, the paradoxes and their sources are identified, and suggestions are offered for aiding managers to cope with them.

Objectives of 360-Degree Feedback

A firmly established tenet of 360-degree feedback is that such feedback is to be used for developmental rather than appraisal purposes.16 Yet a paradox is associated with this tenet:

Employee Development Paradox

The primary objective of 360-degree feedback is to develop rather than to appraise the participating organizational members. In practice, however, the 360-degree process gets entangled with the appraisal process, thus creating the potential for confusion and erosion of its usefulness as a developmental tool.

There are two reasons for this. First, the very nature of the process invites involvement and hence sets the stage for information seepage. Under a typical 360-degree program, groups of individuals from different levels are invited to comment anonymously on the performance of one or more individuals. However, such discussions sometimes go in directions other than those anticipated by the administrators. Data gathered for developmental purposes can leak out and become a part of the appraisal process in several ways. The feedback providers, particularly if they are untrained, may vent their own frustrations and problems, be unable to distinguish between individual failings and organizational defects, and generally raise issues that go beyond the performance of the individuals involved.

The ratees themselves can be another source of leaks. Feedback receivers who score high on achievement motivation and perceive high value to the appraisal feedback are more likely to discuss their appraisal results with appraisers.17 Ratees who are fearful of the process may attempt to force the providers to reveal themselves by intimidation. For example, at the Department of Energy's Operations Office in Albuquerque, N. M., some employees filed Federal Privacy Act requests to discover the source of certain ratings.18

The second reason why 360-degree feedback can get entangled with performance appraisal has its roots in the economic and political realities of organizations. Since 360-degree feedback appraisals are expensive, companies want to get their money's worth by addressing multiple objectives.19

This conflict between the developmental and appraisal objectives is a true paradox that has to be managed. The solutions depend on preferences of management with regard to objectives. Here are two solutions that assume different objectives: developmental and appraisal.

Solution 1: If possible, keep the 360-degree program as a developmental tool, and formulate clear rules for information sharing. To cope with the problem of information seepage under anonymous feedback, managers must recognize that individuals will attempt to gain additional information and that leaving them to do so on their own is counterproductive.20 One way to encourage constructive information sharing is to allow feedback receivers to solicit additional information on specific issues that concern them. To protect the providers' identity, the information can be given in writing.

A second way to promote constructive information sharing is to encourage peers and other coworkers to share and clarify their views with the feedback receivers on a voluntary basis, and under the guidance of their supervisors. This can be done face-to-face, or indirectly through the supervisor. Face-to-face meetings break the anonymity of the raters, but the confidentiality of ratings given by individual raters can still be protected by requiring the attendees to focus on issues and themes that emerged from the feedback rather than on the entire report. Such personal confrontations, however, are risky, and should be undertaken only where trust levels are high and the culture supports openness.

Sharing and clarifying may also be done indirectly by the supervisor by interviewing the raters, individually or in groups, and asking them to respond to specific questions raised by the recipients. This offers the additional potential benefit of controlling a serious problem facing the entire process: making the feedback providers act responsibly in performing their roles as raters.21

Solution 2: Combining 360-degree feedback with performance appraisal should be pursued gradually, as part of a wider performance management plan. If this course of action is chosen, management needs to face the fact that the political climate often shifts when peers evaluate one another's performance. When 360-degree feedback is tied to performance appraisal, and hence to promotions and wage increases, individuals and subgroups may gang up to sabotage the process.22 Damage can be minimized by beginning with a 360-degree feedback program and then moving gradually to appraisal. This is viable, however, only if the employees view the process as fair, non-threatening, and beneficial.

Alternatively, traditional performance appraisal programs can gradually widen the base of appraisals by including raters other than superiors. For example, if peers are added to the list of raters and the addition is accepted as beneficial by the ratees, then they may be willing to broaden the base further by including their subordinates.23

Sources of 360-Degree Feedback Data

Two critical assumptions are made in the 360-degree feedback literature with regard to sources of information. The first is that including multiple sources widens the scope of information that is uncovered and sheds light on different facets of performance. The second is that anonymous ratings yield feedback that is more honest, and hence may be more valid. While the practice of including multiple constituencies anonymously in the 360-degree process is now an integral part of the technique, two paradoxes arise:

Multiple Constituents Paradox

Involvement of multiple constituents in the 360-degree feedback process broadens the scope of information provided to the receiver. However, more information does not necessarily yield better feedback.

Anonymous Ratings Paradox

Anonymous ratings are more honest than signed ratings. However, honest ratings may not necessarily be more valid.

The need for broadening the sources of information in performance feedback and evaluation has been recognized for some time.24 There is no doubt that involving multiple constituents broadens the scope of information that is gathered.25 However, a mere increase in the scope of information may not necessarily yield data that are more accurate, impartial, and competent than those provided by the individual manager serving as evaluator. Honest ratings, the major reason for making the process anonymous, are not necessarily more valid.

Inaccurate, biased, and even self-serving information can make its way into 360-degree feedback because of informational, cognitive, and affective causes.26 At the informational level, discrepancies in evaluation among multiple sources may arise when there is no clear idea what is expected of ratee roles, no clear criteria according to which they are judged, and no opportunities for observation. Whether the ratings are honest is not the issue; these discrepancies arise simply because one or more of the parties lacks the information needed to provide accurate ratings, or has the wrong information about the roles in question.27 For example, managers from large corporations who are recruited by small businesses may bring with them expectations about roles that do not match those of their prospective bosses.

At the cognitive level, coping with the complexity of the task is often a problem. Appraising a person in a role is a complex activity, requiring the evaluator to rationally acquire, store, retrieve, and integrate complex sets of information about the person, the job, the outputs, and time frames, and then to use this information to pass judgment. Evaluators typically simplify the process by forming an overall impression of the ratee rather than by focusing on specific behaviors; they rate according to this impression, discounting any inconsistent behaviors.28 Honesty is not an issue in such cognitive structuring: The raters might honestly believe that their impressions reflect reality.

Affective constraints, the third general cause of evaluation discrepancies, stem from psychological and political factors. Individuals may distort ratings, unconsciously or consciously, to protect their self-concept or to serve other personal ends. At the unconscious level, individuals may rate themselves higher than their peers to preserve their favorable self-esteem. This in turn may lead to a self-serving bias: the tendency to take personal credit for successful performance, but to assign responsibility for failure to external causes.29 Unconscious distortion of ratings of others may also occur because of hidden race, gender, age, or personality biases held by the raters.30

Conscious distortion of ratings, the second type of affective bias, has its roots in the political aspects of organizational life. As Burns pointed out 30 years ago: "Members of a business concern are at one and the same time cooperators in a common enterprise and rivals for the material and intangible rewards of successful competition with each other. The hierarchical order of rank and power, realized in the organization chart, which prevails in all organizations, is both a control system and a career ladder."31

Given this duality of roles, it is normal for people in organizations to take into account the negative consequences for themselves of the ratings that they give to others. One of these consequences may be unpleasant future personal relations with coworkers.32 Another might be a reduction in competitive status: The assignment of good ratings to peers, even when they are deserved, may reduce the relative ranking of the rater. Deliberate inflation or deflation of ratings may occur because of feelings of friendship or animosity, to preserve departmental territory and resources, or when evaluators are confronted with decisions involving allocation of scarce resources, such as distributing limited promotions and pay raises.33

The issue of honesty in ratings needs to be more fully confronted. It is accepted axiomatically in the 360-degree literature that anonymity enhances honesty in ratings,34 and considerable support is documented for this proposition.35

While conceding that anonymity can enhance honesty, it would be risky to equate honesty with validity of ratings. Untrained raters might inadvertently fall prey to the rating errors that industrial psychologists have warned about over the years, namely halo, leniency, strictness, and central tendency. Halo refers to assigning ratings on specific criteria on the basis of a general impression of the ratee; leniency refers to assigning higher than deserved scores; strictness (also referred to as severity) is the opposite of leniency and refers to assigning ratings lower than deserved; and central tendency refers to clustering ratings near the center of the scale.36

From a managerial perspective, the challenge that these paradoxes raise is twofold: how to evoke constructive responses from the multiple constituents, and how to incorporate the various facets of information into a feedback package that will enable the recipients to improve their performance. Some of the solutions to these issues are technical and will be covered in the next section, which deals with methods. At this stage, managers might consider the following suggestions for controlling these two paradoxes.

Solution 1: Get away from the assumption that increasing the number of rater constituencies automatically improves the quality of the feedback provided, and focus instead on the relevance of the feedback from a developmental perspective. As London and Beatty point out, the customer may not always be right or consistent in evaluating the performance of a work unit and its leadership.37 In order for the feedback recipient to fully benefit from multiple data sources, the information gathered must add value beyond that acquired through traditional performance appraisal. Effective feedback, regardless of source, should focus on the job as the incumbent performs it within the context of the organization, rather than on a role in an abstract sense; should focus on concrete behaviors that are linked to specific events and incidents, rather than on personal traits; should be free of racial, sexual, and other biases; and should provide concrete recommendations for change.38

Solution 2: Provide the raters with guidance and training, including a description of the major competencies expected of the role. This is particularly relevant in a full 360-degree application, where different constituencies might be expected to have varying degrees of familiarity with the job. For example, United Jersey Bank Financial decided to use 360-degree feedback as part of a strategic restructuring in pursuit of a vision of total customer satisfaction. They began by redefining five managerial jobs in units that required close customer contact. Competencies judged to be relevant for the future were incorporated into the 360-degree process. The ratings were then correlated with objective measures of bank performance. The feedback recipients were then provided with training to enable them to improve their behaviors within a specified time frame.39

Solution 3: Provide potential raters with opportunities to detect their own rating biases, if any, before they actually rate an organizational member. This is particularly relevant in instances where structured numerically scored scales are used because they are highly subject to unconscious rating errors of halo, leniency, central tendency, and strictness. These types of biases can be uncovered by getting a group of potential raters to use the instrument in a trial session by focusing on descriptions of hypothetical candidates who vary widely in their behavioral profiles. The common practice is to calculate means and standard deviations of the ratings and to display the results in comparative terms on a chalkboard or transparency. Each rater knows his or her own numbers, but the group as a whole sees only the distribution. Both of these statistics enable individual raters to know where they stand in reference to the group and thus to detect whether they habitually grade high, low, or in the middle in comparison with the others.40
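As a rough illustration of this trial-session arithmetic, the sketch below computes each rater's mean and standard deviation against the group's distribution and flags possible leniency, strictness, or central tendency. The raters, scores, and cutoffs are entirely hypothetical; a real session would use the organization's own instrument and judgment about thresholds.

```python
import statistics

# Hypothetical trial-session data: each rater scores the same set of
# hypothetical candidate profiles on a 1-5 scale.
trial_ratings = {
    "rater_a": [5, 5, 4, 5, 5, 4],   # habitually high: possible leniency
    "rater_b": [2, 1, 2, 2, 1, 2],   # habitually low: possible strictness
    "rater_c": [3, 3, 3, 3, 3, 3],   # no spread: possible central tendency
    "rater_d": [1, 3, 5, 2, 4, 3],   # uses the full scale
}

# Pool every rating to establish the group's overall mean and spread.
all_scores = [s for scores in trial_ratings.values() for s in scores]
group_mean = statistics.mean(all_scores)
group_sd = statistics.stdev(all_scores)

for rater, scores in trial_ratings.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    flags = []
    if mean > group_mean + 1.0:          # assumed cutoff for leniency
        flags.append("leniency")
    if mean < group_mean - 1.0:          # assumed cutoff for strictness
        flags.append("strictness")
    if sd < 0.5:                         # assumed cutoff for central tendency
        flags.append("central tendency")
    print(f"{rater}: mean={mean:.2f}, sd={sd:.2f}, flags={flags or 'none'}")
```

Displaying only the group distribution, as the article suggests, lets each rater privately compare his or her own numbers against it without exposing individual rating patterns to the group.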

These suggestions are relevant only if it can be assumed that the raters are in fact willing to act in good faith. No amount of training, however, is going to be of any help if the organizational climate is politically charged and trust is low. In such instances, rather than focusing on individuals, the organization would be better off using the logic of the 360-degree process to assess the standing of the organization itself in the eyes of its various constituencies.

Methods of Data Gathering and Feedback

Data needed for 360-degree feedback can be gathered in a variety of ways. Programs can vary in the form of information gathered (e.g., quantitative versus qualitative); the amount of structure in the procedure (scaled or open-ended questions) used to gather the data; and the extent to which the information gathered is context-specific versus generic. A context-specific method would focus on behaviors relevant to the immediate work unit and its culture, while a relatively generic 360-degree method would focus on generalized behaviors that have been judged to be common to the role in question, either through theory, industry experience, or expert opinions.41

The three choices with regard to method noted above are not mutually exclusive. In theory, many combinations are possible. For example, an instrument can be partially structured with a combination of scaled and open-ended questions. Generic information (e.g., industry practices) can be combined with company variables in thinking about its relevance to the company.

Even though the three method choices are not mutually exclusive, a decision nevertheless has to be made about which combination to use in a particular context. In making this decision, a paradox is encountered:

Structured Feedback Paradox

Quantitative and structured feedback based on generic behaviors is easy to acquire, score, and disseminate. However, such data may not have much relevance to a particular workplace and may even yield misleading results.

Structured, quantitative, and generic feedback offers two advantages. First, this approach is fairly inexpensive. For example, the Center for Creative Leadership sells its multirater assessment tool, Prospector, for $195 for a set of 12 questionnaires: one for self-rating and 11 for coworker ratings.42 Second, the skills needed internally to administer such a program are minimal. Basically, the role of the HR representative is to perform clerical tasks: ordering the form, administering it, and circulating the numerical scores.

Despite these benefits, this combination of quantitative ratings acquired through a structured questionnaire containing generic role behavior information can raise some very serious problems with regard to fairness and accuracy. To begin with, quantitative data of a generic type can be difficult to interpret. Suppose a manager of an emergency ward in an inner city hospital receives a mean score of 3.7 on a five-point scale of effectiveness in explaining to subordinates their duties and responsibilities. Suppose further that the industry average for this behavior (supplied by the instrument publisher) is 4.2. Is this manager to be viewed as inferior because of a lower score than the industry average? Our ability to pass any kind of judgment would hinge on the extent to which the workplaces are comparable with regard to employee readiness to receive direction, the communication media made available to the manager, and a host of other environmental variables.

There is also the problem of controlling for the rater tendencies discussed earlier. To assign a substantive value with confidence to a statistic resulting from a 360-degree administration, the ratings would need to be free from the informational and cognitive errors noted earlier. This is very difficult and is frequently not done in practice, as illustrated by the case of a manager who was told by her peers that her informing behaviors, while frequent, were not sufficient. This created a problem of interpretation: how much more can someone use a behavior that is already employed frequently? At the information sharing and clarifying meeting, it was discovered that the weekly reports the manager had stopped circulating, because people were complaining about being overloaded with paper, were the very documents they wanted to see.43 Their reservation was not with her ability to engage in informing behaviors, a generic variable, but with her failure to circulate a particular document, a context-specific variable.

Consider now what it would take for a company to move toward the context-specific end of the continuum. As London and Beatty point out, 360-degree feedback ratings should be made on performance dimensions that are strategic to organizational success and thus relevant to the job.44 The HR representative would have to have both the time and the ability to analyze tasks and behaviors involved in the jobs of the persons selected to participate in the program, translate these into measurable behaviors, and provide feedback that is specific to the context. This would no doubt increase the cost, both in terms of time and money needed to develop and administer such a program.

How does a company balance these conflicting options? Rather than see them as black-and-white choices, compromises are possible. An operational procedure for constructing a 360-degree feedback instrument that balances the conflicting options of the structured feedback paradox is given in Figure 1.45 The preliminary step, as in all 360-degree programs, is to choose the feedback receivers and providers. Simultaneously, a panel of Subject Matter Experts (SMEs) is formed, consisting of the feedback receivers and providers, their peers, managers, subordinates, and others who have a stake in the jobs held by the feedback receivers. This is followed by selection of generic and structured behavior inventories. These are examined and rated by all the SMEs, including the feedback receivers and providers, for context relevancy. Only those items that are judged to be context-relevant are retained, supplemented by other items of interest to the organization. This collection of context-relevant items is then formatted into a semistructured questionnaire allowing for feedback that is quantitative and qualitative, structured, and context-specific. In keeping with the earlier discussion of the paradoxes dealing with employee development, multiple constituents, and anonymous ratings, raters are asked to provide ratings as well as to explain them.
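The SME screening step described above can be sketched in a few lines. The inventory items, panel ratings, and relevance threshold below are all hypothetical; a real application would draw items from published inventories and set its own cutoff.

```python
# Hypothetical generic inventory items, each with the panel's 1-5
# context-relevance ratings (one rating per SME).
generic_items = {
    "Explains duties and responsibilities clearly": [5, 4, 5, 4, 5],
    "Maintains industry certifications": [2, 1, 3, 2, 2],
    "Shares weekly status reports with the team": [4, 5, 4, 4, 3],
}

RELEVANCE_THRESHOLD = 3.5  # assumed cutoff on the 1-5 relevance scale

def screen_items(items, threshold):
    """Keep items whose mean SME relevance rating meets the threshold."""
    retained = []
    for item, sme_ratings in items.items():
        if sum(sme_ratings) / len(sme_ratings) >= threshold:
            retained.append(item)
    return retained

context_relevant = screen_items(generic_items, RELEVANCE_THRESHOLD)

# Each retained item becomes a semistructured questionnaire entry:
# a numeric scale plus space for an explanatory comment.
questionnaire = [
    {"item": item, "rating": None, "comment": ""} for item in context_relevant
]
```

Because the feedback receivers sit on the SME panel, the items that survive this filter carry their endorsement, which is the source of the buy-in the article notes later.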

The relevance of the procedure in Figure 1 for controlling the conflicts of the structured feedback paradox is evident in the above discussion. Nevertheless, here are some specific applications:

Solution 1: The problem of balancing the quantitative and qualitative information can be handled by asking the raters to provide both forms of information on the performance dimensions included in the feedback instrument. Thus, for each rating item on the instrument, space would be provided for qualitative comments. It can be pointed out to potential raters during the training sessions that their feedback will be taken more seriously if they provide concrete examples of behaviors, incidents, and events that led them to the ratings. In fact, a more stringent rule can be imposed: Ratings will not be counted unless they are explained.
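The stricter rule proposed here, counting a rating only when it is explained, amounts to a simple filter over the responses. The field names and sample responses below are invented for illustration:

```python
# Hypothetical rater responses: a numeric rating plus the qualitative
# comment required by the stricter rule.
responses = [
    {"rating": 4, "comment": "Walked the team through the new intake process."},
    {"rating": 2, "comment": ""},  # unexplained, so it is discarded
    {"rating": 5, "comment": "Credited a subordinate's idea in the staff meeting."},
]

# Only explained ratings count toward the feedback summary.
counted = [r for r in responses if r["comment"].strip()]
average = sum(r["rating"] for r in counted) / len(counted)
```

Note that the discarded low rating changes the average the recipient sees, which is exactly the intended effect: unexplained scores carry no weight.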

Solution 2: The conflict stemming from the second continuum, context-specific versus generic feedback, can be managed by combining both types of items in the instrument. A pure context-specific 360-degree instrument consists of a set of critical behaviors arrived at through a systematic job analysis procedure from scratch, without any use of existing behavior inventories. While such a procedure has its appeal, it is unnecessary (and even wasteful) in today's context to begin from scratch for every job analysis. An alternative approach is to use inventories that are available in published form as starting points for arriving at context-relevant behaviors. These can then be supplemented with context-relevant behaviors not found in the inventories.

Organizations willing to entertain this approach have a wealth of materials to draw on. There are at least five types of inventories available: Descriptions of generalized human behaviors relevant to the work context;46 descriptions of tasks typically performed in thousands of jobs in today's economy;47 documentation of skills involved in performing managerial and other complex jobs;48 abstract descriptions of human skills and abilities relevant to some degree in all work contexts;49 and inventories and instruments assembled specifically by various publishers for 360-degree feedback.50 It is relevant to note that developers of the last set draw heavily on the other four for their materials.

The end result of this process is an instrument capable of generating feedback that is both context-relevant and inclusive of behavior items of general applicability. This procedure should also enlist the support of feedback receivers, as they would have had a part in generating the instrument.51

The final step in the process is actually providing the feedback. Unlike the typical 360-degree program, however, the procedure outlined in Figure 1 provides for feedback both to the receivers and providers. The receivers get to know how they are perceived by their constituents and the providers learn about how well they performed their role as raters. Data relating to undesirable rating tendencies, such as halo, central tendency, leniency, and strictness, are revealed at this stage. Also, systematic and structured analysis of the contents of the qualitative comments can be done under different dimensions of performance (e.g. technical, interpersonal, and managerial competencies) and the results relayed to the feedback receivers.

Appointing the 360-Degree Administrator

A host of administrative challenges arises when the decision is made to engage in 360-degree feedback. A critical issue here is who should gather, process, and have custody of the feedback data. A paradox is encountered relating to this issue:

Managerial Involvement Paradox

Managerial involvement in gathering and processing 360-degree feedback data is legitimate and inevitable. However, involving persons in authority may taint the process and reduce its credibility.

While recognizing the legitimacy of managerial involvement in 360-degree administrations, it is also important to recognize that such programs may cause fear. They invite judgments on individual performance by a wide range of participants, and violations of confidentiality cannot be controlled. In fact, so great is the potential of these programs for causing fear that a Price Waterhouse survey referred to earlier advised management to plan for paranoia.52

Given this possibility, managers need to make the program appear as impartial and fair as possible. Here are two potential solutions to selecting a 360-degree administrator:

Solution 1: Assign the administrative tasks to a managerial role such as manager, HR director, or supervisor. While this is a common practice, its appropriateness within a particular context hinges entirely on the extent to which the person selected is trusted by the feedback receivers.

Solution 2: Assign the administrative tasks to a person whose involvement in the process can enhance trust. Given the inherent potential of 360-degree programs for causing fear, the person in charge of data gathering and analysis must be perceived as trustworthy53 by the feedback receivers and providers, and must have the technical competence to help the parties resolve the method issues encountered in conducting 360-degree programs, such as those depicted in Figure 1. Such individuals are hard to find in the chain of command of organizations. Persons who occupy conflict management roles such as ombudsmen, mediators, and organizational development professionals may have greater credibility, and hence may be considered for this role. In one unionized company where multisource feedback was to be provided to supervisors, nominations for this role were sought directly from the supervisors themselves. The supervisors chose to delegate the administration of their program to a committee consisting of an HR representative, the union steward, and a supervisor from another area who was not a part of the feedback program.

If a 360-degree administrator cannot be found internally, outside consultants can be hired. However, the fees for such help are usually paid by management, and hence consultants may not be trusted by employees.54 Therefore, if this avenue is chosen, the consultant should be retained on a long-term basis and continued only if he or she is able to build trust among organizational members. This, in fact, was the avenue chosen by the company in the case presented below.

How One Company Coped with the Five Paradoxes

The organization in which the 360-degree feedback took place began as an assembly plant of an Asian multinational electronics manufacturer located on the West Coast, and then expanded its operations to include assembly and manufacture of various components. Starting with about 100 workers, the firm grew to over a thousand employees within five years. The growth, however, came at a price. The management team, which initially consisted of an Asian general manager and two Americans overseeing various work units and their supervisors, quickly grew to include 15 managers, a mix of Asian and U.S. nationals. While the company continued to grow, there was considerable tension within the managerial ranks at all levels. Rapid changes in technology, the increasing ethnic mix of the labor force, and differing management styles led to process inefficiencies, high turnover, and worker dissatisfaction. A labor union initiated an organizing drive.

The general manager decided to act. He assembled the senior managers, the HR director, and an organization development (OD) consultant who had worked for the company for some time performing developmental and conflict resolution activities. Many options were explored, including organizational restructuring, executive development, and sensitivity training. After much deliberation, it was decided that a good starting point would be for the top managers to try to understand differences within their own rank. Multisource feedback was selected as a way of initiating this process. The OD consultant was placed in charge of this effort, as he had done projects for just about all the managers in the group and they had gotten to know him and felt comfortable with him.

The 15 managers were selected as the feedback receivers of this program. Right at the outset, the general manager committed himself publicly to making the feedback effort developmental rather than evaluative. This proved an important factor in unfreezing communication links.

The OD consultant then began the task of assembling the behaviors that were to form the basis of the feedback. First, the 15 managers were interviewed individually, in an open-ended manner, about their personal view of the situation. This yielded about 30 context-specific behaviors focusing on peer relations and dynamics. Examples included: keeps commitments made to peers, keeps me informed about changes that concern me, cooperates in getting things done, trustworthy, professionally competent, and knows job well. To this list were added 15 generic managerial behaviors from the management and leadership literature, such as planning and organizing, controlling, problem solving, and interfacing.

The 15 managers were then asked to rate the behaviors for relevance to their roles and situations on a scale that posed the following question: To what extent is this peer behavior or trait of concern to you in your job situation? The rating was done on a five-point Likert-type scale, where 1 was anchored to "No concern at all" and 5 to "A great deal of concern."

The mean scores of each of the 45 behaviors were tabulated and ranked. Consultation with the managers and the general manager led to the elimination of 15 items that had received low scores (below 3) and had high standard deviations, signifying low agreement among raters. As might be expected from the discussion of Paradox 4, in this highly charged emotional atmosphere these turned out to be mostly generic managerial behaviors such as planning, organizing, and controlling. The 30 remaining behaviors reflected the real problems faced by the managers. These behaviors were then anchored to a peer description scale whose lead question was: How descriptive is the following behavior or trait of this manager? Again, a five-point Likert-type scale was used, with 1 labeled "Not at all descriptive" and 5 "Very descriptive."
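The screening arithmetic in this step, items ranked by mean relevance and dropped when agreement is poor, can be sketched as follows. The ratings and the standard-deviation cutoff are hypothetical; only the below-3 mean rule comes from the case.

```python
from statistics import mean, stdev

# Hypothetical relevance ratings (1 = "No concern at all",
# 5 = "A great deal of concern") from 15 raters for a few sample items.
relevance = {
    "Keeps commitments made to peers": [5, 4, 5, 5, 4, 5, 4, 5, 5, 4, 5, 4, 5, 5, 4],
    "Keeps me informed about changes": [4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4],
    "Planning and organizing":         [2, 4, 1, 2, 5, 2, 1, 4, 2, 1, 5, 2, 1, 4, 2],
}

def screen_items(ratings, min_mean=3.0, max_sd=1.0):
    """Keep items raters agree are relevant: high mean, low spread.

    max_sd is an illustrative cutoff; the case reports only that items
    with means below 3 and high standard deviations were dropped.
    """
    kept, dropped = [], []
    for item, scores in ratings.items():
        m, s = mean(scores), stdev(scores)
        (kept if m >= min_mean and s <= max_sd else dropped).append(item)
    return kept, dropped

kept, dropped = screen_items(relevance)
```

With these invented numbers, the generic "Planning and organizing" item lands in the dropped pile, mirroring the pattern the case describes.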

Multisource ratings were then gathered. The general manager was rated by all the senior managers, who in turn were rated by the general manager and all their peers. Along with rating their peers, participants provided a self-rating on the 30 items. They were also asked to justify all their numerical ratings with qualitative comments and explanations. Because much emotion had built up over the months preceding this exercise, participants poured out their feelings, both positive and negative. As might be expected, however, not all feelings expressed were useful or constructive. The OD consultant placed the qualitative comments on cards and sorted them into piles containing similar sentiments. These were summarized, and the wording was changed to protect individual identities.

Feedback to the participants, who included the general manager, was provided individually by the OD consultant. Feedback sessions lasted from one-and-a-half to three hours. During these meetings, the OD consultant began by providing the recipient with a statistical summary of the results: the mean scores received from peers on the 30 items, a comparison of peer and self scores, and a summary of qualitative feedback from peers. Participants were asked not to discuss the results personally with anybody and to ponder the feedback on their own for two weeks, with the OD consultant standing by to help with clarification and process issues. At the end of the two weeks, the general manager called a meeting. Its purpose was not to discuss substantive findings of the exercise but rather to assess feelings about the value of the process and to decide on a future course of action.
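The statistical summary given to each recipient, peer means per item set alongside the self score, can be sketched in a few lines. All item names, scores, and the one-point gap threshold below are invented for illustration.

```python
from statistics import mean

# Hypothetical data for one recipient: peer ratings per item (1-5)
# and the recipient's own self-rating on the same items.
peer_scores = {
    "Cooperates in getting things done": [2, 3, 2, 3, 2],
    "Professionally competent":          [4, 5, 4, 4, 5],
}
self_scores = {
    "Cooperates in getting things done": 5,
    "Professionally competent":          4,
}

def feedback_summary(peer, self_, gap_threshold=1.0):
    """Per item: peer mean, self score, and whether the self-peer gap
    is large enough to merit discussion (threshold is illustrative)."""
    report = {}
    for item, scores in peer.items():
        peer_mean = mean(scores)
        report[item] = {
            "peer_mean": round(peer_mean, 2),
            "self": self_[item],
            "overrated_self": self_[item] - peer_mean >= gap_threshold,
        }
    return report

report = feedback_summary(peer_scores, self_scores)
```

Flagging large self-peer gaps gives the facilitator a concrete starting point for the kind of clarification discussions the case describes.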

All participants stated that they had gained some value from the exercise and that they wanted the process to continue. Four follow-up actions were taken: (1) process meetings: all managers agreed to meet twice a month for a period of three months to explore quality issues, without getting into personal issues; (2) expansion of feedback sources: all the managers asked for and received feedback from the unit heads and supervisors under their command, using the process described above; the supervisors in turn received feedback from their peers and employees; (3) clarifying meetings: in a few cases, meetings were arranged among peers to discuss substantive issues raised in the feedback exercise, with the involvement of the OD consultant; and (4) individual development: managers used the peer and subordinate feedback to seek out specialized reading materials, training programs, and other forms of professional development assistance.

The employee development paradox was resolved by the general manager's firm commitment to making development the objective of the 360-degree feedback. The multiple constituents and anonymous ratings paradoxes were managed by having the OD consultant serve as the filter of the feedback provided to the managers. While the filtering worked as intended in this case, the potential exists for such power to be misused; it may be useful to involve more than one person in the filtering process to enhance trust. The structured feedback paradox was managed by involving the recipients in selecting the behavior items included in the feedback instrument. And the managerial involvement paradox was resolved by delegating the administration of the 360-degree feedback to an outsider who had built trust with the participants.

Should Organizations Use 360-Degree Feedback?

It would be difficult to argue against the general notion of multisource feedback in today's business climate. Corporations have decentralized their management systems, and considerable importance is placed on teamwork; the role of the manager, particularly the middle manager, is closer to that of a team leader than to that of an officer in the traditional bureaucratic sense. In this competitive context, it would be difficult for any manager in a complex organization to go very long without receiving some feedback from the multiple constituencies the role serves. The 360-degree concept enables such feedback at a relatively low operating cost. Research indicates that the gains from 360-degree feedback, when used as a developmental tool, are substantial. Changes in behavior brought about by such programs tend to be immediate and frequently dramatic.

The 360-degree concept, however, is not without problems. In fact, its application is fraught with paradoxes that cannot be resolved through mechanical, technical actions. In order to gain the benefits that the process offers, and minimize the adverse consequences, organizations need to create an atmosphere of trust, openness, and sharing. Feedback is most meaningful when there is a genuine desire on both sides for a meaningful and authentic exchange of perceptions.

Jai Ghorpade

Jai Ghorpade is professor of management in the College of Business Administration at San Diego State University. He has served on the editorial board of the Academy of Management Review, and published articles in the Academy of Management Journal, Personnel Psychology, and Journal of Applied Psychology. He has experience in the application of multisource performance feedback. Contact:



1 Handy, C. 1994. The age of paradox. Boston: Harvard Business School Press, 11-14.

2Ibid., 34-36.

3Yammarino, F. J., & Atwater, L. E. 1997. Do managers see themselves as others see them? Organizational Dynamics, Spring: 35-44; Antonioni, D. 1996. Designing an effective 360-degree appraisal feedback process. Organizational Dynamics, Autumn: 24-38.

4Handy, C. 1989. The age of unreason. Boston: Harvard Business School Press. See particularly his description of the Shamrock Organization in Chapter 4.

5London, M., & Beatty, R. W. 1993. 360-degree feedback as a competitive advantage. Human Resource Management, 32 (2 & 3): 353-372.

6Antonioni, op. cit., 28. For other empirical research on this issue, see: London, M., & Smither, J. 1995. Can multisource feedback change perceptions of goal accomplishment, self-evaluations, and performance related outcomes? Personnel Psychology, 48: 803-839; Waldman, D. A., Atwater, L. E., & Antonioni, D. 1998. Has 360-degree feedback gone amok? The Academy of Management Executive, 12(2): 86-94.

7Lepsinger, R., & Lucia, A. D. 1997. The art and science of 360-degree feedback. San Francisco: Jossey-Bass, 17.

8For variations on practice on this issue, see: Edwards, M. R., & Ewen, A. J. 1996. Providing 360-degree feedback. Scottsdale, AZ: American Compensation Association; Lepsinger & Lucia, op. cit.; Tornow, W. W., & London, M. 1998. Maximizing the value of 360-degree feedback. San Francisco: Jossey-Bass.

9An entire conference dedicated solely to 360-degree feedback issues was organized in June 1997 by the International Quality & Productivity Center. Audio cassettes of this conference can be obtained from IQPC, 150 Clove Road, P.O. Box 401, Little Falls, NJ 07424-0401. Also see: Hegarty, W. H. 1974. Using subordinate ratings to elicit behavioral changes in supervisors. Journal of Applied Psychology, 59(6): 764-766; Hoffman, R. 1995. Ten reasons you should be using 360-degree feedback. HRMagazine, April: 82-85.

10London & Beatty, op. cit., 355; also see, Schneier, C. E., Shaw, D., & Beatty, R. W. 1991. Performance measurement and management: A new tool for strategy execution. Human Resource Management, 30: 279-301.

11Variations in 360-degree practices are discussed in: Antonioni, op. cit.; London & Beatty, op. cit.; Yammarino & Atwater, op.cit.

12 Antonioni, op. cit.

13London & Beatty, op. cit., 353.

14Kluger, A. N., & DeNisi, A. 1996. The effects of feedback interventions on performance: A historical review, a meta-analysis, and preliminary feedback theory. Psychological Bulletin, 119: 254-284.

15Excellent reviews of the 360-degree feedback issues and research can be found in: London & Beatty, op. cit.; Antonioni, op. cit.; Yammarino & Atwater, op. cit.; and Waldman, op. cit.

16All four sources used as points of reference in this article (see 15) support placing the emphasis on development in 360-degree programs: Waldman, et al., op. cit., 86; Antonioni, op. cit., 26; London & Beatty, op. cit., 359; Yammarino & Atwater, op. cit., 36.

17Antonioni, op. cit., 26.

18Price Waterhouse. 1997. Performance measurement: The rating whirl. February: 4-5.

19Waldman, et al., op. cit., 87.

20Antonioni, op. cit.; and Lepsinger & Lucia, op. cit.

21For additional ideas on conducting sharing and clarifying meetings, see: Lepsinger & Lucia, op. cit., 179-181.

22Waldman, et al., op. cit., 87-89.

23For additional ideas on combining 360-degree with performance appraisal, see: Waldman, et al., op. cit.; and Tornow & London, op. cit., 68-73.

24For reviews of early research on multiple raters in performance appraisal see, Cascio, W. F. 1997. Applied psychology in human resource management (5th ed.). Englewood Cliffs, NJ: Prentice Hall, 62-65.

25For an empirical development of feedback items from different sources, see: Herold, D. M., & Parsons, C. K. 1985. Assessing the feedback environment in work organizations: Development of the job feedback survey. Journal of Applied Psychology, 70(2): 290-305.

26Campbell, D. J., & Lee, C. 1988. Self-appraisal in performance evaluation: Development versus evaluation. Academy of Management Review, 13(2): 302-314.

27Data on relative competencies of managers and subordinates as raters can be found in: Bernardin, H. J., & Beatty, R. W. 1987. Can subordinate appraisals enhance managerial productivity? Sloan Management Review, Summer, 63-73; Greller, M. M. & Herold, D. M. 1975, Sources of feedback: A preliminary investigation. Organizational Behavior and Human Performance, 13: 244-256.

28Cooper, W. H. 1981. The ubiquitous halo. Psychological Bulletin, 90: 218-244.

29Miller, D. T., & Ross, M. 1975. Self-serving biases in the attribution of causality: Fact or fiction? Psychological Bulletin, 82: 213-225. Also see the discussion of the egocentric-bias theory in, Harris M. M., & Schaubroeck, J. 1988. A meta-analysis of self-supervisor, self-peer, and peer-supervisor ratings. Personnel Psychology, 41: 55.

30Campbell & Lee, op. cit., 306.

31 Burns, T. 1969. Industrial man. Baltimore: Penguin Books, 232.

32Longenecker, C. O., Sims, H. P., & Gioia, D. A. 1987. Behind the mask: The politics of employee appraisal. The Academy of Management Executive, 1: 183.

33For a review of this literature, see: Campbell & Lee, op. cit.; Longenecker, et al., op. cit.; Gioia, D., & Longenecker, C. O. 1988. Delving into the dark side: The politics of executive appraisal. Organizational Dynamics, 22(3): 47-57.

34Lepsinger & Lucia, op. cit., 120-121.

35 Antonioni, op. cit., 28. Also see, London & Beatty, op. cit., 366.

36Cascio, op. cit., 64-66.

37 Ibid., 361.

38For more on criteria for evaluating feedback data, see: London & Beatty, op. cit., 364-366; Lepsinger & Lucia, op. cit., 62-66.

39Lepsinger & Lucia, op. cit., 24-28. For additional ideas for improving rating effectiveness through rater training, see: Waldman, et al., op. cit., 91-92.

40A technique known as frame-of-reference training has proven promising in improving the accuracy of appraisals. For a review of research on this technique, see: Cascio, op. cit., 75-76.

41 For a comparative review of research-based management development scales, see: McCauley, C. D., Lombardo, M. M., & Usher, C. J. 1989. Diagnosing management development needs: An instrument based on how managers develop. Journal of Management, 15(3):389-403.

42Information obtained directly from the company by telephone.

43Lepsinger & Lucia, op. cit., 180-181.

44London & Beatty, op. cit., 364.

45For variations of the approach recommended in this paper, see: O'Neal, S. & Palladino, M. 1992. Revamp ineffective performance management. Personnel Journal, February; Yammarino & Atwater, op. cit., p. 36.

46McCormick, E. J., Jeanneret, P. R., & Mecham, R. C. 1969. The position analysis questionnaire. West Lafayette, Indiana: Occupational Research Center, Purdue University.

47U. S. Department of Labor. 1977. Dictionary of occupational titles, 4th Edition. Washington, D. C.: U. S. Government Printing Office.

48McCauley, et al., op. cit.

49Fleishman, E. A. & Reilly, M. E. 1992. Handbook of human abilities. Palo Alto, CA: Consulting Psychologists Press, Inc.

50Lepsinger & Lucia, op. cit.

51Antonioni, op. cit., 26-27.

52Price Waterhouse, op. cit., 6.

53For a discussion of the role of interpersonal trust in the feedback process, see: Ilgen, D. R., Fisher, C. D., & Taylor, M. S. 1979. Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64(4): 340-371.

54For a discussion of the benefits and risks involved in the hiring of consultants, see: O'Shea, J., & Madigan, C. 1997. Dangerous company: The consulting powerhouses and the businesses they save and ruin. Chicago: Times Business.