Has 360° Feedback Gone Amok?
David A. Waldman; Leanne E. Atwater; David Antonioni
Executive Overview
Three hundred sixty degree feedback programs have been implemented in a growing number of American firms in recent years. A variety of individual and organizational improvement goals have been attributed to these feedback processes. Despite the attention given to 360 feedback, there has been much more discussion about how to implement such programs than about why organizations have rushed to join the bandwagon or even what they expect to accomplish. Are companies doing 360 degree feedback simply because their competitors are? What evidence exists to suggest that 360 degree feedback prompts changes in managers' behavior? This article explores the outcomes that organizations can realistically expect and provides recommendations for implementing innovations such as 360 feedback to best ensure improvements will be realized and the process will be a success.
Three hundred and sixty degree feedback programs can involve feedback for a targeted employee or manager from four sources: (1) downward from the target's supervisor, (2) upward from subordinates, (3) laterally from peers or coworkers, and (4) inwardly from the target him- or herself. Studies show that about 12 percent of American organizations are using full 360 degree programs, 25 percent are using upward appraisals, and 18 percent are using peer appraisals.1 Furthermore, it appears that the trend is growing.2 The most obvious reasons for this growth include the desire on the part of organizations to enhance management development, employee involvement, communication, and culture change.
The rise of 360 degree feedback can be traced to the human relations movement of the 1950s and 1960s, when organizations attempted to improve organizational processes and communication through various forms of what came to be known as organizational development. One popular form of organizational development was survey/feedback.
Survey/feedback involves a general employee survey of such factors as jobs, benefits, pay, and organizational communication. Traditional survey/feedback was geared toward overall organizational processes, while 360 degree feedback programs are targeted toward supplying information to specific individuals, e.g., supervisors and managers, about their work behaviors. Traditional survey/feedback was an upward feedback process. While 360 degree programs have relied heavily on upward feedback, at least some attempts have been made to gather peer, supervisor, and/or customer feedback.
Reasons for Adopting 360 Degree Feedback
A key purpose driving the present use of 360 degree feedback is the desire to further management or leadership development. Providing feedback to managers about how they are viewed by direct subordinates, peers, and customers/clients should prompt behavior change. Many managers have not received as much honest feedback as is necessary for an accurate self-perception. When anonymous feedback solicited from others is compared with the manager's self-evaluations, the manager may form a more realistic picture of his or her strengths and weaknesses. This may prompt behavior change if the weaknesses identified were previously unknown to the manager, especially when such change is encouraged and supported by the organization.
Other potential benefits of 360 degree initiatives are targeted ultimately toward organizational change and improvement. These initiatives reflect resource dependence theory, which views organizational change as a rational response to environmental pressures for change or strategic adaptation.3 The expectation is that, by increasing managerial self-awareness through formalized 360 degree or upward feedback, an organization's culture will become more participatory and the organization will be able to react more quickly to the needs of internal and external customers. This should ultimately lead to increasing levels of trust and communication between managers and their constituents, fewer grievances, and greater customer satisfaction.
In addition to the logical, performance-based reasons for pursuing a 360 degree feedback program, at least three other reasons account for its proliferation.
Imitation
Institutional theory suggests that organizations make attempts to imitate their competition or other firms in an organizational network.4 This suggests that the choice to adopt 360 degree feedback reflects a response to environmental pressures. Such conformity gives a firm a sense of external legitimacy.
As an example, we worked closely with a large telecommunications firm to implement an upward feedback program. From the beginning, we sought to determine the precise reasons why the firm wanted to pursue this program. A consistent reason simply seemed to be a desire to keep up with the competition. Managers asked us to provide lists of other companies using upward feedback, almost as if that alone were reason to adopt it. Performance-based thinking was not absent, but a form of "satisficing" might have been at play whereby improved performance was expected simply by imitating others.
A similar phenomenon of imitation occurred years ago regarding quality circles and has occurred more recently with TQM. TQM can be implemented in a number of different ways. These include the use of various scientific and statistically-oriented approaches to solving quality problems, and an increase in activities directed toward understanding customers' perceptions and desires pertaining to quality. In an attempt to achieve external legitimacy, later adopters have often not been overly concerned about the specifics of TQM implementation and how or whether such specifics are actually linked with performance outcomes.
The recent implementation of teams in organizations provides another relevant example. Organizations have created teams to mimic the competition. Managers reason that if the competition is using teams and they are doing well, they should use them or fall behind. Little thought has gone into determining what improvements can be expected, or how technical and managerial systems would require change to support teams.
Institutional theory and imitation become more and more relevant as organizations face uncertain situations. Indeed, this may still be the case for 360 feedback since little research evidence exists regarding the precise methods and contexts in which it can positively affect organizational outcomes. In such situations, attempts to copy the actions of reputable others seem reasonable, and late adopters may not seriously question the potential effectiveness of 360 feedback. That is, they see no need to systematically demonstrate performance improvements before engaging in a widespread rollout.
A case can be made that given the increasing uncertainty, rapid change, and increasing competition facing organizations, managers feel that spending additional time and money testing the usefulness of such innovations prior to full implementation is not cost effective. It may be smarter in the long run to adopt the innovation and simply drop it later if it is unsuccessful. In short, we acknowledge the logic of attempting to imitate what other firms are doing with regard to 360 feedback initiatives. But imitating without clearly understanding what other firms have accomplished, or the likely outcomes for one's own firm, may be a questionable strategy.
360 Degree Feedback as Part of Performance Appraisal
A second alternative reason for the proliferation of 360 degree feedback is the desire to expand formal appraisal processes by making such feedback evaluative, thereby linking it directly with a manager's or employee's performance appraisal. Our most recent experiences suggest that there are pressures to make 360 feedback evaluative because companies want to get their money's worth.
In theory, the use of 360 feedback for evaluative purposes seems logical. An individual held directly accountable for ratings received will be more motivated to take action to make improvements based on the feedback. Unfortunately, problems exist that may negate the possible benefits of 360 degree feedback if it is made evaluative. Employees may rebel and try to sabotage the program. For example, in the case of upward feedback, implicit or even explicit deals may be struck with subordinates to give high ratings in exchange for high ratings. Such maneuvering is less likely when the feedback is being provided strictly for developmental purposes.
Research has demonstrated that when ratings become evaluative rather than purely developmental, some raters (up to 35 percent) change their ratings.5 UPS tested the potential of using 360 ratings for evaluation. The company asked employees, after they had provided upward ratings, whether they would have altered the ratings if they knew they would be used as part of their managers' formal performance evaluations. The findings suggested that some individuals would raise, and some would even lower, ratings if they were to be used for evaluation. Changes in ratings were made primarily in order to affect outcomes, i.e., to keep the manager out of trouble, or in some cases to get the manager in trouble. Three hundred sixty degree ratings are typically collected anonymously. Ratings that are not anonymous may differ from those that are; ratings become less genuine if the rater believes he or she will be identified. Not surprisingly, some raters indicate that they would raise their ratings if they were going to be identified to their managers. Anonymous ratings also have potential drawbacks. If anonymous 360 ratings were used as part of the documentation for a personnel action involving a manager (e.g., demotion, dismissal, or a denied promotion or pay raise), that manager could potentially make a legal case against the firm. Since the ratings are anonymous, they cannot be traced to specific individuals, and hence their validity could come into question in a court action. In contrast, traditional performance appraisal ratings are typically signed by the rater, i.e., one's supervisor, making them more verifiable.
A rating should be used for appraisal purposes only when the raters are committed to the goals of the organization, rather than merely to their own personal goals. This is often not the case, as the rater is primarily concerned with his or her own short-term needs. For example, a subordinate may only provide high upward feedback ratings to a manager who maintains the status quo, even though the individual and the organization could use a high degree of challenge.
This suggests another caution regarding ratings: be careful what you measure. If a manager's 360 ratings depend on creating a positive or even relaxed climate, these factors may actually detract from work directly geared toward bottom line results. For example, customers may call the manager away from the office frequently, or necessitate many hours on the phone, thus making the manager less available to employees. If this customer-oriented behavior is not part of the criteria measured and availability to subordinates is part of the criteria, customer-oriented behavior will diminish over time and be replaced by more frequent interactions with employees. Yes, relationships with employees may improve, but at what cost?
Some companies have abandoned the use of 360 feedback for appraisal purposes. For example, half of the companies surveyed in 1997 that had implemented 360 degree feedback for appraisal had removed it because of negative employee attitudes and inflated ratings.6
Not all experts agree that using 360 degree feedback for evaluation is a problem. If traditional appraisal depends on the opinion of a supervisor who is not always in the best position to judge, and is never anonymous, wouldn't 360 appraisal be an improvement even if not always totally honest? Ratings from multiple sources also usually produce more reliable data. Data from a variety of organizations indicated that ratees were more satisfied with multi-rater appraisal than single-rater appraisal.7 Obviously, some ratees believe that 360 appraisal is an improvement over traditional appraisal, while others do not. This belief likely stems from such factors as levels of trust in the organization, and the type of traditional appraisal used.
We would suggest caution in adopting 360 appraisal. Use 360 feedback strictly for development at first. Let managers and others become comfortable with the process. Once employees see that negative repercussions are unlikely and managers see that the information truly is helpful, they will be less apprehensive about using 360 ratings for evaluation.
A pertinent example of using upward feedback involves student evaluations of teaching. Beginning mainly in the 1970s, student evaluations have provided a form of customer-based feedback (some might argue upward feedback) to faculty members at universities. In line with institutional theory, such evaluations have now become so commonplace in universities as to represent an institutional norm. Student evaluations were originally designed to be mainly developmental in nature, providing faculty with information that could be used to improve teaching. Over time, university administrations have increasingly used this feedback for evaluative purposes, e.g., for promotion and tenure decisions.
Has this feedback process resulted in improved student-faculty relationships, trust, and communication? Has it had any effect on student learning outcomes and the satisfaction of the ultimate customers, namely employers and society? Student feedback, especially when used for evaluation, can certainly modify a teacher's style without having an impact on student learning. It is also possible that universities have not had clear goals nor good outcome measures for the student feedback process. Instead, student evaluations represent an easy way to evaluate teaching, and in some cases, to provide information for students selecting professors. One university recently experienced pressure from other local universities to publish student ratings of faculty. The university decided to publish its student ratings, with the intent of giving students more information on which to base course selections. The criteria students would consider important were unknown. The decision to publish student ratings was based on customer demand, and mimicked the actions of other universities.
Using 360 Degree Feedback for Political Purposes
A third reason that companies engage in 360 feedback is politics.8 Organizations often feature competition among individuals and groups, both over ideas and over credit for the ideas being pushed. Individuals or groups try to impress higher level management with their innovative ideas and plans. A manager with authority to make an implementation decision may attempt to appropriate credit. In an organization that we helped to implement upward feedback, we communicated initially with a training director. Once his boss bought into the plan, the boss assumed ownership and credit. Indeed, the training director eventually left the organization.9
Similarly, a company as a whole may adopt 360 degree feedback to manage an impression. Organizations may embrace 360 feedback to convey an impression of openness and participation to clients or recruits when, in fact, this is not part of the organization's culture. While the innovations themselves may not be very successful, the political gains from impression management may be valuable.
Where are the Data?
A problem related to the absence of purpose in implementing 360 feedback is the absence of data, as well as the resulting dearth of knowledge on how or even whether 360 feedback really works. In recent telephone interviews with individuals who had spearheaded the implementation of 360 degree feedback in a number of Fortune 500 companies, we found the availability of effectiveness data discouraging. The only data available were employee and manager perceptions of the process, random anecdotes, or, on rare occasions, changes in employee ratings of managers before and after upward feedback. Recent research in a retail store setting has shown that subordinate and peer ratings of managers increased after managers received 360 feedback, but managers' ratings from their supervisors and customers did not change. In addition, this research revealed that store sales volume was unaffected by the 360 feedback intervention.10
There are some data suggesting productivity improvements among university faculty and improved customer satisfaction ratings following the implementation of 360 feedback. However, the research generating these data did not include a control group, so it is difficult to conclude that the 360 process was solely responsible for the improvements.11
We expect that in the future, few organizations will be able to afford to engage in costly training or development activities purely altruistically, or on the basis of speculative success. Rather, decisionmakers and participants will need to be convinced that the development effort can be expected to have a positive impact on the bottom line.
Evaluating 360 Degree Feedback Efforts
The above arguments suggest that little is known about the effects of 360 degree feedback programs in organizations. The following recommendations are offered in the hope of realizing more systematic knowledge regarding ways to ensure the effectiveness of 360 degree feedback programs. These recommendations should apply equally well to other organizational innovations, such as TQM and teams.
Make Consultants/Internal Champions Accountable for Results and Customization
How often are the people who are pushing an organizational innovation told that they must go into the process with specific goals, realistic timetables, and a plan for measuring results? We would argue that this is a rare event. Instead, consultants may jump on the 360 bandwagon, put together enticing packages, and subsequently feel reluctant to charge companies for evaluation. They may also fear demands from managers to explain the need for evaluation.
The result is a rush to implementation without a clear understanding of needs or expected results. Consultants, both internal and external, may simply implement the programs or activities of other organizations without systematic testing. One common example is the use of off-the-shelf 360 surveys. Although leadership may be a common factor of importance in, say, a mining organization, a police agency, and a high-tech think tank, a one-size-fits-all approach to survey items is not likely to be effective. The items will need to be customized.13
Conflicts of interest can result when program evaluation is left in the hands of people who have either marketed or championed a process. Care must be taken to make sure that the evaluation process is objective, and that the data are verifiable.
Engage in a Pilot Test Initiative
Firms should learn to crawl before they walk. Managers tend to want immediate action, while a pilot study may last a year or longer. However, the benefits of a pilot study can be immense. In organizations with traditional hierarchies, the inversion of the organizational pyramid that accompanies 360 degree feedback can be threatening and problematic. Pilot studies can identify these threats and problems before full-scale implementation.
A pilot test we ran in a few departments before full-scale implementation of upward feedback in a large telecommunications firm identified problems with our original survey items, which we were then able to modify. We discovered both employee and managerial resistance and fear, which we were able to counteract with general information sessions for all employees in the targeted departments. We identified concerns with confidentiality and anonymity that stemmed from an earlier survey intervention by another company where breaches of confidentiality were suspected. We were able to present our strategies for ensuring anonymity and confidentiality to ease these concerns. Because these problems were corrected, we were able to implement a relatively smooth rollout across the division. In addition, we were able to follow up the pilot group before implementation and obtain some initial effectiveness data. Our ability to demonstrate at least some success on a small scale helped convince reluctant managers that the rollout could be beneficial to the company.
Create Focus Groups to Identify Effectiveness Criteria Measures
The list of possible effectiveness criteria measures for an intervention such as a 360 degree feedback program can be quite extensive. Measures should focus on activity levels as well as results. Possibilities include:
ratee and rater reactions to the program, i.e., the extent to which they believe the process is valuable;
response rates (obviously a program cannot succeed if potential raters do not respond when surveyed);
grievance rates;
customer satisfaction;
employee satisfaction;
absenteeism/turnover;
recruiting success, e.g., strong qualifications of applicants and new hires;
work behaviors, e.g., leadership, communication, employee development efforts;
work performance, e.g., individual work output or contributions to work unit output; and
positive image with clients, customers, competitors, and suppliers.
One way to identify criteria is to form focus groups. The groups could be asked what they think would improve if those being rated got better at the dimensions on which they were being rated. The groups should be pressed for specifics and then guided to systematically monitor progress on the identified criteria before and after the innovation is fully implemented.
Evaluate Using a Pre-Post Control Group Design
Evaluation of the process is crucial to ensure that it is aiding in the accomplishment of the organization's goals, and working as intended. At least in the early stages, the organization should adopt a pre-post control group design to assess the impact of the process. Behaviors and outcomes should be measured prior to feedback, as well as after feedback, and some individuals should be selected to take part while others are not.
This recommendation may cut against the grain of typical managerial thinking. Many managers assume that if something is worth doing, it is worth doing for everybody right now. Managers also do not like their people being used as guinea pigs. We urge a reconsideration of this line of thought. In fact, this evaluation design could be implemented simply by beginning the process in stages in various parts of the organization.
Clearly, more experimental field studies on 360 degree feedback are needed. Research partnerships between academic institutions and business organizations should be established. Research is needed on whether improvements in managerial or leadership behaviors cause improvements in performance, and on whether the improvements have an effect on employee satisfaction, absenteeism, and turnover. With proper control for other factors that could affect the results, it is possible to determine what needs to be done to improve the 360 degree feedback process and, ultimately, whether the process is worth the time, money, and effort.
Be Careful What You Measure and How It's Used
What gets measured (and rewarded) drives behavior. Even when 360 degree feedback ratings are used strictly for developmental purposes, individuals will tend to modify behaviors in ways to receive more positive ratings. Therefore, it is extremely important that 360 degree surveys reflect those behaviors that the organization values most highly. Care should also be taken to ensure that behaviors measured are closely tied to the accomplishment of the organization's goals.14
Student evaluations of teaching should encourage better teaching styles and classroom relationships. Communication between instructors and students should also improve. However, it can be argued that the process may also encourage behaviors and outcomes that are not always beneficial. Instructors may avoid challenging students for fear of upsetting them and obtaining lower student evaluations at the end of the semester. Assignments and readings may be made easier, and faculty may be hesitant to disagree with students' comments or concerns for fear of appearing disagreeable. Moreover, sensing that students dislike ambiguity, instructors may "teach the test" (i.e., virtually announce what will be on exams through the use of study guides) and provide a lockstep method of accomplishing assignments and research projects. The growing phenomenon of grade inflation should not be surprising. However, the ultimate customers, society and future employers, need and seek students who have been challenged and can adequately deal with ambiguity in solving problems. Future employers and graduate schools want to be able to look at grade point averages that have meaning. Although this example of upward feedback can provide valuable information for its recipients, we need to realize that people generally modify their behavior toward what gets measured and rewarded. Such behavior may not always lead to the realization of long-term goals and outcomes.
Train Raters15
Almost all 360 degree instruments rely on rating scales. Research has clearly established that raters commit different types of rating errors, such as rating too leniently or too harshly.16 Some raters play it safe by consistently using the central rating point. Other errors include halo effects (generalizing from doing well in one area to perceptions of doing well in other areas) and recency effects (weighting heavily behavior observed most recently). Raters need training in how to complete forms and how to avoid rating errors. Training should also cover the objectives of the surveys and the overall process. UPS, for example, explains the appraisal feedback process, and discusses how data will be used.
A few medium-sized organizations in the Midwest have indicated to us that they are providing raters with frame-of-reference training and teaching raters how to keep a log of observed behaviors that correspond with survey items. Frame-of-reference training covers the roles, responsibilities, and accountabilities of the ratee. Survey items are linked to roles and responsibilities in an attempt to help raters create a common frame of reference when they rate a ratee. To improve observations, raters are given surveys to keep throughout the year and are instructed to record their observations of incidents that they would use to help them determine their final ratings. Raters are encouraged to take their record of work incidents and supplement their ratings with written feedback. According to the HRM directors in these organizations, raters thus far have been willing to take risks and provide ratees with specific written comments. Furthermore, the amount of written feedback has remained about the same over the last three years. Finally, ratees have indicated that the written feedback is more valuable to them than numerical ratings.
A Typical Case of 360 Degree Feedback Implementation
A CEO from a large manufacturing company attended an executive development conference and heard about 360 degree feedback programs. He liked the idea. At the conference he heard all about the benefits of 360 degree feedback. He persuaded his senior management that the company should pursue 360 feedback because it would be a lot better than their current annual performance appraisal. He stated, "360 feedback comes directly from people who are in the best positions to evaluate the performance of the people they work with. Supervisors will have more information to support their appraisals of others and, therefore, more leverage to do something about some people's performance. We'll use the 360 feedback to help determine people's merit raises. That way, we'll make sure that people make improvements based on the feedback they received." The CEO asked the human resource management (HRM) department in the company to recruit a consulting firm that specialized in 360 degree feedback. HRM found a firm and then used a small focus group to help customize the firm's 99-item 360 degree survey.
After the first round of 360 degree feedback, almost everyone in the organization, including the CEO, was at least a little frustrated or disappointed with the outcomes of the process. People complained about how many surveys they had to fill out and how long the process took. Supervisors felt that many of the 360 ratings were inflated. In short, the data were not worth much. The company was faced with a decision about whether to continue or discontinue the 360 feedback program. What should the organization do? What should it have done at the beginning of the process?
Management decided to start over and took the next year to engage in several activities. A 360 feedback project team was formed, comprising representative employees from different areas and levels of the organization. The team's mission was to design, implement, and evaluate a 360 feedback process that would be acceptable to organizational members and would produce results. The following outcomes were defined: (1) improve communication by reducing the undiscussables between raters and ratees; (2) increase alignment of expectations between raters and ratees; and (3) improve ratees' work behaviors and performance.
The team, facilitated by the consultants, conceptualized how the 360 process should work in order to produce results. The team developed its own 360 feedback survey items based on values the organization deemed important and included a written feedback section on the survey. Pilot tests of survey items were conducted before producing the survey that would be used in a rollout. In addition, the team explored putting the survey on the computer network. Anonymity was still maintained, and this process eliminated the need to have someone outside the organization type the written comments. Raters, ratees, and coaches were trained in different aspects of the 360 process.
A decision was made that 360 surveys should be administered throughout the year. This procedure addressed the issue of overburdening people within one particular month. The team wanted the 360 process to provide individuals with information they could use to set specific improvement goals. Individuals were expected to review their 360 results with their respective coaches, who helped prepare those individuals for a discussion of appropriate 360 results with their respective raters. Thus, managers were expected to share the results of their upward appraisals with people reporting directly to them, and to share the results of their peer feedback with their peers. This took place during regularly scheduled meetings. The HRM area provided facilitation, if needed, when managers shared results with their raters.
Based on input from focus groups, the team also decided that the purpose of the 360 feedback process was to be primarily developmental, but with accountability. That is, individuals needed to use the feedback to set developmental goals. As part of this process, they had to meet with their respective supervisors and share the data. They were subsequently responsible for attaining their developmental goals. Consequences for failing to meet development goals ranged from being put on notice if no improvements occurred in the work behaviors targeted for improvement, to potential demotion if no changes occurred after three years. 360 feedback information would be used in annual performance appraisals only in cases of obvious need for corrective action and/or demotion.
In conjunction with the consultant, a three-year research program was designed to assess the 360 process. A number of research questions were formulated, and a research proposal was submitted to senior management. One question was whether those individuals who were trained to seek additional, follow-up feedback from respondents actually obtained more feedback and positive outcomes, as opposed to those who did not receive such training. Several meetings took place with management about the rationale for the research questions and the reasons for experimental field studies.
Finally, a 360 feedback steering committee was formed consisting of one member from the board of directors, one manager each at senior, middle, and first-line levels, and three employees. The committee is similar to the steering committees that help provide structure and guidance to TQM initiatives.
Conclusions
Unfortunately, in many organizations, 360 degree feedback or other innovations may be viewed increasingly as just another management fad. Employees and managers have seen a number of change initiatives begin abruptly with much fanfare, only to end abruptly, and often for little apparent reason.
It is obvious that organizations, like individuals, cannot erase their pasts. At the same time, however, it is possible to keep new initiatives like 360 feedback from running amok, and to realize a degree of success. The case reviewed here offers some lessons for avoiding faddism and the cynicism it can engender. Specifically, this case demonstrates the need to determine systematically how 360 feedback (or other interventions) will be used and what outcomes can be expected. It also shows how the process needs to be tailored to the needs of the organization and subsequently subjected to careful evaluation. The process was primarily developmental in nature, but with increasing accountability for ratees over the course of implementation. With careful planning and implementation, the benefits of 360 feedback can be clearly realized, rather than merely taken on faith.
Endnotes
1 See Antonioni, D. 1996. Designing an effective 360 degree appraisal feedback process. Organizational Dynamics, 25(2): 24-38.
2 As examples, see Training and Development. 1995. First-rate multirater feedback. August: 42-43. Also see APA Monitor. 1995. Subordinate feedback may foster better management. July: 30-31. Also see Fortune. 1994. 360 feedback can change your life. October: 93-100.
3 For a more complete description of resource dependence theory, see Ulrich, D., & Barney, J. 1984. Perspectives in organizations: Resource dependence, efficiency and population. Academy of Management Review, 9: 471-481. Also see Oliver, C. 1991. Strategic responses to institutional processes. Academy of Management Review, 16: 145-179.
4 For a more complete description of institutional theory, see Tolbert, P. S. 1985. Institutional environments and resource dependence: Sources of administrative structure in institutions for higher education. Administrative Science Quarterly, 30: 1-13.
5 See London, M., & Smither, J. 1995. Can multi-source feedback change perceptions of goal accomplishment, self-evaluations and performance related outcomes? Theory-based applications and directions for research. Personnel Psychology, 48: 803-839.
6 The results presented here were part of a survey of companies that belonged to a 360 degree feedback consortium. Results were presented by Timmreck, C., & Bracken, D. 1996. Multisource assessment: Reinforcing the preferred "means" to the end. Paper presented at the meeting of the Society for Industrial and Organizational Psychology, San Diego.
7 See Mark Edwards' and Ann Ewen's accounts of productivity improvement and improved customer satisfaction ratings following a 360 feedback intervention in their 1996 book, 360 Degree Feedback. New York: AMACOM.
8 For an interesting look at how the exercise of politics and other forms of power affect organizations, see Pfeffer, J. 1992. Managing with power. Boston: Harvard Business School Press.
9 Similar examples can be found in Jackall, R. 1988. Moral mazes: The world of corporate managers. New York: Oxford University Press.
10 See Bernardin, J., Hagan, C., & Kane, J. 1995. The effects of a 360 degree appraisal system on managerial performance: No matter how cynical I get, I can't keep up. In Tornow, W. (Chair), Upward feedback: The ups and downs of it. Symposium conducted at the Tenth Annual Conference of the Society for Industrial and Organizational Psychology, Orlando, FL.
11 See note 7 above.
12 The issue of activity-centered programs versus results-driven programs is discussed in more depth in the following article: Schaffer, R. H., & Thomson, H. A. 1992. Successful change programs begin with results. Harvard Business Review, 70(Jan.-Feb.): 80-89.
13 We wish to thank an anonymous reviewer for pointing out the need for customizing innovations such as 360 feedback.
14 See Kerr, S. 1995. An Academy classic: On the folly of rewarding A while hoping for B. Academy of Management Executive, 9(1): 7-16.
15 This recommendation is specific to 360 degree feedback interventions or those that include survey or rating instruments.
16 See Landy, F. J., & Farr, J. L. 1980. Performance rating. Psychological Bulletin, 87:82-107.
17 For an in-depth consideration of the antecedents and consequences of organizational cynicism, see Reichers, A. E., Wanous, J. P., & Austin, J. T. 1997. Understanding and managing cynicism about organizational change. Academy of Management Executive, 11(1): 48-59.
About the Authors
David Waldman is a professor in the School of Management at Arizona State University West. He received his PhD in industrial and organizational psychology from Colorado State University. He has taught at SUNY-Binghamton and Concordia University in Montreal. He has written approximately 50 scholarly articles and book chapters, been an investigator on grants totaling approximately $450,000, and held two editorial board memberships. His research interests focus on 360 degree appraisal and feedback, leadership, and aging and work behavior. He is the author, with Leanne Atwater, of The Power of 360 Degree Feedback: How to Leverage Performance Evaluations for Top Productivity, published by Gulf Publishing. Waldman has consulted for a number of private firms and public-sector organizations in the United States, Canada, and Mexico in the areas of 360 degree appraisal and leadership development.
Leanne Atwater is an associate professor of management at Arizona State University West, where she teaches organizational behavior and human resources management. She has a PhD from Claremont Graduate School. Atwater has received over $700,000 in grant dollars and has published over 25 articles and book chapters on leadership, self-perception accuracy, and 360 degree feedback. Her work has been published in Personnel Psychology, Human Resources Management, Research in Personnel, and Leadership Quarterly. She is the author, with David Waldman, of The Power of 360 Degree Feedback: How to Leverage Performance Evaluations for Top Productivity, published by Gulf Publishing. Atwater has consulted for NORTEL, Lockheed Martin, the Arizona Department of Public Safety, and the City of Phoenix.
David Antonioni is an associate professor of management in the School of Business at the University of Wisconsin-Madison. He is program director of the Mid-Management Development Certificate and the Masters Certificate in Project Management. Antonioni teaches management development seminars and serves as a consultant to business and industry. He conducts applied research in the area of 360 degree feedback and has published his model for an effective 360 degree feedback process in Organizational Dynamics. His model was accepted for use by Andersen Consulting. He frequently consults with organizations on the design and implementation of 360 degree feedback processes.