36 things to remember about the multi-rater feedback tool: 360 performance appraisals

For the past few months, I have been conducting a 'deep-dive' on the management practice of conducting 360 Performance Appraisals. My research included reviewing the history of the multi-rater feedback tool, scholarly research, instruments, and best practices. Here are 36 of the most important observations from my 2015 scan of the literature and practice that are relevant to CEOs and managers who want to get the most from the 360 tool.

  1. Since 1992, approximately 90% of Fortune 500 companies have reported using 360 performance appraisals (Linman, 2006). Widespread use of the 360 tool does not mean the instrument is widely appreciated; some individuals and organizations remain deeply skeptical, because many have used the 360 for corrective action prior to termination rather than as a developmental tool for performance improvement.

  2. Research shows the most effective post-360 behavioral change results occur when the organizations have a clear purpose for conducting the 360-degree multi-rater feedback assessment, such as an organizational performance issue, strategic need or feedback from customers.

  3. Research also shows there are potential problems when 360-degree reviews are used as an evaluation system rather than solely as a personal development technique (Brett & Atwater, 2001); one meta-analysis found that feedback was followed by declines in employee performance in roughly 38% of cases (Kluger & DeNisi, 1996).

  4. 360s are most commonly adopted by organizations because of “pressing needs for its leaders to engage in different behaviors to respond to organizational changes.”

  5. Organizations considering serious restructuring or downsizing are not in a good position to begin implementing a 360 multi-rater appraisal, as they will have difficulty garnering the employee trust needed amidst serious organizational change.

  6. The 360 process requires trust; employees who are cynical about the process, or who think that change is not possible or is too difficult, can undermine 360 multi-rater feedback success.

  7. Rater anonymity among peer and subordinate raters has been shown to be important for promoting honest responses (Brutus & Derayeh, 2002).

  8. A multi-rater feedback system should not be used as a substitute for direct one-to-one feedback within the manager-employee relationship.

  9. A meta-study of 100 organizations found that the 19% of companies conducting 360s without integration into development, performance appraisal, or training support encountered resistance to successful 360 implementation (Brutus & Derayeh, 2002).

  10. Organizations need to help all multi-rater participants understand how their feedback fits into the organization's initiatives and goals, and how the 360 process aligns with other strategic and HR processes.

  11. Employees need to see how the results of their performance assessment fit with the rewards and development opportunities offered in the organization (Heneman & Gresham, 1998).

  12. Subordinate raters are often wary that the leader will somehow trace responses back to them; subordinates who fear reprisal may inflate their ratings to avoid retaliation.

  13. Research has shown that employees who believe that their ratings are anonymous are likely to give more honest feedback than are employees who think their response will be associated with them. (Antonioni, 1994)

  14. Beyond anonymity, confidentiality (assurance that the data collected will be used only for its intended purpose) matters to feedback providers, who might otherwise bridle, limit, or inflate their comments to create a sense of safety.

  15. Individuals who are the focus, or target, of the 360 respond differently to the feedback, and the post-assessment feedback facilitator should be mindful of the style of the person reviewed. High self-esteem leaders with high self-efficacy (i.e., the ability to engage in actions to achieve successful performance) are more likely to take value from the 360 as a way to focus attention on solving problems, whereas low self-efficacy leaders may dwell on their concerns and personal failures.

  16. The length of time a rater has known the individual being evaluated has a significant effect on the accuracy of a 360-degree review: raters who have known the person for one to three years are in the best position to judge, followed by those who have known them less than one year, then those who have known them three to five years; the least valuable raters are those who have known the person for more than five years.

  17. Pilot test your instrument prior to widespread use.

  18. Involve stakeholders in preparation of the 360-question tool.

  19. Communicate throughout the process, particularly the anonymity and confidentiality protection processes and methods to assure integrity of feedback.

  20. Breaking confidentiality will compromise your results, damage employee morale, and potentially create other problems.

  21. Be clear on the feedback use and purpose, and who owns the data.

  22. Use multiple forms of feedback (e.g., objective data, essay responses, scalar ratings, forced rankings) to collect data.

  23. The time and effort required to complete 360 assessments are seen as a deterrent, particularly when the instruments are paper-based, fill-in-the-blank templates and/or lengthy; with good editing to simplify instruments, plus modern computerized tools, the time expended becomes much less of an issue.

  24. Use online instruments to speed data collection and reduce the workload for management, employees, and stakeholders responding to the 360.

  25. Research shows no difference in reliability between 360 paper-instrument results and online-instrument results (Huet-Cox, Nielsen, & Sundstrom, 1999).

  26. Use objective and friendly administration and scoring processes, being careful not to bias results, or deter others from participating.

  27. Use data visualizations (e.g., charts, graphs, multi-variable plots) of 360 outcomes to increase the understanding and accessibility of the assessment results.

  28. Provide a debriefing and developmental coaching session exploring the results of the 360 with the target of the assessment; use it as part of the development and promotion of your team members.

  29. In your debriefing, explore the nuances of what is reported, including gaps and disparities between the self-report and the other respondents' ratings.

  30. Be careful NOT to link your 360 to other performance systems without first testing results and considering potential impact.

  31. Using the 360 for development is different from using it for merit increases and similar administrative decisions.

  32. The most effective use of the 360 is for developmental purposes; though once there is institutional knowledge and trust in the tool, managers can use the tool for strategic planning purposes.

  33. Use the 360 as an ongoing process with frequent updates rather than as a one-time event. Once employees are comfortable with the multi-rater process, it can assist in making company decisions, though not at first. “…Feedback can be used for both developmental and administrative purposes, but this takes time,” and it works best when used over two to four rounds.

  34. Evaluate your effectiveness of the process, instrument, questions, feedback visualization, and coaching for the manager or target of the 360.

  35. A meta-study of conclusions from 24 longitudinal studies concluded that “practitioners should not expect large, widespread performance improvements after employees receive feedback” (Smither, London, & Reilly, 2005). Rather, the meta-study showed modest but positive improvements in employee behaviors and attitudes.

  36. The 360 can be used as a method to identify improvement areas for employee and managerial communication and growth; consider incorporating 360 feedback learnings into an employee's Balanced Scorecard for personal development and growth.