The Handbook of Strategic 360 Feedback
0190879866, 9780190879860

This volume is the definitive work on strategic 360 feedback, an approach to performance management.

English | 562 pages [577] | 2019





THE HANDBOOK OF STRATEGIC 360 FEEDBACK


THE HANDBOOK OF STRATEGIC 360 FEEDBACK

Edited by
Allan H. Church, David W. Bracken, John W. Fleenor, and Dale S. Rose

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2019

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

CIP Data is on file at the Library of Congress

ISBN 978–0–19–087986–0

9 8 7 6 5 4 3 2 1

Printed by Sheridan Books, Inc., United States of America

CONTENTS

Foreword
Marshall Goldsmith

Contributors

1. Introduction and Overview to The Handbook of Strategic 360 Feedback
Allan H. Church, David W. Bracken, John W. Fleenor, and Dale S. Rose

2. What Is “Strategic 360 Feedback”?
David W. Bracken

SECTION I  360 FOR DECISION-MAKING

3. Best Practices When Using 360 Feedback for Performance Appraisal
Emily D. Campion, Michael C. Campion, and Michael A. Campion

4. Historical Challenges of Using 360 Feedback for Performance Evaluation
Manuel London and James W. Smither

5. Technological Innovations in the Use of 360 Feedback for Performance Management
Steven T. Hunt, Joe Sherwood, and Lauren M. Bidwell

6. Strategic 360 Feedback for Talent Management
Allan H. Church

7. Using Stakeholder Input to Support Strategic Talent Development at Board and Senior Executive Levels: A Practitioner’s Perspective
Paul Winum

SECTION II  360 FOR DEVELOPMENT

8. Application of 360 Feedback for Leadership Development
Cynthia McCauley and Stéphane Brutus

9. Moving Beyond “The Great Debate”: Recasting Developmental 360 Feedback in Talent Management
Jason J. Dahling and Samantha L. Chau

10. Team Development With Strategic 360 Feedback: Learning From Each Other
Allison Traylor and Eduardo Salas

11. From Insight to Successful Behavior Change: The Real Impact of Development-Focused 360 Feedback
Kenneth M. Nowack

12. Integrating Personality Assessment With 360 Feedback in Leadership Development and Coaching
Robert B. Kaiser and Tomas Chamorro-Premuzic

13. Strategic 360 Feedback for Organization Development
Allan H. Church and W. Warner Burke

SECTION III  360 METHODOLOGY AND MEASUREMENT

14. Factors Affecting the Validity of Strategic 360 Feedback Processes
John W. Fleenor

15. Can We Improve Rater Performance?
David W. Bracken and Christopher T. Rotolo

16. Rater Congruency: Why Ratings of the Same Person Differ
Adrian Furnham

17. Is 360 Feedback a Predictor or Criterion Measure?
Elaine D. Pulakos and Dale S. Rose

SECTION IV  ORGANIZATIONAL APPLICATIONS

18. The Journey From Development to Appraisal: 360 Feedback at General Mills
Tracy M. Maylett

19. Harnessing the Potential of 360 Feedback in Executive Education Programming
Jay A. Conger

20. An Alternative Form of Feedback: Using Stakeholder Interviews to Assess Reputation at Walmart
Lorraine Stomski

21. Mitigating Succession Risk in the C-Suite: A Case Study
Seymour Adler

22. Integrating Strategic 360 Feedback at a Financial Services Organization
William J. Shepherd

23. Leveraging Team 360 to Drive Business-Enhancing Change Across the Enterprise at Whirlpool Corporation
Stefanie Mockler, Rich McGourty, and Keith Goudy

24. What Kind of Talent Do We Have Here? Using 360s to Establish a Baseline Assessment of Talent
Christine Corbet Boyce and Beth Linderbaum

SECTION V  CRITICAL AND EMERGING TOPICS

25. 360 Feedback Versus Alternative Forms of Feedback: Which Feedback Methods Are Best Suited to Enable Change?
Dale S. Rose

26. Gender, Diversity, and 360 Feedback
Anna Marie Valerio and Katina Sawyer

27. Using Analytics to Gain More Insights From 360 Feedback Data
Alexis A. Fink and Evan F. Sinar

28. The Ethical Context of 360 Feedback
William H. Macey and Karen M. Barbera

29. The Legal Environment for 360 Feedback
John C. Scott, Justin M. Scott, and Katey E. Foster

30. Using 360 Feedback to Shape a Profession: Lessons Learned Over 30 Years From the Human Resource Competency Study (HRCS)
Dave Ulrich

31. The Handbook of Strategic 360 Feedback: Themes, Prognostications, and Sentiments
Allan H. Church, David W. Bracken, John W. Fleenor, and Dale S. Rose

About the Editors

Name Index

Subject Index


FOREWORD

The Handbook of Strategic 360 Feedback is a compilation of essays about the various aspects of feedback, written by the top practitioners and academics in the field. You will not find a more comprehensive volume on this subject, so I congratulate you for picking up this book and exploring this most important topic.

Many refer to me as one of the “pioneers of 360 Feedback,” which just means that I have been studying the subject for a long time! In the years that I have been an executive coach, I have found that the key issue to recognize in giving feedback to top performers is that the “no-news-is-good-news” approach is not an effective management technique for handling your superstars. Too often, we assume that these individuals know how much we value their contributions, and we take the lazy approach to providing feedback: “You know you’re doing a good job.” Or worse: “Write your own performance review, and I’ll sign it.” Sound familiar? Here are some quick tips for discussing performance with, and motivating, your top talent more effectively:

1. Approach the discussion with the same preparation and attention to detail that you give team members with problems or growth opportunities. If they truly are valued by you and the organization, give them the thoughtfulness, respect, and time that they deserve.

2. Recognize that the quickest way to encourage a top performer to start looking for a job elsewhere is to tell them, “There is nothing that you need to work on.” Based on our database of over 4 million leaders, the highest-ranked behavior of our top performers is a commitment to self-improvement. These people want, and need, to learn and grow. Help them identify opportunities.

3. Specify the value that these performers bring to you and to the organization. Express the cause and effect of their contributions and role in the organization, and the appreciation that you personally feel.

4. Be as honest as possible about future opportunities within the organization. Do not commit beyond your span of control. It is better to be candid and maintain trust than to have these individuals base decisions on promises that you cannot keep.

5. Recognize that, as their leader, you have the greatest ability to retain these human assets. The number one factor that influences people’s intent to stay in or leave a job is their satisfaction or dissatisfaction with their leader, so keep them challenged; provide them with ongoing feedback; and recognize and express your appreciation for their contributions. Most important, recognize that you will have the most impact on their continued growth and satisfaction.

These are just a few of the things I have learned about feedback over the years. I hope they are helpful to you. In The Handbook of Strategic 360 Feedback, you are going to learn much more from exceptional thought leaders on the subject, including Dave Ulrich, Cindy McCauley, Manny London, and, of course, the editors Allan, David, John, and Dale. I know you will enjoy this outstanding work about Strategic 360 Feedback, and that applying what you learn here in your organization, with your teams and leaders, will take you and your companies from where you are to where you want to be. Life is good.

Marshall Goldsmith


CONTRIBUTORS

Seymour Adler, PhD
Partner, Aon Hewitt

Karen M. Barbera, PhD
Head of Client Delivery, CultureIQ

Lauren M. Bidwell, PhD
Research Scientist, Human Capital Management Research, SAP SuccessFactors

David W. Bracken, PhD
Principal, DWBracken & Associates; Professor and Academic Program Coordinator, Keiser University Graduate Studies

Stéphane Brutus, PhD
RBC Professor of Motivation and Employee Performance, John Molson School of Business, Concordia University

W. Warner Burke, PhD
E. L. Thorndike Professor of Psychology and Education, Department of Organization and Leadership, Teachers College, Columbia University

Emily D. Campion, PhD
Assistant Professor of Management, Old Dominion University; Consultant, Campion Consulting Services

Michael A. Campion, PhD
Krannert Chair Professor of Management, Purdue University; Consultant, Campion Consulting Services

Michael C. Campion, PhD
Vackar College of Business and Entrepreneurship, University of Texas Rio Grande Valley; Consultant, Campion Consulting Services

Tomas Chamorro-Premuzic, PhD
Chief Talent Scientist, ManpowerGroup; Professor of Business Psychology, University College London; Visiting Professor, Columbia University

Samantha L. Chau, PhD
Director, Talent Assessment, Performance, and Succession Management, Novo Nordisk Inc.

Allan H. Church, PhD
Senior Vice President, Global Talent Assessment and Development, PepsiCo

Jay A. Conger, DBA
Henry R. Kravis Professor of Leadership Studies, Claremont McKenna College

Christine Corbet Boyce, PhD
Vice President and Principal Consultant, Right Management, ManpowerGroup

Jason J. Dahling, PhD
Professor and Chair, Psychology Department, The College of New Jersey

Alexis A. Fink, PhD
Senior Leader, Talent Management, Intel

John W. Fleenor, PhD
Senior Researcher, Center for Creative Leadership

Katey E. Foster, PhD
Associate Director and Litigation Associate Practice Leader, APTMetrics Inc.

Adrian Furnham, DSc, DLit, DPhil
Department of Leadership and Organizational Behavior, Norwegian Business School

Marshall Goldsmith, PhD
Founder, Marshall Goldsmith Group; Professor, Management Practice, Tuck School of Business (Dartmouth)

Keith Goudy, PhD
Managing Partner, Vantage Leadership Consulting

Steven T. Hunt, PhD
Senior Vice President, Human Capital Management Research, SAP SuccessFactors

Robert B. Kaiser, PhD
Kaiser Leadership Solutions; Editor-in-Chief, Consulting Psychology Journal: Practice and Research

Beth Linderbaum, PhD
Vice President and Principal Consultant, Right Management, ManpowerGroup

Manuel London, PhD
Dean, College of Business, and SUNY Distinguished Professor of Management, Stony Brook University

William H. Macey, PhD
Senior Research Fellow, CultureFactors Inc.

Tracy M. Maylett, EdD
Chief Executive Officer, DecisionWise; Faculty, Organizational Behavior/HR, Marriott School of Business, Brigham Young University

Cynthia McCauley, PhD
Senior Fellow, Center for Creative Leadership

Rich McGourty, PhD
Senior Consultant, Vantage Leadership Consulting

Stefanie Mockler, MA
Consultant and Head of Client Insights, Vantage Leadership Consulting

Kenneth M. Nowack, PhD
Chief Research Officer and President, Envisia Learning Inc.; Editor-in-Chief, Consulting Psychology Journal: Practice and Research

Elaine D. Pulakos, PhD
Chief Executive Officer, PDRI

Dale S. Rose, PhD
President, 3D Group

Christopher T. Rotolo, PhD
Vice President, Global Talent Management and Organization Development, PepsiCo

Eduardo Salas, PhD
Allyn R. and Gladys M. Cline Professor and Chair, Department of Psychological Sciences, Rice University

Katina Sawyer, PhD
Assistant Professor of Management, The George Washington University

John C. Scott, PhD
Chief Operating Officer and Cofounder, APTMetrics; Past Editor-in-Chief, Industrial and Organizational Psychology: Perspectives on Science and Practice

Justin M. Scott, Esq.
Scott Employment Law P.C.

William J. Shepherd, PhD
Director, Enterprise Learning and Development, Wendy’s

Joe Sherwood, MS
Graduate Student, Portland State University

Evan F. Sinar, PhD
Chief Scientist and Vice President, DDI

James W. Smither, PhD
Professor, Management and Leadership Department, La Salle University

Lorraine Stomski, PhD
Vice President, Global Learning & Leadership, Walmart

Allison Traylor
Doctoral Student, Rice University

Dave Ulrich
Rensis Likert Professor of Business, Ross School of Business, University of Michigan; Partner, The RBL Group

Anna Marie Valerio, PhD
President, Executive Leadership Strategies

Paul Winum, PhD
Senior Partner, RHR International LLP


1 /// INTRODUCTION AND OVERVIEW TO THE HANDBOOK OF STRATEGIC 360 FEEDBACK

ALLAN H. CHURCH, DAVID W. BRACKEN, JOHN W. FLEENOR, AND DALE S. ROSE

Where would we be without feedback? It is a constant aspect of our daily lives. We receive feedback as children on how to behave, as students on what we have learned, as friends and partners in our relationships, as parents on how we are raising our children, and, of course, as employees in the workplace. Whether it is feedback on our performance via a formal appraisal process, from a conversation regarding our career prospects, or from our direct reports, peers, and others regarding our leadership and management behaviors, there is no escaping the impact or role of this “gift,” as some people like to call it, in our lives. It should come as no surprise, then, that the act of collecting and delivering feedback in organizational settings has evolved from a disjointed set of informal conversations to a formal process that is a staple of human resource (HR) and management practices in the workplace today. Although many terms have been used since its inception in the early 1950s and its surge in popularity in organization development (OD) and industrial–organizational (I-O) psychology in the 1990s, today what we call 360 Feedback is one of the most standard and commonly used HR practices in organizations to measure, develop, and drive change in employee behavior (Bracken, Rose, & Church, 2016). Recent benchmark studies, for example, have reported that upward of 50% of all organizations have some form of 360 Feedback mechanism in place that is used for talent management decision-making purposes (e.g., 3D Group, 2016; United States Office of Personnel Management, 2012). The most recent overview of 360 Feedback in the field, offered by Bracken et al. (2016), defines the process this way:

360 Feedback is a process for collecting, quantifying, and reporting co-worker observations about an individual (i.e., a ratee) that facilitates/enables the (1) evaluation of rater perceptions of the degree to which specific behaviors are exhibited, and the (2) analysis of meaningful comparisons of rater perceptions across multiple ratees and between specific groups of raters for an individual ratee for the purpose of creating sustainable individual, group, and/or organizational change in behaviors valued by the organization. (p. 764)

While the early stages of 360 Feedback, with a few notable exceptions, were primarily focused on individual development coaching, leadership development, and organizational change efforts, today the process of collecting information on employee behaviors from multiple sources (e.g., direct reports, peers, supervisors, customers) has become an integral part of many HR and talent management processes, as well as being used for decision-making purposes. These include areas such as performance management, succession planning, high-potential identification, and internal placement and promotion decisions. A recent benchmark study of top companies reported that 70% used 360 Feedback, along with personality measures and interviews, as the number one tool for both assessing and developing their high-potential individuals and senior executives (Church & Rotolo, 2013). While there was considerable debate on the efficacy of data from 360 Feedback processes in these types of applications at the turn of the millennium (e.g., Bracken, Timmreck, & Church, 2001; London, 2001), Bracken et al. (2016) noted the debate is over. 360 Feedback is no longer a fad or phenomenon but instead a theoretically grounded, highly researched, and well-established practice area that has been shown to have a significant impact on individual, group (team), and organizational performance.

Almost 20 years ago, Bracken et al. (2001) edited the Handbook of Multisource Feedback, which was the first attempt to bring the best and latest thinking on the topic of 360 Feedback together into a single volume for practitioners and researchers in the field. The title itself reflects the changing nature of the term during that time period. For many years, that edition served as the “manual” for designing, implementing, and evaluating 360 Feedback systems in organizational settings. While several important review articles have since appeared in the academic literature clarifying the definition and intent of the approach, discussing major themes in practice, and offering key learnings to date (e.g., Bracken et al., 2016; Nowack & Mashihi, 2012), nothing has been offered that matches the breadth or depth of the original handbook. Given the myriad changes in the business environment (e.g., globalization, new forms of organizations and the nature of work, generational differences and value structures), the new capabilities that technology offers in this area (e.g., digital processes and Big Data applications), and the increasing pressures on organizations to address existing and emerging talent demands (Boudreau, Jesuthasan, & Creelman, 2015; Church & Burke, 2017; McDonnell, 2011; Meister & Willyerd, 2010; Zemke, Raines, & Filipczak, 2013), we felt it was time to revisit the “state of the science and practice” of 360 Feedback with a new definitive handbook on this important topic. As a result, we decided to close that gap. What you have in your hands is The Handbook of Strategic 360 Feedback.

This volume represents a significant leap forward in our collective understanding of the systematic process of collecting behavioral data from multiple sources in the workplace and using the resulting feedback to enhance individual development, inform talent decision-making, identify actionable organizational insights, and drive organizational change. In preparing this handbook, we have once again turned to both deep experts and leading-edge researchers and practitioners who are engaged in the art and science of 360 Feedback today across a multitude of applications and organizational contexts. Prominent academics and scientist–practitioners, including Adler, Barbera, Bracken, Brutus, Burke, Campion, Chamorro-Premuzic, Church, Conger, Fink, Fleenor, Furnham, Hunt, Kaiser, London, Macey, McCauley, Nowack, Pulakos, Rose, Rotolo, Salas, Scott, Shepherd, Sinar, Smither, Stomski, Ulrich, Valerio, and Winum, among others, have offered entirely new discussions, reviews, and applications on the use of 360 Feedback in organizations today. More than just the views of experts, we wanted this book to be practical.
We wanted to provide ideas, perspectives, and guidance that any organization could readily apply. To this end, the single largest section presents seven case studies describing the ways 360 Feedback is used by some of the largest, most successful companies of our time, including PepsiCo, Whirlpool, General Mills, and Walmart, along with a handful from other industries that chose to remain anonymous (which seems appropriate for a volume on 360 Feedback). Thus, this book represents a truly important collection of the latest thinking and best practice knowledge available anywhere on the subject.

What makes this handbook unique, however, is our emphasis on the strategic intent and focus of many 360 Feedback processes. Until now, the vast majority of the literature has centered on the “what” and “how” of these data-driven processes. Our goal this time was to go beyond the basics and focus on how 360 Feedback can and should be used at the individual, group, and organization levels to support the strategic goals of the business. While the fundamentals are clearly important, and we do offer some guidance on those where appropriate, we would also direct the reader back to the original Handbook of Multisource Feedback for tactical guidance that has withstood the test of time. For this handbook, we offer a higher level perspective linked to the systems level of an organization, yet one grounded in practical realities, with critical discussions, case studies, deeper application examples, and the latest emerging topics and research to assist the reader in implementing the best and most effective 360 Feedback systems they can.

OVERVIEW OF THE BOOK

In designing the flow and contents of the handbook, we decided to structure the book around five major sections that we felt would appeal to the variety of readers (and those designing, implementing, and researching 360 Feedback systems today). After an overview of 360 as a strategic process, the contents are presented in the major sections discussed next.

Section I: 360 for Decision-Making

Chapters in Section I focus on the design considerations, implications, and best practices (e.g., the latest technology) for using 360 Feedback in processes impacting employee outcomes, such as performance management, talent management, individual assessment, and high-potential identification, and in senior executive succession contexts. Given that these areas reflect the evolution of the practice from development to decision-making, we highlight them first to emphasize the shift in their importance in organizations. As organizations seek to qualify the value of 360 Feedback and utilize the results obtained, these are some of the hot topics for many companies today.

Section II: 360 for Development

Chapters in Section II reflect the more deeply rooted and commonly used applications of 360 Feedback, including leadership development, team development, linking with personality data to enhance impact, OD, and individual behavior change. In addition, new applications, such as using these types of processes to build functional capabilities, are also discussed. While many practitioners will be familiar with some of these approaches, the content presented here represents new thinking and perspectives for consideration. With 20 additional years of experience in these areas, it is clear the field has learned a great deal about what makes developmental 360 Feedback efforts work (and not work) in a variety of settings.

Section III: 360 Methodology and Measurement

Given the importance of ensuring that strategic 360 Feedback applications are actually measuring what they purport to measure, the chapters in Section III focus on helping both practitioners and researchers understand the underlying mechanics of how the process works and the levers needed for success. The emphasis here is on critical measurement topics, such as the best ways to improve rater performance (i.e., enhance the quality and distribution of ratings), how to understand rating congruence between different sources and what to do about it, whether 360 is a predictor or a criterion measure, and the factors affecting the validity of these systems.

Section IV: Organizational Applications

With a firm understanding of the different types of practices and measurement components involved, Section IV focuses on more specific case study applications in organizational settings. Chapters here focus on the use of 360 Feedback in a variety of contexts for both development and decision-making and reflect a number of different and somewhat intriguing approaches. While some of the topics are similar (though with different cases) to those in previous sections, such as performance management, leadership development, talent assessment, and succession, others present unique approaches, including an emphasis on reputation, working with the board of directors, and team interventions.

Section V: Critical and Emerging Topics

The final section of the book focuses on critical and emerging topics for the field. Interestingly, while some of these are consistent with concepts that were identified as early trends in 2000 (e.g., ethics issues, gender and diversity considerations, and legal implications, particularly when using 360 Feedback for decision-making), others represent entirely new areas that are emerging today (e.g., new perspectives using data analytics, alternative forms of feedback, and the use of 360 to influence the HR profession itself). The fact that we identified both ongoing critical issues and new emerging trends speaks to the ubiquitous nature of 360 Feedback as an integral HR process.


KEY THEMES IDENTIFIED

In collecting, writing, and reviewing the other 30 chapters included in this volume, we have been struck by a number of themes that kept emerging almost regardless of the topic areas discussed. These are summarized as follows: 1. Purpose Matters: One of the central considerations in any strategic 360 Feedback system is the purpose of the process or program. If one were to read through the contents of this handbook end to end it, might be apparent that some of the recommendations and best practices offered seem to contradict each other in certain areas. Although we would argue that all strategic 360 Feedback efforts should be linked to the goals, values, mission, or vision of the business; be integrated with other HR systems; have solid measurement properties; and be inclusive of the target audience, the way in which decisions are made regarding these factors will be influenced by the overall purpose of the process. For example, a 360 Feedback process designed to drive large-​scale organizational change may be focused on an ideal or future state set of cultural imperatives, while one directed at individual development might be based on enhancing, via a highly facilitated coaching and development program, specific leadership competencies needed for individual effectiveness. A performance management–​based 360 Feedback program will likely have a different set of process rules, timing requirements, and measurement standards for validation than one focused on group dynamics or team interventions. If the emphasis is on high-​potential identification or C-​suite succession, the process might be highly selective and perhaps less transparent with respect to certain outputs (e.g., fit to senior profile indices or resulting “high-​potential” designation based on the data) versus one focused on enhancing a wide range of managerial skills around collaboration that is not linked to compensation or promotions. 
The key, then, as in any data-​driven consulting effort, when designing a new 360 Feedback system (Bracken et al., 2016; Church & Waclawski, 2001) is to contract (or determine) the true purpose of the process up front before heading into the rest of the design and implementation stages. Moreover, it is equally critical to fully understand the purpose, both stated and real—​sometimes they may not be the same—​when considering revisions or enhancements to an existing ongoing application. 2. Feedback Is No Longer for Development Only:  Although this was already a key premise going into the structure of the handbook (as noted previously) and the selection of chapter topics based on prior arguments we have made elsewhere (e.g., Bracken & Church, 2013; Bracken et al., 2016), many of the authors echoed our

 7

Introduction and Overview  //​ 7

conviction even in sections not intended for that part of the discussion. Although we would all agree that 360 Feedback is a key process aimed at developing individuals (and groups and organizations), we were struck by how many of our colleagues highlighted the ways in which these processes and the data gathered as a result can be used to inform or make decisions in organizations today. While some still support the development-​only model, and in targeted circumstances such as pure leadership capability-​building programs, culture change efforts, or targeted coaching interventions, the general trend appears to be toward using the data in ways that add value to the individual and the business. The last is key, of course, to our definition of whether a given 360 Feedback process is strategic in nature so it makes sense. Still, it appears as though the future some of us discussed in the original handbook (Bracken et al., 2001) is now the present. 360 Feedback is the most commonly used tool for identifying high-​potential individuals, assessing senior executives for succession-​planning efforts (e.g., Church & Rotolo, 2013; Silzer & Church, 2010), and increasingly finding its way into more robust performance management systems. The key, of course, which is highlighted in many of the chapters here, is to ensure the purpose and design elements are done the right way. It is not simply a case of using the same old 360 Feedback tools an organization has in place (or introducing some standard tool off the shelf) and changing the primary intent. That would result in serious risk to the organization and potential chaos among the employee population. Rather, we see organizations moving toward designing and implementing 360 Feedback systems that are focused on transparency of purpose, use sound measurement properties, are linked to the strategic direction of the business, and are empirically validated to ensure the results are predicting the right types of expected outcomes.   
At this point, it is not about whether we should or should not use 360 data for these more strategic types of decision-making applications, but rather how best to do so. As the legal landscape continues to grow in complexity, including new data privacy regulations as well as the standard Equal Employment Opportunity Commission (EEOC) guidelines in the United States, it is paramount that organizations follow the recommendations for practice included in this volume to ensure they are taking the steps required to utilize their 360 Feedback systems to the best possible advantage (and least possible risk). This means ensuring that practitioners trained in I-O psychology and related disciplines are involved in the design and validation process along the way.

3. The Technology of 360 Feedback Is Both an Art and a Science: Although much of the content of this handbook focuses on the science of 360 Feedback systems,


we believe it is also important to recognize that creating effective strategic 360 processes is an art form as well. Just because people can do something (e.g., create and launch their own tools online) does not mean it is always a good idea. While technology has enabled significantly broader access to 360 Feedback tools than ever before (some of us even remember running these processes with optical scan forms and paper-based methods) and to all types of professionals, including those in HR and even line managers, there is no guarantee that it will be done well. In fact, we have seen many examples where well-intentioned leaders have created their own 360 processes using poorly written items, lopsided scales, and risky administration and reporting designs. While at first this might seem empowering to them and encouraging to those of us who have deep experience in the process (after all, it does speak to the perceived value of the methodology and the data), the risk associated with these rogue implementations is significant. Biased data, breaches in confidentiality, and inappropriate insights can lead to bad talent management decisions and larger negative consequences in the organization in terms of declines in employee engagement, trust in the company, and belief in the integrity of its leadership (not to mention legal exposure and poor business performance if the wrong leader is placed in a role based on a poor measure). Moreover, blind reliance on the science itself is no better. The emerging practice areas of Big Data and talent analytics suffer from a similar problem in that, without the appropriate strategic oversight and context on the part of those developing the insights, the resulting information delivered may be entirely off base or suspect on moral and ethical grounds (Church & Burke, 2017).
The more we rely on artificial intelligence and machine learning to drive our efforts in organizations, the greater the potential for these issues as well. How do machines know what the right type of linkage or relationship is to focus on when the people designing and managing them do not? Thus, the art of 360 Feedback lies in (a) the content that is to be measured (i.e., the identification and drafting of unique competencies and behaviors); (b) the design and implementation decisions, with appropriate trade-offs regarding what will and will not work in a given organizational setting; (c) the determination of the appropriate and impactful insights for both individuals and organizations for development and decision-making; and (d) methods to ensure that all those actions were performed as recommended, with adjustment and consequences for deviations by any user. The science of 360 Feedback (i.e., ensuring the right levels of transparency, confidentiality, validity, and accountability are present), on the other hand, has key elements that must be met every time a new process is launched. In thinking about the chapter contents


we selected for the handbook and those we did not pursue, this point has become even more salient for us. We firmly believe that the practice of 360 Feedback needs to be grounded in the appropriate philosophical, theoretical, and methodological models to ensure lasting success for both development and decision-making in talent management–related applications.

4. 360 Feedback Is Here to Stay: Years ago, there were many debates in the field concerning whether 360 Feedback was simply a fad or a truly important and lasting intervention for individuals and organizations. While some practitioners suggested it would one day fade into the distance, the benchmark data cited previously, as well as our work on this handbook, have shown this not to be the case at all. If anything, 360 Feedback as a process is more vibrant and integrated than ever before. It is one of the core tools that organizations rely on for helping their employees grow and develop, as well as for informing talent management and performance-based outcomes. Even though not every application described in this book follows our formal definition of 360 Feedback, we are excited to see the basic concepts come to life in such comprehensive and innovative ways: from individuals to teams to the organization as a system, to the board of directors. Collecting behaviorally based ratings and observations (e.g., write-in comments) from a variety of others in an organization and using those data to meet individual growth and organizational talent needs is a vital component of the way organizations do business. Moreover, even if (or when) the robots take over much of the work that leaders, managers, and HR do in organizations today, the ability to interpret and contextualize results and insights from 360 Feedback processes (and provide one-to-one feedback directly to clients) will remain in the hands of trained "human" professionals (Dotlich, 2018). There is a future yet for all of us.

CONCLUSION

In closing, the purpose of The Handbook of Strategic 360 Feedback is to highlight the very latest theory, research, and practice regarding the state of the field in a comprehensive yet approachable manner. In this volume, you will find recommendations, best practices, case examples, and key questions to consider for almost any type of 360 Feedback application currently imaginable. The key to all of it is ensuring the work we do around the process is purposeful and strategic in nature. We hope the book meets expectations and helps others in organizations (whether they are I-O psychologists, OD practitioners, HR business partners, learning and development professionals, or leaders and managers in the business) achieve these lofty goals.


REFERENCES

Boudreau, J. W., Jesuthasan, R., & Creelman, D. (2015). Lead the work: Navigating a world beyond employment. Hoboken, NJ: Wiley.
Bracken, D. W., & Church, A. H. (2013). The "new" performance management paradigm: Capitalizing on the unrealized potential of 360 degree feedback. People & Strategy, 36(2), 34–40.
Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360 degree feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(4), 761–794.
Bracken, D. W., Timmreck, C. W., & Church, A. H. (2001). The handbook of multisource feedback. San Francisco, CA: Jossey-Bass.
Church, A. H., & Burke, W. W. (2017). Four trends shaping the future of organizations and organization development. OD Practitioner, 49(3), 14–22.
Church, A. H., & Rotolo, C. T. (2013). How are top companies assessing their high-potentials and senior executives? A talent management benchmark study. Consulting Psychology Journal: Practice and Research, 65(3), 199–223.
Church, A. H., & Waclawski, J. (2001). A five phase framework for designing a successful multirater feedback system. Consulting Psychology Journal: Practice & Research, 53(2), 82–95.
Dotlich, D. (2018). In first person: The future of C-suite potential in the age of robotics. People & Strategy, 41(1), 48–49.
London, M. (2001). The great debate: Should multisource feedback be used for administration or development only? In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback (pp. 368–388). San Francisco, CA: Jossey-Bass.
McDonnell, A. (2011). Still fighting the "war for talent"? Bridging the science versus practice gap. Journal of Business and Psychology, 26, 169–173. doi:10.1007/s10869-011-9220-y
Meister, J. C., & Willyerd, K. (2010). The 2020 workplace: How innovative companies attract, develop, and keep tomorrow's employees today. New York, NY: HarperCollins.
Nowack, K. M., & Mashihi, S. (2012). Evidence-based answers to 15 questions about leveraging 360-degree feedback. Consulting Psychology Journal: Practice and Research, 64(5), 157–182.
Silzer, R., & Church, A. H. (2010). Identifying and assessing high potential talent: Current organizational practices. In R. Silzer & B. E. Dowell (Eds.), Strategy-driven talent management: A leadership imperative (pp. 213–279; SIOP Professional Practice Series). San Francisco, CA: Jossey-Bass.
3D Group. (2016). Current practices in 360 degree feedback (5th ed.). Emeryville, CA: 3D Group.
United States Office of Personnel Management. (2012). Executive development best practices guide. Washington, DC: Author.
Zemke, R., Raines, C., & Filipczak, B. (2013). Generations at work: Managing the clash of veterans, boomers, Xers, and Nexters in your workplace. New York, NY: American Management Association.


2 /// WHAT IS "STRATEGIC 360 FEEDBACK"?

DAVID W. BRACKEN

A bevy of associates and I (Bracken, Dalton, Jako, McCauley, & Pollman, 1997; Bracken, Timmreck, & Church, 2001; Bracken, Timmreck, Fleenor, & Summers, 2001) have diagnosed the application of 360 Feedback for solely developmental purposes versus its use in decisions about employees. Some have argued that the distinction between "development only" and "decision-making" is either not fruitful (Smither, London, & Reilly, 2005) or an artificial one (Bracken & Church, 2013). But the discussion has not gone away, and drawing attention to the requirements for design, implementation, and use of data provided by a 360 process when used as an assessment can be useful (Bracken & Timmreck, 2001; Bracken, Timmreck, Fleenor, & Summers, 2001). There is no source that I can point to where the phrase "Strategic 360 Feedback" is used in the literature, though some vendors have integrated the phrase into their marketing. Dale Rose and I have been using the name "Strategic 360 Forum" for about 6 years in conjunction with a consortium of 360 users where the primary criterion for membership was the use of the tool for decision-making (i.e., integration into human resource [HR] systems). This book puts a stake in the ground regarding what Strategic 360 Feedback means, much as we have made a definitive statement about what 360 Feedback is and is not (Bracken, Rose, & Church, 2016). This chapter integrates those discussions with Chapter 3 by Campion, Campion, and Campion, and I encourage the reader to be familiar with that content. In Box 3.1, the first major heading is Strategic Considerations, and I quote many of their propositions and use


them to make some assertions regarding their relevance to a Strategic 360 Feedback process (though my assertions may not match those the Campions would make were they given the luxury and space to do so in their own chapter). The definition of Strategic 360 Feedback that we present here contains very little that has not been said before. Bernardin (1986) was perhaps the earliest proponent of using subordinate feedback in performance appraisals. We point to London, Smither, and Adsit's (1997) "Accountability" article as the most comprehensive statement of the potential of the process to improve decisions in talent management systems when used correctly, including applying the concept of accountability to focal leaders, raters, and the organization. This handbook attempts to move the field ahead by collecting best practices and experiences where many of those ideas have been applied in the intervening 30+ years.

WHAT IS STRATEGIC 360 FEEDBACK?

When this handbook was conceptualized and came into being with the invitations to our contributors, we created an operational definition of strategic as it is applied to 360 Feedback processes. Our expectation was that those who were invited would make their decision regarding whether to accept based on whether their experiences and expertise were consistent with the book's purpose, as well as use the definition as guidance for how their content should explicitly acknowledge those ties. Here is our four-point definition of Strategic 360 Feedback:

1. The content must be derived from the organization's strategy and values, which are unique to that organization. Campion et al. (Chapter 3) stated this requirement as, "The process and performance indicators (items) rated should be linked to the organizational strategy and aligned with business goals and objectives" (p. 22). The content is sometimes derived from the organization's values, which can be explicit (the ones that hang on the wall) or implicit (which some people call "culture"). Campion et al. (Chapter 3) take this requirement a step further by applying it not only to the content but also to the entire process: "The concept of using 360 Feedback should be consistent with the culture of the organization to ensure readiness and fit (e.g., open communication, open to feedback, peer review valued, not overly hierarchical, learning and development oriented, low fear of reprisal, etc.)" (p. 21). This practice is a bit tricky because 360 Feedback can help create a climate via both the behaviors exhibited by leaders in support of the process and aligned behavior change


that occurs because of feedback. This view of culture is consistent with my definition of organizational culture, adapted from the book Execution (Bossidy & Charan, 2002), as the behaviors that leaders exhibit, encourage, and tolerate. The behaviors of leadership and focal leaders (if they are different) are both under scrutiny by the followership before, during, and especially after the 360 process is conducted. This, in turn, leads to another related best practice from the Campions in Chapter 3: “The process should be developed with the input of subject matter experts (e.g., incumbents, managers, users of the system, etc.) to ensure that it meets their needs and expectations, and that they will be committed to its implementation” (p. 22). This specific practice expands alignment to all facets of the process, starting with purpose. The health of a 360 system is highly dependent on its formal and informal support by all stakeholders, even though each stakeholder group has different priorities and definitions of success (Bracken, Timmreck, Fleenor, & Summers, 2001). If any of those groups is not committed to its success, it is likely that the process will not survive beyond its first round of feedback collection.

2. The process must be designed and implemented in such a way that the results are sufficiently reliable and valid that we can use them to make decisions about the leaders (as in Point 3). This is not an easy goal to achieve, as discussed by Fleenor in Chapter 14 and Bracken and Rotolo in Chapter 15. Despite the challenges in establishing both reliability and validity in 360 processes, benchmark studies continue to indicate that 360s are the most commonly used form of assessment in both public and private sectors (Church & Rotolo, 2013; United States Office of Personnel Management, 2012).
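To make the reliability requirement in Point 2 concrete, one common psychometric check is an internal-consistency estimate across raters. The sketch below is a minimal illustration only, not a method prescribed by the authors: it computes Cronbach's alpha treating each rater as an "item," and the function name, data layout (rows = rated behaviors, columns = raters), and sample ratings are all hypothetical.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha with raters treated as 'items'.

    ratings: 2-D array-like; rows are rated behaviors/competencies,
    columns are raters (hypothetical layout for illustration).
    """
    r = np.asarray(ratings, dtype=float)
    k = r.shape[1]                          # number of raters
    rater_vars = r.var(axis=0, ddof=1)      # each rater's variance across items
    total_var = r.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - rater_vars.sum() / total_var)

# Perfectly consistent raters yield an alpha of exactly 1.0.
perfect = [[4, 4, 4], [3, 3, 3], [5, 5, 5], [2, 2, 2]]
print(round(cronbach_alpha(perfect), 3))   # 1.0

# Mostly consistent raters with some noise yield a high but imperfect alpha.
noisy = [[4, 5, 4], [3, 3, 2], [5, 4, 5], [2, 2, 3], [4, 4, 5]]
print(round(cronbach_alpha(noisy), 3))     # 0.886
```

A low value on a check like this would be one warning sign that ratings are not yet dependable enough to support the decision-making uses described in Point 3, though in practice rater disagreement can also reflect genuinely different perspectives (see Chapter 16 on rater congruency).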

When 360 Feedback processes are used for decision-making, the knee-jerk reaction of some practitioners is to treat them as "tests," subject to psychometric scrutiny that often includes demands for criterion-related validity studies. This topic is further explored by Pulakos and Rose in Chapter 17, where they present the case for the use of 360 data as both predictor and criterion (performance) measures.

3. The results of Strategic 360s are integrated with important talent management and development processes, such as leadership development and training, performance management, staffing (internal movement), succession planning, and high-potential processes. Referring again to Chapter 3, the Campions state, "The process should be integrated with other human resource (HR) systems, such as compensation or promotion" (p. 21). Integration with HR processes is clearly a type of decision-making. Allan and I (Bracken & Church, 2013) contended that


even supposedly "development-only" processes result in decisions regarding training and other developmental experiences, decisions that often have substantial effects on the careers of the focal leaders. Under the umbrella of "talent management," almost any decision could be improved by multisource input. Because 360 Feedback processes are systems whose validity is affected by all aspects of implementation (Bracken & Rose, 2011; Bracken & Rotolo, 2018), we need to repeat the London et al. (1997) mantra that integration into HR/talent management systems also creates and requires accountability. While we usually think of accountability as referring primarily to the focal leader, London et al. (1997) forced us to examine the signs that the organization supports the system by its actions and decisions. As another best practice proposed by Campion et al. (Chapter 3), "The process should have the support of top management" (p. XX). This best practice may be the sine qua non of Strategic 360 Feedback. By definition, if it is not supported by senior management, it is no longer "strategic."

4. Participation must be inclusive, that is, a census of the leaders/managers in the organizational unit (e.g., total company, division, location, function, level). This practice is not included in the Campion et al. (Chapter 3) list of best practices but was initially proposed by Bracken and Rose (2011). We say "leaders/managers" because a true 360 requires that direct reports be a rater group. One reason for this requirement is that, if the data are to be used to make personnel decisions, it usually requires comparing individuals, which in turn requires that everyone have the same data available. This requirement also enables us to use Strategic 360s to create organizational change, as in "large scale change occurs when a lot of people change just a little" (Bracken, Timmreck, & Church, 2001, p. 1).
Some of our contributors present case studies where the focal leader is just that (i.e., a single person). If all the other requirements are met (alignment, reliability/validity, used for decision-making), then we would support the position that there is no need for comparisons, and the decision will be made on some other metric(s). If there are other focal leaders being considered as part of the decision (e.g., promotion), then those leaders should participate as well.

USES FOR STRATEGIC 360 FEEDBACK

A 360 Feedback system is likely to be considered strategic if it is designed to serve one or more of the following uses:






• Creates sustainable change in behaviors valued by an organization (i.e., those aligned with values, competencies, or strategies)
• Creates behavior change in key leader(s) whose actions carry significant influence through decision-making and modeling
• Informs decisions integral to organization-wide talent management processes (e.g., pay, promotions, development, training, staffing) or corporate strategy (pursue growth, focus on operational efficiencies, consolidate operations)
• Informs decisions (selection, development, retention, assignments) for key subpopulations (e.g., high potentials, succession plans)
• Supports the creation and maintenance of a feedback culture that creates awareness coupled with accountability for change

QUALIFIERS

Let me hasten to say that (a) all 360s, strategic or not, should have a development focus, and (b) none of this minimizes the value of 360 processes that are used in support of the development of leaders, one at a time. There is no question that innumerable leaders have benefitted from the awareness created by feedback, often also supported by a coach who helps not only by managing the use of the feedback but also by creating accountability for the constructive use of the feedback. We are not proposing, by any stretch, that those types of 360 Feedback processes need to change. We do request, however, that practitioners who are from that school be open to the proposal that there are uses of this powerful tool that can be of benefit outside the development-​only, one-​person-​at-​a-​time world. I had a short debate on LinkedIn with a development-​only proponent that ended abruptly when he exclaimed, “It should be used only for development. Full stop.” (He is British.) For him, there was no use in further discussion. On the contrary, we hope that a book like this demonstrates that there can be productive, parallel (sometimes intersecting) universes of applications for 360 Feedback.

REFERENCES

Bernardin, J. H. (1986). Subordinate appraisal: A valuable source of information about managers. Human Resource Management, 25(3), 421–439.
Bossidy, L., & Charan, R. (2002). Execution: The discipline of getting things done. New York, NY: Crown Business.
Bracken, D. W., & Church, A. H. (2013). The "new" performance management paradigm: Capitalizing on the unrealized potential of 360 degree feedback. People & Strategy, 36(2), 34–40.
Bracken, D. W., Dalton, M. A., Jako, R. A., McCauley, C. D., & Pollman, V. A. (1997). Should 360-degree feedback be used only for developmental purposes? Greensboro, NC: Center for Creative Leadership.


Bracken, D. W., & Rose, D. S. (2011). When does 360-degree feedback create behavior change? And how would we know it when it does? Journal of Business and Psychology, 26, 183–192.
Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360° feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(4), 761–794. doi:10.1017/iop.2016.93
Bracken, D. W., & Timmreck, C. W. (2001). Guidelines for multisource feedback when used for decision making. In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback (pp. 495–510). San Francisco, CA: Jossey-Bass.
Bracken, D. W., Timmreck, C. W., & Church, A. H. (2001). The handbook of multisource feedback. San Francisco, CA: Jossey-Bass.
Bracken, D. W., Timmreck, C. W., Fleenor, J. W., & Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 40(1), 3–20.
Church, A. H., & Rotolo, C. T. (2013). How are top companies assessing their high potentials and senior executives? A talent management benchmark study. Consulting Psychology Journal: Practice and Research, 65(3), 199–223.
London, M., Smither, J. W., & Adsit, D. J. (1997). Accountability: The Achilles' heel of multisource feedback. Group & Organization Management, 22(2), 162–184.
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58, 33–66.
United States Office of Personnel Management. (2012). Executive development best practices guide. Washington, DC: Author.


SECTION I

360 FOR DECISION-​MAKING


3 /// BEST PRACTICES WHEN USING 360 FEEDBACK FOR PERFORMANCE APPRAISAL

EMILY D. CAMPION, MICHAEL C. CAMPION, AND MICHAEL A. CAMPION

The 360 Feedback process was originally created for the sole purpose of managerial development (Hazucha, Hezlett, & Schneider, 1993). Defined as the solicitation of anonymous performance ratings of one individual from multiple sources (e.g., peers, subordinates, bosses, and customers), 360 Feedback (360s) is capable of examining the breadth and depth of a worker's capabilities within his or her assigned roles. More recently, organizations have implemented this powerful tool as a performance management (PM) mechanism. However, some scholars warn against using 360s for anything more than development, citing the system's poor criterion-related validity and misalignment between the goals of PM and the characteristics of 360 Feedback (DeNisi & Kluger, 2000), the potential social costs of inviting others into a high-stakes decision (Funderburg & Levy, 1997), and the risk of ineffectively using and communicating the information gathered about an employee (Tornow, 1993). It is not surprising, though, that managers would be inclined to use 360s to gain a clearer picture of their employees' performance when making pay or promotion decisions, and there is no doubt managers will continue to do so despite the warnings of researchers. Therefore, in an effort to respond to this need and extend Campion, Campion, and Campion's (2015) article on


why organizations should be using 360s for PM, in this chapter we provide a "how-to" resource by reviewing the literature on 360s and presenting a list of 56 research-supported best practices for effectively using them for PM.

DESIGN OF REVIEW AND DATA COLLECTION METHODOLOGY

We conducted an exhaustive review of the research and professional literature accumulated to date on the topic of 360s using the PsycINFO and Business Source Premier databases and Google Scholar. Our search yielded 221 articles or book chapters on this topic. This chapter includes the professional literature because not all topics have been subjected to research analysis, and professional practice offers valuable insight that is not represented in the current body of research literature. The result is a list of 56 best practices explaining how to conduct and use 360s for the purposes of PM. We define best practices as recommendations deriving from research findings, recommendations from professionals, or clear inferences from the literature regarding how to incorporate 360s into PM systems in organizations. It is important to note that these are not minimum expectations or required industry standards, but instead are ideal standards that well-run organizations might aspire to achieve. It is not expected that an organization will meet all of these best practices, and failing to meet a best practice does not indicate a fault with the organization's process. Sometimes, best practices are not applicable in a given context, not necessary, too expensive, or otherwise discretionary. These best practices are divided into major topic areas (e.g., strategic considerations, item content, rating scales, administration, etc.). The best practices are identified and summarized in Box 3.1, which also presents all the supporting citations in order to illustrate the magnitude of support and to direct interested readers and future researchers to the source documents.

BEST PRACTICES FOR USING 360S FOR PERFORMANCE MANAGEMENT

Box 3.1 lists 56 best practices for implementing 360s for PM. For ease of understanding, the practices are grouped into nine categories: strategic considerations, items, scales, raters, administration, training/instruction, interpretation of feedback, development, and review. In the sections that follow, we define and address the importance of each category, briefly discuss practices illustrative of each category, and propose future work needed to further elaborate on the practices within each category.


BOX 3.1
BEST PRACTICES FOR USING 360 FEEDBACK FOR PERFORMANCE MANAGEMENT

Strategic Considerations

1. The process should be integrated with other human resource (HR) systems, such as compensation or promotion. Antonioni (1996); Atwater, Brett, and Charles (2007); Atwater and Waldman (1998); Atwater, Waldman, and Brett (2002); Bancroft et al. (1993); Bernardin (1986); Bernardin and Beatty (1987); Bernardin, Dehmus, and Redmon (1993); Bozeman (1997); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken, Timmreck, Fleenor, and Summers (2001); Brutus and Derayeh (2002); Carson (2006); Church and Bracken (1997); Church and Waclawski (2001); Fleenor, Taylor, and Chappelow (2008); Ghorpade (2000); Gillespie (2005); Heidemeier and Moser (2009); Herold and Fields (2004); R. Hoffman (1995); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); London and Beatty (1993); London, Smither, and Adsit (1997); London, Wohlers, and Gallagher (1990); McCarthy and Garavan (2007); McEvoy and Buller (1987); Metcalfe (1998); Morgan, Cannan, and Cullinane (2005); Nowack and Mashihi (2012); Peiperl (2001); Rogers, Rogers, and Metlay (2002); 3D Group (2013); Toegel and Conger (2003); Tornow (1993a); Tyson and Ward (2004); van Hooft, Flier, and Minne (2006); Vinson (1996); Waldman and Atwater (2001); Waldman, Atwater, and Antonioni (1998); Wimer and Nowack (1998)

2. The concept of using 360 Feedback should be consistent with the culture of the organization to ensure readiness and fit (e.g., open communication, open to feedback, peer review valued, not overly hierarchical, learning and development oriented, low fear of reprisal, etc.). Atwater et al. (2002, 2007); Atwater and Waldman (1998); Bancroft et al. (1993); Bracken (1994); Bracken, Dalton, Jako, McCauley, Pollman, and Hollenbeck (1997); Carson (2006); Church and Waclawski (2001); Conway, Lombardo, and Sanders (2001); Drew (2009); Fleenor, Smither, Atwater, Braddy, and Sturm (2010); Fleenor et al.
(2008); Funderburg and Levy (1997); Furnham and Stringfield (1994); Gillespie (2005); Heidemeier and Moser (2009); Hezlett (2008); R. Hoffman (1995); Lepsinger and Lucia (1997); London and Beatty (1993); London and Smither (2002); London et  al. (1990); Metcalfe (1998); Morgan et  al. (2005); Ng, Koh, Ang, Kennedy, and Chan (2011); Peiperl (2001); Robertson (2008); Salam, Cox, and Sims (1997); Seifert, Yukl, and McDonald (2003); Smither, London, and Reilly (2005); Waldman (1997); Waldman and Bowen (1998); Westerman and Rosse (1997); Wimer (2002); Wimer and Nowack (1998)



4. The purpose, policies, procedures, uses of the data, and other aspects of the process should be clearly defined and communicated to managers and employees. Atwater et al. (2007); Atwater and Waldman (1998); Bernardin (1986); Bernardin and Beatty (1987); Bernardin, Konopaske, and Hagan (2012); Bracken (1994); Bracken and Timmreck (1999); Bracken et al. (2001); Brutus et al. (2006); Church and Bracken (1997); Church and Waclawski (2001); Fleenor et al. (2008); Garbett, Hardy, Manley, Titchen, and McCormack (2007); R. Hoffman (1995); Kanouse (1998); London and Beatty (1993); Maylett (2009); McCarthy and Garavan (2001, 2007); Metcalfe (1998); Morgan et al. (2005); Peiperl (2001); Pollack and Pollack (1996); Redman and Snape (1992); Robertson (2008); Smith and Fortunato (2008); Testa (2002); 3D Group (2013); Waldman and Atwater (2001); Waldman et al. (1998); Westerman and Rosse (1997); Wimer (2002); Wimer and Nowack (1998)

5. The process and performance indicators (items) rated should be linked to the organizational strategy and aligned with business goals and objectives. Bracken and Timmreck (1999); Bracken et al. (2001); Brutus and Derayeh (2002); Carson (2006); Church and Waclawski (2001); Drew (2009); Fleenor et al. (2008); Hezlett (2008); R. Hoffman (1995); Kanouse (1998); London and Beatty (1993); London et al. (1990); Maylett (2009); Morgan et al. (2005); Nowack and Mashihi (2012); Rogers et al. (2002); Smither et al. (1995); Waldman et al. (1998)

6. The performance expectations (including the performance indicators) should be clearly communicated and agreed on with employees at the beginning of the evaluation period. Bracken and Timmreck (1999); Church and Waclawski (2001); Dominick, Reilly, and McGourty (1997); Fleenor et al. (2008); Lepsinger and Lucia (1997); London and Beatty (1993); London and Smither (1995); London et al. (1990, 1997); Nowack and Mashihi (2012); Reilly, Smither, and Vasilopoulos (1996); Tornow (1993a); Williams and Johnson (2000)


Best Practices for Performance Appraisal // 23

7. The process should have the support of top management. Bracken and Timmreck (1999); Bracken et al. (1997, 2001); Church (1995); Church and Waclawski (2001); Fleenor et al. (2008); Kanouse (1998); McCarthy and Garavan (2001); McCauley and Moxley (1996); Pollack and Pollack (1996); Rogers et al. (2002); Waldman et al. (1998)

Items

8. The items rated should be highly job related (based on a job analysis or other evidence, or related to generic job requirements applicable to the jobs, such as leadership) so that they will be valid. Antonioni (1996); Atkins and Wood (2002); Atwater, Ostroff, Yammarino, and Fleenor (1998); Atwater, Roush, and Fischthal (1995); Bailey and Austin (2006); Bailey and Fletcher (2002); Bancroft et al. (1993); Bernardin and Beatty (1987); Bernardin et al. (2012); Bracken (1994); Bracken et al. (1997, 2001); Bracken and Rose (2011); Bracken and Timmreck (1999); Carson (2006); Church (1995); Conway (1996); Dai, De Meuse, and Peterson (2010); Fleenor et al. (2008); Flint (1999); Furnham and Stringfield (1994); Garbett et al. (2007); Gillespie (2005); Herold and Fields (2004); Heslin and Latham (2004); R. Hoffman (1995); B. J. Hoffman et al. (2012); B. J. Hoffman and Woehr (2009); Johnson and Ferstl (1999); Kaiser and Craig (2005); Lepsinger and Lucia (1997); London and Beatty (1993); London and Smither (1995); London et al. (1990); Luthans and Peterson (2003); Manning, Pogson, and Morrison (2009); Maylett (2009); McCarthy and Garavan (2007); McEvoy and Buller (1987); Morgan et al. (2005); Mount, Judge, Scullen, Sytsma, and Hezlett (1998); Reilly et al. (1996); Salam et al. (1997); Smither (2008); Smither et al. (1995); Testa (2002); 3D Group (2013); Toegel and Conger (2003); van Hooft et al. (2006); Viswesvaran, Schmidt, and Ones (2002); Waldman and Atwater (2001); Waldman et al. (1998); Walker and Smither (1999); Westerman and Rosse (1997); Wimer and Nowack (1998); Woehr et al. (2005); Yammarino and Atwater (1997); Yukl and Lepsinger (1995) 9. The items should use the language of the organization (or be written by those with organizational knowledge). Bailey and Austin (2006); Bracken and Rose (2011); Bracken et al. (2001); Fleenor et al. (2008); Garbett et al.
(2007); Gillespie (2005); Johnson and Ferstl (1999); Kaiser and Craig (2005); Lepsinger and Lucia (1997); London and Beatty (1993); Maylett (2009); 3D Group (2013); Waldman and Atwater (2001); Walker and Smither (1999); Wimer and Nowack (1998); Yammarino and Atwater (1997) 10. The items should be behavioral (observable) to the extent possible, and they should be specific rather than general. Antonioni (1996); Atkins and Wood (2002); Atwater et al. (1995); Atwater and Van Fleet (1997); Atwater and Waldman (1998); Bailey and Austin (2006); Bernardin (1986); Bernardin and Beatty (1987); Bernardin et al. (1993); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Brutus and Facteau (2003); Church (1995); Fleenor
et al. (2008); Garbett et al. (2007); Ghorpade (2000); Gillespie (2005); Heidemeier and Moser (2009); Herold and Fields (2004); Heslin and Latham (2004); B. J. Hoffman et al. (2012); Jelley and Goffin (2001); Johnson and Ferstl (1999); Kaiser and Craig (2005); London and Beatty (1993); London and Smither (1995); London et al. (1997); Luthans and Peterson (2003); McCarthy and Garavan (2007); Nowack and Mashihi (2012); Redman and Snape (1992); Rogers et al. (2002); Salam et al. (1997); Toegel and Conger (2003); Viswesvaran et al. (2002); Waldman and Atwater (2001); Walker and Smither (1999); Woehr et al. (2005); Yammarino and Atwater (1997); Yukl and Lepsinger (1995) 11. A broad range of items should be considered, including citizenship-​related performance. Antonioni (1996); Atwater and Van Fleet (1997); Bracken (1994); Funderburg and Levy (1997); Garbett et al. (2007); Heidemeier and Moser (2009); Heslin and Latham (2004); London and Beatty (1993); Luthans and Peterson (2003); McCarthy and Garavan (2007); Smither et al. (1995); Thomason, Weeks, Bernardin, and Kane (2011); 3D Group (2013); Waldman and Atwater (2001); Waldman et al. (1998); Waldman and Bowen (1998); Walker and Smither (1999) 12. The behavior reflected by the items should be under the control of the employee and amenable to change (i.e., actionable). Antonioni (1996); Atkins and Wood (2002); Bracken (1994); Bracken and Timmreck (1999); Fleenor et al. (2008); Garbett et al. (2007); London and Beatty (1993); Luthans and Peterson (2003); McCarthy and Garavan (2007); Smither, London, and Reilly (2005); Smither et al. (1995); Tornow (1993a); Vecchio and Anderson (2009) 13. The items should be clear and understandable to everyone involved (e.g., raters, ratees, managers, etc.). Antonioni (1996); Bracken (1994); Bracken et al. (2001); Brutus and Facteau (2003); Church (1995); Fleenor et  al. (2008); Garbett et  al. 
(2007); Gillespie (2005); Herold and Fields (2004); Kaiser and Craig (2005); Lepsinger and Lucia (1997); London and Smither (1995); Luthans and Peterson (2003); Nowack and Mashihi (2012); Smither et al. (1995); Waldman and Atwater (2001); Wohlers and London (1989) 14. The items should generate reliable data (e.g., sufficient number of items, sound statistical properties, such as internal consistency, good factor structure, etc.). Bracken and Timmreck (1999); Fleenor et al. (2008); Fletcher, Baldry, and Cunningham-Snell (1998); Penny (2003); Yammarino (2003)

Scales
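The internal-consistency criterion in practice 14 above can be made concrete. The sketch below computes Cronbach's alpha, a standard internal-consistency index, for a small set of invented ratings (rows are raters, columns are items); the data and function are illustrative only, not drawn from this chapter.

```python
# Illustrative internal-consistency check (Cronbach's alpha) for a
# 360 rating scale; the ratings below are invented example data.

def cronbach_alpha(ratings):
    """ratings: one row per rater, one column per item."""
    k = len(ratings[0])                         # number of items
    def pvar(xs):                               # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(pvar([row[i] for row in ratings]) for i in range(k))
    total_var = pvar([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - item_vars / total_var)

ratings = [
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.91
```

Values above roughly 0.70 are conventionally treated as adequate internal consistency, though that threshold is a rule of thumb rather than anything this chapter prescribes.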

15. The rating scale should make clear that performance is being evaluated. Antonioni (1996); Atwater and Waldman (1998); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Dai et al. (2010); DeNisi and Kluger (2000); Farh, Cannella, and Bedeian (1991); Fleenor et al. (2008); Greguras, Robie,
Schleicher, and Goff (2003); Harris, Smith, and Champagne (1995); Heidemeier and Moser (2009); Kanouse (1998); Maylett (2009); Nowack and Mashihi (2012); Peiperl (2001); Toegel and Conger (2003); van der Heijden and Nijhof (2004); Waldman et al. (1998); Westerman and Rosse (1997); Wimer and Nowack (1998) 16. The rating scales (e.g., types, levels, etc.) should be tailored to distinguish between levels of performance. Antonioni (1996); Atkins and Wood (2002); Bailey and Fletcher (2002); Bernardin et al. (1993); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken et al. (2001); Carson (2006); Dai et al. (2010); Eichinger and Lombardo (2004); Furnham and Stringfield (1994); Gillespie (2005); Herold and Fields (2004); B. J. Hoffman et al. (2012); Jelley and Goffin (2001); Johnson and Ferstl (1999); London and Beatty (1993); London et al. (1990); Luthans and Peterson (2003); Maylett (2009); Mount et al. (1998); Nowack and Mashihi (2012); Peiperl (2001); Salam et  al. (1997); Smither et  al. (1995); 3D Group (2013); van Hooft et al. (2006); Waldman and Atwater (2001); Woehr et al. (2005) 17. The rating scale should be clear and understandable to everyone involved (e.g., raters, ratees, managers, etc.). Antonioni (1996); Atkins and Wood (2002); Bailey and Fletcher (2002); Bernardin et al. (1993); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken et  al. (2001); Carson (2006); Craig and Hannum (2006); Dai et  al. (2010); Fleenor et  al. (2010); Furnham and Stringfield (1994); Gillespie (2005); Herold and Fields (2004); B. J. Hoffman et al. (2012); Jelley and Goffin (2001); Johnson and Ferstl (1999); London and Beatty (1993); Luthans and Peterson (2003); Maylett (2009); Mount et al. (1998); Nowack and Mashihi (2012); Peiperl (2001); Salam et al. (1997); Smither et al. (1995); 3D Group (2013); van Hooft et al. (2006); Waldman and Atwater (2001); Woehr et al. (2005) 18. Narrative comments should also be collected. 
Antonioni (1996); Bailey and Austin (2006); Bernardin and Beatty (1987); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Carson (2006); Fleenor et al. (2008, 2010); Garbett et al. (2007); Gillespie (2005); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); London et al. (1990); McEvoy and Buller (1987); Ng et al. (2011); Nowack (2009); Nowack and Mashihi (2012); Peiperl (2001); Pollack and Pollack (1996); Smither and Walker (2004); Smither et al. (1995); 3D Group (2013); Tornow (1993a); Vinson (1996); Waldman et al. (1998); Yukl and Lepsinger (1995)

Raters

19. Multiple rating sources (e.g., peers, subordinates, managers, customers) should be included, as appropriate. Albright and Levy (1995); Atkins and Wood (2002); Atwater et al. (1995, 1998, 2002, 2007); Bailey and Austin (2006); Bailey and Fletcher (2002); Bancroft et al. (1993);
Bernardin and Beatty (1987); Bernardin et al. (1993, 2012); Bracken (1994); Bracken and Rose (2011); Bracken et al. (2001); Brutus et al. (2006); Carson (2006); Church and Bracken (1997); Conway et al. (2001); Craig and Hannum (2006); DeNisi and Kluger (2000); J. D. Facteau and Craig (2001); Farh et al. (1991); Fleenor et al. (2008); Fletcher and Baldry (2000); Furnham and Stringfield (1994); Garbett et  al. (2007); Gillespie (2005); Greguras, Ford, and Brutus (2003); Greguras, Robie, et  al. (2003); Greller and Herold (1975); Guenole, Cockerill, Chamorro-​Premuzic, and Smillie (2011); Harris and Schaubroeck (1988); Heidemeier and Moser (2009); B. J. Hoffman, Bynum, and Gentry (2010); B. J. Hoffman and Woehr (2009); R. Hoffman (1995); Holzbach (1978); Johnson and Ferstl (1999); Lance, Hoffman, Gentry, and Baranik (2008); LeBreton, Burgess, Kaiser, Atchley, and James (2003); Lepsinger and Lucia (1997); London and Smither (1995); London et  al. (1990); Luthans and Peterson (2003); Manning et  al. (2009); McCauley and Moxley (1996); Metcalfe (1998); Mount, Barrick, and Strauss (1994); Mount et  al. (1998); Ng et  al. (2011); Nowack (2009); Nowack and Mashihi (2012); Peiperl (2001); Pollack and Pollack (1996); Sala and Dwight (2002); Salam et al. (1997); Seifert and Yukl (2010); Siegel (1982); Smither, Brett, and Atwater (2008); Stone and Stone (1984); Testa (2002); 3D Group (2013); Toegel and Conger (2003); Tornow (1993a); van der Heijden and Nijhof (2004); Vecchio and Anderson (2009); Vinson (1996); Waldman and Atwater (2001); Wohlers and London (1989); Yammarino (2003); Yammarino and Atwater (1993); Yammarino and Atwater (1997); Yukl and Lepsinger (1995) 20. Self-​ratings should also be included. Albright and Levy (1995); Antonioni (1996); Atkins and Wood (2002); Atwater et al. (1995, 1998, 2002, 2007); Atwater and Van Fleet (1997); Atwater and Waldman (1998); Bailey and Austin (2006); Bailey and Fletcher (2002); Bernardin et al. 
(1993); Campbell and Lee (1988); Cheung (1999); Church (1995); Fleenor et al. (2008, 2010); Fletcher and Baldry (2000); Flint (1999); Furnham and Stringfield (1994); Goffin and Anderson (2007); Harris and Schaubroeck (1988); Heidemeier and Moser (2009); R.  Hoffman (1995); Holzbach (1978); Johnson and Ferstl (1999); Lane and Herriot (1990); London and Beatty (1993); London and Smither (1995); Luthans and Peterson (2003); Metcalfe (1998); Morgan et al. (2005); Mount et al. (1994); Nowack (1992, 2009); Nowack and Mashihi (2012); Pollack and Pollack (1996); Reilly et al. (1996); Sala and Dwight (2002); Salam et al. (1997); Seifert and Yukl (2010); Shrauger and Kelly (1988); Shrauger and Terbovic (1976); Smither (2008); Smither, London, and Reilly (2005); Smither, London, and Richmond (2005); Smither et  al. (1995); 3D Group (2013); Toegel and Conger (2003); Tornow (1993a); van der Heijden and Nijhof (2004); Vecchio and Anderson (2009); Van Velsor, Taylor, and Leslie (1993); Williams and Johnson (2000); Williams and Levy (1992); Wimer and Nowack (1998); Wohlers and London (1989); Wohlers, Hall, and London (1993); Yammarino and Atwater (1993); Yammarino and Atwater (1997); Yukl and Lepsinger (1995)


21. Raters should be anonymous, but ratings may sometimes be nonanonymous, depending on the purpose of the process (e.g., when it is important to know that the feedback is from specific sources). Antonioni (1994, 1996); Atwater et  al. (2002, 2007); Atwater and Waldman (1998); Bancroft et al. (1993); Bernardin (1986); Bernardin and Beatty (1987); Bracken (1994); Bracken et  al. (1997, 2001); Bracken and Timmreck (1999); Carson (2006); Church and Bracken (1997); Eichinger and Lombardo (2004); Fleenor et  al. (2008); Garbett et al. (2007); Herold and Fields (2004); Heslin and Latham (2004); Kanouse (1998); Lepsinger and Lucia (1997); London and Beatty (1993); London et al. (1990, 1997); Luthans and Peterson (2003); McCarthy and Garavan (2007); Metcalfe (1998); Redman and Snape (1992); Robertson (2008); Rogers et al. (2002); Smither (2008); 3D Group (2013); van der Heijden and Nijhof (2004); Vinson (1996); Waldman et al. (1998); Waldman and Bowen (1998); Westerman and Rosse (1997); Wimer (2002); Yammarino and Atwater (1997) 22. Sufficiently large samples of raters (with high enough response rates) should be obtained for each source to ensure anonymity of raters and interrater reliability. Antonioni (1996); Atwater et  al. (1995, 1998, 2007); Atwater and Waldman (1998); Bernardin and Beatty (1987); Bernardin et al. (2012); Bozeman (1997); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken et al. (2001); Carson (2006); Church (1995); Church and Bracken (1997); Church, Rogelberg, and Waclawski (2000); Church and Waclawski (2001); Conway (1996); Conway et al. (2001); Dai et al. (2010); Fleenor et al. (2008); Fletcher et al. (1998); Greguras, Robie, et al. (2003); Hensel, Meijers, Leeden, and Kessels (2010); Hezlett (2008); Jellema, Visscher, and Scheerens (2006); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); London and Beatty (1993); London and Smither (1995); London and Wohlers (1991); London et al. 
(1990); Luthans and Peterson (2003); Maylett (2009); Metcalfe (1998); Mount et al. (1998); Nowack (2009); Nowack and Mashihi (2012); Pollack and Pollack (1996); Redman and Snape (1992); Robertson (2008); Scullen (1997); Seifert and Yukl (2010); Smither et al. (1995); Testa (2002); 3D Group (2013); Tornow (1993b); van Hooft et al. (2006); Vinson (1996); Waldman et al. (1998); Waldman and Bowen (1998); Westerman and Rosse (1997); Wimer and Nowack (1998); Yammarino (2003); Yukl and Lepsinger (1995) 23. Selection of raters within source should consider the opportunity to observe performance, skill in evaluating performance, credibility, motivation to provide accurate judgments of performance, and the avoidance of biasing factors or gaming the system (e.g., friendships, competitors for promotion, special interests, unexpected events, etc.). Albright and Levy (1995); Antonioni (1996); Atwater and Waldman (1998); Bernardin (1986); Bernardin and Beatty (1987); Bernardin et al. (1993, 2012); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken et al. (2001); Carson (2006); Cederblom and Lounsbury (1980); Church (1995); Conway (1996); Conway et al. (2001); Eichinger and Lombardo (2004); Fleenor et al. (2008, 2010); Flint (1999); Garbett et al. (2007); Ghorpade (2000); Hannum (2007); B. J. Hoffman et al. (2010); Jellema et al. (2006); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); Lewin and Zwany (1976); Maylett (2009); McCarthy and Garavan (2001); Metcalfe (1998); Nowack and Mashihi (2012); Redman and Snape (1992); Rogers et al. (2002); Sala and Dwight (2002); Smith and Fortunato (2008); Smither et al. (1995); Tornow (1993a); van Hooft et al. (2006); Vinson (1996); Waldman and Bowen (1998); Westerman and Rosse (1997); Wimer (2002); Woehr et al. (2005); Yammarino (2003); Yukl and Lepsinger (1995) 24. Selection of raters should follow a standardized process that is similar for everyone (with minimal potential for biased selection). Antonioni (1996); Atkins and Wood (2002); Bernardin et al. (2012); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken et al. (2001); Brutus et al. (2006); Fleenor et al. (2010); Fox, Ben-Nahum, and Yinon (1989); Garbett et al. (2007); Gillespie (2005); Jellema et al. (2006); Lewin and Zwany (1976); London et al. (1990); McCarthy and Garavan (2007); McEvoy and Buller (1987); Metcalfe (1998); Mount et al. (1998); Nowack and Mashihi (2012); Robertson (2008); Rogers et al. (2002); Seifert and Yukl (2010); 3D Group (2013); Wimer and Nowack (1998); Yukl and Lepsinger (1995) 25. Ratees should have input, but there should also be oversight in the selection of raters (e.g., by manager, HR, etc.) to ensure consistency and adherence to the correct procedures. Antonioni (1996); Atkins and Wood (2002); Bernardin and Beatty (1987); Bernardin et al. (2012); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken et al. (2001); Brutus and Derayeh (2002); Carson (2006); Fleenor et al.
(2008, 2010); Flint (1999); Gillespie (2005); Lewin and Zwany (1976); Maylett (2009); Nowack (2009); Nowack and Mashihi (2012); Redman and Snape (1992); Rogers et al. (2002); Seifert and Yukl (2010); 3D Group (2013); Toegel and Conger (2003) 26. When necessary, there should be statistical adjustments or other control for outliers and average score differences by various factors (e.g., rating source, organizational unit, etc.). Atwater and Waldman (1998); Bernardin and Beatty (1987); Bracken and Timmreck (1999); Ghorpade (2000); Lepsinger and Lucia (1997); McEvoy and Buller (1987); Ng et al. (2011); Nowack and Mashihi (2012)

Administration
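One way to implement the adjustment practice 26 above describes, sketched under the assumption that source-level leniency should be removed before comparing scores: standardize each rating against its own source's mean and standard deviation. The sources and values below are invented.

```python
# Standardize ratings within each rating source so that a systematically
# lenient source no longer shifts cross-source comparisons.
# All values are invented example data.
from statistics import mean, stdev

ratings_by_source = {
    "peers":        [4.1, 3.8, 4.5, 3.9],
    "subordinates": [4.6, 4.8, 4.4, 4.7],  # noticeably more lenient
}

def standardize(scores):
    m, s = mean(scores), stdev(scores)
    return [round((x - m) / s, 2) for x in scores]

adjusted = {src: standardize(s) for src, s in ratings_by_source.items()}
print(adjusted)
```

Full standardization discards the absolute level of the ratings; mean-centering alone (subtracting each source's mean) is a gentler alternative when level information should be preserved.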

27. Standardized procedures should be used for administration to help ensure reliability. Bernardin and Beatty (1987); Bracken and Timmreck (1999); Bracken et al. (2001); Church and Bracken (1997); Craig and Hannum (2006); Fleenor et al. (2008); Gillespie (2005); Heslin and Latham (2004); R. Hoffman (1995); Johnson and Ferstl (1999);
Kanouse (1998); London and Beatty (1993); London et al. (1990); McEvoy and Buller (1987); Tornow (1993a) 28. The 360 process should be conducted routinely, usually on an annual basis, and near in time to when the data are used for personnel decisions (e.g., pay increases). Antonioni (1996); Atwater et al. (2007); Bancroft et al. (1993); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Bracken et al. (2001); Brutus and Derayeh (2002); Brutus et al. (2006); Carson (2006); DeNisi and Kluger (2000); London and Beatty (1993); London and Smither (1995); London et al. (1990, 1997); McEvoy and Buller (1987); Nowack (2009); Pollack and Pollack (1996); Reilly et al. (1996); Seifert and Yukl (2010); Smither (2008); Smither et al. (1995); 3D Group (2013); Wimer and Nowack (1998) 29. There should be a follow-up, such as a midyear or other intermediate review, to ensure progress is being made and to provide guidance. Antonioni (1996); Atwater et al. (2007); Bracken (1994); Bracken and Timmreck (1999); Church (1995); Church and Waclawski (2001); Fleenor et al. (2008); London and Beatty (1993); London et al. (1990); McEvoy and Buller (1987); Nowack (2009); Reilly et al. (1996); Smither (2008); Smither et al. (2008); 3D Group (2013); Walker and Smither (1999); Westerman and Rosse (1997); Wimer (2002); Wimer and Nowack (1998); Yukl and Lepsinger (1995) 30. Administration and use of the process should be monitored by HR. Bernardin (1986); Bernardin and Beatty (1987); Fleenor et al. (2008); Ghorpade (2000) 31. The feedback and all related data should be kept confidential. Bracken and Timmreck (1999); Church (1995); Church and Waclawski (2001); Fleenor et al. (2008); Ghorpade (2000); McCarthy and Garavan (2001); Pollack and Pollack (1996); Testa (2002); Wimer (2002); Wimer and Nowack (1998) 32. The process should not be unduly burdensome in terms of time, costs, and so on. Bracken and Timmreck (1999); Bracken et al.
(2001); Brutus and Derayeh (2002); Fleenor et al. (2008); Westerman and Rosse (1997)

Training/Instruction

33. Raters should be trained or well instructed. Antonioni (1996); Atkins and Wood (2002); Atwater et al. (2002, 2007); Atwater and Waldman (1998); Bernardin (1986); Bracken (1994); Bracken et al. (1997, 2001); Bracken and Rose (2011); Bracken and Timmreck (1999); Carson (2006); Church and Bracken (1997); Diefendorff, Silverman, and Greguras (2005); Fleenor et al. (2008, 2010); Ghorpade (2000); Gillespie (2005); Guenole et al. (2011); Heslin and Latham (2004); Hezlett (2008); R. Hoffman (1995); Kanouse (1998); Lepsinger and Lucia
(1997); London and Beatty (1993); London et al. (1997); McCarthy and Garavan (2007); Ng et al. (2011); Nowack (1992); Nowack and Mashihi (2012); Peiperl (2001); Pollack and Pollack (1996); Redman and Snape (1992); Robert and Shipper (1998); Rogers et al. (2002); 3D Group (2013); Waldman and Atwater (2001); Waldman et al. (1998); Westerman and Rosse (1997); Yammarino and Atwater (1997); Yukl and Lepsinger (1995) 34. Employees receiving the feedback should be trained or well instructed. Antonioni (1996); Atwater et al. (2002, 2007); Atwater and Waldman (1998); Bancroft et al. (1993); Bracken (1994); Bracken et al. (1997, 2001); Bracken and Timmreck (1999); Church and Bracken (1997); Fleenor et al. (2008); R. Hoffman (1995); Kanouse (1998); London and Beatty (1993); London et al. (1990, 1997); Luthans and Peterson (2003); McCarthy and Garavan (2001); Metcalfe (1998); Peiperl (2001); Pollack and Pollack (1996); Robert and Shipper (1998); Rogers et al. (2002); Seifert et al. (2003); Smither (2008); Smither, London, and Reilly (2005); 3D Group (2013); Toegel and Conger (2003); Tornow (1993b); Tyson and Ward (2004); van der Heijden and Nijhof (2004); Waldman and Atwater (2001); Westerman and Rosse (1997); Yammarino and Atwater (1997); Yukl and Lepsinger (1995) 35. Managers using the 360 results should be trained or well instructed. Antonioni (1996); Atwater et al. (2002); Bracken (1994); Bracken et al. (1997); Bracken and Timmreck (1999); Carson (2006); Fleenor et al. (2008); R. Hoffman (1995); London and Beatty (1993); London et al. (1997); Nowack and Mashihi (2012); O’Reilly and Furth (1994); Peiperl (2001); Rogers et al. (2002); 3D Group (2013); Wimer (2002); Yammarino and Atwater (1997)

Interpretation of Feedback

36. Feedback should be detailed (including statistics showing central tendency and dispersion), and there should be standardized guidance on interpreting the feedback (e.g., instructions, graphics, etc.). Antonioni (1996); Atkins and Wood (2002); Atwater and Brett (2006); Atwater et al. (2007); Bernardin (1986); Bernardin and Beatty (1987); Bernardin et al. (1993); Bracken (1994); Bracken et al. (1997, 2001); Bracken and Rose (2011); Bracken and Timmreck (1999); Brutus et al. (2006); Church and Waclawski (2001); DeNisi and Kluger (2000); Fleenor et  al. (2008, 2010); Gillespie (2005); Hezlett (2008); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); London and Beatty (1993); London and Smither (1995); London et al. (1990); Luthans and Peterson (2003); Maylett (2009); McEvoy and Buller (1987); Morgan et al. (2005); Mount et al. (1998); Nowack (2009); Nowack and Mashihi (2012); Pollack and Pollack (1996); Reilly et al. (1996); Robertson (2008); Seifert et al. (2003); Smither (2008); 3D Group (2013); Vinson (1996); Waldman and Atwater (2001); Westerman and Rosse (1997); Yammarino and Atwater (1997); Yukl and Lepsinger (1995)
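The central tendency and dispersion statistics that practice 36 calls for can be produced with a simple per-item, per-source summary; the item name, sources, and scores below are invented.

```python
# Summarize 360 ratings per item and per source with a measure of central
# tendency (mean) and dispersion (standard deviation). Invented data.
from statistics import mean, stdev

ratings = [  # (source, item, score)
    ("peer", "communicates clearly", 4),
    ("peer", "communicates clearly", 5),
    ("peer", "communicates clearly", 3),
    ("subordinate", "communicates clearly", 5),
    ("subordinate", "communicates clearly", 4),
    ("subordinate", "communicates clearly", 5),
]

summary = {}
for source, item, score in ratings:
    summary.setdefault((source, item), []).append(score)

for (source, item), scores in sorted(summary.items()):
    print(f"{item} [{source}]: mean={mean(scores):.2f} "
          f"sd={stdev(scores):.2f} n={len(scores)}")
```

Reporting the rater count alongside mean and spread also supports the anonymity-related sample-size checks in practice 22.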


37. There should be coaching of employees on the use of 360 (e.g., by manager or trainer). Antonioni (1996); Atwater and Brett (2006); Atwater et  al. (2002, 2007); Bancroft et al. (1993); Bracken (1994); Bracken and Rose (2011); Bracken and Timmreck (1999); Brett and Atwater (2001); Brutus and Derayeh (2002); Brutus et  al. (2006); Carson (2006); Church (1995); Church and Waclawski (2001); Craig and Hannum (2006); Dai et al. (2010); DeNisi and Kluger (2000); Drew (2009); Fleenor et al. (2008); Fletcher, Taylor, and Glanfield (1996); Garbett et al. (2007); Gillespie (2005); Heslin and Latham (2004); Hezlett (2008); R. Hoffman (1995); Lepsinger and Lucia (1997); London and Beatty (1993); London et  al. (1990, 1997); Luthans and Peterson (2003); Manning et al. (2009); McCarthy and Garavan (2001); McCarthy and Garavan (2007); McCauley and Moxley (1996); McEvoy and Buller (1987); Metcalfe (1998); Morgan et al. (2005); Nowack (2009); Nowack and Mashihi (2012); Peiperl (2001); Pollack and Pollack (1996); Robertson (2008); Rogers et al. (2002); Seifert and Yukl (2010); Seifert et al. (2003); Smither (2008); Smither et  al. (2008); Smither, London, Flautt, Vargas, and Kucine (2003); Testa (2002); 3D Group (2013); Tyson and Ward (2004); Vinson (1996); Waldman and Atwater (2001); Wimer (2002); Wimer and Nowack (1998); Yukl and Lepsinger (1995) 38. Interpretation should consider the individual differences of employees in responses to feedback. Antonioni (1996); Atwater et al. (2002, 2007); Atwater and Van Fleet (1997); Atwater, Waldman, Atwater, and Cartier (2000); Bailey and Austin (2006); Beyer (1990); Bowen, Swim, and Jacobs (2000); Brett and Atwater (2001); Church and Bracken (1997); Church and Waclawski (1998); Craig and Hannum (2006); Drew (2009); Fleenor et al. (2010); Fletcher and Baldry (2000); Fletcher et al. (1996); Funderburg and Levy (1997); Goffin and Anderson (2007); Guenole et al. (2011); Hensel et al. 
(2010); Heslin and Latham (2004); Lepsinger and Lucia (1997); London and Smither (1995); London and Smither (2002); London and Wohlers (1991); London et al. (1990); Luthans and Peterson (2003); McCarthy and Garavan (2007); McEvoy and Buller (1987); Nilsen and Campbell (1993); Nowack (2009); Nowack and Mashihi (2012); Ostroff, Atwater, and Feinberg (2004); Shrauger and Kelly (1988); Shrauger and Terbovic (1976); Smither (2008); Smither, London, and Reilly (2005); Smither, London, and Richmond (2005); Thomason et  al. (2011); Tornow (1993a, 1993b); Vecchio and Anderson (2009); Van Velsor et  al. (1993); Waldman (1997); Waldman and Atwater (2001); Waldman and Bowen (1998); Williams and Johnson (2000); Williams and Levy (1992); Wohlers et al. (1993); Wohlers and London (1989); Yammarino and Atwater (1993); Yammarino and Atwater (1997)


39. Results should be interpreted with consideration of potential biasing factors (e.g., types of job, business conditions, opportunity to perform, unexpected events, other constraints, etc.). Antonioni (1996); Bernardin and Beatty (1987); Bernardin et  al. (1993); Herold and Fields (2004); Johnson and Ferstl (1999); Metcalfe (1998); Nowack (2009); Toegel and Conger (2003); Yammarino and Atwater (1997) 40. The meaningfulness of differences in feedback from the different sources and between self and others should be interpreted. Albright and Levy (1995); Antonioni (1996); Atkins and Wood (2002); Atwater et  al. (1995, 1998, 2002, 2007); Atwater and Van Fleet (1997); Atwater and Waldman (1998); Bailey and Austin (2006); Bailey and Fletcher (2002); Baril, Ayman, and Palmiter (1994); Bass and Yammarino (1991); Bernardin and Beatty (1987); Beyer (1990); Bowen et al. (2000); Bozeman (1997); Brett and Atwater (2001); Campbell and Lee (1988); Carless, Mann, and Wearing (1998); Cheung (1999); Church and Bracken (1997); Church and Waclawski (1998); Conway (1996); Conway et  al. (2001); Craig and Hannum (2006); Eichinger and Lombardo (2004); C.  L. Facteau, Facteau, Schoel, Russell, and Poteet (1998); J.  D. Facteau and Craig (2001); Farh et  al. (1991); Farh and Dobbins (1989); Fleenor, McCauley, and Brutus (1996); Fleenor et al. (2008, 2010); Flint (1999); Fox et al. (1989); Furnham and Stringfield (1994); Furnham and Stringfield (1998); Garbett et al. (2007); Gioia and Sims (1985); Goffin and Anderson (2007); Greguras, Ford, et al. (2003); Greguras, Robie, et al. (2003); Greller and Herold (1975); Hannum (2007); Harris and Schaubroeck (1988); Hazucha, Hezlett, and Schneider (1993); Heidemeier and Moser (2009); Herold and Fields (2004); B. J. Hoffman et al. (2010); B. J. Hoffman and Woehr (2009); Holzbach (1978); Jellema et al. (2006); Johnson and Ferstl (1999); Kaiser and Craig (2005); Lance et al. (2008); LeBreton et al. 
(2003); Lepsinger and Lucia (1997); Levy, Cawley, and Foti (1998); London and Beatty (1993); London and Smither (1995); London and Wohlers (1991); London et al. (1997); Luthans and Peterson (2003); Maurer, Raju, and Collins (1998); Maylett (2009); McEvoy and Buller (1987); Metcalfe (1998); Morgan et al. (2005); Mount et al. (1994, 1998); Ng et al. (2011); Nilsen and Campbell (1993); Nowack (1992, 2009); Nowack and Mashihi (2012); Ostroff et al. (2004); Penny (2003); Pollack and Pollack (1996); Riggio and Cole (1992); Salam et al. (1997); Schrader and Steiner (1996); Scullen (1997); Seifert and Yukl (2010); Seifert et al. (2003); Shrauger and Kelly (1988); Siegel (1982); Smither (2008); Smither et al. (2008); Smither, London, and Reilly (2005); Smither et  al. (1995); Stone and Stone (1984, 1985); Testa (2002); Thomason et  al. (2011); Tornow (1993a, 1993b); van der Heijden and Nijhof (2004); van Hooft et al. (2006); Van Velsor et al. (1993); Varela and Pemeaux (2008); Vecchio and Anderson (2009); Vinson (1996); Viswesvaran et al. (2002); Waldman and Atwater (2001); Williams and Johnson (2000); Williams and Levy (1992); Woehr et al. (2005); Wohlers et al. (1993); Wohlers and London (1989); Yammarino (2003); Yammarino and Atwater (1993); Yammarino and Atwater (1997); Yukl and Lepsinger (1995)
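For the self–other comparisons practice 40 describes, one hedged heuristic (assumed here, not prescribed by this chapter) is to scale the gap between the self-rating and the mean of the other raters by the standard error of that mean; large scaled gaps merit discussion, small ones are plausibly noise. The data and cutoff below are invented.

```python
# Gauge whether a self-other rating gap is meaningful by scaling it
# against the standard error of the other-raters' mean. Invented data.
from statistics import mean, stdev

def self_other_gap(self_rating, other_ratings):
    m = mean(other_ratings)
    se = stdev(other_ratings) / len(other_ratings) ** 0.5
    gap = self_rating - m
    return gap, gap / se

gap, scaled = self_other_gap(4.8, [3.5, 3.8, 3.2, 3.6, 3.4])
print(round(gap, 2), round(scaled, 1))  # a large self-overrating
# A rough cutoff such as |scaled| > 2 flags gaps worth discussing.
```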


41. Additional assistance should be provided in the interpretation of negative feedback and large self–other differences. Albright and Levy (1995); Antonioni (1996); Atkins and Wood (2002); Atwater and Brett (2006); Atwater et al. (1995, 1998, 2002, 2007); Atwater and Van Fleet (1997); Atwater and Waldman (1998); Bailey and Austin (2006); Beyer (1990); Bowen et al. (2000); Brett and Atwater (2001); Campbell and Lee (1988); Carless et al. (1998); Carson (2006); Cheung (1999); Diefendorff et al. (2005); C. L. Facteau et al. (1998); Fleenor et al. (1996, 2008, 2010); Furnham and Stringfield (1994); Furnham and Stringfield (1998); Garbett et al. (2007); Gioia and Sims (1985); Hazucha et al. (1993); Heidemeier and Moser (2009); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); Levy et al. (1998); London and Beatty (1993); London and Smither (1995); London et al. (1990); Luthans and Peterson (2003); McEvoy and Buller (1987); Metcalfe (1998); Morgan et al. (2005); Ng et al. (2011); Nilsen and Campbell (1993); Nowack (1992, 2009); Nowack and Mashihi (2012); Schrader and Steiner (1996); Smither (2008); Smither, London, and Reilly (2005); Stone and Stone (1984, 1985); Tornow (1993a, 1993b); van der Heijden and Nijhof (2004); Vecchio and Anderson (2009); Van Velsor et al. (1993); Vinson (1996); Waldman and Atwater (2001); Yammarino and Atwater (1993); Yammarino and Atwater (1997); Yukl and Lepsinger (1995)

42. Normative information should be provided to help interpret the feedback. Atwater and Brett (2006); Atwater et al. (2007); Atwater and Van Fleet (1997); Bernardin and Beatty (1987); Bernardin et al. (1993); Bracken and Timmreck (1999); Bracken et al. (2001); DeNisi and Kluger (2000); Drew (2009); Fleenor et al. (2008, 2010); Herold and Fields (2004); Johnson and Ferstl (1999); London and Beatty (1993); London, Smither, and Adsit (1997); London et al. (1990); Ng et al. (2011); Nowack and Mashihi (2012); Smither et al. (1995); 3D Group (2013); Yukl and Lepsinger (1995)

43. Feedback should include both an absolute performance evaluation (e.g., compared to expectations) and a relative performance evaluation (e.g., compared to other employees). Antonioni (1996); Heidemeier and Moser (2009); Kane and Lawler (1978); London and Smither (1995); Maylett (2009); Nowack and Mashihi (2012)

44. Objective performance data should also be considered in the overall evaluation of performance, if applicable (e.g., sales, profits, productivity, errors). Atwater et al. (1998); Bernardin (1986); Bernardin and Beatty (1987); Bracken and Timmreck (1999); Eichinger and Lombardo (2004); Farh and Dobbins (1989); Gillespie (2005); Hannum (2007); C. C. Hoffman, Nathan, and Holden (1991); London and Smither (1995); London et al. (1990); Luthans and Peterson (2003); Ostroff et al. (2004); Sala and Dwight (2002); Schrader and Steiner (1996); Van Velsor et al. (1993); Waldman and Bowen (1998)

34 // 360 for Decision-Making

45. Narrative comments should normally be made anonymous if needed and otherwise made more useful for feedback purposes (e.g., by summarizing, interpreting, or eliminating identifying information). Bernardin and Beatty (1987); Bracken (1994); Church and Waclawski (2001); Gillespie (2005); London and Beatty (1993); London et al. (1990); Nowack and Mashihi (2012)

46. Narrative comments should be interpreted along with the ratings and other information on the employee's performance. Bracken (1994); Bracken and Timmreck (1999); Bracken et al. (2001); Johnson and Ferstl (1999); London et al. (1990); Nowack (2009); Nowack and Mashihi (2012); Smither and Walker (2004); Smither et al. (1995); Vinson (1996); Waldman et al. (1998)

47. Employees receiving the feedback should be allowed to suggest interpretations of the feedback before the performance review is finalized. Flint (1999); Luthans and Peterson (2003); Smither (2008); Yukl and Lepsinger (1995)

48. In some situations, it is useful for ratees to meet with raters (e.g., manager, subordinates, peers) to help interpret the results and create action plans. Antonioni (1996); Atwater et al. (2002, 2007); Atwater and Waldman (1998); Bancroft et al. (1993); Bracken et al. (1997, 2001); Bracken and Rose (2011); Bracken and Timmreck (1999); Fleenor et al. (2008); Flint (1999); Ghorpade (2000); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); London and Beatty (1993); London et al. (1990, 1997); Metcalfe (1998); Morgan et al. (2005); O'Reilly and Furth (1994); Pollack and Pollack (1996); Rogers et al. (2002); Smither (2008); Smither, London, Reilly, Flautt, Vargas, and Kucine (2004); Smither et al. (1995); Waldman and Atwater (2001); Walker and Smither (1999)

Development

49. The process should be used for performance development as well as for performance evaluation, and resources for development should be provided. Antonioni (1994, 1996); Atkins and Wood (2002); Atwater et al. (2002, 2007); Atwater and Waldman (1998); Bailey and Austin (2006); Bailey and Fletcher (2002); Bancroft et al. (1993); Bozeman (1997); Bracken (1994); Bracken et al. (1997, 2001); Bracken and Timmreck (1999); Brutus et al. (2006); Carson (2006); Church (1995); Church and Bracken (1997); Church and Waclawski (2001); Craig and Hannum (2006); Dai et al. (2010); Drew (2009); Farh et al. (1991); Fleenor et al. (2008, 2010); Garbett et al. (2007); Gillespie (2005); Hazucha et al. (1993); Heidemeier and Moser (2009); Hensel et al. (2010); Herold and Fields (2004); Hezlett (2008); R. Hoffman (1995); Johnson and Ferstl (1999); Lepsinger and Lucia (1997); London and Beatty (1993); London and Smither (1995); London et al. (1990, 1997); Luthans and Peterson (2003); Maylett (2009); McCarthy and Garavan (2001); McCarthy and Garavan (2007); McCauley and Moxley (1996); McEvoy and Buller (1987); Metcalfe (1998); Morgan et al. (2005); Mount et al. (1998); Ng et al. (2011); Pollack and Pollack (1996); Robertson (2008); Rogers et al. (2002); Seifert et al. (2003); Smither, London, and Reilly (2005); Testa (2002); 3D Group (2013); Tornow (1993a, 1993b); Tyson and Ward (2004); van Hooft et al. (2006); Waldman et al. (1998); Walker and Smither (1999); Westerman and Rosse (1997); Wimer (2002); Wimer and Nowack (1998)

50. The performance evaluation process should usually include a plan for future performance, especially if performance improvement is needed, preferably with the participation of the employee to ensure commitment. Antonioni (1996); Atwater et al. (2002, 2007); Bailey and Austin (2006); Bancroft et al. (1993); Bracken (1994); Bracken et al. (1997); Bracken and Rose (2011); Brutus et al. (2006); Carson (2006); Dai et al. (2010); Drew (2009); Fleenor et al. (2008); Flint (1999); Gillespie (2005); Hazucha et al. (1993); Herold and Fields (2004); Hezlett (2008); R. Hoffman (1995); Lepsinger and Lucia (1997); London et al. (1990); Luthans and Peterson (2003); McCarthy and Garavan (2001); McCauley and Moxley (1996); Metcalfe (1998); Morgan et al. (2005); Nowack (2009); Nowack and Mashihi (2012); O'Reilly and Furth (1994); Peiperl (2001); Pollack and Pollack (1996); Redman and Snape (1992); Rogers et al. (2002); Seifert et al. (2003); Smither, London, and Reilly (2005); Smither et al. (2003); Testa (2002); 3D Group (2013); Vinson (1996); Walker and Smither (1999); Westerman and Rosse (1997); Yukl and Lepsinger (1995)

51. The performance evaluation process should usually include a goal-setting component, preferably with the participation of the employee to ensure commitment. Antonioni (1996); Atwater et al. (2007); Atwater and Waldman (1998); Bancroft et al. (1993); Bernardin et al. (2012); Brutus, London, and Martineau (1999); Carson (2006); Church (1995); Dai et al. (2010); DeNisi and Kluger (2000); Fleenor et al. (2008); Hezlett (2008); Lepsinger and Lucia (1997); London and Smither (1995); Maylett (2009); McCarthy and Garavan (2001); Nowack (2009); Nowack and Mashihi (2012); Reilly et al. (1996); Seifert et al. (2003); Smither et al. (2003); Smither, London, and Reilly (2005); Waldman et al. (1998)

52. The performance evaluation process should usually include a discussion of, and possibly a plan for, career development. Carson (2006); Hazucha et al. (1993); R. Hoffman (1995); Metcalfe (1998); Wohlers et al. (1993)

Review

53. The performance evaluation should be reviewed with the next higher level of management to get input on performance, ensure the process is administered consistently, gain approval, and so on. Although noted in only one literature source (3D Group, 2013), this best practice is common and expected, but an unnecessary topic for research.

54. The performance evaluation should be documented, including the ratings, narrative comments, action plans, dates of meetings, and so on. This is an obvious best practice for any HR data, but an unnecessary subject for research.


55. An appeal mechanism should be available for incumbents to raise concerns to a higher level or outside authority if needed. Barrett and Kernan (1987); Cascio and Bernardin (1981); Catano, Darr, and Campbell (2007); DeNisi (2011); Folger, Konovsky, and Cropanzano (1992); Gilliland and Langdon (1998); Grote (2000); Kleiman and Durham (1981); Kline and Sulsky (2009); Latham, Almost, Mann, and Moore (2005); Martin, Bartol, and Kehoe (2000); Martin, Bartol, and Levine (1986); Mobley (1982)

56. The process itself should be reviewed on some regular basis to determine if it is effective and to identify improvements. Bracken and Timmreck (1999); Church and Waclawski (2001); DeNisi and Kluger (2000); Fleenor et al. (2008); Rogers et al. (2002); Wimer and Nowack (1998)

APPENDIX: REFERENCES FOR BOX 3.1

Albright, M. D., & Levy, P. E. (1995). The effects of source credibility and performance rating discrepancy on reactions to multiple raters. Journal of Applied Social Psychology, 25, 577–600.
Antonioni, D. (1994). The effects of feedback accountability on upward appraisal ratings. Personnel Psychology, 47, 349–356.
Antonioni, D. (1996). Designing an effective 360-degree appraisal feedback process. Organizational Dynamics, 25(2), 24–38.
Atkins, P. W., & Wood, R. E. (2002). Self- versus others' ratings as predictors of assessment center ratings: Validation evidence for 360-degree feedback programs. Personnel Psychology, 55, 871–904.
Atwater, L., & Brett, J. (2006). Feedback format: Does it influence manager's reactions to feedback? Journal of Occupational and Organizational Psychology, 79, 517–532.
Atwater, L. E., Brett, J. F., & Charles, A. C. (2007). Multisource feedback: Lessons learned and implications for practice. Human Resource Management, 46(2), 285–307.
Atwater, L. E., Ostroff, C., Yammarino, F. J., & Fleenor, J. W. (1998). Self–other agreement: Does it really matter? Personnel Psychology, 51, 577–598.
Atwater, L., Roush, P., & Fischthal, A. (1995). The influence of upward feedback on self- and follower ratings of leadership. Personnel Psychology, 48, 35–59.
Atwater, L. E., & Van Fleet, D. D. (1997). Another ceiling? Can males compete for traditionally female jobs? Journal of Management, 23, 603–626.
Atwater, L., & Waldman, D. (1998). Accountability in 360 degree feedback. HR Magazine, 43(6), 1–7.
Atwater, L. E., Waldman, D. A., Atwater, D., & Cartier, P. (2000). An upward feedback field experiment: Supervisors' cynicism, reactions, and commitment to subordinates. Personnel Psychology, 53, 275–297.
Atwater, L. E., Waldman, D. A., & Brett, J. F. (2002). Understanding and optimizing multisource feedback. Human Resource Management, 41(2), 193–208.
Bailey, C., & Austin, M. (2006). 360 degree feedback and developmental outcomes: The role of feedback characteristics, self-efficacy and importance of feedback dimensions to focal managers' current role. International Journal of Selection and Assessment, 14(1), 51–66.


Bailey, C., & Fletcher, C. (2002). The impact of multiple source feedback on management development: Findings from a longitudinal study. Journal of Organizational Behavior, 23, 853–867.
Bancroft, E., Friedman, L., Gyr, H., Halling, C., Moravec, M., & Stoneman, K. (1993). A 21st century communication tool. HR Magazine, 38(7), 77–81.
Baril, G. L., Ayman, R., & Palmiter, D. J. (1994). Measuring leader behavior: Moderators of discrepant self and subordinate descriptions. Journal of Applied Social Psychology, 24(1), 82–94.
Barrett, G. V., & Kernan, M. C. (1987). Performance appraisal and terminations: A review of court decisions since Brito v. Zia with implications for personnel practices. Personnel Psychology, 40, 489–503.
Bass, B. M., & Yammarino, F. J. (1991). Congruence of self and others' leadership ratings of naval officers for understanding successful performance. Applied Psychology: An International Review, 40(4), 437–454.
Bernardin, J. H. (1986). Subordinate appraisal: A valuable source of information about managers. Human Resource Management, 25(3), 421–439.
Bernardin, J. H., & Beatty, R. W. (1987). Can subordinate appraisals enhance managerial productivity? Sloan Management Review, 28(4), 63–73.
Bernardin, J. H., Dahmus, S. A., & Redmon, G. (1993). Attitudes of first-line supervisors toward subordinate appraisals. Human Resource Management, 32(2&3), 315–324.
Bernardin, J. H., Konopaske, R., & Hagan, C. M. (2012). A comparison of adverse impact levels based on top-down, multisource, and assessment center data: Promoting diversity and reducing legal challenges. Human Resource Management, 51(3), 313–341.
Beyer, S. (1990). Gender differences in the accuracy of self-evaluations of performance. Journal of Personality and Social Psychology, 59(5), 960–970.
Bowen, C.-C., Swim, J. K., & Jacobs, R. R. (2000). Evaluating gender biases on actual job performance of real people: A meta-analysis. Journal of Applied Social Psychology, 30(10), 2194–2215.
Bozeman, D. P. (1997). Interrater agreement in multi-source performance appraisal: A commentary. Journal of Organizational Behavior, 18, 313–316.
Bracken, D. W. (1994). Straight talk about multirater feedback. Training & Development, 48(9), 44–51.
Bracken, D. W., Dalton, M. A., Jako, R. A., McCauley, C. D., Pollman, V. A., & Hollenbeck, G. P. (1997). Should 360-degree feedback be used only for developmental purposes? Greensboro, NC: Center for Creative Leadership.
Bracken, D. W., & Rose, D. S. (2011). When does 360-degree feedback create behavior change? And how would we know it when it does? Journal of Business and Psychology, 26, 183–192.
Bracken, D. W., & Timmreck, C. W. (1999). Guidelines for multisource feedback when used for decision making. The Industrial-Organizational Psychologist, 36(4), 64–74.
Bracken, D. W., Timmreck, C. W., Fleenor, J. W., & Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 40(1), 3–20.
Brett, J. F., & Atwater, L. E. (2001). 360-degree feedback: Accuracy, reactions, and perceptions of usefulness. Journal of Applied Psychology, 86, 930–942.


Brutus, S., & Derayeh, M. (2002). Multisource assessment programs in organizations: An insider's perspective. Human Resource Development Quarterly, 13(2), 187–202.
Brutus, S., Derayeh, M., Fletcher, C., Bailey, C., Velazquez, P., Shi, K., et al. (2006). Internationalization of multi-source feedback systems: A six-country exploratory analysis of 360-degree feedback. The International Journal of Human Resource Management, 17(11), 1888–1906.
Brutus, S., & Facteau, J. (2003). Short, simple, and specific: The influence of item design characteristics in multi-source assessment contexts. International Journal of Selection and Assessment, 11(4), 313–325.
Brutus, S., London, M., & Martineau, J. (1999). The impact of 360-degree feedback on planning for career development. Journal of Management Development, 18(8), 676–693.
Campbell, D. J., & Lee, C. (1988). Self-appraisal in performance evaluation: Development versus evaluation. Academy of Management Review, 13(2), 302–314.
Carless, S. A., Mann, L., & Wearing, A. J. (1998). Leadership, managerial performance and 360-degree feedback. Applied Psychology: An International Review, 47(4), 481–496.
Carson, M. (2006). Saying it like it isn't: The pros and cons of 360-degree feedback. Business Horizons, 49, 395–402.
Cascio, W. F., & Bernardin, J. H. (1981). Implications of performance appraisal litigation for personnel decisions. Personnel Psychology, 34, 211–226.
Catano, V. M., Darr, W., & Campbell, C. A. (2007). Performance appraisal of behavior-based competencies: A reliable and valid procedure. Personnel Psychology, 60, 201–230.
Cederblom, D., & Lounsbury, J. W. (1980). An investigation of user acceptance of peer evaluations. Personnel Psychology, 33, 567–579.
Cheung, G. W. (1999). Multifaceted conceptions of self–other ratings disagreement. Personnel Psychology, 52, 1–36.
Church, A. H. (1995). First-rate multirater feedback. Training & Development, 49(8), 42–43.
Church, A. H., & Bracken, D. W. (1997). Advancing the state of the art of 360-degree feedback. Group & Organization Management, 22(2), 149–161.
Church, A. H., Rogelberg, S. G., & Waclawski, J. (2000). Since when is no news good news? The relationship between performance and response rates in multirater feedback. Personnel Psychology, 53, 435–451.
Church, A. H., & Waclawski, J. (1998). The relationship between individual personality orientation and executive leadership behavior. Journal of Occupational and Organizational Psychology, 71, 99–125.
Church, A. H., & Waclawski, J. (2001). A five-phase framework for designing a successful multisource feedback system. Consulting Psychology Journal: Practice and Research, 53(2), 82–95.
Conway, J. M. (1996). Analysis and design of multitrait-multirater performance appraisal studies. Journal of Management, 22, 139–162.
Conway, J. M., Lombardo, K., & Sanders, K. C. (2001). A meta-analysis of incremental validity and nomological networks for subordinate and peer rating. Human Performance, 14(4), 267–303.


Craig, B. S., & Hannum, K. (2006). Research update: 360-degree performance assessment. Consulting Psychology Journal: Practice and Research, 58(2), 117–122.
Dai, G., De Meuse, K. P., & Peterson, C. (2010). Impact of multi-source feedback on leadership competency development: A longitudinal field study. Journal of Managerial Issues, 22(2), 197–219.
DeNisi, A. S. (2011). Managing performance to change behavior. Journal of Organizational Behavior Management, 31, 262–276.
DeNisi, A. S., & Kluger, A. N. (2000). Feedback effectiveness: Can 360-degree appraisals be improved? Academy of Management Executive, 14(1), 129–139.
Diefendorff, J. M., Silverman, S. B., & Greguras, G. J. (2005). Measurement equivalence and multisource ratings for non-managerial positions: Recommendations for research and practice. Journal of Business and Psychology, 19(3), 399–425.
Dominick, P. G., Reilly, R. R., & McGourty, J. W. (1997). The effects of peer feedback on team member behavior. Group & Organization Management, 22(4), 508–520.
Drew, G. (2009). A "360" degree view for individual leadership development. Journal of Management Development, 28(7), 581–592.
Eichinger, R. W., & Lombardo, M. M. (2004). Patterns of rater accuracy in 360-degree feedback. Human Resource Planning, 27(4), 23–25.
Facteau, C. L., Facteau, J. D., Schoel, L. C., Russell, J. E., & Poteet, M. L. (1998). Reactions of leaders to 360-degree feedback from subordinates and peers. Leadership Quarterly, 9(4), 428–448.
Facteau, J. D., & Craig, B. S. (2001). Are performance appraisal ratings from different rating sources comparable? Journal of Applied Psychology, 86, 215–227.
Farh, J. L., Cannella, A. A., & Bedeian, A. G. (1991). Peer ratings: The impact of purpose on rating quality and user acceptance. Group & Organization Studies, 16(4), 367–386.
Farh, J. L., & Dobbins, G. H. (1989). Effects of comparative performance information on the accuracy of self-ratings and agreement between self- and supervisor ratings. Journal of Applied Psychology, 74, 606–610.
Fleenor, J. W., McCauley, C. D., & Brutus, S. (1996). Self–other rating agreement and leader effectiveness. Leadership Quarterly, 7(4), 487–506.
Fleenor, J. W., Smither, J. W., Atwater, L. E., Braddy, P. W., & Sturm, R. E. (2010). Self–other rating agreement in leadership: A review. Leadership Quarterly, 21, 1005–1034.
Fleenor, J. W., Taylor, S., & Chappelow, C. (2008). Leveraging the impact of 360-degree feedback. San Francisco, CA: Pfeiffer.
Fletcher, C., & Baldry, C. (2000). A study of individual differences and self-awareness in the context of multi-source feedback. Journal of Occupational and Organizational Psychology, 73, 303–319.
Fletcher, C., Baldry, C., & Cunningham-Snell, N. (1998). The psychometric properties of 360-degree feedback: An empirical study and a cautionary tale. International Journal of Selection and Assessment, 6(1), 19–34.


Fletcher, C., Taylor, P., & Glanfield, K. (1996). Acceptance of personality questionnaire feedback: The role of individual difference variables and source of interpretation. Personality and Individual Differences, 20(2), 151–156.
Flint, D. H. (1999). The role of organizational justice in multi-source performance appraisal: Theory-based applications and directions for research. Human Resource Management Review, 9(1), 1–20.
Folger, R., Konovsky, M. A., & Cropanzano, R. (1992). A due process metaphor for performance appraisal. Research in Organizational Behavior, 14, 129–177.
Fox, S., Ben-Nahum, Z., & Yinon, Y. (1989). Perceived similarity and accuracy of peer ratings. Journal of Applied Psychology, 74, 781–786.
Funderburg, S. A., & Levy, P. E. (1997). The influence of individual and contextual variables on 360-degree feedback system attitudes. Group & Organization Management, 22(2), 210–235.
Furnham, A., & Stringfield, P. (1994). Congruence of self and subordinate ratings of managerial practices as a correlate of supervisor evaluation. Journal of Occupational and Organizational Psychology, 67, 57–67.
Furnham, A., & Stringfield, P. (1998). Congruence in job-performance ratings: A study of 360-degree feedback examining self, manager, peers, and consultant ratings. Human Relations, 51(4), 517–530.
Garbett, R., Hardy, S., Manley, K., Titchen, A., & McCormack, B. (2007). Developing a qualitative approach to 360-degree feedback to aid understanding and development of clinical expertise. Journal of Nursing Management, 15, 342–347.
Ghorpade, J. (2000). Managing five paradoxes of 360-degree feedback. Academy of Management Executive, 14(1), 140–150.
Gillespie, T. L. (2005). Internationalizing 360-degree feedback: Are subordinate ratings comparable? Journal of Business and Psychology, 19(3), 361–382.
Gilliland, S. W., & Langdon, J. C. (1998). Creating performance management systems that promote perceptions of fairness. In J. W. Smither (Ed.), Performance appraisal: State of the art in practice (pp. 209–243). San Francisco, CA: Jossey-Bass.
Gioia, D., & Sims, H. P., Jr. (1985). Self-serving bias and actor-observer difference in organizations: An empirical analysis. Journal of Applied Social Psychology, 15(6), 547–563.
Goffin, R. D., & Anderson, D. W. (2007). The self-rater's personality and self–other disagreement in multi-source performance ratings. Journal of Managerial Psychology, 22(3), 271–289.
Greguras, G. J., Ford, J. M., & Brutus, S. (2003). Manager attention to multisource feedback. Journal of Management Development, 22(4), 345–361.
Greguras, G. J., Robie, C., Schleicher, D. J., & Goff, M., III. (2003). A field study of the effects of rating purpose on the quality of multisource ratings. Personnel Psychology, 56, 1–21.
Greller, M. M., & Herold, D. M. (1975). Sources of feedback: A preliminary investigation. Organizational Behavior and Human Performance, 13, 244–256.
Grote, D. (2000). Public sector organizations: Today's innovative leaders in performance management. Public Personnel Management, 29, 1–20.


Guenole, N., Cockerill, T., Chamorro-Premuzic, T., & Smillie, L. (2011). Evidence for the validity of 360 dimensions in the presence of rater-source factors. Consulting Psychology Journal: Practice and Research, 63(4), 203–218.
Hannum, K. M. (2007). Measurement equivalence of 360-degree assessment data: Are different raters rating the same constructs? International Journal of Selection and Assessment, 15(3), 293–301.
Harris, M. M., & Schaubroeck, J. (1988). A meta-analysis of self-supervisor, self-peer, and peer-supervisor ratings. Personnel Psychology, 41, 43–62.
Harris, M. M., Smith, D. E., & Champagne, D. (1995). A field study of performance appraisal purpose: Research- versus administrative-based ratings. Personnel Psychology, 48, 151–160.
Hazucha, J. F., Hezlett, S. A., & Schneider, R. J. (1993). The impact of 360-degree feedback on management skills development. Human Resource Management, 32(2/3), 325–351.
Heidemeier, H., & Moser, K. (2009). Self–other agreement in job performance ratings: A meta-analytic test of a process model. Journal of Applied Psychology, 94, 353–370.
Hensel, R., Meijers, F., Van Der Leeden, R., & Kessels, J. (2010). 360 degree feedback: How many raters are needed for reliable ratings on the capacity to develop competences, with personal qualities as developmental goals? The International Journal of Human Resource Management, 21(15), 2813–2830.
Herold, D. M., & Fields, D. L. (2004). Making sense of subordinate feedback for leadership development: Confounding effects of job role and organizational rewards. Group & Organization Management, 29, 686–703.
Heslin, P. A., & Latham, G. P. (2004). The effect of upward feedback on managerial behavior. Applied Psychology: An International Review, 53(1), 23–37.
Hezlett, S. A. (2008). Using multisource feedback to develop leaders: Applying theory and research to improve practice. Advances in Developing Human Resources, 10(5), 703–720.
Hoffman, B. J., Gorman, A. C., Blair, C. A., Meriac, J. P., Overstreet, B., & Atchley, K. E. (2012). Evidence for the effectiveness of an alternative multisource performance rating methodology. Personnel Psychology, 65, 531–563.
Hoffman, B. J., Lance, C. E., Bynum, B., & Gentry, W. A. (2010). Rater source effects are alive and well after all. Personnel Psychology, 63, 119–151.
Hoffman, B. J., & Woehr, D. J. (2009). Disentangling the meaning of multisource performance rating source and dimension factors. Personnel Psychology, 62, 735–765.
Hoffman, C. C., Nathan, B. R., & Holden, L. M. (1991). A comparison of validation criteria: Objective versus subjective performance measures and self- versus supervisor ratings. Personnel Psychology, 44, 601–619.
Hoffman, R. (1995). Ten reasons you should be using 360-degree feedback. HR Magazine, 40(4), 1–5.
Holzbach, R. L. (1978). Rater bias in performance ratings: Superior, self-, and peer ratings. Journal of Applied Psychology, 63, 579–588.


Jellema, F., Visscher, A., & Scheerens, J. (2006). Measuring change in work behavior by means of multisource feedback. International Journal of Training and Development, 10(2), 121–139.
Jelley, B. R., & Goffin, R. D. (2001). Can performance-feedback accuracy be improved? Effects of rater priming and rating-scale format on rating accuracy. Journal of Applied Psychology, 86, 134–144.
Johnson, J. W., & Ferstl, K. (1999). The effects of interrater and self–other agreement on performance improvement following upward feedback. Personnel Psychology, 52, 271–303.
Kaiser, R. B., & Craig, B. S. (2005). Building a better mouse trap: Item characteristics associated with rating discrepancies in 360-degree feedback. Consulting Psychology Journal: Practice and Research, 57(4), 235–245.
Kanouse, D. (1998, January). Why multi-rater feedback systems fail. Performance Management, p. 3.
Kleiman, L. S., & Durham, R. L. (1981). Performance appraisal, promotion, and the courts: A critical review. Personnel Psychology, 34, 103–121.
Kline, T. J., & Sulsky, L. M. (2009). Measurement and assessment issues in performance appraisal. Canadian Psychology, 50, 161–171.
Lance, C. E., Hoffman, B. J., Gentry, W. A., & Baranik, L. E. (2008). Rater source factors represent important subcomponents of the criterion construct space, not rater bias. Human Resource Management Review, 18, 223–232.
Lane, J., & Herriot, P. (1990). Self-ratings, supervisor ratings, positions and performance. Journal of Occupational Psychology, 63, 77–88.
Latham, G. P., Almost, J., Mann, S., & Moore, C. (2005). New developments in performance management. Organizational Dynamics, 34(1), 77–87.
LeBreton, J. M., Burgess, J. R., Kaiser, R. B., Atchley, K. E., & James, L. R. (2003). The restriction of variance hypothesis and interrater reliability and agreement: Are ratings from multiple sources really dissimilar? Organizational Research Methods, 6(1), 80–128.
Lepsinger, R., & Lucia, A. D. (1997). 360-degree feedback and performance appraisal. Training, 34(9), 1–5.
Levy, P. E., Cawley, B. D., & Foti, R. J. (1998). Reactions to appraisal discrepancies: Performance ratings and attributions. Journal of Business and Psychology, 12(4), 437–455.
Lewin, A. Y., & Zwany, A. (1976). Peer nominations: A model, literature critique and a paradigm for research. Personnel Psychology, 29, 423–447.
London, M., & Beatty, R. W. (1993). 360-degree feedback as a competitive advantage. Human Resource Management, 32(2&3), 353–372.
London, M., & Smither, J. W. (1995). Can multi-source feedback change perceptions of goal accomplishment, self-evaluations, and performance-related outcomes? Theory-based applications and directions for research. Personnel Psychology, 48, 803–839.
London, M., & Smither, J. W. (2002). Feedback orientation, feedback culture, and the longitudinal performance management process. Human Resource Management Review, 12, 81–100.
London, M., Smither, J. W., & Adsit, D. J. (1997). Accountability: The Achilles' heel of multisource feedback. Group & Organization Management, 22(2), 162–184.


London, M., & Wohlers, A. J. (1991). Agreement between subordinate and self-ratings in upward feedback. Personnel Psychology, 44, 375–390.
London, M., Wohlers, A. J., & Gallagher, P. (1990). A feedback approach to management development. Journal of Management Development, 9, 17–31.
Luthans, F., & Peterson, S. J. (2003). 360-degree feedback with systematic coaching: Empirical analysis suggests a winning combination. Human Resource Management, 42(3), 243–256.
Manning, T., Pogson, G., & Morrison, Z. (2009). Interpersonal influence in the workplace: Influencing behavior and 360-degree assessments. Industrial and Commercial Training, 41(5), 258–269.
Martin, D. C., Bartol, K. M., & Kehoe, P. E. (2000). The legal ramifications of performance appraisal: The growing significance. Public Personnel Management, 29, 379–405.
Martin, D. C., Bartol, K. M., & Levine, M. J. (1986). The legal ramifications of performance appraisal. Employee Relations Law Journal, 12, 370–396.
Maurer, T. J., Raju, N. S., & Collins, W. C. (1998). Peer and subordinate performance appraisal measurement equivalence. Journal of Applied Psychology, 83, 693–702.
Maylett, T. (2009). 360-degree feedback revisited: The transition from development to appraisal. Compensation & Benefits Review, 41, 52–59.
McCarthy, A. M., & Garavan, T. N. (2001). 360° feedback process: Performance, improvement, and employee career development. Journal of European Industrial Training, 25(1), 5–32.
McCarthy, A. M., & Garavan, T. N. (2007). Understanding acceptance of multisource feedback for management development. Personnel Review, 36(6), 903–917.
McCauley, C. D., & Moxley, R. S., Jr. (1996). Developmental 360: How feedback can make managers more effective. Career Development International, 1, 15–19.
McEvoy, G. M., & Buller, P. F. (1987). User acceptance of peer appraisals in an industrial setting. Personnel Psychology, 40, 785–797.
Mobley, W. H. (1982). Supervisor and employee race and sex effects on performance appraisals: A field study of adverse impact and generalizability. Academy of Management Journal, 25, 598–606.
Morgan, A., Cannan, K., & Cullinane, J. (2005). 360-degree feedback: A critical enquiry. Personnel Review, 34(6), 663–680.
Mount, M. K., Barrick, M. R., & Strauss, P. J. (1994). Validity of observer ratings of the big five personality factors. Journal of Applied Psychology, 79, 272–280.
Mount, M. K., Judge, T. A., Scullen, S. E., Sytsma, M. R., & Hezlett, S. A. (1998). Trait, rater and level effects in 360-degree performance ratings. Personnel Psychology, 51, 557–576.
Ng, K.-Y., Koh, C., Ang, S., Kennedy, J. C., & Chan, K.-Y. (2011). Rating leniency and halo in multisource feedback ratings: Testing cultural assumptions of power distance and individualism-collectivism. Journal of Applied Psychology, 96, 1033–1044.
Nilsen, D., & Campbell, D. P. (1993). Self–observer rating discrepancies: Once an overrater, always an overrater? Human Resource Management, 32(2&3), 265–281.
Nowack, K. M. (1992). Self-assessment and rater-assessment as a dimension of management development. Human Resource Development Quarterly, 3(2), 141–155.


Nowack, K. M. (2009). Leveraging multirater feedback to facilitate successful behavioral change. Consulting Psychology Journal: Practice and Research, 61(4), 280–297.
Nowack, K. M., & Mashihi, S. (2012). Evidence-based answers to 15 questions about leveraging 360-degree feedback. Consulting Psychology Journal: Practice and Research, 64(3), 157–182.
O'Reilly, B., & Furth, J. (1994). 360 feedback can change your life. Fortune, 130(8), 93–100.
Ostroff, C., Atwater, L. E., & Feinberg, B. J. (2004). Understanding self–other agreement: A look at rater and ratee characteristics, context, and outcomes. Personnel Psychology, 57, 333–375.
Peiperl, M. A. (2001). Getting 360-degree feedback right. Harvard Business Review, 79(1), 142–147, 177.
Penny, J. A. (2003). Exploring differential item functioning in a 360-degree assessment: Rater source and method of delivery. Organizational Research Methods, 6(1), 61–79.
Pollack, D. M., & Pollack, L. J. (1996). Using 360-degree feedback in performance appraisal. Public Personnel Management, 25, 507–528.
Redman, T., & Snape, E. (1992). Upward and onward: Can staff appraise their managers? Personnel Review, 21(7), 32–46.
Reilly, R. R., Smither, J. W., & Vasilopoulos, N. L. (1996). A longitudinal study of upward feedback. Personnel Psychology, 49, 599–612.
Riggio, R. E., & Cole, E. J. (1992). Agreement between subordinate and superior ratings of supervisory performance and effects on self and subordinate job satisfaction. Journal of Occupational and Organizational Psychology, 65, 151–158.
Robertson, C. (2008). Employee development: Getting the information you need through a 360-degree feedback report. Chemical Engineering, 115(4), 63–66.
Rogers, E. E., Rogers, C. W., & Metlay, W. (2002). Improving the payoff from 360-degree feedback. Human Resource Planning, 25(3), 44–54.
Rosti, R. T., & Shipper, F., Jr. (1998). A study of the impact of training in a management development program based on 360 feedback. Journal of Managerial Psychology, 13(1), 77–89.
Sala, F., & Dwight, S. A. (2002). Predicting executive performance with multirater surveys: Whom you ask makes a difference. Consulting Psychology Journal: Practice and Research, 54(3), 166–172.
Salam, S., Cox, J. F., & Sims, H. P., Jr. (1997). In the eye of the beholder: How leadership relates to 360-degree performance ratings. Group & Organization Management, 22(2), 185–209.
Schrader, B. W., & Steiner, D. D. (1996). Common comparison standards: An approach to improving agreement between self and supervisory performance ratings. Journal of Applied Psychology, 81, 813–820.
Schuler, R. S., & Jackson, S. E. (1987). Linking competitive strategies with human resource management practices. Academy of Management Executive, 1, 207–219.
Scullen, S. E. (1997). When ratings from one source have been averaged, but ratings from another source have not: Problems and solutions. Journal of Applied Psychology, 82, 880–888.
Seifert, C. F., & Yukl, G. (2010). Effects of repeated multi-source feedback on the influence behavior and effectiveness of managers: A field experiment. The Leadership Quarterly, 21, 856–866.


Best Practices for Performance Appraisal  //  45

Seifert, C. F., Yukl, G., & McDonald, R. A. (2003). Effects of multisource feedback and a feedback facilitator on the influence behavior of managers toward subordinates. Journal of Applied Psychology, 88, 561–569.
Shrauger, S. J., & Kelly, R. J. (1988). Global self-evaluation and changes in self-description as a function of information discrepancy and favorability. Journal of Psychology, 56(4), 709–728.
Shrauger, S. J., & Terbovic, M. L. (1976). Self-evaluation and assessments of performance by self and others. Journal of Consulting and Clinical Psychology, 44(4), 564–572.
Siegel, L. (1982). Paired comparison evaluations of managerial effectiveness by peers and supervisors. Personnel Psychology, 35, 843–852.
Smith, A. F., & Fortunato, V. J. (2008). Factors influencing employee intentions to provide honest upward feedback ratings. Journal of Business and Psychology, 22, 191–207.
Smither, J. W., Brett, J. F., & Atwater, L. E. (2008). What do leaders recall about their multisource feedback? Journal of Leadership & Organizational Studies, 14(3), 202–218.
Smither, J. W., London, M., Flautt, R., Vargas, Y., & Kucine, I. (2003). Can working with an executive coach improve multisource feedback ratings over time? A quasi-experimental field study. Personnel Psychology, 56, 23–44.
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58, 33–66.
Smither, J. W., London, M., Reilly, R. R., Flautt, R., Vargas, Y., & Kucine, I. (2004). Discussing multisource feedback with raters and performance improvement. Journal of Management Development, 23(5), 456–468.
Smither, J. W., London, M., & Richmond, K. R. (2005). The relationship between leaders' personality and their reactions to and use of multisource feedback. Group & Organization Management, 30(2), 181–210.
Smither, J. W., London, M., Vasilopoulos, N. L., Reilly, R. R., Millsap, R. E., & Salvemini, N. (1995). An examination of the effects of an upward feedback program over time. Personnel Psychology, 48, 1–34.
Steiner, D. D., & Rain, J. S. (1989). Immediate and delayed primacy and recency effects in performance evaluations. Journal of Applied Psychology, 74(1), 136–142.
Stone, E. F., & Stone, D. L. (1984). The effects of multiple sources of performance feedback and feedback favorability on self-perceived task competence and perceived feedback accuracy. Journal of Management, 10, 371–378.
Testa, M. R. (2002). A model for organization-based 360 degree leadership assessment. Leadership & Organization Development Journal, 23(5), 260–268.
Thomason, S. J., Weeks, M., Bernardin, J. H., & Kane, J. (2011). The differential focus of supervisors and peers in evaluations of managerial potential. International Journal of Selection and Assessment, 19(1), 82–97.


3D Group. (2013). Current practices in 360 degree feedback: A benchmark study of North American companies. Emeryville, CA: Author.
Toegel, G., & Conger, J. A. (2003). 360-degree assessment: Time for reinvention. Academy of Management Learning and Education, 2(3), 297–311.
Tornow, W. W. (1993a). Editor's note: Introduction to special issue on 360-degree feedback. Human Resource Management, 32(2), 211–219.
Tornow, W. W. (1993b). Perceptions or reality: Is multi-perspective measurement a means or an end? Human Resource Management, 32(2), 221–229.
Tyson, S., & Ward, P. (2004). The use of 360 degree feedback technique in the evaluation of management development. Management Learning, 35(2), 205–223.
van der Heijden, B. I., & Nijhof, A. H. (2004). The value of subjectivity: Problems and prospects for 360-degree appraisal systems. International Journal of Human Resource Management, 15(3), 493–511.
van Hooft, E. A., van der Flier, H., & Minne, M. R. (2006). Construct validity of multi-source performance ratings: An examination of the relationship of self-, supervisor-, and peer-ratings with cognitive and personality measures. International Journal of Selection and Assessment, 14(1), 67–81.
Van Velsor, E., Taylor, S., & Leslie, J. B. (1993). An examination of the relationships among self-perception accuracy, self-awareness, gender, and leader effectiveness. Human Resource Management, 32(2), 249–263.
Varela, O. E., & Premeaux, S. F. (2008). Do cross-cultural values affect multisource feedback dynamics? The case of high power distance and collectivism in two Latin American countries. International Journal of Selection and Assessment, 16(2), 134–142.
Vecchio, R. P., & Anderson, R. J. (2009). Agreement in self-other ratings of leader effectiveness: The role of demographics and personality. International Journal of Selection and Assessment, 17(2), 165–179.
Vinson, M. N. (1996). The pros and cons of 360-degree feedback: Making it work. Training & Development, 50(4), 11–12.
Viswesvaran, C., Schmidt, F. L., & Ones, D. S. (2002). The moderating influence of job performance dimensions on convergence of supervisory and peer ratings of job performance: Unconfounding construct-level convergence and rating difficulty. Journal of Applied Psychology, 87, 345–354.
Waldman, D. A. (1997). Predictors of employee preferences for multirater and group-based performance appraisal. Group & Organization Management, 22(2), 264–287.
Waldman, D. A., & Atwater, L. E. (2001). Attitudinal and behavioral outcomes of an upward feedback process. Group & Organization Management, 26(2), 189–205.
Waldman, D. A., Atwater, L. E., & Antonioni, D. (1998). Has 360 degree feedback gone amok? Academy of Management Executive, 12(2), 86–94.
Waldman, D. A., & Bowen, D. E. (1998). The acceptability of 360 degree appraisals: A customer-supplier relationship perspective. Human Resource Management, 37(2), 117–129.
Walker, A. G., & Smither, J. W. (1999). A five-year study of upward feedback: What managers do with their results matters. Personnel Psychology, 52, 393–423.


Westerman, J. W., & Rosse, J. G. (1997). Reducing the threat of rater nonparticipation in 360-degree feedback systems. Group & Organization Management, 22(2), 288–309.
Williams, J. R., & Johnson, M. A. (2000). Self-supervisor agreement: The influence of feedback seeking on the relationship between self and supervisor ratings of performance. Journal of Applied Social Psychology, 30(2), 275–292.
Williams, J. R., & Levy, P. E. (1992). The effects of perceived system knowledge on the agreement between self-ratings and supervisor ratings. Personnel Psychology, 45, 835–847.
Wimer, S. (2002). The dark side of 360-degree feedback. Training and Development, 56(9), 37–42.
Wimer, S., & Nowack, K. M. (1998). 13 common mistakes using 360-degree feedback. Training and Development, 52(5), 69–80.
Woehr, D. J., Sheehan, K. M., & Bennett, W., Jr. (2005). Assessing measurement equivalence across rating sources: A multitrait-multirater approach. Journal of Applied Psychology, 90, 592–600.
Wohlers, A. J., Hall, M. J., & London, M. (1993). Subordinates rating managers: Organizational and demographic correlates of self/subordinate agreement. Journal of Occupational and Organizational Psychology, 66, 263–275.
Wohlers, A. J., & London, M. (1989). Ratings of managerial characteristics: Evaluation difficulty, co-worker agreement, and self-awareness. Personnel Psychology, 42, 235–261.
Yammarino, F. J. (2003). Modern data analytic techniques for multisource feedback. Organizational Research Methods, 6(1), 6–14.
Yammarino, F. J., & Atwater, L. E. (1993). Understanding self-perception accuracy: Implications for human resource management. Human Resource Management, 32(2), 231–247.
Yammarino, F. J., & Atwater, L. E. (1997). Implications of self-other rating agreement for human resources management. Organizational Dynamics, 35–44.
Yukl, G., & Lepsinger, R. (1995). How to get the most out of 360 degree feedback. Training, 45–50.

Strategic Considerations

Strategic considerations include practices that ensure 360s link PM to other elements of the organization that influence whether its strategy—both its overall business strategy and its talent management strategy—can be effectively pursued. Aligning PM practices with organizational strategy matters because well-conducted PM underpins effective succession planning and sets the stage for organizations to meet long-term goals. There are three notable practices in this category. First and foremost, 360s should be integrated with other human resource management (HRM) systems, such as compensation or promotion. This is the most researched best practice in this category, with 43 citations. Second, with 35 citations, is ensuring that the use of 360s is aligned with the organizational culture. For example, organizations whose cultures focus on learning and development, and in which there is a high level of trust and cohesion among employees and supervisors, will be more successful


using 360s than cultures that are more siloed or competitive (Moravec, Gyr, & Friedman, 1993). Finally, clearly defining the purpose, policies, procedures, and uses of the data for managers and employees will help quell concerns: transparency around potentially sensitive processes lessens perceptions of distributive or procedural injustice. Future research is needed to expand this category. For example, an interesting direction would be to identify the jobs, industries, or organizational types for which 360s are not ideal. This type of assessment generally asks raters to reflect on the behaviors of the focal leader (London, Wohlers, & Gallagher, 1990); however, many jobs are completed largely, if not entirely, in isolation (e.g., truck drivers, car salespeople). Such jobs leave little to be evaluated beyond objective outputs and communication responsiveness (e.g., responding to emails in a timely manner). Using data from O*NET OnLine, future research should examine occupations with little human interaction to explore how these individuals can be evaluated. Relatedly, the sheer number of independent workers and small business owners may render the use of 360s illogical. This evaluation method caters to the traditional conception of an organization while inadvertently neglecting nontraditional workers (e.g., those in startups). It is not that these workers do not want to be evaluated or developed; rather, they often lack the resources to do it themselves. Therefore, future research should examine alternative strategies for emerging companies.

Items

The second category, items, includes practices specific to how items are developed, their content, their psychometric properties, and so on. Consistent with research on developing interview questions, it is critical that items for 360s used for PM be grounded in theory, requiring raters to reflect on behaviors strictly related to the job an employee performs, not traits such as likeability. Not surprisingly, the most researched practice in this category, with 58 citations, is that items must be highly job related to support the validity of using 360s. This can be accomplished by basing 360 items on a job analysis, on a competency model, or at the very least on generic job requirements applicable to the job (e.g., leadership). This best practice may be the most important of the 56 listed in this chapter: without valid items, the entire process is useless. Second, with 41 citations, items need to refer to specific and observable behaviors to keep raters from extrapolating beyond how the employee acts. Finally, items should cover a broad range of behaviors to capture


the scope of the individual's performance within his or her job, and ideally items should be written in the language generally used in the organization to ensure common understanding across raters. Opportunities for future research in this area are rife. One major contribution would be to find an effective, practical, and empirically supported way to integrate the legal and psychometric considerations of 360s with the constraints organizations face. For example, theory suggests that 360s would benefit from research on how best to translate a job description into behaviorally anchored items from the perspectives of the various sources, given that successful performance of a task may look one way to the supervisor and another way to a peer. It should be noted, however, that while creating items tailored to each job is most psychometrically sound, it may be impractical. As with traditional performance appraisals, items tend to be standardized across positions, and designing items specific to a job can be time consuming and costly. As such, while it is optimal that items be created directly from job descriptions to ensure validity, it may be more practical to tailor items by functional area or department so they can be used across a larger number of employees while preserving some job specificity.

Scales

One of the main issues with traditional PM approaches is that they do not differentiate among employees. To create an effective PM system using 360s, then, it is worth considering which scales will achieve differentiation and increase variance in scores across employees. Of particular importance is the type of scale used to distinguish between levels of performance. There is evidence that relative scoring, in which raters compare the focal leader to another employee, is superior to absolute rating scales (Goffin & Olson, 2011). Similarly, Hoffman et al. (2012) found that frame-of-reference scales (FORSs), which place specific behaviors on the scale points rather than only on the anchors, resulted in less measurement error and less overlap. Further, in line with existing research on best practices for PM, the rating scale must be clearly understood by everyone involved to ensure reliability, validity, and perceptions of fairness (also see Chapter 15). While there are obvious concerns about using another employee as a referent (e.g., as with grading on a curve, relative rather than absolute ability is being measured), future research should extend existing work to provide a clearer referent by which


raters can evaluate the focal leader (Hoffman et al., 2012). Moreover, it may be useful to employ several different scale types (e.g., FORS, absolute) to derive the most information about the focal leader.
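As a concrete illustration of the differentiation problem, within-rater standardization is one simple way to recover relative standing from absolute ratings when a rater is lenient or severe. The sketch below is illustrative only: the names, scores, and the standardization technique itself are our assumptions, not procedures drawn from the studies cited above.

```python
from statistics import mean, stdev

def normalize_within_rater(ratings):
    """Convert one rater's absolute ratings into within-rater z-scores.

    Standardizing removes rater-level leniency or severity, so each
    score expresses a ratee's standing *relative* to the rater's other
    ratees -- one crude way to recover differentiation from absolute
    scales. (Illustrative only; not a substitute for FORS design.)
    """
    m, s = mean(ratings.values()), stdev(ratings.values())
    if s == 0:  # rater gave everyone the same score: no differentiation
        return {name: 0.0 for name in ratings}
    return {name: (x - m) / s for name, x in ratings.items()}

# A lenient rater: all absolute scores are high, yet relative
# standing still separates the ratees.
lenient = {"Ana": 5, "Ben": 5, "Cal": 4}
print(normalize_within_rater(lenient))
```

Relative and FORS-based formats tackle the same leniency problem at the measurement stage, rather than through post hoc rescoring like this.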

Raters

What sets 360s apart from traditional evaluation systems—whether designed for development or PM—is that they attempt to increase the reliability of ratings by triangulating among several viewpoints, allowing a more comprehensive illustration of the focal leader's performance. Raters are thus a critical component in the successful implementation of 360s, and choosing raters is one of the most important decisions. Beyond simply including several raters, as the name of the system implies, raters should be selected deliberately: managers should consider whether an individual's perspective will contribute above and beyond the perspectives already represented. Traditionally, raters include peers, subordinates, supervisors, and customers. It is also crucial that employees be afforded the opportunity to rate themselves. Not only does this increase perceptions of fairness, but discrepancies between self- and other ratings give the focal leader an opportunity to develop greater self-awareness. Finally, rater anonymity is encouraged but can vary depending on the purpose of the 360s. A majority of the research on 360s states that raters should, in no uncertain terms, be anonymous so that they can be completely honest without fear of retribution (Bracken, Timmreck, Fleenor, & Summers, 2001). Because the software used to collect 360 ratings can make true anonymity difficult, it is recommended that feedback at least remain confidential. However, there are instances in which knowing the source allows for more specific feedback (Antonioni, 1996). An overlooked assumption in the 360 literature is that raters, given anonymity and a proper understanding of the process, will be sufficiently motivated to participate. As is commonplace in social science data collection (e.g., Rose, Sidle, & Griffith, 2007), incentives may have a place in 360s when it is difficult to obtain raters, particularly customer raters.
Research on customer surveys suggests that customers self-select into surveys, potentially introducing systematic error through an unrepresentative sample (Lin & Jones, 1997). This problem could be reduced by offering meaningful incentives that appeal to a broader demographic, potentially yielding a more representative sample and a higher response rate (Cobanoglu & Cobanoglu, 2003).
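In practice, the triangulation and confidentiality practices described above are often operationalized in reporting tools that average ratings per source and suppress any source with too few respondents to mask individuals. The sketch below assumes a three-rater minimum, a common rule of thumb rather than a figure from this chapter; the function and field names are invented for illustration.

```python
from statistics import mean

MIN_RATERS = 3  # assumed masking threshold; organizations set their own

def source_report(ratings_by_source, self_rating=None):
    """Aggregate 360 ratings per source, suppressing thin sources.

    ratings_by_source: e.g. {"peers": [4, 3, 5], "subordinates": [4]}
    Sources with fewer than MIN_RATERS respondents are folded into a
    combined "others" average so no individual rater is identifiable.
    If a self_rating is given, also report the self-other gap that
    feedback discussions typically focus on.
    """
    report, spillover = {}, []
    for source, scores in ratings_by_source.items():
        if len(scores) >= MIN_RATERS:
            report[source] = round(mean(scores), 2)
        else:
            spillover.extend(scores)  # too few to show separately
    if spillover:
        report["others (combined)"] = round(mean(spillover), 2)
    if self_rating is not None:
        everyone = [s for v in ratings_by_source.values() for s in v]
        report["self-other gap"] = round(self_rating - mean(everyone), 2)
    return report

print(source_report({"peers": [4, 3, 5], "subordinates": [4]}, self_rating=5))
```

The positive self-other gap in the example is exactly the kind of discrepancy the chapter recommends surfacing for discussion rather than hiding.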


Administration

Administration refers to the procedures for implementing 360s. Using 360s for PM purposes is a substantial endeavor: from identifying raters and training the appropriate parties to managing the data and interpreting feedback, 360s cost organizations time that is never recouped if they are administered poorly. This category includes best practices addressing how frequently 360s should be administered, the burden they place on workers, and ways to maintain perceptions of procedural justice. Timing of administration is perhaps the most important consideration, for several reasons. First, 360s should be conducted when the data will actually be used for personnel decisions, such as promotions or pay increases. Doing so allows the most up-to-date information about performance to be examined and applied to these decisions. Further, the link between performance and outcomes will be clearer given the recency of the appraisal, which can increase perceptions of procedural fairness in pay and promotion decisions. Second, it is important that 360s be conducted with regularity, because development is incremental and a function of the frequency of feedback. Because full 360s are time consuming, they are best administered annually, with midyear follow-ups and other intermediate reviews to maintain progress toward the goals set as a result of the 360s (Antonioni, 1996). For example, in a field study of managerial development, Seifert and Yukl (2010) found that supervisors who received repeated feedback were rated as more effective managers than those who did not. Finally, those in charge of administering 360s should make sure the process is not unduly burdensome. Consider, for example, a manager with 10 subordinates: rating all 10 at once requires considerable time and energy.
Instead, scholars suggest that when the feedback will not be used for imminent pay and promotion decisions, administration should be staggered so managers do not have to rate all 10 employees at once (Brutus & Derayeh, 2002). While there is a general understanding that 360s cost time, money, and energy, these costs have not been thoroughly examined by researchers (Atwater, Brett, & Charles, 2007). We suggest that future research quantify the costs associated with 360s and map them onto the behavioral differences (e.g., skill development) that result from the feedback and midyear reviews. This would provide further evidence that the investment in 360s pays off over time, yielding increased pay and promotions for employees as well as increased productivity for the organization. Focal employees or raters who do not take the system seriously, or who do not engage fully in the process, might be more committed if they were aware of the exact costs and value of participating in 360s.
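The staggering recommendation can be made concrete with a simple round-robin schedule. The quarterly cadence and four-period split below are illustrative assumptions, not something prescribed by Brutus and Derayeh (2002).

```python
def stagger(subordinates, periods=4):
    """Spread one manager's ratees across rating periods round-robin,
    so no single period carries the whole rating burden."""
    schedule = {p: [] for p in range(1, periods + 1)}
    for i, name in enumerate(subordinates):
        schedule[i % periods + 1].append(name)
    return schedule

# 10 direct reports spread over 4 quarters: at most 3 ratings per quarter.
print(stagger([f"employee_{n}" for n in range(1, 11)]))
```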


Training/​Instruction

As noted, 360s are a complex evaluation mechanism with many moving parts. Proper training in how to implement, complete, and interpret them is therefore required for the program to succeed and for the information collected to be useful. Three key groups require training: the individuals managing the process, the raters, and the focal leaders. Well-trained managers of the process lessen the burden on raters and focal leaders by streamlining administration, resolving issues as they arise, and ideally shortening the time between ratings and interpretation. Further, by training the raters and the employees receiving feedback, organizations reduce the chance of large discrepancies between self- and other ratings, which carry several potential negative consequences. First, employees may be less likely to trust the procedure, the information gathered, and their supervisors if ratings (i.e., self vs. other, other vs. other) differ too much. A critical factor in the success of 360s is employee buy-in (Atwater et al., 2007), so managers of the process should be trained to seek explanations for differences by meeting with the raters; understanding discrepancies in ratings pays off when communicating feedback to the focal leader. Second, inaccurate ratings may not provide enough useful information to pinpoint areas for development, a missed opportunity for both the employee and the organization (Atkins & Wood, 2002). Training raters on the instruments—and more specifically creating a common mental model of the scales used—increases the reliability of responses (Guenole, Cockerill, Chamorro-Premuzic, & Smillie, 2011). Research on training and instruction is fairly straightforward: Make sure everyone involved in the process is well trained.
However, scholars remain concerned that raters may still have emotional responses to the focal leader (Robbins & DeNisi, 1994), so biases such as leniency and halo remain threats to the effectiveness of the 360 system. Perhaps including items after the evaluation about the rater's relationship to the ratee (e.g., How long have you two worked together? Do you spend time together outside work? Do you consider this person a friend?) would allow those who analyze the data to control for such biases. This idea directly contradicts the best practice of keeping 360s anonymous, or at least confidential, from the rater's perspective. Interestingly, though, it could serve as a safeguard for organizations in two ways: it would let them control, after the fact, for social-context factors that affect accuracy, and it might prompt raters to provide more accurate ratings in the first place, since such questions signal that these factors are being taken into account. Comparing


organizations that consider social effects with organizations that adhere to more traditional 360s maintaining rater anonymity is an area with many possibilities for future research and would make an important contribution to the theory and practice of 360s.

Interpretation of Feedback

Of all the categories in our literature review, interpretation of feedback comprises the most recommendations, with 13 practices (the second most populated category is raters). The best practices in this category include approaches not only for helping employees understand the information from raters, but also for ensuring that it is received well enough to be internalized and used to help the employee develop and perform better. Yielding more than 110 citations, the most commonly researched practice is meaningfully interpreting differences in feedback from different sources and between self and others. As mentioned, discrepancies between self- and other ratings, particularly where other ratings are notably more negative than self-ratings, risk poor reception. Meaningful interpretation alone, however, may not be sufficient to guarantee positive outcomes; pairing it with other best practices, such as considering the focal leader's individual differences or providing coaching, can significantly increase the effectiveness of 360s (e.g., Bono & Colbert, 2005; Luthans & Peterson, 2003). Finally, while feedback about an employee's own performance is generally helpful, information about where the employee stands relative to others can aid interpretation. Humans are social creatures who naturally compare themselves to others (Festinger, 1954). By placing an employee's 360 results in the context of other important factors (e.g., coworker performance; Antonioni, 1996) and objective performance data (e.g., sales, errors; Eichinger & Lombardo, 2004), managers can create a more complete picture of performance. This allows focal leaders to identify the steps they need to take to be more productive (Carver & Scheier, 1982) and introduces potential role models (i.e., high-performing coworkers). Future research is needed to expand this category.
Building on work by Shipper, Hoffman, and Rotondo (2007), as well as Eckert, Ekelund, Gentry, and Dawson (2010), scholars should explore interpretation across national cultures. While feedback has overwhelmingly been considered an important mechanism for employee development and performance (e.g., Hackman & Oldham, 1975), this finding is likely specific to Western ideals. Cultures generally have strong norms regarding feedback. For example, the idea of "saving face" is a central tenet of many Asian cultures, yielding a sensitivity to raw personal feedback that could reflect poorly on one's family or community (Kim & Nam, 1998). It may be that multinational enterprises (MNEs) or multicultural


organizations within the United States need to accommodate this diversity of values by, for example, restricting access to 360 results, limiting meetings about results to essential personnel, or providing feedback in writing before meeting with the focal leader in person so that he or she can process the information emotionally in private.

Development

While the original intent of 360s was developmental, over time the method has migrated toward use for PM (Maylett, 2009). Scholars recommend, however, that it be used simultaneously to assess and to develop employees: evaluating an employee's performance without the intent of developing him or her is a wasted opportunity for both the employee and the organization. This category accordingly includes best practices regarding the use of 360s for development. For example, scholars suggest that even when 360s are used for PM, the employee should also receive resources for development. In so doing, focal leaders are more likely to perceive organizational and managerial support for their role in the company, potentially strengthening their willingness to use the feedback from 360s (Smither, London, & Reilly, 2005). Further, research shows that when a competency is identified for development and appropriate support and resources are provided, individuals are more likely to focus on gaining those skills (Dai, De Meuse, & Peterson, 2010). Perhaps most critical to actual development, however, is planning. Assuming other best practices have been followed—items and scales are clear, raters are trained, and employees are invested—the feedback becomes actionable only when employees work with their managers or coaches to create development goals (Smither, London, Flautt, Vargas, & Kucine, 2003). This category could benefit from research that considers employees' workplace networks. Social capital is a powerful force that can be used to gain employment (Granovetter, 1973), access resources (Burt, 1992), harness needed social support, and reach alternative ideas, potentially yielding greater performance and creativity (Reagans, Zuckerman, & McEvily, 2004). It is therefore reasonable to theorize that an employee's connections at work can contribute to his or her development.
While the best practices in this category recommend that managers provide all needed resources for employee development, employees at the same level who reach out to each other for help gaining a new skill or practicing an existing one may be better resources, given their equal status and similar experiences in the organization. Future research should examine how individuals armed with feedback from their


360s seek advice from their workplace networks and whether relying on the network yields more sustained behavioral change.

Review

The final category includes practices related to sending results to the appropriate higher level managers and maintaining the system for future use. This category is important because it highlights oft-forgotten follow-ups that ensure procedural effectiveness and reassure executives not directly involved in PM that the system is working. Two best practices illustrate this category. First, just as the 360 provides a mechanism for employees to reflect on their performance and set development goals, there should be a mechanism to evaluate the effectiveness of the 360 system itself (Bracken & Timmreck, 1999). Second, given that these data are recorded and stored for future reference, there should be an appeal mechanism through which focal leaders can raise concerns about ratings or the process in general. Importantly, such an option increases perceptions of procedural justice (Latham, Almost, Mann, & Moore, 2005). While there is some concern that employees who perceive unfairness in 360s, particularly when the outcomes are high stakes (e.g., increased pay or promotions), may pursue legal action (Martin, Bartol, & Kehoe, 2000), the more prevalent threats resulting from perceived unfairness are reduced productivity and voluntary turnover. Luthans and Peterson (2003) found that with feedback and regular coaching, focal leaders' turnover intentions decreased significantly from their levels before the 360s were conducted. It is possible, then, that poorly addressed discrepancies between self- and other ratings and a lack of meaningful interpretation can lead an employee to perceive procedural injustice. Future research should examine the consequences of perceived unfairness. Such an exploration would help pinpoint weak points in the process and assist in identifying solutions when perceptions of injustice are significant, reducing the probability of poor performance or intentions to quit.

DISCUSSION

Performance management is at the heart of HRM systems because it is responsible for managing and enhancing the use of an organization's human capital to achieve the organization's goals. Yet, despite its centrality to HRM, PM is plagued by three issues that limit its effective implementation and have earned it the reputation of being the "Achilles heel" of HRM (Pulakos & O'Leary, 2011). First, performance ratings tend to be unreliable given that it is largely the duty of the manager to provide a rating about

56 // 360 for Decision-Making

an employee (Murphy & Cleveland, 1995). Second, ratings tend to be biased and do not differentiate employees (e.g., Roberson, Galvin, & Charles, 2007; Steiner & Rain, 1989). Finally, such bias leads to employee perceptions of unfairness and a lack of user acceptance (Folger, Konovsky, & Cropanzano, 1992). Moreover, by basing these evaluations on the input of a single respondent, the amount of information garnered and utilized is often low, neglecting potentially critical social and contextual factors relevant to the evaluation itself (Levy & Williams, 2004), as well as ignoring opportunities for development. We propose that 360s offer a way in which organizations can overcome these shortcomings of traditional PM. Further, when used in a manner consistent with the best practices outlined in this chapter, we propose they may enable PM systems to better capture and create value in two important ways (Edwards & Ewen, 1996). First, their use, if it is consistent with these best practices, may allow an organization to derive more value by exploiting human capital resources (HCRs) already in existence within it. For example, this may include sending strong signals about which behaviors are desired, effectively measuring employee performance, and so on. Second, a PM system may be able to better create value by rapidly altering the nature of HCRs within the organization (e.g., in response to environmental or organizational changes). For example, this may include the ability to quickly acquire information regarding changes in work requirements, redefine roles, and motivate individuals to acquire knowledge, skills, abilities, and other characteristics (KSAOs) relevant to enacting these roles. Consistent with March’s (1991) argument that the most competitive firms strike a balance between exploitation and exploration, the best PM systems are likely capable of both, making them ambidextrous. 
Finally, Ennen and Richter (2010) suggest that some complementarities among organizational practices exist by virtue of another factor. Here, we propose that if 360s are incorporated into PM in a manner consistent with the best practices, then they create a powerful complementarity between performance appraisal and training and development within the PM system. This enables it to impact individual and, potentially, unit-level outcomes more strongly. For example, 360s extract information from a greater number of role partners (e.g., supervisor, peers, subordinates, customers). Because of this, they are more precise in pinpointing which KSAOs are relevant to individual and unit-level outcomes and measure their behavioral demonstration more accurately. Similarly, 360s allow for fewer external attributions to be made by employees. Thus, they may more strongly motivate employees to develop desired behaviors, thereby impacting the accessibility of their KSAOs to the unit in which they work and their capacities to perform.

Best Practices for Performance Appraisal  //​ 57

CONCLUSION

In summation, the implementation of 360s is a well-researched topic that yields 56 best practices for using this system most effectively as a method for PM and a mechanism for development. The key insights from this chapter are as follows:

• Align the use of 360s with the organization's strategies.
• Create items that are job related and anchored in observable behaviors.
• Appropriately select raters and always allow focal leaders to rate themselves.
• Should results from 360s be used for pay and promotion decisions, conduct them as temporally close to those decisions as possible.
• Make sure ratings are confidential.
• Train managers of the process to examine underlying reasons for notable discrepancies across raters.
• Train managers to provide meaningful interpretations for the focal leader.
• Do not forget to review the 360 system as a whole for inefficiencies.

REFERENCES

Antonioni, D. (1996). Designing an effective 360-degree appraisal feedback process. Organizational Dynamics, 25(2), 24–38.
Atkins, P. W., & Wood, R. E. (2002). Self- versus others' ratings as predictions of assessment center ratings: Validation evidence for 360-degree feedback programs. Personnel Psychology, 55, 871–904.
Atwater, L. E., Brett, J. F., & Charles, A. C. (2007). Multisource feedback lessons learned and implications for practice. Human Resource Management, 46(2), 285–307.
Bono, J. E., & Colbert, A. E. (2005). Understanding responses to multi-source feedback: The role of core self-evaluations. Personnel Psychology, 58(1), 171–203.
Bracken, D. W., & Timmreck, C. W. (1999). Guidelines for multisource feedback when used for decision making. The Industrial-Organizational Psychologist, 36(4), 64–74.
Bracken, D. W., Timmreck, C. W., Fleenor, J. W., & Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 40(1), 3–20.
Brutus, S., & Derayeh, M. (2002). Multisource assessment programs in organizations: An insider's perspective. Human Resource Development Quarterly, 13(2), 187–202.
Burt, R. S. (1992). Structural holes. Cambridge, MA: Harvard University Press.
Campion, M. C., Campion, E. D., & Campion, M. A. (2015). Improvements in performance management through the use of 360 feedback. Industrial and Organizational Psychology, 8(1), 85–93.
Carver, C. S., & Scheier, M. F. (1982). Control theory: A useful conceptual framework for personality-social, clinical, and health psychology. Psychological Bulletin, 92(1), 111–135.
Cobanoglu, C., & Cobanoglu, N. (2003). The effect of incentives in web surveys: Application and ethical considerations. International Journal of Market Research, 45(4), 475–488.
Dai, G., De Meuse, K. P., & Peterson, C. (2010). Impact of multi-source feedback on leadership competency development: A longitudinal field study. Journal of Managerial Issues, 22(2), 197–219.
DeNisi, A. S., & Kluger, A. N. (2000). Feedback effectiveness: Can 360-degree appraisals be improved? Academy of Management Executive, 14(1), 129–139.

Edwards, M. R., & Ewen, A. J. (1996). 360 feedback: The powerful new model for employee assessment & performance improvement. New York, NY: Amacom.
Eckert, R., Ekelund, B. Z., Gentry, W. A., & Dawson, J. F. (2010). "I don't see you like you see me, but is that a problem?" Cultural influences on rating discrepancy in 360-degree feedback instruments. European Journal of Work and Organizational Psychology, 19(3), 259–278.
Eichinger, R. W., & Lombardo, M. M. (2004). Patterns of rater accuracy in 360-degree feedback. Human Resource Planning, 7(4), 23–25.
Ennen, E., & Richter, A. (2010). The whole is more than the sum of its parts—or is it? A review of the empirical literature on complementarities in organizations. Journal of Management, 36, 207–233.
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140.
Folger, R., Konovsky, M. A., & Cropanzano, R. (1992). A due process metaphor for performance appraisal. Research in Organizational Behavior, 14, 129–177.
Funderburg, S. A., & Levy, P. E. (1997). The influence of individual and contextual variables on 360-degree feedback system attitudes. Group & Organization Management, 22(2), 210–235.
Goffin, R. D., & Olson, J. M. (2011). Is it all relative? Comparative judgments and the possible improvements of self-ratings and ratings of others. Perspectives on Psychological Science, 6(1), 48–60.
Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.
Guenole, N., Cockerill, T., Chamorro-Premuzic, T., & Smillie, L. (2011). Evidence for the validity of 360 dimensions in the presence of rater-source factors. Consulting Psychology Journal: Practice and Research, 63(4), 203–218.
Hackman, J. R., & Oldham, G. R. (1975). Development of the job diagnostic survey. Journal of Applied Psychology, 60(2), 159–170.
Hazucha, J. F., Hezlett, S. A., & Schneider, R. J. (1993). The impact of 360-degree feedback on management skills development. Human Resource Management, 32(2/3), 325–351.
Hoffman, B. J., Gorman, A. C., Blair, C. A., Meriac, J. P., Overstreet, B., & Atchley, K. E. (2012). Evidence for the effectiveness of an alternative multisource performance rating methodology. Personnel Psychology, 65, 531–563.
Kim, J. Y., & Nam, S. H. (1998). The concept and dynamics of face: Implications for organizational behavior in Asia. Organization Science, 9(4), 522–534.
Latham, G. P., Almost, J., Mann, S., & Moore, C. (2005). New developments in performance management. Organizational Dynamics, 34(1), 77–87.
Levy, P. E., & Williams, J. R. (2004). The social context of performance appraisal: A review and framework for the future. Journal of Management, 30, 881–905.
Lin, B., & Jones, C. A. (1997). Some issues in conducting customer satisfaction surveys. Journal of Marketing Practice: Applied Marketing Science, 3(1), 4–13.
London, M., Wohlers, A. J., & Gallagher, P. (1990). A feedback approach to management development. Journal of Management Development, 9, 17–31.
Luthans, F., & Peterson, S. J. (2003). 360-degree feedback with systematic coaching: Empirical analysis suggests a winning combination. Human Resource Management, 42(3), 243–256.
March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2, 71–87.
Martin, D. C., Bartol, K. M., & Kehoe, P. E. (2000). The legal ramifications of performance appraisal: The growing significance. Public Personnel Management, 29, 379–405.
Maylett, T. (2009). 360-degree feedback revisited: The transition from development to appraisal. Compensation & Benefits Review, 41(5), 52–59.
Moravec, M., Gyr, H., & Friedman, L. (1993). A 21st century communication tool. HR Magazine, 38, 77.
Murphy, K. R., & Cleveland, J. N. (1995). Understanding performance appraisal: Social, organizational, and goal-based perspectives. Thousand Oaks, CA: Sage.
Pulakos, E. D., & O'Leary, R. S. (2011). Why is performance management broken? Industrial and Organizational Psychology: Perspectives on Science and Practice, 4, 146–164.
Reagans, R., Zuckerman, E., & McEvily, B. (2004). How to make the team: Social networks vs. demography as criteria for designing effective teams. Administrative Science Quarterly, 49(1), 101–133.

Robbins, T. L., & DeNisi, A. S. (1994). A closer look at interpersonal affect as a distinct influence on cognitive processing in performance evaluations. Journal of Applied Psychology, 79(3), 341–353.
Roberson, L., Galvin, B. M., & Charles, A. C. (2007). When group identities matter: Bias in performance appraisal. Academy of Management Annals, 1, 617–650.
Rose, D. S., Sidle, S. D., & Griffith, K. H. (2007). A penny for your thoughts: Monetary incentives improve response rates for company-sponsored employee surveys. Organizational Research Methods, 10(2), 225–240.
Seifert, C. F., & Yukl, G. (2010). Effects of repeated multi-source feedback on the influence behavior and effectiveness of managers: A field experiment. The Leadership Quarterly, 21, 856–866.
Shipper, F., Hoffman, R. C., & Rotondo, D. M. (2007). Does the 360 feedback process create actionable knowledge equally across cultures? Academy of Management Learning & Education, 6(1), 33–50.
Smither, J. W., London, M., Flautt, R., Vargas, Y., & Kucine, I. (2003). Can working with an executive coach improve multisource feedback ratings over time? A quasi-experimental field study. Personnel Psychology, 56, 23–44.
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58, 33–66.
Steiner, D. D., & Rain, J. S. (1989). Immediate and delayed primacy and recency effects in performance evaluations. Journal of Applied Psychology, 74(1), 136–142.
Tornow, W. W. (1993). Editor's note: Introduction to special issue on 360-degree feedback. Human Resource Management, 32(2), 211–219.


4 /// HISTORICAL CHALLENGES OF USING 360 FEEDBACK FOR PERFORMANCE EVALUATION

MANUEL LONDON AND JAMES W. SMITHER

This chapter begins by noting the strengths of 360 Feedback as a strategic performance management tool. We then describe several major challenges of using 360 Feedback for performance evaluation and administrative decisions (about pay or promotion). We conclude with recommendations for human resource (HR) practitioners.

As a strategic management tool, 360 Feedback aims to create an organizational environment in which performance feedback and improvement are parts of an ongoing process. In such an environment, focal leaders are receptive to feedback, seek it, and process it mindfully. The process recognizes that feedback comes from many sources that surround the focal leader, including the manager, direct reports, peers, and others outside the focal leader's team. The survey and feedback process calls attention to key dimensions of performance and areas for improvement.

Traditionally, organizational psychologists and HR practitioners have recommended that 360 Feedback be used only for development of the focal leader, not sharing the results with the manager or any higher level manager who has decision-making authority and can influence decisions about the focal leader's career (London, 2015). Using feedback for development alone reduces the anxiety that can accompany receiving feedback. It can also increase the willingness of raters to be candid when raters are promised that their ratings and comments will be anonymous and will not affect the focal leader's pay or career prospects (Day, Fleenor, Atwater, Sturm, & McKee, 2014). However, Bracken, Rose, and Church (2016) reported a growing trend of using 360 Feedback for performance evaluation, thereby affecting pay raises, promotion decisions, job reassignments, identification of high-potential talent, succession planning, and talent management. They reported that, in 2003, only 27% of companies were using 360 Feedback for performance evaluation, and by 2013, 48% were using it for performance evaluation (3D Group, 2013, cited in Bracken et al., 2016). In this context, organizations should take steps to ensure that raters are credible and have opportunities to make substantive narrative comments, and that focal leaders have the chance to process the results mindfully and use them constructively to improve performance and career development (Bracken et al., 2016).

FORMS OF 360 FEEDBACK

The 360 Feedback data may be gathered in several ways. Usually, 360 Feedback data are gathered via a survey in which ratings and comments are collected from direct reports, coworkers/peers at the same level as the focal leader, the manager (and perhaps higher level managers), and possibly externally (e.g., ratings from suppliers or customers). Most often, self-ratings are also collected. The feedback report summarizing survey responses is likely to provide the average or frequency distribution of ratings on each item from each rater group (as long as there are at least three respondents from each rater group to protect raters' anonymity). Results may also be averaged across items within dimensions, thereby simplifying the report for the focal leader. The report may include comparison data, such as the average ratings for other focal leaders at the same organizational level and the prior year's results for the focal leader. Narrative comments are usually included verbatim (although rarely they are summarized by a coach or HR professional). The report may be sent directly to the focal leader and the manager or delivered by a coach.

Recently, some organizations have been using electronic technologies (e.g., apps) that allow employees to request feedback from or provide feedback to coworkers at any time (e.g., following a meeting, after an interaction, throughout project implementation) to create an environment in which feedback is available on a day-to-day or week-to-week basis (Chapter 5, this volume, by Hunt, Sherwood, & Pytel).

Another form of 360 Feedback, not necessarily exclusive of those described, is less formal. Managers may call, email, or meet with coworkers of the focal leader to gather comments about the individual's performance. This might occur annually or periodically (e.g., as projects progress or conclude). Managers may feel they need detailed input from others to evaluate a focal leader who is a member of multiple teams or projects (especially when the manager is unable to directly observe the focal leader in these different roles). The manager may use the information to justify personnel decisions (e.g., assignment to another project) about the focal leader without necessarily revealing the source.

Yet another form of survey feedback that can be used for performance evaluation is annual or periodic team engagement, pulse, climate, or satisfaction surveys that include items such as "My supervisor . . ." or "My subordinates. . . ." Ratings are averaged at the team or unit level, providing a type of upward feedback concerning the focal leader's leadership skills. Higher level managers can use this information to judge the focal leader's people management skills and incorporate the results into the annual performance evaluation of the focal leader.

STRENGTHS OF 360 FEEDBACK

Early research on 360 Feedback showed the importance of gathering performance evaluations from different perspectives (Klimoski & London, 1974) and especially the value of direct reports’ ratings of their managers’ behaviors (Hegarty, 1974). Considerable research has focused on leniency effects of self-​ratings and the extent of agreement among raters (Fleenor, Smither, Atwater, & Sturm, 2010). The strengths of 360 Feedback are worth emphasizing in the context of its use for performance evaluation and related administrative decisions. As a support for organizational strategy and promoting a feedback-​ based performance management culture, the 360 Feedback process





• collects information about a focal leader from people who have exposure to elements of performance that the manager does not have;
• can assess different performance dimensions for different rater groups (e.g., direct reports rating coaching behaviors and peers rating team behaviors);
• can address technical knowledge (e.g., perceptions of the leader's technical skills and knowledge of emerging trends in the discipline) as well as managerial skills (e.g., handling conflict, coaching, decision-making);
• is especially valuable when the manager does not have direct access to all elements of the focal leader's performance, for instance, when the focal leader is assigned to several different project teams over time, and when the focal leader interacts with multiple constituencies (direct reports, coworkers, higher level managers, customers, suppliers, and other contacts internal or external to the organization);
• involves all constituencies in performance management, thereby helping to develop a culture that values feedback;
• can focus on outcomes as well as the behaviors used to attain those outcomes;
• communicates the dimensions of performance that the organization views as most important;
• delivers the message that everyone is involved in performance management and everyone's viewpoint matters; and
• contributes to a feedback-oriented, engaging performance management culture with a focus on the depth and quality of performance reviews and feedback (cf. Meinecke, Lehmann-Willenbrock, & Kauffeld, 2017; Qian, Wang, Song, Li, Wu, & Fang, 2017; Steelman & Wolfeld, 2016; Young, Richard, Moukarzel, Steelman, & Gentry, 2017).
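The report-construction rule described earlier (per-source averages that are released only when at least three raters from a source respond, to protect anonymity) can be sketched as a minimal aggregation routine. This is an illustrative sketch only; the function and variable names, the sample data, and the threshold constant are ours, not taken from any particular 360 platform:

```python
from collections import defaultdict
from statistics import mean

MIN_RATERS = 3  # common anonymity threshold described in this chapter

def build_report(ratings, min_raters=MIN_RATERS):
    """Aggregate (rater_id, source, item, score) tuples into per-source item means.

    A source's results appear only if at least `min_raters` distinct raters
    from that source responded; otherwise the whole source is suppressed.
    """
    scores_by_source_item = defaultdict(list)
    raters_by_source = defaultdict(set)
    for rater_id, source, item, score in ratings:
        scores_by_source_item[(source, item)].append(score)
        raters_by_source[source].add(rater_id)
    report = {}
    for (source, item), scores in scores_by_source_item.items():
        if len(raters_by_source[source]) >= min_raters:
            report.setdefault(source, {})[item] = round(mean(scores), 2)
        else:
            report[source] = "suppressed: fewer than %d raters" % min_raters
    return report

# Hypothetical data: three direct reports clear the threshold, one peer does not.
ratings = [
    ("r1", "direct_report", "coaching", 4),
    ("r2", "direct_report", "coaching", 5),
    ("r3", "direct_report", "coaching", 3),
    ("r4", "peer", "teamwork", 2),  # a lone peer rater: results withheld
]
print(build_report(ratings))
```

Real systems add frequency distributions, within-dimension averages, and comparison norms on top of this same suppression logic.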

CHALLENGES FOR USING 360 FEEDBACK FOR PERFORMANCE EVALUATION

Despite the potential advantages of 360 Feedback, its use presents many challenges for organizations. This section describes five of the most important challenges.

Challenge 1: Should 360 Feedback Be Used for Performance Evaluation, Development, or Both?

Unfortunately, there is little, if any, empirical data that can help HR practitioners answer this question. A meta-analysis of 360 Feedback (Smither, London, & Reilly, 2005) found that the average effect size (improvement in 360 Feedback ratings from Time 1 to Time 2) was greater when the purpose was for development only (d = +0.25) than when it was for administrative decisions, such as compensation or promotion (d = +0.08). This result suggests that 360 Feedback will be more effective when used for development only, rather than for administrative purposes. But, this finding should be interpreted with caution because very few studies in the meta-analysis used 360 Feedback for an administrative purpose. Interpreting change in 360 Feedback from Time 1 to Time 2 can be further complicated because, as team members change over time, different raters might be providing feedback at Time 1 than at Time 2.

More recently, Kim, Atwater, Patel, and Smither (2016), in a longitudinal study of 232 firms in South Korea, examined the relationship between the use of 360 Feedback and subsequent financial performance. They found that the use of 360 Feedback was positively related to subsequent workforce productivity (total sales per focal leader), and that this relationship was stronger when firms used the feedback for both development and administration rather than for either purpose alone. This finding suggests that companies might see more benefits when they use 360 Feedback for both developmental and administrative purposes; however, this result should also be viewed with caution because only 13% of the firms in the sample used 360 Feedback for both developmental and administrative purposes.

The results reported by Kim et al. (2016) are in contrast to a study reported by Pfau, Kay, Nowack, and Ghorpade (2002). But, the Pfau et al. study did not appear in a peer-refereed journal and its design limited its ability to establish causal effects. In addition, traditional performance appraisal processes (which rely on ratings only from the focal leader's manager) have been criticized for their lack of success in simultaneously serving both developmental and administrative purposes. In the absence of sufficient evidence, we do not know whether 360 Feedback ratings can do better.

Organizations should consider the likelihood that focal leaders, knowing that their 360 Feedback will affect their performance evaluation and pay, will be able to process their feedback in an open-minded manner. Stated differently, when negative feedback is perceived by recipients as a potential threat (because it can affect pay, promotion, and so on), will they be able to avoid defensive reactions and adopt a growth or mastery mindset that enables them to use the feedback to guide their development efforts?

Finally, Greguras, Robie, Schleicher, and Goff (2003) found that direct report ratings were of better quality when made for developmental than for administrative purposes. The purpose of ratings can also affect the rater's willingness to be candid (Jawahar & Williams, 1997). Imagine a direct report who believes his focal leader is bright, supportive, technically skilled, and an excellent coach. The focal leader, however, has weak presentation skills (e.g., poor eye contact, stumbles over words, speaks too quickly).
When asked to provide feedback that will be used only for developmental purposes, the direct report will likely mention the focal leader's need to improve his or her presentation skills and might even offer useful suggestions. When asked to provide feedback for administrative purposes (e.g., ratings that will influence the focal leader's pay increase), the same direct report will likely avoid mentioning the focal leader's weakness for fear that it will negatively affect the focal leader's pay.

Bracken et al. (2016) recently argued that the debate concerning the appropriate use of 360 Feedback is over. They stated (p. 776), "Unless the individual receiving the feedback never (and we mean literally never) shares his or her data or any insights from that data with anyone inside the organization, it is not 'development only.' " For example, even if 360 Feedback helps decide who receives access to formal development or coaching programs, it is in some sense being used for administrative purposes. Even when the 360 Feedback report is sent only to the focal leader, a discussion between that individual and his or her manager about the feedback might affect a subsequent performance evaluation made by the manager about the focal leader. In the end, organizations need to decide whether 360 Feedback will be shared with people other than the focal leader (such as the individual's manager), whether the focal leader is expected to summarize and share the results with others (such as direct reports; e.g., Walker & Smither, 1999), and whether the feedback should affect decisions about succession planning, compensation, and promotion.

Challenge 2: How Should Raters Be Selected?

Usually, raters are selected based on a conversation between the focal leader and his or her manager (although we have seen organizations where raters are selected by the HR department). Ideally, all direct reports are asked to provide feedback, and other colleagues are selected based on the nature and extent of their interactions with the focal leader. In addition, ratings from each rating source (e.g., direct reports and peers) are presented in the feedback report only if three or more raters from that source provide feedback. These guidelines create several dilemmas. What happens when the focal leader has only one or two direct reports? If these direct reports are not invited to provide ratings, a potentially valuable perspective on the focal leader’s behavior is lost. If they are invited to provide feedback, they might reasonably have concerns about their lack of anonymity and whether providing negative feedback could damage their relationship with the focal leader. Selecting peers or other coworkers can also be a challenge. At first glance, it would seem desirable to invite all coworkers who have had ample opportunity to observe the focal leader’s behaviors for a reasonable period of time. But, this can lead to some coworkers being asked to rate many focal leaders, a task that is time consuming and can detract from accomplishing important work. Allowing the focal leader to select raters increases the likelihood that he or she will select only coworkers who have a favorable impression of his or her contributions. On the other hand, the focal leader’s manager might select raters in a manner that serves the manager’s agenda. For example, a manager who wants to punish a focal leader for a minor transgression might invite feedback from a coworker who is known to have an especially negative opinion about the focal leader. It is likely better to have the supervisor and focal leader collaborate in selecting peers and coworkers. 
Challenge 3: Will Raters Have the Ability and Motivation to Provide Accurate Ratings and Useful Narrative Feedback?

For decades, research has shown that rating accuracy cannot be enhanced easily by tinkering with rating scale formats. Instead, rating inaccuracy often occurs due to limitations in the ability of people to observe, store, and recall information accurately (Smither, 2012). Because direct reports and peers typically have less experience in completing performance ratings (compared to managers), it seems safe to assume that their ratings will also be affected by the many rating errors that affect supervisors' ratings, including halo, leniency, primacy, recency, availability (overemphasizing vivid but infrequent behaviors), and contrast errors. These errors are especially likely when raters provide feedback infrequently (e.g., once per year), as would typically be the case when 360 Feedback is used for performance evaluations. In such circumstances, raters will likely have difficulty accurately recalling behaviors that occurred many months earlier. Some rater training programs, especially frame-of-reference training, have been shown to increase the accuracy of ratings, but such programs can be expensive to deliver to a large number of raters (DeNisi & Murphy, 2017).

In addition to limitations in rater ability, rater motivation should be a concern. A meta-analysis by Jawahar and Williams (1997) found that ratings collected for administrative purposes (e.g., affecting pay or promotion) were more lenient than ratings collected for developmental purposes. Stated simply, rating purpose matters. Raters might feel the need to compete with their peers for good performance evaluations. In this climate, knowing that 360 Feedback ratings might affect the final appraisal received by the focal leader, raters might feel that giving negative ratings to peers can enhance the likelihood of receiving a more favorable final appraisal for themselves. Cultural norms and stereotypes can also bias ratings, such as patronizing comments about women or negative ratings that penalize counternormative behavior (e.g., women being perceived as aggressive, while men who exhibit the same behaviors are not; Bear, Cushenbery, London, & Sherman, 2017).
Because peer and direct report ratings are usually anonymous, peers and direct reports might feel little accountability to provide accurate ratings. All of this raises the importance of ensuring that the organization is well prepared to use 360 Feedback for performance evaluations. This includes much more than rater training. All focal leaders need to understand who will see their feedback and precisely how it will be used. For example, will their feedback directly (or indirectly) affect pay decisions? Will there be a mechanism for focal leaders to challenge ratings that they believe are biased or inaccurate? As Scott, Scott, and Foster suggest in Chapter 29, this ability will be essential in European countries. Will there be any way to limit game playing (e.g., providing positive ratings to reward a friend, providing negative ratings in retribution for unfavorable treatment, having two peers collude to make themselves look good and others look bad)?


Challenge 4: Can the Organization Expect Managers to Interpret 360 Feedback in a Fair and Unbiased Manner?

Pulakos and O'Leary (2011) recommended that, when 360 Feedback is used for administrative purposes, managers should "serve as gate keepers, gathering and combining information from the different rating sources, judging its credibility and quality, and balancing it against other available information" (p. 154). They argued that peers and direct reports might lack the motivation and perspective to make accurate ratings. They also noted instances when peers have colluded to rate each other favorably when ratings will be used for performance evaluation.

Of course, managers are not immune from confirmation bias: the tendency to seek, attend to, and interpret data in a manner that confirms one's preexisting opinions (Bastardi, Uhlmann, & Ross, 2011; Lewandowsky & Oberauer, 2016; Lord, Ross, & Lepper, 1979). 360 Feedback reports typically contain an enormous amount of data, including the mean and frequency distributions of ratings on many items from multiple sources, plus narrative comments. This raises the possibility that a manager can attend to or recall only data that are consistent with the manager's preexisting opinion of the focal leader.

Several years ago, one of the authors was working as a leadership coach with a focal leader in a global company. The focal leader's manager had formed an unfavorable opinion about the focal leader from interactions they had many years earlier when they were peers. During the coaching, the focal leader received 360 Feedback. The feedback from the focal leader's direct reports was quite favorable. When the manager learned the feedback was favorable, he responded that the focal leader had not changed, and the favorable ratings were proof that his direct reports were afraid of him and therefore provided favorable (albeit inaccurate) ratings. This was a classic example of confirmation bias and a no-win situation for the focal leader.
If the direct report feedback had been unfavorable, it would have confirmed the manager’s unfavorable opinion of the focal leader. When the direct report feedback was favorable, the manager simply interpreted this as evidence that the focal leader was a terrible leader whose team members were too frightened to be honest. Confirmation bias can lead managers to interpret 360 Feedback in a positive or negative manner (depending on the manager’s preexisting opinion of the focal leader). And, because confirmation bias has been demonstrated in many settings, organizations should understand that their managers will not always be unbiased judges.


Historical Challenges of Use for Performance Evaluation // 69

Challenge 5: How Can 360 Feedback Be Gathered for All Focal Leaders Without Creating an Excessive Demand on the Time of Raters?

During an organization-wide 360 Feedback program (as likely needed when such feedback is used for performance evaluations), a typical focal leader could be asked to complete ratings concerning many (10 or more) coworkers. For example, a midlevel focal leader could be asked to rate her manager, direct reports, several peers, and herself. Organizations might be understandably reluctant to ask focal leaders to spend so much time completing ratings rather than focusing on other task demands.

The time demands of 360 Feedback can be lessened in two ways. First, a limit can be placed on the number of coworkers any individual can be asked to rate as a manager, peer, or direct report. Of course, this means that each focal leader will receive feedback from fewer coworkers, and it risks damaging the representativeness of the feedback. It can also be challenging to identify which feedback providers to remove. For example, which peers are likely to provide the most valuable feedback? Also, all other things being equal, reducing the number of raters will lower the reliability of the average rating.

Second, the number of items on the 360 Feedback survey can be kept to a minimum (e.g., 10 to 20 items). When the number of items is dramatically reduced, it is our experience that items typically become less specific (or become double- or triple-barreled), making it difficult to interpret the meaning of ratings. For example, four items about conflict management (e.g., when dealing with conflict, separates the people from the problem; avoids forcing his or her opinion on others; uses a problem-solving rather than a win–lose approach; serves as an effective mediator) could be reduced to a single item (e.g., deals effectively with conflict). But the savings in time for raters is accompanied by a loss of specificity. When this occurs, focal leaders who receive negative feedback are uncertain about where they should focus their improvement efforts.
The net result is that the feedback is less helpful.

In addition to the five major challenges described, some organizations may not be ready to use 360 Feedback in performance evaluations. This is likely to occur in organizations that lack a supportive feedback culture (London & Smither, 2002) or when there is a strong top-down, hierarchical culture that appears inconsistent with asking direct reports to provide feedback.
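The reliability cost of trimming raters, noted earlier in this challenge, can be quantified with the Spearman-Brown prophecy formula, which estimates the reliability of the average of k parallel raters from the reliability of a single rater. A minimal sketch in Python (the single-rater reliability of .30 is an illustrative assumption, not a figure reported in this chapter):

```python
def spearman_brown(single_rater_r: float, k: int) -> float:
    """Estimated reliability of the mean of k parallel raters."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# Illustrative single-rater reliability (an assumption for the example).
r1 = 0.30
for k in (3, 5, 8):
    print(f"{k} raters -> reliability of mean = {spearman_brown(r1, k):.2f}")
```

Under this assumption, cutting a peer group from eight raters to three lowers the estimated reliability of the mean rating from about .77 to .56, illustrating the trade-off described above.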


70 // 360 for Decision-Making

RECOMMENDATIONS TO MAXIMIZE THE VALUE OF 360 FEEDBACK FOR PERFORMANCE EVALUATION

Fortunately, with careful management, the 360 Feedback process can be structured to recognize and overcome these challenges. Campion, Campion, and Campion (Chapter 5, this volume) provide a comprehensive list of recommendations for incorporating 360 Feedback into the performance management process. The recommendations we highlight here are focused specifically on making 360 Feedback a productive part of performance evaluation and personnel decisions.

1. Be sure the process is understood by all concerned.

If the 360 Feedback results will influence performance evaluations (and therefore affect decisions about pay and promotion), this should be clear to all raters and focal leaders from the start. If the organization has previously used 360 Feedback solely for development and is now using the data for performance evaluation, the reasons for the change need to be explained well before coworkers are asked to provide ratings and comments about the performance of their colleagues. The items and rating scales in the 360 Feedback survey need to be clearly defined, discussed, and recalibrated annually as part of rater training so that all focal leaders have the same understanding of the meaning of the items and points on the rating scale. Also, when the organization revises its strategic competency model, these changes need to be reflected not only in its performance rating scales but also in informal feedback throughout the year and in training and other development programs. This recognizes that performance management is an ongoing process that requires continuous attention and improvement.

2. Suggest that managers observe and collect data about their direct reports throughout the year, especially at critical moments, for instance, when projects clear important hurdles or achieve notable accomplishments.

Throughout the year, the focal leader's manager and coworkers (direct reports and peers) are likely to have suggestions for performance improvement as well as positive reinforcement for accomplishments. These are likely to be forgotten by the time raters are asked to provide general statements or ratings about performance at the end of the year. When this happens, ratings may become an exercise in ticking boxes with little meaning for the rater or the focal leader.


Some HR practitioners have argued against annual appraisals. They claim the following:

• Annual appraisal processes are rigid, the processes provide little specific information to guide performance improvement, managers do not give constructive feedback in annual appraisals, and overall, the process takes too much time and energy for little benefit (Adler et al., 2016). Once-a-year appraisals might discourage managers from giving feedback throughout the year, especially in the moment (soon or immediately after a critical incident).
• Annual appraisals tend to focus on the most salient, visible, and easily recalled performance incidents. Some important events might be out of the rater's mind, and more subtle elements of performance are not given attention.
• Managers may have preconceived impressions (biases) about a focal leader's performance, and they make ratings that conform to these impressions (confirmation bias) rather than forming and revising impressions based on a thorough assessment of performance.
• Favoritism or discrimination based on demographic characteristics (e.g., gender, race, age, sexual orientation) can too easily bias annual ratings.
• Annual appraisals may not recognize that focal leaders are likely to belong to different project teams and make different contributions to these projects. This is a reason for managers to collect input about the focal leader's performance from different sources and combine that input into the annual appraisal and feedback session.

Given these concerns, critics of the annual performance appraisal suggest eliminating the process altogether in favor of daily, weekly, or monthly evaluations and discussions, with a special focus on feedback when projects are completed (Buckingham & Goodall, 2015). The timing and frequency of feedback will therefore be different for different focal leaders. As an essential aspect of a manager’s job, the manager should keep track of progress, collect information about a focal leader’s accomplishments (and struggles), review these accomplishments (and struggles) with the focal leader, and establish short-​term goals for continuous improvement. This can produce a performance-​and feedback-​oriented culture and make discussing performance easier (Atwater, Wang, Smither, & Fleenor, 2009; London & Smither, 2002). Performance evaluation, then, is not a once-​a-​year, all-​or-​nothing process, but part of an ongoing process that is at the heart of the relationship between focal leaders and their managers (London, 1995).


This practice leaves the timing of feedback and performance evaluation up to each manager. Managers can also be given greater discretion about how and when to recognize accomplishments with salary increases or bonus payments (or other favorable outcomes or punishments). On the one hand, this approach increases the likelihood of favoritism or unfairness. On the other hand, managers can be held accountable for evaluating direct reports and collecting information from other sources to support their judgments of direct reports' performance.

3. Use 360 Feedback to highlight key elements of performance and engage focal leaders in performance management.

Use of 360 Feedback should not take the place of performance evaluations. Instead, 360 Feedback should help focal leaders understand how others perceive elements of their performance and ways they can improve. Focal leaders should have input into the design of the feedback process and writing the survey items. They need to understand the items in the survey and recognize that these elements are important in contributing to the strategy of the department or organization as a whole. The items should stem from the organization's mission and goals and may dive deeper to include items related to the department's or team's goals. In other words, the survey should focus on topics that matter to the organization, department, and focal leaders. The items should include evaluation of competencies that are needed by the organization—competencies that, when developed, will make each focal leader better prepared for more responsible positions.

4. Provide the resources to support incorporating 360 Feedback into performance evaluations.

Focal leaders receiving 360 Feedback should be given the resources to help them take advantage of the results to improve their performance and increase their opportunities for career growth. These resources might include consulting time with a coach, as well as funds and time to attend training programs outside the organization or workshops offered within the organization. Managers also need training concerning their role as coach. Research has shown that external coaches (albeit expensive) can enhance the value of 360 Feedback for managers (Smither, London, Flautt, Vargas, & Kucine, 2003), but a focal leader’s manager also has an important role to play as coach. Managers should be encouraged to establish a climate of trust and openness about feedback, as well as the value of viewing performance problems as learning opportunities.


As noted, resources should also be devoted to training that ensures raters (a) understand how to complete a 360 Feedback survey about fellow focal leaders and (b) can provide accurate ratings along with meaningful comments. Without these resources, 360 Feedback results can become merely a “desk drop” of a report, and the recipient will have little incentive to take the time and energy to process the feedback. This is especially the case because people often resist hearing feedback, especially when it might be negative. They worry about how others will see them. So, they shy away from feedback and as such may miss receiving valuable information that can point to paths for improvement and advancement.

5. Make 360 Feedback an integral part of the organization’s corporate culture.

An overarching goal is to make 360 Feedback an important element in the way the organization manages performance. As noted, the process should enhance focal leader engagement in performance management, focus on development for performance improvement (rather than punishment for poor ratings), and recognize contextual issues, such as an especially challenging assignment, that can affect feedback. Organizations should observe whether some focal leaders appear to resist completing 360 Feedback surveys or deride the value of the feedback for themselves. This might happen because these naysayers were not involved in the design of the process (so they do not fully understand it) or because they are wary about whether the results will be unfairly used against them.

Accountability is also important. This includes accountability for (a) raters to provide accurate ratings and useful comments, (b) managers to use the data in a balanced and fair manner, and (c) focal leaders to process the results mindfully and use them to guide continuous performance improvement (London, Smither, & Adsit, 1997). Accountability can be enhanced when there is a positive-feedback climate where focal leaders share the highlights of their feedback with raters and describe their plans for acting on the feedback.

CONCLUSION

Organizations will find that 360 Feedback will be more valuable when it becomes a routine part of the organization (London, 2015). As the process is repeated periodically, once a year, once every 6 months, or whenever needed by a focal leader who would like input from others, it becomes an accepted part of the organization’s culture and is likely to be used more constructively as input to career coaching discussions and as a means for focal leaders to track their own performance improvement over time.


REFERENCES

Adler, S., Campion, M., Grubb, A., Murphy, K., Ollander-Krane, R., & Pulakos, E. D. (2016). Getting rid of performance ratings: Genius or folly? A debate. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(2), 219–252. doi:10.1017/iop.2015.106
Atwater, L., Wang, M., Smither, J. W., & Fleenor, J. W. (2009). Are cultural characteristics associated with the relationship between self and others' ratings of leadership? Journal of Applied Psychology, 94(4), 876. http://dx.doi.org/10.1037/a0014561
Bastardi, A., Uhlmann, E. L., & Ross, L. (2011). Wishful thinking: Belief, desire, and the motivated evaluation of scientific evidence. Psychological Science, 22, 731–732.
Bear, J. B., Cushenbery, L., London, M., & Sherman, G. D. (2017). Performance feedback, power retention, and the gender gap in leadership. The Leadership Quarterly. doi:10.1016/j.leaqua.2017.02.003
Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360° feedback. Industrial and Organizational Psychology, 9, 761–794. doi:10.1017/iop.2016.93
Buckingham, M., & Goodall, A. (2015). Reinventing performance management. Harvard Business Review, 93(4), 40–50.
Day, D. V., Fleenor, J. W., Atwater, L. E., Sturm, R. E., & McKee, R. A. (2014). Advances in leader and leadership development: A review of 25 years of research and theory. The Leadership Quarterly, 25, 63–82. https://doi.org/10.1016/j.leaqua.2013.11.004
DeNisi, A. S., & Murphy, K. R. (2017). Performance appraisal and performance management: 100 years of progress? Journal of Applied Psychology, 102, 421–433. http://dx.doi.org/10.1037/apl0000085
Fleenor, J. W., Smither, J. W., Atwater, L. E., & Sturm, R. E. (2010). Self–other rating agreement in leadership: A review. The Leadership Quarterly, 21, 1005–1034. doi:10.1016/j.leaqua.2010.10.006
Greguras, G. J., Robie, C., Schleicher, D. J., & Goff, M. (2003). A field study of the effects of rating purpose on the quality of multisource ratings. Personnel Psychology, 56(1), 1–21.
Hegarty, W. H. (1974). Using subordinate ratings to elicit behavioral changes in managers. Journal of Applied Psychology, 59(6), 764–766. http://dx.doi.org/10.1037/h0037507
Jawahar, I. M., & Williams, C. R. (1997). Where all the children are above average: The performance appraisal purpose effect. Personnel Psychology, 50(4), 905–925.
Kim, K. Y., Atwater, L., Patel, P. C., & Smither, J. W. (2016). Multisource feedback, human capital, and the financial performance of organizations. Journal of Applied Psychology, 101, 1569–1584. http://dx.doi.org/10.1037/apl0000125
Klimoski, R., & London, M. (1974). Role of the rater in performance appraisal. Journal of Applied Psychology, 59, 443–451.
Lewandowsky, S., & Oberauer, K. (2016). Motivated rejection of science. Current Directions in Psychological Science, 25, 217–222.
London, M. (1995). Giving feedback: Source-centered antecedents and consequences of constructive and destructive feedback. Human Resource Management Review, 5, 159–188. https://doi.org/10.1016/1053-4822(95)90001-2
London, M. (2015). The power of feedback: Giving, seeking, and using feedback for performance improvement. New York, NY: Routledge. (Third edition update of Job feedback, 2nd ed., 2003; 1st ed., 1997, Mahwah, NJ: Erlbaum.)
London, M., & Smither, J. W. (2002). Feedback orientation, feedback culture, and the longitudinal performance management process. Human Resource Management Review, 12(1), 81–101.
London, M., Smither, J. W., & Adsit, D. J. (1997). Accountability: The Achilles' heel of multisource feedback. Group and Organization Management, 22(2), 162–184. https://doi.org/10.1177/1059601197222003
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098–2109.
Meinecke, A. L., Lehmann-Willenbrock, N., & Kauffeld, S. (2017). What happens during annual appraisal interviews? How leader–follower interactions unfold and impact interview outcomes. Journal of Applied Psychology, 102, 1054–1074. http://dx.doi.org/10.1037/apl0000219
Pfau, B., Kay, I., Nowack, K. M., & Ghorpade, J. (2002). Does 360-degree feedback negatively affect company performance? HR Magazine, 47(6), 54–59.
Pulakos, E. D., & O'Leary, R. S. (2011). Why is performance management broken? Industrial and Organizational Psychology, 4, 146–164.
Qian, J., Wang, B., Song, B., Li, X., Wu, L., & Fang, Y. (2017). It takes two to tango: The impact of leaders' listening behavior on focal leaders' feedback seeking. Current Psychology. doi:10.1007/s12144-017-9656-y
Smither, J. W. (2012). Performance management. In S. W. J. Kozlowski (Ed.), The Oxford handbook of organizational psychology (pp. 285–389). New York, NY: Oxford University Press.
Smither, J. W., London, M., Flautt, R., Vargas, Y., & Kucine, I. (2003). Can executive coaches enhance the impact of multisource feedback on behavior change? A quasi-experimental field study. Personnel Psychology, 56(1), 23–44. doi:10.1111/j.1744-6570.2003.tb00142.x
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58(1), 33–66. doi:10.1111/j.1744-6570.2005.514_1.x
Steelman, L. A., & Wolfeld, L. (2016). The manager as coach: The role of feedback orientation. Journal of Business and Psychology. doi:10.1007/s10869-016-9473-6
Walker, A. G., & Smither, J. W. (1999). A five-year study of upward feedback: What managers do with their results matters. Personnel Psychology, 52, 393–423.
Young, S. F., Richard, E. M., Moukarzel, R. G., Steelman, L. A., & Gentry, W. A. (2017). How empathic concern helps leaders in providing negative feedback: A two-study examination. Journal of Occupational and Organizational Psychology, 90(4), 535–558. doi:10.1111/joop.12184

5 /// TECHNOLOGICAL INNOVATIONS IN THE USE OF 360 FEEDBACK FOR PERFORMANCE MANAGEMENT

STEVEN T. HUNT, JOE SHERWOOD, AND LAUREN M. BIDWELL

Performance management is a hotly debated topic in the field of human resources (Pulakos, Mueller Hanson, Arad, & Moye, 2015). The fundamental problem is not performance management itself, but the use of bad performance management methods (Hunt, 2015). In this chapter, we discuss how innovations in the use of 360 Feedback are enabling significant improvements in performance management overall. The use of 360 Feedback can provide a range of benefits supporting both the developmental coaching and measurement evaluation components of performance management. The focus of this chapter is not on the reasons why companies should use 360 Feedback, as that case has been made elsewhere (Campion, Campion, & Campion, 2015). Instead, the chapter focuses on methods for incorporating 360 Feedback into performance management processes. This necessitates discussing the increasing role technology plays in performance management in general. As work becomes more digitalized, historic methods of performance management and 360 Feedback are being replaced by techniques that were not possible prior to the development of new forms of social and mobile technology (Hunt, 2011). This chapter is not specifically about
technology, but we cannot effectively talk about innovations in the use of 360 Feedback for performance management without addressing the role technology is playing in making these innovations possible. The chapter is divided into four sections. First, we provide a brief discussion of the work on which this chapter is based. Second, we define the terms 360 Feedback and performance management and outline key relationships between the two concepts. We then discuss how technology is transforming how 360 Feedback is used in performance management. Last, we explore technological, process design, and environmental factors related to the use of 360 Feedback to support the three main components of performance management: aligning job expectations, maintaining developmental dialogue, and making talent decisions.

A TECHNOLOGY-BASED PERSPECTIVE OF PERFORMANCE MANAGEMENT

The observations shared in this chapter are based on work done for an organization that provides cloud-based human capital management (HCM) technology solutions to support staffing, development, performance management, compensation, and other HCM processes. Over 4,000 companies use this organization's performance management technology. Our work in this organization is focused on understanding why some companies are more successful than others at using this technology. Because this technology is cloud based, all the companies using it have access to the same features and functionalities (Griffith, 2016). Companies can configure the technology in different ways, but there is no feature that one company uses that another could not adopt. Consequently, differences in the value companies receive tend to be less about the technology itself and more about how it is being used.

The information shared in this chapter is a result of work looking at this fundamental question: What are the most effective ways to use performance management technology to positively influence employee performance and development and improve talent decisions? Through this work, we have examined performance management methods used by hundreds of companies across multiple industries from around the world. The smallest of these companies typically have about 1,000 employees, while the largest have workforces of 100,000 or more.

One of the most important things we have learned from this work is that there is no single best way to conduct performance management (Hunt, 2015). Methods that are effective in one company can fail in another. On the other hand, we have identified common characteristics associated with more successful performance management processes. One of these is creating processes and cultures that enable greater use of 360 Feedback as part of performance management.


Technological Innovations for Performance Management // 79

THE RELATIONSHIP BETWEEN 360 FEEDBACK AND PERFORMANCE MANAGEMENT

The term 360 Feedback is defined in this chapter as information collected from multiple stakeholders about an employee's behavior and accomplishments to support future development, performance, and talent management decisions. This information is usually focused on past behaviors and accomplishments but may also include recommendations for future actions and goals. This definition is much broader and more loosely defined than how 360 Feedback has been previously defined in the research literature (see Bracken, Rose, & Church, 2016). Nevertheless, it reflects how the human resource professionals and business leaders we work with use the term. When these people talk about wanting to build 360 Feedback into their performance management process, they usually do not mean they want to create a structured, anonymous survey that systematically collects information from an employee's manager, peers, and direct reports. They simply want to incorporate information from a more diverse range of stakeholders beyond the employee and the employee's direct manager to gain a more accurate and complete picture of employee capabilities, contributions, and potential. In this definition, 360 Feedback is not a process or a tool. 360 Feedback is a type of information created and utilized by different kinds of processes and tools.

We define performance management as "processes used to communicate job expectations to employees, evaluate employees against those expectations, and utilize these evaluations to guide talent management decisions related to compensation, staffing and development" (Hunt, 2014, p. 151). The term development encompasses ongoing coaching and feedback as well as career planning, training, and succession. The primary purpose of performance management is to align the behavior and development of employees with the strategic goals of the organization.1 Performance management can be divided into three distinct components that are listed in the first column of Table 5.1.
The first component is aligning job expectations between the employee and the company. The purpose is to focus employee motivation and attention on job goals and behaviors that balance the company's business needs with the employee's career objectives. The second component is about creating ongoing dialogue that provides the employee with insight, guidance, and support to more effectively perform their role. This component typically not only stresses employee–manager dialogue, but also incorporates feedback from peers, customers, or other people the employee works with. The third component focuses on making workforce decisions that require considering differences in employee contributions to the organization (e.g., making staffing or compensation decisions where employee performance is a criterion).

TABLE 5.1  Use of 360 Feedback in Different Components of Performance Management

Aligning job expectations:
• Soliciting input from multiple stakeholders to define job goals and development objectives
• Comparing goals and objectives with other stakeholders to better coordinate roles and expectations

Maintaining developmental dialogue:
• Gathering ongoing behavioral feedback from stakeholders
• Providing real-time recognition to reinforce positive behaviors

Making talent decisions:
• Incorporating input from stakeholders to guide decisions related to performance evaluation, compensation, staffing, and development
• Soliciting input from stakeholders on the quality and accuracy of talent management decisions

Table 5.1 also provides examples of how 360 Feedback can play a role in each of the three components of performance management. During the first phase of aligning job expectations, 360 Feedback can be used to develop and prioritize the goals an employee should focus on in their role. In the second phase of maintaining developmental dialogue, 360 Feedback can encourage and help employees to improve their performance over time. In the third phase of making talent decisions, 360 Feedback can improve the effectiveness of compensation, staffing, and development decisions.

1 Another important purpose of performance management is complying with employment regulations. But, that is not as relevant to the topics discussed in this chapter.

INNOVATIONS IN 360 FEEDBACK TECHNOLOGY

The use of 360 Feedback methods in performance management is being accelerated, in part, by several innovations in HCM technology. Companies have access to a wide range of technology to support collecting and utilizing different forms of 360 Feedback. And, new solutions and features are being developed every year. Some of the more common technology solutions include the following:

• Comprehensive 360 survey solutions. These solutions support the traditional 360 survey designed to gather feedback from an employee's manager, peers, and direct reports. Modern 360 solutions contain tools to protect anonymity, avoid oversurveying raters and creating "rater fatigue," manage survey follow-ups and reminders, and automatically generate coaching reports that combine quantitative results with qualitative developmental suggestions.
• Pulse survey solutions. These enable companies to administer and score short surveys that focus on a specific topic and audience. For example, sending a three-question survey to members of a project team to obtain feedback on an employee's management style.
• "Get feedback" solutions. These allow employees or managers to request confidential and structured feedback from specific individuals in the organization via email, instant messaging, or other forms of social media.
• Online communities. These enable companies to create online spaces where people can share ideas, upload work documents, and discuss work-related goals, topics, and behavior. Most of these communities include tools so participants can provide feedback to other members regarding ideas and contributions.
• Spot reward and recognition systems. These systems provide people with online tools to provide informal and formal recognition and rewards to their coworkers. These solutions typically support monetary and nonmonetary recognition in the form of short notes, badges, gift cards, or monetary awards. Depending on the system, employees may be able to receive recognition from coworkers, managers, direct reports, and even customers.
• Continuous performance management systems. These systems support employees with ongoing coaching conversations with their managers and other stakeholders. They provide features that assist with scheduling conversations, requesting feedback from others, structuring conversation topics, tracking daily or weekly activities or tasks, logging achievements and past conversations, and linking information from coaching conversations to other talent management systems used for goal setting, career development, and performance assessment. These systems may also be integrated with other 360 Feedback tools, such as pulse surveys, get feedback solutions, or spot reward systems.
• Calibration talent review systems. These systems facilitate talent review meetings where multiple stakeholders discuss the contributions and capabilities of employees. They often include tools to gather information about employees from other sources, such as continuous performance management systems, spot recognition systems, or get feedback tools.
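Because several of these tools promise to protect rater anonymity, they typically suppress results for any source category with too few respondents. The sketch below illustrates that common safeguard; the three-rater threshold, function names, and data layout are illustrative assumptions, not features of any particular product:

```python
from collections import defaultdict
from statistics import fmean

# Hypothetical anonymity threshold: suppress any rater category with
# fewer than this many respondents (the value 3 is an illustrative choice).
MIN_RATERS = 3

def summarize(ratings):
    """Mean rating per source category, suppressing small categories."""
    by_source = defaultdict(list)
    for source, score in ratings:
        by_source[source].append(score)
    return {
        source: round(fmean(scores), 2) if len(scores) >= MIN_RATERS else None
        for source, scores in by_source.items()
    }

feedback = [("peer", 4), ("peer", 5), ("peer", 3),
            ("direct_report", 4), ("direct_report", 5)]
print(summarize(feedback))  # -> {'peer': 4.0, 'direct_report': None}
```

Here the two direct-report ratings are withheld (None) because the category falls below the threshold, while the three peer ratings are averaged.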

The proliferation of different types of HCM technology solutions is changing how 360 Feedback is being conceptualized and used by organizations to support talent management. 360 Feedback used to be thought of as something associated with structured
"events," such as administering a survey or conducting a coaching session. Companies now often view 360 Feedback more broadly as a type of information that can be collected and incorporated into talent management processes in a variety of ways, both structured and unstructured.

The remainder of this chapter discusses how technology can be used to enable greater use of 360 Feedback for different components of performance management. This includes addressing process and cultural conditions that enable effective use of the technology. Technology enables change, but it only works if it is used effectively. We also point out opportunities for future research to advance the use of 360 Feedback in performance management.

ALIGNING EXPECTATIONS

Historically, people tended to think of 360 Feedback as a method to gather information about past employee behavior and performance. But, it can also be used to collect input from multiple stakeholders when defining future job goals and objectives. This use of 360 Feedback is becoming increasingly valuable as work becomes more collaborative and team oriented.

The Role of Technology

The use of 360 Feedback for aligning expectations is strongly enabled by three types of technology: online communities, pulse surveys, and continuous performance management systems. Online communities allow employees to post their job goals and invite input from coworkers, managers, and direct reports on their value, definition, and relevance. They can also look at the goals and job expectations of other employees and leaders to guide how they define their own roles. Pulse surveys provide a way for employees to obtain targeted feedback from other stakeholders on what priorities should be a focus. Continuous performance management systems can be used to encourage ongoing discussion of goals and job expectations throughout the year. Ongoing discussions of job expectations are particularly important when employees are working in an environment where business priorities can shift significantly over time.

Culture and Process Design Considerations

Using 360 Feedback to support aligning expectations requires changing three elements often found in traditional performance management methods. First, companies must shift from a “top-​down” method of goal cascading to one that encourages greater

Technological Innovations for Performance Management

cross-functional dialogue. Rather than just meeting with their manager to discuss future goals, employees should be encouraged to solicit input about their goals from their coworkers and their direct reports. This includes providing employees with adequate time to collect 360 Feedback about their job expectations.

The second change is a shift to publicly sharing goals across the organization. Despite research demonstrating the value of transparency (Bowen & Ostroff, 2004), many companies place tight limits on employees’ ability to share their goals with colleagues. Goal-setting systems used in many companies do not allow anyone to see an employee’s goals other than their direct manager and the leaders above them. If employees want 360 Feedback on their goals, then they must have a way to share their goals with their peers and direct reports. It may make sense to place some limits on how widely goals are shared across the company, but some level of goal transparency is needed if 360 Feedback is going to be used to enable better alignment of job expectations.

The third change is about refining and updating expectations over time. In many jobs, it is unrealistic to expect that goals set at the beginning of the year are going to remain unchanged over the following 12 months. It may be useful to periodically obtain 360 Feedback about the relevance and appropriateness of an employee’s goals and use this to update goals as necessary. This sort of ongoing goal discussion and refinement is one benefit of using continuous performance management solutions. However, for this to have value, companies must allow employees to revise their goals and job expectations based on this feedback. This contrasts with how many companies have historically treated performance management goals, where goals are seen more like a “fixed contract” that cannot be changed once agreed on at the beginning of the year.

Questions in Need of Further Research

The concept of using 360 Feedback to more effectively align expectations is not new. Some employees have always reached out to coworkers for input to help define their job goals. But, the ability to use technology to more actively encourage and support this sort of use of 360 Feedback is relatively new. Traditional goal-​setting research tends to position goal setting as a dialogue solely between an employee and their manager. It would be useful to have more research into the sorts of collaborative goal-​setting methods enabled by online communities, pulse surveys, and continuous performance management. For example, do processes that enable employees to provide suggestions regarding the goals of their peers or manager influence employee commitment and team performance? In addition, most feedback research focuses on how to effectively make suggestions based on a person’s past job behavior. It


does not look at how to provide feedback on the quality and value of someone’s job goals. What is the best way to constructively criticize someone else’s goals as being too vague, irrelevant, or easy? There is clearly a role for 360 Feedback in aligning job expectations. And, innovations in technology are likely to further increase the use of 360 Feedback in this manner. However, we lack clear models and evidence-based practices to guide how 360 Feedback should be used for this objective.

DEVELOPMENT DIALOGUE

A core function of performance management is helping employees develop effective strategies for achieving goals, acquiring new skills, and adapting to changing environments. Developmental dialogue incorporating constructive performance feedback plays a critical role in this aspect of performance management. Historically, managers have been treated as the primary, and in some cases only, formal source of development feedback used during performance management. An overreliance on manager feedback creates several limitations. Managers may not possess the relevant expertise needed to give effective feedback, and they might not be present at the precise moment feedback would be most effective; because managers observe only a fraction of an employee’s performance, they may not be aware of certain performance deficiencies or strengths. During performance management, 360 Feedback is a natural solution to the problem of overrelying on manager feedback. Fortunately, technology is providing companies with a variety of new ways to use 360 Feedback to support ongoing development dialogue. This technology makes it easier to increase the frequency and real-time nature of feedback and to increase the number of available feedback sources. But, this technology is unlikely to work unless it is integrated into a larger change management effort focused on building a feedback-rich work environment.

The Role of Technology

The following technologies are particularly well suited to gathering ongoing developmental 360 Feedback:

• Comprehensive 360 survey solutions. Conducting traditional 360 surveys takes far less effort with current technology than it did in the past. This makes it easier for companies to give employees or managers the ability to proactively request, design, and even self-​administer 360 surveys to support development.






• Continuous performance management systems. This is arguably the fastest growing area of performance management technology. These solutions use cloud and mobile technology to help employees and managers gather, discuss, and track feedback from colleagues. Most of these systems focus on collecting short qualitative comments instead of ratings. They may also include tools for giving employees spot awards, badges, or other forms of positive feedback and recognition.
• Pulse survey solutions. These surveys are used to gather feedback on topics that are particularly relevant for managers and leaders, for example, feedback on aspects of organizational culture or company strategy that directly or indirectly reflect leadership behavior and decisions. Pulse surveys can also be an effective tool for providing employees with customer feedback.
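As a rough sketch of the kind of data a continuous performance management tool might keep, the Python below stores short qualitative comments from multiple sources and retrieves everything logged since a given date, such as the last coaching conversation. The field names and source labels are assumptions for illustration, not any product’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackEntry:
    source: str   # e.g., "peer", "manager", "direct_report" (labels assumed)
    when: date
    comment: str  # short qualitative note rather than a numeric rating

@dataclass
class FeedbackLog:
    entries: list = field(default_factory=list)

    def add(self, source, when, comment):
        self.entries.append(FeedbackEntry(source, when, comment))

    def since(self, cutoff):
        """Entries on or after cutoff, e.g., everything since the last coaching chat."""
        return [e for e in self.entries if e.when >= cutoff]

log = FeedbackLog()
log.add("peer", date(2019, 3, 1), "Clear status updates in standups.")
log.add("manager", date(2019, 4, 2), "Took ownership of the rollout issue.")
recent = log.since(date(2019, 4, 1))  # feedback since the April 1 check-in
```

Keeping comments timestamped and attributable by source role is what lets such tools feed later coaching conversations rather than a single annual review.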

Technology solutions can address many of the operational challenges related to gathering and reviewing 360 Feedback. But, they are not, and should not be, viewed as tools to replace actual coaching dialogue between employees and managers. And, technology by itself cannot create a culture where effective feedback conversations happen regularly.

Culture and Process Design Considerations

Just because employees can use technology to easily leverage 360 Feedback does not mean they will do so. An environment where feedback is shared and development conversations happen on a regular basis does not appear out of thin air. Such cultures are cultivated through resource investment and procedural support. To better understand the transition from traditional performance management systems to models focused on continuous, feedback-rich dialogue, we conducted interviews with human resource leaders in 10 companies adopting continuous performance management technology solutions. The solution these companies adopted is available on mobile devices and desktop computers. It includes features that support gathering feedback from multiple sources, scheduling ongoing coaching meetings, tracking development activities and job tasks, recording achievements, and sharing data with other talent management processes tied to compensation, staffing, and career development. Table 5.2 lists eight specific conditions identified through this study that support a culture of feedback and continuous performance management. The more of these conditions an organization had in place, the easier it was to use continuous performance management technology to support ongoing collection and use of 360 Feedback. At the same time, many of these conditions were created by moving toward more continuous, feedback-rich performance management methods. The study suggested there is a

TABLE 5.2  Conditions Impacting Continuous Performance Management Feedback

Continuous performance technology is more effective when:

• Buy-in and motivation. People perceive the cultural and business benefits associated with coaching and feedback as worthy of their time, energy, and resources.
• Feedback-rich culture. People are already accustomed to regularly giving, seeking, and receiving feedback.
• Goal-oriented culture. Goals and roles are well defined and communicated, and people are held accountable.
• Relationship quality. There is a strong level of trust between managers and employees. People are more willing to give and receive feedback from those they know and believe are committed to their success.
• Training on giving and receiving feedback. Managers and employees know how to give and receive feedback. Feedback must be delivered appropriately to maximize performance and engagement. Poorly delivered feedback can seriously hurt performance and commitment.
• Transparency of data and talent decisions. People understand how talent decisions related to compensation, staffing, and development are made. The better they understand how these decisions are made, the more comfortable they will be with openly talking about their performance.
• Feedback systems and structures. There is a defined process and system that triggers and encourages routine dialogue and feedback and maintains a centralized and accessible record of these conversations.
• Program evaluation and management. Resources are dedicated to tracking the frequency and effectiveness of coaching conversations and the use of 360 Feedback, and these data are used to guide ongoing process improvement.

two-way relationship between creating feedback-rich cultures and using continuous performance management technology that enables greater use of 360 Feedback. Effective use of the technology depends on having a supportive culture, but having a supportive culture is enabled by effective use of the technology. In this sense, a company’s existing culture should not be seen as a barrier to adopting continuous performance management technology to support use of 360 Feedback, but as a gauge of the difficulty and benefits that will come from adopting such technology. More insights about each of the eight conditions listed in Table 5.2 are provided next.

Buy-in and Motivation. Many successful managers and leaders were promoted to their roles for accomplishments that had little to do with being good coaches. In other words, most managers did not become managers because they excelled at giving


feedback. If managers did not have to do something in the past, do not assume they will start doing it in the future just because it “makes sense” from a psychological standpoint. Taking time to gather and use 360 Feedback to support continuous performance management must make sense to managers from a business standpoint. This starts with defining in clear behavioral terms what people will be required to do in the future that they may not be doing now and then having open and honest discussions around why people may resist the change and what is needed to enable and support the transformation. It is also useful to create “feedback advocates” and use these as change champions. The ideal change champions are business leaders and employees outside human resources who are widely respected in the company for their contributions and commitment to the company’s success. Last, identify skeptics of the change and bring them into the decision process. People who initially resist an idea are often its greatest advocates once they fully understand its value, know how it will work, and feel their concerns have been listened to and acted on. These people are also often the most likely to identify potential mistakes before they are made. Feedback-​Rich Culture. Continuous performance management is ultimately about enabling, creating, and supporting a “feedback-​rich” culture. This is a culture where people are comfortable having candid discussions about the impact their actions are having on group performance and company success and how they can be improved. Employees in feedback-​rich cultures not only accept and act on feedback but also actively seek and expect it. Feedback-​rich cultures must be created and nurtured through leadership role modeling, access to tools and training that support the use of effective feedback, policies and norms that encourage feedback conversations, and environmental cues that trigger feedback-​related thinking. 
It is particularly important to make sure leaders understand the full extent of their role in shaping a feedback culture. Managers often tend to manage their direct reports based on how they themselves are managed. If leaders want managers to hold coaching feedback sessions with their direct reports, then leaders should hold similar conversations with the managers who report to them. Goal-​Setting Culture. The primary purpose of continuous performance management is to create more effective ongoing coaching conversations and feedback between employees, managers, and coworkers to support performance and development. These conversations mainly discuss activities and accomplishments associated with the pursuit of job-​relevant goals. What goals should the person be focusing on? What barriers are they encountering as they pursue these goals? How can they adapt their behaviors to more successfully achieve these goals? What learning and experiences can they gain from the work they are doing related to these goals? Given the emphasis on goals, it is difficult to implement good feedback systems if employees and managers do not have clarity


around their goals and how they align with the goals of the company, their teams, and their colleagues.

Relationship Quality. Using 360 Feedback to support continuous performance management depends on positive relationships between managers, employees, and coworkers. Managers and coworkers must trust that their direct reports or team members are committed to their jobs, and employees must trust that their managers or coworkers are supportive of their careers. Employees must also trust that feedback provided for development will help them be more successful and will not be used to undermine their careers. The use of policies around data confidentiality can help with this, but ultimately trust can only be built through time and the demonstration that developmental feedback is used to help employees, not hurt them.

Training on Giving and Receiving Feedback. Feedback cultures rely on the ability of managers and employees to provide effective feedback and coaching. Unfortunately, many companies fail to provide adequate training on providing, responding to, and seeking feedback. It is common to encounter managers and employees who do not know how to provide quality feedback or effectively coach their employees. On top of this is the added challenge that employees may struggle with how to receive or use feedback (Bracken & Rose, 2011). Research has shown that employees want feedback if it helps them achieve their goals, helps them feel good about their efforts, and enhances their professional image (Ashford, Blatt, & VandeWalle, 2003). Similarly, managers will be willing to give feedback if they are confident that it will help their teams, and therefore the managers themselves, be more successful. Achieving this level of confidence in the value of feedback usually requires some investment in training people within the organization to give and receive feedback.

Transparency of Data and Talent Decisions.
Employees tend to be more willing to engage in ongoing feedback and coaching conversations when they understand how their company makes evaluative decisions related to compensation and staffing that impact their careers. Employees know that they are going to be evaluated at some point. Employees are more comfortable discussing their developmental needs if they know how performance evaluations are conducted, when they are made, who is involved, and what they are looking at. Feedback Systems and Structures. To increase the use of 360 Feedback for continuous performance management, managers, employees, and coworkers must get in the habit of having regular feedback and coaching conversations. These conversations need not be time-​consuming affairs, often less than 30 minutes in length. But, employees and managers have to remember to have the discussions, recall what they discussed previously, and keep track of things they want to revisit in the future. This is where having


some sort of feedback system enabled by feedback technology can help. The following are common elements found in these feedback systems:





• Triggering events. Managers who have several direct reports or who have heavy task loads may struggle to remember to hold coaching meetings regularly. Having a triggering event like a mobile alert helps managers remember to meet with their employees on a regular basis. These triggering events can be time based, occurring at regular intervals (e.g., weekly or monthly). Or, they may be based on operational events such as completion of major tasks or following project actions or meetings.
• Data recording and access. It is useful to provide technology that allows managers and employees to share agendas for meetings, access 360 Feedback gathered from different sources, track information discussed in previous sessions, and record information to review in future discussions. Some companies also use these systems to allow managers to use feedback data collected throughout the year to inform other talent management processes and make more accurate decisions around compensation, staffing, and future development.
• Messaging. Messaging tools that allow employees and managers to request meetings, share information, request and collect feedback from different individuals in the company, or call attention to activities and questions enable more effective ongoing dialogue between meetings. These tools are particularly valuable for teams spread over large geographical areas.
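The triggering-events idea above can be sketched as a small scheduling rule: fire a reminder either on a fixed interval after the last meeting or immediately when an operational event such as a milestone completion occurs. The interval, parameter names, and function name are assumptions for illustration.

```python
from datetime import date, timedelta

def next_reminder(last_meeting, today, interval_days=14, milestone_done=False):
    """Date the next coaching reminder should fire.

    Event-based rule: fire today if a major task or milestone just completed.
    Time-based rule: otherwise fire interval_days after the last meeting.
    (The 14-day default is an arbitrary choice for this sketch.)
    """
    if milestone_done:
        return today
    return last_meeting + timedelta(days=interval_days)

# Two weeks after the May 1 meeting, unless a milestone forces an earlier check-in.
routine = next_reminder(date(2019, 5, 1), date(2019, 5, 10))
urgent = next_reminder(date(2019, 5, 1), date(2019, 5, 10), milestone_done=True)
```

Separating the time-based and event-based rules mirrors the two kinds of triggers described above, and either can drive the mobile alerts these systems send.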

Program Evaluation and Management. Continuous performance management processes should be viewed as programs that are maintained and developed, not things that are “launched.” Tracking program metrics helps create an understanding of what is going well, what is not working, and where more work is needed. Possible metrics include employee attitudes such as job satisfaction, trust, and engagement; workforce metrics such as employee retention and internal job changes; and process metrics such as frequency of feedback conversations or the nature and quality of feedback and goals. When doing this, care must be taken to protect the confidentiality of raters and ratees. However, there may be considerable value in looking at the nature and impact of 360 Feedback being collected and discussed by managers and employees. In particular, are people providing useful, appropriate, and constructive feedback, and are employees and managers reacting to this feedback in a positive manner?

The move to more continuous performance management systems that incorporate ongoing 360 Feedback represents one of the most significant transformations in HCM to occur within the last 10 years. These kinds of performance management processes can


provide considerable value over traditional annual performance management methods. But, like most things that provide significant value, the benefits of moving to continuous performance management methods and creating feedback-rich cultures cannot be realized without some level of effort.

Questions in Need of Further Research

Companies are currently making many of these changes to performance management based largely on best-practice guesses and estimations. What is needed are more rigorous, evidence-based studies showing what works and what does not when it comes to expanding the ongoing collection and use of developmental 360 Feedback using these technologies. For example, the nature and importance of the conditions identified in Table 5.2 should be examined more carefully, ideally with more empirical methods. Research is also needed to better understand why some managers and employees are reluctant to give and receive feedback and how to address this reluctance. As companies collect more ongoing feedback, research might be conducted to understand how to optimally use this feedback information for talent decisions without undermining employee trust.

MAKING TALENT DECISIONS

The third component of performance management focuses on decisions about compensation, promotion, staffing, and development that are based in part on employee performance. Assessing employee performance is central to guiding these decisions. This is arguably the most difficult part of performance management because it requires dealing with the sensitive reality that not all employees perform at the same level (Viswesvaran & Ones, 2000). Every employee contributes different levels of value to the organization due to differences in productivity, skills, potential, or any number of other job-relevant factors (Boudreau & Ramstad, 2005). These differences are not minor. Contributions made by high-performing employees can be several times greater than contributions made by solid or “average” employees (O’Boyle & Aguinis, 2012). Companies that manage, develop, and invest in employees considering their relative contributions significantly outperform companies that treat employees as though they all provide equal value (Bloom & Van Reenen, 2007). Recognizing that some employees are more valuable than others is good not only for companies but also for employees. The use of consistent, transparent methods to assess and reward employee contributions is a key factor affecting employees’ perceptions of justice, fairness, and equity (Colquitt, Conlon, Wesson, Porter, & Ng, 2001).


Managing differences in employee contributions is critical to maximizing company performance. But, accurately assessing differences in employee contributions is one of the most challenging areas of HCM (Austin & Villanova, 1992). People know not everyone contributes equally, but people do not always agree on how those contributions should be evaluated. Many people also find it uncomfortable to have their performance compared against their peers. Employees, particularly those at the “lower end” of the performance distribution, may experience considerable stress from an assessment process that compares their contributions with their coworkers’ (Wheeler & Suls, 2005). Fortunately, research on employee perceptions of fairness has shown that most employees, even those who may be struggling in their roles, can accept assessment results provided they understand how they were assessed, believe the process was accurate and consistently applied, and feel the results were delivered in a sensitive manner (Colquitt et al., 2001). This requires companies to use accurate and fair methods to assess employee performance. 360 Feedback can play a critical role in creating such assessments.

The Role of Technology

There are three general ways technology is used to incorporate 360 Feedback into performance assessments. First, 360 survey, pulse survey, and get feedback solutions can gather structured ratings and qualitative comments from different stakeholders as part of the assessment process. These technologies are usually used to get input from subsets of coworkers chosen by the employee and their manager. These applications can also collect feedback from a manager’s direct reports to guide assessment of the manager. In this case, the 360 Feedback is not actually for the manager but for the person assessing the manager, which raises interesting questions about whether the manager should be able to see the feedback. Second, spot reward systems and continuous performance management systems can provide 360 Feedback about employee performance gathered from different stakeholders over time, for example, reviewing positive comments, recognition, and constructive feedback provided by coworkers regarding past actions of the employee. This method has the strength of reducing biases due to recency effects because it incorporates historical data into the assessment process. It also allows for the assessment of performance trends, which is important given that individual performance can fluctuate considerably over time (Sturman, Cheramie, & Cashen, 2005). It also raises questions about how using recognition for evaluation might impact employee attitudes about the purpose of giving and receiving positive feedback.
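To illustrate the trend point above, the sketch below fits an ordinary least-squares slope to dated feedback ratings, so an assessment can weigh the trajectory rather than only the most recent impression. The data points and function name are invented for illustration.

```python
def trend_slope(points):
    """OLS slope for points given as (days_since_start, rating) pairs.

    A positive slope suggests ratings are improving over the period;
    a negative slope suggests decline.
    """
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

# Hypothetical quarterly peer ratings on a 5-point scale over one year.
slope = trend_slope([(0, 3.0), (90, 3.2), (180, 3.6), (270, 3.9)])
```

Even this simple calculation makes the recency problem visible: the final rating alone says nothing about whether performance has been rising or falling.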


Third, calibration talent review systems can facilitate 360 Feedback discussions where multiple stakeholders meet to discuss, share, and compare perspectives about the performance of different employees. We believe technology used to support talent reviews is having the most profound impact on the use of 360 Feedback for making talent decisions in performance management. This is because it enables companies to replace individual manager performance ratings with group-based assessment methods. This move from individual to group-based assessment has the potential to fundamentally shift the nature of performance management from something managers do alone to something managers do through team discussion. The bulk of the section on culture and process design looks at factors impacting the use of 360 Feedback within the context of these types of talent review sessions.

Culture and Process Design Considerations

Historically, one of the most common ways companies evaluated employee performance was through individual manager ratings. This typically involved managers rating employee performance based on an annual, quarterly, or monthly schedule. This method suffers from significant limitations. First, managers may be affected by biases that impact the accuracy of their evaluations. Second, managers can only evaluate employees based on their view of the employee’s performance, and this view may not accurately represent the full nature of the employee’s contributions (Scullen, Mount, & Goff, 2000). Third, the act of collecting manager ratings is often viewed as a highly burdensome and administrative task. One way to address the limitations of manager ratings is to use 360 Feedback surveys or get feedback tools that allow managers to reach out to an employee’s coworkers to collect information about their performance. The use of these 360 Feedback methods is more effective than relying solely on individual manager evaluations done in isolation. There is another method for incorporating 360 Feedback into performance assessments that is arguably more effective. This involves using group calibration talent review sessions where organizational stakeholders collectively meet to discuss and evaluate employee contributions. Talent review sessions do not require the use of forced ranking, numerical ratings, or any other ordinal rating process. Although these rating methods are often used in talent review sessions, they are not necessary components of these sessions. The only thing talent review sessions require is that the people meeting have sufficient knowledge of employees’ past accomplishments and behaviors and meet as a group to constructively discuss the performance or potential of these employees relative to some agreed-​on decision-​making criteria.
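One way a calibration tool might support such a session is to summarize each employee’s ratings across stakeholders and flag large disagreement for group discussion. The sketch below does this in Python; the rater labels, the 5-point scale, and the disagreement threshold are assumptions for illustration, not features of any specific system.

```python
from statistics import mean, pstdev

DISAGREEMENT_THRESHOLD = 1.0  # assumed spread (on a 5-point scale) worth discussing

def calibration_summary(ratings_by_rater):
    """Summarize multi-rater input for a talent review discussion."""
    values = list(ratings_by_rater.values())
    spread = pstdev(values)  # population standard deviation across raters
    return {
        "mean": round(mean(values), 2),
        "spread": round(spread, 2),
        "discuss": spread >= DISAGREEMENT_THRESHOLD,  # flag for the group
    }

# Hypothetical ratings of one employee from three stakeholders.
s = calibration_summary({"manager": 4, "peer_lead": 2, "project_sponsor": 5})
```

Flagging disagreement rather than averaging it away keeps the group conversation, not the arithmetic, at the center of the review.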


The use of talent review sessions is potentially superior to individual manager assessments for several reasons. Rather than relying on the subjective opinions and perspectives of a single manager, talent reviews bring in the opinions of multiple people with different viewpoints to ensure employees are evaluated based on their actual behaviors, skills, and accomplishments. This may reduce idiosyncratic biases found across managers, including subconscious biases that can negatively impact women, ethnic minorities, and other historically underrepresented groups (Deaux & Emswiller, 1974; Martell, 1991). Employees are also less likely to feel as though their fate is in the hands of a single manager who may not have an accurate perception of their value. Groups with members of diverse backgrounds and perspectives have also been shown to make higher quality decisions than individuals (Kolbe & Booz, 2009). A major historical challenge to conducting talent review sessions was the work required to assemble and manage employee data related to the sessions. Because of the effort involved, talent reviews were often only done for a very small number of jobs in a company, such as top leadership roles. Or, the sessions were done in an overly simplistic manner that emphasized speed and compliance over quality (e.g., forcing managers to comply with strict forced-​ranking guidelines). Advances in HCM technology have radically reduced the time needed to manage calibration session data. For example, one company reported it used to spend as many as 40 hours assembling employee data for a single talent review session, and that it can now perform the same tasks in under one hour. The shift to less structured, more ongoing, qualitative, and conversational-​based methods for gathering and using 360 Feedback to guide talent decisions raises interesting questions about validity and reliability. 
Before discussing specific questions about the validity of qualitative 360 Feedback data, it is important to consider the general transformation taking place in performance management overall. The primary focus of this transformation is to move away from event-​based, periodic review processes that put a lot of focus on filling out forms. These processes are being replaced by less structured processes that emphasize more organic, ongoing qualitative feedback and conversational dialogue. In this sense, traditional survey-​based 360 Feedback methods often suffered from many of the same criticisms leveled at annual performance reviews:  They happen too infrequently (rarely more than once a year), they use ratings that can be perceived as more evaluative than developmental, they may ask raters to deliver feedback about behaviors performed so long ago their impact remains a foggy memory, and they can be seen as putting more emphasis on filling out forms as opposed to engaging in effective conversations. This creates something of a dilemma from a statistical standpoint. Specifically, many 360 Feedback survey characteristics that are associated with higher measurement validity, such as systematic, highly structured collection of empirical ratings, are seen to be
somewhat antithetical to the kinds of performance management processes companies want to create. Given current trends, we believe it is unlikely that most companies, at least in the current market, would be willing to introduce any methods into their performance management processes that increase the time spent filling out structured rating forms. This creates a need for more research into ways to increase the validity of the new kinds of 360 Feedback methods companies are adopting.

When making talent management decisions, is it better to have some historical 360 Feedback data that lack methodological rigor or to make decisions based solely on manager recollections and perceptions of past performance? The answer naturally depends on the quality of the data and the trust we place in the manager's opinion. But, in our view, in most cases having some historical 360 Feedback data from multiple stakeholders is likely to be superior to relying entirely on the memories and notes of the manager alone.

CONCLUSION

The collection of 360 Feedback used to be thought of more as a formal event centered on structured forms and surveys. Technology has increased the availability and use of 360 Feedback at all phases of performance management. It has also increased the variety of approaches to collecting employee feedback. These innovative technologies include things such as using online communities to solicit input from coworkers on job goals, adopting continuous performance management technology to support and encourage ongoing collection and discussion of job-relevant feedback throughout the year, and using calibration talent review solutions to replace individual manager performance ratings with group-based assessment processes that incorporate input from multiple stakeholders across the company. And, new methods are being developed every year.

It is hard to say exactly how companies will be using 360 Feedback for performance management in the future. But, it is safe to say that new HCM technologies will continue to emphasize the use of 360 Feedback that is more ongoing and less formal. We may see the use of 360 Feedback transformed even further as companies make greater use of machine learning to "scrape" feedback data from other sources, such as e-mails or online communities. The collection and use of 360 Feedback is also likely to change as companies adopt more team-based, matrixed organizational structures.

Technology makes things possible that were not possible before, but it does not make things happen or guarantee that new methods will be more effective. This is particularly true when applied to 360 Feedback technology. Just because employees can more easily provide and leverage 360 Feedback does not mean they will do so more often or more
effectively. It is not hard to imagine scenarios where the presence of 360 Feedback technology could even make things worse. As one person put it, "There is a reason Facebook does not have a dislike button."

Our goal in this chapter is to provide insights into how technology can improve the use of 360 Feedback for performance management and what is required to realize these benefits. We are just scratching the surface when it comes to understanding how to maximize the value of 360 Feedback to support performance management. The more we use technology to incorporate 360 Feedback into performance management, the more feedback we receive on how to effectively use this technology, and the faster the technology itself changes. The result is a world of constant innovation where it is wise to remember that today's best practices may be tomorrow's outdated processes.

REFERENCES

Ashford, S. J., Blatt, R., & VandeWalle, D. (2003). Reflections on the looking glass: A review of research on feedback-seeking behavior in organizations. Journal of Management, 29(6), 773–799.
Austin, J. T., & Villanova, P. (1992). The criterion problem: 1917–1992. Journal of Applied Psychology, 77, 836–874.
Bloom, N., & Van Reenen, J. (2007). Measuring and explaining management practices across firms and countries. Quarterly Journal of Economics, 122, 1341–1408.
Boudreau, J., & Ramstad, P. M. (2005). Talentship, talent segmentation, and sustainability: A new HR decision science paradigm for a new strategy definition. Human Resource Management, 44, 129–136.
Bowen, D. E., & Ostroff, C. (2004). Understanding HRM–firm performance linkages: The role of "strength" of the HRM system. Academy of Management Review, 29, 203–221.
Bracken, D. W., & Rose, D. S. (2011). When does 360-degree feedback create behavior change? And how would we know it when it does? Journal of Business and Psychology, 26, 183.
Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360° feedback. Industrial and Organizational Psychology, 9, 761–794. doi:10.1017/iop.2016.93
Campion, M. C., Campion, E. D., & Campion, M. A. (2015). Improvements in performance management through the use of 360 Feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8, 85–93.
Colquitt, J., Conlon, D., Wesson, M., Porter, C., & Yee, K. (2001). Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86(3), 425–445.
Deaux, K., & Emswiller, T. (1974). Explanations of successful performance on sex-linked tasks: What is skill for the male is luck for the female. Journal of Personality and Social Psychology, 29(1), 80–85.
Griffith, E. (2016, May 16). What is cloud computing? PCMag. https://www.pcmag.com/article2/0,2817,2372163,00.asp
Hunt, S. T. (2011). Technology is transforming the nature of performance management. Industrial and Organizational Psychology, 4, 188–189.
Hunt, S. T. (2014). Common sense talent management: Using strategic human resources to improve company performance. San Francisco, CA: Wiley.
Hunt, S. T. (2015). There is no single way to fix performance management: What works well for one company can fail miserably in another. Industrial and Organizational Psychology, 8, 130–139.
Kolbe, M., & Booz, M. (2009). Facilitating group decision-making: Facilitator's subjective theories on group coordination. Forum: Qualitative Social Research, 10(1), article 28.
Martell, R. F. (1991). Sex bias at work: The effects of attentional and memory demands on performance ratings of men and women. Journal of Applied Social Psychology, 21, 1939–1960.
O'Boyle, E., & Aguinis, H. (2012). The best of the rest: Revisiting the norm of normality and individual performance. Personnel Psychology, 65, 79–119.
Pulakos, E. D., Mueller Hanson, R., Arad, S., & Moye, N. (2015). Performance management can be fixed: An on-the-job experiential learning approach for complex behavior change. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8, 51–76.
Scullen, S. E., Mount, M. K., & Goff, M. (2000). Understanding the latent structure of job performance ratings. Journal of Applied Psychology, 85, 956–971.
Sturman, M. C., Cheramie, R. A., & Cashen, L. H. (2005). The impact of job complexity and performance measurement on the temporal consistency, stability, and test–retest reliability of employee job performance ratings. Journal of Applied Psychology, 90(2), 269–283.
Viswesvaran, C., & Ones, D. (2000). Perspectives on models of job performance. International Journal of Selection and Assessment, 8, 216–228.
Wheeler, L., & Suls, J. (2005). Social comparison and self-evaluation of competence. In A. J. Elliot & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 566–578). New York, NY: Guilford Press.


6 /// STRATEGIC 360 FEEDBACK FOR TALENT MANAGEMENT

ALLAN H. CHURCH

INTRODUCTION

Over the past 30 years since the emergence of 360 Feedback as an official human resources (HR) intervention that "can change your life" (O'Reilly, 1994), there has been considerable discussion regarding whether the results from 360 tools and processes should be used for decision-making or development-only purposes. Numerous cases, debates, and applications have been documented outlining the two sides of the equation (e.g., Bernardin, 1986; Bracken & Church, 2013; Church, 1995; Church & Waclawski, 2001; Edwards & Ewen, 1996; Effron & Ort, 2010; London, 2001; London, Smither, & Adsit, 1997; Nowack & Mashihi, 2012; Tornow & London, 1998), and practitioners often have strong points of view on the subject. At this stage in the evolution of the field, however, it seems clear that the debate is a moot point (Bracken, Rose, & Church, 2016).

360 Feedback is in fact already a key tool being used for decision-making in a number of contexts in organizations today, including performance management and talent management (TM) in particular. Specifically, many organizations use 360 data as an integral part of their talent review process and succession-planning efforts (e.g., Allstate, Corning, Microsoft, GE, and PepsiCo, among others). Although companies with more sophisticated HR practices embraced this approach years ago (e.g., Bracken, Timmreck, & Church, 2001; Church & Waclawski, 2010; Freedman, 2004; Oliver, Church, Lewis, & Desrosiers, 2009; Yost, 2010), recent reviews and benchmark reports with large, well-known
organizations have indicated that it is now one of the most commonly used data-based inputs, over other forms of assessment, when it comes to high-potential leader identification and related TM and executive selection efforts (Thornton, Johnson, & Church, 2017). For example, Silzer and Church (2010), in their sample of 20 well-known organizations, reported that 65% utilized 360 for high-potential identification (the most commonly used tool other than performance ratings). Similarly, Church and Rotolo (2013), in their study of 84 top development companies with robust TM systems, reported that 66% were using 360 Feedback with high potentials, and 60% used it as part of assessment and development programs with their senior executives. Finally, the most recent 3D Group (2016) benchmark study, which reflects a much wider sample of organizations, reported that 55% incorporated 360 Feedback into one or more of their TM programs, with 38% citing talent reviews and development specifically. In short, the debate is over. 360 Feedback can, and arguably should (if effectively designed and implemented), be a core tool used as part of a larger TM process.

When the concept of 360 Feedback was popularized in the 1990s (Bracken et al., 2016), however, the practice of TM as it is defined today did not even exist. Although many of the activities, such as individual assessment, talent reviews, and succession planning, were widely practiced, the TM label did not emerge until the mid-2000s (e.g., Church, 2006; Effron & Ort, 2010; Silzer & Dowell, 2010). As a result, despite its popularity, questions remain regarding how to make the most effective use of 360 Feedback and its results in TM systems today. The purpose of this chapter is to describe the role of 360 Feedback in TM applications in organizations and to provide guidance to practitioners who may be considering implementing this type of approach as either internal or external consultants.
Although definitions vary regarding the boundaries of TM (Silzer & Dowell, 2010), the focus of the present discussion is on the use of 360 Feedback in formal, structured processes that serve to differentiate, develop, and enable decision-making to meet organizational talent needs. The chapter begins with a case example highlighting the power of 360 Feedback as both a predictive and a diagnostic tool for talent planning and review processes, including the process of slating candidates for succession and making succession-related decisions. Next, context is provided regarding the distinction between the little and big S in strategic 360 Feedback in TM programs, followed by a discussion of the four key differentiating components to consider when designing and implementing 360 Feedback for talent decision-making. The last section focuses on key challenges that can influence the effectiveness of a 360 Feedback program, along with recommendations for practice.



CASE EXAMPLE: THE TALE OF TWO LEADERS

Recently, we were in a talent review meeting discussing the merits of two employees (Robert and Amanda[1]) who had recently completed 360 Feedback as part of an internally validated high-potential, multitrait-multimethod (MTMM) assessment program called LeAD (short for Leadership Assessment and Development; see Church & Rotolo, 2016; Church & Silzer, 2014; and Trudell & Church, 2016). Robert was in the program because he had been designated a high potential via the organization's internal talent review process. Amanda was newer to her position in the headquarters (HQ) office, having come from the field, and was thus being evaluated for further stretch into higher level roles. Although she had been labeled a high potential earlier in her career while out in the business, she was now seen as a key contributor, that is, someone with stretch potential but not necessarily to senior leader levels (Church & Waclawski, 2010).

Because the LeAD program had been designed to identify and confirm future potential status, the 360 results were particularly important: per the comprehensive validation study that had been conducted internally,[2] they represented the single strongest predictor of success at two levels higher in the organization. Based on an initial review of the assessment outcomes, it appeared that, while Amanda did exceptionally well overall, due in large part to strong 360 results, Robert's outcome placed him in the bottom quartile of the norm group and suggested he was less likely to be successful in the future if promoted. In short, his high-potential status was now in question. In comparison, based on her data, there was now a high degree of confidence in Amanda's future leadership potential. Given the way the assessment algorithm had been developed, the strategic impact of 360 Feedback as a predictive tool for TM and succession planning was clearly evident in the talent review meeting.
The line leader in the meeting (i.e., the client) noted that this approach was far more consistent and precise than simply picking and choosing which elements to highlight, either positive or negative, about someone when they are being discussed, as is often the case in less focused TM approaches. Needless to say, however, given the initial talent calls prior to assessment (e.g., high potential and key contributor), the line leader was still surprised by the assessment outcome as he worked directly with Robert and thought he was fantastic—​ultra responsive

1. Names have been changed for this case description.
2. Ensuring a validation study has been conducted in a given organization setting is a critical first step that must be completed for any 360 Feedback system to be used for decision-making in that same organization. It ensures that the content being measured statistically predicts the desired outcomes (e.g., performance at higher levels, etc.). See Bracken et al. (2001) and Scott et al. (Chapter 29) for more on the importance of validation and legal defensibility.


and full of energy and someone who could really deliver. Amanda, on the other hand, whom he also knew, he felt was more professional, planful, and somewhat reserved in her approach with him. Thus, despite the assessment index findings, he still wanted to place Robert on the candidate slate over Amanda.

When we examined the 360 Feedback results in depth, however, the power of 360 as a diagnostic tool for informing TM decisions became apparent. While Robert's 360 results were average overall (and thus did not help his LeAD Assessment Index), they had been positively influenced by almost all 5 ratings (on a 1-5 scale) from his manager. His peer ratings were somewhere in the middle of the range. His direct report ratings, however, were quite negative, particularly on the "Taking Others With You" dimensions in the organization's leadership model. In short, based on the data, it appeared that Robert managed up exceptionally well, worked adequately enough with colleagues, and overworked and micromanaged his direct reports.

Conversely, Amanda's 360 results were solid from her manager, perhaps as a result of her being somewhat new to her role and environment, but extremely positive from her peers and direct reports. She was doing everything necessary to learn the culture in HQ, fitting in well with coworkers and her team, and demonstrating her capability in her role even though it represented a significant stretch assignment. So, although her manager was not entirely sure yet about her future leadership potential (given the key contributor rating he had given her going into the talent review meeting), her results from the future-focused 360 measure indicated someone who was highly collaborative and a strong people developer who had transitioned through multiple experiences.[3] In short, she met the empirically derived criteria for being classified as a high potential if the organization chose to do so.
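The diagnostic value of breaking the same ratings out by rater source can be illustrated with a short, hedged sketch. The data below are invented to loosely mirror Robert's pattern, the 1.0-point gap threshold is an arbitrary choice for illustration, and none of this reflects the actual LeAD scoring algorithm.

```python
# Hedged sketch (invented data, not the LeAD algorithm): disaggregate 360
# ratings by rater source and flag large manager-vs-direct-report gaps,
# the "manages up" pattern described in the case.
from statistics import mean

def source_profile(ratings_by_source):
    """ratings_by_source: dict mapping rater source -> list of 1-5 ratings."""
    return {src: round(mean(vals), 2) for src, vals in ratings_by_source.items()}

def manages_up_flag(profile, gap=1.0):
    """True when the manager's average exceeds direct reports' by more than `gap`."""
    return profile["manager"] - profile["direct_reports"] > gap

# Invented ratings loosely mirroring Robert's pattern in the case example.
robert = {"manager": [5, 5, 4, 5], "peers": [3, 3, 4, 3], "direct_reports": [2, 2, 3, 2]}
profile = source_profile(robert)
print(profile, manages_up_flag(profile))  # manager 4.75 vs. direct reports 2.25 -> flag True
```

An overall average would hide this entirely: the same numbers that make Robert look "average overall" decompose into a strong manager view sitting on top of a weak direct-report view.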
In the end, the data received from this process resulted in a different set of talent outcomes than would have happened otherwise. Robert was moved further down the succession list (but not entirely removed, as he was still deemed to have potential) and given a different type of role where he could spend more time focused on enhancing his people leadership skills via developmental coaching and training support (i.e., reinforcing the 70-20-10 model of development; see McCauley & McCall, 2014). Amanda, on the other hand, had her status changed back to high potential, as it had been previously when she was in the field. Her developmental focus now included specifically building rapport

3. The full LeAD assessment process also includes other measures, such as personality and cognitive tests, custom simulations or assessment centers, structured interviews, situational judgment tests, and so on. The Assessment Index includes all relevant data from the validation models conducted at each level of LeAD. The 360 results are highlighted here only for brevity of the discussion. For more information see Church and Rotolo, 2016; Church and Silzer, 2014; Happich and Church, 2017; Silzer, Church, Rotolo, and Scott, 2016; Trudell and Church, 2016.



and demonstrating impact with her manager in HQ. As her level of potential appeared to be the highest based on a standardized and validated assessment process, the 360 Feedback was particularly important in highlighting areas where her manager rated her poorly, given that his perspective would have a critical and potentially deciding influence on her future prospects in talent review meetings going forward (Conger & Church, 2018). It was important that she uncover where he was not seeing her impact compared to others or where expectations, behaviors, and performance might still be misaligned. If she was truly not delivering results (from her manager's perspective), then the outcome could still change over time as appropriate.

These are critical aspects for focusing talent development, as so much of TM and high-potential identification can be influenced by manager-direct report dynamics and contextual factors at a given point in time (e.g., Church, Rotolo, Ginther, & Levine, 2015; Conger & Church, 2018; Silzer & Dowell, 2010). In this case, as in many others, 360 results were used to strategically inform talent decision-making and to target further development for future leaders to enhance leadership succession outcomes longer term.

THE STRATEGIC CONTEXT OF 360 FOR TALENT MANAGEMENT

Although 360 Feedback can be used for either development or decision-making purposes, when organizations deploy a 360 Feedback tool or process as part of a set of talent discussions, they are, by default, moving in a more strategic direction regarding the utilization of behaviorally based data. While integrated large-scale applications of 360 Feedback for organization development (OD) and change applications also have significant strategic implications (as discussed in Chapter 13), the data in these types of systems are used by individuals, groups, and organizations more for (a) individual development, (b) team dynamics and functioning, and (c) communicating and driving cultural change, versus making actual selection and staffing decisions internally. Therefore, the focus is different. Either way, however, the 360 Feedback represents something meaningful to the organization: It is not based on a generic model or used as a one-off intervention for a leader derailment issue. This is a key distinction.

Given the increasing importance of information and insights on people today, particularly in Big Data applications (e.g., Guzzo, Fink, King, Tonidandel, & Landis, 2015), any utilization of quantifiable information to make decisions reflects a more forward-thinking perspective (Chamorro-Premuzic, Winsborough, Sherman, & Hogan, 2016; Church & Burke, 2017). That said, this does not mean that every application of 360 is strategic, just as not every TM framework is strategic. There are conditions that need to be met. Bracken (Chapter 2) provides a definition of strategic 360 Feedback that involves four key
points: (a) the content is derived from the organization's strategy and values, (b) the data obtained are sufficiently reliable and valid, (c) the results are integrated into key talent processes and HR systems, and (d) participation must be inclusive (at least of those individuals being compared as part of the process; more on this point is provided further in the chapter). In the context of strategic 360 Feedback for TM, these four principles hold true, although there are some minor areas where emphases may differ.

At its core, 360 Feedback is a process for collecting feedback from others on a specific set of leaders' behaviors that results in data-based information being shared with line leaders and HR for some type of organizational outcome (Bracken et al., 2016). This means, from a TM perspective, the information shared internally in a high-potential identification process, talent review, or succession-planning context represents:

(a) a summary of observations from different perspectives;
(b) a common standard for measuring behavior, competencies, knowledge, skills, and abilities;
(c) a potential means for reducing bias across different groups throughout the organization;
(d) a tool that can show patterns of change over time if the individual modifies his or her behavior; and
(e) a mechanism for meeting the strategic talent needs of the business.

When the process is applied to a given set of employees (e.g., a slate of candidates, a high-potential talent pool, a functional grouping, or a broad-based set of candidates), the data can be used to fairly and accurately compare individuals using the same metric.
Although 360 Feedback should never be used in isolation as the sole decision-making tool (no measure is perfect, hence the general recommendation for the use of MTMM assessment suites), it is one of the most cost-effective means for providing a consistent view of strengths and opportunities across a group of individuals using behavioral indicators from multiple points of view. As in the case example provided, while 360 Feedback almost always incorporates the manager's perspective, it does not rely solely on the judgment of one individual regarding the current and future capability of the focal individual being assessed. When it comes to making decisions about people, either for fit to a new role today or for their potential to excel in new and more challenging assignments in the future, having multiple perspectives is critical to avoid a host of biases and potential shortcomings of the "like me" models of the past (Church & Silzer, 2014; Church & Waclawski, 2010). While there is no guarantee that having 360 Feedback available or,
better yet, as discussed in Chapter 2, integrated into a TM process will result in the data-based insights being used (that requires both culture and capability on the part of those in talent discussions), it does ensure that the playing field has the potential to be leveled when the discussion is taking place.

THE LITTLE AND BIG S IN STRATEGIC 360 TM APPLICATIONS

Although the systematic use of 360 Feedback in TM is strategic at some level by default, because it impacts the organization beyond just development for the individual, it can take the form of a little s or a big S when it comes to defining strategy (Outram, 2013). In the context of the framework provided in Chapter 2, any type of strategic 360 must be tied to some element of the broader organization's business strategy and values in an integrated manner. How explicit that linkage is, and what purpose it serves, is what differentiates the level of strategic importance.

The little s of strategic 360 TM applications reflects those systems where 360 Feedback is used in a talent review, program, or succession-planning process to make decisions based on a model of leadership factors or other variables that are required for success. These factors can be framed either in the present context or for the future. If an organization is using data collected on its leaders to identify talent pools (e.g., high potentials; candidates for chief financial officer [CFO], chief marketing officer [CMO], or division president; etc.) or to make decisions regarding who is promoted or placed into open roles, then the 360 Feedback process is having an impact, both short and longer term, on the leadership composition and future of the organization. The identification of future leaders to receive accelerated development and the placement of current individuals into new roles will ultimately influence all aspects of the organization, from structure to culture to the types of talent they bring in or promote. In short, using 360 for any type of TM process means there is a strategic component to the effort, whether the intent and implications are formally articulated and acknowledged or not.
This is the little s model, as long as it meets the basic requirements: it is linked to strategy, has solid measurement properties, and is integrated and used for a specific population in a given context.

In comparison, when an organization approaches 360 Feedback from a larger strategic context (big S), the focus is on intentionally and systematically developing or selecting leaders (or the majority of those with additional stretch or potential) toward a future set of behaviors that will shape the organization in a specific direction. In this regard, the TM process more closely aligns to an OD and change approach (see Chapter 13) in that the intent is to shift the capability mix and thereby deliver on the business strategy
of the organization. From the OD perspective, the goal is to develop everyone toward the future state, whether that is reflective of the business strategy, the culture, or both. However, a key difference in a TM application lies in using those data to make decisions on the individuals being assessed via a new lens of success metrics, versus driving their development with the intent of cascading change from the top down or from the middle up (i.e., impacting an organization's culture and values). In addition, while most OD approaches are inclusive in nature (given this is a core value of the field of OD), the use of strategic 360 for TM is often focused on a targeted set of individuals. In short, this reflects the key difference in mindset and values between a focus on the "many" in OD and the "few" in TM (Church, 2013b, 2014b; Church, Shull, & Burke, 2018).

In a TM application, the 360 Feedback results are used to differentiate, target development, and ultimately select and place certain individuals who demonstrate those skills (or show the potential to do so) over others who do not. As a result, future focus, specific measurement properties, integrated design and systems, as well as the target audience and scale of the application are all critical design components for a big S strategic approach.

FOUR DIFFERENTIATING COMPONENTS OF 360 FEEDBACK FOR TM

When it comes to designing an effective 360 Feedback process for TM, there are many factors to consider. Some of these are the same as when designing any type of effective 360 Feedback program. Others are somewhat unique to TM applications. While they all meet the requirements for a strategic 360 process as outlined in Chapter 2, in some cases the design elements can be more specific in nature. The sections that follow describe four key areas to consider. Because they follow the same logic as building a TM architecture, they are presented in a slightly different order than Bracken's list, although they align quite well. Regardless of the order, the important thing to remember is that 360 Feedback in this context is operating both at the systems level of the organization (Burke & Litwin, 1992) and in a systematic and integrated manner (Church, 2014a; Silzer & Dowell, 2010).

Identifying the Strategic Content

The first step in designing or implementing a 360 Feedback system for TM is to determine the type of content measured. While generic models of leadership effectiveness (whether found in the literature or from consulting firms with existing 360 tools designed around them) are useful for individual coaching and development purposes, they will not meet the goals of a TM system. For 360 Feedback to be maximally effective in evaluating talent
for internal decision-making, the content must be based on the strategic direction of the organization. This can include the values and principles that a company may have, or even espouse to have, as in an OD context (Burke, 1982, 2014), but it should go beyond that. The measure, subsequent feedback, and data generated for comparisons across leaders should be based on some customized set of behaviors (reflecting skills and abilities) that have been identified for success in the future. Here, we are talking about both the future of the organization for sustained growth in the marketplace and the success of individual leaders at higher levels in the management hierarchy, potentially all the way to the C-suite and chief executive officer (CEO). Thus, it is important in the design process to start with the business strategy and the talent implications for achieving that strategy (Silzer & Dowell, 2010).

The content can be generated using tools such as workforce planning and data analytics as needed (e.g., for predicting future outcomes), or external data gathering and interviews with senior stakeholders (e.g., a board of directors, the CEO, external thought leaders, and other key influencers) may be required. Whatever the methodology, the outcome is a framework or set of competencies/capabilities that can be used to generate behaviors for incorporation in a customized 360 Feedback tool for use in that organization. The design of a custom tool is critical here as it provides (a) a targeted and tightly linked measure for achieving future success and (b) the senior leaders and others in the review process with a new set of standards for comparing talent across different groups, levels, and functions, as well as prior talent designations.
This latter factor is particularly relevant in talent reviews as using a generic model or existing leadership framework is far more likely to reinforce the prevailing wisdom regarding talent (e.g., someone already identified as high potential based on prior performance and cultural context) rather than assessing and classifying against the future needs of the business. If leaders enter a succession-​ planning discussion and discuss talent already identified using data from the present state, the chances of selecting (e.g., adding or removing from talent pools and slates) the best future candidates are limited by the blinders of that very same current leadership paradigm. Although clearly having 360 Feedback results for a given set of leaders being discussed for a position or slated for succession will enhance the quality and consistency of the decision-​making process overall (and thus any data are better than having no data or results based on inconsistently applied tools and approaches), the key point is that the model drives the content. Organizations that are focused on changing the talent mix with respect to gender, ethnicity, global experience, or other variables need to pay particular attention to this potential self-​perpetuating talent trap. As Valerio (2018) noted, if an organization’s high-​potential models are not carefully examined for inherent biases, there
is little likelihood of change in the outcomes mix at senior levels, even if the pipeline is expanded initially. Once the future framework has been designed and implemented, another choice point for design is whether the model should apply equally to all types of leaders in the organization or only those at certain levels. While other applications of 360 Feedback, such as those directed at culture change (e.g., Church, Waclawski, & Burke, 2001), might be best communicated and used for developing all employees, a TM approach may dictate the implementation of different frameworks at different levels. For example, the key leadership competencies required for success at the C-​suite or senior executive levels could be somewhat different (or more in-​depth in certain areas) than those for identifying future leaders with potential further down in the organization. This is consistent with current thinking around the nature of potential in general, such as the Leadership Potential BluePrint (Church & Silzer, 2014; Figure 6.1). The idea is that broader foundational characteristics (e.g., personality and strategic thinking skills) might be more important to measure early on as leading indicators, and growth and career factors such as learning agility and leadership competencies are more meaningful at higher levels of management (Silzer & Church, 2009). By applying a model such as the BluePrint to the process design, there are clear implications for managing the type of content and measurement mix of 360 Feedback in combination with other assessment tools (Silzer & Church, 2009; Thornton et al., 2017), as well as the types of possible follow-​up developmental interventions for each dimension (Church, 2014b).

[FIGURE 6.1 appears here. The graphic depicts the three layers of the Leadership Potential BluePrint—Foundational Dimensions (Personality Characteristics; Cognitive Abilities), Growth Dimensions (Learning; Motivation), and Career Dimensions (Leadership Capability; Functional & Technical Capability)—together with Performance as a gatekeeper and Cultural Fit as a contextual factor.]

FIGURE 6.1  The Leadership Potential BluePrint as a framework for measuring future capability. Adapted from Church and Silzer (2014) and Silzer and Church (2009).


Table 6.1 provides an example of how 360 Feedback content might be adapted to each specific area of focus in a high-potential or leadership assessment and development process. While increasing self-awareness of participants' strengths and opportunities for each set of elements in the BluePrint is important at every stage, there are clear differences in emphasis between feedback efforts aimed at Foundational dimensions (e.g., the need to develop workaround strategies to address derailers) versus those at Growth (e.g., learning how to identify and learn from experiences and apply that new knowledge in the future) or Career dimensions (e.g., developing or enhancing leadership skills such as collaboration and team building, or expanding functional knowledge, say in finance, digital marketing, consumer insights, or design thinking). While there is no definitive answer here, part of the decision will depend on how the strategic model to be assessed has been constructed in the first place. If the 360 Feedback is to be based on a predictive framework, then certain competencies and behaviors are more likely to be dictated by a validated research design than by input from external stakeholders. While using the same general leadership framework is preferred for a variety of reasons (e.g., standardization, communication, learning and development, etc.), in some cases there may be a strategic reason to focus on the differences in talent requirements. For example, there may be aspirational goals regarding new capabilities required for the future of the organization (e.g., familiarity with digital and machine learning concepts) or simply expectations that senior leaders have a greater breadth of knowledge in certain core areas (e.g., finance, marketing, insights) by the time they achieve a certain level in the organization.
Finally, even when the same leadership model is used for a 360 Feedback process, another design option concerns the use of differing behaviors or degrees of behavior at different levels in the organization. This approach closely aligns to the leadership pipeline construct (Charan, Drotter, & Noel, 2001) in that although the competencies identified may be critical for all levels in a given organization, there may be different emphases (or examples) of the types of behaviors required at different levels of leadership and management. A number of organizations have taken this approach. PepsiCo, for example, has leveraged this concept in both of its last two leadership model revisions, from 2007 to the present, with three levels of behaviors (e.g., those appropriate for all employees, leaders, and senior leaders) used to measure the competencies included under the three leadership imperatives, which have been held constant since the late 1990s (i.e., Setting the Agenda, Taking Others With You, and Doing It the Right Way). By using these different levels for 360 Feedback when comparing talent, you are not only driving consistency in the content of the competencies being reviewed across candidates when discussing their strengths and opportunities, but also enabling a distinction to be made between gradations by level of capability required for success. The key to all of these variations is making sure the content is both strategic in nature and relevant to the appropriate populations being targeted for talent-related decision-making.

TABLE 6.1  Using the Leadership Potential BluePrint to Align Strategic 360 Feedback Content for Talent Management Efforts

Career Dimensions and Factors: Leadership Competencies; Functional and Technical Skills
Focus: Skill Augmentation and Gap Closure
Considerations for 360 Feedback Assessment Interventions:
• Targeted future-focused competencies in 360 Feedback systems aligned to strategic needs and direction of the business
• Customized leadership behaviors based on unique culture and language of the organization
• Results linked to leadership development programs; action learning; integrated learning and development, coaching, and mentoring efforts
Expected Outcomes:
• Enhanced leadership skills and new behaviors on strategically aligned competencies
• Deeper functional knowledge and skills or broader exposure to a range of new functional disciplines
• Increased self-awareness of Career dimensions (strengths and challenges)

Growth Dimensions and Factors: Learning Agility; Motivation and Drive
Focus: Enhancing Individual Learning and Engagement
Considerations for 360 Feedback Assessment Interventions:
• Standard or customized 360 Feedback assessment tools linked to learning agility, career aspirations, perceptions
• Appreciative experiential learning, planning for critical experiences gained and needed, reflective learning, and inquiry from assignments
• Results used for collaborative short- and long-term career planning, visible support, and targeted (accelerated) employee development
Expected Outcomes:
• Increased focus on and ability to integrate, learn from, and apply those learnings to new experiences and situations
• Renewed motivation, energy, and personal engagement to the work and the organization
• Increased self-awareness in Growth dimensions (strengths and challenges)

Foundational Dimensions and Factors: Personality Characteristics; Cognitive Capabilities
Focus: Adaptation and Identification of Alternate Behavioral Strategies
Considerations for 360 Feedback Assessment Interventions:
• Standard or customized 360 Feedback used with other assessment tools (e.g., personality, cognitive, situational judgment, biodata) based on a validated picture of long-term potential
• Results used to target skill building, coaching, and mentoring to develop new behaviors and workaround skills to address inherent outages and derailers (capability building and team composition)
Expected Outcomes:
• Identification of workaround strategies and new behaviors to augment and mitigate potential issues
• Enhanced understanding of team composition and ability to design a group for maximizing capability mix
• Increased self-awareness and implications of Foundational dimensions (characteristics, abilities, and potential derailers)

Targeting the Most Strategic Talent Pools

The second step in designing or implementing strategic 360 Feedback for TM is to ensure the right talent focal targets—whether unique individuals on a succession slate or defined talent pools—have been identified for the process. Although the definition of Strategic 360 Feedback in Chapter 2 notes the importance of being inclusive, in this context the term inclusion does not mean 360 for everyone as it does in an OD context (Church, Rotolo, Shull, & Tuller, 2014). Rather, the emphasis is on ensuring the population to be assessed matches those individuals who need to be identified, considered, reviewed, planned for, or placed into key assignments and succession. Given that TM is all about differentiating talent in order to make decisions, it is critical that the right individuals be given 360 Feedback to either (a) identify the best and brightest talent with the most future potential or (b) ensure an appropriate level of comparative data is made available for making informed decisions in talent reviews regarding succession plans and bench strength discussions. Thus, in its simplest form, inclusiveness refers to all the people you need 360 Feedback for, namely, those you plan to review and make decisions about. Although in an ideal world it might be preferable to assess all individuals at all levels for possible discussion, this is often not practical even in developmental 360 Feedback applications. The strategic purpose is rarely served by "sheep dipping" the organization; some level of prioritization, and a logic behind it, is needed. From a strategic standpoint, the best decision will once again be tied to the broader capability needs of the business for sustaining performance and growth. This is typically done following inputs from talent reviews, an analysis of succession plans, and other workforce analytics tools.
For example, if a senior bench is lacking in an organization, then the initial target for TM-related 360 Feedback might be at one or two levels below the current set of leaders. The assumption here would be that, for immediate or urgent gaps identified at the top of the house, the organization is likely going to have to "buy" rather than "build" its talent at this point in time (Cappelli, 2008). The immediate successors and the generation behind them (e.g., 3–5 years or one or two developmental moves away), however, can still benefit from an aggressive leadership development agenda. Thus, 360 Feedback results become a key component for evaluating the quality of the bench and determining where to place the most emphasis and development resources. If the gaps in the pipeline are lower in the management hierarchy
and the senior pipeline is acceptable, the target may be more junior employees, with the 360 Feedback process used to segment talent into A, B, and C level talent pools. Or, perhaps the approach is somewhere in the middle, using 360 Feedback to confirm a group of individuals' status as high potentials going forward during talent discussions and ensuring their development is highly targeted and appropriately supported (as in the LeAD program described previously). Another option to consider with the target group is the appropriate timing of 360 Feedback for talent discussions. Although there are different points of view on how often these types of tools should be used in general (e.g., annually vs. every 2–3 years), the best approach from a strategic TM perspective is to deploy a new 360 assessment under one or more of the following conditions:

• 6 to 9 months into a leader's new developmental assignment to facilitate learning
• 12–18 months into a coaching relationship focused on enhancing leadership capability
• 18–24 months into an existing assignment in preparation for further planning and development
• Prior to a focused and intensive leadership development program where the data will be used to build a high-quality development plan that can be executed
• Whenever the talent needs to be discussed and the existing data on that individual are not fresh enough to enable a current analysis of their leadership capabilities

In short, the timing of the process is best determined at either the individual or perhaps a cohort level rather than by some mandated "HR process" approach. Despite the latter's seeming simplicity, it will invariably fall short from an optimal timing perspective for talent-related discussions.
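The timing conditions above lend themselves to a simple eligibility check. The sketch below is illustrative only—the field names, data structure, and the 24-month freshness threshold are assumptions for this example, not prescriptions from the chapter.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LeaderContext:
    months_in_assignment: int
    months_in_coaching: Optional[int] = None    # None if no active coaching engagement
    months_since_last_360: Optional[int] = None  # None if never assessed
    entering_development_program: bool = False
    under_talent_review: bool = False

def should_redeploy_360(ctx: LeaderContext, freshness_limit: int = 24) -> bool:
    """Mirror the timing conditions for launching a new 360 assessment."""
    if 6 <= ctx.months_in_assignment <= 9:       # new developmental assignment
        return True
    if ctx.months_in_coaching is not None and 12 <= ctx.months_in_coaching <= 18:
        return True
    if 18 <= ctx.months_in_assignment <= 24:     # preparing further planning/development
        return True
    if ctx.entering_development_program:         # intensive program needs current data
        return True
    if ctx.under_talent_review and (ctx.months_since_last_360 is None
                                    or ctx.months_since_last_360 > freshness_limit):
        return True                              # existing data not fresh enough
    return False
```

In practice such a rule would be evaluated per individual or per cohort, consistent with the point that timing should not be driven by a single mandated HR calendar.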
One final point to be made here: although experience has shown that general patterns of ratings across leadership competencies do tend to be consistent for individuals over time, particularly in a broader population, when focused attention is paid to developing against strengths and opportunities, with the appropriate level of developmental support, changes in the pattern of results are possible and can be used in scorecards for tracking progress against TM goals (Church, 2013a). Research has supported the value and impact of 360 Feedback when implemented with the right level of development support to drive behavior change (Nowack & Mashihi, 2012; Smither & Walker, 2001). Thus, HR and TM professionals should not shy away from multiple administrations of 360 Feedback simply because they fear no change will occur. It is important, however, to ensure that the data analysis and determination of "changes" in capabilities be made by practitioners with deep experience in interpreting patterns across tools and over time.


Another lens that can be applied to determining the target audience when applying 360 Feedback to TM is one that is role based. Rather than focus on individuals alone, a group of like roles can be used to source the process. Typically, these come in the form of the top 100 roles in an organization, key succession feeder roles, a specific function or subfunction where leadership capability gaps have been identified (e.g., general managers, sales, research and development (R&D) leaders, the finance/CFO bench, etc.), or even a particular talent pool within a set of roles (e.g., women leaders in line management positions, digital marketers with customer experience, business unit leaders with global experience and high mobility, etc.). Because the focus is specifically on the types of talent to be reviewed and planned for in the organization, the target audience for 360 Feedback needs to follow the individuals, the pool, or the roles in order to be useful in a TM context.

Designing the Appropriate Measurement and Reporting Tools

Once the strategic content has been created and the most important target individuals or talent pools have been identified, the next step in the process is building and implementing the best (i.e., highest quality from a measurement and usability standpoint) tools and reporting methods. Much has been written about the design of 360 Feedback measures over the years (e.g., Bracken et al., 2001, 2016; Lepsinger & Lucia, 1997), so the best advice is to follow the established guidelines: high-quality behavioral items, appropriate scales, system-enabled calibration tools such as simultaneous ratings (i.e., allowing raters to provide ratings of others at the same time to enable comparisons), and rater training to drive the appropriate measurement quality and variability (see Chapter 15). There is little difference in how a behavioral item should be designed from a TM perspective versus any other. Transparency and messaging are also extremely important (see Chapter 28) and are touched on in the next section. What does matter, however, in the use of 360 Feedback for talent reviews and decision-making are two relatively basic yet critical elements in this part of the process: (a) building the right level of validation into the model being deployed and (b) ensuring the most appropriate scoring or weighting methodology in the reporting process. When it comes to measurement, there is little value in having a tool that is neither reliable nor valid. While some HR professionals and even TM practitioners (typically those without a measurement background) are less concerned about using tools with little empirical evidence of validity, the vast majority would prefer to use a validated 360 Feedback assessment. As suggested in Chapter 29, the need for validation may be much more than a preference. Given this is one of the requirements of strategic
360s, it makes perfect sense that it would be important to the use of a tool in TM contexts. What is critical here, however, is that because the data from TM-related applications are being used to evaluate and make decisions about people that will impact their lives and the success of the organization, having a robust model and validated 360 tool in place is not a nice-to-have; it is required to protect the organization from the legal risk of adverse impact and other potential issues. The process of validation can take different forms and is beyond the scope of this discussion; see the Society for Industrial and Organizational Psychology's (SIOP's) Principles and Practices (2003) as well as industrial–organizational psychology books for more traditional (e.g., Levy, 2003; Scott & Reynolds, 2010) and alternative validation strategies (McPhail, 2007). Whether the 360 Feedback tool has been customized to an organization or not, the key is ensuring that appropriate and rigorous research has been conducted that provides empirical support for the use of the results for decision-making in that system. Many TM professionals believe that 360 tools "validated" externally by vendors and consulting firms are acceptable for use in their own organizations, when in fact that practice puts them at risk (see Chapter 29 for more on the legal implications of 360 Feedback). The second element that can be somewhat different in the use of 360 Feedback for TM applications is the method of scoring or weighting used to create summary ratings across various rater groups. While research (e.g., Mount, Judge, Scullen, Sytsma, & Hezlett, 1998) has suggested that raters in 360 Feedback processes might best be grouped by their patterns of scores to maximize measurement quality, rather than by more intuitive intact groups such as direct reports or peers, this is not practical from a utilization standpoint and as a result has not been adopted.
Instead, the dominant method for summarizing feedback across individual ratings into “all-​rater” scores from others has historically been to simply average the unique individual ratings given across all observers (e.g., all direct reports, all peers, etc.) and then roll that data up further to create a summary. In short, if you have one manager, four direct reports, and four peers, that results in a sample size of nine ratings on each behavior (and each set of behaviors that makes up a competency) when computing the summary statistics that are reported and discussed. While this approach is simple and works well enough for development purposes (i.e., it represents a summary of your strengths and opportunities as observed by others collectively), it does not always align well to the organizational perspective on how talent should be viewed. In the case of one, four, four, the manager ratings on each behavior are providing only 11% weighting overall to the all-​rater number. When you consider the significance of manager perceptions in a talent review process (and particularly because they are likely the ones making the future potential call coming into the meeting
as well as ensuring the individual is developing in his or her current role), that degree of underweighting can result in too wide a gap in practice. As a result, the manager may be more likely to discount the results in the talent review process if they are highly divergent from his or her own perspective. While this would appear to fly in the face of the previous discussion regarding the use of 360 Feedback for leveling the playing field, it in fact does not. When you consider that whichever rater group has the largest number of responses will influence the all-rater scores significantly, it is apparent that even with rater nomination review processes in place to ensure quality (balanced) raters are selected, the size of different internal teams can influence the data significantly. If the pattern, for example, were 1 manager, 10 direct reports, and 3 peers, the results would clearly favor the direct report perspective (positively or negatively). Unless there is a strategic reason for the tool to be focused on one rater group or another, the preferred approach in a TM context for 360 Feedback is to ensure equal weighting of the rater groups for the all-rater categories at every stage of the report. Thus, behaviors, competencies, and dimensions or factors would all reflect successive rollups of averages of averages. While this downplays the differences in sizes between groups (and thus may not reflect a focal participant's prevalence of individual behaviors with others in totality), it does enable a consistent calibrated method for averaging data across rater groups. Thus, with this approach, the manager perspective always counts as one third (33%) of the summary rating, as do the direct reports (in total) and the peers (in total). By deploying an equal-weighting approach, a high potential with limited direct reports early in his or her career is not penalized for having too few directs or too many peers.
This method ensures a comparable scoring rubric for each set of data to be compared across individuals and results in closer alignment between the manager's assessment and the data. There will be cases when one rater group falls below the reporting threshold for confidentiality purposes (e.g., often a set of three for direct reports and three for peers independently), or when the manager or another group is missing entirely; in those cases the equal weighting creates some new challenges (e.g., the remaining groups might each be weighted at 50%), but the approach is still preferable to allowing a pure individual rollup model that can be heavily skewed. Either way, the results can still be divergent, but the manager is unlikely to see his or her rating completely ignored, as might happen with large rating group samples under the more developmental approach. Ultimately, this levels the playing field and makes the discussion more acceptable to the leaders making the talent decisions because their perspective (i.e., observations and judgment) does count for more. It does mean, however, that the manager's ratings have significantly more "weight" in the overall outcome of the results (for better or worse). This is critical to remember when interpreting and using these types of data as inputs into decision-making processes.
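As a rough illustration of the two rollup methods discussed above—raw averaging of all individual ratings versus equal weighting of rater-group means—consider the following sketch. The function name and data layout are invented for this example; only the arithmetic reflects the text.

```python
def all_rater_scores(ratings_by_group):
    """Compare the two rollup methods for a single behavior.

    ratings_by_group: dict mapping rater group -> list of individual ratings,
    e.g. {"manager": [...], "direct_reports": [...], "peers": [...]}.
    Returns (raw_mean, equal_weighted). Groups with no raters are skipped,
    so two present groups each carry 50% under equal weighting.
    """
    all_ratings = [r for group in ratings_by_group.values() for r in group]
    raw_mean = sum(all_ratings) / len(all_ratings)          # every rating counts once
    group_means = [sum(g) / len(g) for g in ratings_by_group.values() if g]
    equal_weighted = sum(group_means) / len(group_means)    # average of group averages
    return raw_mean, equal_weighted

# The 1/4/4 case from the text: one manager, four direct reports, four peers.
raw, eq = all_rater_scores({
    "manager": [2.0],                        # manager counts 1/9 (~11%) in raw_mean
    "direct_reports": [4.0, 4.0, 4.0, 4.0],
    "peers": [4.0, 4.0, 4.0, 4.0],
})
# raw ≈ 3.78 (manager view nearly washed out); eq ≈ 3.33 (manager counts one third)
```

The same successive "averages of averages" rollup would then be applied from behaviors up to competencies and dimensions.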


Implementing an Aligned Process for Accountability

The fourth and final step in the process of designing and implementing a strategic 360 Feedback process specifically for use in TM concerns the steps required for ensuring aligned accountability. Once again, many of the recommendations here would be consistent with good practice in general, such as transparency of purpose and utilization of the data, confidentiality of ratings given from protected groups, nomination of qualified raters (not just friends and positive raters), access to results, integration with other programs, and follow-​up and development support (Bracken et  al., 2016; Chapter  3; Chapter 28). The two areas that are somewhat more pronounced in the context of the present discussion are (a) digital integration and (b) contextual interpretation. These essentially go hand in hand. Focusing first on integration, while all strategic 360 Feedback should be integrated at some level with other HR processes and systems, for 360 Feedback to have the most consistent impact in a high-​potential identification process, talent reviews, and succession planning, it must be immediately available to those in the discussion and in a form that is fully connected to other sources of data. This does not mean bringing a paper report to the meeting or summarizing the results of the 360 report in a set of notes somewhere (as is often the case) or, worse, looking at the feedback offline after the meeting and decisions have already been made. Rather, decision-​makers need to have direct access to the data and insights from the tool in real time. 
The results should be digitally available and presented in relationship to other information feeds, such as other assessment tools from an MTMM process (if available), multiple years of performance ratings, high-potential talent ratings, and strengths and opportunities as observed by others in the past or following other types of programmatic inputs (e.g., leadership programs, coaching engagements as appropriate, upward feedback, network analysis, etc.). The information regarding where an individual is slated for succession, the fit to role profiles and requirements (as assessed), the job history, critical experiences, and career preferences as well as their long-term potential and destination roles should also be present. When providing all of these data (which can be overwhelming at first for senior leaders or those for whom these types of inputs are less familiar), the line and HR decision-makers in the meeting are embracing the second concept of contextual interpretation. By reviewing all of the inputs holistically, they will have a more complete understanding of the results (similar to an MTMM measurement approach) and make more informed decisions considering performance history, experiences gained and needed, as well as slated roles and destination (i.e., answering the "Potential for what?" question; Church & Silzer, 2014). While the individual's development remains important, the key
discussion in this case is around how he or she compares to others in the pipeline and where best to consider aligning their future talents vis-à-vis the variety of inputs in hand. Remember, the organization's goal is to ensure the viability and success of the business by identifying, selecting, and placing the best talent in the right roles, so talent review discussions and the resulting allocation of resources, roles, and succession decisions represent a complex puzzle to be solved, not a simple case of strengths and opportunities on which to work. 360 Feedback represents a powerful tool for doing so, but if it is not present at the meeting or is used in isolation, it is an incomplete equation.

KEY CHALLENGES INFLUENCING THE EFFECTIVENESS OF 360 FEEDBACK FOR TM

In general, it should be apparent from this discussion that the use of strategic 360 Feedback for TM-related processes, particularly as they relate to high-potential identification, talent reviews, and succession planning, presents some unique and complex challenges over and above the standard application of the tool in organizations (Church, 2013a). While the primary differentiating factors to consider have been outlined, there are a few final issues and items to watch out for that practitioners should be aware of before moving ahead with these types of applications.

1. Determining where, how, and with whom to best use the feedback results. Clearly, if 360 Feedback is to be used as part of a talent review process, the decision-makers involved need to have the systems to enable digital integration and the capability to contextually interpret that information. The challenge sometimes lies in where and who those people actually are in an organization. While one might initially think using 360 Feedback in TM with the senior-most leaders is the optimal place to start, sometimes this group may be more resistant to change in their approach to assessing and developing leaders than others earlier in their careers. Similarly, some functions (and functional leadership teams) are more data friendly as well (e.g., R&D and information technology are often functions that embrace data openly). As with some OD interventions, the best approach may be to find the right sponsors and champions of using 360 Feedback for TM and work outward from that group rather than always starting with the top of the house. Creating pull for these tools by showing the impact (as in the case example at the opening of this chapter) is one of the ways to do that.

2. The challenges and pitfalls of rater biases. There are many factors that influence an individual's ratings in 360 Feedback (highlighted elsewhere in this book).
While rater training can help improve accuracy (see Chapter 15), the bottom line
is that personal tendencies, biases, the situation, and cultural context all play a role in what is rated (as observed) at a given point in time. The consequence of these factors can be more pronounced in 360 Feedback processes for TM when the equal-weighting methodology is employed. While the approach effectively moderates certain issues among outlying direct reports or peers, it enhances others, particularly on the part of the manager. For example, if the manager is an overly positive or negative rater, the overall "average" results will be significantly influenced in the direction of those tendencies. This can represent a challenge if two different candidates are being reviewed who have been rated by managers with very different frames of reference or rating patterns (vs. reflecting actual behavioral strengths and opportunities of the individuals being compared). The best way to manage this challenge is by reviewing the ratings history (mean rating and distribution) of those managers and helping individuals calibrate appropriately in talent discussions. This is part of the contextual integration process and is enabled by having the results digitally available on demand in those situations. It also helps to have this analysis and insight work done by people with the right level of capability. Alternatively, one could use a more standard aggregate rating approach (i.e., all raters treated equally), but that has other potential measurement and political disadvantages, as described previously.

3. Ensuring feedback itself is delivered in a positive but evaluative context. One of the challenges with any assessment feedback delivery, including 360 results, is that many factors can influence the way it is received and acted on by the focal individual.
Whether it is the focal individual's own inherent reaction to feedback, the nature and type of results, the format or delivery method, or the quality of the feedback provider and the messages given, we know that feedback not acted on will have no impact and represents a waste of resources. While this is undoubtedly true at all levels, recent research has shown that while early career professionals appear to be largely unfazed by assessment feedback, whether positive or negative, even when it concerns their future potential (Church & Rotolo, 2016), at the senior-most levels the delivery process is critical for ensuring future behavior change (Church, Del Giudice, & Margulies, 2017). In the latter study, conducted as part of a senior executive assessment program evaluation, the extent to which focal participants had a positive reaction to the feedback session (regardless of how they actually scored) was significantly linked to a follow-up 360 Feedback measure conducted 12–18 months after the initial assessment. Although it can be difficult to position and deliver poor results in a positive light, the importance of doing so from a developmental perspective is clear. From a TM and bench-building point of view, it



is equally vital to facilitating the accountability and engagement that ultimately lead to behavior change. Given that the majority of TM applications are done at the middle or senior levels (vs. a pure high-potential identification process, which tends to be broader in application), this is an important insight for delivering these types of tools and somewhat divergent from traditional feedback models.

4. Resurveying for a refreshed perspective versus measuring behavioral change over time. The last major challenge from a TM perspective with the application of strategic 360 Feedback is how best to approach and interpret multiple assessments. As noted, there are many situations when it is appropriate to launch a new 360 Feedback assessment for a given talent discussion or planning process—often to provide a fresh view or new insights into the focal leader's current strengths and opportunities. There are also instances when a new 360 is requested to look for behavior change, for example, to measure improvement against the individual's development areas or as part of a TM scorecard (Church, 2013a). In general, the use of 360 for multiple "refreshed" views is easier to interpret and contextualize in a TM process than a "Time 1, Time 2" change approach. Although there are examples for which the pure behavior change model can be appropriate (e.g., during a focused coaching intervention), the number of variables that must remain constant to ensure comparability is large, and they interact in complex ways. Changes in direct reports, peers, or the manager can significantly influence the ratings one way or another even if the role is the same (and this is often a challenge as well). This often results in data that are more distracting than helpful in a TM review process.
Unless the 360 Feedback is targeted at measuring change specifically (as in the study by Church et al., 2017), the best approach may be to treat each 360 Feedback result as a new "refreshed" assessment and compare trends in strengths and opportunities at the aggregate level rather than making point-by-point comparisons of rating averages. If a leader is rated poorly one year on collaborating with peers, but a year later the score improves by 0.5 points with three different raters and a new manager, is that because the behavior changed or because the raters did? A better question might be this: If collaboration was the lowest-rated competency in a list of 10 and it remained the lowest a year later, even with different raters, then there may not have been much change after all. Of course, write-in comments are enormously helpful as well and must always be examined for context. In the end, while leadership behaviors can and do change across 360 Feedback measures, a focus on the detailed ratings is sometimes not the most helpful approach when it comes to discussing the staffing, future potential, or slating of individuals.
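The aggregate, trend-level comparison suggested here, asking whether a competency's relative rank changed rather than whether its raw average moved, can be sketched in a few lines. The competency names and scores below are hypothetical, invented purely for illustration.

```python
def competency_ranks(ratings):
    """Rank competencies from lowest rated (1) to highest rated."""
    ordered = sorted(ratings, key=ratings.get)
    return {comp: i + 1 for i, comp in enumerate(ordered)}

# Two administrations with different raters: every raw mean moved up,
# but the question is whether the relative pattern changed.
year1 = {"collaboration": 3.1, "drive": 4.2, "strategy": 3.8, "judgment": 4.0}
year2 = {"collaboration": 3.6, "drive": 4.4, "strategy": 4.1, "judgment": 4.3}

r1, r2 = competency_ranks(year1), competency_ranks(year2)
# Collaboration improved by 0.5 points yet remains the lowest-ranked
# competency, suggesting little real change in the overall profile.
print(r1["collaboration"], r2["collaboration"])  # 1 1
```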


CONCLUSION

In general, strategic 360 Feedback represents a critical tool for individual development and a popular and powerful means for assessing leadership capability in organizations. Recent benchmark surveys have shown that it is now one of the most prevalent tools for TM applications, including identifying high potentials and reviewing talent for internal assignments, slating, and succession-related decision-making. This chapter has outlined a number of areas and ways in which strategic 360 Feedback can play a significant role in these processes in both a predictive and a diagnostic manner. To be maximally effective, the process must follow the principles outlined for good strategic 360 Feedback practice, with a particular emphasis on (a) measuring and providing feedback against a future-state framework for leadership effectiveness that will drive the business forward; (b) determining the target audience for the program that will have the most strategic impact on the critical talent needs of the business; (c) ensuring a formally validated process and implementing reporting in a way that fairly balances all rater perspectives for calibrated decision-making; and (d) delivering the results in an integrated system via technology, with the appropriate level of analytics and insight capability to interpret the results in the broader context of other sources of data for decision-making.

REFERENCES

Bernardin, H. J. (1986). Subordinate appraisal: A valuable source of information about managers. Human Resource Management, 25, 421–439.
Bracken, D. W., & Church, A. H. (2013). The "new" performance management paradigm: Capitalizing on the unrealized potential of 360 degree feedback. People & Strategy, 36(2), 34–40.
Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360 degree feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(4), 761–794.
Bracken, D. W., Timmreck, C. W., & Church, A. H. (2001). The handbook of multisource feedback. San Francisco, CA: Jossey-Bass.
Burke, W. W. (1982). Organization development: Principles and practices. Glenview, IL: Scott, Foresman.
Burke, W. W. (2014). Organization change: Theory and practice (4th ed.). Thousand Oaks, CA: Sage.
Burke, W. W., & Litwin, G. H. (1992). A causal model of organizational performance and change. Journal of Management, 18, 523–545.
Cappelli, P. (2008). Talent on demand: Managing talent in an age of uncertainty. Boston, MA: Harvard Business Press.
Chamorro-Premuzic, T., Winsborough, D., Sherman, R. A., & Hogan, R. (2016). New talent signals: Shiny new objects or a brave new world? Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(3), 621–640.
Charan, R., Drotter, S., & Noel, J. (2001). The leadership pipeline: How to build the leadership powered company. San Francisco, CA: Jossey-Bass.
Church, A. H. (1995). First-rate multirater feedback. Training & Development, 49(8), 42–43.
Church, A. H. (2006). Talent management. The Industrial-Organizational Psychologist, 44(1), 33–36.


Church, A. H. (2013a). Assessing the effectiveness of talent movement within a succession planning process. In T. H. DeTuncq & L. Schmidt (Eds.), Integrated talent management scorecards: Insights from world-class organizations on demonstrating value (pp. 255–273). Alexandria, VA: ASTD Press.
Church, A. H. (2013b). Engagement is in the eye of the beholder: Understanding differences in the OD vs. Talent Management mindset. OD Practitioner, 45(2), 42–48.
Church, A. H. (2014a). Succession planning 2.0: Building bench through better execution. Strategic HR Review, 13(6), 233–242.
Church, A. H. (2014b). What do we know about developing leadership potential? The role of OD in strategic talent management. OD Practitioner, 46(3), 52–61.
Church, A. H., & Burke, W. W. (2017). Four trends shaping the future of organizations and organization development. OD Practitioner, 49(3), 14–22.
Church, A. H., Del Giudice, M. J., & Margulies, A. (2017). All that glitters is not gold: Maximizing the impact of executive assessment and development efforts. Leadership & Organization Development Journal, 38(6), 765–779.
Church, A. H., & Rotolo, C. T. (2013). How are top companies assessing their high-potentials and senior executives? A talent management benchmark study. Consulting Psychology Journal: Practice and Research, 65(3), 199–223.
Church, A. H., & Rotolo, C. T. (2016). Lifting the veil: What happens when you are transparent with people about their future potential? People & Strategy, 39(4), 36–40.
Church, A. H., Rotolo, C. T., Ginther, N. M., & Levine, R. (2015). How are top companies designing and managing their high-potential programs? A follow-up talent management benchmark study. Consulting Psychology Journal: Practice and Research, 67(1), 17–47.
Church, A. H., Rotolo, C. T., Shull, A. C., & Tuller, M. D. (2014). Inclusive organization development: An integration of two disciplines. In B. M. Ferdman & B. Deane (Eds.), Diversity at work: The practice of inclusion (pp. 260–295). San Francisco, CA: Jossey-Bass.
Church, A. H., Shull, A. C., & Burke, W. W. (2018). Organization development and talent management: Divergent sides of the same values equation. In D. W. Jamieson, A. H. Church, & J. D. Vogelsang (Eds.), Enacting values-based change: Organization development in action (pp. 265–294). Cham, Switzerland: Palgrave Macmillan.
Church, A. H., & Silzer, R. (2014). Going behind the corporate curtain with a BluePrint for Leadership Potential: An integrated framework for identifying high-potential talent. People & Strategy, 36(4), 51–58.
Church, A. H., & Waclawski, J. (2001). A five phase framework for designing a successful multirater feedback system. Consulting Psychology Journal: Practice & Research, 53(2), 82–95.
Church, A. H., & Waclawski, J. (2010). Take the Pepsi Challenge: Talent development at PepsiCo. In R. Silzer & B. E. Dowell (Eds.), Strategy-driven talent management: A leadership imperative (pp. 617–640) (SIOP Professional Practice Series). San Francisco, CA: Jossey-Bass.
Church, A. H., Waclawski, J., & Burke, W. W. (2001). Multisource feedback for organization development and change. In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback: The comprehensive resource for designing and implementing MSF processes (pp. 301–317). San Francisco, CA: Jossey-Bass.
Conger, J. A., & Church, A. H. (2018). The high potential's advantage: Get noticed, impress your bosses, and become a top leader. Boston, MA: Harvard Business Review Press.
Edwards, M. R., & Ewen, A. J. (1996). 360° feedback: The powerful new tools for employee assessment and performance improvement. New York, NY: AMACOM.
Effron, M., & Ort, M. (2010). One page talent management: Eliminating complexity, adding value. Boston, MA: Harvard Business School.
Freedman, A. (2004). The "Session C" strategy. Human Resource Executive Online. http://hrearchive.lrp.com/HRE/print.jhtml?id=5359233
Guzzo, R. A., Fink, A. A., King, E., Tonidandel, S., & Landis, R. S. (2015). Big Data recommendations for industrial–organizational psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(4), 491–508.


Happich, K., & Church, A. H. (2017). Going beyond development: Key challenges in assessing the leadership potential of OD and HR practitioners. OD Practitioner, 49(1), 42–49.
Lepsinger, R., & Lucia, A. D. (1997). The art and science of 360-degree feedback. San Francisco, CA: Pfeiffer.
Levy, P. E. (2003). Industrial-organizational psychology: Understanding the workplace. Boston, MA: Houghton Mifflin.
London, M. (2001). The great debate: Should multisource feedback be used for administration or development only? In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback (pp. 368–388). San Francisco, CA: Jossey-Bass.
London, M., Smither, J. W., & Adsit, D. J. (1997). Accountability: The Achilles' heel of multisource feedback. Group & Organization Management, 22(2), 162–184.
McCauley, C. D., & McCall, M. W., Jr. (Eds.). (2014). Using experience to develop leadership talent: How organizations leverage on-the-job development. San Francisco, CA: Jossey-Bass.
McPhail, S. M. (Ed.). (2007). Alternative validation strategies (SIOP Professional Practice Series). San Francisco, CA: Jossey-Bass.
Mount, M. K., Judge, T. A., Scullen, S. E., Sytsma, M. R., & Hezlett, S. A. (1998). Trait, rater and level effects in 360-degree performance ratings. Personnel Psychology, 51(3), 557–576.
Nowack, K. M., & Mashihi, S. (2012). Evidence-based answers to 15 questions about leveraging 360-degree feedback. Consulting Psychology Journal: Practice and Research, 64(5), 157–182.
Oliver, D. H., Church, A. H., Lewis, R., & Desrosiers, E. I. (2009). An integrated framework for assessing, coaching and developing global leaders. In Advances in global leadership (Vol. 5, pp. 195–224). Bingley, England: Emerald Group.
O'Reilly, B. (1994). 360° feedback can change your life. Fortune, 130(8), 93–94, 96, 100.
Outram, C. (2013). Making your strategy work: How to go from paper to people. Harlow, England: Pearson.
Scott, J. C., & Reynolds, D. H. (Eds.). (2010). The handbook of workplace assessment: Evidence-based practices for selecting and developing organizational talent. San Francisco, CA: Jossey-Bass.
Silzer, R., & Church, A. H. (2009). The pearls and perils of identifying potential. Industrial and Organizational Psychology, 2, 130–143.
Silzer, R., & Church, A. H. (2010). Identifying and assessing high potential talent: Current organizational practices. In R. Silzer & B. E. Dowell (Eds.), Strategy-driven talent management: A leadership imperative (pp. 213–279) (SIOP Professional Practice Series). San Francisco, CA: Jossey-Bass.
Silzer, R., Church, A. H., Rotolo, C. T., & Scott, J. C. (2016). I-O practice in action: Solving the leadership potential identification challenge in organizations. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(4), 814–830.
Silzer, R. F., & Dowell, B. E. (2010). Strategic talent management matters. In R. F. Silzer & B. E. Dowell (Eds.), Strategy-driven talent management: A leadership imperative (pp. 213–279) (SIOP Professional Practice Series). San Francisco, CA: Jossey-Bass.
Smither, J. W., & Walker, A. G. (2001). Measuring the impact of multisource feedback. In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback: The comprehensive resource for designing and implementing MSF processes (pp. 256–271). San Francisco, CA: Jossey-Bass.
Society for Industrial and Organizational Psychology (SIOP). (2003). Principles for the validation and use of personnel selection procedures (4th ed.). Bowling Green, OH: Author.
Thornton, G. C., Johnson, S. K., & Church, A. H. (2017). Selecting leaders: High potentials and executives. In N. Tippins & J. Farr (Eds.), Handbook of employee selection (Rev. ed., pp. 833–852). London, England: Routledge.
3D Group. (2016). Current practices in 360 degree feedback (5th ed.). Emeryville, CA: 3D Group.
Tornow, W. W., & London, M. L. (Eds.). (1998). Maximizing the value of 360-degree feedback. San Francisco, CA: Jossey-Bass.


Trudell, C. M., & Church, A. H. (2016). Bringing it to life: Global talent scout, convener and coach: PepsiCo's LeADing talent management into the future. CHREATE Advancing the HR Profession Forward Faster. https://www.researchgate.net/project/PepsiCo-LeAD-Program-assessment-and-development-of-leadership-potential
Valerio, A. M. (2018). Wherefore are thou all our women high-potentials? People & Strategy, 41(1), 32–36.
Yost, P. R. (2010). Integrated talent management at Microsoft. In R. F. Silzer & B. E. Dowell (Eds.), Strategy-driven talent management: A leadership imperative (pp. 641–654). San Francisco, CA: Jossey-Bass.



7 /// USING STAKEHOLDER INPUT TO SUPPORT STRATEGIC TALENT DEVELOPMENT AT BOARD AND SENIOR EXECUTIVE LEVELS

A Practitioner's Perspective

PAUL WINUM

The purpose of this chapter is to offer perspectives about using stakeholder input (also referred to as 360 Feedback and multirater feedback) to support high-​value strategic talent development at the board and senior executive levels of organizations. The perspectives offered are based on my experiences working in and consulting to more than 100 organizations over the past 35 years. The chapter addresses the use of stakeholder input as a tool for the development of boards of directors and for use with cohorts of executives as part of an executive development program intended to drive a strategic business agenda. I offer thoughts about the context, considerations, and methods for using stakeholder input for each of these purposes as well as provide case examples to illustrate. Last, a summary of key learning and insights is offered in the hope of helping other practitioners to use stakeholder input effectively in the future.


STAKEHOLDER INPUT FOR USE IN BOARD DEVELOPMENT

Context and Considerations

In the last decade, there has been a significant uptick in the expectations for boards of directors to deliver value to the organizations they direct. Spurred to a large degree by the calamitous swoon in the financial markets and among financial services companies in 2008, boards have increasingly come under scrutiny for the way they oversee the chief executive officers (CEOs) and management teams of companies across all sectors, particularly when there are serious performance issues. In response to these pressures, boards have increasingly taken steps to elevate the effectiveness of their governance processes and impact.

One of the ways boards have done this is through their annual evaluation process. Conducting an annual board evaluation is a requirement for companies listed on the New York Stock Exchange (NYSE), as is an annual evaluation of each board committee. Also, the National Association of Corporate Directors recommends an annual board evaluation as a best governance practice for all public and private companies. For many years, this requirement was met through a board self-assessment, usually comprising a brief survey administered by the Nominating and Governance Committee or an outside law or accounting firm.

A 2016 research study conducted jointly by NYSE Governance Services and RHR International, which polled 620 directors of NYSE-listed companies, found that less than half (48%) of respondents indicated their board evaluation process was very effective. Comments by the responding directors suggested many factors that can undermine the candor and ultimate utility of a board's self-assessment. Also, while 90% of the directors surveyed said they would value receiving individual feedback on their performance as a director, only 39% said they received such feedback as part of their boards' past evaluation process. That is all changing.
Best practice now is to engage an objective outside firm or consultant to conduct a board evaluation at least every second or third year. In addition, incorporating individual director feedback in the board evaluation is an emerging best practice. Indeed, responsible, conscientious directors want to know whether they are contributing and whether there is anything they can do to contribute more in their board service. Because directors and board committee members often interface with the CEO and members of the executive management team, receiving their input as key stakeholders is also emerging as a best practice in board evaluation. The challenge is doing this in a skillful, sensitive, and effective manner.

Inviting stakeholder input from fellow directors and members of the management team can be a risky proposition. Boards are very private social systems. Board members


often develop strong and long-lasting relationships with one another. In many cases, board members knew each other before joining the board and often had relationships with the CEO. Asking directors for feedback about each other could strain relationships if it is not gathered and managed well.

When it comes to inviting stakeholder input from members of the management team about the board, there is the risk that always comes with providing upward feedback: potential career-threatening retaliation in some form. Yet the members of the management team who interact with the members of board committees often have direct experience of how the board is functioning and adding value—or not. It is the same with the CEO, who is often both a fellow board member and a person with a reporting relationship to the board. The perspective of the CEO about the working partnership with the board and the contributions of individual directors is invaluable but usually needs to be gathered and delivered very carefully.

Methods

With the imperative for boards to up their game, and the case presented for director and management team input as one tool for giving a board and individual directors feedback about the value they are adding, the question now is, Which method(s) will yield the desired outcomes? There are three parts to the method question:

1. Which stakeholders will be asked to provide input? 2. About what will they be asked to provide input? 3. How will that input be gathered and fed back to the board and individual directors?

Let us take each of these sequentially. First, if you think about any organization, there are multiple stakeholders who have a vested interest in how the organization performs: the investors, the employees, the communities where the organization operates and delivers its products and services, suppliers, and of course the board members themselves. Of all of these, however, only the board members and members of the management team have a direct line of sight into how the board actually operates. So, the directors themselves and the members of the executive team who interface with the board and its committees are the stakeholders who should be invited to provide input.

What, then, should these stakeholders be asked to provide feedback about? There are many options when it comes to topic areas for relevant input. A 2016 Stanford University study of board evaluation practices recommended that boards implement a diagnostic process that covers a number of governance dimensions.

TABLE 7.1 RHR's Diagnostic Areas for Board Evaluation

Purpose and Strategy: How does your board intend to deliver value to the enterprise?
Composition and Structure: What expertise does your board need to deliver its value proposition, and through what structures?
Risk Management and Safeguards: How does your board protect the enterprise from risks and threats?
Board Culture: How well do your board members work together?
Board and CEO Partnership: How do the board and CEO collaborate in playing their respective roles?
Board Leadership: How is your board led by its chair?
Board Renewal: How does your board continually evolve and improve its effectiveness?

Based on that study and a review of the literature on board evaluation, RHR International constructed a board development survey to assess the areas shown in Table 7.1.

The methods for gathering stakeholder input on these topics include a written survey, interviews conducted in person or by phone, or both. Given the importance of obtaining a comprehensive picture of how the board is functioning in each of these areas, an interview method is essential to generate a nuanced understanding of a board's strengths and areas for development. Data generated by a survey are also very valuable for showing the areas where a board excels or needs to develop and for comparing board effectiveness year after year. In addition to gathering input on how the board as a whole and its committees are functioning, the interview and survey methods can and should also be used to gather feedback about the contributions and development areas of each director. To maximize candor, it is important that complete confidentiality be given to each of the feedback providers.

Case Illustration

To illustrate the use of stakeholder input for a board development purpose, here is an example of its application with a recent client. The initial referral for the engagement came from a former client who had migrated to the chief human resources officer (CHRO) role in a Fortune 500 company. A call was arranged with the lead director, chair of the Nominating and Governance Committee, and the chair/​CEO to discuss the context for the request, desired outcomes, and the process steps. The chair of the Nominating and Governance Committee was new to her role


and wanted to upgrade the quality and utility of the board evaluation process, which had previously consisted of a brief internal survey. In addition to obtaining an assessment of the board's effectiveness, she wanted the process to generate feedback about the CEO and each individual director. The intention was to use the output from the process to assist the board and CEO in elevating individual and collective leadership impact as the company navigated particularly challenging industry headwinds and considered new strategies for the future. During the call, the following process steps were agreed to:

1. Preparation of a communication to all directors and members of the executive management team regarding purpose and process;
2. Presentation to the full board on best practices in governance and on this board development process;
3. Identification of several customized questions for input from all selected stakeholders;
4. Distribution of RHR's Board Development Survey℠ online to every director and members of the executive management team1;
5. Scheduling and execution of confidential 90-minute meetings (or, in some cases, conference calls when logistics demanded) with each director and member of the executive management team about how the board and committees were functioning;
6. Solicitation of developmentally oriented feedback about each board member and the CEO, gathered during the 90-minute interviews;
7. Benchmarking of the board survey results against RHR's Great Boards database2;
8. Generation of two summary reports: one on overall board and committee effectiveness and a second summarizing feedback for each individual board member and the CEO;
9. Delivery of the reports to the engagement sponsors (chair of Nominating and Governance and lead director), followed by delivery to and discussion with the chair/CEO;
10. Presentation of results to the full board with discussion;
11. Individual one-on-one feedback delivered to each director by the RHR team; and

1. RHR's Board Development Survey contains 54 items in the categories Board Purpose and Strategy, Composition and Structure, Risk Management and Safeguards, Board Culture, Board Leadership, Board/CEO Partnership, and Board Renewal.
2. The Great Boards database uses several measures of market and organizational performance and director ratings of board effectiveness to compare survey results with boards that score at or above the 80th percentile.


12. Discussion of an action plan to implement recommendations for board and individual director development.

The process was executed over a period of 3 months from start to finish. There was a very positive response to the process from every director and member of the executive management team. The interviews and survey summaries reported many positives in how the board and CEO were operating as well as several important areas for development. In addition, each director and the CEO received constructive, actionable feedback about how to maximize their respective contributions to the organization going forward. Directors were appreciative of the feedback they received, as it was offered both as an acknowledgment of positive contributions and as suggestions to enhance contributions in the future. Some of the themes in the feedback included speaking up more in board meetings, respectfully challenging the CEO more, offering more declarative positions on key issues, and planning for succession in committee leadership. The lead director, board chair/CEO, and chair of Nominating and Governance, who were given summaries of all feedback, then took responsibility for overseeing the implementation of recommendations to improve the board's effectiveness.

STAKEHOLDER INPUT USED TO DRIVE A STRATEGIC BUSINESS AGENDA

While the use of stakeholder input in board development is relatively new, 360 Feedback has been in use as a tool to aid the assessment and development of leaders since the 1940s. Just as beauty is in the eye of the beholder, effective leadership is judged in part from the perspectives of those who are led by and interact with leaders. If the quality, candor, and specificity of the feedback are high, a well-executed 360 Feedback process can be very helpful in enhancing self-awareness of leader impact and informing areas for development. The case illustration that follows describes the use of multirater feedback with a large cadre of executives charged with leading a cultural transformation and a new strategy.

Case Illustration

One of the great company success stories of the last decade is that of Mastercard®.3 Since going public in 2006, shares of Mastercard’s stock have soared from its

3. While permission to describe the Executive Leadership Program has been given by this client, some details about the strategy and leadership behaviors targeted by the program have not been specified to protect the company's proprietary information and competitive advantage.


split-adjusted opening of $3.90 in 2006 to $148 today—an increase of more than 3,700%. While there are many reasons behind this incredible run, one that stands out is the profound shift in corporate strategy and culture led by CEO Ajay Banga.

With its pre–initial public offering (IPO) roots as a transaction-processing shop for credit card–issuing financial institutions, the company operated almost like a trade association, owned and managed by the banks it served. When antitrust litigation propelled divestitures by the banks and the decision was made to become a stand-alone public company, Mastercard's strategy and culture needed to be transformed. Following Bob Selander, who led the company through the IPO process, Ajay joined the company in July 2009 as president and CEO successor. For the company to compete and grow in a competitive market, it needed not only to continue serving issuing financial institutions but also to generate new sources of revenue. Ajay therefore led the company's shift to an information and technology company that leveraged resources across the company to deliver products and services to a broad array of customers around the world.

The new strategy and the underlying culture needed to drive it would require a new set of leader and employee behaviors throughout the organization. The new talent profile centered on four core behaviors (confidential and proprietary to the company). Installing these behaviors in the company was a challenging imperative, one met in part through an executive development program and the use of 360 Feedback as a leadership development tool. Mastercard's Executive Leadership Program (ELP) was one of Ajay Banga's brainchildren and was designed to educate and catalyze leaders in mission-critical roles to effectively transform the culture of the company.
Initially targeting the top 75 leaders who reported to the executive committee, the program was so successful that it was expanded over a 6-​year span to include the top 400 leaders. The program included an executive education component, an executive coaching component, and the use of a customized 360 Feedback process. All three program components were designed and executed to build and strengthen the leader capabilities necessary to effectively drive Mastercard’s strategy and culture. The 360 Feedback component was intended to generate clarity and awareness about the behaviors that would be needed and expected from leaders and to build self-​awareness about areas of particular strength and development for each leader. It included ratings across each of the four targeted leadership dimensions as well as importance ratings for each behavior. (Each leadership dimension was assigned an importance ranking by raters depending on the leader’s role.) In all, more than 2,000 staff throughout the company provided stakeholder input on the executive program participants. The following are the steps taken to implement the 360 Feedback process:


130  //  360 for Decision-Making







1. Specify requisite leadership behaviors to drive strategy and desired culture.
2. Construct a customized 360 Feedback survey instrument based on requisite leader behaviors.4
3. Have each participant select 15–20 stakeholders from among peers, subordinates, and supervisors to provide input; these selections were approved by the participant’s manager.
4. Draft and distribute communication to stakeholders with a link to the survey.
5. Process all input data and deliver feedback results to program participants.
6. Set development action plans for each participant based on 360 Feedback survey results, with the assistance of the executive coach, and review them with the manager.
7. Execute coaching sessions over a 6-month period to support the implementation of development action plans with engagement of the manager.
8. Send the Post/Then Progress Update Survey to 360 Feedback raters.
9. Review Post/Then Progress Update results (described in the material that follows) with each ELP participant and close out coaching engagements.
10. Summarize and report progress results to program sponsors.

The Post/Then Progress Update Survey5 was distributed to all raters who provided the original feedback. Rather than readminister the original 360 Feedback survey, raters were asked to indicate the degree of observable change in leader behavior at the end of the program compared to when the program started. The items in the Post/Then Progress Update Survey were as follows:

1. Please rate the overall effectiveness of [name]’s leader behaviors prior to and after ELP program participation (on a scale from 1 to 7).
2. If different from six months ago, what are the 1–2 changes that you have observed?
3. What has been the benefit to you or to Mastercard of any of these changes?
4. Please offer any suggestions that you think will enhance this leader’s contribution and impact as a leader at Mastercard in the future.



4. The ELP 360 instrument employed was Lominger’s Voices instrument.
5. Post/Then methods of program evaluation are intended to negate response bias effects that can influence pre/post evaluation methods. See Howard, G. S., Ralph, K. M., Gulanick, N. A., Maxwell, S. E., Nance, D. W., & Gerber, S. K. (1979). Internal invalidity in pretest-posttest self-report evaluations and a re-evaluation of retrospective pretests. Applied Psychological Measurement, 3(1), 1–23.


TABLE 7.2  ELP Progress Update Results

Executive Leadership Behaviors    Pre-ELP Highly Effective (%)    Post-ELP Highly Effective (%)
Behavior 1                        32.8                            63.5
Behavior 2                        27.2                            58.9
Behavior 3                        26.3                            55.4
Behavior 4                        31.9                            62.0
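As a simple illustration of the arithmetic behind Table 7.2 (the code and names below are our own sketch, not part of the Mastercard program), the percentage-point gain in the share of raters marking each behavior “highly effective” can be tallied as follows:

```python
# Illustrative sketch only: pre- and post-ELP percentages of raters
# marking each targeted behavior "highly effective," from Table 7.2.
table_7_2 = {
    "Behavior 1": (32.8, 63.5),
    "Behavior 2": (27.2, 58.9),
    "Behavior 3": (26.3, 55.4),
    "Behavior 4": (31.9, 62.0),
}

def point_gains(results):
    """Return the pre-to-post gain in percentage points for each behavior."""
    return {name: round(post - pre, 1) for name, (pre, post) in results.items()}

gains = point_gains(table_7_2)
for behavior, gain in gains.items():
    print(f"{behavior}: +{gain} percentage points")
```

Each behavior roughly doubled its “highly effective” share, a gain of about 29 to 32 percentage points.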

The aggregate results of the Post/Then survey showed significant behavioral change in the positive direction across each of the four targeted leader behaviors for program participants after the start of the program. Table 7.2 displays results of the Progress Update Surveys showing the changes in the targeted Executive Leadership Behaviors before and after ELP participation.

In addition to the targeted feedback to participants that focused their development efforts, the completion of the initial 360 Feedback instrument and the postprogram Progress Update Survey by more than 2,000 employees at Mastercard was an intervention in itself. All of those survey respondents became educated about valued leadership behaviors and were primed to pay attention to changes in leaders through the postprogram survey. In my more than 35 years as a consulting psychologist and executive leader, the results of this program and the impact it has had on the culture and success of the company are the best I have ever seen.

SUMMARY INSIGHTS

Over the course of my career, I have seen multirater feedback processes offer a great stimulus to the development of many executives and managers. I have also seen instances when the process did not stimulate much more than a paper exercise of complying with a request from a human resource professional. What, then, are the conditions that result in the most value? There are a few that seem to make that difference.

First, the connection between the feedback process, the behaviors to be rated, and the strategic importance to the organization’s success must be clear, credible, and convincing to all the stakeholders. The communication about the intended purpose, the process, and how the data will be used helps make that connection.

Second, there must be a high level of confidence that stakeholders’ input will not be attributed to them so that they can be as candid as possible. Inviting feedback from peers and subordinates can be a politically risky request, and stakeholders are often inclined to



pull their punches, thus diluting the content and quality of the feedback data. It is quite helpful if the subject of a stakeholder input request asks the feedback providers directly and encourages candor.

Third, it is important that there is accountability and follow-through on the feedback. One of the worst things an organization can do to erode confidence in management is to invite input about the organization or its leaders and then do nothing with the output. Whether in a group intervention like the two case illustrations presented in this chapter or in a feedback process for a single individual, the stakeholders deserve to know what the summary feedback was and what actions the requester is going to take to address areas for improvement. In every case when 360 Feedback is requested, the person who asked for it should thank the feedback providers for taking the time to provide the input, convey the key insights gained from the exercise, and commit to action steps. A mechanism for checking progress against those action commitments is important and should be visible to the stakeholders. In the Mastercard example, the Post/Then Progress Update Survey provided some measure of accountability.

Just the act of inviting feedback from colleagues, subordinates, managers, and other stakeholders sends a powerful message: “How I am impacting you matters to me.” When commitments for change in specific areas are made and observable changes happen following a feedback request, those who provided the input feel empowered and that they make a difference. When all these conditions are present, the use of stakeholder feedback has the ability to help drive strategic talent development in individuals, in groups, and, indeed, in the entire organization.



SECTION II

360 FOR DEVELOPMENT



8 // APPLICATION OF 360 FEEDBACK FOR LEADERSHIP DEVELOPMENT

CYNTHIA MCCAULEY AND STÉPHANE BRUTUS

The practice of 360 Feedback is strongly rooted in efforts to develop leadership in organizations. Although feedback as a mechanism for improving employee performance has a long history, the practice of systematically collecting evaluations of a focal leader’s behaviors and skills from the perspective of that leader’s manager, peers, and direct reports is a more recent development. An effort in the 1970s to identify assessment tools for providing feedback to leaders from coworkers yielded only 24 instruments; however, most of these either were designed for upward feedback only or were research questionnaires, included because they had the potential for providing feedback (Morrison, McCall, & DeVries, 1978). Four decades later, there are hundreds of 360 Feedback instruments available in the marketplace, and many organizations have created their own customized instruments. It is now common for leadership development programs and coaching engagements with leaders to include a 360 Feedback tool early in the process. And organizations are increasingly making use of 360 Feedback as a regular talent process, much like annual performance evaluations or employee engagement surveys. Although organizations today use 360 Feedback for multiple talent management purposes, leader development is an element of nearly all 360 processes (3D Group, 2016).

Technology that made the collection and compilation of data faster and easier is certainly a major factor in the growth of 360 Feedback. But, we would argue that the


136  //  360 for Development

growth is also due to the gap it fills: Leadership is a social process, and the manager often has a limited view of the variety of interactions that make up that process. Peers and direct reports also have useful views, but the power dynamics of hierarchical organizations often deter them from sharing those views. The rater anonymity built into the 360 Feedback process creates a safer route for receiving honest input from the broader social system.

For leadership development, 360 Feedback is increasingly used at all levels in the organization, makes use of tools that are customized to reflect an organization’s leader competency framework, and is viewed as a core talent process for leadership development. As a result, 360 Feedback is becoming more strategic: It focuses on competencies valued by the organization, is designed to create change in leaders across the organization, supports the development of a feedback culture, and informs broader talent management processes (Chapter 2). In its application to leadership development, 360 Feedback must influence the development of individual leaders; yet, to have a strategic role in an organization, it also needs to enhance the leadership capacity in the organization for meeting its strategic agenda.

In this chapter, we highlight well-established practices for using 360 Feedback for individual leader development and emerging practices for its use in the collective development of leadership capacity. But first, we address the debate about whether 360 data for leadership development should be used only for development purposes.

360 DATA FOR “DEVELOPMENT ONLY” VERSUS “DEVELOPMENT PLUS”

A key complexity in the design of a 360 Feedback process for leadership development is the trade-off between using the data for development purposes only and using the results for both development and decisions about the leader (e.g., performance evaluations, selection into high-potential pools). Some organizations opt for complete data confidentiality, leaving any decisions about sharing the feedback report in the hands of the focal leader and arguing that confidentiality creates more honest feedback from raters and more openness to that feedback by leaders. Other organizations use the same 360 process for leadership development and for organizational decision-making, careful to be transparent and consistent about how the data will be used and who has access to what level of detail. They argue that the 360 data have value beyond development and that the organization would be remiss in not using these data to make better decisions. They also point to the enhanced motivation to improve when individuals know that how others rate them will be used in decision-making. Some organizations work to avoid the debate by using different 360 processes for developmental feedback and for organizational


Application of 360 Feedback for Leadership Development  //​ 137

decision-making about leaders; however, this can become cumbersome and potentially confusing to raters. Of course, there is a range of options between the two extremes of complete data confidentiality and data being available for any type of personnel decision. Data may be shared beyond the leader but remain in the realm of leadership development. For example, elements of the data can be shared with the boss for use in formal development planning, with an internal coach working with the leader, or with human resource professionals who combine data across individuals to identify the developmental needs of leaders across the organization.

There is limited research on how the purpose of 360 Feedback impacts raters. There is some evidence that employees have a more positive view of 360 Feedback when it is used for developmental purposes than when it is used for decision-making (Bettenhausen & Fedor, 1997; 3D Group, 2016). When asked, slightly over a third of raters say that they would change their ratings if the data were used for evaluative rather than purely developmental purposes (London, Wohlers, & Gallagher, 1990). One can point to a more extensive body of research on performance appraisals that shows higher ratings on average when evaluations are obtained for administrative purposes compared to those obtained for research or development purposes (Jawahar & Williams, 1997).

Designers of 360 Feedback for leadership development need to carefully consider these options. An organization’s culture (e.g., openness of communication, deference to hierarchy) and its history with 360 Feedback are important considerations in deciding whether the data will be used for purposes beyond leadership development.
As an organization matures in its use of 360 Feedback, it may decide to move from a development-only purpose to development plus decision-making; however, such a transition needs to be deliberate, transparent, and supported by necessary changes in the system and tools to better fit with a broader purpose. The “development-only” versus “development-plus” debate has existed as long as 360 Feedback has existed and, in our view, can only be resolved locally.

360 FEEDBACK PRACTICES FOR INDIVIDUAL DEVELOPMENT

The success of 360 Feedback for leadership development rests with its ability to stimulate desired changes in focal leaders. The good news is that there is an extensive body of 360 Feedback research (see Atwater, Brett, & Charles, 2007; Bracken & Rose, 2011; Nowack & Mashihi, 2012) as well as broad experience using these tools in organizations (see Bracken, Rose, & Church, 2016; Effron & Ort, 2010; Fleenor, Taylor, & Chappelow, 2008; Leslie, 2013), all of which point to useful practices in designing and implementing



360 Feedback for individual leader development. Many of these practices are aimed at providing clarity to leaders about changes they can make for enhanced effectiveness in their organization and at motivating leaders to pursue these changes. Equally important are postfeedback development opportunities that are as thoughtfully crafted as the feedback process itself. To maximize its developmental impact, 360 Feedback should be utilized as a tool within a broader leadership development system.

Feedback That Provides Clarity About Needed Changes

Clarity about needed changes is enhanced when the focal leader receives feedback messages that are specific, relevant, and credible. Many design features of the 360 Feedback process impact the quality of feedback, including the instrument itself, the raters, and the delivery of feedback (Chapter 15). Items on the instrument should describe precise, observable behaviors and skills. Items that ask about traits (e.g., friendly, creative); outcomes (e.g., makes me feel valued, impresses customers); or broad capabilities (e.g., a strategic thinker, talented at dealing with people) are not as helpful for the leader seeking guidance about needed changes. Items should also be relevant to the leader’s context, focusing on leader competencies that the organization has identified as important and behaviors particularly germane to the leader’s organizational level. Accumulating and sharing evidence that scores on the instrument’s leadership dimensions actually do predict leader effectiveness in the organization will bolster perceptions of the relevance of the instrument’s content (Chapter 14).

There has been considerable discussion about the best rating scales for items on 360 instruments. The vast majority of instruments use a five-point Likert scale, asking raters to indicate either the frequency of the described behavior or the degree to which the leader exhibits a specific capability. Use of these types of scales has been criticized for yielding a lack of variability in responses across leaders and for low usefulness when it comes to identifying the most important changes the leader should make, prompting exploration of alternative approaches. A promising option is social comparison scales that ask about the leader’s skill relative to others (e.g., among the best, about average, among the worst).
There are also compelling arguments for asking raters more directly about what the leader needs to change, for example, asking whether the leader needs to do more of, not change, or do less of each behavior, or asking which skills are the leader’s top strengths and developmental needs.

Raters themselves—who they are, their willingness to give honest feedback, their understanding of the process—play a major role in the relevance and credibility of the feedback. Feedback should be invited from all coworkers who have an opportunity to



regularly observe the leader in action. Focal leaders typically generate a list of raters that their managers review and may augment. Raters need to be oriented to the 360 process: how they were selected, how their ratings will be used, and what the outcomes of the process are. To encourage honest responses, rater anonymity should be guaranteed.

Rater training is an important but underutilized practice for improving the quality of feedback. Such training can reduce common rating errors, such as recency effects and halo, thus increasing the accuracy of ratings. Educating raters about the leader competencies being evaluated, reviewing the items with them, and creating a shared understanding of the rating scale increase consistency in how raters complete the instrument.

It would seem to go without saying that the feedback data need to be delivered in ways that help the focal leader discern key messages in the feedback. Before receiving their individual reports, focal leaders should be adequately prepared, ideally in a live group session where questions can be addressed. This is a time to remind participants of the purpose and goals of the feedback process, encourage them to be open to the feedback (e.g., others have invested their time to provide this feedback) and to spend an adequate amount of time with their data (e.g., do not make the common mistakes of accepting or rejecting the feedback too quickly), and explain how to read and interpret the report. The most useful reports are comprehensive yet lead individuals through the data in a step-by-step process, highlighting the results that they should pay most attention to and offering interpretation and suggestions along the way. The visual presentation of the feedback information is important: It is currently one of the weaknesses of most feedback reports.
The developers of 360 Feedback instruments often overemphasize assessment quality at the expense of the feedback report design. The field of data visualization should be tapped into more regularly in designing feedback reports that translate large amounts of data into simple yet meaningful visuals.

The most potent way to support focal leaders as they make sense of their feedback and begin to formulate action steps is through a private consultation with a feedback facilitator who is experienced with the instrument being used. These one-on-one sessions should take place after the focal leader has had some time to digest and reflect on his or her feedback. A facilitator encourages sense-making in the face of what can be an overwhelming amount of data by asking questions about what the focal leader sees in the data and about his or her reactions to the feedback (e.g., what is most surprising, most disturbing) and by helping the leader identify themes in the data and seek to understand why different rater groups may have different perceptions of the leader’s capabilities. The facilitator also encourages the focal leader to move from sense-making to action planning,



including identifying any puzzles in the data that are best addressed by going back to coworkers for more information.

Feedback Processes That Motivate Change

A feedback process also needs to spur commitment to change and to establish mechanisms that will help maintain that commitment over time. Growth and change take sustained effort. Given the competing demands on a leader’s time and energy, these efforts will wane without strong motivation. Motivating change starts with setting an organizational expectation that 360 Feedback will be used for development. Setting goals based on the feedback then generates motivation for specific changes for each leader. Accountability mechanisms and tracking progress encourage leaders to work toward their goals and realize those changes.

When initiating a 360 Feedback process for leadership development, the message to focal leaders and their raters needs to be clear: We are investing in this process because feedback is key for focusing individual development efforts in directions that matter for our collective success; we expect to see positive changes in each leader who participates. That message is reinforced when the feedback focuses on leadership competencies that the organization has identified as critical and that are already used in the organization (e.g., in performance appraisals and in leadership training programs). Having senior executives who participate in the feedback process and whose subsequent change efforts are visible also strongly reinforces the expectation. Yet, the message of “we expect you to change” is necessary but not sufficient. There is an equally important element to be communicated: “and we know you can change.” By encouraging a sense of self-efficacy, organizations are reinforcing a growth mindset (Dweck, 2006), that is, a belief that skills can be developed, which in turn leads to a desire to learn and master challenges.

A 360 Feedback process will have minimal impact if it does not generate development goals and action plans for achieving those goals. The research is clear: Specific improvement goals drive behavior change.
Resources for development planning, including a plan template, should be shared with leaders when they receive their feedback reports. Access to feedback facilitators helps ensure that leaders will move from drawing insights from their data to identifying potential goal areas. These initial goal areas should be vetted with the leader’s manager. Input from peers and direct reports is also valuable. The final plan should focus on a few challenging goals that, if realized, will benefit the organization, increase the likelihood of tangible rewards for the leader, and generate personal satisfaction. The plan should also articulate how the focal leader will use multiple learning strategies (e.g., practice, ongoing feedback, training programs, developmental relationships, or



challenging assignments) to reach the goals. Sharing the plan with the manager and getting his or her approval increases the likelihood that needed resources and support in implementing the plan are available. Sharing feedback results with others and involving them in postfeedback action planning also creates a greater sense of accountability on the part of the focal leader to follow through on efforts to change.

A three-way session between the focal leader, his or her manager, and a coach is a particularly effective strategy for motivating action in response to the feedback. Ideally, the session is driven by the focal leader, who has been coached on how to make the most of the session. The first part of the session is an authentic conversation with the manager not only about the feedback itself but also about the manager’s expectations of the focal leader. The aim of the second part of the session is to agree on the most important development goals and how the manager will support work on those goals. Follow-up sessions with the coach and regular check-ins with the manager on goal progress create two accountability partners for the focal leader.

Postfeedback coaching may seem like a luxury reserved for senior leaders and those designated as high potential. However, organizations that invest in 360 Feedback for leaders at all levels of the organization are increasingly leveraging that investment with lower cost approaches to postfeedback coaching. For example, individuals who serve as feedback facilitators may continue supporting change efforts through three 1-hour coaching sessions conducted over the phone and spaced over 6 months. One organization has a cadre of managers trained in coaching skills that they use to provide targeted, short-term coaching to focal leaders (who are not their direct reports) following feedback. Finally, tracking progress toward goals also motivates focal leaders to continue exerting effort to improve.
For example, at the Center for Creative Leadership (CCL), we offer participants the opportunity to get a second round of 360 Feedback to check on their goal progress. This second round is more focused, asking raters about the behaviors and skills the focal leader is working on, and asks raters directly about the amount of change they have observed and the impact of those changes. Some organizations are also experimenting with user-driven feedback tools that allow focal leaders to ask for anonymous feedback from coworkers in real time, for example, immediately after a meeting or at the completion of a major task (Chapter 5).

360 Feedback Embedded in a Broader Leadership Development System

The impact of 360 Feedback is enhanced when the feedback is part of a broader leadership development system. Such systems offer feedback from multiple assessments, opportunities to practice new behaviors and skills and to learn from others, as well



as ongoing support and accountability for development. 360 Feedback is typically embedded in a development system via feedback-intensive development programs, formal leadership development initiatives that extend over time, executive coaching, and the organization’s development-planning and succession-planning processes.

In feedback-intensive development programs, 360 Feedback is a central tool. These programs use multiple sources of data to deepen self-awareness and inform development goals. Personality measures provide insights into why certain behaviors come more naturally to the focal leader. Assessments of interpersonal preferences can explain why the leader’s typical ways of interacting with others do not always meet their expectations. Examining how personality and preferences shape behavior helps focal leaders see how their internal sense-making may need to shift to change certain behaviors or how particular skills may be more difficult to develop because they require leaders to “go against their grain” in some way. Feedback-intensive programs also use observations of behaviors in simulations and role plays and feedback from fellow program participants to provide additional sources of feedback that can validate and inform insights gained from 360 Feedback. Feedback facilitators work one on one with the focal leaders to integrate data from multiple assessments and identify development goals.

Also, 360 Feedback is used in the front end of development initiatives that extend over time. These programs focus on the dimensions assessed by the 360 instrument, providing opportunities to gain knowledge relevant to each dimension and to practice new behaviors and skills in the safe environment of a classroom. Focal leaders then identify behaviors to experiment with or skills to apply back in the workplace.
Follow-​up sessions allow leaders to share their successes and challenges, obtain feedback and advice from fellow participants, and gain further coaching from program staff. A second round of 360 Feedback at the end of the program provides a measure of progress and insights for a new round of goal-​setting. This structured process helps focal leaders maintain focus and motivation throughout the journey from self-​awareness to mastery of new behaviors and skills. 360 Feedback tied to executive coaching allows for a highly customized form of leadership development. As with feedback-​intensive development programs, executive coaching may use multiple assessments to deepen the focal leader’s self-​awareness and to provide the coach with a more nuanced understanding of the leader’s strengths and weaknesses. Coaching engagements also are typically charged with identifying development goals aligned with a particular organizational need (e.g., preparing the focal leader for higher level positions, enhancing performance in an arena critical to the leader’s current job). Once these goals are identified, the coach works with the leader over time to monitor and reflect on progress toward goals, as well as to discuss issues that arise and



strategies for dealing with those issues. The coach and focal leader may include other stakeholders in the coaching process in ways that facilitate the leader’s development efforts.

When 360 Feedback is designed as a “development plus” process, data may also be used in an organization’s development-planning and succession-planning processes, again to inform the identification of development goals that will enable the focal leader to better contribute to the success of the organization. Plans for reaching these goals include identifying situations where the focal leader will practice behaviors and obtain ongoing feedback, new assignments that will stretch the focal leader’s current skillset, and access to a mentor or experienced peer who can serve as a role model and advisor as the focal leader works on the targeted skills. The implementation of these plans and monitoring of progress need to be jointly owned by the focal leader, the manager, and the process owners.

360 FEEDBACK PRACTICES FOR COLLECTIVE DEVELOPMENT

The deployment of 360 Feedback can be designed to affect leadership in an organization beyond its impact on the focal leader. Used as a broad organizational intervention, it can influence the overall leadership capacity of an organization and, as a result, contribute to the organization’s ability to meet its strategic agenda.

Feedback Processes That Educate Raters

The fact that 360 Feedback processes rely on the collective effort of multiple raters is an opportunity to extend their influence far beyond the focal leader. From its deployment to its conclusion, a 360 Feedback process needs not only to engage raters in meaningful ways but also to educate them about what effective leadership looks like in the organization and how their feedback plays a critical role in producing this leadership. The education of raters begins with the solicitation to participate in the process. This is not an invitation to participate; it is an expectation to invest in creating the kind of leadership the organization needs for success. A well-designed communication campaign begins with a clear message from the chief executive officer or head of human resources about why the 360 is being deployed, the raters’ critical role in its success, and the mechanics of the process (e.g., anonymity of ratings). Rater training should provide an opportunity to reflect and think, in a structured manner, about the organization’s leadership competency model. The act of rating a coworker on items that reflect these competencies provides a unique opportunity to


144  / / ​  3 6 0 for D evelopment

promote and reinforce them throughout the organization. This reinforcement is further enhanced when employees evaluate multiple focal leaders; distinctions between leaders expose employees to variance in behaviors, furthering the refinement of their comprehension of the model. Asking focal leaders to share with raters insights from the feedback, as well as their development plans, reinforces the importance of the rater’s role in leadership development. These coworkers then have a front-row seat as the focal leader works to change and grow and should have opportunities to provide ongoing feedback that helps the leader track progress and make adjustments. As noted, 360 Feedback is the beginning of a development process that unfolds over time. As active participants in the process, raters can themselves learn, albeit vicariously, from the development successes and challenges of the focal leader.

360 Feedback at All Levels

To use 360 Feedback as a tool for developing collective leadership capacity, organizations must ensure that leaders at all levels receive feedback. This goal of inclusiveness can be achieved in multiple ways. Organizations might opt to make 360 Feedback a regular element of each job move a leader makes, administering a 360 instrument 6 to 12 months into the assignment. Another option is to include 360 Feedback in mandatory leadership development programs for leaders moving to new levels of responsibility in the organization. When the data will be used in other talent management processes, organizations often deploy 360 Feedback as an organization-wide assessment that is repeated at regular intervals. Some organizations use a cascading approach, starting with top leadership and then moving through each subsequent management level. This not only reinforces the importance of the process in the organization but also gives managers a first-hand experience before they are called on to support the process with their direct reports.

Implementing 360 Feedback across levels in a single administration has its challenges. Most notable is the load on raters who are completing assessments of their boss, multiple peers, and multiple direct reports. Instruments with fewer items can lighten this rating load. Another option is to spread the rating load across time by administering 360 Feedback to a portion of the organization’s leaders on a rotating basis (e.g., providing feedback to a quarter of the leader population every 3 months). Single administration can also overtax feedback facilitation and coaching resources, leading some organizations to opt out of using these impactful practices. On the other hand, single administration may be necessary to supply needed data to other talent management processes in a timely way, and rating multiple people at once may help raters better discern competency-level differences across various colleagues.

Deploying 360 Feedback across all levels also generates data that help organizations diagnose leadership issues and opportunities in their population of leaders. If leaders across the organization, at particular levels, or in certain regions are rated lower on particular competencies, then extra developmental attention can be given to those competencies. If leaders in certain regions or functions are rated noticeably higher on particular competencies, then leaders in these groups could be a useful source for mentors, or rotational assignments into these groups could provide access to knowledge and role models for developing these competencies. Having 360 data from all leaders (and across time) is also valuable from a talent analytics perspective, for example, to discern competency patterns related to employee engagement or to identify recruiting sources for leaders who are rated highly on skills essential to the organization.

360 Feedback as the Cornerstone of a Feedback Culture

A strong feedback culture is one where individuals continuously receive, solicit, and use formal and informal feedback to improve their job performance (London & Smither, 2002). A 360 Feedback process that incorporates many of the practices we have already noted—rater education, broad participation, sharing feedback results, and integration with a leadership development system—can serve as a cornerstone for developing such a culture. These practices emphasize the value of feedback for the individual and the organization, provide a structure for giving and receiving high-quality feedback, make leaders’ efforts to change and grow in response to feedback more visible, and encourage more informal feedback in support of that development. With these practices in place, a 360 Feedback process can provide a training ground for feedback conversations and can enhance comfort with feedback and with conversations about performance at work. The limited research in this arena is promising. Participation in a 360 process appears to promote open communication and interactions between employees (Druskat & Wolff, 1999). And organizations that use 360 Feedback have higher levels of knowledge sharing among employees and increased productivity compared with those that do not (Kim, Atwater, Patel, & Smither, 2016).

The role of 360 Feedback in developing a feedback culture is strengthened when senior leaders demonstrate that feedback is essential for their own self-awareness and continued development. They send a powerful signal to organizational members when they launch a 360 process with their own participation. For example, we know of a newly appointed leader of a healthcare organization who, in the first few months of his tenure, asked all 400 employees to evaluate him. Although it may have been preferable to conduct the assessment after his employees had more opportunity to observe his behavior, the main point of the exercise was to send a very direct message that feedback mattered. Senior leaders also send a powerful message when organizational members are aware of how their leaders are making use of their feedback. For example, in one organization we have worked with, it is well known that executive team members meet to share their feedback with one another and use this discussion to set individual and shared goals.

CONCLUSION

The use of 360 Feedback in contemporary leadership development has become ubiquitous. Be it as the cornerstone of wide-scope leadership development programs aimed at middle managers, as part of individualized programs designed for higher level executives, or as input to annual development-planning processes, 360 Feedback provides a level of clarity that is unique and essential to guide leadership development efforts. In addition, 360 Feedback possesses the distinct capacity to trigger and stimulate desired changes in leaders. However potent these processes may be, 360 Feedback processes can be complex to manage; thus, careful design and implementation considerations are needed to achieve maximum impact.

As a practice, 360 Feedback has long been viewed as a process for individual development. However, this perspective limits the value of 360 Feedback for organizations. Increasingly, investments in 360 are embedded in broader interventions aimed at impacting organizational leadership capacity. With this broader perspective, 360 Feedback can contribute to an organization’s strategic intent by operating as a lever for collective leadership improvement and organizational change. Key insights for using 360 Feedback for leadership development include the following:



• The choice between using 360 Feedback for development or development plus is significant. In making this choice, an organization has to consider the context within which the 360 will be used. Once a purpose has been established, expectations of all users have to be managed carefully (e.g., confidentiality, ownership of the data).
• The most effective 360 processes are meticulously designed. From the selection of participants and the initial communications to target leaders and evaluators to feedback report design and postfeedback support, every step of the process requires attention to design details to ensure high impact and alignment with purpose.
• Postfeedback development opportunities need to be as thoughtfully crafted as the feedback process itself. Feedback increases self-awareness and motivates change. Without targeted opportunities to practice new behaviors, learn from skilled others, and receive ongoing real-time feedback and support, the insights gained from 360 Feedback may not lead to actual development.
• A feedback culture in which informal feedback is valued and encouraged also enhances the value of a 360 Feedback process. At the same time, the introduction of strategic approaches to 360 Feedback can advance a feedback culture by signaling its importance for organizational success.
• Organizations can expand the impact of 360 Feedback by using it not only to develop focal leaders but also to develop raters. A 360 process that educates raters about important leadership competencies, asks them to be mindful about what they are evaluating, and involves them in postfeedback support of the focal leader can have positive effects on their own leadership capacity.

REFERENCES

Atwater, L., Brett, J., & Charles, A. C. (2007). Multisource feedback: Lessons learned and implications for practice. Human Resource Management, 46, 286–307.
Bettenhausen, K. L., & Fedor, D. B. (1997). Peer and upward appraisals: A comparison of their benefits and problems. Group and Organization Management, 22, 236–263.
Bracken, D. W., & Rose, D. S. (2011). When does 360-degree feedback create behavior change? And how would we know when it does? Journal of Business and Psychology, 26, 183–192.
Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360° feedback. Industrial and Organizational Psychology, 9, 761–794.
Druskat, V. U., & Wolff, S. B. (1999). Effects and timing of developmental peer appraisals in self-managing work groups. Journal of Applied Psychology, 84, 58–74.
Dweck, C. S. (2006). Mindset: The new psychology of success. New York: Random House.
Effron, M., & Ort, M. (2010). One page talent management: Eliminating complexity, adding value. Boston, MA: Harvard Business Review Press.
Fleenor, J. W., Taylor, S., & Chappelow, C. (2008). Leveraging the impact of 360-degree feedback. San Francisco, CA: Pfeiffer.
Jawahar, I. M., & Williams, C. R. (1997). Where all the children are above average: The performance appraisal purpose effect. Personnel Psychology, 50, 905–925.
Kim, K. Y., Atwater, L., Patel, P. C., & Smither, J. W. (2016). Multisource feedback, human capital, and the financial performance of organizations. Journal of Applied Psychology, 101, 1569–1584.
Leslie, J. B. (2013). Feedback to managers: A guide to reviewing and selecting multirater instruments for leadership development (4th ed.). Greensboro, NC: Center for Creative Leadership.
London, M., & Smither, J. W. (2002). Feedback orientation, feedback culture, and the longitudinal performance management process. Human Resource Management Review, 12, 81–100.
London, M., Wohlers, A. J., & Gallagher, P. (1990). A feedback approach to management development. Journal of Management Development, 9(6), 17–31.
Morrison, A. M., McCall, M. W., Jr., & DeVries, D. L. (1978). Feedback to managers: A comprehensive review of twenty-four instruments. Greensboro, NC: Center for Creative Leadership.
Nowack, K. J., & Mashihi, S. (2012). Evidence-based answers to 15 questions about leveraging 360-degree feedback. Consulting Psychology Journal: Practice and Research, 64, 157–182.
3D Group. (2016). Current practices in 360 degree feedback (5th ed.). Emeryville, CA: 3D Group.


9 /// MOVING BEYOND “THE GREAT DEBATE”

Recasting Developmental 360 Feedback in Talent Management

JASON J. DAHLING AND SAMANTHA L. CHAU

Our knowledge of 360 Feedback has advanced over the last 30 years to the point that these systems have become an extremely common and effective element of performance management. However, despite the ubiquity of 360 Feedback in organizational life, some of the conceptual problems that emerged in early scholarship continue to persist and limit the ways organizations think about, and use, this kind of feedback today. In this chapter, we focus on one such problem: the persistent distinction drawn between “administrative” and “developmental” 360 Feedback systems. Our core contention is that this distinction has become meaningless because 360 Feedback is rarely performed with developmental self-awareness as the only end goal; almost all 360 systems use the data gathered to inform talent and performance management to some degree. However, we are likely to overlook these nuanced possibilities when we adopt a simplistic, “either/or” viewpoint on the ways in which 360 Feedback can be leveraged in the workplace.

We consequently have two objectives in this chapter. First, we review the historical distinction in the literature between administrative and developmental 360 Feedback. We highlight how this dichotomy has become outdated and unhelpful when thinking about the uses of 360 Feedback. Second, we explore some of the ways that developmentally oriented 360 Feedback is used to purposefully strengthen strategic practices in the talent management space, including individual and team training, employee coaching, succession planning, and executive onboarding and leader development (see also Chapter 6). We end by using a continuum framework for thinking about the uses of 360 Feedback in organizations and note some future research and practice concerns that are evident from adopting this kind of framework.

CHARACTERIZING ADMINISTRATIVE VERSUS DEVELOPMENTAL 360 FEEDBACK

Early research and practice surrounding 360 Feedback in the 1980s and 1990s focused narrowly on its use as a performance appraisal tool. Consequently, most early work on 360 Feedback debated the appropriateness of using these ratings to make high-stakes personnel decisions (e.g., Edwards & Ewen, 1996). Practitioners and researchers raised many concerns about this use of 360 Feedback, which included questions about the value of 360 information relative to traditional supervisor ratings of performance, the perceived fairness of using 360 ratings to make personnel decisions, and whether peers could (or would) provide accurate ratings under these kinds of high-stakes conditions. As these conversations evolved through scholarship and organizational practice, multisource ratings used to make formal personnel decisions, especially those concerning compensation, promotion, and retention, were given the shorthand label of administrative 360 Feedback to differentiate them from 360 used for other purposes in the organization. Given the significant cost and effort in developing and running a 360 Feedback system, many organizations use 360 data to shape these kinds of high-stakes administrative decisions (Church & Rotolo, 2013; London, 2015).

In contrast, 360 Feedback employed for other, lower stakes purposes came to be described as “developmental” feedback. This type of 360 Feedback generally focuses on the personal and career development of the recipient, with the feedback information used to improve self-awareness and acted on as the recipient sees fit. Consequently, one of the major critiques of developmental 360 is that its impact depends greatly on the motivation of the focal leader to process and act on feedback information (e.g., Dahling, Chau, & O’Malley, 2012). However, despite this concern, evidence supports the usefulness of developmental 360 Feedback.
For example, meta-analytic research suggests that developmental 360 Feedback is more effective than administrative 360 Feedback at improving performance ratings (Smither, London, & Reilly, 2005). This growth is likely a function of one key advantage associated with developmental feedback: Colleagues may be more willing to express their perspectives honestly and directly when they believe that the recipient’s employment prospects will not be harmed by tough, negative feedback (London, 2015). Developmental systems are consequently more likely to get focal leaders the message they most need to hear.

This dichotomy between administrative and developmental 360 has persisted in the literature for over 30 years. Indeed, at the time the Handbook of Multisource Feedback was published, London (2001) characterized the choice between administrative and developmental framing as “the great debate” in 360 Feedback scholarship, arguing that organizations needed to carefully decide on one of these options, which would determine how they use 360 information. This distinction still persists even in high-impact, modern 360 scholarship (e.g., Kim, Atwater, Patel, & Smither, 2016).

Although this dichotomy was helpful when research and practice on 360 Feedback were in their infancy, we see several problems with maintaining it today. First, organizations investing in developmental 360 systems are using this information in many ways to inform core talent management processes. Consequently, presenting and discussing this type of 360 Feedback as “consequence free” and lacking any administrative implications is potentially misleading to employees. Second, casting developmental 360 Feedback in this light suppresses creative discussions of other ways that this information might be used to manage and support the career development of employees. Indeed, the ongoing evolution of performance management toward more feedback-intensive systems suggests that 360 Feedback will inevitably inform and support a wider number of talent management functions in organizations (e.g., Adler et al., 2016; Chapter 6). Third, in an era of strict budgetary limitations for talent management, presenting 360 Feedback only as a discretionary developmental opportunity is likely to undersell the value of this tool to executives.
Rallying support behind such a system is difficult unless leaders can see how this information presents a good return on investment for growing talent and making accurate decisions.

To be fair, we are hardly the first people to recognize that organizations can do better with developmental 360 Feedback (e.g., Bracken & Church, 2013; Bracken, Dalton, Jako, McCauley, & Pollman, 1997). However, we are surprised at how long the administrative-versus-developmental distinction has persisted (especially in the scholarly literature) and at how slow some organizations have been to act on the potential strategic uses of developmentally oriented 360 reviews. Indeed, once the initial investment in creating a 360 Feedback system is committed, there are many opportunities to leverage the data it provides at little additional cost or difficulty. We explore some of these uses in the section that follows.



MAXIMIZING DEVELOPMENTAL 360: INTEGRATION WITH STRATEGIC TALENT MANAGEMENT

Even if high-stakes administrative consequences are off the table, receiving 360 Feedback should never be a stand-alone developmental event. Instead, it should be an “unfreezing” moment that enables changes in people that facilitate the organizational strategy (McCauley & Moxley, 1996), but supporting and reinforcing those changes in people requires careful developmental scaffolding. To this end, 360 Feedback should be integrated with many other functions in the talent management space (see also Chapter 6) that are intended to develop employees and enhance their performance. We describe some of these integrations in order of their approximate impact on the feedback recipient’s career.

Individual Development Plans

In its purest form, developmental 360 Feedback is provided in an “information-only” capacity to recipients, who can choose how (or if) they want to act on this information. Advocates for this approach sometimes suggest that developmental feedback should not be provided to managers of the recipient but should instead be completely confidential. However, few organizations can afford the time and expense of such a system that relies entirely on the goodwill and motivation of employees to actually apply their feedback information. Fortunately, there are many opportunities that maintain the spirit of developmental 360 Feedback while coupling it with a reasonable degree of accountability and supervisor support.

Individual development plans (IDPs) are a natural point of integration between 360 Feedback and talent management that still allows for a high degree of confidentiality concerning the recipient’s feedback. IDPs consist of goals that are highly individualized and relevant to the employee’s developmental needs and career goals. These goals are ideally set collaboratively between the employee and manager. Good IDPs will reflect a blend of the organization’s strategic priorities with the developmental needs of the individual; they should strengthen competencies that will enable the employee to enjoy a more successful career and to make better performance contributions to the organization (Stringer & Cheloha, 2003). Developmental 360 Feedback shared with managers of focal leaders can serve as a rich source of information when selecting competencies to target in an IDP. This information can be especially useful if the focal leader is otherwise resistant to acknowledging developmental needs that the manager sees. Further, incorporating the IDP into an employee’s formal developmental goals for a performance cycle is one lower stakes way to indirectly tie 360 Feedback to administrative outcomes. Rather than using the 360 ratings as the direct basis for an immediate administrative decision, the ratings can be used as the basis for a development goal, and progress on that goal can inform later administrative decisions. Thus, administrative consequences are not contingent on the 360 Feedback ratings themselves, but rather on the recipient’s willingness and ability to show improvement.

Employee Coaching

Alongside IDPs, we also see considerable utility in better integrating 360 Feedback systems with employee coaching. Employee coaching refers to the ongoing guidance provided by managers to their direct reports, which specifically involves managerial behaviors such as active listening, behavioral modeling, goal setting, collaborative problem-solving, and, most importantly, routine feedback provision. Deliberate employee coaching is increasingly important in many organizations due to cuts to training and development budgets, flatter organizational structures that leave managers with a wider span of control, and high-performance expectations that put pressure on employees to address their developmental weaknesses quickly and efficiently. Managers commonly cite 360 Feedback as a critical tool for employee coaching success (Mone & London, 2010), but given the piecemeal nature of research in this area, much remains to be determined about how to best leverage this information in coaching relationships.

Our own research on employee coaching has made it clear that the performance outcomes associated with this form of coaching are a function of the quality, rather than the quantity, of coaching provided by managers. For example, our study of pharmaceutical sales teams (Dahling, Taylor, Chau, & Dwight, 2016) indicated that employee coaching frequency was completely unrelated to sales goal attainment among teams that were managed by an effective coach. Employees in these teams also outsold their counterparts in teams managed by ineffective employee coaches by about 9% of their target goals over the span of a year. While many questions remain about how to promote high-quality employee coaching, 360 Feedback is a natural complement to this end; the pharmaceutical organization that we studied makes a deliberate effort to funnel feedback data to the manager, especially external feedback from physicians and other medical professionals. Managers can then quickly identify coaching needs, which are often smaller or product-specific concerns, to address in ride-along observations with their sales representatives.



Team Development and Formal Training

Most consideration of developmental 360 Feedback focuses on the benefits to the individual recipient. However, these insights can also be rolled up to the team level to look for common strengths, weaknesses, and opportunities that are evident among the group (Fletcher, 2008; Chapters 10 and 23). Such an analysis is particularly valuable when the 360 process draws on the perspective of internal or external clients or collaborators, who often have good insights on the role that the team plays within the broader organization. Such an analysis can be performed qualitatively by looking at emergent themes in the comments provided to team members or quantitatively by examining statistical agreement across performance dimension ratings. In either case, developmental opportunities that might not be readily evident to team leaders frequently reside at the group level of analysis.

We have seen this kind of “aggregated” developmental 360 Feedback used very successfully by managers to identify training needs and collaborative opportunities that were not previously recognized. For example, a group-level analysis of developmental 360 Feedback in a small department pointed toward a systemic need to coordinate effectively with other groups. This problem was not evident to the manager within the department; internal coordination and cooperation were quite effective. However, several members of the team received constructive feedback concerning needs to better share information outside the group with other stakeholders, build relationships outside their own team, and remain cognizant of the priorities of other departments in the organization. Rolling up the 360 Feedback provided by external peers to individual team members allowed us to identify a consistent theme: The department was viewed as somewhat standoffish in the organization, and many of its members were not acting as good business partners to colleagues elsewhere in the organization.
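The quantitative side of this roll-up can be sketched in a few lines. The sketch below is illustrative only: the team members, rating values, and the 3.0/0.5 thresholds are hypothetical, and the spread-of-member-means check is a simplified stand-in for formal agreement statistics such as rwg.

```python
from statistics import mean, stdev

# Hypothetical 360 ratings on a 1-5 scale: three raters per person,
# three team members, three competencies (all names/values illustrative).
team_ratings = {
    "communication":     {"Ana": [4, 4, 5], "Ben": [4, 3, 4], "Cruz": [5, 4, 4]},
    "cross_team_collab": {"Ana": [2, 3, 2], "Ben": [2, 2, 3], "Cruz": [3, 2, 2]},
    "technical_depth":   {"Ana": [4, 5, 4], "Ben": [3, 4, 4], "Cruz": [4, 4, 5]},
}

def roll_up(ratings_by_competency, low_mean=3.0, max_spread=0.5):
    """Aggregate individual 360 ratings to the team level.

    For each competency, report the team mean, the spread of member
    means (a low spread suggests the pattern is shared rather than
    driven by one individual), and a flag for likely team-wide needs.
    """
    summary = {}
    for competency, by_person in ratings_by_competency.items():
        person_means = [mean(scores) for scores in by_person.values()]
        team_mean = mean(person_means)
        spread = stdev(person_means) if len(person_means) > 1 else 0.0
        summary[competency] = {
            "team_mean": round(team_mean, 2),
            "spread": round(spread, 2),
            "shared_need": team_mean < low_mean and spread < max_spread,
        }
    return summary

summary = roll_up(team_ratings)
for competency, stats in summary.items():
    print(competency, stats)
# With these illustrative numbers, only cross_team_collab is flagged
# as a shared development need.
```

The same roll-up can be repeated at the department or region level; competencies that are flagged become candidates for exactly the kind of team-level conversation about training needs described in this section.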
The leader of the department did not recognize the extent of this problem until we discussed the aggregated results, which is a common experience; aggregated developmental feedback can help to reveal these “blind spots” to leaders. In this case, the manager was able to successfully discuss this concern with her team, ensure that the team understood and accepted the problem, and brainstorm new ways to break out of their silo and collaborate more visibly and effectively with other business units.

The type of rating aggregation used to identify team development needs can also feed into a broader analysis of training needs within the organization. When 360 Feedback points to shared development needs among larger groups of people across teams or units, the analysis naturally turns to wider training endeavors. Training needs analysis can be greatly facilitated by 360 Feedback information, especially at the person level of analysis to identify the population of employees who need training on a particular competency (e.g., Horng & Lin, 2013; Tornow & Tornow, 2001). Effective integration of 360 Feedback with training depends on a top-down approach in which a high-level organizational needs analysis has already identified the strategic objectives in need of training support. The 360 instrument must include questions that specifically target the operational competencies and tasks related to these deficient objectives. Raters need to understand that feedback recipients will not be penalized or stigmatized for having low standing on these proficiencies; rather, these recipients will have training opportunities that should help them grow and improve. Under these conditions, raters are likely to provide more honest responses. Further, collecting information from multiple raters improves the likelihood that those in need of training will receive it, particularly when compared to the usual practice of managers alone nominating their direct reports for training.

Executive Onboarding and Leader Development

Perhaps the most success at integrating developmental 360 Feedback with talent management to date is in the leader development space (e.g., Bono, Purvanova, Towler, & Peterson, 2009). Here, the applications focus primarily on executive onboarding and early intervention or on 360 Feedback as a supplement to executive coaching. New executives often struggle to adjust to the culture, norms, and practices of new organizations in ways that inhibit effective performance. Providing timely and relatively early 360 Feedback serves many essential functions for these new leaders, although raters need sufficient time for observation before this assessment can occur. Most directly, 360 Feedback provides the executive with knowledge of how their adjustment and performance is perceived by their stakeholders. However, it also allows for an early point of intervention if more serious issues are uncovered. For example, in our own organizational experience, we have used 360 successfully with new executive hires within the first 6 months to identify specific and actionable goals to work on with executive coaching partners. Executive coaches lead the 360 Feedback data collection process, which keeps the reports confidential between the coach and executive recipient. Collecting this information helps to accelerate the coaching relationship to make faster improvements in advance of formal performance appraisal. This type of timely and precise intervention can help salvage initially rocky executive hires and improves new executives’ perceptions of organizational support. Beyond the executive, this integration of developmental 360 with coaching and development can have broader positive impacts by showing subordinates that their concerns about new leaders are taken seriously; rapid improvement in

156

156  / / ​  3 6 0 for D evelopment

dysfunctional executive behaviors helps to maintain employee engagement that might otherwise wither.

Succession Planning

Succession planning is perhaps the highest impact use of developmental 360 Feedback before it crosses into formal performance management and the more traditional administrative domain. Although the best process for identifying high-potential employees and preparing them to move into key roles is highly debated (e.g., Thornton, Hollenbeck, & Johnson, 2010), this process routinely relies heavily on 360 Feedback information. Perhaps the key challenge in linking 360 Feedback and succession planning is tailoring the 360 instrument appropriately (see also Chapter 6). Most instruments naturally focus on the competencies that are relevant to the feedback recipient's current job. However, this information might not be useful for succession planning because of its tangential connection to the broader competencies and qualities needed to progress to higher-level jobs (e.g., Silzer & Church, 2010). To support succession planning, the 360 assessment needs to target a broader suite of future-oriented leadership competencies. The future-oriented questions should be administered alongside questions that pertain to current performance, but their impact should be limited to succession-planning conversations; judgments about current performance should not be influenced by ratings on future-oriented competency questions.

MOVING FORWARD: FROM DICHOTOMY TO CONTINUUM

Our brief review highlights only a few of the ways that developmentally oriented 360 Feedback can be integrated with talent management functions. However, what should be clear is that the spirit of developmental 360 can be retained while using this information to shape meaningful strategic decisions in talent management. Every organization must find its own comfort level concerning the degree of impact that it is willing to allow 360 Feedback to exert, but the range of options available greatly exceeds a simple developmental/administrative dichotomy. To this end, we submit that the utility of 360 Feedback is likely to be maximized if researchers and practitioners instead treat the administrative versus developmental distinction as anchors on a continuum of impact for the focal ratee (e.g., Bracken & Church, 2013). At one (increasingly rare) extreme, purely developmental 360 Feedback is provided simply to inform focal ratees of how others perceive them. At the other extreme, purely administrative 360 Feedback is solely concerned with rating and differentiating

 157

Recasting Developmental 360 Feedback in Talent Management  //​ 157

people for highly significant personnel actions, like downsizing or promotion. The practical reality is that the vast majority of purposes for 360 Feedback systems fall somewhere between these two anchors, and we miss these important nuances as researchers and practitioners when we force our conversations into a false dichotomy.

One key advantage of adopting a continuum model is that it facilitates conversations about how 360 Feedback systems can change within the organization. As London (2015) has suggested, organizations should move toward greater administrative impact cautiously, introducing new talent management integrations slowly over multiple feedback cycles. Such an approach gives employees time to adjust both to the practice of giving and receiving 360 Feedback and to the increasing consequences of these ratings. This type of longitudinal evolution has occurred organically in many organizations with a long history of 360 Feedback (e.g., PepsiCo; Bracken & Church, 2013). However, it can also be easily integrated into the human resource strategic plan of organizations that are new to 360 Feedback.

The continuum approach also underscores the responsibility of organizations to clarify to employees the different ways that 360 information will be used to inform talent management functions or make personnel decisions (see Chapter 28). To characterize 360 simply as "for developmental purposes" masks crucial variability in how this information is applied across organizations, and employees are likely to approach the ratings process differently depending on the intended impacts of the system. Organizations need to be up front about these planned uses or they risk angering employees who subsequently feel deceived when their "developmental" feedback winds up having teeth. Further, design features of developmental 360 systems can also help remind employees of these distinctions.
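One concrete way to keep planned uses explicit is to tag every 360 item with its intended use inside the system itself, so that each audience receives only its designated slice of the data. The sketch below is purely illustrative; the item names, purpose tags, and `report_for` helper are our own assumptions, not a published 360 Feedback design:

```python
# Hypothetical sketch: item names, purpose tags, and report_for() are our own
# illustration of separating uses, not drawn from any published instrument.
ITEM_PURPOSES = {
    "coaches_direct_reports": "development",
    "meets_quarterly_goals": "appraisal",
    "thinks_strategically": "succession",
}

def report_for(ratings, purpose):
    """Return only the ratings whose items are designated for `purpose`."""
    return {item: score for item, score in ratings.items()
            if ITEM_PURPOSES[item] == purpose}

# One ratee's averaged scores on a 1-5 scale (invented numbers).
ratings = {
    "coaches_direct_reports": 4,
    "meets_quarterly_goals": 3,
    "thinks_strategically": 5,
}
print(report_for(ratings, "succession"))  # {'thinks_strategically': 5}
```

Designing the data model this way forces the planned use of each rating to be declared up front, which supports the transparency the continuum approach demands.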
For example, when 360 Feedback is not intended to serve as a formal performance appraisal, we have found that it can be helpful to deliberately design the system with rating scale levels and language that differ from the formal appraisal instrument. Avoiding the terminology associated with formal appraisal within the organization underscores to employees that this 360 rating is a different piece of information that will be used for other purposes.

CONCLUSION

In conclusion, we reiterate the need for researchers and practitioners to move beyond the administrative/​developmental dichotomy. We can use 360 Feedback to inform a wide range of talent and performance management functions, and the current dichotomy fails to capture this richness. We need to communicate with greater precision to better compare research findings, leverage best practices, and deal honestly with employees whose working lives may be greatly impacted by their participation in these systems.


REFERENCES

Adler, S., Campion, M., Colquitt, A., Grubb, A., Murphy, K., Ollander-Krane, R., & Pulakos, E. D. (2016). Getting rid of performance ratings: Genius or folly? A debate. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9, 219–252.
Bono, J. E., Purvanova, R. K., Towler, A. J., & Peterson, D. B. (2009). A survey of executive coaching practices. Personnel Psychology, 62, 361–404.
Bracken, D. W., & Church, A. H. (2013). The new performance management paradigm: Capitalizing on the unrealized potential of 360 degree feedback. People + Strategy, 36(2), 34–40.
Bracken, D. W., Dalton, M. A., Jako, R. A., McCauley, C. D., & Pollman, V. A. (1997). Should 360-degree feedback be used only for developmental purposes? Greensboro, NC: Center for Creative Leadership.
Church, A. H., & Rotolo, C. T. (2013). How are top companies assessing their high-potentials and senior executives? A talent management benchmark study. Consulting Psychology Journal: Practice and Research, 65(3), 199–223.
Dahling, J. J., Chau, S. L., & O'Malley, A. L. (2012). Correlates and consequences of feedback orientation in organizations. Journal of Management, 38, 530–545.
Dahling, J. J., Taylor, S. R., Chau, S. L., & Dwight, S. (2016). Why does coaching matter? A multilevel model linking managerial coaching effectiveness and frequency to sales goal attainment. Personnel Psychology, 69, 863–894.
Edwards, M., & Ewen, J. (1996). 360° feedback: The powerful new model for employee assessment & performance improvement. New York: AMACOM.
Fletcher, C. (2008). Appraisal, feedback, and development: Making performance review work (4th ed.). London, England: Routledge.
Horng, J. S., & Lin, L. (2013). Training needs assessment in a hotel using 360 degree feedback to develop competency-based training programs. Journal of Hospitality and Tourism Management, 20, 61–67.
Kim, K. Y., Atwater, L., Patel, P. C., & Smither, J. W. (2016). Multisource feedback, human capital, and the financial performance of organizations. Journal of Applied Psychology, 101, 1569–1584.
London, M. (2001). The great debate: Should multisource feedback be used for administration or development only? In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback (pp. 368–385). San Francisco, CA: Jossey-Bass.
London, M. (2015). The power of feedback: Giving, seeking, and using feedback for performance improvement. New York, NY: Routledge.
McCauley, C. D., & Moxley, R. S. (1996). Developmental 360: How feedback can make managers more effective. Career Development International, 1, 15–30.
Mone, E. M., & London, M. (2010). Employee engagement through effective performance management: A practical guide for managers. New York, NY: Routledge.
Silzer, R., & Church, A. H. (2010). Identifying and assessing high-potential talent. In R. Silzer & B. E. Dowell (Eds.), Strategy-driven talent management: A leadership imperative. San Francisco, CA: Jossey-Bass.
Smither, J. W., London, M., & Reilly, R. R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58(1), 33–66. doi:10.1111/j.1744-6570.2005.514_1.x
Stringer, R. A., & Cheloha, R. S. (2003). The power of a development plan. Human Resource Planning, 26(4), 10–17.
Thornton, G. C., III, Hollenbeck, G. P., & Johnson, S. K. (2010). Selecting leaders: Executives and high potentials. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection (pp. 823–840). New York, NY: Routledge.
Tornow, W. W., & Tornow, C. P. (2001). Linking multisource feedback content with organizational needs. In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback (pp. 48–62). San Francisco, CA: Jossey-Bass.


10

TEAM DEVELOPMENT WITH STRATEGIC 360 FEEDBACK

Learning From Each Other

ALLISON TRAYLOR AND EDUARDO SALAS

INTRODUCTION

In the past 20 years, the prevalence of teams in organizations has increased dramatically, and a body of academic literature has emerged to enhance our understanding of the development, processes, and performance of these work teams. In that time, researchers have unearthed a number of important findings regarding the antecedents and correlates of team performance. Although organizations have used 360 Feedback in an attempt to enhance team development and performance over the past two decades, this new body of knowledge provides organizations with the ability to implement these systems more strategically to develop their teams and to improve team and organizational effectiveness. Research has demonstrated that thorough feedback from fellow team members is linked to high performance as well as to developing and maintaining trust among team members (Jarvenpaa & Leidner, 1998). In addition, 360 Feedback can impact a broad range of interpersonal factors of team success, including improving relationships between team members or more generally boosting group-related attitudes (Druskat & Wolff, 1999). Feedback can also help teams reach a common understanding of tasks and mutual agreement, developing common ground among members (Olson & Olson, 2000). However, organizations have the opportunity to further reap the benefits of 360 Feedback through application to the team processes most closely linked to development


and performance. Aligning feedback with specific, research-backed developmental processes distinguishes strategic 360 Feedback from traditional feedback systems and can have an immense impact on team performance. This chapter combines our current knowledge of the science of teamwork with descriptions of the most strategic implementations of 360 Feedback for today's organizations. We begin by describing the strategic applications of 360 Feedback to team developmental processes and learning. We proceed to describe the team processes for which strategic 360 Feedback has the greatest potential for impact, providing practical implications for organizations implementing a 360 Feedback system and identifying targeted interventions to address antecedents and correlates of team development and performance. In addition, we provide an overview of the most effective methods for feedback delivery, including how organizations should structure their feedback systems. This chapter provides not only a comprehensive overview of how 360 Feedback can be applied to teamwork but also recommendations for strategic implementation of such programs targeted at the most fundamental determinants of team success.

TEAM DEVELOPMENT ESSENTIALS

Traditional theories of the developmental process of teams focus on relatively linear stages of development (e.g., Tuckman, 1965). Although more recent approaches to development have taken into account the team's context and tasks, the basic tenets of these earlier theories may still ring true. For example, Kozlowski, Gully, Nason, and Smith's (1999) four-phase model of development includes team formation, task compilation, role compilation, and team compilation. It focuses on the development of adaptive capabilities as a team compiles information across individual, dyadic, and team levels; as team capabilities improve, the group can move on to develop more advanced skills. The team formation stage involves team socialization and the development of an interpersonal and team orientation. The task compilation phase involves acquiring the skills necessary to perform tasks, leading to role compilation, in which members shift from an individual to a dyadic focus as they determine and negotiate team roles. In the final phase, team compilation, the model shifts to the team level, creating an adaptable and interdependent network that enables continuous improvement and the ability to perform novel tasks. Strategic 360 Feedback can target this cycle to help teams more rapidly develop the advanced skills that promote team performance. In addition, feedback


can help teams compile their knowledge, skills, and performance outcomes from an individual to dyadic to team focus by promoting interactions with other team members and a more team-centric view of their work. Although feedback can be important across all four stages, it may be particularly useful in the earlier phases as teams socialize and adjust to a new perspective on their work.

While the phases of team development can help organizations visualize and time strategic feedback, 360 Feedback can also be applied through developmental interventions. Although some of these methods are directed at teams as a whole, others, such as team leader development, are more targeted toward the individual team member. Shuffler, DiazGranados, and Salas (2011; see Table 10.1) provided an overview of common team development interventions and distinguished between team-building activities, which focus on improving interactions and resolving conflict, and team-training activities, which focus on helping teams understand appropriate competencies. In addition, the authors addressed several components involved in understanding team effectiveness, including team selection methods and task analysis as well as sources of feedback, such as team performance measurement systems and diagnostic feedback targeted at changing team behaviors following a performance period.

Team-building exercises may be particularly relevant to practitioners interested in implementing 360 Feedback systems as they can improve interpersonal relations within a team and can include approaches such as goal setting, interpersonal relationship management, role clarification, and problem-solving (Buller & Bell, 1986). Historically used as a primary intervention, team-building exercises generally have positive impacts on overall performance, particularly related to affective or process outcomes (Klein et al., 2009).
For greatest impact, team-building interventions should emphasize role clarification (Salas, Rozell, Mullen, & Driskell, 1999) and task identification (Bouchard, 1972). Moreover, team-building exercises may improve psychological safety, or the shared belief that a team is safe for interpersonal risk-taking, an important antecedent to open and valuable feedback in a team environment. Teams with high psychological safety are more likely to seek feedback from other team members, discuss errors, and seek feedback from customers and outside stakeholders (Edmondson, 2003).

While each of these types of team development has demonstrated success in improving team performance, it is important that practitioners take care in designing developmental content and measuring results. For example, retrospective perceptions of team development may be much more positive than actual team development (Eden, 1985); although garnering participant feedback on developmental exercises is important, it is vital that organizations also assess team development more objectively. For example, organizations implementing 360 Feedback in teams not only should collect perspectives


TABLE 10.1  Common Team Development Interventions

Strategy: Cross training
Description: Training implemented to teach team members about the duties and responsibilities of fellow teammates, focusing on developing shared knowledge of tasks and responsibilities and mutual performance monitoring
Impact on 360 Feedback processes: Cross training can enhance team members' ability to provide others with feedback regarding their performance and can help team members self-reflect on the way their work is mutually dependent on other team members

Strategy: Team self-correction training
Description: Training designed to develop a team's ability to diagnose teamwork breakdowns or issues within the team and reach effective solutions internally on a continual basis, focusing on mutual performance monitoring, effective communication, and leadership
Impact on 360 Feedback processes: Self-correction training helps teams learn how to give targeted, meaningful diagnostic feedback following critical incidents to improve performance

Strategy: Team coordination training
Description: Training focused on improving a team's shared mental model framework or facilitating a common understanding of issues related to achieving team goals, focusing on backup behaviors, mutual performance monitoring, and understanding teamwork skills
Impact on 360 Feedback processes: Team coordination training prepares team members to give more specific and useful feedback to others regarding their impact on team processes; improves team understanding of shared goals and processes to promote more effective feedback

Strategy: Crew resource management (CRM)
Description: Training providing instructional strategies to improve teamwork by applying training tools (e.g., simulators, role playing) targeted at specific content, focusing on communication, briefing, backup behaviors, decision-making, team adaptability, and shared situation awareness
Impact on 360 Feedback processes: CRM prepares teams to give feedback within specific, targeted contexts; promotes informal feedback through increased communication and briefing; and enhances feedback surrounding decision-making

Strategy: Team building
Description: Strategies implemented to improve interpersonal relations and social interactions, emphasizing goal setting, interpersonal relationships, role clarification, and problem-solving
Impact on 360 Feedback processes: Team building helps team members feel comfortable providing feedback by improving team interactions and relationships

Adapted from Shuffler et al. (2011).


of the process but also should measure how effectively their teams meet performance goals before and after the feedback process.

TEAM LEARNING

Team learning is the process through which teams obtain and process data that allow them to adapt and improve (Hackman, 1987). Team learning typically occurs through developmental interactions and team training. 360 Feedback, particularly from other team members, is a vital component of the team learning process. As teams face shared experiences, members and team leaders can provide feedback to enhance learning from these experiences. In particular, 360 Feedback processes can be structured to emphasize constructive feedback to help team members learn from each other. In fact, team feedback was one of the first variables found to support team learning (Goldstein & Ford, 2002) and remains a core component of team learning processes.

While broader applications of team learning are discussed in the sections that follow, team training is a fundamental mechanism through which teams learn. Effective training prepares teams to encounter and overcome barriers and teaches them to translate and transfer their training to real situations. Team training and feedback work together in a mutually beneficial cycle: Feedback can be incorporated to enhance training outcomes, and training can be used to promote feedback in teams. Incorporating feedback into training is crucial to successful training transfer (Kluger & DeNisi, 1996), the process through which teams transfer and apply to their work what they learn in training. To maximize effectiveness, feedback should be incorporated carefully to target specific aspects of the training process. For example, feedback should be based on training objectives and linked to specific skill performance (Salas, Burke, & Cannon-Bowers, 2002). Debriefs and reviews can also enhance training and provide an opportunity for feedback within a training session. Conversely, team training can be used as an important tool in preparing teams for the 360 process.
For example, research demonstrates that team interaction training can enhance members’ shared mental models and overall team performance (Marks, Zaccaro, & Mathieu, 2000). Team training can support 360 Feedback by helping teams understand the types of feedback, including the purpose and benefit of each type of feedback. Training can also help teams better understand how to craft feedback to be useful and supportive for their teammates or leaders and how to incorporate feedback to improve their own performance.


TEAM PROCESSES

In the past 30 years, organizational scientists have conducted a variety of studies examining the affective, behavioral, and cognitive processes underlying team performance. Organizations looking to implement 360 Feedback systems should consider targeted interventions aimed at improving these processes, as incorporating feedback can facilitate team development and learning through the processes that promote performance. Each of the team processes described here can contribute to higher order team learning and development and provides an opportunity for incorporating feedback to further its impact on development. This section provides an overview of important team processes (distinguished as affective, behavioral, and cognitive processes) and strategic applications of 360 Feedback systems for each; an overview of these processes is provided in Table 10.2.

Team Affect

Research has identified a number of prominent affective processes impacting team performance, including cohesion, conflict, and collective efficacy. Team cohesion is typically conceptualized in terms of task cohesion, or shared commitment to the group's tasks and goals, and interpersonal cohesion, or group members' attraction to the group as a whole. While task cohesion generally has a greater impact on team performance (Mullen & Copper, 1994), interpersonal cohesion is linked to team viability (Barrick, Stewart, Neubert, & Mount, 1998). In order to improve team cohesion, teams should work to set clear norms and goals (Kozlowski & Bell, 2007). Strategic 360 Feedback systems can facilitate this process by providing team members with feedback from each other and from leaders on the integration of current norms and goals and can help team members establish these goals moving forward.

Like team cohesion, team conflict is frequently separated into task and identity components. While identity conflict is typically problematic, task conflict can lead to positive outcomes as team members debate ideas to find the optimal solution to a problem. However, team conflict is best managed through a cooperative conflict management approach (Somech, Desivilya, & Lidogoster, 2009) in which team members feel a high level of concern for themselves as well as for the other party involved. In addition, though conflict can produce positive team outcomes in some scenarios, it is best for members to limit the length of each conflict to avoid permanent negative effects on intrateam relations. 360 Feedback processes can be implemented to facilitate a cooperative conflict management

TABLE 10.2  Team Processes and Recommendations for Feedback

Affect

Team process: Cohesion
Strategic feedback recommendations: Use feedback to set clear, agreed-on norms and goals integrated between team leaders and members
Survey item for members: "Is this team member working in alignment with the team's shared norms and goals?"
Survey item for teams: "How can this team create a better sense of clarity around norms and goals?"

Team process: Conflict
Strategic feedback recommendations: Set expectations for a cooperative approach to conflict management and use feedback as a preventive measure to avoid conflict or work through conflict more quickly
Survey item for members: "Does this individual actively resolve conflict as it arises?"
Survey item for teams: "Does this team effectively communicate in order to avoid conflict?"

Team process: Collective efficacy
Strategic feedback recommendations: Provide constructive feedback linked to behaviors and performance and give feedback after success and failure experiences
Survey item for members: "What behaviors could this team member demonstrate to better overcome this obstacle in the future?"
Survey item for teams: "How could this team have improved its performance over the past quarter?"

Behavior

Team process: Coordination
Strategic feedback recommendations: Create a culture of providing and receiving feedback, provide feedback throughout the team-planning process, and ensure feedback supports information-sharing efforts
Survey item for members: "Does this team member contribute to team-planning and coordination efforts?"
Survey item for teams: "Is this team effectively sharing knowledge and information to promote performance?"

Team process: Cooperation
Strategic feedback recommendations: Communicate uncooperative behaviors with team members; use feedback to promote motivation toward team goals
Survey item for members: "How could this team member improve cooperation toward shared goals?"
Survey item for teams: "Does this team effectively work together to meet shared goals?"

Team process: Communication
Strategic feedback recommendations: Provide frequent feedback directed toward member behaviors rather than members themselves
Survey item for members: "Does this team member effectively communicate with others?"
Survey item for teams: "Does this team's communication support its goals?"

Cognition

Team process: Shared mental models
Strategic feedback recommendations: Use team self-correction training to promote feedback to help members identify errors, exchange feedback, and plan for the future; use structured pre- and debriefs to provide feedback as tasks are completed
Survey item for members: "What strategies could this individual use to improve their performance on this task in the future?"
Survey item for teams: "What should this team continue doing in the future in order to maintain performance on this task?"
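A structure like Table 10.2 also suggests a simple data model for administering and scoring a team 360 instrument. The following sketch is illustrative only: the item wording is adapted from the table, but the tagging scheme, rater labels, and `summarize` helper are our own assumptions, not a prescribed implementation:

```python
from statistics import mean

# Illustrative only: items tagged by team process and by level ("member" vs.
# "team"), loosely following the organization of Table 10.2.
ITEMS = {
    "Q1": {"process": "affect", "level": "team",
           "text": "Does this team effectively communicate in order to avoid conflict?"},
    "Q2": {"process": "behavior", "level": "member",
           "text": "Does this team member contribute to team-planning and coordination efforts?"},
    "Q3": {"process": "cognition", "level": "team",
           "text": "What should this team continue doing to maintain performance?"},
}

def summarize(responses):
    """Average 1-5 ratings per (process, level) pair across all raters."""
    buckets = {}
    for ratings in responses.values():
        for item_id, rating in ratings.items():
            key = (ITEMS[item_id]["process"], ITEMS[item_id]["level"])
            buckets.setdefault(key, []).append(rating)
    return {key: round(mean(vals), 2) for key, vals in buckets.items()}

# Hypothetical raters drawn from different sources, as the chapter recommends.
responses = {
    "peer": {"Q1": 4, "Q2": 3, "Q3": 5},
    "leader": {"Q1": 3, "Q2": 4, "Q3": 4},
    "stakeholder": {"Q1": 5, "Q2": 3, "Q3": 4},
}
print(summarize(responses))
```

Grouping ratings by process and level, rather than by item alone, keeps the resulting reports aligned with the team processes the feedback is meant to improve.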


approach and to limit the length of conflicts. For example, feedback should be specific and constructive in order to alleviate conflict and promote communication between team members. Feedback processes can be used to identify the source of conflict by encouraging team members to share their frustrations and to help team members engage in the behaviors that will alleviate this conflict. While peer-to-peer feedback can be particularly useful in conflict identification, team leaders are uniquely positioned to provide an objective and holistic view of the conflict for involved parties to promote conflict resolution. Feedback can thus serve as a preventive measure to avoid conflict or as a reactive tool to help team members work through disagreements.

Finally, research has identified collective efficacy, or a team's shared belief in its ability to achieve team goals, as a strong predictor of team performance. 360 Feedback has the power to foster collective efficacy in a team setting both initially as goals are created and throughout team development. For example, feedback during the goal-setting process can help ensure that team members choose goals within reach and can build a sense of collective efficacy together. Feedback throughout success and failure experiences can also help foster collective efficacy, as would be the case for individuals' self-efficacy (Kozlowski & Bell, 2007). For example, team debriefs during which members receive feedback from leaders and outside stakeholders can help teams understand how they may overcome these challenges in the future. Providing feedback that is constructive can also foster team self-efficacy, while destructive feedback can actually lower team self-efficacy and do more harm than good. Destructive feedback is most common in teams under high stress or experiencing conflict; however, awareness can help teams avoid destructive feedback.
In order to promote constructive feedback, teams should focus on framing feedback in terms of growth and development, ultimately promoting a greater sense of collective efficacy. The nature of feedback is as important as the amount and sources of feedback in the development of self-efficacy. The final section of this chapter provides further insight into the types and sources of feedback and their impact on team development.

Team Behavior

In addition to the affective correlates and predictors of team performance, a variety of team behaviors have been linked to performance. Most prominently, research has pointed to team coordination, cooperation, and communication as predictors of performance. 360 Feedback can be combined with other behavioral interventions to improve these processes. Team coordination involves the management of team interdependencies and is an important correlate of team performance. A number of interventions to enhance team


coordination have been identified, including information sharing, planning, and asking for feedback. A strategic 360 Feedback process can target these areas specifically, thus improving coordination. For example, team members and leaders should provide feedback throughout the planning process and should ensure that feedback supports information-sharing efforts overall. In addition, strategic 360 processes should promote a culture of feedback among team members so that members feel comfortable soliciting feedback from teammates even outside feedback periods.

While coordination focuses on the management of team interdependencies, cooperation involves team member contribution to tasks and is also associated with both team effectiveness and performance. Cooperation has been linked to team performance and even to investment and sales growth in top management teams (Smith et al., 1994). Although the link between feedback and cooperation is less clear, 360 Feedback may be a useful tool to communicate uncooperative behaviors or tendencies in groups and to promote motivation to pursue team goals.

Team communication is tightly intertwined with cooperation and coordination, as well as with many of the other team processes outlined in this chapter. Though task-related communication can be beneficial to team performance, exact amounts and types of communication vary from team to team based on contextual factors and team type. However, we posit that frequent and well-directed feedback will improve team communication and promote problem-solving, thus enhancing team performance as a whole.

Team Cognition

Team cognition, or the shared understanding of tasks, team, and situation, includes shared mental models, situational awareness, and team communication (Salas & Fiore, 2004) and is a fundamental driver of performance. Team cognition is a particularly important characteristic in highly interdependent teams because the members rely on outcomes such as coordination and compatible decision-​making for success. Moreover, well-​established shared mental models can improve functioning for teams under stress (Wilson, Salas, Priest, & Andrews, 2007), and breakdowns in shared mental models lead to poor performance and errors (Stout, Cannon-​Bowers, Salas, & Milanovich, 1999). In order to improve team performance, strategic 360 Feedback processes should target team cognition and promote the creation of shared mental models and team communication. To start, a number of interventions involving feedback have been identified to improve team cognition. For example, team self-​correction training involves reviewing events, identifying errors, exchanging feedback, and planning for the future. Structured


168 // 360 for Development

pre- and debriefs have also been used as an effective approach to enhance team cognition and performance (Smith-Jentsch, Zeisig, Acton, & McPherson, 1998). Teams should incorporate 360 Feedback focused on self-correction and reviewing performance in order to improve cognition.

GIVING FEEDBACK

While many developmental activities are beneficial to team performance, research has provided particularly strong evidence for the role of feedback in team processes and performance. Though much of the academic research on team feedback focuses on feedback type and source in isolation, we assert that taking a strategic approach to 360 team feedback garners the greatest benefit for teams and organizations. For example, while receiving feedback on team outcomes from a team leader may improve performance, a more strategic approach could combine feedback on team outcomes and processes and provide feedback from leaders, peers, and outside stakeholders. Next, we provide an overview of types and sources of team feedback and describe how practitioners can use this knowledge to design a 360 Feedback process for teams in their organization.

Levels and Sources of 360 Feedback

In order to begin designing a 360 Feedback process for teams, organizations must answer the questions “For whom?” and “By whom?” Team feedback can be given at the individual level, the team level, or both levels. Though the implementation of team-​level feedback in this context is fairly straightforward, it is important to understand the nature of individual-​level feedback in a team environment. Rather than focusing on individuals in isolation, this feedback may focus on the role that an individual plays in a team. In fact, this approach is particularly strategic for teams interested in improving team-​level rather than individual-​level outcomes as it links feedback more clearly to improving team processes. For example, appropriate questions might include, “Is the team member pulling their weight?” or “Does the team member promote a sense of trust with other team members, or does the individual negatively impact team productivity by engaging in interpersonal conflict?” rather than “Is this team member meeting individual performance goals?” While research indicates that individual-​level feedback can enhance performance improvements attained by group feedback, it is not critical in the maintenance of these improvements (Goltz, Citera, Jensen, Favero, & Komaki, 1990). Although team members can learn from each other in unique ways, team leaders can play a particularly important role in the feedback process. Team leaders have the power


Team Development With Strategic 360 Feedback  //​ 169

to set norms and expectations for the team and can enhance psychological safety and promote idea sharing and feedback by being open to feedback themselves. In addition, organizations can invest in the development of team leaders to improve these capabilities and help them to improve team functioning (Marks et al., 2000). While 360 processes implemented in teams typically focus on feedback from team leaders and other members, team-based 360 Feedback may be available from an even broader variety of sources. Although leaders and fellow members may provide the most valuable feedback for team development and performance, looking to both internal and external sources of feedback may be useful. External sources can include organizational leaders, other members of a department, or outside stakeholders (e.g., customers) and should be selected based on a match between the expertise or perspective of the stakeholder and the type or topic of feedback the team seeks. For example, while a team leader knows the group members and their processes intimately, an outside stakeholder like a project manager may be able to provide more objective feedback on team processes or performance. Organizations seeking to implement strategic feedback processes must link the level and source of feedback to overall strategy. While feedback focused on both the individual and team level can be integrated in an approach with feedback from both internal and external stakeholders, the most strategic implementations will align the format of feedback with broader organizational goals. For example, professionals creating 360 Feedback processes for a highly interdependent surgical team may focus exclusively on team-level feedback and may integrate feedback from members of the surgical team as well as outside observers.

Types of Feedback

In addition to examining feedback source, research examining the benefits of team feedback describes multiple types of feedback for teams, each of which can play an important role in designing a strategic 360 Feedback process. In this chapter, we focus on three types of feedback prominent in the literature on team feedback: process, outcome, and diagnostic feedback. We also distinguish between evaluative and developmental feedback in the team context, noting the outcomes for each and emphasizing the importance of constructive feedback in all scenarios.

Process Feedback

Process feedback conveys information about task-related behaviors, actions, and strategies, as well as about interpersonal behaviors, team relationships, and team member motivation (e.g., Druskat & Wolff, 1999; McLeod & Liker, 1992). Team


process feedback is unique in that it strays away from individual-level, task-related information and focuses more exclusively on interactions within the team. Process feedback can have substantial positive effects on team functioning and tends to have a particularly positive impact on motivation, satisfaction, and performance (Geister, Konradt, & Hertel, 2006). Teams seeking process feedback might prompt raters with questions such as, "How effectively does this team communicate?" or "Do team members promote a sense of interpersonal trust through their behaviors?"

Outcome Feedback

While process feedback focuses on behaviors, feedback regarding team-level performance, or outcome feedback, is a particularly important tool for improving team performance over time (e.g., Burgio et al., 1990). Outcome feedback can help teams understand the links between their behaviors and results on team performance metrics. As a result, this type of feedback is most beneficial when combined with and linked to team goal setting (Tubbs, 1986), as the feedback can then evaluate specific performance metrics against preestablished goals. Outcome feedback has been linked to team member motivation and promotes increased effort, in turn leading to increased team performance. It is best aligned closely with tasks, as more high-level feedback can be distracting and can reduce team member motivation (Kluger & DeNisi, 1996; Locke & Latham, 1990). Outcome feedback should be tailored to the performance goals of a team. For example, outcome feedback might include items such as, "How effectively did this team meet target sales goals?" or "Did this team provide adequate support to customers during the last quarter?" Outcome feedback is best delivered at the team level rather than the individual level (Goltz et al., 1990), as it can help teams conceptualize future improvement and better understand how overall team processes might have affected the team's outcomes. Additionally, outcome feedback is particularly influential when delivered by team leaders (Duarte & Snyder, 2001), who have an understanding of broader team functioning and are better equipped to evaluate team member performance. However, teams may solicit outcome feedback from a variety of sources, including both internal and external stakeholders, particularly if team outcomes are externally oriented, as for a customer service team.

Diagnostic Feedback

Team diagnostic feedback identifies obstacles to performance and highlights successes directly after a performance period so that teams can refine their behaviors for the next performance period. Though diagnostic feedback is in some ways an amalgamation of


process and outcome feedback, this type is distinguished temporally from other types of feedback. Diagnostic feedback may be particularly beneficial to teams whose work is project or critical incident based and where there are clear checkpoints during which the team can reflect on the last phase of work, give and receive feedback, and implement feedback for the next project phase. Diagnostic feedback lends itself to team self-correction, a strategy that allows teams to discuss and correct behaviors, and has been linked to improved team performance (Rasker, Post, & Schraagen, 2000). Diagnostic feedback tools should focus on reflection and improvement and might include items such as, "What did this team do well in the last quarter?" or "How could this team more effectively overcome obstacles in the future?"

Evaluative and Developmental Feedback

The nature of feedback can also vary, and it is important to distinguish between evaluative and developmental feedback. Evaluative feedback is generally used as part of a performance measurement tool involving comparisons and metrics. In a team context, evaluative feedback can be used to share performance information from the perspective of other team members and leaders. Developmental feedback focuses on skills and behaviors that can be developed over a longer period of time and is not necessarily tied to performance metrics. When used in combination, evaluative and developmental feedback have proven beneficial to team performance (Dominick, Reilly, & McGourty, 1997). It is important to note that feedback must be constructive to maximize its impact on team development and performance. Giving process and outcome feedback that is clear, concise, and constructive allows team members to gain a deeper understanding of how their behaviors drive performance, further promoting team development. To ensure that feedback is directed at a task or behavior rather than at the person, feedback should also be based on developmental or training objectives and be linked to specific skill performance (Salas et al., 2002).

SUMMARY AND INSIGHTS

This chapter provided key evidence-​based insights for the integration of 360 Feedback with team development. We provided an overview of the process of team development and learning and noted strategic interventions for feedback in each process described. Next, we provided an overview of team processes, including team affect, behavior, and cognition, describing the impact of feedback on the development of each process, and linked feedback to team performance in these realms. Finally, we provided an overview of the sources and types of feedback and their impact on team and team member


development. Next, we provide three key insights from this chapter for practitioners interested in implementing 360 Feedback processes to enhance team effectiveness in their organizations:

• 360 Feedback is a fundamental driver of team learning and development as well as team performance.
• Organizations can strategically implement 360 Feedback processes by targeting the team processes and interventions most important to development.
• In every implementation of 360 Feedback, organizations should carefully consider the type and source of feedback provided to employees.

The team setting provides a particularly impactful opportunity to implement strategic 360 Feedback processes. Peer-to-peer feedback is especially valuable in teams because members work closely together to achieve a common goal. To facilitate development through peer-to-peer feedback, organizations can plan its timing and delivery strategically. In addition, feedback from team leaders and even outside sources can be integrated into a 360 Feedback process to promote development.

ACKNOWLEDGMENT

This work was partially supported by funding from the National Aeronautics and Space Administration (grants NNX09AK48G and NNX16AP96G).

REFERENCES

Barrick, M. R., Stewart, G. L., Neubert, M. J., & Mount, M. K. (1998). Relating member ability and personality to work-team processes and team effectiveness. Journal of Applied Psychology, 83(3), 377.
Bouchard, T. J. (1972). A comparison of two group brainstorming procedures. Journal of Applied Psychology, 56(5), 418.
Buller, P. F., & Bell, C. H. (1986). Effects of team building and goal setting on productivity: A field experiment. Academy of Management Journal, 29(2), 305–328. https://doi.org/10.5465/256190
Burgio, L. D., Engel, B. T., Hawkins, A., McCormick, K., Scheve, A., & Jones, L. T. (1990). A staff management system for maintaining improvements in continence with elderly nursing home residents. Journal of Applied Behavior Analysis, 23(1), 111–118. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1286215/
Dominick, P. G., Reilly, R. R., & McGourty, J. W. (1997). The effects of peer feedback on team member behavior. Group & Organization Management, 22(4), 508–520.
Druskat, V., & Wolff, S. B. (1999). Effects and timing of developmental peer appraisals in self-managing work groups. Journal of Applied Psychology, 84(1), 58–74. https://doi.org/10.1037//0021-9010.84.1.58
Duarte, D. L., & Snyder, N. T. (2001). Mastering virtual teams: Strategies, tools and techniques that succeed. New York, NY: Wiley.


Eden, D. (1985). Team development: A true field experiment at three levels of rigor. Journal of Applied Psychology, 70(1), 94.
Edmondson, A. (2003). Speaking up in the operating room: How team leaders promote learning in interdisciplinary action teams. Journal of Management Studies, 40(6), 1419–1452. https://doi.org/10.1111/1467-6486.00386
Geister, S., Konradt, U., & Hertel, G. (2006). Effects of process feedback on motivation, satisfaction, and performance in virtual teams. Small Group Research, 37(5), 459–489. https://doi.org/10.1177/1046496406292337
Goldstein, I. L., & Ford, J. K. (2002). Training in organizations: Needs assessment, development, and evaluation (4th ed.). Belmont, CA: Wadsworth/Thomson Learning.
Goltz, S. M., Citera, M., Jensen, M., Favero, J., & Komaki, J. L. (1990). Individual feedback. Journal of Organizational Behavior Management, 10(2), 77–92. https://doi.org/10.1300/J075v10n02_06
Hackman, J. R. (1987). The design of work teams. In J. Lorsch (Ed.), Handbook of organizational behavior (pp. 315–342). Englewood Cliffs, NJ: Prentice-Hall.
Jarvenpaa, S. L., & Leidner, D. E. (1998). Communication and trust in global virtual teams. Journal of Computer-Mediated Communication, 3(4), JCMC346. https://doi.org/10.1111/j.1083-6101.1998.tb00080.x
Klein, C., DiazGranados, D., Salas, E., Le, H., Burke, C. S., Lyons, R., & Goodwin, G. F. (2009). Does team building work? Small Group Research, 40(2), 181–222. https://doi.org/10.1177/1046496408328821
Kluger, A., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
Kozlowski, S. W. J., & Bell, B. S. (2007). A theory-based approach for designing distributed learning systems. In S. M. Fiore & E. Salas (Eds.), Toward a science of distributed learning (pp. 15–39). Washington, DC: American Psychological Association. https://doi.org/10.1037/11582-002
Kozlowski, S. W. J., Gully, S. M., Nason, E. R., & Smith, E. M. (1999). Developing adaptive teams: A theory of compilation and performance across levels and time. In D. R. Ilgen & E. D. Pulakos (Eds.), The changing nature of work performance: Implications for staffing, personnel actions, and development (pp. 240–292). San Francisco, CA: Jossey-Bass.
Locke, E. A., & Latham, G. P. (1990). Work motivation and satisfaction: Light at the end of the tunnel. Psychological Science, 1(4), 240–246.
Marks, M. A., Zaccaro, S. J., & Mathieu, J. E. (2000). Performance implications of leader briefings and team-interaction training for team adaptation to novel environments. Journal of Applied Psychology, 85(6), 971.
McLeod, P. L., & Liker, J. K. (1992). Electronic meeting systems: Evidence from a low structure environment. Information Systems Research, 3(3), 195–223.
Mullen, B., & Copper, C. (1994). The relation between group cohesiveness and performance: An integration. Psychological Bulletin, 115(2), 210–227.
Olson, G. M., & Olson, J. S. (2000). Distance matters. Human–Computer Interaction, 15(2), 139–178. https://doi.org/10.1207/S15327051HCI1523_4
Rasker, P. C., Post, W. M., & Schraagen, J. M. C. (2000). Effects of two types of intra-team feedback on developing a shared mental model in Command & Control teams. Ergonomics, 43(8), 1167–1189. https://doi.org/10.1080/00140130050084932
Salas, E., Burke, C. S., & Cannon-Bowers, J. A. (2002). What we know about designing and delivering team training: Tips and guidelines. In K. Kraiger (Ed.), Creating, implementing, and managing effective training and development: State-of-the-art lessons for practice (pp. 234–259). San Francisco, CA: Jossey-Bass.
Salas, E., & Fiore, S. M. (2004). Team cognition: Understanding the factors that drive process and performance. Washington, DC: American Psychological Association. https://doi.org/10.1037/10690-000
Salas, E., Rozell, D., Mullen, B., & Driskell, J. E. (1999). The effect of team building on performance: An integration. Small Group Research, 30(3), 309–329. https://doi.org/10.1177/104649649903000303


Shuffler, M. L., DiazGranados, D., & Salas, E. (2011). There's a science for that: Team development interventions in organizations. Current Directions in Psychological Science, 20(6), 365–372. https://doi.org/10.1177/0963721411422054
Smith, K. G., Smith, K. A., Olian, J. D., Sims, H. P., O'Bannon, D. P., & Scully, J. A. (1994). Top management team demography and process: The role of social integration and communication. Administrative Science Quarterly, 39(3), 412–438. https://doi.org/10.2307/2393297
Smith-Jentsch, K. A., Zeisig, R. L., Acton, B., & McPherson, J. A. (1998). Team dimensional training: A strategy for guided team self-correction. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 271–297). Washington, DC: American Psychological Association.
Somech, A., Desivilya, H., & Lidogoster, H. (2009). Team conflict management and team effectiveness: The effects of task interdependence and team identification. Journal of Organizational Behavior, 30(3), 359–378.
Stout, R. J., Cannon-Bowers, J. A., Salas, E., & Milanovich, D. M. (1999). Planning, shared mental models, and coordinated performance: An empirical link is established. Human Factors, 41(1), 61–71.
Tubbs, M. E. (1986). Goal setting: A meta-analytic examination of the empirical evidence. Journal of Applied Psychology, 71(3), 474–483.
Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63(6), 384–399.
Wilson, K. A., Salas, E., Priest, H. A., & Andrews, D. (2007). Errors in the heat of battle: Taking a closer look at shared cognition breakdowns through teamwork. Human Factors, 49(2), 243–256.


11

FROM INSIGHT TO SUCCESSFUL BEHAVIOR CHANGE
The Real Impact of Development-Focused 360 Feedback

KENNETH M. NOWACK

Strategic 360 Feedback systems (Chapter 2) have proliferated and are increasingly being used for diverse individual and organizational purposes (e.g., executive coaching, performance evaluation, talent management, and succession planning). Despite the widespread use of Strategic 360 Feedback, organizations still seem to ignore some of the evidence-based research highlighting the possible limitations, risks, and issues of this type of intervention, particularly when used for development purposes (Bracken & Rose, 2011; Bracken, Rose, & Church, 2016; Nowack & Mashihi, 2012). Under the right circumstances, feedback interventions designed strategically for human resources decision-making (e.g., succession planning) and development can indeed facilitate some of the conditions required for successful behavioral change (Bracken, Timmreck, Fleenor, & Summers, 2001; Nowack, 2008, 2009, 2017; Nowack & Mashihi, 2012; Smither, London, Flautt, Vargas, & Kucine, 2003). Yet, there are many studies showing that such processes sometimes create no measurable change whatsoever (Siefert, Yukl, & McDonald, 2003) or only small effects (L. E. Atwater, Waldman, Atwater, & Cartier, 2000) or may even have negative effects on both engagement and productivity (Nowack & Mashihi, 2012). Despite some limitations of 360 Feedback, talent management professionals can leverage this type of strategic intervention within organizations to


maximize both awareness and behavioral change by understanding and using comprehensive feedback and individual change models that build on the theoretical work of others (Gregory, Levy, & Jeffers, 2008; Joo, 2005; London & Smither, 2002; Nowack, 2008). This chapter attempts to provide an integrated and theoretically derived individual change framework within Strategic 360 Feedback interventions (Chapter 2) to facilitate successful behavioral change despite realistic issues and potential challenges (Nowack, 2009). A new individual change model is briefly described, and issues specific to each stage are discussed. Practitioners will be provided with practical tips and suggestions to maximize understanding, acceptance, motivation, and successful goal accomplishment: the real impact of Strategic 360 Feedback interventions that also have a development component.

THE PROXIMAL AND DISTAL OUTCOMES OF STRATEGIC 360 FEEDBACK: THE 3E MODEL OF INDIVIDUAL BEHAVIOR CHANGE

One important goal of Strategic 360 Feedback interventions is successful change of behavior on the job by feedback recipients (Bracken et al., 2001; Joo, 2005; London & Smither, 2002). Initiating new behaviors and sustaining them over time are particularly challenging for most individuals (Nowack, 2017). The likelihood that an employee will or will not engage in a particular behavior following a 360 Feedback discussion is influenced heavily by their predictions of the effects and consequences of that behavior in relation to their own professional goals and objectives. Building on the strategic organizational 360 Feedback process models of Bracken et al. (2001), London and Smither (2002), and Gregory et al. (2008), a new individual behavioral change model has been proposed (Nowack, 2009, 2017) based heavily on evidence-based research in the health psychology and behavioral medicine literature. The enlighten, encourage, and enable model (Figure 11.1) draws on the most often applied theories of individual behavioral change, including the theory of planned behavior (Ajzen, 1991), self-efficacy and social cognitive theory (Bandura, 1977), the health belief model (Becker, 1974), and the transtheoretical model of change (Prochaska & Velicer, 1997). Each of these theories can help practitioners in organizations extend the utility of 360 Feedback beyond awareness to enhanced effectiveness or impact within strategic applications (Chapter 3). Each stage of the model, and the issues involved in translating insight from 360 Feedback into actual behavior change, is briefly discussed.


Impact of Development-Focused 360 Feedback  //​ 177

[FIGURE 11.1 Enlighten, encourage, and enable individual change model to leverage the impact of 360 Feedback. Enlighten: accurate insight, identifying signature strengths, ideal vs. real self. Encourage: motivation, self-efficacy, skill building, "nudge" reminders. Enable: implementation intentions, practice plans, social support, relapse prevention, goal evaluation.]

STAGE 1: ENLIGHTEN

The enlighten stage is about the client understanding and accepting feedback as well as enhancing self-insight/self-awareness about signature strengths and potential development areas. The "what's in it for me" (WIFM) is a critical leverage point for practitioners to be successful in behavioral change efforts with employees using 360 Feedback interventions. However, insight and self-awareness are only a first step: a necessary, but not sufficient, condition for successful behavioral change to take place. During the enlighten stage, the practitioner uses the data from the 360 Feedback process to help the employee interpret the meaningfulness of different rater perspectives compared to their own self-perceptions. One important role of the practitioner during this stage is to help manage potential employee reactions to ensure that the feedback does not elicit disengagement or cause the employee to ignore it or to overemphasize it (Brett & Atwater, 2001; Sedikides, Gaertner, & Toguchi, 2003; Smither et al., 2003). Reactions to any 360 Feedback process might range from pleasant surprise to hurt, anger, and even depression, with predictable consequences for performance, health, and psychological well-being (Eisenberger, Lieberman, & Williams, 2003). As Joo (2005) has pointed out, the feedback orientation and personality of the employee will directly affect the employee's openness to the input, suggestions, and feedback, which can affect the overall effectiveness of the intervention. These findings suggest an important role for the practitioner in targeting the emotional reactions and consequences of engaging in new behaviors as well as assessing "readiness-to-change" stages in employees. In organization-wide 360 Feedback interventions, information about the focal leader's personality or feedback orientation may not be readily available.


However, if an internal or external coach is part of the feedback process, it might be valuable for them to have such information to leverage successful behavior change efforts. Additionally, there are three important barriers or issues in the enlighten stage that can interfere with employees' understanding and acceptance of feedback, without which behavior change is unlikely to occur. Each of these is briefly discussed next.

Issue 1: The Neurobiology of Positive and Negative Feedback

In general, poorly designed feedback assessments and interventions can increase disengagement and contribute to poor individual and team performance (Bono & Colbert, 2005; DeNisi & Kluger, 2000; Ilgen & Davis, 2000). Several studies have also shown that individuals can experience strong discouragement and frustration when 360 Feedback results are not as positive as the feedback recipient expected (L. E. Atwater & Brett, 2005). For example, it is common in most 360 Feedback assessments to include one or more open-ended questions that are typically answered voluntarily and confidentially by raters. A study by Smither and Walker (2004) analyzed the impact of upward feedback ratings, as well as narrative comments, over a 1-year period for 176 managers. The number of raters providing comments per target manager ranged from 1 to 12 (M = 3.10, SD = 2.21). The number of comments per target manager ranged from 1 to 35 (M = 7.35, SD = 6.14). Seventy percent of the comments were coded as 3.5 or higher on a 5-point favorability scale (1 = unfavorable; 5 = favorable). The study found that those who received a small number of unfavorable, behaviorally based comments improved significantly more than other managers, but those who received a large number (relative to positive comments) declined significantly more in performance than other managers (Smither & Walker, 2004). Research on positive and negative emotions suggests that a person, relationship, or group flourishes best when positive emotions occur with a frequency two to five times that of negative emotions (Boyatzis, Rochford, & Taylor, 2015; Fredrickson, 2009; Gottman, Murray, Swanson, Tyson, & Swanson, 2002; Schwartz et al., 2002). Unfortunately for talent management professionals, the Smither and Walker (2004) study did not quantify the number or ratio of negative to positive comments that might be the "tipping point" for performance declines.
Neuroscience research sheds some light on why feedback perceived to be evaluative, judgmental, or negative is potentially emotionally harmful (Nowack, 2014). In general, stressors, including feedback processes that induce greater social-​evaluative threat, elicit significantly larger cortisol and ambulatory blood pressure responses (Dickerson & Kemeny, 2004; Lehman & Conley, 2010), as well as trigger the same neurophysiologic pathways associated with physical pain and suffering (Eisenberger et  al., 2003).


Additionally, social stressors at work in terms of tension, conflict, and unfairness have repeatedly been shown to have stronger effects on health and psychological well-being, over and above task-related stressors such as role overload (Sonnentag & Frese, 2013). Neuroscience research using functional magnetic resonance imaging (fMRI) by Jack, Boyatzis, Khawaja, Passarelli, and Leckie (2013) compared a problem-centered and solution-focused coaching style (NEA; coaching to the negative emotional attractor) with a future vision or compassion-based coaching style (PEA; coaching to the positive emotional attractor) on both neural activation and perceived coaching outcomes. The PEA coach was perceived to be considerably more inspirational as well as more trusting and caring. Additionally, compared to the NEA coaching approach, a PEA coaching style resulted in greater engagement of the brain areas associated with motivation, goal setting, and positive affect, enabling the person to be more open to new ideas and willing to initiate new development goals. These findings suggest that using data-based feedback (e.g., 360 Feedback or assessment center results) to help coach a person, focusing on numerical data, graphs, or open-ended comments in feedback reports, will typically lead the person to emphasize perceptual gaps, weaknesses, or specific positive and negative comments. This largely activates several specific decision-making and problem-solving areas of the brain (TPN; task positive network, or executive function) as the person tries to understand and make sense of the feedback, which is consistent with a problem-centered and solution-focused coaching style. Activation of these brain areas may interfere with openness to accepting the feedback and with future goal setting (Jack et al., 2013).
These research findings (Jack et al., 2013) suggest that focusing on the personal vision of a focal leader or participant before presenting any numerical or graphical feedback in coaching engagements may maximize the chance of creating a positive and strongly desired context for accepting the feedback and enhancing motivation to set new behavior change goals. Engaging the TPN too often, or exclusively, in a feedback session with employees may prevent feelings of safety and interpersonal trust from emerging, possibly interfering with acceptance of performance evaluation or 360 Feedback results.

Issue 2: Overestimators Versus Underestimators

It has been estimated that 65% to 75% of the employees in any given organization report that the worst aspect of their job is their immediate boss (Hogan, 2007, p. 106). In fact, estimates of the base rate for managerial incompetence in organizations range from 30% to 75%, with the average level of poor leadership estimated at about 50% (Hogan & Kaiser, 2005). Many of these incompetent leaders tend to have unrealistic and inflated


360 for Development

views of their skills and abilities, and this pattern appears fairly common in 360 Feedback research (L. E. Atwater & Brett, 2005). It is important to point out that some leaders certainly learn that it is "politically incorrect" to present themselves as outstanding and that some "overestimation" of skills and abilities can be a direct result of the design of the feedback intervention or even of the assessment itself (Bracken & Rose, 2011). However, differences in self- and other perceptions are important to explore (Chapter 16; Yammarino & Atwater, 2001) and may reflect more than mere "gaming" of a 360 process or purposeful image management by participants. In general, a synthesis of research comparing self-evaluations of skills (e.g., social skill, intelligence) with objective performance measures across 22 meta-analyses suggests that people have only modest insight into their true abilities (Zell & Krizan, 2014). Some self-enhancers, or overestimators, are truly blind to their own strengths, are generally less receptive to feedback from others, react negatively to feedback (Brett & Atwater, 2001), and are at risk for potential career derailment (Quast, Center, Chung, Wohkittel, & Vue, 2011). As a result, practitioners working with self-enhancers may find it difficult to identify the WIFM ("what's in it for me") that would lead such employees to accept the perceptions of others and commit to modifying their behavior to better meet the expectations and needs of those working with them. For example, Brett and Atwater (2001) found that managers who rated themselves higher than others did (overestimators) reported significantly more negative reactions to the 360 Feedback process. They noted specifically that "negative feedback (i.e., ratings that were low or that were lower than expected) was not seen as accurate or useful, and it did not result in enlightenment or awareness but rather in negative reactions such as anger and discouragement" (p. 938).
Additionally, a study of 1,400 managers by Atwater et al. (1998) found that leadership effectiveness was highest when both self- and other ratings were high and when self-ratings were substantially lower than other ratings (severe underestimation). Effectiveness was lowest for overestimators whose self-ratings were only moderate while subordinate ratings were low. Goffin and Anderson (2007), in their study of 204 managers, found that self-rating inflation was significantly correlated with low neuroticism but high achievement, self-esteem, and social desirability (i.e., projecting a socially desirable impression of one's behaviors to others). This personality profile suggests that self-enhancers might possess an exaggerated perception of their strengths, resulting in potential defensiveness and resistance during 360 Feedback discussions. It should also be noted that the pattern of high social desirability and low anxiety (repressive coping) has long been shown in the health psychology literature to be significantly associated with increased cardiovascular reactivity to stress, higher blood pressure, and poor overall


Impact of Development-Focused 360 Feedback

health outcomes (Mund & Mitte, 2012; Rutledge, 2006). This personality pattern among overestimators, found by Goffin and Anderson (2002), suggests that such employees might be at risk not only of derailing their careers but also of negative health outcomes. To date, no research has directly tested this hypothesis or considered that the most vulnerable overestimator might have a personality profile characterized as high in social desirability, low in negative affect (anxiety), and simultaneously high in positive affect (i.e., a "superrepressor"). The research of Goffin and Anderson (2002) supports the notion that some overestimation may not merely result from conscious "gaming" of the 360 Feedback system and that the tendency to self-enhance might be driven, in part, by personality characteristics with importance beyond job performance. As a result, talent management professionals designing and implementing strategic company-wide 360 interventions should become familiar with the impact of client self-enhancement (both its magnitude and direction) on the understanding and acceptance of, and actions taken following, 360 Feedback, as well as with how it might predict job performance, health, and behavioral goal setting. Training for both talent management professionals and focal leaders or participants should cover self-enhancement in general and its implications for goal setting and follow-up with invited raters, to enhance the overall effectiveness of the 360 Feedback intervention and facilitate successful behavior change efforts. Another common form of self-other incongruence in 360 Feedback processes occurs when clients rate themselves significantly lower than others rate them. In general, such underestimators are typically viewed as possessing strengths but, whether consciously or not, failing to fully recognize or acknowledge them relative to those giving them feedback (Nowack, 2009).
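In practice, the overestimator/underestimator distinction rests on a simple self-other discrepancy score: the mean of self-ratings minus the mean of all other raters' ratings. The sketch below illustrates one way a practitioner might compute this; the 0.5-point cutoff on a 5-point scale is an illustrative assumption, not a published standard, and researchers have operationalized agreement categories in various ways (e.g., as fractions of a standard deviation).

```python
from statistics import mean

def classify_self_other_gap(self_ratings, other_ratings, threshold=0.5):
    """Classify a focal leader as an overestimator, underestimator, or
    in-agreement rater based on the gap between mean self-ratings and
    mean ratings from all other raters.

    The 0.5-point threshold (on a 5-point scale) is an illustrative
    assumption; practitioners often derive the cutoff from the rating
    scale's standard deviation instead.
    """
    gap = mean(self_ratings) - mean(other_ratings)
    if gap > threshold:
        return "overestimator"
    if gap < -threshold:
        return "underestimator"
    return "in agreement"

# A leader who averages 4.8 across items while all other raters average 3.6
print(classify_self_other_gap([4.8, 4.7, 4.9], [3.5, 3.7, 3.6]))  # overestimator
```

Both the sign and the magnitude of the gap matter in the research reviewed here, so practitioners typically report the raw discrepancy alongside the category label.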
Furthermore, research by Goffin and Anderson (2007) indicated that underestimators score significantly higher on negative affect than overestimators, suggesting they are likely to be more emotionally reactive, anxious, and nervous (i.e., higher in neuroticism) when interpreting their feedback results. Nowack (2009) reported that underestimators are typically highly perfectionistic; expect high performance from themselves and others; are hypervigilant for faults, criticism, and potential deficits in the feedback they receive; and tend to reframe such feedback, interpreting their strengths as inflated by others and perhaps not completely accurate. From a practical perspective, talent management professionals should be aware that underestimators are likely to be highly perfectionistic and self-critical and to express high negative affect, making them likely to dismiss the strengths perceived by others. Underestimating employees should not be expected to see their feedback in balance; talent management professionals should anticipate that these clients will accentuate and focus on the negative despite feedback from others that
they are actually performing strongly or possess high competence in the particular skills and abilities being rated. The discrepancy between self-ratings and other ratings can also affect both emotional reactions and readiness to change behavior. Research on the association between affect and behavioral change suggests mixed findings. For example, L. E. Atwater and Brett (2005) suggested that leaders who received low ratings and overrated themselves were actually more motivated to change than leaders who received low ratings and also rated themselves low. However, these overestimators also had more negative reactions (e.g., were angrier) than those who did not overrate themselves relative to others. In contrast, other research suggests that overestimators are significantly less likely to engage in the developmental planning following negative feedback that is so important for the distal goal of successful behavior change (Woo, Sims, Rupp, & Gibbons, 2008).

Issue 3: Culture and the Acceptance of Feedback

There is increasing use of Strategic 360 Feedback interventions across cultures and countries as multinational companies deploy the intervention throughout their entire organizations (Bracken et al., 2016). Recent research suggests 360 Feedback can have positive business outcomes in companies outside North America (Kim, Atwater, Patel, & Smither, 2016). Some differences in 360 Feedback ratings and their interpretation should be expected in other cultures. Several cultural dimensions have been thoroughly studied (Hofstede & McCrae, 2004) and would appear meaningful for 360 ratings (self and other), including individualism versus collectivism, power distance, uncertainty avoidance, short-term versus long-term orientation, and gender egalitarianism. Varela and Premeaux (2008), in their sample of Latin American managers, found the least discrepancy between peer and self-ratings; in their analysis, direct reports gave the highest ratings to their managers in this highly collectivistic, high-power distance culture relative to studies of countries that were less collectivistic and lower in power distance. Cultural differences between geographic regions in Asia have also been associated with patterns of self-ratings of managerial performance (Gentry, Yip, & Hannum, 2010). These researchers found that significant self-other discrepancies were wider in high-power distance and individualistic cultures, mainly because of the self-ratings rather than the ratings of others. In a comparison of US managers (N = 22,362) with an Asian sample of 3,810 managers from five countries, Quast et al. (2011) found that self-other discrepancies in all countries were significantly associated with bosses' predictions of how likely a manager was to experience future career derailment. These results provide
support for earlier findings that self-other rating discrepancies are associated with potential derailment in the United States and extended them to the five Asian countries included in the study (China, South Korea, Japan, India, and Thailand). Atwater, Wang, Smither, and Fleenor (2009) explored self- and subordinate ratings of leadership in 964 managers from 21 countries in relation to assertiveness, power distance, and individualism-collectivism. Self- and other ratings were more positive in countries characterized as high in both assertiveness and power distance. However, L. Atwater, Waldman, Ostroff, Robie, and Johnson (2005) found varying multisource rating patterns (i.e., self-other agreement) in different cultures. Their study showed that links between self-other discrepancies and managerial effectiveness varied greatly: the discrepancies were related to effectiveness in the United States but not in the European countries studied (Germany, Denmark, Italy, and France), where only others' ratings of leadership predicted managerial effectiveness. In one of the broadest studies to date, Eckert, Ekelund, Gentry, and Dawson (2010) investigated self-observer rating discrepancies on three leadership skills using data from 31 countries. They reported that rater discrepancy on a manager's decisiveness and composure was higher in high power distance cultures (e.g., Asia) than in low power distance cultures (e.g., the Americas). Self-observer rating discrepancy has also been shown to be higher (i.e., bigger or wider) for US managers than for European managers on 360 ratings of managerial derailment behaviors (Gentry, Hannum, Ekelund, & de Jong, 2007). At least in the United States, higher disagreement between self- and observer ratings is generally associated with lower effectiveness and job performance (L. E. 
Atwater & Brett, 2005; Ostroff, Atwater, & Feinberg, 2004), although some contradictory evidence has been found in other countries (L. Atwater et al., 2009). Shipper, Hoffman, and Rotondo (2007) compared the cultural relevance of 360 Feedback across five countries (United States, Ireland, Israel, Philippines, and Malaysia); their findings not only supported the overall effectiveness of the 360 Feedback process but also revealed important differences. Their study suggested that the 360 Feedback process is relevant in all cultures but most effective in those low in power distance and high in individualistic values (e.g., the United States vs. the Philippines). Finally, earlier research on 360 Feedback across 17 countries by Robie, Kaster, Nilsen, and Hazucha (2000) suggested that there were more similarities than differences across countries. For example, the ability to solve complex problems and learn quickly appears to be universally predictive of leader effectiveness across cultures both high and low in power distance. Taken together, these cross-cultural 360 Feedback studies suggest that cultural factors should be considered as potential barriers to understanding and accepting results, and as potentially impacting goal setting.


For example, alternative competency models defining cross-cultural leaders might be strongly considered for future 360 Feedback interventions, given the lack of a universal taxonomy or systematic framework for evaluating the content coverage of such assessments (Holt & Seki, 2012). Additionally, talent management professionals should understand the cultures they are working in when designing and using 360 Feedback interventions. In high-power distance cultures (e.g., Mexico, China), clients can be expected to emphasize boss ratings and comments more than those of any other rater group during debriefs. In collectivist cultures (e.g., Japan), providing candid upward feedback to bosses is often met with some resistance in practice, limiting the results that might be used in Strategic 360 Feedback interventions (e.g., coaching, succession planning). As such, talent development professionals designing and implementing 360 Feedback interventions might consider modifying a system-wide approach to accommodate local culture and norms. For example, in collectivistic and high-power distance cultures, it might be easier to gain initial compliance with such systems by using downward feedback processes or team-based feedback to engender greater trust and enhance participation rates. Additionally, in such cultures, when the primary goal of the 360 Feedback intervention is focused on personnel decisions (e.g., performance evaluation), it is recommended that some aspect of the feedback process be used simultaneously for development purposes to educate focal leaders or participants about the value of feedback for professional development.

STAGE 2: ENCOURAGE

Whether 360 Feedback is delivered as an organization-wide intervention or through individual coaching, getting focal leaders or participants to want to use the feedback for professional and career development is very important. One key to successful individual and team-based behavioral change following 360 Feedback is the development-planning process, which should minimally include "deliberate practice" of newly acquired skills or the leveraging of existing strengths. The talent management professional's role is to ensure the translation of the enlighten stage (i.e., insight and self-awareness) into the creation, in the encourage stage, of realistic, specific, and measurable development plans that the employee is genuinely motivated to work on. Goal setting and developmental planning are addressed in most feedback models (Gregory et al., 2008; Nowack, 2017; Chapter 3), and as previously pointed out, follow-up on the feedback report results by talent management professionals and staff or others (e.g., the focal leader's mentor or internal or external coach) appears to significantly help the employee translate awareness and motivation into specific behavioral change goals (Smither et al., 2003).


The encourage stage involves gaining the employee's commitment to a collaborative and explicit behavioral change plan. During this stage, the talent management professional, or even the focal leader's manager, explores signs of resistance and actively strengthens the clarity of action plan goals and the commitment to implement them. The employee's motivation to change is a function of the discrepancy between the action plan goal and the current situation. The manager or talent management professional should also help the employee judge whether the goal is realistic, as a large gap between current and ideal states may actually decrease confidence to sustain change over time, leading to possible relapse (Dimeff & Marlatt, 1998; Larimer, Palmer, & Marlatt, 1999; Parks & Marlatt, 1999). Finally, specific challenges in translating intention into practice often impede the success of 360 Feedback interventions (Nowack, 2017). Two common issues, or barriers, in translating insight into action during the encourage stage are briefly presented next.

Issue 1: How Long Does It Take to Form New Habits/Behaviors?

Neuroscience research gives talent management professionals a better understanding of how long it takes, on average, for new behaviors to become comfortable and automatic. It is important to point out that there is a difference between changes at the neural level (e.g., neurogenesis) and actual change in specific observable behaviors (Kleim & Jones, 2008). Research by Lally and associates (Lally, Van Jaarsveld, Potts, & Wardle, 2010) suggested that new behaviors can become automatic, but how quickly depends on the complexity of the new behavior a client is trying to put in place as well as on aspects of the client's personality. They studied volunteers who chose to change an eating, drinking, or exercise behavior and tracked their success. Participants completed a self-report diary on a website log and were asked to attempt the new behavior each day for 84 days. The findings indicated that it took 66 days, on average, for a new behavior to become automatic and natural, with a range of 18 to 254 days. The mean number of days varied with the complexity of the habit: drinking, 59 days; eating, 65 days; and exercise, 91 days. Although this study used lifestyle-oriented behavior change goals, the findings plausibly generalize to workplace behaviors, whether externally motivated (e.g., getting a promotion) or internally driven (e.g., enhancing specific leadership, interpersonal, or communication skills important for one's career and professional success). Taken together, talent management professionals should consider that translating a work- or team-oriented goal from 360 Feedback results into a new habit for most clients
might take longer than expected (approximately 2 months or more of deliberate practice). As such, gaining commitment to, and building in, enough actual practice following 360 Feedback can be a significant barrier to translating insight into successful behavior change, particularly in organization-wide interventions that lack accountability, follow-up, and evaluation of results.

Issue 2: Does Practice Make Perfect?

Assuming that an employee is motivated following 360 Feedback to set goals and work on specific behaviors long enough to enhance effectiveness, does this guarantee a change large enough to be noticed by others? In other words, does behavioral practice always lead to meaningful and significant behavior change? In a chapter of The Cambridge Handbook of Expertise and Expert Performance, co-edited by K. Anders Ericsson and colleagues, it was concluded that great performance comes mostly from two things: regularly obtaining concrete and constructive feedback, and deliberate practice on difficult tasks (Ericsson, 1996, p. 4). For example, the best figure skaters spent 68% of their practice time on the really hard jumps and routines, compared with only about 48% for less successful skaters. For leaders and other employees who are not attempting to become world-recognized professional athletes, initiating and deliberately practicing behaviors that are challenging and difficult will likewise contribute to professional and career success. In two related studies, Ericsson and colleagues (Ericsson, Krampe, & Tesch-Römer, 1993) recruited musicians with different levels of accomplishment and asked them to retrospectively estimate the amount of time per week they had engaged in deliberate practice. On average, the "best" violinists had accumulated over 10,000 hours of deliberate practice, compared with less than 8,000 hours for the "good" violinists and under 5,000 hours for the least accomplished group, the "teachers." Ericsson et al. (1993) concluded that "individual differences in ultimate performance can largely be accounted for by differential amounts of past and current levels of practice" (p. 392).
Brooke Macnamara, then at Princeton University, and her colleagues conducted the largest review and meta-analysis to date of studies exploring the relationship between deliberate practice and performance in several work and nonwork domains (Macnamara, Hambrick, & Oswald, 2014), also testing the widely accepted "10,000-hour rule" popularized in a number of books (e.g., Colvin, 2008; Gladwell, 2008). The percentage of variance accounted for by deliberate practice in five specific domains was: games, 26%; music, 21%; sports, 18%; education, 4%; and professions, less than 1% (Macnamara
et al., 2014). This research revealed that the strength of the relationship between deliberate practice and performance varied by domain, with very small effect sizes for education and for career/professional success. The authors speculated that deliberate practice might have been less consistently defined across the studies included in their analysis, given the diversity of expertise domains; alternatively, in some domains the experience and skills participants already possessed may have meant that less practice was required to achieve professional success (Macnamara et al., 2014). Deliberate practice is certainly necessary, but not sufficient, to explain individual differences in skill, and more variance appears to be left unexplained by deliberate practice than is explained by it. From a practitioner perspective, these results suggest the importance of considering other broad factors that may contribute to individual differences in competence and expertise (e.g., cognitive ability, personality, peer support, genetic predisposition) when trying to leverage behavior change efforts using 360 Feedback (Nowack, 2015, 2017; Plomin, Shakeshaft, McMillan, & Trzaskowski, 2014).

STAGE 3: ENABLE

The enable stage is one in which talent management professionals begin to actually help the employee acquire new knowledge, increase self-efficacy, and reinforce deliberate practice of skills to initiate and maintain important new behaviors (i.e., habit pursuit and success). This stage involves "nudges" or reminders to prompt new behaviors (e.g., via online apps; devices such as smartphones and watches; and Internet-based behavior change platforms that send scheduled email and SMS text messages), support from peers and others, and a way for employees to monitor and measure progress on the specific goals they are working on. In general, employees are more likely to try new behaviors when they are confident of a successful outcome and feel a sense of mastery in maintaining the behavior over time despite possible setbacks and challenges. If the employee lacks confidence in his or her ability to implement the plan, the chances of maintaining it over time will be low. When possible, talent management professionals and even managers should, during the enable stage, help the focal leader or employee manage lapses, recognize successes, enlist the power of social support systems to follow up and reinforce key behaviors and learnings, recognize and reward goal progress, and evaluate overall success. Finally, during this stage it is important to build in ongoing feedback to track and monitor behavior change and successful goal accomplishment following 360 Feedback goal setting and goal pursuit.
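The scheduled reminders described above can be as simple as a fixed practice calendar. The sketch below generates one; the 66-day horizon reflects the average time to automaticity reported by Lally et al. (2010), while the 3-day reminder cadence is purely an illustrative assumption, not a research-based prescription.

```python
from datetime import date, timedelta

def nudge_schedule(start, practice_days=66, every=3):
    """Generate reminder dates for deliberate practice of a new behavior.

    practice_days=66 reflects the average days to automaticity reported
    by Lally et al. (2010); the every-3-days cadence is an illustrative
    assumption that a coach or platform would tune per client.
    """
    return [start + timedelta(days=d) for d in range(0, practice_days + 1, every)]

reminders = nudge_schedule(date(2019, 1, 7))
print(len(reminders), reminders[0], reminders[-1])  # 23 reminders over ~2 months
```

In practice, a behavior change platform would deliver these dates as email or SMS prompts rather than a printed list, but the scheduling logic is the same.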


Issue 1: Tracking and Evaluating New Habit/​Behavior Change

One potential issue in demonstrating the effectiveness of 360 Feedback is showing both individual and system-wide change in the actual behavior of employees receiving feedback. Today, a growing number of online apps and Internet-based platforms help organizations, coaches, and feedback recipients set and track goals online, seek feedback from others about goal progress, and evaluate perceived behavior change (Bersin, 2015; Nowack, 2009). In our own practice, we use an online goal-setting and evaluation platform called Momentor (full disclosure: Momentor is our proprietary development platform, integrated with all of our custom and off-the-shelf 360 assessments) for translating insight from 360 Feedback into deliberate practice and successful behavior change. This learning transfer platform allows a client to review their 360 Feedback report and select specific competency-based goals to begin their development journey (Mashihi & Nowack, 2013). Human resources can track and monitor goal progress using a wide variety of vendor-based dashboards with reminder systems that encourage the focal leader or employee to maintain focus and continue working on their professional development plan. Some online habit and behavior change platforms, like ours, also provide a comprehensive resource library mapped to each competency measured in the 360 assessment, including suggested development tips, videos, websites, blogs, recommended books, training programs, and other resources to support the employee's development activities. Ideally, such platforms also allow the invitation of a "goal mentor," another individual inside or outside the organization who knows about and supports the development efforts of the focal leader or employee; this can be particularly useful for company-wide interventions.
One of the most important features of our own platform is the ability for human resources, an internal or external coach, or the focal leader or employee to send out a pulse survey to anyone, using any type of response scale, as many times as they would like, to receive immediate feedback about perceived behavior change success. Additionally, if 360 Feedback is being used for an organization-wide leadership development initiative, the platform can measure behavior change postprogram, providing a system-wide measure of impact across all program participants. Without some type of systematic tracking, monitoring, and evaluation of development plans and behavior change goals following 360 Feedback, demonstrating any type of return on investment (ROI) for organization-wide interventions is challenging.
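The core of such pulse-survey evaluation is comparing mean ratings per competency across two survey waves. The sketch below is a generic illustration of that pre/post scoring, assuming a simple competency-to-scores mapping; it is not the algorithm of Momentor or any other specific vendor platform, and the competency names and scale are invented for the example.

```python
from statistics import mean

def perceived_change(baseline, follow_up):
    """Summarize perceived behavior change per competency across two waves
    of pulse-survey ratings (e.g., at goal setting and ~3 months later).

    Both arguments map competency name -> list of rater scores; the result
    maps competency -> mean shift in ratings. A generic illustration of
    pre/post pulse-survey scoring, not any vendor's actual method.
    """
    return {
        competency: round(mean(follow_up[competency]) - mean(scores), 2)
        for competency, scores in baseline.items()
    }

# Hypothetical 5-point pulse-survey ratings from three invited raters
baseline = {"listening": [2.8, 3.0, 3.2], "delegation": [3.5, 3.4, 3.6]}
follow_up = {"listening": [3.6, 3.8, 3.7], "delegation": [3.5, 3.6, 3.4]}
print(perceived_change(baseline, follow_up))
```

Aggregating these shifts across all participants in a program would give the kind of system-wide impact measure described above.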


CONCLUSION

Feedback is one of the necessary conditions for successful behavior initiation and change over time. Although a number of other coaching and feedback models have attempted to outline various proximal and distal outcomes, the enlighten, encourage, and enable model consists of three stages, each affected by individual (e.g., personality) and organizational (e.g., the design and purpose of the Strategic 360 Feedback intervention) variables, and is focused on successful individual behavior change efforts within organization-wide programs. The value of this individual behavior change model is that it highlights the diverse roles of the practitioner, employee, and organization that appear throughout the 360 Feedback literature in facilitating accurate self-awareness, self-directed learning, goal-setting processes, deliberate practice, and evaluation. The model's emphasis on more than just insight is important in light of meta-analytic findings suggesting that effect sizes for transfer-of-training interventions are generally low (particularly as rated by direct reports and peers) but can be improved significantly with opportunities for structured and deliberate practice (Taylor, Taylor, & Russ-Eft, 2009). It is hoped that this individual change model, and an understanding of the potential issues at each stage, will help talent management professionals focus more on distal (behavioral change) rather than proximal (insight) outcomes when using Strategic 360 Feedback within company-wide interventions and diverse leadership development programs (Chapter 3).

REFERENCES

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.
Atwater, L., Wang, M., Smither, J., & Fleenor, J. (2009). Are cultural characteristics associated with the relationship between self and others' rating of leadership? Journal of Applied Psychology, 94, 876–886.
Atwater, L., Ostroff, C., Yammarino, F. J., & Fleenor, J. (1998). Self-other agreement: Does it really matter? Personnel Psychology, 51, 577–598.
Atwater, L., Waldman, D., Ostroff, C., Robie, C., & Johnson, K. M. (2005). Self-other agreement: Comparing its relationship with performance in the US and Europe. International Journal of Selection and Assessment, 13, 25–40.
Atwater, L. E., & Brett, J. F. (2005). Antecedents and consequences of reactions to developmental 360-degree feedback. Journal of Vocational Behavior, 66, 532–548.
Atwater, L. E., Waldman, D., Atwater, D., & Cartier, P. (2000). An upward feedback field experiment: Supervisors' cynicism, follow-up and commitment to subordinates. Personnel Psychology, 53, 275–297.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavior change. Psychological Review, 84, 191–215.


190  / / ​  3 6 0 for D evelopment Becker, M. H. (1974). The health belief model and sick role behavior. Health Education Monographs, 2(4), 409–​419. Bersin, J. (2015, August 26). Feedback is the killer app: A new market and management model emerges [Blog post]. Forbes.com. Retrieved from https://​www.forbes.com/​sites/​joshbersin/​2015/​08/​26/​employee-​ feedback-​is-​the-​killer-​app-​a-​new-​market-​emerges/​#687339095edf Bono, J., & Colbert, A. (2005). Understanding responses to multi-​source feedback: The role of core self-​ evaluations. Personnel Psychology, 58, 171–​203. Boyatzis, R. E., Rochford, K., & Taylor, S. N. (2015). The role of the positive emotional attractor in vision and shared vision: Toward effective leadership, relationships, and engagement. Frontiers of Psychology, 6, 670. doi:10.3389/​fpsyg.2015.00670 Bracken, D. W., & Rose, D. S. (2011). When does 360 degree feedback create behavior change? And how would we know it when it does? Journal of Psychology and Business, 26, 183–​192. doi:10.1007/​ s10869-​011-​9218-​5 Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360° feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9, 761–​794. https://​doi. org/​10.1017/​iop.2016.93 Bracken, D. W., Timmreck, C. W., Fleenor, J. W., & Summers, L. (2001). Feedback from another angle. Human Resource Management, 40,  3–​20. Brett, J., & Atwater, L. (2001). 360-​degree feedback:  Accuracy, reactions and perceptions of usefulness. Journal of Applied Psychology, 86, 930–​942. Colvin, G., & Overrated, T. I. (2008). What Really Separates World-​Class Performers From Everybody Else. New York: Portfolio. DeNisi, A., & Kluger, A. (2000). Feedback effectiveness: Can 360-​degree appraisals be improved? Academy of Management Executive, 14, 129–​139. Dickerson, S., & Kemeny, M. (2004). Acute stressors and cortisol responses: A theoretical integration and synthesis of laboratory research. 
Psychological Bulletin, 130, 355–​391. Dimeff, L. A., & Marlattt, G. A. (1998). Preventing relapse and maintaining change in addictive behaviors. Clinical Psychology: Science & Practice, 5, 513–​552. Eckert, R., Ekelund, B., Gentry, W., & Dawson, J. (2010). I don’t see me like you see me but is that a problem? Cultural differences in rating discrepancy in 360-​degree feedback instruments. European Journal of Work and Organizational Psychology, 19, 259–​278. Eisenberger, N., Lieberman, M., & Williams, K. (2003). Does rejection hurt? An fMRI study of social exclusion. Science, 302, 290–​292. Ericsson, K. A. (1996). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 683–​703). New  York, NY:  Cambridge University. Ericsson, K. A., Krampe, R. T., & Tesch-​Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363–​406. http://​dx.doi.org/​10.1037/​ 0033-​295X.100.3.363 Fredrickson, B. (2009). Positivity. Harmony. Gentry, W. A., Hannum, K. M., Ekelund, B. Z., & de Jong, A. (2007). A study of the discrepancy between self and observer-​ratings on managerial derailment characteristics of European managers. European Journal of Work and Organizational Psychology, 16, 295–​325. Gentry, W. A., Yip, J., & Hannum, K. (2010). Self-​observer rating discrepancies of managers in Asia: A study of derailment characteristics and behaviors in Southern Confucian Asia. International Journal of Selection and Assessment, 18, 237–​250. Gladwell, M. (2008). Outliers: The story of success. Hachette UK. Goffin, R. D., & Anderson, D. W. (2002, June). Differences in self and superior ratings of performance: Personality provides clues. Paper presented at the Society for Industrial and Organizational Psychology, Toronto, Canada.


Impact of Development-Focused 360 Feedback // 191
Goffin, R. D., & Anderson, D. W. (2007). The self-rater's personality and self–other disagreement in multisource performance ratings. Journal of Managerial Psychology, 22, 271–289.
Gottman, J. M., Murray, J. D., Swanson, C. C., Tyson, R., & Swanson, K. R. (2002). The mathematics of marriage: Dynamic nonlinear models. Cambridge, MA: MIT Press.
Gregory, J. B., Levy, P. E., & Jeffers, M. (2008). Development of a model of the feedback process within executive coaching. Consulting Psychology Journal: Practice and Research, 60, 42–56.
Hofstede, G., & McCrae, R. R. (2004). Personality and culture revisited: Linking traits and dimensions of culture. Cross-Cultural Research, 38, 52–88. doi:10.1177/1069397103259443
Hogan, R. (2007). Personality and the fate of organizations. Hillsdale, NJ: Erlbaum.
Hogan, R., & Kaiser, R. B. (2005). What we know about leadership. Review of General Psychology, 9, 169–180.
Holt, K., & Seki, K. (2012). Global leadership: A developmental shift for everyone. Industrial and Organizational Psychology: Perspectives on Science and Practice, 5, 198–217.
Ilgen, D., & Davis, C. (2000). Bearing bad news: Reactions to negative performance feedback. Applied Psychology: An International Review, 49, 550–565.
Jack, A. I., Boyatzis, R. E., Khawaja, M. S., Passarelli, A. M., & Leckie, R. L. (2013). Visioning in the brain: An fMRI study of inspirational coaching and mentoring. Social Neuroscience, 8, 369–384. doi:10.1080/17470919.2013.808259
Joo, B. K. (2005). Executive coaching: A conceptual framework from an integrative review of research and practice. Human Resource Development Review, 4, 134–144.
Kim, K. Y., Atwater, L., Patel, P. C., & Smither, J. W. (2016). Multisource feedback, human capital, and the financial performance of organizations. Journal of Applied Psychology, 101, 1569–1584. http://dx.doi.org/10.1037/apl0000125
Kleim, J. A., & Jones, T. A. (2008). Principles of experience-dependent neural plasticity: Implications for rehabilitation after brain damage. Journal of Speech, Language, and Hearing Research, 51, S225–S239. doi:10.1044/1092-4388(2008/018)
Lally, P., Van Jaarsveld, C., Potts, H., & Wardle, J. (2010). How are habits formed: Modelling habit formation in the real world. European Journal of Social Psychology, 40, 998–1009.
Larimer, M. E., Palmer, R. S., & Marlatt, G. A. (1999). Relapse prevention: An overview of Marlatt's cognitive-behavioral model. Alcohol Research & Health, 23, 151–160.
Lehman, B. J., & Conley, K. M. (2010). Momentary reports of social-evaluative threat predict ambulatory blood pressure. Social Psychological and Personality Science, 1(1), 51–56.
London, M., & Smither, J. W. (2002). Feedback orientation, feedback culture, and the longitudinal performance management process. Human Resource Management Review, 12, 81–100.
Macnamara, B. N., Hambrick, D. Z., & Oswald, F. L. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, 25, 1608–1618. http://dx.doi.org/10.1177/0956797614535810
Mashihi, S., & Nowack, K. (2013). Clueless: Coaching people who just don't get it (2nd ed.). Santa Monica, CA: Envisia Learning.
Mund, M., & Mitte, K. (2012). The costs of repression: A meta-analysis on the relation between repressive coping and somatic diseases. Health Psychology, 31(5), 640–649.
Nowack, K. (2008). Coaching for stress: StressScan. In J. Passmore (Ed.), Psychometrics in coaching: Using psychological and psychometric tools for development (pp. 254–274). London, England: Kogan Page.
Nowack, K. (2009). Leveraging multirater feedback to facilitate successful behavioral change. Consulting Psychology Journal: Practice and Research, 61, 280–297.
Nowack, K. (2014). Taking the sting out of feedback. Talent Development Magazine, 68, 50–54.
Nowack, K. M. (2015). 360 feedback: From insight to improvement. Public Manager, 44(2), 20.
Nowack, K. (2017). Facilitating successful behavior change: Beyond goal setting to goal flourishing. Consulting Psychology Journal: Practice and Research, 70, 1–19. http://dx.doi.org/10.1037/cpb0000088
Nowack, K., & Mashihi, S. (2012). Evidence-based answers to 15 questions about leveraging 360-degree feedback. Consulting Psychology Journal: Practice and Research, 64, 157–182.


Ostroff, C., Atwater, L., & Feinberg, B. (2004). Understanding self–other agreement: A look at rater and ratee characteristics, context, and outcomes. Personnel Psychology, 57, 333–375.
Parks, G. A., & Marlatt, G. A. (1999). Relapse prevention therapy for substance-abusing offenders: A cognitive-behavioral approach. In E. Latessa (Ed.), What works: Strategic solutions: The International Community Corrections Association examines substance abuse (pp. 161–233). Lanham, MD: American Correctional Association.
Plomin, R., Shakeshaft, N. G., McMillan, A., & Trzaskowski, M. (2014). Nature, nurture, and expertise. Intelligence, 45, 46–59. http://dx.doi.org/10.1016/j.intell.2013.06.008
Prochaska, J. O., & Velicer, W. F. (1997). The transtheoretical model of health behavior change. American Journal of Health Promotion, 12, 38–48.
Quast, L. N., Center, B. A., Chung, C., Wohkittel, J. M., & Vue, B. (2011, February). Using multi-rater feedback to predict managerial career derailment: A model of self–boss rating patterns. Paper presented at the Academy of Human Resource Development International Research Conference in the Americas, Chicago, IL.
Robie, S., Kaster, K., Nilsen, D., & Hazucha, J. (2000). The right stuff: Understanding cultural differences in leadership performance. Unpublished manuscript, Personnel Decisions, Minneapolis, MN.
Rutledge, T. (2006). Defensive personality effects on cardiovascular health: A review of the evidence. In D. Johns (Ed.), Stress and its impact on society (pp. 1–21). Hauppauge, NY: Nova Science Publishers.
Schwartz, R. M., Reynolds, C. F., III, Thase, M. E., Frank, E., Fasiczka, A. L., & Haaga, D. A. F. (2002). Optimal and normal affect balance in psychotherapy of major depression: Evaluation of the balanced states of mind model. Behavioural and Cognitive Psychotherapy, 30, 439–450.
Sedikides, C., Gaertner, L., & Toguchi, Y. (2003). Pancultural self-enhancement. Journal of Personality and Social Psychology, 84(1), 60–79.
Shipper, F., Hoffman, R., & Rotondo, D. (2007). Does the 360 feedback process create actionable knowledge equally across cultures? Academy of Management Learning & Education, 6, 33–50.
Siefert, C., Yukl, G., & McDonald, R. (2003). Effects of multisource feedback and a feedback facilitator on the influence behavior of managers toward subordinates. Journal of Applied Psychology, 88, 561–569.
Smither, J., London, M., Flautt, R., Vargas, Y., & Kucine, I. (2003). Can working with an executive coach improve multisource feedback ratings over time? A quasi-experimental field study. Personnel Psychology, 56, 23–44.
Smither, J., & Walker, A. G. (2004). Are the characteristics of narrative comments related to improvement in multirater feedback ratings over time? Journal of Applied Psychology, 89, 575–581.
Sonnentag, S., & Frese, M. (2013). Stress in organizations. In N. W. Schmitt & S. Highhouse (Eds.), Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 560–592). New York, NY: Wiley.
Taylor, P., Taylor, H., & Russ-Eft, D. (2009). Transfer of management training from alternate perspectives. Journal of Applied Psychology, 94, 104–121.
Varela, O. E., & Premeaux, S. F. (2008). Do cross-cultural values affect multisource feedback dynamics? The case of high power distance and collectivism in two Latin American countries. International Journal of Selection and Assessment, 16, 134–142.
Woo, S., Sims, C., Rupp, D., & Gibbons, A. (2008). Development engagement within and following developmental assessment centers: Considering feedback favorability and self–assessor agreement. Personnel Psychology, 61, 727–759.
Yammarino, F. J., & Atwater, L. E. (2001). Understanding agreement in multisource feedback. In D. Bracken, C. Timmreck, & A. Church (Eds.), The handbook of multisource feedback (pp. 204–220). San Francisco, CA: Jossey-Bass.
Zell, E., & Krizan, Z. (2014). Do people have insight into their abilities? A metasynthesis. Perspectives on Psychological Science, 9, 111–125.


12

INTEGRATING PERSONALITY ASSESSMENT WITH 360 FEEDBACK IN LEADERSHIP DEVELOPMENT AND COACHING

ROBERT B. KAISER AND TOMAS CHAMORRO-PREMUZIC

Self-awareness is the cornerstone of leadership development. It has been emphasized since the ancient inscription at the entrance to the temple of Apollo at Delphi, Greece: "Know thyself." It is prominent today in modern models of executive education that assign a foundational role to self-regulation (Day, Harrison, & Halpin, 2009; R. Hogan & Warrenfeltz, 2003). Furthermore, research confirms that self-awareness is a defining characteristic of high-performing leaders (Church, 1997).

But the term self-awareness is somewhat misleading. In the context of 360 Feedback, self-awareness really means other-awareness, because the feedback gives the focal leader a chance to understand how coworkers view his or her behavior. This is helpful because self-ratings of performance are notoriously biased and barely correlate with ratings made by managers, peers, and direct reports (Conway & Huffcutt, 1997; Heidemeier & Moser, 2009). Incorporating coworker views into one's self-concept therefore should produce a more realistic understanding of one's relative strengths and weaknesses. However, this sort of insight only informs the "what" of self-awareness: what one does relatively better or worse. For leaders to make sustained improvements in their leadership, they also need to know the "why" of their behavior: why one acts a certain way. This aspect of self-awareness provides knowledge of the dispositions, habits, hot buttons, and blind spots that underlie behavioral tendencies (Kaiser & Kaplan, 2006). One particularly effective and efficient way to gain this type of insight is through personality assessment.

The point of this chapter is to demonstrate how integrating personality assessment with 360 Feedback can provide a powerful combination for strategic, self-awareness-based leadership development. The combination is strategic in that it broadens self-awareness of what one does, and it is powerful in that it deepens self-awareness of why one does it, which can help leaders make lasting changes to better align with the skills, competencies, and values needed for high performance in their jobs and desired by their organizations.

Combining 360 Feedback with personality assessment can also be powerful in selection-oriented talent-management processes, from the identification of leadership potential to succession planning to hiring decisions. However, these high-stakes applications are distinct from developmental applications and involve a number of different considerations (Jeanneret & Silzer, 1998). For the sake of focus and simplicity, we only address development in this chapter.

We begin by reviewing the link between personality and leadership behavior and explain how they are distinct yet related constructs. We then consider the mechanics of integrating personality assessment with 360 Feedback in terms of choosing a personality instrument, conceptualizing which personality scales are relevant to feedback about which leadership behaviors, and the staging and sequencing of the presentation of personality scores and 360 Feedback. Next, we describe a simple framework for understanding the possible combinations of personality and 360 results and their implications for development. We close with a case study to illustrate how to apply these concepts.

PERSONALITY AND LEADERSHIP

Of all psychological constructs, few are arguably more useful in leadership development than personality. Compared to other widely studied constructs, such as intelligence, personality is a more varied domain and covers a broader range of individual differences. Personality also concerns multiple modes of personal functioning, including patterns of thinking (decisions), feeling (resilience and stress tolerance), and behaving (observable skills and competencies), and thus is relevant to the development of a host of performance dimensions (James & Mazerolle, 2002). Further, research shows that personality is more strongly related to leadership than other individual differences.


Integrating Personality Assessment in Leadership Development and Coaching // 195

Evidence-Based Support

A meta-analysis of 78 primary studies determined that the Big Five personality factors (Extraversion, Agreeableness, Conscientiousness, Stability, and Openness) collectively correlate .48 with leadership (Judge, Bono, Ilies, & Gerhardt, 2002). For comparison, a similar meta-analysis of 151 primary studies of intelligence reported a .27 correlation with leadership (Judge, Colbert, & Ilies, 2004). The link between personality and leadership effectiveness is also clear: Leader traits are expressed in actions and decisions, which in turn impact employees and teams and the results they produce (Kaiser & Hogan, 2007; Kaiser & Overfield, 2010).

Further, certain personality traits are systematically associated with certain leadership behaviors. For example, DeRue, Nahrgang, Wellman, and Humphrey (2011) conducted an integrative meta-analysis using the Big Five and Yukl's (2006) taxonomy of leader behaviors to demonstrate that Extraversion and Agreeableness predict interpersonal-oriented behaviors (e.g., consideration, participative decision-making, empowerment, and coaching); Conscientiousness and Stability predict task-oriented behaviors (e.g., structuring work, planning, organizing, and following up); and Extraversion and Openness predict change-oriented behaviors (e.g., vision, strategic thinking, innovation, and transformational leadership). These relations provide a basis for helping leaders understand the work-related implications of their personalities and, in particular, how they impact coworkers.

These meta-analyses demonstrate the general relationship between personality and leadership. Beneath the general trends, however, are some finer points that are helpful for integrating personality assessment with 360 Feedback. For instance, the meta-analyses consistently showed that Extraversion was the strongest correlate of leadership effectiveness among the Big Five.
However, the broad Extraversion factor is composed of narrower facets, including assertiveness and sociability (DeYoung, Quilty, & Peterson, 2007), and it is the assertiveness facet that accounts for the relationship between Extraversion and leadership (J. Hogan & Holland, 2003; Kaiser & Hogan, 2011). In other words, it is drive and ambition that enhance leadership, not having the loudest voice or talking a lot.

Additionally, the relationship between personality and leadership is often curvilinear. Higher standing on many traits is associated with higher performance up to a certain point, beyond which performance actually declines. This has been demonstrated in the relationship between assertiveness and leadership effectiveness (Ames & Flynn, 2007), between both Conscientiousness and Stability and task performance (Le et al., 2010), and between charismatic personality and effectiveness (Vergauwe, Wille, Hofmans, Kaiser, & De Fruyt, 2018). These studies illustrate a key finding from the derailment research conducted at the Center for Creative Leadership in the 1980s: Strengths can become weaknesses when overused (McCall & Lombardo, 1983). The practical implication is that more is not always better; too much of a good thing can be a bad thing. Extreme scores on personality scales raise the possibility that the leader may compromise performance by engaging in too much of the associated behaviors and neglecting opposing but complementary behaviors (Kaiser & Hogan, 2011).

Another nuance in combining personality with 360 Feedback concerns how raters focus on and evaluate certain behaviors more systematically depending on their hierarchical relationship to the focal leader. Performance evaluations made by managers emphasize technical skill, task-oriented behavior, and business results, whereas evaluations made by direct reports emphasize people skills, integrity, and the work environment (R. Hogan, 2007; Hooijberg & Choi, 2000). For instance, one study showed that Stability and Conscientiousness were predominantly correlated with manager ratings of task-oriented leadership behaviors, while Agreeableness was largely correlated with direct report ratings of interpersonal-oriented leadership behaviors (Oh & Berry, 2009). Thus, personality may be particularly helpful in understanding why different rater groups rate a focal leader the way they do.

Distinct But Related Constructs

The distinctiveness of personality measures and 360 measures of behavior contributes to the power of combining them. They are two different assessment methods: one based entirely on the individual's own self-description on a standardized questionnaire of general patterns of thinking, feeling, and acting, and the other based on a comparison of coworker ratings and self-ratings of in-role performance behaviors. Yet, they often converge in systematic ways that make sense. For instance, when a leader discredits 360 Feedback as a misperception by coworkers, a coach can point to similar themes in the personality results and emphasize that these were based solely on the leader's own input. Or, if a leader dismisses the implications of personality assessment results, the coach can call attention to similar themes in the behavior described by coworker feedback. The two methods reinforce one another and provide convergent insights that illuminate blind spots and increase self-awareness.

PRACTICAL CONSIDERATIONS

Combining personality assessment with 360 Feedback involves several choices. Three strategic decisions are particularly consequential in self-awareness-based leadership development and coaching: which personality assessment to use, how to align the personality scales with the 360 performance dimensions, and how to stage and sequence the presentation of personality and 360 results.

Many commercial personality tests are available, along with a proliferation of generic assessments on the Internet, often free and usually lacking research support. We recommend choosing tests developed according to established psychometric procedures and supported by validation research. We also advise skepticism about test publishers who do not make technical reports and validation studies freely available. Peer-reviewed research vetted by independent scholars is key; self-published white papers are better interpreted as marketing materials than as scientific evidence for the quality of an instrument.

Not all personality assessments are created equal. They vary in terms of the range of traits measured, how well they predict performance, and the conceptual frame of reference that will shape the narrative interpretation of results and what they mean to the focal leader. Several personality assessments were developed based on the widely accepted Big Five model, including the NEO Personality Inventory (NEO-PI; Costa & McCrae, 1992) and the Hogan Personality Inventory (HPI; R. Hogan & Hogan, 2007). These instruments—and those that have been rejiggered to provide interpretation in terms of the Big Five, such as the 16 Personality Factor (16pf®) Questionnaire (Cattell, 1989)—seem to balance breadth of content and brevity.¹

Personality assessments also differ in how well they predict performance. This is a crucial consideration that can determine the potential impact of combining a particular personality test with 360 Feedback. How well the personality scores predict leadership behavior and effectiveness provides an index of how much relevant information the test produces. Strong relationships suggest that the personality assessment can explain a great deal of the differences in performance. Weak relationships suggest that most of the personality information is irrelevant to enhancing performance, and that in-depth discussion of those results, while perhaps interesting, will be a distraction.

Another key consideration is the nature of the developmental discussion and personal narrative that the leader will take away from the personality results. Different assessments are based on different ideas about what personality is and how it functions. Many are atheoretical; for instance, those based on the Big Five were constructed around the factor structure of personality trait scales. This model has little to say about where personality comes from, what its purpose is, and how it functions (Block, 1995). It offers few overarching concepts and frameworks for explaining results to leaders, leaving it largely to the facilitator to put the results in context and provide a story that helps the leader assimilate their meaning and implications. Thus, there may be less consistency and varying levels of comprehension for leaders working with different facilitators.

The HPI, in contrast, was developed from two perspectives: the empirically derived Big Five model of trait structure and the socioanalytic theory of personality, which combines Darwinian, psychoanalytic, and sociological views of the function of personality (R. Hogan, 2007). Socioanalytic theory proposes that there are three master motives critical to human survival and reproductive success: the need to get along with others in a social group, the need to get ahead and achieve status in that group, and the need to find meaning and purpose for one's life that is valued by the group. From the psychoanalytic perspective, people are somewhat self-deceived about their desires to get ahead and get along, and from the sociological perspective, people fulfill their needs in interaction with other people. The HPI scales were designed to predict success at getting along, getting ahead, and finding meaning at work, and the HPI is one of the most predictively valid personality assessments (J. Hogan & Holland, 2003). Socioanalytic theory also provides a narrative structure for explaining and understanding the implications of HPI results in terms that leaders can apply to better navigate their work environment.

¹ Recent models of personality at work distinguish the "dark side," counterproductive personality traits that extend the coverage of the Big Five to include extreme tendencies associated with career derailment (Hogan & Hogan, 2001; Kaiser, LeBreton, & Hogan, 2015). For the sake of simplicity and brevity, we do not consider dark-side/derailer tendencies as distinct from the Big Five in this chapter.

Aligning Personality Scales With Dimensions of Performance

Regardless of which personality assessment is selected, it is necessary to determine which personality scales are most relevant to which 360 dimensions. Alignment is found where research shows a correlation between certain personality factors and a particular leadership behavior (e.g., Openness and strategic thinking). The lack of a correlation indicates which personality factors are not relevant to a particular behavior (e.g., Agreeableness and strategic thinking).

J. Hogan and Holland (2003) demonstrated the significance of aligning personality traits and performance dimensions. They showed that "getting along" traits such as Agreeableness were related to getting-along criteria such as interpersonal skill and cooperation, whereas "getting ahead" traits such as ambition were related to getting-ahead criteria such as productivity and showing initiative. The more aligned the personality and performance measures, the stronger the correlation. The practical implication is that when theory is used to determine which personality factors and performance dimensions are aligned, the relationships not only are stronger but also make sense conceptually and are therefore easier to understand. This sort of conceptual mapping can be guided by research showing how the scales on commonly used personality assessments relate to the Big Five (e.g., see Woods & Anderson, 2016).

A taxonomy of leadership skills, competencies, and behaviors is also needed for categorizing the dimensions on the 360. Yukl's three-category taxonomy of task-oriented, interpersonal-oriented, and change-oriented behaviors has proven useful in relating personality to leadership (e.g., DeRue et al., 2011). These three categories are broad, generally defined domains in which more specific behaviors and skills can be classified. Table 12.1 provides a summary of how commonly researched leadership behavior constructs have been organized in Yukl's taxonomy, along with Bartram's (2005) "great eight" competency factors and common managerial functions. Table 12.2 integrates the Big Five with Yukl's taxonomy of leadership behavior based on the conceptual similarity of the underlying constructs and meta-analytic research demonstrating statistical relationships. The table also shows which personality–behavior linkages are most salient for which rater groups. It is intended to provide empirically based, theoretically coherent guidance for understanding how the scales on a personality instrument relate to manager, peer, and direct report ratings of various performance dimensions on a 360.

Sequence

When combining personality assessment with 360 Feedback, there is a choice of which to present first to the focal leader, and there is a question of whether to review both sets of results at the same time or at different points in time. We are unaware of research indicating the superiority of one approach over another. Our experience suggests that various combinations of order and interval can be equally effective, but each approach has advantages that may fit some scenarios better than others.

One approach is to review the 360 results first, then the personality results. This is the "what you do, and why you do it" method: It begins with feedback about performance behavior and then seeks to guide development with deeper awareness and understanding of the psychology behind the behavior. It may work particularly well when the focal leader is skeptical of personality concepts and prefers behavioral feedback that is more immediately tied to performance. It may also be advantageous in cases that emphasize coaching for skill learning or performance enhancement (Witherspoon & White, 1996).

TABLE 12.1 Common Leadership Behavior Constructs, Competencies, and Functions Organized in Terms of Yukl's Taxonomy

Task oriented (objective: accomplish work in an efficient and reliable way)
  Ohio State factor: Initiating structure (assigns employees to roles and responsibilities; maintains standards of performance; enforces standards, rules, regulations; establishes lines of communication)
  Transformational/transactional dimensions: Transactional (clarifies performance targets and who is responsible for achieving them [contingent reward]; identifies problems and mistakes [management by exception])
  Great Eight competency factors: Leading and deciding; Analyzing and interpreting; Organizing and executing; Enterprising and performing
  Common functions: Clarifying; Planning; Monitoring operations; Problem-solving

Interpersonal oriented (objective: increase the quality of human resources and relations)
  Ohio State factor: Consideration (listens to employees; consults employees when making decisions; treats employees with respect; looks out for the welfare of employees; seeks group approval on important matters)
  Transformational/transactional dimensions: Transformational (considers the needs and aspirations of employees [individualized consideration]; emphasizes the greater good of the organization [idealized influence])
  Great Eight competency factors: Supporting and cooperating; Interacting and presenting
  Common functions: Empowering; Recognizing; Supporting; Developing

Change oriented (objective: increase innovation, collective learning, and adaptation to the external environment)
  Transformational/transactional dimensions: Transformational (develops and communicates a compelling vision [inspirational motivation]; seeks input from employees in solving problems [intellectual stimulation])
  Great Eight competency factors: Creating and conceptualizing; Adapting and coping
  Common functions: Envisioning change; Advocating change; Encouraging innovation; Facilitating collective learning



TABLE 12.2 Personality Factors Most Empirically Related to Categories of Leadership Behavior and the Rater Group for Which the Link Is Strongest

Yukl leadership behavior category   Big Five personality factors    Rater group most attuned to linkage
Interpersonal oriented              Extraversion, Agreeableness     Direct report, peer
Task oriented                       Conscientiousness, Stability    Manager, peer
Change oriented                     Extraversion, Openness          Manager
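To make the conceptual mapping concrete, the alignment summarized in Table 12.2 can be encoded as a simple lookup. The following sketch is illustrative only and not part of the chapter or any published instrument; the function name, data structure, and labels are our own assumptions.

```python
# Hypothetical encoding of Table 12.2: each Yukl behavior category maps to
# (aligned Big Five factors, rater groups most attuned to the linkage).
ALIGNMENT = {
    "interpersonal": (("Extraversion", "Agreeableness"), ("direct report", "peer")),
    "task":          (("Conscientiousness", "Stability"), ("manager", "peer")),
    "change":        (("Extraversion", "Openness"), ("manager",)),
}

def relevant_360_dimensions(big_five_factor):
    """Return the Yukl behavior categories (with the most attuned rater
    groups) that are aligned with a given Big Five factor, per Table 12.2."""
    return [
        (category, raters)
        for category, (factors, raters) in ALIGNMENT.items()
        if big_five_factor in factors
    ]

# Example: Extraversion is aligned with both interpersonal- and
# change-oriented behavior, so both sets of 360 ratings merit attention.
print(relevant_360_dimensions("Extraversion"))
```

A facilitator preparing a feedback session could use a lookup like this to decide which 360 dimensions, and which rater groups' scores, to examine alongside each salient personality scale.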

In the second approach, the personality results are reviewed first to establish the individual's general modus operandi; then, the 360 results are reviewed to identify how these tendencies are expressed in leadership behavior as experienced by coworkers. This approach may be beneficial for introspective leaders or in developmental coaching that focuses on transformational learning or career development (Witherspoon & White, 1996).

AN INTEGRATIVE FRAMEWORK TO GUIDE DEVELOPMENT

We created a simple 2 × 2 matrix for making connections between personality assessment and 360 Feedback. The framework considers whether personality results suggest a leader has a particular behavioral tendency or not compared to whether the 360 ratings indicate the leader demonstrates the behavior or not. Combining these dichotomies leads to the four interpretation alternatives shown in Figure 12.1. Of course, personality scale scores and behavior ratings are not simply dichotomous; both range on a continuum from low to high. The more extreme a particular combination lands in one of the quadrants, the stronger the interpretation. When using a personality assessment with strong predictive validity and focusing on the traits most aligned with a particular 360 performance dimension, most focal leaders fall in the upper right (has the personality tendency and demonstrates the behavior) or lower left quadrant (does not have the personality tendency and does not demonstrate the behavior). Although not quite as common but still frequent enough to be noteworthy in practice, some leaders will be off the diagonal prediction line: in the upper left quadrant (does not have the personality tendency but does demonstrate the behavior) or the lower right quadrant (does have the personality tendency but does not demonstrate

The 2 × 2 matrix crosses the Personality Assessment results (does not have tendency vs. has tendency) with the 360 Feedback behavior ratings (does not demonstrate vs. does demonstrate), yielding four quadrants:

Strength (has tendency; does demonstrate)
• Realized potential
• Can be energizing; feel "in the zone" when doing the behavior
• Is there a risk of overdoing the behavior?

Adapted Capability (does not have tendency; does demonstrate)
• Requires focused effort to learn and maintain
• Energy management strategies can help sustain the effort
• Caveat: Some behaviors are easier to adapt than others

Performance Risk (does not have tendency; does not demonstrate)
• How to achieve a minimum level of competency?
• How to align performing the behavior with self-concept?
• Is it possible to delegate the role or function?

Unrealized Potential (has tendency; does not demonstrate)
• Had opportunity to develop skill?
• Does job design not require the behavior?
• Does role overload or organizational culture prevent the manager from using the skill?

FIGURE 12.1 An integrative framework for interpreting patterns of personality assessment and 360 Feedback results.

the behavior). The developmental implications for each of these four combinations are unique.

Strengths

The upper right quadrant describes when the leader is disposed toward a particular behavior and coworker ratings indicate that the individual indeed does demonstrate the behavior. These are cases where the leader is "playing to a strength," for instance, an assertive leader showing initiative and taking charge or a conscientious leader organizing workflow. However, these cases may also represent a leader at risk of turning a strength into a weakness through overuse. Leaders are five times more likely to be rated by coworkers as overdoing behaviors related to their personal areas of strength (Kaiser & Overfield, 2011). The research on curvilinear relationships between personality and leadership suggests that about 1 SD above the normative mean is the point at which performance starts to decrease. Unfortunately, most 360 instruments use 5-point rating scales that assume more is better and do not distinguish between doing a lot of a behavior and doing too much of it (Kaiser & Kaplan, 2005). By one estimate, ratings of 5 on a 5-point Likert scale correspond to the optimal amount of leadership behavior in about two thirds of cases, but too much of the behavior in one third of cases (Vergauwe, Wille, Hofmans, Kaiser, & De Fruyt, 2017). Thus, feedback facilitators may need to probe for the possibility that an apparent strength may actually be a strength overused, for instance, through coworker interviews or observations of on-the-job performance. Adding open-ended questions to the 360 to solicit written feedback can also help determine whether the focal leader tends to overdo certain behaviors.

Performance Risks

The lower left quadrant describes when an individual is not disposed toward a behavior and coworker ratings indicate that the person indeed does not demonstrate the behavior. These cases are concerning to the extent that the behavior, skill, or competency is considered critical for job success. They are classic weaknesses, and the behaviors will be difficult for the leader to develop because they do not flow naturally from the individual's personality. It may not be realistic to expect the leader to develop a performance risk into a strength. A more realistic developmental goal may be shoring up a glaring weakness to achieve a minimum level of competency. For some skills and behaviors, it may be viable for the leader to rely on someone else to perform a function that does not come so naturally. For instance, a flexible leader low on Conscientiousness and rated low on process discipline might delegate project planning to a highly conscientious staff member. Or, a practical leader low on Openness and rated low on innovation might rely on creative staff members to generate new ideas for products and processes. After all, there are different personality profiles for coming up with a creative idea versus implementing it (Janovics & Christiansen, 2003). The caveat is that complementary staffing may work better for task- and change-oriented leadership behaviors than for interpersonal-oriented behaviors.

Adapted Capabilities

The upper left quadrant describes when a leader has developed a skill, habit, or behavior that does not come so easily, for instance, when a leader has managed to improve a "performance risk" to a competent level of performance. Adapted capabilities are relatively rare unless the individual has focused on them with a lot of effort and practice. And some behaviors are easier to learn than others. For instance, we often have greater success coaching introverted leaders to navigate networks and build relationships than we have coaching leaders low on Openness and high on Conscientiousness to be better strategic thinkers and drive innovation.


Stretch assignments can be one of the more effective techniques for developing skills and behaviors that require leaders to step outside their comfort zones. Studies of well-rounded, strategic leaders point to a pivotal role for a varied career history with a broad range of different types of jobs and assignments (Dragoni, Oh, Vankatwyk, & Tesluk, 2011; McCall, Lombardo, & Morrison, 1988). But because adapted behaviors do not come so naturally, performing them will likely be exhausting for the individual, especially early on. Energy management strategies may be needed to support the sustained performance of these behaviors (Schwartz & McCarthy, 2007). For instance, an introverted leader might schedule general staff meetings before lunch to use that time to recharge. Or, an open leader might schedule a blue-sky strategy session as an energizing reward for conducting a detailed analysis.

Unrealized Potential

The lower right quadrant indicates that a leader has a personality disposition but does not demonstrate the behavior. The first question is whether the individual has had the opportunity to develop raw potential into skill. After all, people are not born with skill; they are born with talent that has to be developed through learning and practice (Ericsson, 1996). It is also possible that the leader has developed the skill but is unable to demonstrate it. Perhaps the job design does not require an open visionary to do much strategic planning. Or, maybe role overload has the open visionary consumed with juggling the tactical details of so many projects that there is little time and energy for strategic planning. In some situations, the organizational culture or a senior leader's expectations and leadership style may inhibit the expression of a personality tendency in performance behavior. For instance, a leader's creativity may remain latent if he or she works in a very bureaucratic or traditional organization. Or, a micromanaging boss may create so much stress and distraction for an agreeable leader that he or she does not have the time and energy for coaching direct reports. It is important to figure out the reasons for unrealized potential. These personality tendencies should be relatively easy to shape into skilled leadership behavior, as long as the focal leader has an opportunity to learn and practice the behavior and the job, leader, and organizational culture encourage it.

Bucking the Trend

We have emphasized the evidence base supporting the relationship between personality and leadership behavior. However, statistical trends among people in general do not always apply to a person in particular. In some cases, the expression of personality in behavior may not follow the general trend. For instance, a leader low on Conscientiousness may nonetheless get high ratings on a task-oriented behavior such as execution because the leader is high on assertiveness, and the desire to stand out and get ahead has prompted a focus on task-oriented skills and competencies. A deep expert can make these less obvious connections in individual assessment (Jeanneret & Silzer, 1998). Although research-based algorithms for relating personality scores to performance profiles tend to outpredict the typical assessor's judgment, the best assessors routinely beat the formulas (Grove, Zald, Lebow, Snitz, & Nelson, 2000). That said, the research-based associations typically represent the bulk of personality–performance relationships in an individual case. We estimate that the research findings will cover at least three quarters of what an individual's personality profile can explain about his or her leadership, with idiosyncratic compensatory effects accounting for the rest.

Case Study: Gretchen Goodhardt

To illustrate the foregoing research insights and practice model, consider the case of Gretchen Goodhardt (a pseudonym; the case is real). Gretchen was a director at an established and growing Silicon Valley technology company with a fast-paced culture that blended high performance, creativity, and strong relationships. She was in her mid-30s, had been with the company for 4 years, and was in charge of a department with six direct reports who sold contracts for posting ads on a web application. She had received "exceeds expectations" on every performance review, was identified as a high potential, and was being given an opportunity for coaching with an external consultant to develop for "a larger role."

Gretchen completed the HPI (R. Hogan & Hogan, 2007) and received 360 Feedback on the Leadership Versatility Index (LVI; Kaiser, Overfield, & Kaplan, 2010). The scales on the HPI and the dimensions on the LVI are described in Table 12.3. The LVI utilizes a distinct rating scale that ranges from -4 to +4, anchored with decreasing degrees of "too little" in the negative range, "the right amount" at 0, and increasing degrees of "too much" in the positive range (Kaiser & Kaplan, 2005). Prior research has shown that the HPI scales and LVI behaviors are conceptually and statistically related (Kaiser & Hogan, 2011). Gretchen's HPI scores and LVI feedback and self-ratings are presented in Figure 12.2.

Gretchen's coaching program started with a review of her HPI results in a session that began by having her guess her score on each scale before revealing the actual scores. She had a generally accurate sense of her personality, especially around her high scores


TABLE 12.3 HPI and LVI Scale Definitions for Gretchen Goodhardt Case

Hogan Personality Inventory (HPI), by Big Five factor:

Extraversion
  Ambition: Taking initiative, being competitive, and seeking leadership roles. Low scores: indecisive and unassertive. High scores: proactive and driven.
  Sociability: Outgoing, talkative, and gregarious. Low scores: withdrawn and quiet. High scores: expressive and socially engaged.
Agreeableness
  Interpersonal sensitivity: Socially aware, considerate, and tactful. Low scores: tough minded, frank, and direct. High scores: friendly, warm, and cooperative.
Conscientiousness
  Prudence: Reliable, dependable, and responsible. Low scores: nonconforming, impulsive, and flexible. High scores: rule abiding, organized, and detail oriented.
Emotional stability
  Adjustment: Self-confidence, composure, and stable moods. Low scores: tense, irritable, and negative. High scores: steady, resilient, and optimistic.
Openness to experience
  Inquisitive: Curious, imaginative, and open minded. Low scores: focused and pragmatic. High scores: creative and visionary.
  Learning approach: Interested in formal education and intellectually engaged. Low scores: experiential and practical. High scores: intellectual and worldly.

Leadership Versatility Index (LVI), by Yukl category:

Interpersonal
  Forceful: Asserting personal and position power
    - Takes charge: Assuming authority and control
    - Decisive: Taking a position and speaking up
    - Demanding: Holding people to high standards
  Enabling: Creating conditions for others to contribute
    - Empowering: Giving people autonomy
    - Participative: Being open to input and influence
    - Encouraging: Providing support to people
Change
  Strategic: Positioning the organization for the future
    - Direction: Setting the course
    - Expansion: Growing the organization
    - Innovation: Supporting change and creativity
Task
  Operational: Focusing the organization on short-term results
    - Execution: Driving implementation
    - Efficiency: Conserving resources
    - Order: Using process discipline

Notes: HPI scale descriptions based on Hogan Personality Inventory Manual, by R. Hogan and J. Hogan, 2007, Tulsa, OK: Hogan Press. Adapted with permission from the publisher. LVI scale descriptions reproduced from Leadership Versatility Index: A User Guide for Version 5.0, by R. B. Kaiser and D. V. Overfield, 2017, Greensboro, NC: Kaiser Leadership Solutions. Copyright 2017 by Kaiser Leadership Solutions, LLC. Reprinted with permission.

on interpersonal sensitivity, prudence, and adjustment and her middling score on sociability. She thought of herself as a caring, responsible person who could be both outgoing and reserved and was usually easygoing. She also reported being a "pleaser." On the other hand, she was surprised by her low score on ambition and average score on inquisitive, saying that she had always been a hard worker and that she liked the creative culture at her workplace. When these results were cast in terms of "getting ahead" and "getting along," she identified far more with getting along and expressed ambivalence about getting ahead, noting that she would rather cooperate than compete with others.

About a month later, Gretchen summarized three observations with the coach. First, she noticed how much she preferred to work collaboratively. Second, she noticed that she sometimes got deeply involved with tasks, often spending much more time on them than she anticipated. She wondered if she needed to be better organized and planful to be more efficient. Finally, she knew that she did not like conflict but was surprised to realize how often she let mild annoyances slide, and sometimes even bigger things, such as a direct report missing a deadline.

Gretchen and the coach reviewed the LVI results a few weeks later in a session that emphasized three major headlines in her 360. First, her leadership style was described as enabling and operational, which she recognized already but nevertheless was pleased to have corroborated by the feedback. The feedback indicated that she empowered people, included them in important decisions, and was seen as supportive. This seemed to be a clear reflection of the caring thoughtfulness represented in the high score on interpersonal sensitivity, the security and trust in her elevated adjustment, and the warmth and

FIGURE 12.2 Personality scores and 360 Feedback ratings for Gretchen Goodhardt case. The figure charts Gretchen's Hogan Personality Inventory scores (in the chart's scale order: Learning Approach 43, Inquisitive 60, Prudence 92, Interpersonal Sensitivity 88, Sociability 52, Ambition 28, Adjustment 75) alongside her Leadership Versatility Index 360 Feedback ratings on the Forceful (takes charge, decisive, demanding), Enabling (empowering, participative, encouraging), Strategic (direction, expansion, innovation), and Operational (execution, efficiency, order) dimensions.


gregariousness of her moderate sociability. She was also focused on being productive, organizing her team's work, and maintaining a focus on deliverables. This was a reflection of the conscientiousness in her highest scoring trait, prudence, and the stability and consistency in her adjustment.

Second, Gretchen was surprised that she was rated as being overly encouraging, especially on the items "sensitive to people's feelings" and "gives people the benefit of the doubt." Her self-ratings indicated that she thought she actually did not display enough of these behaviors. She also did not expect to be rated as too orderly, especially by her boss, on the items "organized," "process-oriented," and "attention to detail," which she had rated as either "too little" or "the right amount." About this theme, she wondered out loud how one could actually be too organized (a telling sign about her mindset). The coach pointed out that the too-supportive and too-orderly themes corresponded to her two highest personality scores, interpersonal sensitivity and prudence. "Clearly," she said, "these things are core to who I am. The key to my success has been playing nice with others and meeting—actually, exceeding—expectations."

The third headline concerned feedback that she could stand to assert herself more and be a more strategic leader, especially in terms of setting a direction that could help grow revenues (which her manager described as a significant issue). Her self-ratings showed some awareness of these themes, especially the need to focus more on a long-term plan for growing customers. Gretchen also recognized some elements of needing to assert herself and be more forceful, especially in terms of taking clear, if unpopular, decisions and standing her ground, as well as needing to be firmer in holding people accountable.
On the other hand, Gretchen did not recognize that her peers and direct reports thought she needed to take charge more in terms of asserting control and setting expectations.

Gretchen and the coach met a few days later to sum up her feedback and create a development plan. The coach recommended she choose two behavior change goals and encouraged her to fortify her strengths and build off what she most values about herself. In response, she identified with being a great people leader who really cared. The coach then asked her to think about the two issues she most wanted to focus on in coaching and to imagine herself making these behavioral adjustments in a way that was consistent with this image of herself. Gretchen decided she wanted to be less concerned with making people happy and more concerned with bringing out the best in them. Her development plan for this involved setting a higher bar for performance by explaining that she thought everyone could raise their games, starting with her. She committed to being clearer about expectations and establishing progress check-ins as part of her monthly one-on-one meetings with her direct reports.


Gretchen's second development goal was to focus on growing customer accounts, both by expanding current accounts and especially by landing new ones. In order to free the time for this, Gretchen resolved to be less involved in routine work that her team could handle without her.

After a few months of implementing her development plan, Gretchen and the coach reviewed progress; she described two significant insights. First, she reported surprise at realizing how caring about people sometimes involved tough love and nudging them to try harder. Second, Gretchen reported the realization that she can be too planful and organized. She described how she was getting more comfortable working with a less structured agenda when meeting with clients and how this allowed for richer discussions that often led to breakthrough ideas for advertising their business.

CONCLUSION

Personality assessment can greatly enhance the self-awareness provided by 360 Feedback. Realizing this benefit, however, requires a personality instrument that measures traits related to effective leadership. The instrument should also portray personality in a way that leaders can readily understand as relevant to their development. Then, regardless of the instrument employed, alignment must be addressed by determining which personality scales are relevant to each 360 dimension. The sequencing of presenting results also needs to be considered, for instance, first having a session to review the 360 ratings and then a session to debrief the personality results, or vice versa, or reviewing both in one session. Finally, we recommend using the 2 × 2 matrix in Figure 12.1 to combine personality assessment and 360 results to facilitate the developmental discussion. The matrix can help identify strengths, performance risks, adapted capabilities, and unrealized potential, thereby organizing the effort to channel deep insight into actions that result in expanded capability and enhanced performance.

REFERENCES

Ames, D. R., & Flynn, F. J. (2007). What breaks a leader? The curvilinear relation between assertiveness and leadership. Journal of Personality and Social Psychology, 92, 307–324.
Bartram, D. (2005). The great eight competencies: A criterion-centric approach to validation. Journal of Applied Psychology, 90, 1185–1203.
Block, J. (1995). A contrarian view of the five-factor approach to personality description. Psychological Bulletin, 117, 187–215.
Cattell, H. B. (1989). The 16PF: Personality in depth. Champaign, IL: Institute for Personality and Ability Testing.
Church, A. H. (1997). Managerial self-awareness in high-performing individuals in organizations. Journal of Applied Psychology, 82, 281–292.
Conway, J. M., & Huffcutt, A. I. (1997). Psychometric properties of multi-source performance ratings: A meta-analysis of subordinate, supervisor, peer, and self-ratings. Human Performance, 10, 331–360.
Costa, P. T., & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual. Odessa, FL: Psychological Assessment Resources.
Day, D. V., Harrison, M. M., & Halpin, S. M. (2009). An integrative approach to leader development: Connecting adult development, identity, and expertise. New York, NY: Routledge.
DeRue, D. S., Nahrgang, J. D., Wellman, N., & Humphrey, S. E. (2011). Trait and behavioral theories of leadership: A meta-analytic test of their relative validity. Personnel Psychology, 64, 7–52.
DeYoung, C. G., Quilty, L. C., & Peterson, J. B. (2007). Between facets and domains: 10 aspects of the Big Five. Journal of Personality and Social Psychology, 93, 880–896.
Dragoni, L., Oh, I.-S., Vankatwyk, P., & Tesluk, P. E. (2011). Developing executive leaders: The relative contribution of cognitive ability, personality, and the accumulation of work experience in predicting strategic thinking competency. Personnel Psychology, 64, 829–864.
Ericsson, K. A. (1996). The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games. Mahwah, NJ: Erlbaum.
Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12, 19–30.
Heidemeier, H., & Moser, K. (2009). Self–other agreement in job performance ratings: A meta-analytic test of a process model. Journal of Applied Psychology, 94, 353–370.
Hogan, J., & Holland, B. (2003). Using theory to evaluate personality and job performance relations: A socioanalytic perspective. Journal of Applied Psychology, 88, 100–112.
Hogan, R. (2007). Personality and the fate of organizations. Mahwah, NJ: Erlbaum.
Hogan, R., & Hogan, J. (2007). Hogan Personality Inventory manual. Tulsa, OK: Hogan Press.
Hogan, R., & Warrenfeltz, R. (2003). Educating the modern manager. Academy of Management Learning and Education, 2, 74–84.
Hooijberg, R., & Choi, J. (2000). Which leadership roles matter to whom? An examination of rater effects on perceptions of effectiveness. Leadership Quarterly, 11, 241–364.
James, L. R., & Mazerolle, M. D. (2002). Personality in work organizations. Thousand Oaks, CA: Sage.
Janovics, J. E., & Christiansen, N. D. (2003). Profiling new business development: Personality correlates of successful ideation and implementation. Social Behavior and Personality, 31, 71–80.
Jeanneret, R., & Silzer, R. (1998). Individual psychological assessment: Predicting behavior in organizational settings. San Francisco, CA: Jossey-Bass.
Judge, T. A., Bono, J. E., Ilies, R., & Gerhardt, M. W. (2002). Personality and leadership: A qualitative and quantitative review. Journal of Applied Psychology, 87, 765–780.
Judge, T. A., Colbert, A. E., & Ilies, R. (2004). Intelligence and leadership: A quantitative review and test of theoretical propositions. Journal of Applied Psychology, 89, 542–552.
Kaiser, R. B., & Hogan, R. (2007). The dark side of discretion: Leader personality and organizational decline. In R. Hooijberg, J. Hunt, J. Antonakis, & K. Boal (Eds.), Being there even when you are not: Leading through strategy, systems and structures. Monographs in leadership and management (Vol. 4, pp. 177–197). London, England: Elsevier Science.
Kaiser, R. B., & Hogan, J. (2011). Personality, leader behavior, and overdoing it. Consulting Psychology Journal: Practice and Research, 63, 219–242.
Kaiser, R. B., & Kaplan, R. E. (2005). Overlooking overkill? Beyond the 1-to-5 rating scale. Human Resources Planning, 28(3), 7–11.
Kaiser, R. B., & Kaplan, R. E. (2006). The deeper work of executive development. Academy of Management Learning and Education, 5, 463–483.
Kaiser, R. B., LeBreton, J. M., & Hogan, J. (2015). The dark side of personality and extreme leader behavior. Applied Psychology: An International Review, 64, 55–92.
Kaiser, R. B., & Overfield, D. V. (2010). The leadership value chain. The Psychologist-Manager Journal, 13, 164–183.
Kaiser, R. B., & Overfield, D. V. (2011). Strengths, strengths overused, and lopsided leadership. Consulting Psychology Journal: Practice and Research, 63, 89–109.
Kaiser, R. B., & Overfield, D. V. (2017). Leadership Versatility Index: A user guide for version 5.0. Greensboro, NC: Kaiser Leadership Solutions.
Kaiser, R. B., Overfield, D. V., & Kaplan, R. E. (2010). Leadership Versatility Index version 3.0 facilitator's guide. Greensboro, NC: Kaplan DeVries.
Le, H., Oh, I.-S., Robbins, S. B., Ilies, R., Holland, E., & Westrick, P. (2010). Too much of a good thing? The curvilinear relationships between personality traits and job performance. Journal of Applied Psychology, 95, 1–21.
McCall, M. W., Jr., & Lombardo, M. M. (1983). Off the track: Why and how successful executives get derailed. Greensboro, NC: Center for Creative Leadership.
McCall, M. W., Jr., Lombardo, M. M., & Morrison, A. M. (1988). The lessons of experience: How successful executives develop on the job. Lexington, MA: Lexington Books.
Oh, I.-S., & Berry, C. M. (2009). The five-factor model of personality and managerial performance: Validity gains through the use of 360-degree performance ratings. Journal of Applied Psychology, 94, 1498–1513.
Schwartz, T., & McCarthy, C. (2007). Manage your energy, not your time. Harvard Business Review, 85(10), 63–73.
Vergauwe, J., Wille, B., Hofmans, J., Kaiser, R. B., & De Fruyt, F. (2017). The "too little/too much" scale: A new rating format for detecting curvilinear effects. Organizational Research Methods, 20, 518–544.
Vergauwe, J., Wille, B., Hofmans, J., Kaiser, R. B., & De Fruyt, F. (2018). The double-edged sword of leader charisma: Understanding the curvilinear relationship between charismatic personality and leader effectiveness. Journal of Personality and Social Psychology, 114(1), 110–130. http://dx.doi.org/10.1037/pspp0000147
Witherspoon, R., & White, R. P. (1996). Executive coaching: A continuum of roles. Consulting Psychology Journal: Practice and Research, 48, 124–133.
Woods, S. A., & Anderson, N. R. (2016). Toward a periodic table of personality: Mapping personality scales between the five-factor model and the circumplex model. Journal of Applied Psychology, 101, 582–604.
Yukl, G. A. (2006). Leadership in organizations (6th ed.). Englewood Cliffs, NJ: Prentice-Hall.


13

STRATEGIC 360 FEEDBACK FOR ORGANIZATION DEVELOPMENT ALLAN H. CHURCH AND W. WARNER BURKE

INTRODUCTION

Organization development (OD) is a field of practice focused on individual and organizational change. Since its origins in the 1950s in social psychology and group dynamics, many influences, in the form of theoretical constructs and applications, have shaped the approach employed by practitioners today. Some of these, including process consultation, new science and growth mindset, whole systems interventions, appreciative inquiry, diversity and inclusion, and dialogic OD, have emerged as discrete areas of practice. The well-known frameworks of consulting skills, action research, employee surveys, and leveraging individual feedback from multiple sources for enhancing self-awareness and growth (Burke, 1982, 2011; Church, Waclawski, & Burke, 2001; Waclawski & Church, 2002), however, have remained at the core since the field's inception. It should be no surprise, then, that 360 Feedback is an integral part of the OD practitioner's toolkit for driving individual and organizational change at multiple levels within an organization's system (Church, Walker, & Brockner, 2002).

In fact, recent research conducted on 388 OD practitioners from multiple professional groups (Church, Shull, & Burke, 2018; Shull, Church, & Burke, 2014), including the OD Network, the International Society for Organization Development, the National Training Laboratories (NTL), and the Society for Industrial–Organizational Psychology, has supported the critical role that feedback plays in the change process. For example, the study noted that 87% cited developing organizational leaders as the number one ranked value for the field today. Further, enhancing self-awareness among clients was ranked second overall as a key goal for OD interventions, which speaks to the importance of using feedback from multiple sources to focus on leadership strengths and opportunities (e.g., Bracken, Timmreck, & Church, 2001; Happich & Church, 2017; Phillips, Phillips, & Zuniga, 2013; Waclawski & Church, 2002). Finally, and perhaps most germane to the practice of 360 Feedback, was the use of feedback tools themselves. Specifically, from a list of 63 possible interventions, 71% of respondents cited the use of data survey and feedback methods for driving organizational change, and 45% specifically noted their use of multirater feedback in their standard practice.

The fact that almost half of this sample of OD practitioners rely on 360 Feedback as a core methodology speaks to the central role that feedback tools play in driving organization change through a focus on enhancing individual self-awareness. What was once considered a new methodology for enhancing human resource (HR) practices (e.g., Church & Bracken, 1997; London & Beatty, 1993), and a fad by many others, has in fact been central to OD efforts for decades. What makes the use of 360 Feedback processes unique to OD applications, however, is what it represents and why the method is employed. Specifically, the premise is that OD is about driving change in individuals, which in turn drives broader change in organizations. While some OD interventions related to 360 Feedback do occur at the micro (individual) and meso (group) levels (see Church et al., 2002), the primary use of data-driven feedback from an OD perspective is at the macro (organizational system) level. In short, OD practitioners employ 360 tools with strategic intent to communicate, reinforce, develop, and measure collective behavior change toward some type of desired outcome.
This is why OD has been described as a process directed at developing "the masses" (Church, 2013, 2014; Church et al., 2018), whereas practice areas such as talent management are targeted at differentiating "the few" for individual decision-making (see Chapter 6). As a consequence, the application of 360 Feedback is strategic by its very nature (as defined in Chapter 2) when done at the systems level of an organization. By systems, we are referring to the broader concept as defined by Katz and Kahn (1978), not the individual HR systems that make up but one component.

But there is more to the distinction between the use of data-driven methods for OD versus other types of development efforts. As we look ahead to the impact of external forces shaping organizations today, such as the (a) changing nature of work, (b) changing nature of data, and (c) changing dynamics of the workforce itself (Church & Burke, 2017), the role that 360 Feedback plays in driving organizational transformation becomes even more important.


Strategic 360 Feedback for Organization Development // 215

The purpose of this chapter is to discuss the application of 360 Feedback specifically for OD and change interventions. The emphasis is on (a) the ways in which using this data-based feedback methodology for OD efforts is similar to and different from other applications (e.g., individual coaching and leader development, team effectiveness, and talent management) and (b) the origins, evolution, and current state of the method as a key tool for OD practitioners. The chapter begins with an overview of the role and key differentiators of strategic 360 Feedback for OD and change-related interventions. The core mechanisms for linking 360 Feedback to an organization’s mission, vision, and strategy in driving organizational change using a framework such as the Burke–Litwin model (Burke & Litwin, 1992) are also discussed. Next, given that enhancing self-awareness to drive individual and organizational change is one of OD’s core values, a brief overview of the origins of 360 Feedback is presented based on the work of Blake and Mouton (1964). In this context, it is important to note that 360 Feedback can be effectively utilized as either a transactional (managerial) or transformational (leadership) change lever. Case examples are provided for each type of approach. The chapter concludes with summary observations about the evolution and potential future of the application of 360 Feedback in OD interventions, particularly with respect to the shifting forces in technology and the digitization of HR and the challenges that lie ahead.

THE ROLE OF 360 FEEDBACK IN ORGANIZATION DEVELOPMENT AND CHANGE

Although there are many approaches to using 360 Feedback in organizational settings (e.g., for individual coaching and development, leadership programs, understanding climate and team dynamics, performance management, identifying high potentials, and selecting leaders for placement), the application for OD and change interventions is unique when two primary conditions exist. First, 360 Feedback should be focused on driving a large-scale organizational change or transformation aligned to some intended strategic or cultural goal. Although 360 Feedback in general is thought to be strategic when aligned to any set of organizational priorities, in this case the focus is on a desired future state. Following the seminal work of Lewin (1958) on the process of social change, individual feedback is provided to a collective set of employees to create a felt need for change. That felt need, in turn, drives enhanced self-awareness relative to the results provided (i.e., a new set of standards of behavior that reflect the desired strategic goal or cultural transformation). Although similar in orientation to a future-focused approach to 360 Feedback for talent management (see Chapter 6), the emphasis here is on articulating and reinforcing that desired direction (and sometimes setting a baseline for measurement over time as


216 // 360 for Development

well) versus assessing capability against the new standard and making decisions about individuals. In short, the OD approach is inclusive in telling people what is important to the organization and how they need to change, not deselecting them. Based on an emphasis on organizational priorities, applications of 360 Feedback for OD could be aligned to capabilities required to achieve a new business strategy, aimed at enabling the mission, vision, and values of the corporation, or perhaps conveying key messages around a new desired culture (e.g., of empowerment). Clearly, this would indicate that 360 Feedback interventions for organizational change are almost always going to reflect custom content (e.g., behavioral items to be rated) specific to a given context or initiative. For example, we were engaged in a targeted 360 Feedback process several years ago with the partners in a global professional services firm to help them focus on building their collaboration skills across siloed lines of business. The goal was to have an organization that was able to facilitate knowledge sharing and cross-selling in a culture where rewards had historically been based completely on within-group success measures. The new vision was a culture of greater partnership, not competitiveness, and 360 Feedback was seen as one key mechanism for driving that message among the partners for the first time. The second way in which 360 Feedback for OD and change is unique is that it should be designed and implemented at the systems level. In other words, it should be aligned to the other aspects and subsystems of the organization in which it is deployed (Katz & Kahn, 1978). While strategic 360 in general should be integrated with other HR systems to ensure its maximum effectiveness, in the context of OD, that integration goes one step further.
More specifically, we mean that all the subsystems of the organization, such as communications, reward and recognition mechanisms, senior leadership speeches and messaging internal and external to the organization, structures and hierarchies, work group climate, job design, and the alignment of employee needs and values to the employee value proposition (EVP), are all clearly considered or adjusted to enable and reinforce a successful transformation. Clearly, the senior leadership of an organization must be aligned to the behaviors being measured and fed back to participants as well. Ideally, in an OD context the senior-most leaders should be visible sponsors, role models, and early adopters of any new 360 Feedback system as participants themselves (Chapter 3). Otherwise, the change effort is likely to fail miserably. Further, middle management must also have a stake in the process with respect to their behaviors by being included in feedback processes at the appropriate times (e.g., via training or other facilitated sessions). As with any type of 360 Feedback process, it is expected that there will be high-quality coaching and delivery mechanisms in place to ensure action planning and accountability for individual change following the


results. We know from research that delivering feedback without action planning has almost no impact, whereas facilitated delivery and support do lead to behavior change over time (Bracken, Rose, & Church, 2016; Smither & Walker, 2001). In short, ensuring strategic alignment up and down the entire organization system is critical for success. While this is true in other areas as well, such as leadership development and talent management (e.g., when using 360 Feedback as a tool for identifying high-potential leaders or selecting candidates for key succession roles), it is critically important for OD applications because the emphasis is on driving behavior change in a unified direction toward a desired future state. If the new behaviors are not aligned to other elements of the organization or supported by different levels of employees and management, there will be significant resistance or blockage to moving forward. In short, it would be like banging your head against the wall. One approach to framing the design and implementation of a 360 Feedback process for change is to use an organizational model such as the Burke–Litwin (B-L; Burke & Litwin, 1992) model of organizational performance and change. Although we have described the application of the B-L model to 360 elsewhere in detail (e.g., Church et al., 2001), the point here is that having a broad conceptual framework for identifying and evaluating the potential relationship and interplay of other factors to the new behaviors being measured is extremely important. Our experience in designing 360 Feedback systems and working with many organizational leaders and HR professionals over the past three decades is that they often do not have a systems mindset or take a long-term approach to transformation and change. There is often an unrealistic expectation that implementing a simple feedback mechanism will result in change in their leaders overnight.
Not only does this kind of action set unrealistic expectations on the part of participants, but it also damages the credibility of the tool itself, as 360 Feedback processes that have not been well thought through and connected strategically to other aspects of the organization are likely to be shut down and defunded. Forcing a diagnostic evaluation with a systems framework such as the B-L model enables a much more comprehensive approach to understanding where the reinforcing, neutral, and resistance points are today. Figure 13.1 provides an example of how this might be used. In this case analysis, it is clear that forces from the external environment are aligned (i.e., the dark boxes) to changes being made in the mission and strategy of the organization. These changes are being reflected in the new leadership capabilities identified for the 360 Feedback system being introduced in the company, and the types of behaviors to be measured are effectively linked to the population engaged in the process. Conversely, barriers in the overall culture of the organization as well as other systems and processes,


[Figure 13.1 appears here. It presents the Burke–Litwin model factors (External Environment; Mission & Strategy; Leadership; Culture; Structure; Management Practices; Systems (Policies/Processes/PM); Work Group Climate; Job-Skill Match; Motivation; Needs & Values; and Individual, Group & Org Performance), with each factor coded in the legend as aligned, neutral, or misaligned.]

FIGURE 13.1 Application of 360 Feedback to the Burke–Litwin model of organizational performance and change.

such as the formal performance management system, are resulting in a team or work group climate at the local level that is not supportive (i.e., the open boxes) of the change required. This in turn is likely to lead to motivation and potential engagement challenges when it comes to ensuring employees will actually adopt the new behaviors. Other aspects of the system, such as the structure, how managers engage on a daily basis, and consideration of the needs and values of employees, are neutral factors (gray boxes), as is the overall outcome of the entire intervention. While the culture overall will likely take significant time to shift, the key to managing a successful strategic 360 Feedback implementation here is determining how best to modify the appropriate systems or to influence what managers do to help align the new leadership capabilities so that climate and motivation can be more positively influenced. This type of approach can be helpful in diagnosing not only the present context for an intervention but also future potential barriers. It can be just as helpful in anticipating where those same points of contact might emerge over the next 1–3 and 3–5 years (e.g., if senior leaders are expected to change or a set of mergers is anticipated that could shift the balance of power in HR and OD interventions being deployed). It is important to recognize what it takes for complex organizations to truly shift their cultures and direction and that a longer term focus on impact and evaluation is needed (Church, 2017). In sum, the key elements for designing a strategic 360 Feedback process for driving large-scale organizational change efforts include the following components:


1. Develop content (e.g., behaviors, competencies, skills) that clearly articulates, communicates, and measures a desired future state (and is not reflective of the current values, norms, or ways of doing business).

2. Review all aspects of the broader organizational system in which the 360 Feedback process is to be delivered (e.g., mission, strategy, HR and business processes, rewards, structures, communications) using a framework such as the B-L model to align as many factors as possible up front and identify those that are neutral or potential points of resistance now or likely to be in the future.

3. Implement the feedback process in a way that is linked to other elements of the broader organizational change effort, such as via a “campaign,” program, or other larger transformation or senior leadership initiative. Even the best 360 Feedback process, implemented well, is unlikely to result in change by itself, particularly in the short term.

4. Include as many participants as possible in the rollout process given (a) the change readiness of the organization and (b) the resources available to ensure quality feedback discussions and action planning can occur. The “desk drop” method of delivering reports (Church & Waclawski, 2001) is unlikely to have a broad impact on organizational transformation, as only those individuals already predisposed to wanting feedback will attend to it. Those who might need the results the most in order to drive change in the culture will simply ignore it. Thus, targeting key groups of employees for participation early on who will serve as agents of change will be important in the design as well.

5. Ensure senior leadership sponsorship, ownership, engagement, and modeling of the importance of the 360 Feedback process. This is absolutely necessary to create hierarchical buy-in to the feedback at the next layers and below in terms of both participation and utilization for development planning. If senior leaders are not visibly held accountable for participating in the process and demonstrating the insights they have learned and the actions they are taking to change their behaviors, nobody else will care. When the chief executive officer of an organization actively reviews raters for his or her direct reports, provides ratings, and then has a discussion about the results with concrete outcomes, the rest of the organization will see the importance of the tool and what it means for the future.

6. Set expectations for an appropriate level of (extended) commitment to the effort, required resources to sustain delivery beyond the initial launch and fanfare, and follow-through to ensure accountability for change in the right direction. The original intent of the 360 Feedback process must be preserved at all costs to avoid creating distrust and cynicism in the organization. We have seen several situations, for example, where an organization launched a tool under one set of rules (e.g., confidential feedback for development only) and then changed its approach after the data were collected to use it for other purposes. This is highly detrimental to the process, the end-state goal, and the organization as a whole, and it can even create risk of litigation downstream, particularly if the tools were not designed for the new purposes for which they are now being used (see Chapter 29).

ORIGINS OF 360 FEEDBACK IN THE PRACTICE OF OD

In an OD context, 360 Feedback is a critical tool because it drives self-awareness and change at the individual level, which in turn creates change in the collective (i.e., “the many”). But, where did this concept start? Why is it so integral to the practice of OD? We turn now to the beginnings of 360 Feedback for OD and the human dynamics side of the process. It is difficult to determine the origins of an activity that became a social and learning movement as sweeping as the process of 360 Feedback. We can be certain that Robert R. Blake was one of, if not the, originators of what became 360 Feedback in the 1990s and carries through to today. At the time, the mid-1950s, he was, like many of his colleagues in social psychology and related fields, involved in the early stages of a movement already under way: sensitivity training (or the T Group, as it became known). Also, like his colleagues, Blake had been to Bethel, Maine, and experienced firsthand the T Group, which was a 3-week immersion in a small group of 10 or so strangers who were challenged to learn as much as they could about group dynamics, interpersonal relationships, and themselves in terms of how they affected others in their group as well as how those others affected them, especially concerning emotions and feelings. Although an experienced trainer was part of the group, this individual provided little, if any, direction about what to learn and how to do so except to be open and honest in expressing oneself and to give one another feedback about how they were being affected. Each participant wore a name tag but was not otherwise identified regarding their work, family, date or place of birth, or even home address. The trainer’s role was to facilitate the learning process. Blake immersed himself and eventually became a trainer as well. He also encouraged his former student and then faculty colleague at the University of Texas, Jane S. Mouton, to join him in this new learning adventure.
With time and no doubt due to Blake’s restless nature, he and Mouton began to veer from the ambiguous and unstructured characteristics of the T Group and to develop a more structured and direct approach to the learning process. They referred to their approach as structured interventions or structured feedback. Even though the ultimate learning goal was the same—​increasing one’s


self-awareness and interpersonal skills—instead of the learning content being a function of what emerged in the group, the focus was to a large extent provided by the trainers, in this case Blake and Mouton. Along with another luminary in the T Group movement, Herb Shepard, they were involved with Humble Oil Refineries in Texas (now Exxon Mobil), where the learning effort emphasized management of people, not just interpersonal relationships and group dynamics per se. Blake, Mouton, and Shepard believed this training work with managers from the same organization needed to be more directive. The work, moreover, became the basis for what is now OD and what Blake and Mouton developed further into their model and approach to individual and organization change: the managerial grid. Like the T Group, a fundamental component of the grid is individual feedback. But, unlike sensitivity training, the feedback is structured, with the structure being one’s managerial style. In the early 1960s, Blake and Mouton went on to formalize their approach to training and went to market with the managerial grid seminar, a 5-day program designed to develop one’s managerial approach, with a strong bias toward a participative style (Blake & Mouton, 1964). Prior to the program, participants would rate themselves on five styles of management according to the grid model, ranging from laissez-faire to authority–obedience, supportive, and ultimately to participative, their normative stance. The participants’ subordinates would rate them on the same set of behavioral styles. As a part of the seminar, participants would receive comparative feedback of how they perceived themselves versus how their subordinates perceived them. Congruence of ratings was rare, and few were seen as participative. Thus, the seminar was devoted to enhancing one’s participative style.
To summarize, with feedback as the fulcrum for change, the T Group method is based on what behavior emerges from an unstructured situation; that is, the content will vary depending on who is in the group. The trainer will guide the learning and some of the content, but in large measure, the unstructured aspect of the training remains. In an approach based on a more structured intervention, the content for learning is predetermined. Thus, it gradually became clear that if increasing self-​awareness is a goal, one has a choice: unstructured (the T Group) versus structured (specific behavioral interventions). The choice is also one of depth (Harrison, 1970). The T Group provides a deeper dive into one’s feelings and perceptions, whereas a multirater (or 360 Feedback) process is focused more on certain behaviors and perceptions and does not require as much psychological safety to be successful. Hence, 360 Feedback was born from a focus on interpersonal dynamics and managerial behaviors. The emergence of 360 Feedback as we know it today, however, took some time to develop. For quite a number of years and well into the 1970s, multirater feedback was not


very “multi.” The raters amounted to self—a supervisor or manager—and subordinates of the person being rated, or “180 degrees” as it were. For example, Managing People, a program developed by the Forum Corporation in the late 1970s for their client Citibank (as the bank was known at the time), was based on 39 behavioral practices, all of which concerned people management. These 39 practices were derived from a study conducted within the bank to determine what, for them, should be the best people management practices. Prior to the 5-day residential program that followed, each participant rated him- or herself on the 39 practices and was rated on the same set of practices by the participant’s staff members (i.e., subordinates). During the week, each day was devoted to a particular set or subdimension of practices, beginning with a review of the feedback, followed by role practice on the behaviors that were videotaped, another form of feedback, of course. The five subsets of the 39 practices were

1. Getting commitment to the goals
2. Coaching
3. Appraising performance
4. Compensating and rewarding
5. Managing a staff for continuity of performance

The Managing People program was quite successful and continued for upward of two decades. While these practices might not appear to be cutting edge or future focused by today’s standards, they represented a shift in the desired behaviors for the bank at the time and thus were an early example of an OD approach to using data-based feedback for change. During this period, the 1970s and well into the 1980s, a similar movement to multirater feedback emerged, particularly within the corporate sector. The idea was that effective management consisted of a set of competencies, with a competence being a specific set of skills. These skills could also be thought of as practices. Thus, a set of practices and skills established the basis for a competence. The five subdimensions of the Citibank Managing People program could be considered as competencies (e.g., coaching) that encompass a particular set of skills. Because management practices and competencies are parts of an overall conceptual package, it is not always clear where one starts and the other ends. Usually, however, one begins with a set of competencies, and management practices are then derived as behaviors that manifest a given competence. It seemed that by the 1980s every corporation of several hundred employees or larger had a statement and list of managerial competencies. And, most lists of competencies across companies began to look remarkably similar. Today, we can boil it down to four primary themes:


1. Intrapersonal skills: integrity, emotional stability, self-control
2. Interpersonal skills: able to build and maintain relationships, compassion, empathy, humility
3. Business skills: analyzing data, allocating resources, forecasting budgets
4. Leadership skills: vision, empowering others, good role modeling

In other words, practically everyone has developed a competency model that includes these four sets of capabilities. Competency models no longer differentiate organizations. What can differentiate organizations managerially is their particular set of management practices and how they go about assessing and developing against those. As Church (2014) has noted, there is a finite number of leadership competencies in the world, and 85% of all leadership competency models in organizations cover the same basic content at the conceptual level. What matters most is not the specific set of dimensions themselves but (a) the level of customization of a model to a given organizational context and the language of that organization and (b) the unique behaviors that are to be measured. Yes, there are generic sets of management practices that can also be used for 360 Feedback processes. Consulting firms produce them every day. However, one can customize these to create management practices for any given context or change effort. The Center for Creative Leadership (CCL), for example, has a pool of several hundred practices that a client organization can choose from, and then CCL provides a customized 360 Feedback set of, say, 20 practices. Optimally, however, the best approach is to create a unique, tailored version for one’s organization, particularly if the behaviors are meant to reflect a new set of capabilities or change for the organization (Church, 2014).
Returning to our timeline, by the 1990s the term 360 Feedback had been officially coined, and it was now a consulting and industry fad (Bracken et al., 2016), with organizations experimenting with these processes in various forms (e.g., Church, 1995; Edwards & Ewen, 1996; London, Smither, & Adsit, 1997; O’Reilly, 1994; Tornow & London, 1998). While the concept was popular, many companies were approaching the process in smaller pilot-like segments and without the depth of the strategic linkages described previously. As a result, many of these efforts fizzled out, and companies moved on to other trends at the time. There were, however, some organizations that began implementing more integrated change efforts, with 360 Feedback systems linked to broader change initiatives. Some of the organizations that we worked with during that time period included British Airways, the British Broadcasting Corporation, Caterair, Home Depot, Merck, Mitsubishi, NatWest, and SmithKline Beecham, to name a few. As programs became more aligned to broader change efforts, the question of whether 360 Feedback was an intervention in and of itself


fell by the wayside (similar to the movement in organizational surveys), and more fully integrated OD and change approaches to 360 Feedback implementations began to take over. The multiyear effort to drive an inclusive culture at PepsiCo through the use of an aligned 360 Feedback process, a survey program, changes to performance management to measure business and people results, and a greater emphasis on inclusive leaders in the talent management process is one such example (Church, Rotolo, Shull, & Tuller, 2014; Thomas & Creary, 2009). When considering the evolution of 360 Feedback for change, it is also interesting to note that two different streams of practice have emerged over time. Although similar in their methodology and requirements, the approach to the use of data for organizational change manifests itself via two different paths. If we frame this in terms of the B-L model, the first path is leveraging 360 Feedback for transactional OD, that is, a focus on managerial behaviors working from the middle of the organization outward to drive an aligned culture change (almost like change from the bottom up). The second path represents the more formal top-down transformational OD approach that leverages the senior-most executives and follows a more forceful change doctrine. While both can be effective, they do manifest themselves differently and with different implications for the timing and potential impact of the change. In the next section, we describe an example of each type of approach.

360 FEEDBACK FOR TRANSACTIONAL OD

In a transactional approach to 360 Feedback for OD, the emphasis is on management practices representing a shift in day-to-day behaviors in the middle of the organization. The target is management (not leadership), and the outcome measures are often climate (not culture). Typically, this is tied to a programmatic delivery or some other formal vehicle, as noted previously, to make the connections to the organization’s new strategy, mission, or vision. One such example of an extensive change effort linked to a tailored program is the work that was conducted for the National Aeronautics and Space Administration (NASA) beginning in the late 1970s. The broader strategic intent of NASA at the time was twofold: (a) to establish a management development center and (b) to design and conduct a feedback-rich training and development program for midlevel managers across all centers (Johnson, Langley, Marshall, Kennedy, etc.) and headquarters to deliver a new way of working together across the agency. The feedback process used in the program was based on a custom 360 Feedback tool derived from a newly conducted competency study. The center was eventually created and settled at Wallops Island, Virginia, a World War II US Navy base that was no longer needed for


naval purposes. The base was rehabilitated for use by NASA and remains to the present time as its management and executive development center. Based on a grant from NASA, the social-organizational psychology program at Teachers College, Columbia University, was charged with conducting research to establish the competencies that NASA deemed appropriate for their needs and their desired culture. Data were collected via interviews and questionnaires across the organization to determine the necessary competencies. The management practices used in the 360 Feedback process were then created from these sets of competencies. The five competencies for midlevel management at the time were

1. Planning and controlling
2. Promoting achievements
3. Understanding and supporting others
4. Evaluating subordinates
5. Managing interfaces

While three of these competencies reflected the basics of good management, promoting achievements and managing interfaces were more aligned to the desired future direction of the agency. Developed from these five competencies were 37 practices (i.e., behavioral statements outlining the requirements for effective performance) on which the management development program was based. These management practices were based on certain beliefs and values consistent with the organization’s culture and were general enough for all managers within the organization to understand them and at the same time specific enough to be measurable. See Figure 13.2 for NASA’s original set of competencies and management practices. After several successful years of the program, NASA authorized further research with additional populations to expand the culture change effort. The next phase was to identify a new set of executive practices to undergird a more senior program. Following a similar research and design process, the six competencies that categorized the 40 practices in this senior executive framework were

1. Managing tasks
2. Influencing others
3. Managing the team
4. Working with subordinates
5. Ensuring openness
6. Leading

FIGURE 13.2 Sample management practices “placemat.”


Over time, it became standard practice at NASA for managers and executives who had attended a program to follow up with their boss and direct reports regarding the feedback obtained (initially, the feedback was considered so confidential and for development purposes only that sharing was not necessarily part of the formal flow). The process was to have a meeting with their boss and discuss the feedback and then to have a separate meeting with their team members. Specific guidelines for how to conduct these meetings were provided to ensure a smooth set of sessions (e.g., being open with the boss; assuring subordinates that retribution was out of the question, etc.). The goals for the managers were to gain a deeper understanding of the feedback and to develop a specific plan for improvement. Collectively, along with others that were developed as well (e.g., Managing the Influence Process or MIP program), these structured feedback efforts were designed to move the organization forward toward a new way of working together, albeit at a relatively steady pace, working from the middle upward over time. While other organizational factors were also examined and aligned where possible (e.g., a long-​standing practice of an integrated organizational survey with linked content was also leveraged), some subsystems were more challenging to influence given the complexity of the infrastructure and the changing leadership following more senior administration changes and shifts in budgets and priorities. Nonetheless, these management and executive development activities were conducted well into the twenty-​first century and could be considered as transactional changes, that is, focusing on moving the organization forward via a continuous improvement approach and not a top-​down, wholesale culture change via senior leadership interventions. In contrast, what follows is a brief example of transformational change where 360 Feedback was central to a significant cultural shift. 
360 FEEDBACK FOR TRANSFORMATIONAL OD

Normally, when thinking about a large-scale cultural change of an organization, we begin with the organization's external environment (Burke & Litwin, 1992). What forces in that environment are having an impact on the organization's performance and causing a felt need to change or do something that has not been done before? In the case of British Airways (BA), one major environmental force was clear. Her name was Margaret Thatcher, the prime minister of the United Kingdom. Coming to power in 1979, her mission was to return the country to its roots—a free-market society—and away from what she considered to be socialism. A large part of that mission was to privatize much of the public sector, and BA was number 2 on her list. Thus, BA was to be



removed from the government dole and become a private stock-owned company. This decision meant that, regarding survival, BA was on its own. The culture of BA had gradually changed over the years since World War II, from a military-type hierarchy to a heavy-laden bureaucracy where one's power as a manager was based on what and how much information he or she held and communicated—or not. Breaking this logjam of control was a significant part of the cultural change effort. There were many interventions aimed at changing the culture: changing to new uniforms, providing new paint jobs for the aircraft, changing the financial function fundamentally, establishing closer ties to the unions, and providing training for all HR people in consulting and coaching skills. But the main thrust was an integrated and strategically aligned 360 Feedback process coupled with changes in the performance appraisal and reward systems—in short, a fully integrated approach to organizational transformation. The head of HR at the time, Nicolas Georgiades, conceptualized this part of the change metaphorically as a three-legged stool. If one leg were removed, the stool would not stand. The first leg was "Managing People First," a 5-day experiential learning program for managers based on 20 behavioral practices that emphasized four factors aimed at "opening" the system to be more participative and trusting. The four factors were

• Clarity and helpfulness
• Promoting achievements
• Influencing through personal excellence and teamwork
• Care and trust

The second leg concerned the introduction of a new performance appraisal process for all managers. BA managers for years had been evaluated according to a "results-only" criterion. The critical change made was that half of a manager's evaluation would be based on results and half on how those results were achieved (i.e., the 20 management practices). This is similar to an approach later adopted by many organizations with respect to performance management (PM), including PepsiCo, as part of a change to support a cultural transformation (Corporate Leadership Council, 2002, 2005) and the use of upward feedback to drive manager quality (Bracken & Church, 2013). Given the significance that performance management has for driving organizational change on its own (Burke, 1982; Pulakos, Mueller Hanson, Arad, & Moye, 2015; Smither & London, 2009), leveraging 360 Feedback and PM together represents an incredibly powerful combination. The third leg of the BA transformation was ensuring that managers were actually rewarded on the second leg of the stool, in other words, ensuring there were integration



and accountability in the feedback and performance management process. All three legs together were critical to the effectiveness of the culture change. By 1990, BA was the most profitable airline in the world. For more detail on this case, see the work of Burke (2018), Goodstein and Burke (1991), and Kotter (1993).

In summary, 360 Feedback today is used for "multi" purposes. Being clear about the purpose for a given implementation is one of the most important decisions a practitioner can make. The choice is often individual or organizational (as well as for development or decision-making). If the choice is for the purpose of organizational change, however, there is a clear advantage because one can have learning and improvement for the individual as well as a focus on what behavioral changes are needed at the organizational/cultural level.

THE FUTURE: 360 FEEDBACK FOR ORGANIZATION DEVELOPMENT AND CHANGE INTERVENTIONS

Given the prevalence of 360 Feedback systems today, it is hard to imagine what might be new in the area of OD applications of this methodology (Bracken et al., 2016). Aside from the core issues already discussed, and inclusive of the basics not covered in this chapter but elsewhere in this volume (and in Bracken et al., 2001) (e.g., sound implementation, item content, measurement properties), what else might we consider as factors for enhancing or optimizing this data-driven approach to change 30 to 60 years after its origins? If we turn back to the trends impacting organizations and OD today (Church & Burke, 2017), there are several implications that have direct bearing on how 360 Feedback for OD might continue to evolve. First, with the increasing role of technology and digitization of information and HR processes (Church, Ginther, Levine, & Rotolo, 2015; Kane, Palmer, Phillips, Kiron, & Buckley, 2016; Outram, 2013), it is possible to imagine 360 Feedback systems becoming more adaptive, real time, and automated than ever before. Multilevel content (e.g., behaviors for different management layers or functions, linked to engagement survey scores); branching based on responses targeted at follow-ups and diagnostic questions; and pattern-based feedback (e.g., to drive scale usage) are all possible today and can directly influence the type and quality of the feedback provided. There are also examples of mass customized approaches to 360 (Golay & Church, 2013), which include the CCL model described. But is this what we really want? Some of these approaches might be better suited for targeting an individual's development planning efforts, but we are not sure they will help from an organizational change perspective, where consistency in messaging regarding new behavior standards is paramount to the overall effort.
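The response-based branching mentioned above can be made concrete with a small sketch. This is a hypothetical illustration, not part of any system described in this chapter: the item names, rating threshold, and follow-up questions are invented for the example.

```python
# Hypothetical sketch of an adaptive 360 survey engine: when a rater scores an
# item at or below a threshold, a targeted diagnostic follow-up is queued.

FOLLOW_UPS = {  # item id -> diagnostic follow-up question (illustrative content)
    "delegation": "Which tasks does this leader retain that could be delegated?",
    "coaching": "How often does this leader give you actionable feedback?",
}

def next_items(responses, threshold=2, scale_max=5):
    """Return follow-up questions for items rated at or below the threshold.

    responses: dict mapping item id -> rating on a 1..scale_max scale.
    """
    queued = []
    for item, rating in responses.items():
        if not 1 <= rating <= scale_max:
            raise ValueError(f"rating for {item!r} outside 1..{scale_max}")
        if rating <= threshold and item in FOLLOW_UPS:
            queued.append(FOLLOW_UPS[item])
    return queued
```

A rater who scores "delegation" at 2 on a 5-point scale would be routed to its diagnostic question, while high-rated items generate no extra demand on the rater—one way to keep adaptive surveys short while still targeting the feedback.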



For example, while real-time performance feedback has some merit as a methodology, one has to question whether real-time leadership feedback would reap the same benefits. Moving to automated and adaptive feedback has the potential to be both beneficial and challenging for OD interventions that use data-driven methods. While having highly adaptive and real-time feedback should be a plus for ensuring highly targeted information regarding behavior change at the individual level, it is quite possible that enhanced accessibility and increased familiarity could also lead to reduced engagement with the process. Given that one of the key differentiators of 360 Feedback for OD is its focus on new behaviors that represent a higher bar than the present state (i.e., reflecting the idealized future), having this content communicated in small "bite-size" feedback tools on a constant basis might minimize its perceived importance to the individual. Thus, the gravitas and impact of the "campaign" that is often part of the 360 Feedback process for change might be lost. That said, it might also drive change faster, so it is something to consider. Second, if we consider a future world where robotics and algorithms will soon be the norm (Dotlich, 2018; McAfee & Brynjolfsson, 2017), then the technology itself might change the very nature of what is measured. What started as paper and optical scan forms in the 1980s and migrated to web-based platforms in the 2000s will soon (this is already happening) be applications based in all sorts of everyday devices. Rather than nominations, ratings, feedback, and action planning, the emphasis could eventually be on other forms of observation and measurement outside traditional survey items and scales. Chamorro-Premuzic, Winsborough, Sherman, and Hogan (2016), for example, recently discussed a number of new "talent signals" that they claim could make 360 Feedback obsolete.
Crowdsourcing is the immediate example they offered, but technology may take us further even faster. Robotics might be monitoring your iPhone, computer, hallways, and car to synthesize your conversations or what others say about you even when you are not present. That information could then be integrated and synthesized, and through the use of complex Big Data algorithms and feedback, presented back to you in a packaged way about your skills, capabilities, and impact. Big Brother and related ethical considerations aside (see Church & Dutta, 2013; Church & Silzer, 2016), it is unlikely that the same professionals creating these technology systems and data structures will be the ones who today create 360 Feedback content in the form of survey items. So, there is real potential for a new wave of information that might be entirely unrelated to the behavioral sciences, which is quite concerning. Yet, also intriguing from an OD perspective is the fact that such data would be a true reflection of the culture as experienced day to day, that is, the way we do things around here. So again, there are pluses and minuses to consider. Technology will most certainly enhance organizations’ ability to reach more employees to give them feedback for their development in shorter cycle times and for



less money, which is a good thing, particularly if the data collected and the process are kept confidential. Such a shift might even enable us to measure change more actively through multiple minisurveys administered continuously. However, receiving random ongoing 360 Feedback requests daily could also lead to survey fatigue, reduce response rates, and cause feelings of being oversurveyed. If managed well, it should allow OD and HR practitioners to spend more time interfacing with clients and working on data-based insights and action plans rather than focusing on processing and reporting. How this will play out, of course, remains to be seen. Finally, it goes without saying that advances in linkage analysis (and Big Data applications) will help organizations obtain improved predictive relationships and insights on the trends and drivers of change over time (see Chapter 27). The more disparate data sources are linked, the better able we will be to examine larger trends and determine which aspects (including competencies and practices) are most predictive of driving successful change in organizations (Church & Dutta, 2013). That would be very helpful to organizations and to the field in general, although it may also raise data privacy and confidentiality concerns. Unfortunately, as we have highlighted elsewhere (e.g., Church, 2017; Church & Burke, 2017; Church, Shull, & Burke, 2016), significant capability gaps exist in many practitioners today when it comes to effectively designing and interpreting such complex data sets. If OD practitioners are to leverage strategic 360 Feedback results and take advantage of these types of methodologies in the future, they will need to up their game when it comes to data analytics. In that same study of OD practitioners cited previously (Church, Shull, & Burke, 2016; Shull, Church, & Burke, 2014), only 29% of the 388 respondents mentioned using statistics and research methods as part of their current toolkit.
Clearly, there is room for improvement if we are to embrace the future of 360 Feedback for change.

CONCLUSION

In general, 360 Feedback is a process designed to drive change at the individual level. By itself, it is a powerful tool for helping leaders and managers increase their level of self-​ awareness and individual effectiveness. From an OD perspective, however, 360 Feedback takes on an entirely different lens, that is, that of systemic large-​scale change. While the process is generally the same, the content and implementation between individual and organizational applications are quite different. Even if an entire organization were to go through a stand-​alone generic 360 Feedback approach, it is highly unlikely it would result in organizational change in any meaningful way. For OD interventions to be effective in using this methodology, they must be based on a future state orientation, with custom



behavioral content (e.g., competencies, practices, individual items) that is designed in such a way as to ensure all elements of an organization are moving in the same positive direction. Even a short, focused, custom 360 Feedback implementation (e.g., 10–12 items) that is linked to some strategic aspect of an organization is more likely to drive changes in behavior, and subsequently an organization's culture, than a comprehensive (e.g., 50-item) 360 survey diagnostic that reflects the "kitchen sink" of leadership capabilities based on the latest multifactor model of leadership. Unfortunately, this fact is lost on many organizational leaders and external consultants, who often endorse such approaches. While 360 Feedback tools of the latter type have their place in other applications (e.g., talent management, individual coaching engagements, leadership development programs), they are not effective means for driving OD and change interventions.

REFERENCES

Blake, R. R., & Mouton, J. S. (1964). The managerial grid. Houston, TX: Gulf.
Bracken, D. W., & Church, A. H. (2013). The "new" performance management paradigm: Capitalizing on the unrealized potential of 360 degree feedback. People & Strategy, 36(2), 34–40.
Bracken, D. W., Rose, D. S., & Church, A. H. (2016). The evolution and devolution of 360 degree feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(4), 761–794.
Bracken, D. W., Timmreck, C. W., & Church, A. H. (2001). The handbook of multisource feedback. San Francisco, CA: Jossey-Bass.
Burke, W. W. (1982). Organization development: Principles and practices. Glenview, IL: Scott, Foresman.
Burke, W. W. (2011). Organization change: Theory and practice (3rd ed.). Thousand Oaks, CA: Sage.
Burke, W. W. (2018). Organization change: Theory and practice (5th ed.). Thousand Oaks, CA: Sage.
Burke, W. W., & Litwin, G. H. (1992). A causal model of organizational performance and change. Journal of Management, 18, 523–545.
Chamorro-Premuzic, T., Winsborough, D., Sherman, R. A., & Hogan, R. (2016). New talent signals: Shiny new objects or a brave new world? Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(3), 621–640.
Church, A. H. (1995). First-rate multirater feedback. Training & Development, 49(8), 42–43.
Church, A. H. (2013). Engagement is in the eye of the beholder: Understanding differences in the OD vs. talent management mindset. OD Practitioner, 45(2), 42–48.
Church, A. H. (2014). What do we know about developing leadership potential? The role of OD in strategic talent management. OD Practitioner, 46(3), 52–61.
Church, A. H. (2017). The art and science of evaluating organization development interventions. OD Practitioner, 49(2), 26–35.
Church, A. H., & Bracken, D. W. (1997). Advancing the state of the art of 360-degree feedback: Guest editors' comments on the research and practice of multirater assessment methods. Group & Organization Management, 22(2), 149–161.
Church, A. H., & Burke, W. W. (2017). Four trends shaping the future of organizations and organization development. OD Practitioner, 49(3), 14–22.
Church, A. H., & Dutta, S. (2013). The promise of big data for OD: Old wine in new bottles or the next generation of data-driven methods for change? OD Practitioner, 45(4), 23–31.
Church, A. H., Ginther, N. M., Levine, R., & Rotolo, C. T. (2015). Going beyond the fix: Taking performance management to the next level. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(1), 121–129.
Church, A. H., Rotolo, C. T., Shull, A. C., & Tuller, M. D. (2014). Inclusive organization development: An integration of two disciplines. In B. M. Ferdman & B. Deane (Eds.), Diversity at work: The practice of inclusion (pp. 260–295). San Francisco, CA: Jossey-Bass.
Church, A. H., Shull, A. C., & Burke, W. W. (2016). The future of organization development, transformation, and change. In W. J. Rothwell, J. M. Stavros, R. L. Sullivan, & A. Sullivan (Eds.), Practicing organization development: A guide for leading change (4th ed., pp. 419–428). Hoboken, NJ: Wiley.
Church, A. H., Shull, A. C., & Burke, W. W. (2018). Organization development and talent management: Divergent sides of the same values equation. In D. W. Jamieson, A. H. Church, & J. D. Vogelsang (Eds.), Enacting values-based change: Organization development in action (pp. 265–294). Cham, Switzerland: Palgrave Macmillan.
Church, A. H., & Silzer, R. (2016). Are we on the same wavelength? Four steps for moving from talent signals to valid talent management applications. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(3), 645–654.
Church, A. H., & Waclawski, J. (2001). A five phase framework for designing a successful multirater feedback system. Consulting Psychology Journal: Practice & Research, 53(2), 82–95.
Church, A. H., Waclawski, J., & Burke, W. W. (2001). Multisource feedback for organization development and change. In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback: The comprehensive resource for designing and implementing MSF processes (pp. 301–317). San Francisco, CA: Jossey-Bass.
Church, A. H., Walker, A. G., & Brockner, J. (2002). Multisource feedback for organization development and change. In A. H. Church & J. Waclawski (Eds.), Organization development: A data-driven approach to organizational change (pp. 27–51). San Francisco, CA: Jossey-Bass.
Corporate Leadership Council. (2002). Closing the performance gap: Driving business results through performance management. Washington, DC: Corporate Executive Board.
Corporate Leadership Council. (2005). PepsiCo's dual performance rating practice: An overview of the practice and a conversation with Allan Church, VP Organization & Management Development. Washington, DC: Corporate Executive Board.
Dotlich, D. (2018). In first person: The future of C-suite potential in the age of robotics. People & Strategy, 41(1), 48–49.
Edwards, M. R., & Ewen, A. J. (1996). 360° feedback: The powerful new tools for employee assessment and performance improvement. New York, NY: AMACOM.
Golay, L. M., & Church, A. H. (2013). Mass customization: The bane of OD or the cure to what ails it? Leadership and Organization Development Journal, 34(7), 661–679.
Goodstein, L. D., & Burke, W. W. (1991). Creating successful organization change. Organizational Dynamics, 19, 5–17.
Happich, K., & Church, A. H. (2017). Going beyond development: Key challenges in assessing the leadership potential of OD and HR practitioners. OD Practitioner, 49(1), 42–49.
Harrison, R. (1970). Choosing the depth of organizational intervention. Journal of Applied Behavioral Science, 6(2), 181–202.
Kane, G. C., Palmer, D., Phillips, A. N., Kiron, D., & Buckley, N. (2016). Digitally savvy executives are already aligning their people, processes, and culture to achieve their organizations' long-term digital success. Retrieved from http://sloanreview.mit.edu/projects/aligning-for-digital-future/
Katz, D., & Kahn, R. L. (1978). The social psychology of organizations (2nd ed.). New York, NY: Wiley.
Kotter, J. P. (1993). Changing the culture at British Airways. Harvard Business School Case 491-009, October 1990 (revised September 1993).
Lewin, K. (1958). Group decision and social change. In E. E. Maccoby, T. M. Newcomb, & E. L. Hartley (Eds.), Readings in social psychology (pp. 197–211). New York, NY: Holt, Rinehart, and Winston.
London, M., & Beatty, R. W. (1993). 360-degree feedback as a competitive advantage. Human Resource Management, 32(2&3), 353–372.
London, M., Smither, J. W., & Adsit, D. J. (1997). Accountability: The Achilles' heel of multisource feedback. Group & Organization Management, 22(2), 162–184.
McAfee, A., & Brynjolfsson, E. (2017). Machine platform crowd: Harnessing our digital future. New York, NY: Norton.
O'Reilly, B. (1994). 360° feedback can change your life. Fortune, 130(8), 93–94, 96, 100.
Outram, C. (2013). Making your strategy work: How to go from paper to people. Harlow, England: Pearson.
Phillips, P. P., Phillips, J. J., & Zuniga, L. (2013). Measuring the success of organization development: A step-by-step guide for measuring impact and calculating ROI. Alexandria, VA: ASTD Press.
Pulakos, E. D., Mueller Hanson, R., Arad, S., & Moye, N. (2015). Performance management can be fixed: An on-the-job experiential learning approach for complex behavior change. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8, 51–76.
Shull, A. C., Church, A. H., & Burke, W. W. (2014). Something old, something new: Research findings on the practice and values of OD. OD Practitioner, 46(4), 23–30.
Smither, J. W., & London, M. (Eds.). (2009). Performance management: Putting research into practice. San Francisco, CA: Jossey-Bass.
Smither, J. W., & Walker, A. G. (2001). Measuring the impact of multisource feedback. In D. W. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback: The comprehensive resource for designing and implementing MSF processes (pp. 256–271). San Francisco, CA: Jossey-Bass.
Thomas, D. A., & Creary, S. J. (2009, August). Meeting the diversity challenge at PepsiCo: The Steve Reinemund era. Case 9-410-024. Boston, MA: Harvard Business School.
Tornow, W. W., & London, M. L. (Eds.). (1998). Maximizing the value of 360-degree feedback. San Francisco, CA: Jossey-Bass.
Waclawski, J., & Church, A. H. (2002). Introduction and overview of organization development as a data-driven approach for organizational change. In J. Waclawski & A. H. Church (Eds.), Organization development: A data-driven approach to organizational change (pp. 3–26) (SIOP Professional Practice Series). San Francisco, CA: Jossey-Bass.


SECTION III

360 METHODOLOGY AND MEASUREMENT


14

FACTORS AFFECTING THE VALIDITY OF STRATEGIC 360 FEEDBACK PROCESSES

JOHN W. FLEENOR

The concepts of validity and reliability have unique implications for Strategic 360 Feedback processes; there are challenges in defining and establishing principles of validity and reliability for these 360 implementations. Over the past 20 years, the conceptualization of validity in 360 Feedback has evolved; its validity must now be considered in the context of an ongoing process rather than as a one-​time event. This chapter focuses on the causes and effects of poorly implemented 360 systems and on how validity affects the future success of the process. A substandard implementation of 360 Feedback can negatively affect the validity of subsequent administrations. As discussed in this chapter, invalid 360 processes can have serious consequences for the organization. In 2001, Bracken, Timmreck, Fleenor, and Summers proposed a comprehensive model of 360 Feedback that addressed several fundamental questions about the process. Their model identified a number of key factors that proximally or distally influence the validity, and therefore the success, of a 360 implementation. The key factors in this model are directly related to the characteristics of a successful strategic 360 process: (a) The content is derived from the organization’s strategy and values; (b) the process is sufficiently valid and reliable to be used for decision-​making; (c) the results are integrated into talent management and leader development systems; and (d) participation is inclusive (Chapter 2).



Strategic 360 Feedback is important for measuring the "how" of performance, rather than only the "what" of performance (e.g., goal achievement) (Bracken & Church, 2013). The purpose of Strategic 360 Feedback is to create behavior change valued by the organization relative to its business strategy, cultural norms, and leadership vision (Chapter 3). The characteristics of Strategic 360 Feedback (e.g., reliability, validity, execution, and acceptance) determine the usefulness of the process. To be successful, the feedback should be used for decision-making at some level. If the feedback is not used for talent management or similar purposes, it is not a 360 process; it is just an event (Bracken, Rose, & Church, 2016). In the Bracken et al. (2001) model, the determining factor in the success of 360 Feedback is the validity of the process. The conceptualization of validity for a 360 process, however, is more complex than traditional notions of validity that arose from relatively controlled, standardized settings, such as preemployment testing. Traditional definitions specify that validity is determined by a measurement event in which an instrument is administered to individuals who respond to its items (i.e., single-source data). For example, a classic definition states, "a measuring instrument is valid if it does what it is intended to do" (Nunnally, 1978, p. 86). Such definitions, however, fail to address factors that affect the validity of a process that depends on multiple sources of data (i.e., raters). An ostensibly valid 360 process may become invalid if it is implemented poorly (e.g., the resulting data are not used appropriately in decision-making, or raters are not sufficiently familiar with the focal leader's behavior). A more suitable indicator of the validity of a 360 process, therefore, may be its usefulness (i.e., its success).
Rather than asking if 360 Feedback works, the more relevant question is under what conditions 360 Feedback is likely to be successful. The context of a 360 implementation must be considered to fully evaluate the validity of the process (Bracken & Rose, 2011). The validity of the process therefore cannot be fully known until it is implemented; then, its usefulness can be determined. Like performance appraisals and assessment centers, 360 Feedback depends on the collection of data from potentially unreliable sources (i.e., raters). It is a complex process with the characteristics of psychometric testing, large-scale data collection, and leader development. Although rater training is strongly recommended for performance appraisals, training for 360 Feedback raters is often avoided because of the effort required to train large numbers of raters. As a result, raters in 360 processes are often untrained and unaccountable for the quality of their ratings, unlike managers conducting performance appraisals. A primary characteristic of Strategic 360 Feedback is its ongoing, repeated administrations. It is a dynamic process where participants (raters, focal leaders,


managers) have experiences that affect future administrations. From a long-term perspective, a number of factors can either enhance or undermine a 360 process, and these factors may not be apparent until the second or subsequent administrations. These factors and their consequences can decrease the validity of a 360 process, resulting in a failure to create behavior change in focal leaders. As defined by Bracken et al. (2001), success is creating sustained behavior change in the organization.

VALIDITY FACTORS IN STRATEGIC 360 FEEDBACK

The factors that determine the validity of a 360 Feedback process can be organized into two primary categories, proximal and distal (Bracken et al., 2001). Proximal factors, which occur during the initial administration (Time 1), have an immediate effect on the validity of decisions that arise from the current 360 administration. Distal factors are events that occur at Time 1 whose effects are not realized until Time 2, in a subsequent administration of the 360. Some factors have effects at both Time 1 and Time 2 (i.e., dual factors). Whether proximal, distal, or dual, each factor can have consequences for the validity of the 360 process. Table 14.1 presents a summary of these factors with the associated design recommendations.

Proximal Factors

Each proximal factor can have a distal effect because of residual issues, such as a decrease in the participants' confidence in the process, which negatively affect their participation in future administrations. A poorly implemented proximal factor can have the distal effect of permanently terminating a 360 system. Primary proximal factors include alignment, accuracy, clarity, cooperation, timeliness, reliability, and insight.

Alignment

Alignment is the traditional definition of validity, that is, the extent to which the content of the feedback (e.g., competencies, behaviors) is relevant to success in the organization. If the competencies being measured are not related to success, then the process is invalid. Alignment is optimal when the strategies, values, and goals of the organization are translated into a set of competencies for the entire organization (Chapter 3). That is, the "how" side of performance must be clearly tied to business strategies and values (Bracken & Church, 2013). One of the advantages of a well-designed 360 system is that alignment can be strengthened over time. The first occasion is in the design of the 360 instrument. Design


TABLE 14.1 Strategic 360 Feedback Validity Factors With Design Recommendations

Proximal Factors

Alignment
• Custom design content
• Use internal norms
• Require meeting with raters
• Align with performance management system

Accuracy
• Capacity to do high-volume and secure reporting
• Processes to ensure zero errors
• Precode important information (e.g., demographics)

Clarity
• Clear instructions and readability
• Training sessions for providing rating instructions
• Test understanding of participants

Cooperation
• Keep length reasonable (40–60 items)
• Limit demands (number of surveys) on raters
• Communicate need for rater cooperation
• Do on company time

Timeliness
• Do as frequently as is reasonable/needed
• Train raters to avoid recency error
• Deliver results as soon as possible

Reliability
• Clear, behavioral, actionable items
• Conduct reliability analyses
• Use clearly defined anchors
• Select raters with opportunity to observe
• Train on proper use of rating scale
• Report rater groups separately

Insight
• Collect item ratings (not overall competency ratings)
• Provide as much information as possible to focal leaders
• Collect write-in comments
• Require meeting with raters

Distal Factors

Focal Leader Accountability
• Communicate expectations for focal leader
• Set consequences for noncompliance
• Require meeting with raters

Commitment
• Administer on company time
• Visible participation by top management
• Provide access to internal/external training
• Use results for decision-making

Acceptance
• Require focal leader participation
• Focal leader selects raters, agreed to by the organization
• Administer consistently across the organization
• Treat process as a business priority
• Content clearly tied to strategy and goals
• Train on how to use results
• Provide support (workshops, coaches, etc.)

Dual Factors

Consistency
• Apply consistently across the organization
• Test for possible unfairness

Anonymity
• Use outside vendor
• All direct reports; 4–6 in other rater groups
• Communicate how anonymity is ensured
• Do not report groups