The Psychology of Workplace Technology

Recent advances in technology have dramatically altered the manner in which organizations function, transforming the way people think about and perform their work. The implications of these trends continue to evolve as emerging innovations adapt to and are adapted by organizations, workers, and other components of the socio-technical systems in which they are embedded. A rigorous consideration of these implications is needed to understand, manage, and drive the reciprocal interplay between technology and the workplace. This edited volume brings together top scholars within and outside of the field of industrial-organizational (I-O) psychology to explore the psychological and organizational effects of contemporary workplace technologies. A special section is included at the end of the book by three experts in the field, entitled Reflections and Future Directions.

Michael D. Coovert joined the industrial-organizational psychology faculty at the University of South Florida (USF) and founded the Center for Psychology and Technology. His research includes human-systems integration, the impact of technology on individuals and organizations, quantitative methods, and performance measurement. Dr. Coovert has over 100 scientific publications, 175 presentations, and has directed 40 funded projects. He has received the Presidential Excellence Award from USF and also received the university's Jerome Kirvanik Distinguished Teacher Award, given once a year to its outstanding teacher. As an aviation enthusiast and pilot, Dr. Coovert can often be found in the sky.

Lori Foster Thompson is a professor of psychology at North Carolina State University, where she leads the IOTech4D lab devoted to research at the intersection of work, psychology, technology, and global development. Her scholarship focuses on how technology and industrial-organizational psychology can together enrich and improve work carried out for, with, and by people in lower-income settings, for the purpose of addressing the most pressing economic, social, and environmental challenges facing our world today. In 2010, she was appointed an EU Erasmus Mundus Scholar in Humanitarian Work Psychology. She has been inducted into North Carolina State University's Academy of Outstanding Teachers and in 2012 was named one of the university's 24 inaugural University Faculty Scholars. Besides this book, Lori has edited a new book from Routledge (2013) with Julie Olson-Buchanan and Laura Koppes Bryan entitled Using Industrial-Organizational Psychology for the Greater Good.
The Organizational Frontiers Series
Series Editor: Eduardo Salas, University of Central Florida
EDITORIAL BOARD
Tammy Allen, University of South Florida
Neal M. Ashkanasy, University of Queensland
Adrienne Colella, Tulane University
Jose Cortina, George Mason University
Lisa Finkelstein, Northern Illinois University
Gary Johns, Concordia University
Joan R. Rentsch, University of Tennessee
John Scott, APT Inc.
SIOP Organizational Frontiers Series
Series Editor: Eduardo Salas, University of Central Florida

Coovert-Thompson: (2013) The Psychology of Workplace Technology.
Highhouse-Dalal-Salas: (2013) Judgment and Decision Making at Work.
Cortina-Landis: (2013) Modern Research Methods for the Study of Behavior in Organizations.
Olson-Buchanan, Koppes Bryan, Foster Thompson: (2013) Using Industrial-Organizational Psychology for the Greater Good: Helping Those Who Help Others.
Eby-Allen: (2012) Personal Relationships: The Effect on Employee Attitudes, Behavior, and Well-being.
Goldman-Shapiro: (2012) The Psychology of Negotiations in the 21st Century Workplace: New Challenges and New Solutions.
Ferris-Treadway: (2012) Politics in Organizations: Theory and Research Considerations.
Jones: (2011) Nepotism in Organizations.
Hofmann-Frese: (2011) Error in Organizations.
Outtz: (2009) Adverse Impact: Implications for Organizational Staffing and High Stakes Selection.
Kozlowski-Salas: (2009) Learning, Training, and Development in Organizations.
Klein-Becker-Meyer: (2009) Commitment in Organizations: Accumulated Wisdom and New Directions.
Salas-Goodwin-Burke: (2009) Team Effectiveness in Complex Organizations.
Kanfer-Chen-Pritchard: (2008) Work Motivation: Past, Present and Future.
De Dreu/Gelfand: (2008) The Psychology of Conflict and Conflict Management in Organizations.
Ostroff/Judge: (2007) Perspectives on Organizational Fit.
Baum/Frese/Baron: (2007) The Psychology of Entrepreneurship.
Weekley/Ployhart: (2006) Situational Judgment Tests: Theory, Measurement and Application.
Dipboye/Colella: (2005) Discrimination at Work: The Psychological and Organizational Bases.
Griffin/O'Leary-Kelly: (2004) The Dark Side of Organizational Behavior.
Hofmann/Tetrick: (2003) Health and Safety in Organizations.
Jackson/Hitt/DeNisi: (2003) Managing Knowledge for Sustained Competitive Advantage.
Barrick/Ryan: (2003) Personality and Work.
Lord/Klimoski/Kanfer: (2002) Emotions in the Workplace.
Drasgow/Schmitt: (2002) Measuring and Analyzing Behavior in Organizations.
Feldman: (2002) Work Careers.
Zaccaro/Klimoski: (2001) The Nature of Organizational Leadership.
Rynes/Gerhart: (2000) Compensation in Organizations.
Klein/Kozlowski: (2000) Multilevel Theory, Research and Methods in Organizations.
Ilgen/Pulakos: (1999) The Changing Nature of Performance.
Earley/Erez: (1997) New Perspectives on International I-O Psychology.
Murphy: (1996) Individual Differences and Behavior in Organizations.
Guzzo/Salas: (1995) Team Effectiveness and Decision Making.
Howard: (1995) The Changing Nature of Work.
Schmitt/Borman: (1993) Personnel Selection in Organizations.
Zedeck: (1991) Work, Families and Organizations.
Schneider: (1990) Organizational Culture and Climate.
Goldstein: (1989) Training and Development in Organizations.
Campbell/Campbell: (1988) Productivity in Organizations.
Hall: (1987) Career Development in Organizations.
The Psychology of Workplace Technology

Edited by
Michael D. Coovert and Lori Foster Thompson
First published 2014 by Routledge
711 Third Avenue, New York, NY 10017

Simultaneously published in the UK by Routledge
27 Church Road, Hove, East Sussex BN3 2FA

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2014 Taylor & Francis

The right of the editors to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging in Publication Data
The psychology of workplace technology/Michael Coovert & Lori Foster Thompson [editors].
pages cm
Includes bibliographical references and index.
1. Work environment—Psychological aspects. 2. Information technology—Psychological aspects. 3. Technology—Psychological aspects. I. Coovert, Michael D., editor of compilation. II. Thompson, Lori Foster, editor of compilation.
HF5548.8.P779 2013
651′.26019—dc23

ISBN: 978-1-84872-964-3 (hbk)
ISBN: 978-0-203-73556-5 (ebk)

Typeset in Minion Pro and Optima by Florence Production Ltd, Stoodleigh, Devon, UK
To Sally, David, and Molly whose love and support continually enrich my life. MDC

To Dr. Robert A. Reeves. For believing in, encouraging, guiding, and making time for me and countless other first-generation college students wondering what to do next. LFT
Contents

Series Foreword xiii
Preface xv
About the Editors xvii
About the Contributors xix

Chapter 1 Toward a Synergistic Relationship between Psychology and Technology 1
Michael D. Coovert and Lori Foster Thompson

SECTION I Traditional Topics

Chapter 2 Technology-Based Selection 21
Alan D. Mead, Julie B. Olson-Buchanan, and Fritz Drasgow

Chapter 3 Advances in Training Technology: Meeting the Workplace Challenges of Talent Development, Deep Specialization, and Collaborative Learning 43
J. Kevin Ford and Tyler Meyer

Chapter 4 Technology and Performance Appraisal 77
James L. Farr, Joshua Fairchild, and Scott E. Cassidy

Chapter 5 Teams and Technology 99
Jonathan Miles and John R. Hollenbeck

Chapter 6 Leadership and Technology: A Love–Hate Relationship 118
Denise Potosky and Michael W. Lomax

SECTION II Human Factors

Chapter 7 Human Factors 149
Peter A. Hancock

Chapter 8 Usability Science II: Measurement 162
Douglas J. Gillan and Randolph G. Bias

SECTION III Emerging Areas

Chapter 9 Robots: The New Teammates 185
Elizabeth S. Redden, Linda R. Elliott, and Michael J. Barnes

Chapter 10 Workplace Monitoring and Surveillance Research since "1984": A Review and Agenda 209
Bradley J. Alge and S. Duane Hansen

Chapter 11 The Impact of Technology on Employee Stress, Health, and Well-Being 238
Ashley E. Nixon and Paul E. Spector

Chapter 12 Global Development through the Psychology of Workplace Technology 261
Tara S. Behrend, Alexander E. Gloss, and Lori Foster Thompson

Chapter 13 Online Social Media in the Workplace: A Conversation with Employees 284
Richard N. Landers and Andrea S. Goldberg

SECTION IV Reflections and Future Directions

Section Introduction
Michael D. Coovert and Lori Foster Thompson

Chapter 14 Looking Back, Looking Forward: Technology in the Workplace 307
Wayne F. Cascio

Chapter 15 Reflections on Technology and the Changing Nature of Work 314
Ann Howard

Chapter 16 Intersections between Technology and I/O Psychology 318
Walter C. Borman

Author Index 322
Subject Index 326
Series Foreword

Technology has changed our lives. And it has changed the world of work—dramatically. The way people feel, think, and behave at work and the manner in which organizations manage and function has, forever, changed. We live in a new technology-driven world. A new psychology of work has emerged that needs to be understood, explored, and embraced. A new psychology of work that needs new theories, ideas, paradigms, reflections, and insights about how technology influences what we do, think, and feel at work. This is the motivation of the volume—to explore the psychological and organizational effects of modern workplace technologies.

So, indeed, a much-needed volume that examines how to select, how to manage teams, how to design training, how leadership is depicted, how social media works, how robots behave as teammates, and how the role of human factors is influenced by technology. There are thoughtful nuggets in the volume for all—the scientist, the practitioner, the manager, the student, the HR executive, the engineer, to all involved in the design and management of socio-technical systems.

Bravo Mike and Lori! Thank you! What a wonderful volume you have assembled. A great multidisciplinary set of thinkers, practitioners, and scholars who have made a very valuable contribution to our science and practice. On behalf of the Editorial Board of the Organizational Frontiers Book Series—thank you again!

Eduardo Salas, Ph.D.
University of Central Florida
Series Editor
Preface

Whether they are students, practitioners, researchers, or teachers, readers of this book have one thing in common: they are interested in the rapid pace at which technology affects workers and changes the workscape. Technology is being developed that targets the worker, the tasks the worker performs, the social milieu in which work takes place, as well as the culture in which organizations exist. Our goal for this book is to provide the reader with insights from frontline researchers and practitioners and to define the important issues for workers and organizations as they navigate this dynamic world in which technology is seemingly at the foundation of all we do—both at and outside of work.

The book begins with an introductory chapter, where we discuss the fact that technology by itself is not inherently good or bad, but it is the application and the context in which it occurs that often defines its ultimate benefit or detriment. This prompts the need to consider technology's impact on workers and organizations in the midst of today's ever-changing and rapidly advancing technological landscape. Because of this escalating rate of change (and the ensuing obsolescence of today's innovations), we encouraged the authors of this book to avoid concentrating on specific devices or software but rather to address the topical content of their chapters from an enduring applied and, when possible, theoretical perspective.

Following our introductory chapter, the book is roughly divided into four sections. These are not discrete divisions, as the boundaries are fuzzy and any particular topical content may be found in more than one chapter. The first section contains five chapters addressing traditional topics from industrial-organizational psychology and management: selection, training, performance appraisal, teams, and leadership. The authors address the nature by which technology is impacting, and often changing, each of these areas.

To provide the reader a full consideration of how technology is changing work, we felt it important to present a perspective on how sociotechnical systems should be developed. Two chapters are included to this end. The first, human factors, is both traditional and somewhat philosophical as its author considers the nature of work itself and lays a foundation for sociotechnical perspectives. The second chapter provides the reader background on measurement issues in usability. Usability can be considered at the heart of technological influence, yet is often overlooked by those interested in our subject matter.

The third section considers topics on the emerging edge of psychology and workplace technology. The first chapter provides a perspective on robots in the workplace and even considers them the newest members of work teams. The next chapter offers a framework to think about how technology impacts workers' stress, health, and overall well-being. Have you ever wondered how psychologists might use technology to help work and workers in disadvantaged areas of the world? If so, see the chapter on technology-mediated I-O psychology for global development. Its authors provide several examples of the implementation of technology in lower-income settings. A final chapter in this group addresses the ever-increasing importance of social media in the workplace.

Readers of this volume are most fortunate to have three distinguished psychologists provide their perspectives on the psychology of workplace technology. The last set of chapters in this book is presented in a section titled Reflections and Future Directions. Each is a brief perspective or update on the very nature of psychology and technology.

We would like to take this opportunity to thank the authors of this book. We thoroughly enjoyed working with each and every one of them and are confident the reader will benefit from their insights.
About the Editors
Michael D. Coovert grew up in a family of nine children in central Illinois. His parents always stressed the value of education and, after a stint in the Army, he completed his undergraduate degree with a dual major in computer science and psychology at Chaminade University of Honolulu. Returning to the mainland, he obtained a master's degree from Illinois State and his doctorate from The Ohio State University. Upon leaving Ohio State, Dr. Coovert joined the industrial-organizational psychology faculty at the University of South Florida (USF) and founded the Center for Psychology and Technology. His research includes human-systems integration, the impact of technology on individuals and organizations, quantitative methods, and performance measurement. Dr. Coovert has over 100 scientific publications, 175 presentations, and has directed 40 funded projects. He has received the Presidential Excellence Award from USF and also received the university's Jerome Kirvanik Distinguished Teacher Award, given once a year to its outstanding teacher. Dr. Coovert is an elected member of the Society of Multivariate Experimental Psychology and a Fellow of the American Psychological Association (APA), the Association for Psychological Science (APS), and the Society for Industrial and Organizational Psychology (SIOP). The Department of Psychology at Illinois State University awarded Dr. Coovert Alumnus of the Year: In Recognition of Outstanding Lifetime Career Accomplishments. As an aviation enthusiast and pilot, Dr. Coovert can often be found in the sky.

Lori Foster Thompson was born in Reedsburg, Wisconsin, to parents whose wisdom, love, and kindness equipped and inspired each of their seven children in uniquely significant ways. Today, she is a professor of psychology at North Carolina State University, where she leads the IOTech4D lab devoted to research at the intersection of work, psychology, technology, and global development. Her scholarship focuses on how technology and industrial-organizational psychology can together enrich
and improve work carried out for, with, and by people in lower-income settings, for the purpose of addressing the most pressing economic, social, and environmental challenges facing our world today. Dr. Thompson's publications have taken the form of journal articles, chapters, and authored and edited books, and her work has been featured in popular media outlets such as The Wall Street Journal, Ars Technica, Fast Company, NSF Science 360, U.S. News and World Report, MSN Money, The Chronicle of Higher Education, and National Public Radio. Lori is a Fellow of the Society for Industrial and Organizational Psychology (SIOP), the American Psychological Association (APA), and the Association for Psychological Science (APS). She serves as a SIOP representative to the United Nations Economic and Social Council (ECOSOC). In 2010, she was appointed an EU Erasmus Mundus Scholar in Humanitarian Work Psychology. She has been inducted into North Carolina State University's Academy of Outstanding Teachers and in 2012 was named one of the university's 24 inaugural University Faculty Scholars. Besides this book, Lori has co-edited a new book from Routledge (2013) with Julie Olson-Buchanan and Laura Koppes Bryan, entitled Using Industrial-Organizational Psychology for the Greater Good.
About the Contributors

Bradley J. Alge earned his Ph.D. in Business (Organizational Behavior and HRM) from the Ohio State University and is currently an Associate Professor of Management at Purdue's Krannert School of Management. Professor Alge's research focuses on leadership, coordination, and control in collocated and distant relationships, with an emphasis on technology.

Michael J. Barnes is a research psychologist with the Army Research Laboratory (ARL). Currently, he manages human robotic interaction (HRI) programs for the Army. Previously, he conducted research for the Navy and GE and for the military intelligence community. He has co-authored over 80 articles and edited a book on HRI.

Tara S. Behrend is an Assistant Professor of Industrial-Organizational Psychology at The George Washington University. She conducts research on the intersection of technology and worker well-being, especially with regard to training, selection, and career preparation. She earned her Ph.D. from North Carolina State University in 2009.

Randolph G. Bias is an associate professor and director of the Information eXperience Lab in the School of Information at the University of Texas at Austin. He worked in industry for 23 years and is a Certified Human Factors Practitioner. His research focuses on human information processing and human–computer interaction.

Walter C. Borman received his Ph.D. in Industrial-Organizational Psychology from the University of California (Berkeley). He is currently Chief Scientist of Personnel Decisions Research Institutes, Inc., and is a Professor of Industrial-Organizational Psychology at the University of South Florida. He has written more than 350 books, book chapters, journal articles, and conference papers.

Wayne F. Cascio holds the Robert H. Reynolds Distinguished Chair in Global Leadership at the University of Colorado Denver. He has published 27 books and more than 145 articles and book chapters. A senior editor of the Journal of World Business, he won SHRM's Losey Award for Human Resources Research in 2010.

Scott E. Cassidy is an Industrial-Organizational psychologist and Senior Research Scientist at SRA International, where he conducts applied research and provides human capital consulting for a variety of government and private sector clients. He has presented his work at major conferences and published several journal articles and book chapters.

Fritz Drasgow is a Professor of Psychology and of Labor and Employment Relations at the University of Illinois at Urbana-Champaign. His research focuses on psychological measurement and computerized testing. His recent psychometric work examines the use of ideal point models for personality assessment. He is a former President of the Society for Industrial and Organizational Psychology and received its Distinguished Scientific Contributions Award in 2008.

Linda R. Elliott is a research psychologist at the ARL/HRED field element located at Fort Benning, Georgia. She is a member of NATO and ISO international working groups regarding haptic, tactile, and gestural technology. Currently, she supports Human-Robotic Interaction work, advanced visual and targeting capabilities, and flexible/multisensory display advanced research programs.

Joshua Fairchild is a doctoral student in Industrial-Organizational Psychology at The Pennsylvania State University. His research focuses on leadership, creativity, and technology in the workplace. He has collaborated with colleagues in a number of design and technology disciplines, and his work has been published in a variety of academic outlets.

James L. Farr is Professor of Psychology at Pennsylvania State University. A former President of the Society for Industrial and Organizational Psychology (SIOP) and a Fellow of SIOP and the American Psychological Association, he is the Co-Editor of The Handbook of Employee Selection and the author of more than 80 journal articles and book chapters.

J. Kevin Ford is a professor of psychology at Michigan State University. His major research interests involve improving training effectiveness. He is a Fellow of the American Psychological Association and the Society for Industrial and Organizational Psychology. He received his Ph.D. in psychology from The Ohio State University.

Douglas J. Gillan is Professor and Head of the Psychology Department at North Carolina State University. Following his doctorate and two postdoctoral fellowships, he has worked for 10 years in industry and 22 years in academia. His research focuses on the psychological processes underlying how people interact with technology.

Alexander E. Gloss is a doctoral student in industrial-organizational psychology at North Carolina State University. There, he is a member of the IOTech4D Lab, which is devoted to research at the intersection of work, psychology, technology, and global development. Alexander's interests stem from his time as a U.S. Peace Corps volunteer in South Africa.

Andrea S. Goldberg is an I/O psychologist with a background in Marketing, Communications, and Human Resources. She is a former IBM Vice President of Market Insights and has a certificate in Digital Media Marketing. She currently leads Digital Culture Consulting, specializing in strategy, insights, and training in social media.

Peter A. Hancock is Provost Distinguished Research Professor, Trustee Chair and Pegasus Professor in the Department of Psychology and the Institute for Simulation and Training at the University of Central Florida. His research concerns Human Factors, the study of how people co-evolve with technology.

S. Duane Hansen earned his Ph.D. in Management from Purdue University and is currently an Assistant Professor of Management and Business Ethics at his undergraduate alma mater, Weber State University. His research centers on Ethical Leadership, Corporate Social Responsibility, and Trust in the modern (technology-driven) world.

John R. Hollenbeck holds the positions of University Distinguished Professor at Michigan State University and Eli Broad Professor of Management at the Eli Broad Graduate School of Business Administration. He received his Ph.D. in Management from New York University in 1984 and is a Fellow of the Academy of Management and the American Psychological Association.

Ann Howard, prior to her retirement in 2009, was Chief Scientist for Development Dimensions International (DDI). Previously, she co-directed two longitudinal studies of managers at AT&T. The changing workplace is a frequent topic in her writings and speeches, including her SIOP Presidential address and her edited book The Changing Nature of Work.

Richard N. Landers received his Ph.D. in Industrial-Organizational Psychology from the University of Minnesota in 2009. As a professor at Old Dominion University, he runs the Technology iN Training Lab (TNTLab), where he studies how the internet can be used to improve employee selection, assessment, learning, and job performance.

Michael W. Lomax has 36+ years of executive-level experience in strategic planning, managing multimillion-dollar budgets, coaching, and mentoring. At Strategic Leadership Systems LLC, he advises organizations and teaches succession planning, leadership communication, and business efficiency assessments. Mr. Lomax earned a Master of Leadership Development from Penn State's Great Valley School of Graduate Professional Studies and a B.S. in Sociology from Saint Joseph's University.

Alan D. Mead is an Assistant Professor of Psychology at the Illinois Institute of Technology, researching and teaching I/O topics relating to psychometrics, quantitative methods, and individual differences. He has published over 60 peer-reviewed articles, book chapters, and conference presentations. Prior to teaching, he spent several years as a consultant and psychometrician. His Ph.D. was awarded by the University of Illinois at Urbana-Champaign.

Tyler Meyer is a graduate student of Psychology at Michigan State University. His major research interests involve factors that undermine and maintain high-level performance under pressure, and training interventions that buffer individuals against task and environmental factors that decrease performance.

Jonathan Miles is a doctoral candidate in the Management Department of the Eli Broad College of Business at Michigan State University. He received his MBA from Kansas State University. He conducts research on team performance, technological adaptation, and work motivation.

Ashley E. Nixon is an Assistant Professor of Human Resources Management and Organizational Behavior in the Atkinson Graduate School of Management at Willamette University. Her research interests focus on employee well-being and occupational stressors, particularly workplace aggression and conflict. She has published in several journals, including Work & Stress and Human Performance.

Julie B. Olson-Buchanan earned her Ph.D. from the University of Illinois, Urbana-Champaign and is currently a Professor and Department Chair in the Department of Management at California State University, Fresno. Her research interests include conflict and mistreatment in organizations, technology-based selection, work–life issues, and nonprofit engagement. Dr. Olson-Buchanan recently received the SIOP Award for Distinguished Service and is currently serving on SIOP's Executive Board.

Denise Potosky is Professor of Management and Organization at the Pennsylvania State University, Great Valley. Her research focuses on human resource management, global staffing, technology-facilitated assessment, and intercultural adjustment. She is a Fulbright Research Scholar and a member of the Academy of Management and the Society for Industrial and Organizational Psychology.

Elizabeth S. Redden has been the Chief of the U.S. ARL/HRED Field Element at Fort Benning, Georgia, since 1982, providing MANPRINT and human factors engineering support for all maneuver systems, including robotic interfaces. She currently serves as the U.S. National Lead for the HUM Group Land Systems Technical Cooperative Panel.

Paul E. Spector is a distinguished professor and director of the I/O Psychology and Occupational Health Psychology doctoral programs at the University of South Florida. He is Point/Counterpoint editor for Journal of Organizational Behavior, Associate Editor for Work & Stress, and is on the editorial board of Journal of Applied Psychology.
1
Toward a Synergistic Relationship between Psychology and Technology
Michael D. Coovert and Lori Foster Thompson
The world in which we live and work is truly a magnificent one. Technology's presence has grown at a most rapid rate, and technology itself is ubiquitous. Many readers of this volume will be familiar with Moore's Law, which describes the growth in the number of transistors on a chip (indicative of the chip's power) and the corresponding cost. The law predicts the number of transistors doubling every 18 months while the cost halves during the same period (a back-of-the-envelope illustration of this compounding appears below). But what does this mean in terms of helping us understand how far technology has come? Diamandis and Kotler (2012) provide a useful frame of reference, noting: "Right now a Masai warrior with a cell phone has better mobile phone capabilities than the president of the United States did twenty-five years ago. And if he's on a smart phone with access to Google, then he has better access to information than the president did just fifteen years ago" (p. 9).

If we've come so far in such a short period of time, what lies ahead? Futurist Ray Kurzweil claims that by 2029 computers will be able to deal with the full range of human intelligence and emotions and will thus be indistinguishable from people (Murray, 2012). Computational developments of this scale enable exciting possibilities, but also raise new questions and concerns. While many believe that technology enhances our quality of life, both in and out of the workplace, others worry about its detrimental effects. Is technology necessarily good? As discussed in this chapter, workplace technology can enable or oppress. I-O psychology research, theory, and practice have the potential to facilitate the former and prevent the latter, much to the benefit of workers and employers.

Psychological research and theory are essential to predicting and managing the direct influence as well as the second- and third-order effects of technology, enabling workers and employers to capitalize on
technology's potential while avoiding its perils. As workplace technology progresses, its effectiveness will only increase to the extent that its development and integration into the workplace are driven by a clear understanding of human behavior.
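To make the arithmetic of that doubling schedule concrete, the following minimal sketch (not from the chapter) computes the growth factor implied by an 18-month doubling period over the 15- and 25-year horizons used in the Diamandis and Kotler comparison above; the function name and horizons are illustrative assumptions.

```python
# Illustrative only: compound growth implied by an 18-month doubling period.
DOUBLING_PERIOD_MONTHS = 18  # assumption taken from the text above

def capability_multiple(years: float) -> float:
    """Growth factor after `years`, doubling every DOUBLING_PERIOD_MONTHS."""
    doublings = (years * 12) / DOUBLING_PERIOD_MONTHS
    return 2 ** doublings

for horizon in (15, 25):
    print(f"{horizon} years -> roughly {capability_multiple(horizon):,.0f}x")
# 15 years -> roughly 1,024x; 25 years -> roughly 104,032x
```

Even under this simplified model, a 25-year horizon spans roughly seventeen doublings, which is part of why the comparisons above feel so dramatic.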
TECHNOLOGY'S PERILS, POTENTIAL, AND THE ROLE OF PSYCHOLOGY

As children, many of us were taught that there can be more than one outcome when using any tool. Fire can be good, cooking our food and keeping us warm; but fire can also be harmful, such as when it burns us or gets out of control and destroys property. Similarly, we need to mindfully employ technology so it benefits and does not oppress us. As discussed below, electronic monitoring systems (Alge & Hansen, this volume) provide a good example of technology's coexisting promise and perils.

Another example pertains to online recruitment and selection systems. On the upside, such systems greatly increase the ability to screen large numbers of applicants based on key words or other descriptors. But they can also cause viable applicants to be overlooked, a problem with very real implications for people's lives. Unfortunately, there have been no comprehensive studies examining the false negatives that are slipping through the system because they do not have the correct format or key words in their online résumé.

Smart mobile devices as a conduit for "24/7" access to information, work, and co-workers provide yet another illustration of technology's positive and negative effects. Although constant, instantaneous access to technology and expertise has benefits, it can quickly wreak havoc with work–life balance.

Forward-thinking organizations use technology to enable their workforce, while others use it in a more oppressive fashion. The problem of oppressive technological interventions, however, is not as simple as a blatant disregard for workers' well-being by organizational leaders seeking to maximize profit and productivity at any cost. At times, leaders inadvertently adopt or implement technology in a manner that undermines workers' motivation and well-being. Psychological research and theory are needed to inform solutions to this problem.

Many applicable theories exist. To provide but one example, consider the implementation of workplace technology through the lens of self-determination theory. Self-determination theory posits that workers' self-motivation and well-being
will be enhanced when innate needs for autonomy, competence, and relatedness are satisfied, and diminished when these needs are thwarted (Ryan & Deci, 2000). Autonomy refers to the need to exercise control over one's actions—to be a causal agent in one's own life. Competence is the need to experience mastery and affect one's outcomes and surroundings. Relatedness is the need to feel interpersonally connected with others (Greguras & Diefendorff, 2009). Technology can arguably threaten or help to satisfy these three core needs, thereby affecting workers' motivation, functioning, well-being, and growth.

Electronic monitoring provides a good example. Monitoring can be beneficial, as self-initiated systems demonstrate. Systems that enable people to track their activities at work have led to increases in productivity by helping people to better understand how they are allocating their time (Osman, 2010). This understanding allows workers to reapportion their time, tasks, and activities to better accomplish work goals. Note, however, that the effectiveness of such monitoring depends on how it is implemented. It appears most effective when it is initiated and controlled by the worker as opposed to the organization (Alge & Hansen, this volume), with the latter approach arguably threatening one's freedom of action and hence a fundamental need for autonomy.

In short, even a single technological intervention can negatively or positively affect people's satisfaction, motivation, well-being, and productivity, depending on how it is designed and implemented. This underscores the need to attend to the psychology of workplace technology when innovations are adopted in organizations.

At present, one fact is clear: Technology's influence—both enabling and oppressive—is pervasive in the modern workplace. Technology's presence will only increase as it continues to provide a competitive advantage. As noted earlier, its effectiveness will only increase to the extent that its development and integration into the workplace are driven by a clear understanding of human behavior. These two considerations—technological development and integration—are worth elaborating on in turn.

With regard to technology's development, psychologists specializing in cognition, human factors, and ergonomics have a history of contributing meaningfully to technology design through science and practice designed to improve the usability and naturalness of the hardware and software solutions that workers are asked to use. But what about the second point—the one pertaining to the integration of technology, once it is developed, into jobs, work processes, and social systems? What perspectives are necessary to ensure that piece of the puzzle is adequately informed by psychological research and theory? Industrial-organizational (I-O)
psychology needs to play a role. Effective implementation of workplace technology necessitates careful attention to a host of issues ranging from "I-side" matters such as work analysis, selection, and training, to more "O-side" phenomena such as job stress and work teams. Effective implementation also generally requires the ability to understand and predict how workers react to new technologies as they are introduced. Are they "natural" and "easy to use" (Coovert, 1995; Hancock, this volume; Gillan & Bias, this volume)? Self-efficacy should also be considered, as people who feel competent to use (or learn to use) the new technology are likely to experience less anxiety when it is introduced. There is also an economic consideration: Does the technology provide users a competitive advantage in their business or personal lives? If so, the odds of it being embraced increase. Finally, one mustn't overlook or undervalue the social component of technology acceptance. If friends, co-workers, or family members are using a technology and feel we should be doing the same, the likelihood that we too will adopt it increases, although sometimes we may feel coerced into doing so. Thus, many considerations, including economic, usability, psychological, and social factors, will influence the adoption of workplace technologies. Existing theories, such as the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003), can be used and further developed to help organizations understand and optimize technology's integration into the workplace (a toy numerical illustration of this kind of model appears at the end of this section).

As noted, human factors and other areas of psychology are needed to inform technology's design, and I-O psychology is needed to inform its implementation. As technology continues to evolve, however, I-O psychology's perspective will be increasingly needed in the design of workplace technology as well. Why? Because increasingly autonomous, "smart" technologies are resulting in a paradigm shift, whereby computers are not only embedded into organizational social systems, but they are becoming "social" actors in those systems.

Consider, for example, the terms "co-worker" and "teammate." Historically, these terms implied other humans. But this may no longer be the case as co-bots (co-worker robots) are entering the workplace as team members with greater and greater frequency (Redden, Elliott, & Barnes, this volume). As many readers well know, robots started in industry as heavy lifters and still play a significant role in that capacity. But more recently they have been taking on additional work requiring greater agility and interdependence with humans, such as assembling consumer electronics and finding trapped survivors after the collapse of a man-made structure (Burke et al., 2004). As robots evolve, they are likely to become more adaptable to the work environment, with
multimodal interfaces enabling them to communicate more efficiently and effectively with human teammates, both receiving and transmitting information (Shindev et al., 2012). In effect, they will become increasingly social actors, with the potential to work with humans in a truly collaborative fashion. Research and theory in areas such as work analysis, teams, selection, training, motivation, and criterion development can aid their successful design and integration into work teams and organizations. Participation by the field of I-O psychology sooner rather than later can maximize the probability that "smart" innovations such as organizational robotics evolve in a way that promotes job satisfaction, motivation, commitment, organizational citizenship, and productivity on the part of human workers, rather than triggering undesirable effects such as stress, demotivation, and counterproductive work behaviors.
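Returning to the adoption question raised above: the Unified Theory of Acceptance and Use of Technology treats performance expectancy, effort expectancy, and social influence as predictors of behavioral intention, with facilitating conditions also shaping actual use. The sketch below is a toy, additive operationalization of that idea; the weights, ratings, and names are invented for illustration and are not the coefficients reported by Venkatesh et al. (2003).

```python
# Toy UTAUT-style model; weights and ratings are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class AdoptionRatings:
    performance_expectancy: float   # "using it will help me do my job" (1-7)
    effort_expectancy: float        # "it will be easy to use" (1-7)
    social_influence: float         # "people important to me think I should use it" (1-7)
    facilitating_conditions: float  # "support and resources are available" (1-7)

def behavioral_intention(r: AdoptionRatings) -> float:
    """Toy linear stand-in for a fitted intention model."""
    return (0.4 * r.performance_expectancy
            + 0.3 * r.effort_expectancy
            + 0.3 * r.social_influence)

def predicted_use(r: AdoptionRatings) -> float:
    """Use behavior modeled from intention plus facilitating conditions."""
    return 0.7 * behavioral_intention(r) + 0.3 * r.facilitating_conditions

new_scheduling_tool = AdoptionRatings(6.0, 3.5, 5.0, 4.0)
print(f"intention ~ {behavioral_intention(new_scheduling_tool):.1f}, "
      f"use ~ {predicted_use(new_scheduling_tool):.1f}")
```

In the actual theory, such relationships are estimated from survey data and moderated by characteristics such as age, experience, and the voluntariness of use.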
FUTURE DIRECTIONS

Given the need for a more synergistic relationship between technology and psychology, we now consider a few specific areas where I-O psychologists might turn their attention. Although this list of future directions is by no means exhaustive, it offers examples of topics that appear especially promising and/or in need of further emphasis at the time of this writing.

Digitalization

Much has been written about how technology has fundamentally changed some jobs (Howard, 1995; this volume). Today's factory worker is often engaged in assembly tasks employing high-tech equipment. The low-paid cashier is more likely to swipe a product's barcode over a laser reader than to enter a price by hand. RFID tags and readers are increasingly replacing the toll collector. The cash register has become a multi-functional device, not only receiving cash but also interfacing with banks for credit and debit transactions and supplying sales information to warehouse, supply chain, and manufacturing systems. For the commuter, technology decreases commute time, saves energy, decreases pollution, and eases frustration.

The digitalization of work has led to the increased abstraction of the job (Zuboff, 1988) while also allowing workers to visualize otherwise abstract information in entirely new ways. Consider weather forecasters who can now see rain, frontal zones, storms, and hurricanes as never before. Pilots
have access to weather radars in the cockpit and can visualize routes around threatening storms. Chemists visualize molecules. NASA brings all sorts of digital information to life through simulations and images of star clusters, nebulae, and the like, forever changing the job of the physicist, astronomer, and cosmologist as well as the public's understanding of their work. In a similar way, physicians use technology to educate and inform patients and receive real-time updates from patient monitoring systems and lab results (Coovert et al., 2012; Ducey et al., 2011).

Interfaces

We see certain themes in the area of interfaces that are also worth considering. One is termed organic, environmental, or ecological interfaces (cf. Vicente, 2002). This trend entails developing interfaces so they fit seamlessly into the entire sociotechnical system of the users. Focusing on the technology itself is not enough; we must also attend to the social system for which the technology is intended if it is to be wholly successful in aiding the worker.

Multimodal interfaces (Coovert et al., 2008; Prewett et al., 2012) will be found in the workplace with greater and greater frequency. This is due to the ever-increasing need to develop interaction modes between the user and technology that are both natural and maximally effective, thereby reducing operator workload, improving attention allocation, and eliminating or mitigating human error. The goal is to understand and expand the availability of technological displays by including three modalities: visual, haptic, and auditory. Rather than relying on a single modality (e.g., exclusively visual information conveyance), auditory and tactile devices are being developed with ever-increasing frequency, as they are deemed superior especially in situations where the visual channel is overloaded. For example, both 3-D audio and a tactile belt have been used effectively for directional cueing. It is especially important to quantify the advantages of displays and controls so they can be compared to the benefits of decreasing the physical and cognitive burden on the operator.

Another theme worth noting involves bringing the interface closer to the body. Wearable computing glasses and earpieces are examples of this trend. These are now in their early stage of development, but the potential is clearly there as they provide a direct link to the rich digital world of information without removing the user from the physical world. Workers in aviation have utilized these displays for quite some time, with the HUD (heads-up display) providing essential digital information to the pilot and
crew while they maintain complete engagement with the physical world. This blending of the physical and digital worlds will only increase due to the advantages it provides the user.

The Shrinking Distinction between Human and Computer

Remote-controlled, telepresent robots provide another example of how the workplace is changed by allowing someone who is separated from the workgroup to maintain a physical presence and synchronous interaction with other members of the group. This physical presence through the robot is significantly different from a co-worker who is present through an asynchronous e-mail exchange or a synchronous means such as phone, text, chat, or video-presence. The physical embodiment provided by the robot allows the worker to act in and upon the distant environment, and not be limited to merely observing and commenting upon it.

Other advances in the realm of robotics point toward a future comprised of machines that interact with human teammates in an autonomous fashion. Consider Kurzweil's predictions regarding the computational workers of 2029—computers indistinguishable from today's biological workers, with the full range of human intelligence, interests, and attitudes (Murray, 2012). Just as robots are expected to become more person-like, people will become more machine-like as interfaces move closer to the body. In tandem, these trends blur the distinction between humans and computers.

Connecting the human brain and nervous system directly to a computer creates innumerable possibilities, with the potential to greatly enhance what the workforce is capable of (Warwick, 2005). A similar logic applies to pharmaceutical and prosthetic innovations, which allow people to transform themselves (or to be transformed) for advantage. "For years, computers have been creeping ever nearer to our neurons. Thousands of people have become cyborgs, of a sort, for medical reasons: cochlear implants augment hearing and deep-brain stimulators treat Parkinson's," notes Pagan Kennedy (2011) in The New York Times, who goes on to predict that "within the next decade, we are likely to see a new kind of implant, designed for healthy people who want to merge with machines" (p. 24).

How will this technology affect the well-being and performance of workers, co-workers, teams, leaders, and organizations? Can it be designed and implemented in a manner that promotes rather than threatens workers' welfare? From the standpoint of self-determination theory, for example,
one might examine how these innovations affect workers' feelings of autonomy, competence, relatedness, and ultimately motivation. I-O psychology is uniquely equipped to address such areas of inquiry. The answers discovered through theory-driven research have the potential to contribute empirical, data-driven insights to otherwise emotionally charged discussions. In this manner, the science of work has an opportunity to help guide practice and policy decisions that will unfold with or without a psychological evidence base.

Ethical debates notwithstanding, it may be noted that I-O psychologists have a history of research and practice aimed at maximizing person–job fit, not only through scientific selection and placement techniques, but also by "upgrading" the worker to meet the requirements of the job. Traditionally, this has entailed a permanent change to the worker through a training and development program designed to equip personnel with the knowledge and/or skills needed to perform the job. As technology and biology merge, the possibility to equip workers not only with knowledge and skills, but also with new abilities, becomes very real. Work analysis can be used to determine the abilities needed for the job. Quasi-experimental designs, familiar to those who study and practice training evaluation, can be used to assess the success of technological upgrades to the workforce. Traditional training models operationalize "success" at different levels of analysis (e.g., reactions, learning, behavior, results). Such models may serve as a useful starting point when evaluating interventions designed to equip workers with new abilities through technological innovations. Broader evaluation frameworks, which take psychological health and well-being into account, will also be needed.

Social Media

An increasingly important component of technology concerns its capacity to enable social interactions. Early investigations on technology's social influence date back to the technological advances in coal mining, with the emergence of socio-technical systems (Goldthorpe, 1959). Today, however, the tables have turned such that technology is explicitly developed to foster social interactions. Social media is routinely employed to facilitate communication among co-workers. It also offers an opportunity for managers and executives to engage in new modes of leadership. Tweeting can provide rapid and brief communications to keep one's team informed and motivated, while technologies like blogs offer an opportunity to provide more informal and detailed comments on projects and challenges facing the team or company.
New challenges and questions accompanying the use of information gleaned from social media are being debated and addressed in legal, ethical, philosophical, and economic circles. I-O psychology has a major role to play as well. For example, psychological research can be used to help organizations determine how and whether online posts can be mined as a source of data to inform hiring and selection decisions. In addition, questions about what leadership and decision making mean in the digital age can be addressed (Askew & Coovert, 2013).

We know a critical characteristic of being a successful manager and co-worker is simply making time for the human side of business. One-on-one interactions and expressing consideration for individual staff members' development are key. Like parenting, spending time is important. But where does technology lead us? In traditional applications, technology-mediated leadership was often viewed as impersonal and cold. Perhaps this too is changing. Consider, for example, social media and how it is used to connect us. Interactions through social media might provide opportunities to express concern and caring about an employee, perhaps an adequate substitute for physical proximity. Will social media be the savior and facilitate rather than prevent genuine interpersonal interactions through instant messaging, text, chat, and video links? Will telepresence in the form of robots enable face time and direct interaction with co-workers who are not collocated? Perhaps they will, or perhaps these technologies will be viewed merely as presenting the appearance of caring, and will have a negative impact on employee relations. Time (and research) will tell.

Crowdsourcing provides a means of outsourcing jobs or tasks, not to employees of other companies, but often to the undefined public. Common examples include individuals reporting news stories, uploading photos and videos, and describing the weather and its impact. "Galaxy Zoo" is a specific application of crowdsourcing whereby members of the public voluntarily classify images of galaxies, captured using telescopes, into categories. They report their classifications back to astronomers to help them study how galaxies form and relate to one another (Baker, 2007). Not all such work is performed by uncompensated volunteers. Amazon's "Mechanical Turk" system offers a platform for paying people to complete tasks, such as filling out surveys for social scientists seeking research participants. Apps for smartphones also allow members of the public to earn money through their camera- and internet-enabled mobile devices by completing small jobs and tasks. For example, smartphone users can be paid to take and submit photos of menus at designated restaurants; the photos are later used by organizations offering services that help people order food online (Boehret, 2012).
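Crowdsourced classification efforts such as Galaxy Zoo ultimately have to combine many volunteers' judgments of the same item into a single label. A minimal sketch of one common aggregation strategy, simple majority voting with an agreement score, is shown below; the item names, category labels, and judgments are invented for illustration.

```python
# Illustrative majority-vote aggregation of crowdsourced classifications.
from collections import Counter

# Invented volunteer judgments, keyed by item identifier.
judgments = {
    "galaxy_001": ["spiral", "spiral", "elliptical", "spiral"],
    "galaxy_002": ["elliptical", "elliptical", "merger"],
}

def aggregate(labels):
    """Return the most common label and the share of volunteers who chose it."""
    winner, votes = Counter(labels).most_common(1)[0]
    return winner, votes / len(labels)

for item, labels in judgments.items():
    label, agreement = aggregate(labels)
    print(f"{item}: {label} ({agreement:.0%} agreement)")
```

Low agreement can flag items that need additional volunteer judgments or expert review, which is one reason such platforms typically collect several responses per task.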
The movement toward "open-source" principles in the development of software and other programs, tools, and technologies further illustrates the crowdsourcing philosophy. Wikipedia and the Linux operating system are two classic examples of the power of innovation developed in a public, collaborative manner without traditional regard for ownership rights and financial gain. The movement toward open-source innovation has significant implications for employers and workers. It challenges and tests some longstanding assumptions of organizational behavior and work motivation, while requiring leaders who adopt this model to embrace change and relinquish control. Whether open-source principles in particular and crowdsourcing in general will play additional roles in organizational life in the days to come remains unknown. This is an area ripe for I-O psychology research, theory, and practice.

Technology-Based Education and Training

Secondary and higher education are undergoing what might be described as a revolution. Service learning, flipped classrooms, standardized tests, and competency-based mastery are all topics that have generated a great deal of discussion. Online learning is also a topic of interest. In some cases, online education and training modules are freely available (e.g., Khan Academy on YouTube). In other cases, they are provided by professional organizations (e.g., Association for Computing Machinery; Air Safety Foundation) to their members. Online modules are likely to play a more prominent role in education and training as their quality increases and organizations weigh cost-containment factors. Organizations encouraging ongoing college education may soon have a new option available to members, such as one provided by edX, a consortium of higher learning institutions that offer the promise of a first-rate education at a cost significantly below that of attending a traditional Ivy League school. What is the best balance for organizations when choosing between different education and training formats? This question will loom large in days to come. I-O psychology is poised to provide a data-driven, evidence-based perspective on this topic.

Telepresence, virtual presence, augmented reality, and virtual reality are technology-enabled strategies with implications for training. Such technologies can take synchronous and asynchronous forms, and can be used for team training where individual members are often distributed across time zones and perhaps continents. There are also training systems that employ live, virtual, and constructive entities in combination. The goal
A Relationship between Psychology and Technology • 11 of these systems is to provide an immersive training environment with such a high degree of psychological fidelity (Schiflett et al., 2004) that the trainee is unable to distinguish the training environment from the real world. A goal of this type of training entails ensuring that individuals will not encounter something in the real world that they have not already seen in training. This offers an unprecedented advantage to people who work in high-risk fields. Without today’s technology, these and other approaches and enhancements to organizational training would be impossible. Special Populations Technology holds the promise of increasing the quality of life for all. But to do so, it must sometimes be tailored for special populations. Consider, for example, older adults. As one ages, preferences toward technological use often change. Age-related changes in ability are also important considerations for product design (Thompson & Mayhorn, 2012). Technologies are available to assist with everyday activities such as lifting tasks, decreasing the likelihood of on-the-job injury. Information display is another area of opportunity; text display size is easily manipulated—or eliminated—through the use of text-to-speech systems. Veterans comprise an additional population deserving a close look. The psychology of workplace technology can positively impact their lives not only during military service, but also as they transition to civilian roles. For example, veterans can benefit from an assessment of how the knowledge and skills developed during service to their country translate to civilian occupations. Presently, O*NET can be employed in this capacity. O*NET is the U.S. economy’s primary source of occupational information and has been described as the biggest innovation in work analysis in recent years (Morgeson & Dierdorff, 2011). At its foundation is a database containing standardized descriptions of more than 850 occupations—the activities they entail as well as the worker characteristics needed to perform those occupations effectively. The free, online availability of this information enables noteworthy innovation, including an interactive tool known as “My Next Move for Veterans,” which allows veterans to enter information about their military work (e.g., the name or code of their military occupational specialty) and then receive information about civilian careers that are similar to the work they performed in the military. Occupational classification systems such as O*NET can be further developed through database extensions and increasingly sophisticated assessment and matching algorithms. Psychologists have the opportunity to assist in
12 • Michael D. Coovert and Lori Foster Thompson this endeavor, and also employ a myriad of other skills to address issues of particular relevance to veterans. For example, the psychology of workplace technology can be used to assist workers with traumatic brain injury (a signature wound of the combatant forces) and other disabilities. It can be used to aid with assessment, skill development, manpower planning and tracking, and a variety of other areas. Other special populations worthy of consideration abound. Behrend, Gloss, and Thompson (this volume) provide a discussion of how psychology and technology can be brought together to improve the quality of life for those in developing countries. One might argue, however, that it is workers in developed countries such as the U.S. who constitute “special populations.” Indeed, more than 80 percent of the world’s population lives in developing settings (United Nations Development Programme, 2010), prompting some to refer to such areas not as the “third world,” but rather the “majority world” (Berry et al., 2011). Since the emergence of the internet, most workplace technology innovations have arguably been built with developed settings in mind, commonly presuming a technological landscape marked by personal computers (PCs), broadband connections, digital literacy, affordable internet, and often English language skills. Meanwhile, the number of mobile phone subscribers in developing countries far exceeds the number of subscribers in the minority world, prompting assertions that: The developing world is “more mobile” than the developed world. In the developed world, mobile communications have added value to legacy communication systems and have supplemented and expanded existing information flows. However, the developing world is following a different, “mobile first” development trajectory. (Kelly & Minges, 2012, p. 3)
As low-income regions of the world continue to leapfrog past expensive, inefficient legacy technologies, innovations in mobile technologies for workers may increasingly originate in poorer countries and spread from there (Navas-Sabater & D’Costa, 2012). A firm grasp of the science of work, workers, and working in low- to high-income settings will aid this transition, offering a unique opportunity for I-O psychology in general and the psychology of workplace technology in particular to contribute to poverty reduction and international development. However, with this opportunity comes challenge. Legitimate questions have been raised about the universality of our understanding of work behavior, as the field of psychology in general has been criticized as exceedingly “WEIRD”—that
is, disproportionately concerned with, generated from, and consumed in Western, Educated, Industrialized, Rich, and Democratic settings (Henrich, Heine, & Norenzayan, 2010).
READY OR NOT? Reflecting on matters internal to the field of I-O psychology, it is important to consider how prepared we are to grapple with the issues raised by the ongoing integration of technology in the global world of work. The guidelines for I-O psychology education and training include more than 20 areas of competence to be developed in masters-level I-O psychology programs. At the doctoral level, 25 competency areas are specified. Statistical methods, consumer behavior, and compensation are among the domains in which I-O psychologists are urged to develop expertise. Human factors is also listed as a competency to be developed. But no mention is made of technology. Of course, efforts to describe, explain, predict, manage, and improve technological systems in the workplace tend to occur in multidisciplinary teams. Perhaps this is why expertise pertaining to the psychology of workplace technology remains absent from I-O psychology competency models. The National Academy of Sciences has developed a set of recommendations, practices, and standards for effective human-systems integration (HSI; Pew & Mavor, 2007) worth considering. They entail taking a systems approach and using many of the methods of human factors and I-O psychology throughout the conception, prototyping, early phases, and developmental lifecycles to ensure a product that is usable by the client or general public. The U.S. military, NASA, and other large governmental agencies have adopted the HSI approach with significant success. Thus, the HSI model offers one “tried and true” framework from which the field of I-O psychology can draw when considering how to successfully deal with the impact of technology on individuals and organizations.
CONCLUDING REMARKS Given Moore’s law and the rate at which technology progresses, it will not be long before the computing trends described on these pages appear
14 • Michael D. Coovert and Lori Foster Thompson quaint at best. However, the fundamental tenets articulated at the beginning of this chapter will remain, transcending time and technological particulars. Workplace technology can enable or oppress. I-O psychology research, theory, and practice have the potential to facilitate the former and prevent the latter, much to the benefit of workers and employers. Psychological research and theory are essential to predicting and managing the direct influence as well as the second- and third-order effects of technology, enabling workers and employers to capitalize on technology’s potential while avoiding its negative consequences. Technology will continue to transform the world of work, with or without I-O psychology. It is up to us to determine whether and how the science of work will play a role in this transformation. Historically, I-O psychologists have dealt predominantly with those technologies that the marketplace has determined to be winners. Looking forward, it is imperative that I-O psychology gets ahead of the adoption curve and influences the development of technologies through sound psychological principles based on human systems integration and knowledge of social factors. In the spirit of pushing the frontiers of I-O psychology, we asked the authors of this book to avoid focusing on the utility of specific devices or applications in their chapters. Our reasoning was twofold. First, it is critical for psychology to adapt and perhaps reinvent its models and theories so the field can facilitate the successful deployment of technologies in organizations and mitigate the negative impact on individuals. Having psychology consider these issues generally and not relative to current technologies challenges and moves the field forward in important directions. The second reason we asked authors to avoid a strong emphasis on specific tools and applications is because predicting what the technological future holds from the perspective of a device or product is nearly impossible. For example, as we were working on this edited volume, a new storage medium was introduced whereby an entire book was encoded and stored using DNA (Church, Gao, & Kosuri, 2012). This development seems cutting edge today and will surely lead to increased strategies for information storage (many of which will be exploited by psychologists and organizations). But who would have foretold such a development at the time our book was proposed, and how long before this advance is just another routine application? Given technology’s rapid rate of advancement, reactive “Band-Aid” solutions to surface-level symptoms triggered by specific devices or
A Relationship between Psychology and Technology • 15 applications are likely to have a modest impact and a short shelf life. Admittedly, solutions of this nature can be practically useful for addressing problems in the near-term. However, a deeper knowledge base rooted in psychological research and theory can inform worker and organizational functioning in the days to come, once the next “disruptive technology” comes to pass. The critical issue to consider is not the technology in and of itself, but rather how to create and use psychological theory and research to manage the impact and implementation of emerging developments so the positive consequences for individuals and organizations are maximized and the negative effects are minimized. We are privileged to be living and working in this time of rapid technological innovation. This era provides many challenges and opportunities for each of us charged with understanding and managing the shifting influences of technology on our individual, social, and organizational structures. Enjoy the ride!
REFERENCES Alge, B. J., & Hansen, S. D. (2013). Workplace monitoring and surveillance research since “1984”: A review and agenda. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York, NY: Routledge Academic. Askew, K., & Coovert, M. D. (2013). Online decision making. In Y. Amichai-Hamburger (Ed.), The social net (2nd edn). Oxford, UK: Oxford University Press. Baker, B. (2007, September 3). A universe of possibilities: Astronomy project seeks Internet volunteers to help sort out the galaxies. The Boston Globe, p. D1. Behrend, T. S., Gloss, A. E., & Thompson, L. F. (2013). Global development through the psychology of workplace technology. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York, NY: Routledge Academic. Berry, J. W., Poortinga, Y. H., Breugelmans, S. M., Chasiotis, A., & Sam, D. L. (2011). Crosscultural psychology: Research and applications. Cambridge, UK: Cambridge University Press. Boehret, K. (2012, August 8). Help wanted: Moonlighters for mobile apps. Wall Street Journal. From http://online.wsj.com/article/SB1000087239639044365920457757511 2957021328.html. Retrieved December 29, 2012. Burke, J. L., Murphy, R. R., Coovert, M. D., & Riddle, D. L. (2004). Moonlight in Miami: A field study of human-robot interaction in the context of an urban search and rescue disaster response training exercise. Human-Computer Interaction, 19 (1–2), 85–116. Church, G. M., Gao, Y., & Kosuri, S. (2012, September). Next-generation digital information storage in DNA. Science, 337, 1628. Coovert, M. D. (1995). Technological changes in office jobs: What we know and what to expect. In A. Howard (Ed.), The changing nature of work (pp. 175–208). San Francisco, CA: Jossey-Bass.
16 • Michael D. Coovert and Lori Foster Thompson Coovert, M. D., Walvoord, A. A., Elliott, L. R., & Redden, E. S. (2008) A tool for the accumulation and evaluation of multimodal research. IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews, 38(6), 850–855. Coovert, S., Ducey, A., Grichanik, M., Coovert, M. D., & Nelson, R. (2012). Hey Doc, is that your Stethoscope? Increasing Engagement in Medical Education and Training with iPads. Proceedings of the ACM 2012 Conference on Computer-Supported Cooperative Work, pp. 71–74. Seattle, WA. Diamandis, P. H., & Kotler, S. (2012). Abundance: The future is better than you think. New York, NY: Free Press. Ducey, A., Grichanik, M., Coovert, M. D., Coovert, S., & Nelson, R. (2011, October). Tablet computers: A new prescription for medicine? AMA-IEEE Medical Technology Conference, Boston, MA. Gillan, D. J., & Bias, R. G. (2013). Usability science II: Measurement. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York, NY: Routledge Academic. Goldthorpe, J. H. (1959). Technical organization as a factor in supervisor-worker conflict. British Journal of Sociology, 10, 213–231. Greguras, G. J., & Diefendorff, J. M. (2009). Different fits satisfy different needs: Linking person-environment fit to employee commitment and performance using selfdetermination theory. Journal of Applied Psychology, 94, 465–477. Hancock, P. A. (2013). Human factors. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York, NY: Routledge Academic. Henrich, J., Hein, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Science, 33, 61–135. Howard, A. (Ed.) (1995). The changing nature of work. San Francisco, CA: Jossey-Bass. Howard, A. (2013). Reflections on technology and the changing nature of work. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York, NY: Routledge Academic. Kelly, T., & Minges, M. (2012). Executive summary. In 2012 Information and communications for development: Maximizing mobile (pp. 3–7). Washington, DC: International Bank for Reconstruction and Development/The World Bank. Kennedy, P. (2011, September 18). The cyborg in us all. The New York Times, p. 24. Morgeson, F. P., & Dierdorff, E. C. (2011). Work analysis: From technique to theory. In S. Zedeck (Ed.), APA handbook of industrial and organizational psychology, Vol 2: Selecting and developing members for the organization (pp. 3–41). Washington, DC: American Psychological Association. Murray, A. (2012, June 29). Man or machine? Wall Street Journal. From http://online. wsj.com/article/SB10001424052702304782404577490533504354976.html. Retrieved December 29, 2012. Navas-Sabater, J., & D’Costa, V. (2012). Preface. In 2012 Information and communications for development: Maximizing mobile (p. xiii). Washington, DC: International Bank for Reconstruction and Development/The World Bank. Osman, M. (2010). Controlling uncertainty: A review of human behavior in complex dynamic environments. Psychological Bulletin, 136(1), 65–86. Pew, R. W., & Mavor, A. S. (Eds.) (2007). Human-System Integration in the System Development Process: A New Look. Washington, DC: National Academies Press. Prewett, M. S., Elliott, L. R., Walvoord, A. G., & Coovert, M. D. (2012). A meta-analysis of vibrotactile and visual information displays for improving task performance. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and reviews, 42(1), 123–132.
A Relationship between Psychology and Technology • 17 Redden, E. S., Elliott, L. R., & Barnes, M. J. (2013). Robots: The new teammates. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York, NY: Routledge Academic. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68–78. Schiflett, S. G., Elliott, L. R., Salas, E., & Coovert M. D. (Eds.) (2004). Scaled Worlds: Development, Validation and Applications. Hants, UK: Ashgate. Shindev, I., Sun, Y., Coovert, M. D., Pavlova, J., & Lee, T. (2012) Exploration of Intention Expression for Robots, Proceedings of the Annual Conference on Human Robot Interaction, pp. 247–248, Boston, MA. Thompson, L. F., & Mayhorn, C. B. (2012). Aging workers and technology. In J. W. Hedge & W. C. Borman (Eds.), Oxford handbook of work and aging (pp. 341–361). New York, NY: Oxford University Press. United Nations Development Programme (2010). Human development report 2010: The real wealth of nations—pathways to human development. From http://hdr.undp.org/ en/reports/global/hdr2010/. Retrieved December 29, 2012. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 425–478. Vicente, K. J. (2002). Ecological interface design: Progress and challenges. Human Factors, 44, 62–78. Warwick, K. (2005). Future of computer implant technology and intelligent humanmachine systems. Studies in Health Technology and Informatics, 118, 125–131. Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. New York, NY: Basic Books.
Section I
Traditional Topics
2 Technology-Based Selection Alan D. Mead, Julie B. Olson-Buchanan, and Fritz Drasgow
INTRODUCTION Selection in organizations not only includes making critical decisions about which applicants will be hired but also encompasses many other human resource decisions, such as identifying employees for promotions and training programs. The field of I-O psychology and related disciplines have a long history of the development and pursuit of psychometrically sound selection methods that serve to reduce the likelihood of making poor selection decisions and maximize the likelihood that selected employees will fit the needs of the organization, whether it be high job performance, high attendance, or low turnover. To a large extent, the significant research attention devoted to this area is a reflection of its importance to the success of organizations. Indeed, the benefit of quality selection systems to organizational functioning (and the bottom line!) is well established. The development and advancement of technology have dramatically affected the field of selection in several respects over the past few decades. First, technology has served to facilitate the flexibility and accessibility of traditional selection assessments, allowing, for example, selection tests to be completed off-site and in remote locations (Tippins et al., 2006). Second, technology has stimulated the development and application of new measurement theory (e.g., Item Response Theory) and serves as a platform for the development of novel selection assessments (e.g., high-fidelity simulations; Drasgow & Olson-Buchanan, 1999). Third, the use of technology-based selection introduces a number of concerns that are new or different from those encountered with more traditional selection methods, such as accessibility and anxiety issues related to technology and electronic privacy concerns. As such, these concerns have been the focus of a
considerable amount of research. This chapter will focus on these major ways in which technology has affected selection, as well as identify frontiers for future research.
AUTOMATING TRADITIONAL SELECTION METHODS Automation is the use of technological systems to reduce or eliminate the role of humans in producing work products. In our experience, automation of traditional selection methods represents the preponderance of technological selection systems, although more novel selection systems continue to attract attention among researchers and practitioners (Drasgow & OlsonBuchanan, 1999; Tippins & Adler, 2011). A theme in the automation of traditional selection methods is using technology to assess more efficiently. Efficiencies can stem from allowing individuals to be more productive with their time (e.g., using video conferencing to conduct interviews instead of traveling) or from reducing or eliminating the need for human intervention (e.g., online testing replacing in-person proctored testing). Staffing organizations provides a good example of the benefits of automation. A large organization may receive 50,000 job applications or more per month. If these are paper applications, a veritable mountain of resumes will result and human resource professionals will be overwhelmed. Contrast this with an internet job application process, where job-seekers complete an online application form that is instantly uploaded to a server and possibly automatically screened for minimum qualifications. Hiring managers can be notified when applicants meet hiring criteria and, if desired, an interview can be scheduled. Job-seekers can quickly check the status of their job application by going online, which relieves a burden on the human resources department. The staffing process can be further enhanced by technology-based selection tests. When personal computers first arrived, the automation of tests originally developed for paper-and-pencil administration became popular. Such tests allowed for improved efficiency in administration and scoring and concerns about reduced validity were allayed by Mead and Drasgow’s (1993) meta-analysis findings. Although testing is known to have great benefits for organizations (Schmidt & Hunter, 1998), on-site computerized testing is still relatively expensive, requiring test administrators and a testing room, and inconvenient because of scheduling.
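To make the application-screening step described earlier in this section concrete, the sketch below shows one way an incoming record might be checked against knockout requirements. It is a minimal illustration only; the field names and cutoffs are hypothetical and do not reflect any particular applicant tracking system.

```python
# Minimal sketch of automated minimum-qualifications screening.
# Field names and cutoffs are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Application:
    applicant_id: str
    years_experience: float
    has_required_license: bool
    willing_to_relocate: bool


def meets_minimum_qualifications(app: Application) -> bool:
    """Return True if the application clears every knockout requirement."""
    return (
        app.years_experience >= 2.0
        and app.has_required_license
        and app.willing_to_relocate
    )


def screen(applications: list) -> list:
    """Return IDs of applicants to forward to the hiring manager."""
    return [a.applicant_id for a in applications if meets_minimum_qualifications(a)]


if __name__ == "__main__":
    pool = [
        Application("A001", 3.5, True, True),
        Application("A002", 1.0, True, False),
    ]
    print(screen(pool))  # -> ['A001']
```

In practice, of course, the requirements themselves would come from a job analysis, and screened-out applicants would still be retained for record-keeping and adverse-impact monitoring.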
Technology-Based Selection • 23 The staffing process can be further enhanced by unproctored Internet testing (UIT). Since the advent of the internet, many organizations have begun using UIT to improve their hiring process. After job applicants complete an application form online, they are taken to an internet test site. Here they are given a brief assessment, perhaps consisting of a personality inventory and situational judgment test. Scores are computed instantly and uploaded to a corporate server. Individuals with satisfactory scores are emailed an invitation to move forward with their job application. This may consist of an on-site interview and a confirmation test to verify the scores that were obtained online. When this form of testing was first introduced, researchers and practitioners raised validity concerns and some research has focused on the effects of verification testing, particularly for tests with correct answers. For example, Fetzer (2011) has described using CAT (discussed in the next section) to reduce average verification time to a few minutes. However, a recent meta-analysis (Beaty et al., 2011) involving over 100 test-performance correlations found no decrement in criterion-related validity for UIT as compared to proctored on-site tests. Many organizations rely on job interviews in the selection process. With rising transportation costs, globalization, and the increased use of distance management, the application of technology to the interview context offers several potential benefits. Initially, technology-based interviews required the investment in or rental of expensive videoconferencing equipment. However, today the field is replete with inexpensive or even free videoconferencing platforms. Some researchers have examined whether a change in administration medium, from face-to-face interviews to videoconferencing, affects the interview process. Straus, Miles, and Levesque (2001) found no difference in the obtained ratings between face-to-face and videoconference interviews. However, earlier studies have found there to be more negative reactions to videoconferencing in terms of interviewer (Straus et al., 2001), and interviewee reactions (Chapman, Uggerslev & Webster, 2003; Straus et al., 2001). Perhaps these negative reactions will be reduced as videoconference interviews become more commonplace.
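One way to act on the verification-testing idea discussed above is to flag candidates whose proctored confirmation score falls implausibly far below their unproctored score. The sketch below assumes a known standard error of measurement and a simple one-tailed decision rule; both values are illustrative assumptions rather than an established verification procedure.

```python
# Illustrative check of an unproctored internet test (UIT) score against a
# proctored confirmation score. The SEM and flagging rule are assumptions
# for this sketch, not a published verification method.
import math


def flag_discrepancy(uit_score: float, confirm_score: float,
                     sem: float = 3.0, z_crit: float = 1.645) -> bool:
    """Flag when the proctored score drops implausibly far below the UIT score.

    The difference between two administrations has standard error
    sqrt(2) * SEM; a one-tailed test (alpha = .05) is used because only
    score drops, which might indicate cheating, are of interest.
    """
    se_diff = math.sqrt(2.0) * sem
    z = (uit_score - confirm_score) / se_diff
    return z > z_crit


print(flag_discrepancy(52, 50))   # False: ordinary measurement error
print(flag_discrepancy(52, 38))   # True: drop larger than expected by chance
```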
TECHNOLOGY AND NOVEL SELECTION METHODS As discussed above, technology has enabled the automation of traditional selection methods, allowing for several potential advantages such as flexibility, reduced scoring time, and accessibility. However, the advancement
of technology has done more than change the mode by which we collect information from applicants or employees (Olson-Buchanan, 2001). Technology has served to stimulate the development and advancement of sophisticated measurement theory and has served as a catalyst for the creation of unique technology-based selection methods. In this section we describe some of the major theoretical advancements afforded by technology as they relate to selection, as well as the key novel selection methods. Computer-Adaptive Tests In a typical test, examinees encounter many items that are too easy, and will only be answered incorrectly by mistake, as well as items that are too hard, and can only be guessed. These items not only waste examinees' time, but mistakes and guessing decrease measurement precision. Powered by IRT-based algorithms, computer-adaptive tests (CATs) select a subset of items that are optimally difficult for each individual examinee. Compared to a fixed form, adaptive administration can substantially shorten the test with no loss of reliability. Or, if a large pool of items exists, a CAT as long as its paper-and-pencil form can significantly increase the reliability of test scores (Wainer, 2000; Weiss, 1982). Thus CATs are good solutions when candidate testing time must be minimized or reliability maximized. A CAT starts with some initial guess about the examinee's ability (designated by the Greek letter theta with a caret, θ̂), typically the mean ability of the applicant population. Two steps are then repeated: (1) using IRT, an item is selected from among the most informative items given the current ability estimate, θ̂; and (2) the item is administered and θ̂ is updated. Note that θ̂ increases following correct answers and decreases following incorrect answers. The CAT stops when it reaches a preset number of items, a preset level of precision, a time limit, or a similar stopping rule. (A brief illustrative sketch of this loop appears at the end of this section.) Given a large pool of items calibrated using IRT, a CAT substantially shorter than a fixed test can produce more reliable scores (Weiss, 1982). Although computer adaptive testing (CAT) was developed in the context of ability tests (cf. Segall & Moreno, 1999; Zickar et al., 1999), studies have since demonstrated the effectiveness of CAT for measuring attitudes and personality, suggesting that an adaptive assessment half as long can achieve comparable reliabilities (Waller & Reise, 1989; Stark et al., 2012). The most recent developments in adaptive testing have sought to apply multidimensional models to personality and attitude measures that have correlated factorial structures. Multidimensional CAT (MCAT; Segall, 1996) and bifactor CAT (BFCAT; Weiss & Gibbons, 2007) models provide a better basis
Technology-Based Selection • 25 for adaptively administering these assessments by leveraging the correlation among traits to improve overall measurement. For example, an introverted response suggests that the test-taker is also conscientious. In one simulation study of the adaptive administration of the 16PF Questionnaire (Mead et al., 1997), a unidimensional CAT allowed a reduction of test length of about 25 percent with just a slight loss of reliability. However, by using MCAT, test length was reduced to about 50 percent with a similar, small loss of reliability. In another example, Weiss and Gibbons (2007) found that they could reduce a 615-item personality instrument by about 80 percent with a slight loss of reliability by using a BFCAT. As such, substantially shorter personality measures (e.g., 120+ items vs. 615 items) should be much more appealing to organizations for a number of reasons including reduced test-taker fatigue and enhanced applicant reactions. As technology has advanced and personal computers (or now tablets, etc.) have become ubiquitous, hardware costs have decreased and adaptive testing may provide even more added value (Wainer & Eignor, 2000). A subtle advantage of implementing adaptive testing is the psychometric groundwork that must be laid for the test by itself ensures that the measurement quality of the assessment items is high. High-Fidelity Simulations Traditional multiple-choice items used to assess cognitive abilities are sometimes viewed with skepticism in the selection context because they are so different from how people solve real-world problems: They assess granules of knowledge rather than measure problem-solving skills. Further, some individual differences, such as leadership ability and interpersonal skills, may be particularly hard to measure using this format. Although faceto-face role-plays and other assessment center exercises allow for rich evaluations of individual differences, they can also be cost-prohibitive, inconvenient, and difficult to standardize. The advancement of technology introduces novel alternatives for such rich simulations. Video assessments using either actors or avatars may increase criterion and construct validity (Hunter & Hunter, 1984; Roth, Bobko, & McFarland, 2005), reduce adverse impact (Olson-Buchanan et al., 1998; Reilly & Chao, 1982; Schmitt & Mills, 2001), and improve applicant reactions (Schmitt & Chan, 1999). As personal computers became powerful enough to support rich media, a wide range of assessments have been developed (Olson-Buchanan & Drasgow, 2006). IBM’s Workplace Situations Test (e.g., Desmarais et al., 1994) was developed for selecting manufacturing employees and includes
high-fidelity interactive video/audio workplace scenarios derived from critical incidents. The Federal Aviation Administration's Computer-Based Assessment Measure is used for selecting air traffic controllers and requires assessees to respond to realistic scenarios via a simulated radar screen (e.g., Hanson et al., 1999). Researchers at Illinois (e.g., Olson-Buchanan et al., 1998) developed and validated the Conflict Resolution Assessment, which is an interactive video assessment for measuring conflict management skills. Most recently, Oostrom et al. (2011) developed a webcam assessment of interpersonally oriented leadership in which test-takers are shown video clips (from the participants' perspective) and their responses are recorded via a webcam. Although this assessment has not yet been validated in the field, it was shown to be related to student behavior in an academic setting. While there is little question that rich media allows the efficient measurement of novel skills, relatively little research has investigated the role of fidelity in assessment quality. Brehmer (2004) argued that simulations are always simplifications of a complex reality and emphasized the role of theory in identifying the key elements (knowledge, skills, behavior, etc.) that must be included in the simulation. Coovert and Riddle (2004) presented a method based upon rough sets theory for empirically evaluating the fidelity of an extant simulation. Discussions of the role of fidelity in training (Gray, 2002; Kozlowski & DeShon, 2004) emphasize two themes. The first is fidelity to the research or practical goal; for example, new learners can start with simple simulations but subsequently use simulations of increasing fidelity and culminate in competent learners training on very high-fidelity simulations in order to facilitate the transfer of training to actual job duties. The second theme involves distinguishing psychological from physical fidelity and emphasizes the importance of psychological fidelity; while some simulations may sacrifice physical fidelity, all (useful) simulations must include "the essential underlying psychological processes relevant to key performance characteristics in the real-world setting" (Kozlowski & DeShon, 2004, p. 76). In designing simulations for selection testing, it is important to match psychological fidelity to the situation. In hiring experts (e.g., pilots trained to operate a particular aircraft), a very high level of fidelity (psychological and physical) would be appropriate, whereas a simpler simulation might be more appropriate for hiring pilots who will subsequently be trained to fly a particular aircraft. Also, while training is always spread over a relatively long span of time, selection usually occurs at one time point and often needs to be done quickly. Therefore, in a selection context, time spent learning how
Technology-Based Selection • 27 to perform in the simulation (the background, any rules or procedures, the interface, etc.) must be minimized because that time is not being used to gather information about the desired candidate job behaviors, knowledge, and skills. Finally, while higher levels of psychological fidelity should entail higher criterion-related validity, the shape of the fidelity–validity relationship is unknown because very few studies directly compare levels of fidelity. One early study of simulation fidelity for training (Weitz & Adler, 1973) compared physical fidelity conditions and found a non-significant trend towards better performance for lower physical fidelity. We know that simulations with very little physical fidelity can have useful criterionrelated validity (Motowidlo, Dunnette, & Carter, 1990) and that meta-analytic estimates of the predictive validity of work samples vary from .34 (Roth, Bobko, & McFarland, 2005) to .54 (Hunter & Hunter, 1984) and thus we would expect that the fidelity–validity relationship has an upper asymptote below 0.60 and that physical fidelity is not required in order to have substantial validity. If greater physical or psychological fidelity entails greater costs, then it could be the case that the utility of a selection simulation might be optimal at relatively low levels of physical fidelity and less than perfect psychological fidelity. Of course, utility also hinges upon the dollar value of the workers’ performance, individual differences in performance, and the cost of performance failures. Novel Item Types for Computerized Testing Technology has also served to ignite the creation and use of several new item types used in the selection context (Parshall, Davey & Pashley, 2002). For example, Microsoft certification exams contain a drop-and-connect item type where examinees can build schematics to answer applied problems. Based on the comparability literature (e.g., Mead & Drasgow, 1993) such items are unlikely to change the construct being measured but little research has examined the reliability, adverse impact or applicant effects of these novel item types. Jodoin (2003) compared drop-and-connect and another innovative item format to traditional multiple-choice and found that both innovative item types provided more information than multiple choice; however, they also took longer and multiple-choice were superior in terms of information per unit of time spent responding. Another novel item type is a technology-based fill in the blank (FITB). Although the idea of an open-ended response is not new to the field of testing in general, historically it was not used in high-volume paper-and-pencil
28 • Alan D. Mead et al. selection tests because written responses could not be automatically scored and manual scoring presented problems with reliability. However, for items with well-defined responses, computerized tests can easily record and score typed responses. Such items may be attractive alternatives to multiple-choice because they virtually eliminate guessing. In one study that compared multiple-choice items with identical items formatted as FITB items, on average, the items became much more difficult (as difficult as 5 percent correct) but the average item quality (i.e., corrected item-total correlation) increased substantially (Mead, 2002). Finally, another novel response type facilitated by technology is the constructed response, which refers to any item where the applicant can “create” an answer. In an early example, Bejar (1991) created an assessment task for an architect exam that required candidates to design a house and landscaping to achieve certain requirements, including drainage. Scoring this exam represented a formidable task and Bejar utilized a number of clever techniques. For example, drainage was assessed by computer simulated rainfall; credit was awarded if no water pooled on the property. Another type of constructed response is a short written answer (e.g., an “essay”) and considerable effort has been devoted to automating the grading of such items using natural language processing and some automated scoring systems agree with human graders as well as two human graders agree with each other (Burstein, 2003; Elliot, 2003; Landauer, Laham, & Foltz, 2003; Page, 2003).
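Before turning to fairness issues, the adaptive loop described at the start of this section can be illustrated with a toy example. The sketch below assumes a two-parameter logistic (2PL) IRT model, maximum-information item selection, expected a posteriori (EAP) scoring on a grid, and a fabricated item pool; an operational CAT would add calibrated parameters, exposure control, and content balancing.

```python
# Toy computer-adaptive test: 2PL IRT, maximum-information item selection,
# and EAP ability estimation on a grid. Item parameters are fabricated for
# illustration only.
import math
import random

GRID = [g / 10.0 for g in range(-40, 41)]  # theta grid from -4 to 4


def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))


def information(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)


def eap(responses):
    """Expected a posteriori theta given (item, score) pairs, N(0, 1) prior."""
    posterior = []
    for t in GRID:
        weight = math.exp(-0.5 * t * t)  # unnormalized standard normal prior
        for (a, b), u in responses:
            p = p_correct(t, a, b)
            weight *= p if u == 1 else (1.0 - p)
        posterior.append(weight)
    total = sum(posterior)
    return sum(t * w for t, w in zip(GRID, posterior)) / total


def run_cat(pool, true_theta, max_items=10):
    theta_hat, responses, remaining = 0.0, [], list(pool)
    for _ in range(max_items):                       # fixed-length stopping rule
        item = max(remaining, key=lambda ab: information(theta_hat, *ab))
        remaining.remove(item)
        u = 1 if random.random() < p_correct(true_theta, *item) else 0
        responses.append((item, u))
        theta_hat = eap(responses)                    # update after each response
    return theta_hat


random.seed(1)
pool = [(round(random.uniform(0.8, 2.0), 2), round(random.uniform(-2.5, 2.5), 2))
        for _ in range(50)]                           # (a, b) parameters
print(round(run_cat(pool, true_theta=1.0), 2))
```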
TECHNOLOGY AND FAIRNESS ISSUES Although technology has enabled the automation of traditional selection methods and the development of novel selection tools, it also introduces several unique concerns such as technology accessibility as a possible factor in test scores and increased vulnerability to cheating. In this section we introduce and discuss some of the main technology-based fairness concerns in the selection context. Accessibility As prolific as technology has become in our society and workplace, it is not universally accessible and accessibility to technology may serve as an unfair disadvantage or source of contamination in the selection measure.
Technology-Based Selection • 29 For example, recruiting exclusively online or conducting selection procedures online (or in part online) may place those with limited access to the internet at a disadvantage. A recent survey (Smith, 2010) found that 21 percent of adults do not use the internet and 20 percent of these individuals lack the computer skills to do so. Overall 66 percent of Americans have high-speed internet access at home but only 56 percent of African-Americans have broadband access at home. However, it is important to note that in 2009 the figure for African-Americans was 46 percent, so about half the gap was reduced during 2010 and, as such, this gap may lessen or disappear in the coming years. Older adults with lower income, less education, and those living in rural locations were less likely to have broadband access as well. Fox (2011) found individuals with disabilities to be less likely to have internet access even after controlling for other demographic issues. In this study, respondents were asked to describe problems that non-broadband users suffer due to their lack of a high-speed connection. Concerns about job opportunities or gaining new career skills were the foremost concern, with 66 percent describing a lack of broadband as a disadvantage. There are efforts to increase accessibility to the internet through public services. For example, a 2007 survey of public libraries (Bertot, McClure, & Jaeger, 2008) indicated 73 percent report being the only provider of free internet access in their community and providing access to job seekers was viewed by librarians as their second most important service. This same survey found that libraries were often straining to keep up with demand; the average number of terminals was 10.7, although over half of libraries surveyed also provided free public wireless for patrons with portable computers. Clearly these publicly available access points should help address some of the accessibility gap, but it is not clear if this is enough access to offset a lack of home access. Further, efforts to address some accessibility-related issues may introduce additional fairness concerns. For example, concern about accessibility issues online for disabled individuals has risen to a level where the Department of Justice is considering whether it should establish web accessibility requirements for entities covered by the Americans with Disabilities Act. If such standards are established, they would likely cover many online selection systems and it is unclear how accessibility features affect test standardization (e.g., by ensuring that the end-user controls aspects such as magnification or timing) or security (e.g., by forcing content to be easily accessible and possibly easily copied) (www.ada.gov/anprm 2010/factsht_web_anrpm_2010.htm).
30 • Alan D. Mead et al. These surveys suggest that while most American adults have some access to the internet and at least some basic computer skills, a sizable proportion still lack these advantages and they are disproportionately likely to be lowincome, less well educated, older, African-American, or disabled. Trends suggest that this “digital divide” is disappearing, but slowly (Smith, 2010). The presence of a divide raises the potential for ethical and fairness issues for technological innovations (Tippins et al., 2006). Universal solutions to address these issues are not immediately apparent as mitigation strategies would depend on the specific problem and applicant population. For example, if some members of the applicant population lack convenient internet access, a business may continue to accept paper applications or may provide computer terminals that can be used on-site to apply for positions. Also, it is probably best practice to provide tutorials or sample exams to applicants prior to assessment, so that they may familiarize themselves with the interface; this is particularly important when any aspect of navigation or response requires learning. Standardization, Cheating, and Identity Verification Another concern about technology-based selection stems from one of its benefits—the accessibility and flexibility of being able to offer assessments online, at kiosks, or in remote locations. This increased accessibility means such assessments are increasingly likely to be unproctored and may suffer from a lack of standardization or another source of contamination. The three problem areas are: poor standardization; cheating on the assessment; and an inability to verify the identity of the examinee. Poor standardization may occur because the physical and psychological context of the test administration is not controlled. For example, unlike manual selection methods where the selection measure is administered under standard conditions typically during regular work hours, the examinees taking a technology-based assessment may choose to complete the assessment late at night after family and personal responsibilities have been met, in which case fatigue may be a factor. Or, examinees might try to complete the assessment during the day when children, phone calls, and many other interruptions might interfere with test-taking. Buchanan (e.g., Buchanan & Smith, 1999) suggested that due to base rates of alcohol and substance abuse, a portion of respondents completing an assessment from home during off-work hours will be under the influence. Other standardization issues arise from the nature of current computer systems. Screen size, resolution, and display technology can vary widely. A “screenful”
Technology-Based Selection • 31 on one computer may require considerable scrolling on another and differences in display technology, such as CRT versus LCD screens, can cause images, fonts, font effects, and colors to appear differently (Weiss, 2007). At first blush, the solution to this concern is not readily apparent, short of restricting when and where an online assessment can be completed (thereby reducing accessibility). However, perhaps future programming will allow for automatic adjustments that account for varying screens and display technology or perhaps organizations may choose to use authorized centers (such as what is sometimes done with testing for online classes). A related concern is cheating on technology-based selection tools. Cheating can take many forms. In UIT, for example, it may be impossible to prevent examinees from using calculators, searching the internet for answers, or inviting several helpers to aid in completing the exam. Also, the items themselves may be copied and shared through e-mail or braindump sites. Smith (2004) found that even proctored exams are easily and frequently compromised, with virtually the entire pool of one IT certification exam appearing online at a braindump site within eight months. Unfortunately, cheating is not a unitary problem that can be easily fixed. One proffered solution is the use of “on-the-fly” item generation (discussed in a later section). This approach may thwart examinees from stealing items and posting them to braindump sites, but does nothing to verify that the UIT is being completed by the job applicant (as opposed to his or her smarter friend). Perhaps the best solution is yet to be developed in future technology (e.g., streaming wide-angle video and disabling other computer or electronic devices) that can serve to enable the verification of the test-taker’s identity, as well as ensure that the test-taker is unable to communicate with others or access resources during the assessment. Equivalency and Comparability Another fairness concern about technology-based selection is whether technology-based versions of selection methods measure the same thing as their manual counterparts. And even if they do, are the scores affected by the automation (e.g., slightly higher or lower or more dispersed)? The short answer is that it depends. Meta-analytic reviews of cognitive (Mead & Drasgow, 1993) and non-cognitive (Mead & Blitz, 2003) measures have found that the rank ordering of examinees is highly similar across paper and computerized versions except when the test is speeded. Mead and Drasgow (1993) reported a meta-analytic true-score correlation of .97 for
power tests but only .72 for speeded tests, suggesting that while the rank ordering of examinees is similar on speeded tests, about 50 percent of the variability in scores is caused by the computerization (differences in readability, the psychomotor aspects of responding [Boyle, 1984], etc.). The other aspect of this issue is the degree to which scores on the technology-driven and manual versions can be compared directly without adjustment. Clearly, if the two versions are measuring different things (as may be the case with speeded measures), no degree of adjustment will make the two forms perfectly comparable. However, in other instances, a small adjustment may be needed. For example, Richman and her colleagues (Richman, Kiesler, Weisband, & Drasgow, 1999) meta-analyzed social desirability distortion in computerized questionnaires, traditional questionnaires, and interviews and found an overall mean d of .02 (indicating essentially no effect), but in 92 effects comparing computerized to face-to-face interviews d = –.19, indicating that people produce more socially desirable responding in face-to-face interviews.
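For readers less familiar with the corrected coefficients reported in these meta-analyses, the true-score (disattenuated) correlation divides the observed cross-mode correlation by the square root of the product of the two reliabilities. A minimal sketch follows; the numbers are made up for illustration and are not values from the studies cited above.

```python
# Correction for attenuation: estimated true-score correlation from an
# observed correlation and the reliabilities of the two measures.
import math


def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    return r_observed / math.sqrt(rel_x * rel_y)


# An observed cross-mode correlation of .78 between paper and computer
# forms that each have reliability .85 implies a true-score correlation
# of about .92.
print(round(disattenuate(0.78, 0.85, 0.85), 2))  # 0.92
```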
TEST AND COMPUTER ANXIETY An early concern raised in the literature focused on computer anxiety and whether it might serve as a source of contamination in an applicant's performance. Research on computer anxiety conducted by the Educational Testing Service as part of the launch of the GRE CAT suggested that computer anxiety is not a serious problem in the populations who take exams such as the SAT and GRE (Wainer, 2000), and managerial respondents actually preferred a multimedia conflict resolution test (Richman-Hirsch, Olson-Buchanan, & Drasgow, 2000). However, younger adults are generally more likely to use computers, so it is not clear that this finding is generalizable to the employment context. Unfortunately, less research on computer anxiety has been conducted in other populations, and a recent national survey (Smith, 2010) suggests that a minority of adults report a lack of adequate computer skills. Thus, similar to the issues raised with respect to access to technology, it is not clear whether computer anxiety might be an important factor in performance on technology-based selection methods, particularly in populations that have less access to technology. Further, virtually no research has examined anxiety as it relates to other types of technology-based selection methods (e.g., high-fidelity simulations) and, as such, the potential for technology anxiety to contaminate these selection measures is unclear.
FUTURE DIRECTIONS Despite the proliferation of technology devices over the past several decades and the corresponding focus on technology-based selection methods in the research literature, there are a number of important—arguably critical— areas of needed research. As the discussion above attests, most of these areas relate to fairness issues. For example, a critical concern from a social justice perspective is whether the uneven access to technology serves to attenuate applicants’ (from limited technology access populations) performance on technology-based selection measures, thereby unfairly limiting employment and advancement opportunities. If so, a related critical question is whether and how this issue might be addressed in the employment context. Also, several concerns stemming from unproctored selection methods, such as reduced standardization and various forms of cheating, must be fully addressed in the research literature. Without careful research attention and evidence-based solutions, if needed, the integrity of technology-based selection, particularly internet-based assessments, is called into question. Perhaps the most significant frontier of future technology-based research is how current and future technological developments will further shape the development and application of the selection methods used in organizations. To a large extent, the novel selection methods to date have been determined by the current technological platform. As such, how will computing and connectivity be different in five or ten years? But, in the same way that a computer-user would recognize computers from 10 or 20 years ago (and maybe even recognize modems!), we predict that computers in five or ten years will still be computers, as we understand them today (if smaller, more portable, and offering even better connectivity). In this last section we discuss what we consider to be the primary areas of future research—or the areas in which the power of the state-of-the-art technological advancements will likely be harnessed to advance selection.
ON-THE-FLY ITEM GENERATION Administration and scoring of assessments have been profoundly impacted by technology, but the creation of assessments and assessment content have been little changed. For the most part, the guidelines available today have
34 • Alan D. Mead et al. been available for decades (Haladyna & Downing, 1989). Yet, we also know that the development of assessment content is inefficient. Depending on assessment type, item type, item writer proficiency, etc. somewhere between 20 percent and 80 percent of items are typically discarded during the development process (Henryssen, 1971) because the items were flawed. If we could engineer items with greater precision, we could generate assessments during administration, reducing costs and removing one opportunity for cheating. By far, most of the literature addressing automated exam creation focuses on two issues: (1) generating exam items with (2) known difficulty (Embretson, 1999; Irvine & Kyllonen, 2002). As an example of a relatively successful approach, Arendasy and Sommer (2007) were able to identify elements that could be used to generate algebra word problems from templates. Three “radicals” explained 75 percent of the variance in item difficulty in the generated items and a confirmatory factor analysis supported the construct validity of the algebra word problems. In another example, GRE candidates were given an experimental, “on-the-fly” quantitative reasoning test (Bejar et al., 2003). The correlation between the experimental test and the operational quantitative reasoning score was 0.87, which was as high as typical of test-retest quantitative reasoning correlations. However, problems remain. Methods vary markedly in terms of the success at generating plausible items, particularly in non-math domains. For example, in studies of generating Cloze items for ESL testing, only a highly constrained item type (e.g., “Will you go __ the movie with me?”) generated mostly plausible items (Lee & Seneff, 2007). Performance for more general ESL item generation has been found to only produce 60–66 percent plausible items (Liu, Wang, & Gao, 2005; Sumita, Sugaya, & Yamamoto, 2005; Pino, Heilman, & Eskenazi, 2008). These studies have not attempted to control the item characteristics of the generated items. One impediment to successful item generation may be the traditional multiple-choice format. Although multiple-choice has been successful for hand-crafted exams, it may be a poor choice for automatically generated exams because the difficulty of multiple-choice items depends upon the stem but also an interaction between the correct answer and the incorrect answers. Researchers have examined several alternative formats. For example, in the multiple-true-false (MTF) item format (Masters, 2010), a series of propositions are listed, just like response options of a multiplechoice item, but each proposition is marked as True or False independently. Thus, this format removes a serious problem of ensuring that there is only
Technology-Based Selection • 35 one correct answer, as well as any need to control similarity between foils and distractors. Another attractive format for some computerized exams is the fill-inthe-blank or short answer item type, where the examinee must type a word, numerical result, or some other very specific short alpha or numeric string into a blank field to answer the question. This item type is attractive because the distractors do not have to be generated, so the item is simpler. Another advantage is that such items tend to be uniformly quite difficult (Mead, 2002); if all such items are quite difficult and do not vary much in their difficulty, it may be relatively easy to predict their difficulty. A disadvantage is that it may be hard to identify all the possible correct responses (including typos and misspellings) prior to administering the item (see Vale, 1978).
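The scoring difficulty just noted, anticipating typos and misspellings, can be eased by normalizing responses and tolerating small differences from the keyed answer. The sketch below is one illustrative approach with a hypothetical answer key and an arbitrary similarity threshold; it is not a published scoring rule, and a constrained key such as this also simplifies automatic generation because no distractors need to be produced.

```python
# Minimal scorer for a fill-in-the-blank item. The keyed answer and the
# similarity threshold are illustrative choices, not a standard.
from difflib import SequenceMatcher

KEY = {"accommodate"}  # keyed answer(s), listed in advance


def normalize(text: str) -> str:
    """Lowercase and strip everything except letters and digits."""
    return "".join(ch for ch in text.strip().lower() if ch.isalnum())


def score_response(response: str, key=KEY, tolerance: float = 0.85) -> int:
    """Return 1 if the normalized response matches, or nearly matches, a keyed answer."""
    resp = normalize(response)
    for keyed in key:
        keyed_norm = normalize(keyed)
        if resp == keyed_norm:
            return 1
        if SequenceMatcher(None, resp, keyed_norm).ratio() >= tolerance:
            return 1
    return 0


print(score_response("Accommodate "))   # 1: exact match after normalization
print(score_response("acommodate"))     # 1: close misspelling
print(score_response("tolerate"))       # 0: wrong answer
```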
ALTERNATIVE COMPUTING DEVICES We considered entitling this section with Apple’s trademarked “There’s an app for that!” slogan but computing trends change quickly and audiences in just a few years might not understand the reference. However, movement to smaller, highly-mobile devices and towards accessing the internet wirelessly seem to be well entrenched trends that will accelerate in future years. Today, two classes of mobile devices predominate: smartphones and tablet computers. Smartphones are small, multipurpose computers that happen to also make phone calls (or not, in the case of the Apple iTouch, which has removed the phone circuitry). These devices are the convergence of cellular phones and personal digital assistants (PDAs). Little research has been conducted using smartphones, but PDAs have been used for experience-sampling. For example, Stein (2010) distributed PDA computers to employees who were then cued to record perceptions of injustice immediately (rather than relying on after-the-fact recollections). In terms of selection, Schroeders and Wilhelm (2010) compared three types of reasoning items administered on paper, PDA and laptop computers. Their structural model of the covariances suggested a modest method factor, but they also found the PDA version to be consistently hardest. In both of these studies, researchers furnished PDAs to the participants; we found no research that relied upon the varied hardware and operating systems found “in the wild” on consumer devices.
Tablet computers can be seen as a convergence of the "smart" and web-oriented features of smartphones with portable computers. Tablets afford much larger screens than smartphones, while still being very portable. In comparison to small laptop computers, these devices feature touch-sensitive screens that substitute finger gestures for mousing and an "on-screen" keyboard that replaces a separate, physical keyboard (reducing the size and weight of the device). Overton and his colleagues reported an early study of using a tablet computer as a replacement for paper-and-pencil tests or "traditional" computers (Overton et al., 1996). Their results showed a high degree of comparability for scores on the tablet and traditional computers for power tests (disattenuated r = 0.90 to 1.0) but mixed results for two perceptual speed and accuracy tests (disattenuated r = 0.81 to 0.98), and there was little indication that the tablet computers were more "paper-like" than the traditional computerized tests. In many ways, smartphones and tablet computers are simply small computers, differing less in their applications than in screen size and user interface. As general computing drifts toward a web-centric mode, it seems inevitable that applicants will increasingly complete online applications and selection tests using tablets and smartphones. Future research should examine the effects on accessibility and comparability.
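The disattenuated correlations cited above are observed correlations corrected for unreliability in both measures (Spearman's classic correction for attenuation). A minimal sketch of that computation follows; the observed correlation and reliabilities are made up purely for illustration and are not values from the Overton et al. (1996) study.

```python
from math import sqrt

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for unreliability in both measures:
    r_true = r_observed / sqrt(rel_x * rel_y)."""
    if not (0 < rel_x <= 1 and 0 < rel_y <= 1):
        raise ValueError("reliabilities must fall in (0, 1]")
    return r_observed / sqrt(rel_x * rel_y)

# Hypothetical values: an observed tablet/traditional correlation of .76 with
# score reliabilities of .85 and .80 disattenuates to roughly .92.
print(round(disattenuate(0.76, 0.85, 0.80), 2))  # 0.92
```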
SOCIAL NETWORKS

Another key area for future research is the use of social media (e.g., MySpace, Facebook, etc.) or search engines (e.g., Google) as a selection method. With a plethora of information readily and easily accessible, more managers and organizations are consulting this information during the selection process. According to a 2009 CareerBuilder survey of 2,600 hiring managers, nearly 50 percent of the managers surveyed reported using social media/search engines to gather information about potential candidates before offering them an interview or making an offer (CareerBuilder.com, n.d.). This practice carries significant risk for the employer for a number of reasons, including: 1) information posted on the internet is easily falsified, and relying on such information to make an employment decision (particularly when no attempt is made to verify the information) carries the risk of defamation of character claims; and 2) social
media information, in particular, is likely to reveal information about a candidate's demographic characteristics such as religion, age, ethnicity, sexual orientation, disabilities, etc., presenting risks of discrimination or discrimination claims. It is not clear whether managers are conducting such checks informally (outside the purview of organizational policy) or whether the organization has established a formal process for doing so. Granted, a more formal practice would allow certain safeguards to be installed; however, the reliability and utility of such a practice remain to be established. Interestingly, a cottage industry is developing in which providers generate filler material about an individual so that negative information is buried toward the back of an internet search, or remove unwanted or incorrect information (e.g., Fertik, 2007). Future research in this area is likely to focus on whether such searches can yield reliable, predictive information about future performance and, if not, how organizations can effectively prevent managers from informally engaging in this practice.
CONCLUSION

The automation of selection systems has proceeded at a rapid pace, perhaps driven more often by efficiencies than by opportunities to substantially improve upon traditional selection methods. Yet those opportunities exist, and some organizations are taking advantage of CAT, high-fidelity simulations, and other innovations. Designers of technologically advanced selection systems should consider issues such as accessibility, security, comparability, and examinees' comfort with computerized assessments. We discussed future directions in "on-the-fly" item generation, alternative computing devices, and social networks, but we see this as a broad frontier, ripe for applied psychological researchers. Even the automation of traditional assessments has often escaped the attention of organizational researchers. We know that many resumes are screened by automated algorithms, but the HR literature lacks research comparing algorithms and comparing human judges with automated algorithms. We know very little about the fairness, reliability, and validity of various alternatives or about the role of the HR user (e.g., in selecting keywords). Similarly, "instant" background and credit checks are now available, but we know very little about the prevalence, reliability, or validity of such procedures.
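To illustrate why the HR user's keyword choices deserve study, a resume screener of the sort alluded to above can reduce to something as simple as weighted keyword matching. The sketch below is a deliberately naive toy, not any vendor's actual algorithm; the keywords, weights, and resume text are invented.

```python
def keyword_score(resume_text: str, keywords: dict[str, float]) -> float:
    """Sum the weights of the HR-chosen keywords found in the resume text."""
    text = resume_text.lower()
    return sum(weight for kw, weight in keywords.items() if kw.lower() in text)

# Invented keywords and weights. A qualified resume that happens to use the
# abbreviation "I-O" instead of "industrial-organizational" scores only 1.0,
# illustrating how keyword selection alone can drive screening outcomes.
keywords = {"industrial-organizational": 2.0, "selection": 1.5, "validation": 1.0}
print(keyword_score("Ph.D. in I-O psychology; led test validation projects.", keywords))
```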
REFERENCES Arendasy, M., & Sommer, M. (2007). Using psychometric technology in educational assessment: The case of a schema-based isomorphic approach to the automatic generation of quantitative reasoning items. Learning and Individual Differences, 17(4), 366–383. Beaty, J. C., Nye, C. D., Borneman, M. J., Kantrowitz, T. M., Drasgow, F., & Grauer, E. (2011). Proctored vs. unproctored Internet tests: Are unproctored tests as predictive of job performance? International Journal of Selection and Assessment, 19, 1–10. Bejar, I. I. (1991). A methodology for scoring open-ended architectural design problems. Journal of Applied Psychology, 76(4), 522–532. Bejar, I. I., Lawless, R., Morley, M. E., Wagner, M. E., Bennett, R. E., & Revuelta, J. (2003). A feasibility study of on-the-fly item generation in adaptive testing. Journal of Technology, Learning, and Assessment, 2(3). Available from www.jtla.org. Bertot, J. C., McClure, C. R., & Jaeger, P. T. (2008). The impacts of free public Internet access on public library patrons and communities. Library Quarterly, 78(3), 285–301. Boyle, S. (1984). The effect of variations in answer-sheet format on aptitude test performance. Journal of Occupational Psychology, 57, 323–326. Brehmer, B. (2004). Some reflections on microworld research. In S. G. Shiflett, L. R. Elliot, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation and applications (pp. 22–36). Burlington, VT: Ashgate Publishing. Buchanan, T., & Smith, J. L. (1999). Using the Internet for psychological research: Personality testing on the World Wide Web. British Journal of Psychology, 90, 125–144. Burstein, J. (2003). The E-Rater scoring engine: Automated essay scoring with natural language processing. In M. D. Shermis & J. Bustein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 107–115). Mahwah, NJ: Lawrence Erlbaum. CareerBuilder.Com (n.d.). Nearly Half of Employers Use Networking Sites to Screen Job Candidates. Retrieved February 14, 2011 from http://thehiringsite.careerbuilder. com/2009/08/20/nearly-half-of-employers-use-social-networking-sites-to-screenjob-candidates/. Chapman, D. S., Uggerslev, K. L., & Webster, J. (2003). Applicant reactions to face-to-face and technology-mediated interviews: A field investigation. Journal of Applied Psychology, 88 (5), 944–953. Coovert, M. D., & Riddle, D. L. (2004). Utilization of rough sets theory to assess physical and psychological fidelity within scaled worlds. In S. G. Shiflett, L. R. Elliot, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation and applications (pp. 134–153). Burlington, VT: Ashgate Publishing. Desmarais, L. B., Masi, D. L., Olson, M. J., Barbara, K. M., & Dyer, P. J. (1994, April). Scoring a multimedia situational judgment test: IBM’s experience. Paper presented at the annual conference of the Society for Industrial and Organizational Psychology, Nashville, TN. Drasgow, F., & Olson-Buchanan, J. B. (Eds.) (1999). Innovations in computerized assessment. Hillsdale, NJ: Erlbaum. Elliot, S. (2003). Intellimetric: From here to validity. In M. D. Shermis, & J. Bustein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 67–81). Mahwah, NJ: Lawrence Erlbaum. Embretson, S. E. (1999). Generating items during testing: Psychometric issues and models. Psychometrika, 64(4), 407–433.
Technology-Based Selection • 39 Fertik, M. (2007). Commentary on We Googled You. Harvard Business Review, June, p. 47. Fetzer, M. S. (2011, April). Serious games and virtual worlds: The next I-O frontier! [Panelist]. Panel discussion conducted at the twenty-sixth annual meeting of the Society for Industrial and Organizational Psychology, Chicago, IL. Fox, S. (2011). Americans living with disability and their technology profile. Technical report of the Pew Research Center’s Internet & American Life Project. Retrieved February 10, 2011 from http://pewinternet.org/Reports/2011/Disability.aspx. Gray, W. D. (2002). Simulated task environments: The role of high-fidelity simulations, scaled worlds, synthetic environments, and laboratory tasks in basic and applied cognitive research. Cognitive Science Quarterly, 2, 205–227. Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37–50. Hanson, M. A., Borman, W. C., Mogilka, H. J., Manning, C., & Hedge, J. W. (1999). Computerized assessment of skill for a higly technical job. In F. Drasgow, & J. B. Olson-Buchanan (Eds.), Innovations in computerized assessment (pp. 221–247). Mahwah, NJ: Lawrence Erlbaum Associates. Henryssen, S. (1971). Gathering, analyzing, and using data on test items. In R. L. Thorndike (Ed.), Educational measurement (2nd edn). Washington, DC: American Council on Education. Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72–98. Irvine, S. H., & Kyllonen, P. C. (Eds.) (2002). Item generation for test development. Mahwah, NJ: Lawrence Erlbaum Associates Jodoin, M. G. (2003). Measurement efficiency of innovative item formats in computer-based testing. Journal of Educational Measurement, 40(1), 1–15. Kozlowski, S. W. J., & DeShon, R. P. (2004). A psychological fidelity approach to simulationbased training: Theory, research and principles. In S. G. Shiflett, L. R. Elliot, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation and applications (pp. 75–99). Burlington, VT: Ashgate Publishing. Landauer, T. K., Laham, D., & Foltz, P. W. (2003). Automated essay scoring: A cross disciplinary perspective. In M. D. Shermis and J. C. Burstein (Eds.), Automated essay scoring and annotation of essays with the Intelligent Essay Assessor (pp. 87–112). Mahwah, NJ: Lawrence Erlbaum Associates. Lee, J., & Seneff, S. (2007, August). Automatic generation of Cloze items for propositions. Paper presented at the 2007 INTERSPEECH Conference. Liu, C.-L., Wang, C.-H., & Gao, Z.-M. (2005). Using lexical constraints to enhance the quality of computer-generated multiple-choice Cloze items. Computational Linguistics and Chinese Language Processing, 10(3), 303–328. Masters, J. S. (2010). A comparison of traditional test blueprinting and item development to assessment engineering in a licensure context. Unpublished doctoral dissertation, University of North Carolina at Greensboro. Mead, A. D. (April, 2002). Creating alternate forms: An investigation into three methods of item cloning. Paper presented at the annual meeting of the Society of Industrial and Organizational Psychology, Toronto, Canada. Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114(3), 449–458. Mead, A. D., & Blitz, D. L. (April, 2003). 
Comparability of paper and computerized noncognitive measures: A review and integration. Paper presented at the annual meeting of the Society of Industrial and Organizational Psychology, Orlando, FL.
40 • Alan D. Mead et al. Mead, A. D., Segall, D. O., Williams, B. A., & Levine, M. V. (1997, April). Multidimensional assessment for multidimensional minds: Leveraging the computer to assess personality comprehensively, accurately, and briefly. Paper presented at the twelfth annual conference for the Society for Industrial and Organizational Psychology, St. Louis, Missouri. Motowidlo, S., Dunnette, M., & Carter, G. (1990). An alternative selection procedure: The low-fidelity simulation. Journal of Applied Psychology, 75 (6), 640–647. Olson-Buchanan, J. B. (2001) Computer-based assessment: Advances and challenges. In F. Drasgow, & J. B. Olson-Buchanan (1999), Innovations in computerized assessment. Mahwah, NJ: Lawrence Erlbaum Associates. Olson-Buchanan, J. B., & Drasgow, F. (2006). Multimedia situational judgment tests: The medium creates the message. In J. Weekley, & R. Ployhart’s (Eds.) Situational judgment tests (253–278). Mahwah, NJ: SIOP Frontiers Series, Erlbaum Publishing. Olson-Buchanan, J. B., Drasgow, F., Moberg, P. J., Mead, A. D., Keenan, P. A., & Donovan, M. (1998). Conflict resolution skills assessment: A model-based, multi-media approach. Personnel Psychology, 51, 1–24. Oostrom, J. K., Born, M. Ph., Serlie, A. W., & van der Molen, H. T. (2011). A multimedia situational test with a constructed-response format: Its relationship with personality, cognitive ability, job experience, and academic performance. Journal of Personnel Psychology, 10, 78–88. Overton, R. C., Taylor, L. R., Zickar, M. J., & Harms, H. J. (1996). The pen-based computer as an alternative platform for test administration. Personnel Psychology, 49, 455–464. Page, E. B. (2003). Project essay grade: PEG. In M. D. Shermis & J. Bustein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 39–50). Mahwah, NJ: Lawrence Erlbaum. Parshall, C. G., Davey, T., & Pashley, P. J. (2002). Innovative item types for computerized testing. In W. J. van der Linden, & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 129–148). Boston, MA: Kluwer. Pino, J., Heilman, M., & Eskenazi, M. (2008). A selection strategy to improve Cloze question quality. Poster presented at the 2008 Language Technologies Institute Student Research Symposium. Reilly, R. R., & Chao, G. T. (1982) Validity and fairness of some alternative employee selection procedures. Personnel Psychology, 35(1), 1–62. Richman, W. L., Kiesler, S., Weisband, S., & Drasgow, F. (1999). A meta-analytic study of social desirability distortion in computer-administered questionnaires, traditional questionnaires, and interviews. Journal of Applied Psychology, 84(5), 754–775. Richman-Hirsch, W. L., Olson-Buchanan, J. B., & Drasgow, F. (2000). Examining the impact of administration medium on examinee perceptions and attitudes. Journal of Applied Psychology, 85(6), 880–887. Roth, P., Bobko, P., & McFarland, L. (2005). A meta-analysis of work sample test validity: Updating and integrating some classic literature. Personnel Psychology, 58, 1009–1037. Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in Personnel Psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274. Schmitt, N., & Chan, D. (1999). The status of research on applicant reactions to selection tests and its implications for managers. International Journal of Management Reviews, 45–62. Schmitt, N., & Mills, A. E. (2001). 
Traditional test and job simulations: Minority and majority performance and test validities. Journal of Applied Psychology, 86, 451–458.
Technology-Based Selection • 41 Schroeders, U., & Wilhelm, O. (2010). Testing reasoning ability with handheld computers, notebooks, and paper and pencil. European Journal of Psychological Assessment, 26(4), 284–292. Segall, D. O. (1996). Multidimensional adaptive testing. Psychometrika, 61, 331–354. Segall, D. O., & Moreno, K. E. (1999). Development of the computerized adaptive testing version of the armed services vocational aptitude battery. In F. Drasgow and J. B. Olson-Buchanan (Eds.), Innovations in computerized assessment (pp. 35–66). Mahwah, NJ: Lawrence Erlbaum Associates. Smith, A. (2010). Home Broadband 2010. Technical report of the Pew Research Center’s Internet & American Life Project. Retrieved February 10, 2011 from http:// pewinternet.org/Reports/2010/Home-Broadband-2010.aspx. Smith, R. W. (2004, April). The impact of braindump sites on item exposure and item parameter drift. Paper presented at the Annual Meeting of the American Education Research Association, San Diego, CA. Stark, S., Chernyshenko, O. S., Drasgow, F., & White, L. A. (2012). Adaptive testing with multidimensional pairwise preference items: Improving the efficiency of personality and other noncognitive assessments. Organizational Research Methods, 15, 463–487. Stein, J. (2010). Situational and trait influences on dynamic justice. Unpublished doctoral dissertation, University of Arizona. Straus, S. G., Miles, J. A., & Levesque, L. L. (2001). The effects of videoconference, telephone, and face-to-face media on interviewer and applicants judgments in employment interviews. Journal of Management, 27, 363–381. Sumita, E., Sugaya, F., & Yamamoto, S. (2005). Measuring non-native speakers’ proficiency of English by using a test with automatically-generated fill-in-the-blank questions. In Proceedings of the second workshop on building educational applications using NLP (pp. 61–68). Ann Arbor, MI: Association for Computational Linguistics. Tippins, N. T., & Adler, S. (2011). (Eds.) Technology-enhanced assessment of talent. Professional Practice Series. San Francisco, CA: Jossey-Bass. Tippins, N. T., Beaty, J., Drasgow, F., Gibson, W. M., Pearlman, K., Segall, D. O., et al. (2006). Unproctored Internet testing in employment settings. Personnel Psychology, 59(1), 189–225. Vale, C. D. (1978). Computerized administration of free-response items. In D. J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. Minneapolis, MN: University of Minnesota, Department of Psychology, Computerized Adaptive Testing Laboratory. Retrieved April 3, 2008 from www.psych. umn.edu/psylabs/catcentral/pdf%20files/va77004.pdf. Wainer, H. (2000). Computerized adaptive testing: A primer. New York, NY: Lawrence Erlbaum Associates. Wainer, H., & Eignor, E. (2000). Caveats, pitfalls, and unexpected consequences of implementing large-scale computerized testing. In H. Wainer (Ed.), Computerized adaptive testing: A primer (pp. 271–299). New York, NY: Lawrence Erlbaum Associates. Waller N. G., & Reise, S. P. (1989). Computerized adaptive personality assessment: An illustration with the Absorption scale. Journal of Personality and Social Psychology, 57, 1051–1058. Weiss, D. J. (1982). Improving measurement quality and efficiency with adaptive theory. Applied Psychological Measurement, 6, 473–492. Weiss, D. J. (2007). Adaptive—and electronic—testing: Past, present, and future. Invited address presented at the annual meeting of the National Council on Measurement in Education, Chicago IL.
42 • Alan D. Mead et al. Weiss, D. J., & Gibbons, R. D. (2007). Computerized adaptive testing with the bifactor model. In D. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing. Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program. Weitz, J., & Adler, S. (1973). The optimal use of simulation. Journal of Applied Psychology, 58(2), 219–224. Zickar, M. J., Overton, R. C., Taylor, L. R., & Harms, H. J. (1999). The development of a computerized selection system for computer programmers in a financial services company. In F. Drasgow and J. B. Olson-Buchanan (Eds.), Innovations in computerized assessment (pp. 7–34). Mahwah, NJ: Lawrence Erlbaum Associates.
3 Advances in Training Technology: Meeting the Workplace Challenges of Talent Development, Deep Specialization, and Collaborative Learning
J. Kevin Ford and Tyler Meyer
Advances in technology have created jobs that are more cognitively complex and demanding. The shift from manufacturing to service jobs has increased the importance of softer skills such as interpersonal and communication skills. Organizations have become leaner, resulting in broader responsibilities for workers, a greater emphasis on teamwork, and an enhanced role for effective leadership. Work is becoming more knowledge driven and global in scope, requiring a deeper combination of information, experience, understanding, and problem-solving skills that can be applied to decisions or situations (Kraiger & Ford, 2007).

One challenge faced by organizations today is how to train and develop talent as quickly, efficiently, and effectively as possible. A second challenge is how to develop depth in the knowledge and skills needed, as many core jobs in organizations are becoming less structured, more knowledge driven, and more interdependent. A third challenge is the need to enhance collaboration and cooperation across jobs and functions in organizations that are now often enacted in virtual rather than face-to-face interactions. New training technologies and strategies hold promise for meeting these challenges.

The purpose of this chapter is to highlight how training technologies can help meet the challenges of the changing nature of work. The chapter is divided into three sections. The first details the changing nature of work and its implications for organizational effectiveness. The second section
describes the promise of new and emerging learning technologies and the research evidence on their impact on learning and transfer. The third section describes various training and development strategies that have been found useful for improving learning and development and that need to be incorporated into these emerging training technologies to maximize their effectiveness and thus better meet the training and development challenges of the changing nature of work. The chapter concludes with future directions for practice and research.
CHANGING REALITIES

An examination of business publications focusing on predicting trends and directions in workplaces provides one window for understanding the changing realities of organizations. For example, the results of a McKinsey global survey (Dye & Stephenson, 2010) of 1,300 executives in 93 countries found that one significant trend is the intensifying battle for talented people. Given this competition, the need to develop and grow talent within an organization—especially leadership talent—is magnified. Another trend cited is increasing technological and digital connectivity. The web, along with rapidly advancing communication technologies, has stretched organizations' boundaries, requiring greater effort toward building and managing ever-growing networks in order to remain competitive. For example, Hagel, Brown, and Davison (2008) discuss the need for organizations to become more tech-enabled and adopt better IT strategies in order to increase internal and external communication and compete in a global market. This requires greater depth of knowledge and skills and greater adaptability from IT personnel to meet the needs of the organization as well as the needs of others in the organization who are affected by these IT strategies.

In another survey of over 1,100 executives from global organizations, roughly half of respondents did not feel confident that their current talent pool had the requisite knowledge and skills to meet the challenges embedded in the organization's strategic goals over the next five years. The greatest shortfalls described in terms of talent development included management, research and development, and strategy (Dye & Stephenson, 2010). In addition, Western world economies, with low birth rates and aging work forces, must significantly increase the productivity of their existing labor force in order to remain competitive (Beinhocker, Davis, & Mendonca, 2009; Hagel, Brown, & Davison, 2009). To accomplish this,
organizations must maximize the potential of their knowledge and talent (Hagel, Brown, & Davison, 2008). We describe three organizational needs that must be met to develop and maximize the potential of human capital in organizations. First, organizations need to systematically develop employee talent—especially in leadership positions (Ready, Hill, & Conger, 2008). Second, to help meet growing demands for technical and analytical skills, organizations need to adopt methods that accelerate and enhance employees' deep and specialized knowledge. Third, given the more permeable organizational boundaries and concomitant needs for communication across business functions and geographical regions, organizations need to employ more collaborative strategies to diffuse innovations and enhance teamwork.

Developing Talent through Building Adaptability Skills

Talent in organizations must not only be identified but also managed and developed as quickly, efficiently, and effectively as possible. The dilemma is that rapid advancement requires individuals not only to become proficient in their present job duties but also to develop the skills and competencies needed to move into leadership-type positions in a relatively short period of time (Avolio & Hannah, 2008). This requires individuals to develop "soft" skills such as being learning oriented, flexible, and tolerant of ambiguity (Schmitt et al., 2003). The development of these skills across key individuals helps organizations build a talent base that is more fluid and adaptive to changing realities (Ilgen & Pulakos, 1999).

Adaptability has been conceptualized as the capacity to alter one's performance in response to shifting challenges and the ability to anticipate changes and to modify strategies (Ely, Zaccaro, & Conjar, 2009). There are many different scenarios in which individuals can display different types of adaptive behavior. Yukl and Mahsud (2010) argue that individuals may display adaptive behavior by managing immediate crises, responding to emerging market threats or opportunities, and also by changing roles horizontally, which requires applying existing skills in a similar role, or vertically, which requires a new set of skills.

Researchers have identified three important processes that help build adaptability: cognitive frame changing (Nelson, Zaccaro, & Herman, 2010; Ely, Zaccaro, & Conjar, 2009); accepting uncertainty (Hodgson & White, 2001); and mastering contradictory demands (Kaplan & Kaiser, 2006). Cognitive frame changing originated in the cognitive sciences, where it has been referred to as cognitive restructuring (Ohlsson, 1992), breaking
frame (DeYoung, Flanders, & Peterson, 2008), overcoming fixation (Maier, 1931), and functional fixedness (Duncker, 1945). The process of altering internal states, which refers to breaking free of inappropriate assumptions and creating or adopting new task-relevant strategies, underlies insight (i.e., the ability to switch cognitive frames; Bowden et al., 2005; Ohlsson, 1992). To the extent that individuals can switch cognitive frames, they will be better able to find solutions to complex, novel problems and to adapt their skills to meet current and anticipated environmental demands (Ely, Zaccaro, & Conjar, 2009).

To grow and develop, individuals must also be able to handle negative emotions experienced in high-pressure or ambiguous situations (Beal et al., 2005) in which high-stakes decisions must be made with incomplete information. A qualitative study that employed in-depth interviews found that a key component of leaders' success is their ability to handle feelings of anxiety when encountering pressure and ambiguity, as it allows leaders to properly focus and think through problems (McKenzie et al., 2009). In addition, employees who are better able to cope with and tolerate feelings of uncertainty when facing ambiguity are more flexible and adaptable, and, therefore, perform better (Zaccaro et al., 2009; White & Shullman, 2010).

A third process underlying adaptability is the ability to handle contradictory situations and master opposing skills. McKenzie and colleagues (2009) found that successful leaders possess the ability to manage opposing demands. This is in line with the Competing Values Framework (CVF; Quinn & Cameron, 1988; Cameron et al., 2006), which argues that individuals must "have the capacity to see problems from contradictory frames, to entertain and pursue alternative perspectives" so as to learn strategies for fulfilling competing expectations (Quinn & Cameron, 1988, p. 45). Similar to CVF, Trompenaars and Hampden-Turner's (2002; see also Hampden-Turner & Trompenaars, 2000) dilemma theory argues that for global, transcultural organizations to be successful, individuals must possess transcultural competence, which is achieved by learning "through-through thinking" (Trompenaars & Hampden-Turner, 2002, p. 14). This type of thinking is the ability to reconcile seemingly contrasting, or "mirrored," values that differ across cultures. By combining and synthesizing the opposing horns of a dilemma, leaders integrate cultural values and create a higher level of coherence and functioning.

Both CVF and dilemma theory (see also polarity management; Johnson, 1992) have been successfully applied to improving organizations. Organizations that attempt to master contradictory goals and values, such as approaching adversity as an opportunity for growth, tend to be more
successful (Chakravorti, 2010). For example, one key ingredient identified for Toyota Motor Corporation's success is a corporate culture that sets unrealistically high and contradictory goals that challenge employees and force them to think of new, innovative solutions (Takeuchi, Osono, & Shimizu, 2008).

Deep, Specialized Knowledge

With many jobs, the focus is not on rapid development for advancement but instead on the need for deep specialization in key jobs where a person may stay for a long time—even a career. This deep specialization would be particularly important for core jobs in the organization such as information technology (IT) and various types of analyst jobs. Expertise has been defined as the achievement of consistent, superior performance through the development of specialized mental processes acquired through experience and learning activities, including training (Ford & Kraiger, 1995). Researchers have begun to identify the depth of knowledge and skill building, through formal and informal training as well as learning from experience, that is needed to become an expert in a field (Ericsson, Nandagopal, & Roring, 2009; McCall, 2004).

One quality of expertise is that knowledge is proceduralized and principled, so that individuals can not only recall facts and figures but also distinguish between situations in which that knowledge (or skill) is applicable and situations in which it is not (Ericsson & Charness, 1994). While two individuals may possess the same number of facts, the individual with more depth to that learning can do a better job of relating information to changing demands and predicting what might happen next given the current situation. Thus, given a situation or problem, those with depth to their learning automatically know the proper response and can respond efficiently to many different types of problems.

A second characteristic of experts is the quality of their mental models or ways of organizing knowledge. As individuals gain experience with a task or job, they begin to form relational knowledge that defines how things fit together. Experts have well-defined mental models that help them see connections between seemingly disparate pieces of information, and these connections lead to problem solutions. In particular, experts possess knowledge structures that contain both problem definitions and specific solutions, while individuals with less expertise tend to possess separate knowledge structures for problem definitions and problem solutions (Ericsson & Charness, 1994). For example, expert programmers can
mentally group steps within a task so that when they see a particular symptom or problem, they can identify a number of alternative strategies to take and can rank order these in terms of their likelihood of success.

Experts also have well-developed self-regulatory skills that include the ability to know what the appropriate strategies are to facilitate further knowledge acquisition (Lord & Hall, 2005). Experts are able to more accurately monitor or assess their own mental states, are more likely to know when they have understood task-relevant information, and are more likely to discontinue a problem-solving strategy that would ultimately prove to be unsuccessful. Experts are also better able to estimate the number of trials they will need to accomplish a task. For example, highly expert IT people have been found to have superior understanding of programming tasks and of ideal working strategies, and to have a better awareness of their own performance strategy options (Sonnentag, 1998).

Interdependency, Collaboration, and Innovation

Creating mechanisms and developing strategies that maximize the opportunities for individuals to learn from each other is critical to performance advantages in competitive markets (Senge, 2006). Project teams that span different organizational functions (and different physical sites) are often created to solve an organizational problem or to develop a new, innovative process. To be effective, these virtual and face-to-face ad hoc teams must be seamless in their integration of activities, learn from or develop best practices, and actively support the diffusion of innovations derived from best practices throughout the organization.

Cohen and Levinthal (1990) developed the concept of absorptive capacity, which is the organization's ability to organize, assimilate, and apply new information. They argued that absorptive capacity is critical to an organization's innovative capabilities. Absorptive capacity can emerge in settings that promote the exploration and application of new knowledge (Koza & Lewin, 1998; Lane, Koka, & Pathak, 2006). Research within the organizational sciences provides convincing evidence that absorptive capacity affects an organization's ability to adopt and implement new ideas and practices (Szulanski, 1996; Zahra & George, 2002). This construct highlights the importance of learning and the need for mechanisms and processes that promote the integration of ideas and knowledge (e.g., Lasker & Weiss, 2003).

In particular, the absorptive capacity of organizations is enhanced where a learning culture is valued, and where inquiry, dialogue, and consideration
of new information are promoted (Senge, 2006). A learning culture supports the gathering of data, the sharing of knowledge, and the taking of collective action to improve system functioning (Cutcher-Gershenfeld & Ford, 2005). In fact, some theorists suggest that learning culture and absorptive capacity are essential for developing an expanding, spiraling process in which a learning culture leads to more absorptive capacity, which, in turn, leads to more learning (Van den Bosch, Volberda, & De Boer, 1999).

Organizational learning has been linked to absorptive capacity and the modification of organizational routines. Zollo and Winter (2002) describe this as an organization's dynamic capability to integrate, build, and reconfigure internal competencies and operations to address changing environments. They specifically link organizational learning cultures with this increasing dynamic capacity. They provide a cyclical, evolutionary view of organizational knowledge, which includes scanning for new information, evaluating the legitimacy of the information, sharing the information across the organization, and enacting and routinizing a new set of policies, procedures, and actions. Organizations with high absorptive capacity foster learning routines that support discussion and the selection of ideas (Davenport, Eccles, & Prusak, 1992).

Diffusion of innovation and social network theories indicate that internal organizational linkages, specifically the creation of knowledge-sharing and internal social ties, facilitate learning by fostering the flow of information across organizational members (Foster-Fishman et al., 2001) and promoting the distribution of evidence-based best practices (Frank, Krause, & Penuel, 2009). For example, Szulanski (1996) examined absorptive capacity as a predictor of effective transfer of best practices within an organization. The results across multiple organizations showed that low levels of absorptive capacity in parts of the organization or in the organization as a whole led to difficulties in imitating best practices throughout the organization. In addition, educational researchers have found that school districts are more likely to use research-based best practice evidence to guide their decisions when they have the capacity to acquire and make sense of this evidence (e.g., Honig & Coburn, 2008; Spillane, Reiser, & Reimer, 2002).
TRAINING TECHNOLOGIES

Traditional instructor-led classroom instruction has a number of limitations when it comes to meeting the challenges of the changing realities of work.
Traditional classroom instruction is relatively expensive and time-bound. In addition, it is difficult to customize to meet individual trainee needs or to provide extensive practice opportunities across the wide variety of situations that might be encountered on the job (Goldstein & Ford, 2002). Advances in training technology have striven to keep pace with the changing requirements of work and the need to make training and development more effective and efficient.

Historical Perspective

Technological innovations in the delivery of training content have a long history (Kraiger & Ford, 2007). In the 1950s and 1960s, programmed instruction and self-paced programs were heralded as being able to systematically present information to the learner while utilizing the principles of reinforcement theory (Silverman, 1960). The 1970s and 1980s saw the rise of computer-assisted instruction and the development of large-scale equipment simulators. As noted by Goldstein (1974), "one of the newer developments in programmed instruction has been computer assisted learning where a trainee interacts directly with a computer by means of electronic typewriters, pens that draw lines on TV screens, and devices that present auditory material" (p. 187). Goldstein noted the encouraging evidence that computer-assisted instruction required less time to teach the same amount of information than conventional training methods. Wexley and Latham (1981) touted the use of "adaptive training" (a forerunner of today's intelligent tutoring systems, or ITS), in which trainee performance is monitored, the problem or task is changed in difficulty for the trainee, and the adaptive logic changes as a function of performance. The pioneering work on ITS was conducted by Carbonell (1970) in the domain of learning geography. Wexley and Latham also noted the use of equipment simulators for training pilots and machine operators, where they can learn relevant job tasks with "safety hazards removed, time pressure for productivity minimized, individualized feedback increased, and where opportunities for repeated practice are provided" (p. 140). As one example of the success of simulators, Killian (1976) described how airlines saved about 204 million gallons of fuel through the use of flight simulators.

The 1990s saw the development of lower-cost and more realistic work simulators. For example, driving simulators became more available for helping to develop the defensive driving skills of automobile and truck drivers. Practitioners began developing guidelines and standardized procedures for
the effective design of simulators by taking into account learning principles and psychological fidelity issues (Goldstein & Ford, 2002). An event-based approach to training (EBAT) was developed to guide the design of simulators (Fowlkes et al., 1998). The EBAT approach identifies and introduces events within the training scenarios of the simulation to provide opportunities to perform and observe the behaviors that are the objectives of the program.

The 1990s also saw the development of CD-ROM and interactive multimedia training programs. As noted by Reeves (1992), advances in multimedia included live-action video, animation, graphics, text, and audio to deliver training content through computer systems. Ives (1990) described a multimedia interpersonal skills program that taught sales skills to new insurance agents. Research indicated that trainees got up to speed much more quickly than those who were trained through traditional methods. ITS became more sophisticated as developers increased their capacity to develop meta-strategies, based on a history of the trainee's learning, that guided the selection of more specific teaching strategies (Benyon & Murray, 1993). By comparing models of the learner's behaviors and cognition with expert content and instructional models, ITS systems became more sophisticated in selecting training goals and example types, providing tailored feedback, and making decisions about when to stay at a particular module, when and where to advance, when and where to remediate, and what type of feedback is best for preventing trainees from wandering too far off track (Gold, 1998).

In the twenty-first century, we have seen a large upsurge in the use of training technologies. Computer-enabled work simulation training technologies such as virtual reality training systems and serious games have been created to meet increasingly challenging learning objectives for a wide variety of jobs (Gupta et al., 2008; Ritterfeld, Cody, & Vorderer, 2009). In addition, there has been an explosion of e-learning/web 2.0 applications (Brown, Charlier, & Pierotti, in press).

Virtual Reality Training

A virtual reality experience consists of the development of work simulations through the use of virtual reality (VR) technology. With VR training, learning occurs by immersing the trainee in media-rich contexts that are similar to those encountered in real life (Brooks, 1999). Trainees can view a 3D world of the kinds of situations they might typically face on the job, with objects in this simulated world that can be touched, looked at, and
repositioned. VR training capitalizes on visual learning and experiential engagement very similar to the transfer context without the physical space requirements of full-scale training simulators. A VR training system can simulate many different types of situations and learning events within a short timeframe (Gupta et al., 2008).

NASA has long been involved in virtual reality training to help prepare international crews for space travel (Loftin, 1996). An interesting application of successful VR training was the preparation of astronauts for fixing the Hubble Telescope. Over 100 flight controllers experienced simulated extravehicular activities designed to familiarize them with the location, appearance, and operability of the telescope's components and the maintenance components of the space shuttle cargo bay, to verify and improve procedures, and to create contingency plans. VR technology also allowed Bernard Harris, an astronaut from the United States stationed in Houston, to enter a virtual environment and interact with astronaut Ulf Merbold, who was physically located in Germany. They spent over 30 minutes performing the procedures for replacing the damaged lens and communicating with each other. As noted by Loftin, at the conclusion of the procedure, the two astronauts "shook hands and waved goodbye" to each other.

Virtual reality training applications are now numerous, more powerfully realistic, and more innovative. VR technology has been applied to areas such as driving simulators (Cockayne & Darken, 2004), medical situations such as surgical procedures (Hague & Srinivasan, 2006), military tactics (Knerr, 2007), and aircraft maintenance tasks (Bowling et al., 2008). To help train medical students, there is now avatar-mediated training in which trainees deliver bad news to a female avatar in a three-dimensional simulated clinic (Andrade et al., 2010). Harders (2008) provides a detailed discussion of virtual reality training across a variety of medical situations to train new and experienced physicians. He notes the advantages over mock-ups and non-interactive PC-based tools, which provide no tactile information, limited interactivity, and no immersion.

Serious Games

Adult learning theory highlights the important role of learners as active participants in their own learning processes (Baker, Jensen, & Kolb, 2002). From this perspective, learning is most likely to occur when dealing directly with work-related issues during a formal training session, which is enhanced through a systematic process of action and reflection. Games have
been proposed as one strategy to facilitate active learning by constructing a smaller and more simplified version of the real-world issues and problems facing individuals and teams on the job. Games allow for the development of "what if" scenarios in which participants are embedded in a work-related problem.

An early example of incorporating "what if" scenarios into board games is provided by the American Red Cross (1999), which developed a simulation for training staff in emergency operations. The simulation features scenarios that require participants to make decisions and take actions that have identifiable consequences. The game unfolds through brief situation reports that participants receive at the beginning of each round. There are a number of management tasks embedded into the game that require the participants to complete tasks such as staffing and opening facilities, assisting clients, and placing volunteers. Embedded into the game are opportunities to practice and gain skills in more effective information sharing, better coordination of limited resources, and smoother transitions from a localized response to a nationwide network of emergency response personnel and resources.

More recently, Brehmer (2004) noted the push to build more interconnectedness (e.g., multiple goals across participants), dynamics (rate of change and degree of feedback delays), and in-transparency (the degree to which the system state can be ascertained) into the "microworld" of serious games. In this way, the games can not only be engaging but also educational—providing opportunities for deep and sustained learning. Ratan and Ritterfeld (2009) note that 63 percent of these types of serious games are curriculum based, supplementing traditional classroom instruction; 22 percent were found to be work oriented, covering a number of professional fields such as health and safety as well as the military. Examples include business games on how to develop a new product market and games for training lawyers in courtroom skills. Aldrich (2009) notes that the simulations can be developed such that participant decisions have impacts on other participants as well as on specific organizational systems. Participants are often assigned or adopt specific goals, receive feedback, and are allowed to repeat the scenario to learn from mistakes. In order to teach key skills such as problem analysis, there are numerous design strategies such as branching stories, interactive spreadsheets, interactive diagrams, and practiceware (flight sims) (Aldrich, 2009). For example, the Department of Defense has contracted with movie companies to incorporate compelling storytelling techniques into complex battle simulations to improve their effectiveness (Luppa & Borst, 2007).
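A branching story of the kind just mentioned is often little more than a graph of scenario nodes whose edges are the participant's choices. The sketch below shows one such structure; the incident, choices, and consequences are invented for illustration and are not drawn from any of the games cited above.

```python
# Each node holds a brief situation report plus the choices that branch from it.
SCENARIO = {
    "start": {
        "text": "A chemical spill is reported at a warehouse. What do you do first?",
        "choices": {"evacuate staff": "evacuated", "inspect the site alone": "injured"},
    },
    "evacuated": {
        "text": "Staff are clear. The hazmat team asks where to stage.",
        "choices": {"stage upwind": "contained", "stage at the main gate": "exposed"},
    },
    "injured": {"text": "You are overcome by fumes. Scenario ends for debrief.", "choices": {}},
    "contained": {"text": "Spill contained with no casualties. Debrief on timing.", "choices": {}},
    "exposed": {"text": "Responders exposed downwind. Debrief on staging choices.", "choices": {}},
}

def play(decisions, node_id="start"):
    """Trace a list of decisions through the branching scenario and return the path."""
    path = [node_id]
    for choice in decisions:
        next_id = SCENARIO[node_id]["choices"].get(choice)
        if next_id is None:  # terminal node or unrecognized choice
            break
        node_id = next_id
        path.append(node_id)
    return path, SCENARIO[node_id]["text"]

print(play(["evacuate staff", "stage upwind"]))
# (['start', 'evacuated', 'contained'], 'Spill contained with no casualties. Debrief on timing.')
```

Adding the interconnectedness, dynamics, and in-transparency that Brehmer (2004) describes would amount to enriching the nodes (timers, shared resources, hidden state) rather than changing this basic branching structure.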
The games typically are structured to include progressively increasing problem-solving complexity and scaffolded learning (Van Eck, 2006). Multiplayer, interactive serious games are also being used to facilitate team effectiveness by building team skills such as coordination and cooperation as well as honing skills in strategy and tactics. Multiplayer games often model events requiring critical decisions. With multiplayer serious games, trainees work with and coordinate actions with other trainees as well as with the gaming software. Trainees take on the role of a specific identity, often represented in the gaming environment (e.g., America's Army, Counter-Strike) as an avatar (O'Connor & Menaker, 2008). For example, McGowan and Pecheux (2007) describe a game called Hot-Zone, in which trainees act as either hazardous materials technicians or the incident commander responding to the release of chlorine gas in a store. The trainer can call up different scenarios to practice and can alter the difficulty levels and the ways trainees can communicate among themselves, so that trainees can experience how these factors influence the outcomes of the scenarios. O'Connor and Menaker (2008) contend that such multiplayer games have the potential to promote deep learning by:

1. providing a safe environment to make mistakes and see the consequences of actions;
2. presenting scenarios that stimulate the senses and tap into emotions;
3. incorporating interactivity and cause-and-effect linkages;
4. including a cycle of judgments, behaviors, and feedback; and
5. incorporating psychological and functional fidelity.

E-learning and Social Networking

E-learning now accounts for over one-third of learning content, as training developers turn to technology to streamline operations and deliver programs at less cost while having a wider reach (Paradise, 2008). E-learning can increase the number of people who can obtain the training, enable continual access, and allow individuals to return to a particular section of the training for real-time performance support (Clark & Mayer, 2008). It allows for just-in-time training delivery, as trainees can access the material when needed. The learners also can have more control over how much information to attend to as well as how many practice exercises to complete. This allows for the possibility of the learner determining the depth of information and practice desired from the
training to better fit their individual training needs. These technologies provide the potential for learners to customize their training and development activities to enable them to manage more complex and fast-changing job demands (Kraiger & Ford, 2007).

Brown, Charlier, and Pierotti (in press) define e-learning as a broad array of applications and processes that share a common feature of relying on some type of computer technology (internet, intranet, satellite broadcasts) to promote learning and that are purposefully designed to achieve a particular set of learning outcomes for intended users. They developed a typology of e-learning experiences that can vary in terms of interactivity (high or low) and instructional focus (developing skills versus more informational purposes of easy access, storage, and use). Their typology highlights the notion that e-learning is multidimensional. Thus, research comparing the effectiveness of e-learning and traditional classroom instruction must be viewed with caution, as the results may be more a function of the interactivity and focus of the training than of the delivery mode. Such conclusions would also seem relevant for the various mobile learning applications that have drawn increasing interest in providing short, targeted learning opportunities. For example, Gadd (2008) presented a case study of delivering learning "nuggets" by smartphone to sales personnel that impacted sales results.

Organizations are also supporting the development of online communities of practice or social networks whose members share similar interests or similar work problems that need to be addressed (Wenger & Snyder, 2000). Downes (2010) discusses the development of web-based learning networks that provide a number of ways for users to interact and provide opportunities for individuals and teams to capture tacit knowledge or to spur informal learning across employees who are not physically in the same location. He contends that this move to an e-learning 2.0 architecture takes an ecosystem approach to learning, with decentralized and distributed networks as the delivery mechanisms. Kraiger (2008) has described this approach as consistent with a social constructivist perspective, which contends that learning takes place in a social and cultural context that is constantly changing. Thus, learning best occurs when individuals have a social process (like social networking) through which participants can develop shared understandings and socially negotiate possible solutions to real-world problems. Through networking, peer-to-peer learning, and the sharing of strategies such as best practices, individuals and teams can adopt new ideas quickly and help diffuse innovative solutions.
Research Evidence

Empirical research on the effectiveness of these training technologies has expanded over the last 10 years. Given this proliferation, meta-analytic studies have now been conducted for serious games, virtual reality training, and web-based instruction.

Vogel et al. (2006) conducted a meta-analytic review comparing games and interactive simulations with traditional instruction. Across 32 studies (mostly involving educational curricula), they found greater cognitive and attitude gains for students in the game/interactive simulation conditions than in traditional instruction. However, they noted that the improvements for the games and interactive simulations disappeared when instructors had control over the game sequence or where the computer controlled the game sequence—indicating that a key component in the success of serious games is the extent to which the learner has control over what they practice, how long they practice, and in what sequence they practice the skills embedded into the game. More recently, Sitzmann (2011) examined 65 studies that used simulation games and found higher levels of post-training self-efficacy, higher levels of knowledge acquisition (declarative and procedural), and greater long-term retention for those in the gaming condition than for those receiving traditional training delivery. However, Sitzmann also found that traditional training was as effective as gaming when the traditional training engaged the learner in the learning experience. Therefore, a key to the success of games as the mechanism for training delivery is the extent to which trainees are engaged through the provision of vivid examples and scenarios, a focus on guidance during the practicing of the skills, and the provision of diagnostic feedback for improvement.

Chapman and Stone (2010) note that while VR is gaining greater acceptance and adoption in organizations, evidence of its effectiveness is still at an early stage. They note the potential of the technology to allow for innovative assessment such as evaluating learning "artifacts." The evidence for the effectiveness of virtual reality training is striking for training very specific skills within discrete tasks such as medical procedures. For example, Hague and Srinivasan (2006) examined the evidence across 16 studies of surgical simulators and found that the simulators lessened the time taken to complete a given surgical task in the operating room relative to more traditional clinical training. They also found no differences in error rates on the job between the two training conditions. Larson et al. (2009) examined studies on training for laparoscopic surgery through randomized controlled trials. They found that virtual reality training led to the equivalent of the experience gained from 25 surgeries. The
traditional clinical training was found to be the equivalent of the experience gained from five surgeries. In addition, they found that medical doctors trained via virtual reality completed operations in half the time (12 to 24 minutes) of those trained through the traditional clinical method. These studies show that targeted, focused, and deliberate practice of specific skills that are directly transferable to the job enhances training transfer.

In terms of web-based instruction, Sitzmann et al. (2006) found that web-based instruction was 6 percent more effective than traditional classroom instruction for the acquisition of declarative knowledge and equally effective for procedural knowledge. The researchers also found that web-based instruction was 19 percent more effective for the acquisition of declarative knowledge when trainees were provided with a degree of learner control, allowed more time to practice the material, and given diagnostic feedback during the training. Means et al. (2009) examined 51 studies in higher education, medical schools, and K-12 programs and found that blended instruction (traditional classroom supplemented by web-based instruction) was superior to either classroom instruction or e-learning alone. However, an examination of the blended instruction showed that students in this condition had, on average, more learning time under this delivery system and received additional instructional elements compared with the traditional classroom instruction method. They concluded that the positive effects associated with blended learning cannot be attributed to the delivery mechanism. Finally, Bernard et al. (2009) examined 74 studies of educational programs and found that student-to-student interaction led to higher achievement scores and better attitudes toward learning than student-to-teacher interaction. Yet, they also found that the key was the strength of the interaction component—the level of student-to-student or student-to-teacher interaction was a key driver of increased achievement scores. This suggests that, with web-based instruction, designers need to incorporate learner control, targeted practice (rather than just training content), high-quality diagnostic feedback, and relevant instructional principles of learning, and to strengthen the interaction component within the virtual, web-enabled environment to be as effective as possible.

Training technologies such as serious games, virtual reality, and web-based instruction provide the opportunity for practicing key individual and team skills. Such practice can be especially useful in areas where the trainee may use the skills infrequently on the job (Ford & Schmidt, 2000). The technologies also provide the opportunity for people in disparate geographical sites to be engaged together in a common learning experience
such as the NASA example of astronauts training to repair the Hubble Space Telescope (Loftin, 1996). Social networking generated as part of this common experience can lead to the discussion of best practices and the diffusion of those best practices at a much faster pace. These advantages are more likely to be achieved if training strategies are incorporated into the training technologies to maximize the chances for learning to occur with these technologies. The results of the meta-analytic studies are consistent with this notion. It is not the training delivery technology per se that is the key to training effectiveness. Instead, it is the set of instructional strategies or methods that are incorporated into these technologies to maximize their potential. As noted by Clark (2010), “with each new technology wave, enthusiasts ride the crest with claims that finally we have the tools to really revolutionize training. Yet, in just a few years, today’s latest media hype will fade, yielding to the inexorable evolution of technology and a fresh spate of technological hyperbole” (p. 12). The next section highlights learning strategies that have great potential to be value-added components that enhance the effectiveness of these training delivery technologies so as to better meet the needs of the changing realities of work.
TRAINING TECHNOLOGIES, CHANGING REALITIES AND LEARNING STRATEGIES

In this chapter, we have argued that three key developmental needs in organizations involve talent development, especially in leadership positions; developing deep specialization of expertise in core jobs; and fostering collaborative and informal learning. In addition, we have highlighted emerging training technologies that offer organizations the opportunity to provide more timely and effective training solutions. This section links these organizational realities with training technologies and provides suggestions on how organizations can maximize the effectiveness of these technologies in meeting these needs. Table 3.1 presents a model for considering training technologies as mechanisms for helping to meet changing organizational realities relevant to talent development, deep specialization, and collaborative learning. The cells of the model provide an avenue for considering what learning strategies could be incorporated into the training design to maximize the chances that the training is effective in accelerating talent development,
enhancing the development of individuals from relative novices to more experienced and expert performers, and enhancing knowledge sharing, informal learning, and the diffusion of best practices. For illustrative purposes, we provide examples of learning strategies that are relevant for improving the effectiveness of a particular type of training technology platform. Thus, we highlight three cells in the model:

1. how incorporating cognitive frame-changing experiences, as well as providing opportunities for trainees to master contradictory demands within serious games, has the potential for high impact on developing talent;
2. how incorporating targeted and deliberate practice opportunities and building self-regulatory skills through virtual reality training platforms can aid in the development of deep specialization; and
3. how e-learning 2.0 architecture (and beyond) can promote individual learning as well as foster collaborative learning in the organization.
TABLE 3.1 Linking Training Technologies to Developmental Needs of Organizations

Developmental Need | Serious Games | Virtual Reality | E-Learning (Web 2.0 and beyond)
Accelerate Talent Development | Change cognitive frames; Master contradictory demands | -- | --
Develop Deep Specialization | -- | Incorporate deliberate practice; Build self-regulatory skills | --
Facilitate Collaborative Learning | -- | -- | Provide adaptive guidance; Develop feedback skills to facilitate peer-to-peer learning
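For readers who find it helpful to see the model in machine-readable form, the short sketch below (in Python, purely for illustration; the structure and strings are ours, not part of any training system) represents the three highlighted cells of Table 3.1 as a mapping from technology-need pairings to the learning strategies discussed in the remainder of this section.

```python
# A minimal, illustrative representation of Table 3.1's highlighted cells:
# each pairing of a training technology with a developmental need maps to the
# learning strategies the chapter suggests designing into that platform.
table_3_1 = {
    ("Serious Games", "Accelerate Talent Development"): [
        "Change cognitive frames",
        "Master contradictory demands",
    ],
    ("Virtual Reality", "Develop Deep Specialization"): [
        "Incorporate deliberate practice",
        "Build self-regulatory skills",
    ],
    ("E-Learning (Web 2.0 and beyond)", "Facilitate Collaborative Learning"): [
        "Provide adaptive guidance",
        "Develop feedback skills to facilitate peer-to-peer learning",
    ],
}

for (technology, need), strategies in table_3_1.items():
    print(f"{technology} -> {need}: {', '.join(strategies)}")
```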
Talent Development and Multiplayer Serious Games

Serious games have the potential to help enhance the development of leadership competencies. These competencies include building skills to be more adaptable to changing realities and to work with others to achieve team or intergroup goals. Learning strategies that can enhance an individual’s ability to react to and anticipate challenges encountered in organizations are emerging (Zaccaro et al., 2009). As noted by Cameron and Quinn (2011), “individuals must have the capacity to see problems from contradictory frames, to entertain and pursue alternative perspectives and learn to fulfill competing expectations” (p. 45). The key learning strategy, then, is to create ways to operationalize this idea and to incorporate these elements into the scenarios used in serious games.

In terms of fostering cognitive frame-changing skills, Ely and colleagues (2009) propose that experiential variety, adaptive guidance, and error management training will lead to greater frame-changing skills. In addition, some researchers posit that experiential variety during training must be accompanied by strategic information provision in order to promote the development of adaptability (Nelson, Zaccaro, & Herman, 2010). In a series of studies examining the effects of combining strategic information provision and experiential variety on team adaptation, Zaccaro and colleagues (2009) employed a computer-based war simulation game that allowed for deep experiential variety. Not only did team-level strategic information (i.e., outcome and process feedback) lead to higher levels of adaptability, but it also moderated the effects of experiential variety on adaptability.

These findings are in line with recent research on enhancing an individual’s insight and problem solving. In a series of studies, Ansburg and Dominowski (2000) examined the effects of different learning strategies on improving verbal insight problem solving. They found that strategic information provision before and during training to help guide the trainees, in combination with practice involving surface variation, led to an increase in insight problem solving (i.e., cognitive frame-changing skills). In addition, designing time into training for elaborating on problems and facilitating the search for procedural similarities between problems was found to increase insight problem solving. This research suggests that designers of serious games should not only incorporate a high level of variability in practice scenarios but also provide individuals and teams with the appropriate strategic information on which to form a solid base for skill development.
Bell and Kozlowski (2008) provided participants with a pre-training list of potential errors one could make with respect to the skills or strategies being emphasized in a simulated radar tracking game. Participants were told that errors were part of the learning process and to focus on better understanding through self-analysis of their “mistakes” during the learning phase. This type of framing of training is consistent with research on self-dialogue, in which participants are trained to monitor for negative or self-defeating thoughts and instructed on how to replace those thoughts with positive and constructive self-statements (Brown, 2003; Kanfer & Ackerman, 1990). Bell and Kozlowski found that self-dialogue and emotional control strategies, along with the error framing intervention, led to a decrease in anxiety, which in turn increased self-efficacy and, subsequently, increased adaptability.

A creative application focused on enhancing leader adaptability has been described by Sendelbach (1993). He reports on a six-activity, multilevel training program at Ford Motor Company based on the competing values framework of Quinn and Cameron (1988). First, the training program used the competing values framework to profile the current and past organizational culture and highlight desired changes. At a less abstract level, managers were taught to use the framework to examine the current and desired characteristics of their functional unit, and how their unit fits within the overall organization. Last, at a personal level, the program enhanced leaders’ ability to understand and reconcile managerial dilemmas, and to align their values with the values needed in their functional unit.

Serious games, with their focus on interconnectedness, dynamics, and intransparency, provide a rich platform from which to develop talent and build leadership skills. Research evidence suggests that adaptability can be enhanced and leadership skills accelerated by designing game scenarios that require mastering contradictory demands and integrating competing values into the creation of game solutions. The effectiveness of serious games can be enhanced by focusing on how to develop cognitive frame-changing skills through incorporating experiential variety, adaptive guidance, and error management strategies into the game design and delivery.

Developing Deep Specialization through Virtual Reality Training

Virtual reality training has the potential to help move relative novices on a job to become more competent and thus on the road to expertise. The effectiveness of virtual reality training is a function, in part, of the quality
of the practice experiences and the extent to which self-regulatory skills are enhanced.

A critical component in starting and sustaining the process of building towards expertise is deliberate practice. Ericsson, Krampe, and Tesch-Romer (1993) presented a theoretical framework that explained expert performance as a function of sustained, prolonged efforts by an individual to continuously improve performance. Yet, it is not simply practice that is important for enhancing and sustaining performance improvements but what they call deliberate practice—practice focused on closing a recognized knowledge or skill gap that is seen as impacting performance. As noted by Ericsson (2006), executing proficiently during routine work may not lead to further improvement—instead, improvements depend on deliberate efforts to continually refine and enhance one’s skills. Many thousands of hours of specific types of (deliberate) practice and training are necessary for reaching superior, reproducible performance. The necessity of domain-specific deliberate practice for attaining this superior, reproducible performance has been found in a variety of domains such as chess, typing, music, and sports (e.g., see Charness et al., 2005; Krampe & Ericsson, 1996). Similar findings are emerging in work contexts. For example, in a study of insurance agents, Sonnentag and Kleine (2000) found that the amount of time agents spent on deliberate practice activities was related to ratings of work performance.

Research has demonstrated the usefulness of virtual reality training especially for instructing medical students. Ericsson (2004) contends that such simulators will be even more effective to the extent that they incorporate deliberate practice guided by the goals of the training. Virtual reality training may be particularly effective when helping with the initial acquisition of the perceptual-motor coordination needed to carry out job procedures successfully. For example, Wayne et al. (2006) found that a medical curriculum featuring deliberate practice greatly increased the skills of medical residents in advanced cardiac life support scenarios. Ericsson (2004) also argues that the skill building of beginners is, however, only a first step toward capitalizing on the opportunities to engage in deliberate practice in potential simulators for domains such as medical surgery. Future simulators should “allow surgeons ample opportunities to engage in deliberate practice similar to expert performers in other domains such as music” (p. 79). This notion of the importance of deliberate practice through realistic simulations such as virtual reality training for experienced incumbents is particularly relevant for jobs where certain tasks (e.g.,
employees in nuclear facilities dealing with a major safety issue) are performed on an infrequent basis.

Another critical component in the development of expertise is the capability of learners to regulate their own learning. This capacity for self-regulation is linked to deliberate practice, as an individual has to recognize what specifically needs to be practiced to improve performance (Zimmerman, 2006). This capacity for self-regulation of learning, often discussed as metacognition, consists of three components—planning, monitoring, and evaluating (Ford & Kraiger, 1995). Planning involves the learner’s analysis of a learning situation and determination of what strategy is likely to lead to successful acquisition of trained knowledge and skills. Monitoring involves learners’ active attempts to track their allocation of attention, as well as their assessment of how well they comprehend the material. Evaluation involves learners’ active assessment of their successes (and failures) in skill acquisition and their likelihood of successfully transferring the learned skills to the job. This self-evaluation component also includes the ability to correct ineffective learning strategies. Thus, those who are more aware of their cognitive processes and are more effective at monitoring and evaluating their strategies concurrently with performing a complex new task are more likely to be successful (Vancouver & Day, 2005).

Researchers contend that increasing a learner’s metacognitive processing during training promotes a deeper processing of information by helping learners integrate material and identify interrelationships among training concepts (Sitzmann et al., 2009). Research supports the contention that incorporating metacognitive activities into instruction can facilitate knowledge and skill acquisition as well as aid in the generalization of training to the job. For example, Ford et al. (1998) found that trainees who initiated more metacognitive activity (planning, monitoring, and self-evaluation) not only learned more, but also were better able to handle a more complex transfer task. Metacognitive processing can be facilitated during virtual reality training by encouraging learners to identify goals, generate or elaborate on existing ideas, and strive for greater understanding after each session by learning from feedback. To strengthen the effectiveness of virtual reality training of job skills, trainees could be asked, prior to and after entering the virtual reality world, to set challenging learning goals, visualize possible courses of action, reflect on how much they have learned, and consider whether alternative learning strategies might be more effective. For example, Bornstein and Zickafoose (1999) found that informing individuals of the tendency to
64 • J. Kevin Ford and Tyler Meyer overestimate their learning led to a reduction in subsequent levels of overconfidence. Schmidt and Ford (2003) found that prompts provided during training to self question the effectiveness of their learning strategies led to greater learning and transfer to a novel task for those with mastery orientations. Sitzmann et al. (2009) found that prompting self-regulation improved declarative and procedural knowledge acquisition in a technology-delivered instruction context. Therefore, we contend that virtual reality training can help develop individuals from relative novice to competent and ultimately expert performance through sustained, and prolonged deliberative practice of skills. In addition, systematic efforts to prompt metacognitive processing during learning can lead to higher levels of learning outcomes. Build Dynamic Capabilities through e-Learning and Social Networking e-Learning and web 2.0 have the potential to enhance individual learning as well as to facilitate shared learning and social networking. Individual learning can be enhanced by taking advantage of the dynamic capabilities of e-learning systems to allow for a more personalized or customized learning experience. Learning systems can be set up where the learner has control over the content, sequence and amount of practice obtained. Yet, there are a number of problems with allowing learners total control over the learning process as they may make decisions that are not optimal for learning (Kraiger, 2008). Kirschner, Sweller, & Clark (2006) contend that such minimal guidance for learners does not work as it places a heavy cognitive load on the learner as they are attempting to acquire new knowledge and skills. Research supports the use of discovery learning with adaptive guidance of the learner (Bell & Kozlowski, 2002). In discovery learning, individuals must explore a task or situation to infer and learn the underlying rules, principles, and strategies for effective performance (Smith, Ford, & Kozlowski, 1997). There are several reasons why discovery learning can be beneficial (Klahr & Nigam, 2004). First, in a discovery learning approach, individuals are typically more motivated to learn. This increased motivation occurs due to the fact that the trainee is responsible for generating correct task strategies and, thus, is more actively engaged in learning. Second, discovery learning allows learners to use hypothesis testing and problem solving learning strategies. In contrast to the traditional deductive learning approach, this active process requires more conscious attention for its application and adds depth to the learning
Advances in Training Technology • 65 process. Third, individuals engaged in exploratory learning also have the opportunity to experiment with a greater range of strategies. The development of these strategies for discovering information can help individuals identify novel or unpredictable challenges in a job situation and, thus, promote a search for new ways to approach the situation. The new knowledge that is acquired by trying out alternative strategies through a web based instructional system can then become better integrated with the learner’s existing knowledge. There are several ways to implement a guided discovery approach to learning into web based instruction. Guidance can include the following types: having an expert/mentor who gives partial answers to problems, provides leading questions or hints to the learners, varies the size of steps in instruction (part versus whole learning), and provides prompts without giving solutions. In addition, guidance can be given to learners on how to form hypotheses and test out these ideas in an effective way (Mayer, 2004). For example, trainees can be presented with case studies of previous situations and asked to draw inferences about effective and ineffective responses to these situations. From these specific incidents, general principles of effective response can be generated and discussed. The potential to develop intelligent web based learning systems offers the opportunity to customize learning paths for individual learners. Taking principles from ITS, intelligent web based learning systems help personalize the sequencing of content based on the testing responses of the learner throughout the learning activity. Chen (2008) presents evidence that such personalized learning path guidance led to higher levels of learning than the “freely browsing” learning mode. Diziol et al. (2010) provide examples of how to incorporate ITS technology into web based learning to facilitate and support collaboration and sharing of learning across participants. Liu and Yu (2011) present a strategy to identify aberrant learning patterns early in web based training (through item response theory models) so that a computer tutoring agent can appear to notify and encourage the learner to try a different approach. Other examples are the use of online animated coaches who help show trainees how to navigate the course. For example, Rickel and Johnson (2000) describe how an on-screen agent supports trainee learning how to maintain gas turbine engines on a navy ship. From a social networking perspective, web-based communities can also be set up to share information and share best practices. These communities of practice (Wegner & Snyder, 2000) can help organizations enhance its absorptive capacity for knowledge generation and implementation of
innovative solutions. As noted by Boiros (2011), social learning requires three elements to be successful—the right technology platform, a vibrant community of learners, and engaging content. Kraiger (2008) describes social learning as the exchange among learners to extract meaning in the workplace through completing projects socially and providing feedback, support, and encouragement. This implies that social learning cannot be a matter of minimal guidance—there have to be clear goals and objectives, defined norms for participating in the social networking system, and strategies for dealing with interpersonal issues that might arise. In addition, social learning rests on community members’ ability and willingness to provide accurate information and diagnostic feedback to others to help facilitate the learning process (Bedwell & Salas, 2008; Ilgen, Fisher, & Taylor, 1979).
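To make the person-fit idea mentioned above (Liu & Yu, 2011) more concrete, the following sketch computes the standardized log-likelihood (lz) person-fit statistic under a simple Rasch model. This is not Liu and Yu’s actual procedure, and the item difficulties, ability estimate, and response pattern are hypothetical; the sketch simply illustrates how an aberrant response pattern (for example, missing easy items while answering hard ones correctly) can be flagged so that a tutoring agent could step in and suggest a different approach.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under a Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def lz_person_fit(responses, theta, difficulties):
    """
    Standardized log-likelihood (lz) person-fit statistic.
    Large negative values suggest an aberrant response pattern.
    """
    probs = [rasch_p(theta, b) for b in difficulties]
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    expected = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    variance = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - expected) / math.sqrt(variance)

# Hypothetical learner in a web-based module: misses the easiest items but
# answers the hardest ones correctly. All numbers are illustrative only.
item_difficulties = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]   # easy -> hard
learner_theta = 0.3                                           # estimated ability
learner_responses = [0, 0, 1, 0, 1, 1, 1]                     # 0 = wrong, 1 = right

lz = lz_person_fit(learner_responses, learner_theta, item_difficulties)
if lz < -1.645:  # flag roughly the lowest 5% under a normal approximation
    print(f"lz = {lz:.2f}: pattern looks aberrant; a tutoring agent could intervene")
else:
    print(f"lz = {lz:.2f}: pattern looks consistent with the model")
```

In a web-based course, a flag of this kind could be the trigger for the sort of on-screen animated coach described above, which notifies the learner and encourages a different learning strategy.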
FUTURE DIRECTIONS FOR RESEARCH AND PRACTICE

Given the push to make traditional forms of learning digital and interactive in the military and medical fields, it is likely that training and development activities across educational and business settings will follow this trend. For example, Apple Inc. has unveiled a new strategy to replace paper-based high school textbooks with interactive electronic textbooks (National Public Radio Report, 2012). A multiuser virtual environment that teaches hypothesis formation and testing, and experimental design (Clarke et al., 2006), has been shown to enhance children’s knowledge and skills in scientific inquiry (Dede, 2009). Another successful educational game, DreamBox, teaches math to early elementary students and has been shown to lead to 19 to 50 percent improvements in test scores over the course of a single semester (Jorgensen, 2010a, b). Serious games have been applied in several business-related areas such as manufacturing, engineering, IT security, management practices, retail, diabetes management, construction safety, and disaster preparedness (Andreatta et al., 2010; Corti, 2006). These games are seen as building the experience foundation of employees in a way that is timely and in a more controlled environment than typical on-the-job developmental assignments provide. Leaders can have multiple opportunities in a serious gaming context to tackle problems, make critical decisions, and obtain systematic feedback in a way that would not be possible on the job.
An example from the business world is IBM’s serious game Innov8 2.0, which teaches the fundamentals of business management to employees. In one of the games, learners must immerse themselves in the game, quickly learn the parameters, and adapt their problem solving to successively complicated issues. Serious games such as Innov8 are successful training tools because they provide multiple perspectives, contextualized learning, adaptive experiences, and multiple opportunities to practice and improve performance (Dede, 2009). For line workers, Chrysler recently touted the opening of the World Class Manufacturing Academy, which goes high tech using VR immersion, 3D gaming strategies, and motion sensors to raise worker skills in areas such as safety, problem solving, and efficient body movements (www.lansingstatejournal.com/article20120130/business01/201300304/Chrysler). The use of motion sensors to help workers see how to be more efficient and place less stress on the body is a more technologically sophisticated take on early work by Myers (1925), in which researchers photographed the movements of chocolate factory workers with an electric glowlamp attached to the worker’s hand to show new workers an easier, rhythmic action to reduce fatigue and increase output.

It is clear that as VR, serious games, and web-based networking progress and enhance their cognitive fidelity, complexity, and interactivity, their ability to build the knowledge structures necessary for real-world problem solving and improvement in performance effectiveness will grow, ultimately outpacing traditional methods of learning in many areas. These technologies will also allow for building experience with tasks that are infrequently performed on the job. Clearly, the focus in all these endeavors with new technology is to facilitate learning and the transfer of learning to the job. This requires the accurate analysis of training needs, the clear specification of training objectives, and the understanding of criteria of success. Only then can an effective plan of instruction be developed and linked to the appropriate technological innovation to achieve the promise of these emerging technologies.

Although VR, serious games, and e-learning have demonstrated success in educational or organizational settings, they are still burgeoning learning tools. The ultimate goal of training and developmental activities through the variety of training strategies and training technologies is transfer; that is, knowledge and skills are acquired and applied to the job such that individual behavior leads to more effective performance. We have noted how learning strategies such as the use of dilemmas and the building of metacognitive skills need to be incorporated to maximize the
68 • J. Kevin Ford and Tyler Meyer effectiveness of training technologies such as serious games, virtual reality, and e-learning. Research is needed to examine what types of dilemmas have greater impacts on learning as well as how best to facilitate the building of metacognitive skills within VR or serious gaming contexts. In addition, when thinking about training transfer, one must also consider pre- and post-training activities that can help the individual be ready to learn and help the individual to apply skills to the job effectively and sustain changes in behavior over time. For example, Cannon-Bowers et al. (1998) describe various pre-practice conditions that can impact learning and transfer of knowledge and skills to the job. Blume et al. (2010) conducted a meta-analysis and found that peer and supervisory support and transfer climate impacted transfer. At the team level, Marks, Zaccaro, and Mathieu (2000) demonstrate how leader briefings and team interaction interventions can facilitate team adaptation to transfer environments. Others have noted the benefits of post-action reviews of training experiences that provide time for analysis, feedback, reflection, and shared learning (Ellis & Davidi, 2005). Therefore, research is needed that links what we already know about factors impacting learning and transfer to the investigation of how to improve the effectiveness of these new and innovative training technologies (Blume et al., 2010). For example, the effectiveness of web based networking and peer-to-peer sharing is based on a number of untested assumptions. Massman (2012) contends that one critical assumption is that participants can provide the high level and quality of feedback to facilitate the learning process. Individuals are not necessarily good at providing specific and diagnostic feedback—instead the feedback tends to be more general and positive—encouraging others without much substance (Falchikov & Goldfinch, 2000). There is research evidence that training peers on how to give quality feedback can be effective (e.g., Sluijsmans, Brand-Gruwel, & Van Merriënbor, 2002; Sluijsmans et al., 2001). Thus, social networking systems for learning could benefit from interventions that help peers provide more valuable feedback. Such interventions should focus on what is considered effective feedback and allow for participants to practice giving specific and diagnostic feedback that is timely in delivery to aid other’s learning. In terms of virtual reality and serious games, we know that the type and quality of practice can have an impact on learning and transfer. Schmidt and Bjork (1992) have shown that design principles that facilitate shortterm learning and immediate retention may not produce long-term retention and transfer to more complex task situations. For example,
constant mapping conditions in training can inhibit skill development if the learner is in the acquisition stage but facilitate transfer if the learner has already proceduralized the skills. These findings suggest that the timing of when to incorporate consistent or inconsistent mapping conditions during training may be important for facilitating learning and transfer. In addition, Ford et al. (1992) noted the importance of examining the opportunity to perform, or the extent to which a trainee is provided with or actively obtains work experiences relevant to the tasks for which he or she was trained. The researchers identified the opportunity to perform as a multidimensional construct that includes the breadth, depth, and complexity of practice. Thus, we need a better understanding of how many times a person has to perform a task after training (and over what period of time) to automatize and retain proficiency on the job. In addition, we need a better understanding of what types of opportunities during training and on the job help meet the goal of building routine or adaptive expertise (Smith et al., 1997).

The bottom line is that we have much research evidence to draw on to enhance the learning experience of trainees regardless of delivery mechanism. The effectiveness of the more sophisticated and robust learning technologies of today (and tomorrow) depends on the incorporation of these best-practice learning strategies into the content and delivery of training and learning activities as well as into the pre- and post-learning environments.
REFERENCES Aldrich, C. (2009). The complete guide to simulations and serious games. San Francisco, CA: Pfeiffer/Wiley. American Red Cross. (1999). Building leadership skills through a board game based simulation to create shared mental models of the organization’s critical systems. American Red Cross Disaster Services, Alexandria, VA. Andrade, A., Bagri, A., Zaw, K., Roos, B., & Ruiz, J. (2010). Avatar-mediated training in the delivery of bad news in a virtual world. Journal of Palliative Medicine, 13, 1415–1419. Andreatta, P. B., Maslowski, E., Petty, S., Shim, W., Marsh, M., Hall, T., Stern, S., & Frankel, J. (2010). Virtual reality triage training provides a viable solution for disasterpreparedness. Academic Emergency Medicine, 17, 870–876. Ansburg, P. I., & Dominowski, R. L. (2000). Promoting insightful problem solving. The Journal of Creative Behavior, 34, 30–60. Avolio, B. J., & Hannah, S. T. (2008). Developmental readiness: Accelerating leader development. Consulting Psychology Journal, 60, 331–347.
70 • J. Kevin Ford and Tyler Meyer Baker, A., Jensen, P., & Kolb, D. A. (2002). Conversational learning: An experiential approach to knowledge creation. Westport, CT: Quorum Books. Beal, D. J., Weiss, H. M., Barros, E., & MacDermid, S. M. (2005). An episodic process model of affective influences on performance. Journal of Applied Psychology, 90, 1054–1068. Bedwell, W. L., & Salas, E. (2008). If you build it, will they interact? The importance of the instructor. Industrial and Organizational Psychology, 1, 491–493. Beinhocker, E., Davis, I., & Mendonca, L. (2009). The 10 Trends You Have to Watch. Harvard Business Review, Jul–Aug, 55–60. Bell, B. S., & Kozlowski, S. W. J. (2002). Adaptive guidance: Enhancing self regulation, knowledge and performance in technology based training. Personnel Psychology, 55, 267–305. Bell, B. S., & Kozlowski, S. W. J. (2008). Active learning: Effects of core training design elements on self-regulatory processes, learning, and adaptability. Journal of Applied Psychology, 93, 296–316. Benyon, D., & Murray, D. (1993). Adaptive systems: From intelligent tutoring to autonomous agents. Knowledge-Based Systems, 6, 197–219. Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, A., Wozney, L, Wallet, P. A., Fixet, M., & Huant, B. (2009). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74, 379–439. Blume, B., Ford, J. K., Baldwin, T., & Huang, J. (2010). Transfer of training: A meta-analytic review. Journal of Management, 36, 1065–1105. Boiros, P. (2011). The eight truths of social learning. Now. White Paper. SkillsSoft. Bornstein, B. H., & Zickafoose, D. J. (1999). “I know I know it, I know I saw it”: The stability of the overconfidence-accuracy relationship across domains. Journal of Experimental Psychology: Applied, 5, 76–88. Bowden, E. M., Jung-Beeman, M., Fleck, J., & Kounios, J. (2005). New approaches to demystifying insight. Trends in Cognitive Sciences, 9, 322–327. Bowling, S. R., Khasawneh, M. T., Kaewkuekool, S., Jiang, X. C., & Gramopadhye, A. K. (2008). Evaluating the effects of virtual training in an aircraft maintenance task. International Journal of Aviation Psychology, 18, 104–116. Brehmer, B. (2004). Some reflections on microworld research. In S. Schiflett, L. Elliott, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation and applications. Burlington, VT: Ashgate Publishing. Brooks, F. P. (1999). What’s real about virtual reality? IEEE Computer Graphics and Applications, 16–27 Brown, K. G., Charlier, S. D., & Pierotti, A. (in press). E-Learning at work: Contributions of past research and suggestions for the future. In. G. Hodgkinson & J. K. Ford (Eds.), The International Review of I/O Psychology. Chichester, UK: Wiley/Blackwell. Brown, T. C. (2003). The effects of verbal self-guidance training on collective efficacy and team performance. Personnel Psychology, 56, 935–964. Cameron, K. S., & Quinn, R. E. (2011). Diagnosing and changing organizational culture: Based on the competing values framework (3rd edn). New York: John Wiley. Cameron, K. S., Quinn, R. E., DeGraff, J., & Thakor, A. V. (2006). Competing values leadership: Creating value in organizations. London: Edward Elgar. Cannon-Bowers, J. A., Rhodenizer, L., Salas, E., & Bowers, C. A. (1998). A framework for understanding pre-practice conditions and their impact on learning. Personnel Psychology, 51, 291–320. Carbonell, J. R. (1970). 
AI in CAI: An artificial intelligence approach to computer-assisted instruction. IEEE Transactions on Man-Machine Systems, 11, 190–202.
Advances in Training Technology • 71 Chakravorti, B. (2010). Finding Competitive Advantage in Adversity: Difficult business environments can offer rich opportunities to entrepreneurs. Harvard Business Review, November, 102–108. Chapman, D. D., & Stone, S. J. (2010). Measurement of outcomes of virtual environments. Advances in Developing Human Resources, 12, 665–680. Charness, N., Tuffiash, M., Krampe, R., Reingold, E., & Vasyukova, E. (2005). The role of deliberate practice in chess expertise. Applied Cognitive Psychology, 19, 151–165. Chen, C. (2008). Intelligent web-based learning system with personalized learning path guidance. Computers & Education, 51, 787–814. Clark, R. C. (2010). Evidence-based training methods: A guide for training professionals. Washington, DC: American Society for Training and Development Press. Clark, R. C., & Mayer, R. E. (2008). E-learning and the science of instruction (2nd edn). San Francisco, CA: John Wiley. Clarke, J., Dede, C., Ketelhut, J., & Nelson, B. (2006). A design-based research strategy to promote scalability for educational innovations. Educational Technology, 46, 158–165. Cockayne, W., & Darken, R. (2004). The application of human ability requirements to virtual environment interface design and evaluation (pp. 401–421). In D. Diaper and N. Stanton, (Eds.), The handbook of task analysis of human computer interaction. Mahwah, NJ: LEA. Cohen, W. M, & Levinthal, D. A. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35, 128–152. Corti, K. (2006). Games-Based Learning: A Serious Business Application. Games-Based Business & Management Skills Development. Retrieved August, 2011, from: www. pixelearning.com/docs/seriousgamesbusinessapplications.pdf Cuther-Gershenfeld, J., & Ford, J. K. (2005). Valuable disconnects in organizational learning systems. New York: Oxford University Press. Davenport, T. H., Eccles, R. G., & Prusak, L. (1992). Learning and governance. Sloan Management Review, 34, 53–65. Dede, C. (2009). Immersive interfaces for engagement and learning. Science, 323(66), 66–68. DeYoung, C. G., Flanders, J. L., & Peterson, J. B. (2008). Cognitive abilities involved in insight problem solving: An individual differences model. Creativity Research Journal, 20, 278–290. Diziol, D., Walker, E., Rummel, N., & Koedinger, K. R. (2010). Using intelligent tutor technology to implement adaptive support for student collaboration. Educational Psychology Review, 22, 89–102. Downes, S. (2010). Learning networks and connective knowledge. In H. Yang & S. Yuen (Eds.), Collective intelligence and e-learning 2.0. Hersey, PA: IGI Global. Duncker, K. (1945). On problem solving. Psychological Monographs, 58 (Whole No. 270). Dye, R., & Stephenson, E. (2010). Five Forces Reshaping the Global Economy. McKinsey Quarterly. Retrieved January, 2011, from www.mckinseyquarterly.com/Five_forces_ reshaping_the_global_economy_McKinsey_Global_Survey_results_2581 Ellis, S., & Davidi, I. (2005). After-event reviews: Drawing lessons from successful and failed experience. Journal of Applied Psychology, 90, 857–871. Ely, K., Zaccaro, S. J., & Conjar, E. A. (2009). Leadership development: Training design strategies for growing adaptability in leaders. In C. Cooper & R. Burke (Eds.), The peak performing organization. London: Routledge.
72 • J. Kevin Ford and Tyler Meyer Ericsson, E. A. (2004). Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Academic Medicine, 79, supplement, S70-S81. Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. Feltovich, & R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance. Cambridge, UK: Cambridge University Press. Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American Psychologist, 49, 725–747. Ericsson, K. A., Krampe, R. T., & Tesch-Romer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363–406. Ericsson, K. A., Nandagopal, K., & Roring, R. W. (2009). Toward a science of exceptional achievement. Longevity, Regeneration, and Optimal Health: New York Academy of Sciences, 1172: 199–217. Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A metaanalysis comparing peer and teacher marks. Review of Educational Research, 70, 287–322. Ford, J. K., & Kraiger, K. (1995). The application of cognitive constructs to the instructional systems model of training. In C. L. Cooper, I. T. Robertson (Eds.), International Review of Industrial and Organizational Psychology. Chichester, UK: Wiley. Ford, J. K., & Schmidt, A. (2000). Emergency response training: Strategies for enhancing real-world performance. Journal of Hazardous Materials, 75, 195–215. Ford, J. K., Quinones, M. A., Sego, D., & Sorra, J. (1992). Factors affecting the opportunity to perform trained tasks on the job. Personnel Psychology, 45, 511–527. Ford, J. K., Smith, E., Weissbein, D., Gully, S., & Salas, E. (1998). Relationships of goal orientation, metacognitive activity, and practice strategies with learning outcomes and transfer. Journal of Applied Psychology, 83, 218–233. Foster-Fishman, P. G., Salem, D. A., Allen, N. A., & Fahrbach, K. (2001). Facilitating interorganizational exchanges: The contributions of interorganizational alliances. American Journal of Community Psychology, 29, 875–905. Fowlkes, J., Dwyer, D. J., Oser, R. L., & Salas, E. (1998). Event-based approach to training (EBAT). International Journal of Aviation Psychology, 8, 209–222. Frank, K. A., Krause, A., & Penuel, W. R. (2009). Knowledge flow and organizational change. Invited presentation. University of Chicago, Sociology Department, March. Gadd, R. E. (2008). Sales Quenchers Case Study: Delivering Learning Nuggets by Smartphone. Learning Solutions Magazine. Retrieved August, 2011, from www. learningsolutionsmag.com/articles/89/sales-quenchers-case-study-delivering-learningnuggets-by-smartphone Gold, S. C. (1998). The design of an ITS-based business simulation: A new epistemology for learning. Simulation and Gaming, 29, 462–474. Goldstein, I. L., (1974). Training in organizations. Belmont, CA: Brooks/Cole. Goldstein, I. L., & Ford, J. K. (2002). Training in organizations: Needs assessment, development, and evaluation (4th edn) Belmont, CA: Wadsworth. Gupta, S. K., Anand, D. K., Brough, J. E., Schwartz, M., & Kavetsky, R. A. (2008). Training in virtual environments. College Park, MD: Calce Press. Hagel III, J., Brown, J. S., & Davison, L. (2008). Shaping Strategy in a World of Constant Disruption. Harvard Business Review, Oct, 81–89. Hagel III, J., Brown, J. S., & Davison, L. (2009). The Big Shift: Measuring the Forces of Change. 
Harvard Business Review, Jul–Aug, 86–89. Hague, S., & Srinivasan, S. (2006). A meta-analysis of the training effectiveness of virtual reality surgical simulators. Transactions on Information Technology in Biomedicine, 10, 51–58.
Advances in Training Technology • 73 Hampden-Turner, C., & Trompenaars, F. (2000). Building cross-cultural competence: how to create wealth from conflicting values. London: Yale University Press. Harders, M. (2008). Surgical scene generation for virtual reality-based training in medicine. London: Springer-Verlag. Hodgson, P., & White, R. P. (2001). Relax, it’s only uncertainty. London: Financial Times Prentice Hall. Honig, M. I., & Coburn, C. (2008). Evidence-based decision making in school district central offices: Toward a policy and research agenda. Educational Policy, 22, 578–608. Ilgen, D. R, Fisher, C. D., & Taylor, M. S. (1979) Consequences of individual feedback on behavior in organization. Journal of Applied Psychology, 64, 349–371. Ilgen, D. R., & Pulakos, E. D. (1999). Introduction: Employee performance in today’s organization. In D. R. Ilgen & E. D. Pulakos (Eds.), The changing nature of performance: Implications for staffing, motivation, and development (pp. 1–18). San Francisco, CA: Jossey-Bass. Ives, W. (1990). Soft skills in high tech: Computerizing the development of interpersonal skills. Information Delivery Systems, March/April. Johnson, B. (1992). Polarity management. Amherst, MA: HRD Press. Jorgensen, M. (2010a). Results from the DreamBox Learning Grade 2 Assessment Study: Math Achievement Test Demonstrates 19% Increase. Retrieved August, 2011, from www.dreambox.com/downloads/pdf/DreamBox_Results_from_the_Grade_2_Study. pdf Jorgensen, M. (2010b). Results from DreamBox Learning Embedded Assessment Study: Demonstrates 50% Increase in Student Proficiency in Math. Retrieved August, 2011, from www.dreambox.com/downloads/pdf/DreamBox_Results_from_Embedded_ Assessment_Study.pdf Kanfer, R., & Ackerman, P. L. (1990). Ability and metacognitive determinants of skill acquisition and transfer (Air Force Office of Scientific Research Final Report). Minneapolis, MN: Air Force Office of Scientific Research. Kaplan, R. E., & Kaiser, R. B. (2006). The versatile leader: Make the most of your strengths— without overdoing it. San Francisco, CA: Pfeiffer. Killian, D. C. (1976). The impact of flight simulators on U.S. Airlines. American Airlines Flight Academy, Fort Worth, Texas. Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work. Educational Psychologist, 41, 75–86. Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15, 661–667. Knerr, B. W. (2007). Immersive simulation training for the dismounted soldier. Simulator Systems Research Unit, U.S. Army Research Institute for the Behavioral and Social Sciences, Orlando, FL. Koza, M. P., & Lewin, A. Y. (1998). The co-evolution of strategic alliances. Organization Science, 9, 255–264. Kraiger, K. (2008). Transforming our models of learning and development: Web-based instruction as enabler of third-generation instruction. Industrial and Organizational Psychology, 1, 454–467. Kraiger, K., & Ford, J. K. (2007). The expanding role of workplace training: Themes and trends influencing training research and practice (pp. 281–309). In L. Koppes (Ed.), Historical Perspectives in Industrial and Organizational Psychology, Mahwah, NJ: LEA. Krampe, R. T., & Ericsson, K. A. (1996). Maintaining excellence: Deliberate practice and elite performance in young and older pianists. Journal of Experimental Psychology: General, 125, 331–359.
74 • J. Kevin Ford and Tyler Meyer Lane, P. J., Koka, B. R., & Pathak, S. (2006). The reification of absorptive capacity: A critical review and rejuvination of the construct. Academy of Management Review, 80, 833–863. Larson, C. R., Soerensen, J. I., Grantcharov, T., Dalsgaard, T., Schouenborg, L., Ottosen, C., Schroeder, T., & Ottesen, B. S. (2009). Effect of virtual reality training on laparoscopic surgery: Randomised controlled trial. British Medical Journal, 338, 1–6. Lasker, R. D., & Weiss, E. S. (2003). Broadening participation in community problem solving: A multidisciplinary model to support collaborative practice and research. Journal of Urban Health: Bulletin of the New York Academy of Medicine, 80, 14–60. Liu, M. T., & Yu, P. T. (2011). Aberrant learning achievement detection based on personfit statistics in personalized e-learning systems. Educational Technology & Society, 14, 107–120. Loftin, R. B. (1996). Hands across the Atlantic. Virtual Reality Special Report (March/April), 39–42. Lord, R. G., & Hall, R. J. (2005). Identity, deep structure, and the development of leadership skills. The Leadership Quarterly, 16, 591–615. Luppa, N., & Borst, T. (2007). Story and simulations for serious games: Tales from the trenches. Burlington, MA: Elsevier. Maier, N. R. F. (1931). Reasoning in humans: II. The solution of a problem and its appearance in consciousness. Journal of Comparative Psychology, 12, 181–194. Marks, M. A., Zaccaro, S. J., & Mathieu, J. E. (2000). Performance implications of leader briefings and team interaction training for team adaptation to novel environments. Journal of Applied Psychology, 85, 971–986. Massman, A. J. (2012). Improving third generation learning: The effects of peer feedback training on quality feedback, training characteristics, and performance. Unpublished paper. Michigan State University, E. Lansing, MI. Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59, 14–19. McCall, M. W. (2004). Leadership development through experience. Academy of Management Executive, 18, 127–130. McGowan, C., & Pecheux, B. (2007). Serious gaming: Advanced computer simulation games help to transform healthcare and disaster preparedness. Health Management Technology, 14, 16–23. McKenzie, J., Woolf, N., van Winkelen, C., & Morgan, C. (2009). Cognition in strategic decision making A model of non-conventional thinking capacities for complex situations. Cognition in strategic decision making, 47, 209–232. Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidencebased practices in online learning: A meta-analysis and review of online learning studies. U.S. Department of Education, Washington, DC. Myers, C. S. (1925). Industrial psychology. New York: The People’s Institute Publishing Company. National Public Radio Report (2012). Apple Pushes Interactive Textbooks on iPads. NPR. Retrieved January, 2012, from www.npr.org/2012/01/19/145457942/apple-pushesto-put-textbooks-on-ipads Nelson, J. K., Zaccaro, S. J., & Herman, J. L. (2010). Strategic information provision and experiential variety as tools for developing adaptive leadership skills. Consulting Psychology Journal: Practice and Research, 62, 131–142. O’Connor, D. L, & Menaker, E. S. (2008). Can massively multiplayer online gaming environments support team training? Performance Improvement Quarterly, 21, 23–41.
Advances in Training Technology • 75 Ohlsson, S. (1992). Information-processing explanations of insight and related phenomena. In M. T. Keane and K. J. Gilhooly (Eds.), Advances in human thinking: Volume One. New York: Harvester Wheatsheaf. Paradise, A. (2008). State of the Industry. ASD’s annual review of trends in workplace learning and performance. Alexandria, VA: ASTD. Quinn, R. E., & Cameron, K. S. (1988). Paradox and transformation: Toward a theory of change in organization and management. Cambridge, MA: Ballinger. Ratan, R., & Ritterfeld, U. (2009). Classifying serious games. In U. Ritterfeld, M. Cody, & P. Vorderer (Eds.), Serious games: Mechanisms and effects. New York: Routledge. Ready, D. A., Hill, L. A., & Conger, J. A. (2008). Winning the Race for Talent in Emerging Markets. Harvard Business Review, Nov, 63–70. Reeves, T. C. (1992). Evaluating interactive multimedia. Educational Technology, 32, 47–53. Rickel, J., & Johnson, L. W. (2000). Task oriented collaboration with embodied agents in virtual worlds. In J. Cassell, J. Sullivan, S. Prevost, & E. Churchill (Eds.), Embodied conversational agents. Cambridge, MA; MIT Press. Ritterfeld, U., Cody, M., & Vorderer, P. (2009). Serious games: Mechanisms and effects. New York: Routledge. Schmidt, A., & Ford, J. K., (2003). Learning within a learner control training environment: The interactive effects of goal orientation and metacognitive instruction on learning outcomes. Personnel Psychology, 56, 405–429. Schmidt, R. A., & Bjork, R. A. (1992). New conceptualizations of practice: Common principles in three paradigms suggest new concepts for training. Psychological Science, 3, 207–217. Schmitt, N., Cortina, J. M., Ingerick, M. J., & Wiechmann, D. (2003). Personnel selection and employee performance. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology: Industrial and organizational psychology (Vol. 12; pp. 77–105). Hoboken, NJ: John Wiley & Sons, Inc. Sendelbach, N. B. (1993). The competing values framework for management training and development: A tool for understanding complex issues and tasks. Human Resource Management, 32, 75–99. Senge, P. M. (2006). The fifth discipline (2nd edn). New York: Doubleday. Silverman, R. E. (1960). Automated teaching: A review of theory and research (NAVTRADEVCEN Technical Report 507–2). Port Washington, NY: U.S. Naval Training device Center. Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 64, 489–528. Sitzmann, T., Bell, B. S., Kraiger, K., & Kanar, A. M. (2009). A multilevel analysis of the effect of prompting self regulation in technology-delivered instruction. Personnel Psychology, 62, 697–734. Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623–664. Sluijsmans, D., Brand-Gruwel, S., & Van Merriënbor, J. (2002). Peer assessment training in teacher education: Effects on performance and perceptions. Assessment and Evaluation in Higher Education, 27, 443–454. Sluijsmans, D., Moerkerke, G., Van Merriënbor, J., & Dochy, F. (2001) Peer assessment in problem-based learning, Studies in Educational Evaluation, 27, 153–173. Smith, E. M., Ford, J. K., & Kozlowski, S. W. J. (1997). Building adaptive expertise: Implications for training design. In M. A. Quinones & A. Dudda (Eds.), Training for 21st century technology: Applications of psychological research (pp. 89–118). 
Washington, DC: APA Books
76 • J. Kevin Ford and Tyler Meyer Sonnentag, S. (1998). Expertise in professional software design: A process study. Journal of Applied Psychology, 83, 703–715. Sonnentag, S., & Kleine, B. M. (2000). Deliberate practice at work: A study with insurance agents. Journal of Occupational and Organizational Psychology, 73, 87–102. Spillane, J. P., Reiser, B. J., & Reimer, T. (2002). Policy implementation and cognition: Reframing and refocusing implementation research. Review of Educational Research, 72, 387–431. Szulanski, G. (1996). Exploring internal stickiness: Impediments to the transfer of best practice within the firm. Strategic Management Journal, 17, 27–43. Takeuchi, H., Osono, E., & Shimizu, N. (2008). The Contradictions that Drive Toyota’s Success. Harvard Business Review, June, 96–104. Trompenaars, F., and Hampden-Turner, C. (2002) 21 Leaders for the 21st Century, Capstone, London. Vancouver, J. B., & Day, D. V. (2005). Industrial and organization research on selfregulation: From constructs to applications. Applied Psychology: An International Review, 54, 155–185. Van Den Bosch, F. J. A., Volberda, H. W., & De Boer, M. (1999). Co-evolution of a firms’ absorptive capacity and knowledge environment: Organizational forms and combinative capabilities. Organization Science, 10, 551–568. Van Eck, R. (2006). Digital game-based learning: It’s not just the digital natives who are restless. Educause, March/April, 16–30. Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34, 229–243. Wang, X., & Tsai, J. (2011). Collaborative design in virtual environments. Intelligent systems. Control and Automation: Science and Engineering, 48, 17–26. Wayne, D. B., Butter, J., Siddall, V. J., Fudala, M. J., Wade, L. D., Feinglass, J., & McGahie, W. C. (2006). Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. Journal of General Internal Medicine, 21, 251–257. Wegner, E. C., & Snyder, W. M. (2000). Communities of Practice: The Organizational Frontier. Harvard Business Review, 78, 139–145. Wexley, K., & Latham, G. P. (1981). Developing and training human resources in organizations. Glenview, IL: Scott, Foresman. White, R. P. & Shullman, S. L. (2010). Acceptance of uncertainty as an indicator of effective leadership. Consulting Psychology Journal: Practice and Research, 62, 94–104. Yukl, G., & Mahsud, R. (2010). Why flexible and adaptive leadership is essential. Consulting Psychology Journal: Practice and Research, 62, 81–93. Zaccaro, S. J., Banks, D., Kiechel-Koles, L., Kemp, C., & Bader, P. (2009). Leader and team adaptation: The influences and development of key attributes and processes. Tech. Rep. No. #1256, U.S. Army Research Institute for Behavioral and Social Sciences, Arlington, VA. Zahra, S. A., & George, G. (2002). Absorptive capacity: A review, reconceptualization, and extension. The Academy of Management Review, 27, 185–203. Zimmerman, B. J. (2006). Development and adaptation of expertise: The role of selfregulatory processes and beliefs. In K. A. Ericsson, N. Charness, P. Feltovich, & R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance. Cambridge, UK: Cambridge University Press. Zollo, M., & Winter, S. G. (2002). Deliberate learning and the evolution of dynamic capabilities. Organization Science, 13, 339–351.
4 Technology and Performance Appraisal James L. Farr, Joshua Fairchild, and Scott E. Cassidy
Performance appraisal and management are cornerstones of industrial-organizational psychology and human resources, yet frequently are major sources of dissatisfaction among organizational employees and management (Pulakos, 2009). Performance-related information can be used in a number of ways in work organizations (Landy & Farr, 1983), including making administrative decisions (e.g., pay increases, promotions, and terminations), providing feedback to employees about strengths and developmental needs, and serving as criteria for the assessment of other HR systems (e.g., validation of selection procedures and evaluation of training programs). While the wide array of uses for performance-related information enhances its potential value, its many purposes (and stakeholder groups) can lead to conflicting goals and pressures regarding the performance data that are obtained.

These many purposes and many interested stakeholders have also resulted in a voluminous literature regarding performance appraisal and management. Space limitations require us to be selective in the topics we address, even when we focus on the relation of technology to the measurement and use of job performance information, and we refer the reader to more comprehensive sources in several cases. Likewise, there are large numbers of firms providing software systems for various aspects of performance appraisal and management. Such systems have quickly moved from ones that tended to address one or two relatively narrow functions, to software suites providing a comprehensive set of tools for performance information measurement and application, to cloud computing-based approaches that permit efficient worldwide centralization of HR functions. We avoid detailed descriptions of specific
technology systems since those would likely become obsolete prior to the publication of this volume! What we do discuss are issues more generally applicable to the implementation of new technology in relation to job performance measurement. In addition, we highlight some other aspects of the changing work environment that we believe are especially sensitive to psychological issues related to new technologies. We begin with some discussion of the general benefits and drawbacks of technology-based performance appraisal and management in work organizations.
GENERAL CONSIDERATIONS IN THE IMPLEMENTATION OF ELECTRONIC PERFORMANCE APPRAISAL SYSTEMS Potential Benefits of Electronic Performance Appraisal Systems Electronic performance appraisal systems, particularly online systems, present many potential benefits for organizations. These systems centralize numerous human resource functions, and enable easy access to a wide variety of information about employees. By making such information continuously available to employees, managers, and HR, such online systems provide a framework to enhance organizational efficiency and decision-making. Of particular interest to the organization, when used properly, such systems have the potential to increase productivity and enhance an organization’s competitiveness (Johnson & Gueutal, 2011; Levensaler, 2008). In fact, a survey of organizations by Gueutal and Falbe (2005) notes that organizations cite potential cost savings as the number one reason for adopting electronic HR systems. Furthermore, these systems can serve as a solid backbone for multisource or 360-degree appraisal systems, allowing users to submit their ratings easily via the organization’s network (Bracken, Summers, & Fleenor, 1998; Summers, 2001). (For detailed discussions of multisource appraisal systems, see Bracken, Timmreck, & Church (2001) and Morgenson, Mumford, & Campion (2005).) By combining such multisource feedback with other HR software tools (such as Enterprise Resource Planning (ERP) software), such systems can make it more efficient for executives or HR professionals to obtain a picture of the organization’s overall personnel strengths and
weaknesses, allowing for more informed decisions about employees (Cardy & Miller, 2005; Greengard, 1999; Johnson & Gueutal, 2011). Since online performance appraisal systems have the potential to capture and store a rich variety of information about employees (Gueutal & Falbe, 2005; Johnson & Gueutal, 2011; Neary, 2002), they can be a valuable decision-making aid for upper management (Johnson & Gueutal, 2011). For instance, an employee database created by such systems can be used to identify high or low performers, assist in compensation decisions, aid in succession planning, or determine training needs for departments or individual employees. Additionally, unlike more traditional appraisal systems, many electronic systems have the added benefit of including built-in tutorial or training systems (Summers, 2001). Such systems allow users to quickly troubleshoot simple problems and familiarize themselves with the features of the software (Neary, 2002; Summers, 2001). This reduces the need to spend additional time and money on rater training for those individuals providing feedback, which represents a further potential saving for the organization. Also of interest to the organization is that these systems often provide checks for legal compliance, such as scanning managers' assessments for overly harsh or discriminatory language (Cardy & Miller, 2005). Features such as these provide both short-term savings and long-term protection for the organization. From the manager's perspective, online performance appraisal systems also have numerous benefits. For one, such systems can greatly reduce the time spent collecting and aggregating employee performance data (Johnson & Gueutal, 2011), particularly via electronic performance monitoring (EPM), which when used properly can collect and store employee performance data in a central repository (Ehrhart & Chung-Herrera, 2008). By maintaining such a database that includes performance data and prior feedback from multiple sources (such as other past raters), much of the strain associated with providing feedback to employees can be alleviated (Bracken, Summers, & Fleenor, 1998). In addition to the built-in help features mentioned previously, the automation and electronic assistance built into many online systems can alleviate repetitive and tedious aspects of conducting a performance review, allowing managers to focus more closely on rating their employees, devoting more time to the feedback itself and less to collecting performance data and navigating the appraisal process (Cardy & Miller, 2005; Hunt, 2011; Johnson & Gueutal, 2011).
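To make the notion of a central performance-data repository with multisource input more concrete, consider the brief sketch below. It is a minimal, hypothetical illustration only (the class, method, and field names are our own and do not correspond to any commercial system discussed in this chapter): it simply shows how ratings submitted by several sources might be stored centrally and rolled up into a summary report, with a minimum-rater rule of the sort often used to protect rater anonymity in multisource feedback.

```python
from collections import defaultdict
from statistics import mean

class AppraisalRepository:
    """Toy central store for multisource (360-degree) performance ratings."""

    def __init__(self):
        # _ratings[employee][dimension] -> list of (source_role, score) pairs
        self._ratings = defaultdict(lambda: defaultdict(list))

    def submit_rating(self, employee, dimension, source_role, score):
        """Record one rater's score; the individual rater's identity is not stored."""
        self._ratings[employee][dimension].append((source_role, score))

    def feedback_report(self, employee, min_raters=3):
        """Average scores per dimension, suppressing small cells to protect anonymity."""
        report = {}
        for dimension, entries in self._ratings[employee].items():
            scores = [score for _, score in entries]
            if len(scores) >= min_raters:
                report[dimension] = round(mean(scores), 2)
            else:
                report[dimension] = "too few raters to report"
        return report

repo = AppraisalRepository()
for role, score in [("peer", 4), ("peer", 5), ("peer", 3), ("supervisor", 4)]:
    repo.submit_rating("employee_042", "teamwork", role, score)
print(repo.feedback_report("employee_042"))  # {'teamwork': 4.0}
```

The design choice worth noting is the minimum-rater threshold: aggregation is what lets the system report useful averages while keeping any single peer's rating unidentifiable.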
Furthermore, such systems often contain mechanisms for more frequent feedback than would be provided in a more traditional appraisal system. This avoids the rush and time crunch that can result at the end of the year, when feedback would normally be given, resulting in less stress for managers and likely higher-quality feedback. Additionally, by reducing the delay between employees' at-work behavior and feedback, employees are more likely to see a clear connection between their performance and their evaluations. Such specific feedback is more likely to be accepted by employees and lead to performance improvements (Atkins, Wood, & Rutgers, 2002). Additionally, many online systems contain error or accuracy checking features, such as providing feedback to raters about how closely their ratings agree with those of others (Bracken, Summers, & Fleenor, 1998), or monitoring legal compliance. While there can admittedly be shortcomings with such features, these systems may provide managers with a sense of confidence that their ratings are accurate and valid, encouraging them to provide more honest feedback to their employees (Cardy & Miller, 2005). Similarly, employees evaluated via an electronic system that collects and reports a broad range of performance criteria were found to view their performance ratings as more fair and accurate (Payne et al., 2009). Although Payne et al. (2009) found that employees viewed their performance ratings obtained from an electronic system as fair and accurate, other studies have reported inconsistent results regarding rating accuracy. Weisband and Atwater (1999) found in a laboratory setting that self-ratings were more inflated and less accurate when obtained electronically, although ratings of peers were more accurate and less influenced by liking for the peer when obtained electronically. Kurtzberg, Naquin, and Belkin (2005) found in three empirical studies that peers were rated more negatively when e-mail was used as the communication medium than when a more traditional paper-and-pencil medium was used. Kurtzberg et al. noted that their findings were consistent with other research showing that individuals are more negative in an online environment than they are face-to-face (e.g., Herbert & Vorauer, 2003; Sussman & Sproull, 1999). As suggested here, in addition to benefiting the organization and managers, electronic performance appraisal systems offer potential benefits for the employee as well. For example, supervisors are often able to provide more frequent feedback to employees and direct them toward meaningful developmental opportunities. Also, frequent feedback that is targeted at specific incidents (which may have been glossed over at an annual performance review) may be more accepted by employees as
Technology and Performance Appraisal • 81 accurate and useful developmental feedback (Atkins, Wood, & Rutgers, 2002). Additionally, by providing a platform encouraging more frequent feedback, employees may find negative feedback less surprising, and are more likely to see it as “developmental,” particularly if it is paired and/or linked with developmental opportunities (Kluger & DeNisi, 1996; Cardy & Miller, 2005). Furthermore, by providing a direct link between performance appraisal and training systems (as is often the case when an online performance appraisal system is linked to an enterprise-wide HR system), it becomes easier for employees to seek out specific training opportunities based on their feedback (Cardy & Miller, 2005; Johnson & Gueutal, 2011), as well as to chart their own performance improvement over time. Finally, online appraisal systems provide a platform for centralizing the multisource feedback process so commonly used in modern organizations. In such a system, all raters can submit their ratings through a common system, which then aggregates them into a unified feedback report, which can help preserve the anonymity of peer raters. Such a feature should alleviate some of the potential discomfort that may occur when providing assessments of one’s colleagues in the multisource feedback process, and should in turn encourage more accurate, honest feedback. As seen here, electronic performance appraisal systems have numerous potential benefits for organizations, as well as their employees. However, these systems also introduce a number of unique considerations that must be addressed, in order to ensure successful implementation and avoid pitfalls. Potential Pitfalls and Considerations for Implementation While electronic appraisal systems have numerous benefits associated with network-based technology, they can also suffer from many of the same pitfalls associated with other technology. First of all, it is important to consider the attitudes that both supervisors and subordinates may have toward technology. In particular, some individuals may have a distrust or discomfort with technology, which can impair the success of any system that relies on it. The potential effects of such attitudes on system use and effectiveness (particularly with regard to feedback) will be discussed in greater detail later, but suffice to say that failure to get buy-in from the individuals who will be using it can lead to an organization spending a great deal of time and money on something that will not be used effectively. In support of this line of reasoning, one survey study reported that organizations identified insufficient training and support for technology-based
82 • James L. Farr et al. HR systems as the greatest barrier to their success (Levensaler, 2008). As such, in order for an electronic appraisal system to be effective, both managers and their subordinates need to fully understand the technology and feel comfortable using it, a process that can involve expensive and time-consuming training prior to adoption. Recently, there has been some indication that employees may not be as fazed by (well-implemented) technologically-mediated appraisal systems as often assumed (Payne, et al., 2009). Additionally, some electronic HR service vendors advertise the “user friendliness” of their services, asserting that their systems are easy to use with minimal computer skill (e.g., Taleo, 2008). However, there is a difference between tolerance and desire to fully utilize a system, and truly effective implementation requires that both supervisors and employees are using the system to its fullest potential. Johnson & Gueutal (2011) argue that upper-level organizational support for electronic HR systems is essential for effective implementation. They suggest that a system will be embraced if the organization can demonstrate that the system will be fair, will save time, and make work easier for supervisors and employees (Johnson & Gueutal, 2011). Such a demonstration cannot be a one-time thing; instead, a successful electronic appraisal system must continue to convey its benefits to employees and supervisors after implementation. Software features, such as built-in help systems can help to facilitate such perceived support (Johnson & Gueutal, 2011). Also essential is communication between the developer and vendor of the electronic HR system and the organization in which it will be implemented (Gueutal & Falbe, 2005; Taleo, 2008). Whether the system is developed in-house or sourced from an outside vendor, it is essential that any technologically-mediated appraisal system demonstrate that it is tailored to the organization’s needs, values, and culture (Gueutal & Falbe, 2005; Johnson & Gueutal, 2011). Technological systems are far more likely to be accepted by supervisors and employees if they appear to operate on principles that the organization values. Perhaps more worrisome than the effects of distrust for technology itself is the likelihood more frequent data collection and reporting associated with many electronic performance appraisal systems may negatively impact employee performance. These effects will be discussed in greater detail in a subsequent section, but suffice to say that electronic performance monitoring may impair performance both by reducing subordinate trust or redirecting their focus to tangential or irrelevant tasks (e.g., Cardy & Miller, 2005; see the Alge and Hansen chapter in this volume).
Technology and Performance Appraisal • 83 In order for such systems to be effective, it is essential that employees fully understand and accept the purpose and nature of electronic performance monitoring, and that the managers conducting performance evaluations understand and use the full range of job-relevant criteria when generating ratings and providing feedback. Failure to do so may not only result in feedback that is not meaningful, such an approach can place constraints on what employees can do and how they do it, instead of allowing managers the flexibility to provide detailed, meaningful assessments of performance. Again, as powerful as these systems may be, it is vital to tailor the use of software to the people in the organization, not tailor the people to the software. Similarly, some electronic appraisal systems provide helpful services to managers, such as automatically generating performance dimensions for assessments (Cardy & Miller, 2005). Such computer-generated reports can be a fast and efficient aid to the appraisal process, and may help increase the speed with which performance feedback can be delivered (Johnson & Gueutal, 2011). However, such tools may lead managers to focus their assessments on machine-identified dimensions that can be either deficient or irrelevant to employees’ jobs (Gueutal & Falbe, 2005; Johnson & Gueutal, 2011). Ratings based on such dimensions would be useless at best, and could undermine employee satisfaction and confidence in their managers at worst. Again, it is vital that technology, no matter how advanced, be used as a tool for the appraisal process, and not as the driving force behind the process. A related concern with online appraisal systems is associated with such systems’ feedback delivery mechanisms. While the anonymity associated with feedback reports produced from such systems can be beneficial (as discussed previously), this can also pose a detriment to manager–subordinate relationships by increasing the perceived distance between supervisors and their employees (Cardy & Miller, 2005). This can result in negative outcomes such as decreased trust, perceptions of unfairness or procedural injustice, and reduced feedback acceptance. For instance, a study by Payne and colleagues (Payne et al., 2009) found that employees perceived feedback quality to be lower after their organization switched from a paper-based appraisal system to an online process. Worse still, reliance on the mechanisms of an online system to generate ratings can lead to supervisors providing feedback that they are unable to explain, creating a perception that the system is in charge, and not the supervisor. This can further undermine perceptions of managerial competence and fairness. In order to avoid such problems, electronic appraisal
84 • James L. Farr et al. systems should be used to support, not replace, the traditional face-to-face feedback process, and in-person meetings with one’s subordinates should still be conducted frequently. When such frequent, in-person feedback is provided, using data from online appraisals as a supplement, feedback is more likely to be perceived by employees as objective and useful (Johnson & Gueutal, 2011). Perhaps most importantly, it must be remembered that no matter how high-tech a system is, it cannot run well if people are not willing to use it appropriately. Specifically, “implementation that fails to consider trust, fairness, system factors, objectivity, personality, or computer literacy and training has negative implications for an organization’s distinct and inimitable human component” (Cardy & Miller, 2005, p. 151). Similarly, Levensaler (2008) notes that the potential benefits of electronic HR systems are only realized through the implementation of the system, and the organization’s support for its employees and culture. Again, it is essential to remember that even the most advanced systems cannot be effective without support from members at all levels of the organization. Pulakos and O’Leary (2011; O’Leary & Pulakos, 2011) have noted the need in any performance management system for considerable attention to be paid to the enhancement of manager–employee communications and relationships in order to improve trust in its processes and perceptions of fairness concerning the various outcomes and decisions based on its data. Although there are multiple purposes and applications for performance information, one of the most important is the provision of feedback to job incumbents about their strengths and weaknesses. In the following section we examine issues related to technologically-based performance feedback.
PERFORMANCE MANAGEMENT AND FEEDBACK IN ONLINE ENVIRONMENTS Feedback has long been demonstrated to be an important part of the performance appraisal process and research has shown, to varying degrees, that feedback can have a positive impact on organizational performance (e.g., Bourne, 1966; Hackman & Oldham, 1976; Kluger & DeNisi, 1996). Early definitions of performance feedback reflect the core of this concept in the most general sense, asserting that feedback is simply information received by an individual about his or her past behavior (Annett, 1969).
More modern definitions, however, are somewhat more specific and tend to acknowledge the feedback provider in addition to the recipient. Kluger and DeNisi (1996) describe feedback as “actions taken by external agents to provide information regarding some aspects of one's task performance” (p. 255), while Van Velsor, Leslie, and Fleenor (1997) define feedback as “information about a person's performance or behavior, or the impact of performance or behavior, that is intentionally delivered to that person in order to facilitate change or improvement” (p. 36). In addition to the relative convergence of modern definitions of performance feedback, it is also generally agreed that, for feedback to be useful and effective, it must be relevant, accurate/valid, specific, consistent, understandable, and timely (Baker, 2010; Mohrman, Resnick-West, & Lawler, 1989; Rummler & Brache, 1995). As previously discussed, electronic performance appraisal systems have the potential to facilitate such effective feedback by enabling supervisors to provide more frequent, behavior-specific feedback. Given the ubiquitous presence of technology in the modern workplace, a logical step forward is an exploration of how such advances are being harnessed to affect the utility and effectiveness of organizational feedback as it relates to job performance and the appraisal process more generally.
Technology in Performance Feedback: Attitudes and Relative Effectiveness
While one might understandably conclude that the study of the interaction between technology and performance feedback is a product of the current “digital age,” the reality is that this interaction has been explored for decades. Before modern advances (e.g., personal computers, the microprocessor, advanced graphical-interface operating systems such as Microsoft Windows, the internet, and computer networking) were commonplace in organizations, scholars suggested that technology could play a role as a feedback source. In the 1980s, when the personal computer (PC) was only beginning to spread into the work environment, Weick (1985) suggested that the use of technological sources of feedback (i.e., computers) posed two “threats”: 1) that technology psychologically distances workers from the source of performance information, decreasing the likelihood of appropriate utilization of the information; and 2) that technology permits the substitution of “machine skills” for intellectual involvement in work tasks, encouraging a sort of blind overreliance on machines (referred to by subsequent scholars
as “technomindlessness”) that may serve to decrease productivity and innovation. In response, Northcraft and Earley (1989) tested a series of hypotheses to demonstrate a lack of empirical support for the contention that technology-based feedback sources foster a sense of “technomindlessness” (p. 83). In a laboratory study, 55 participants engaged in a stock market simulation and received performance feedback from one of four sources: the organization, a supervisor, or self-generated feedback with or without the aid of a computer. The results demonstrated that involvement in the generation of feedback (i.e., the self-generated feedback condition) significantly influenced the credibility of feedback, strategy acquisition, and performance regardless of whether a computer was used, and that an individual's personal experience (i.e., expertise) with a computer did not significantly affect the credibility and psychological distance of feedback. The authors point to the importance of these findings in terms of the potential benefits of computer-generated feedback, as opposed to a fostering of “technomindlessness,” and discuss constructs, such as trust, that are now inherently bound into the modern study of feedback more generally. From a modern technological perspective, what is particularly interesting (and clearly prescient) is the suggestion that another experimental condition, whereby feedback is self-generated in the sense that the recipient uses the computer only to “display or retrieve” (p. 94) feedback, should be evaluated, and that great potential for involvement in the feedback generation process resides in the “possibility” of a database that could be readily accessible and which permits query-control and feedback presentation options. As it turns out, more than two decades later, this is precisely the sort of technology that is employed in modern organizations, where advances such as the internet, e-mail, and sophisticated, specialized software applications are often used to deliver, communicate, display, and retrieve performance feedback (including multi-source and/or 360 evaluations). However, the modern trend of using technological advances, including online performance feedback, particularly at the expense of more traditional methods of feedback, has been met with some empirically based criticism, both experimental and non-experimental. Such criticism is in line with our earlier suggestion that individuals' knowledge of and attitudes toward technology may influence the effectiveness of electronic performance appraisal systems. In an experimental study, Kurtzberg, Belkin, and Naquin (2006) argue that few efforts to empirically investigate reactions to performance feedback via different delivery methods have been conducted, and that extant studies provide inconsistent results. The authors
employed a scenario study with 171 business school students to investigate participants' differential attitudes regarding receipt of identical feedback via e-mail, hardcopy (i.e., paper), or a face-to-face interaction. It was found that participants responded most positively to performance feedback when it was delivered via hardcopy, and most negatively when it was delivered via e-mail (i.e., online). Kurtzberg, Belkin, and Naquin (2006) theoretically challenge the notion that all text-based media (e.g., hardcopy and e-mail) should be considered equally effective, concluding that, in organizational settings, e-mail will be viewed as less effective and less accepted for feedback delivery, particularly when the feedback contains some element of criticism (due, in part, to the incongruence of recipient expectations and the interpretation that computer communication is the psychologically easier choice for delivery of undesirable information). Additional experimental evidence supports the authors' claims (e.g., Riccomini, 2002; Smith & Ragan, 1999). Non-experimental studies provide additional empirical evidence that technologically delivered feedback is not always the preferred method of receipt. For example, a study by Huang et al. (2005) investigated the potential uses of performance feedback systems in the trucking industry as a means of improving safety. Truck drivers' reactions to technologically delivered versus human-delivered (i.e., supervisor) performance feedback were evaluated, and it was found that truck drivers generally preferred the latter. More specifically, the authors suggested that because truck drivers spend the majority of their working time alone and rarely interact with peers, it may be possible to use data gathered via in-vehicle technology to provide feedback to drivers regarding their driving behavior. The primary purpose of this study was to examine truck drivers' attitudes toward using this in-vehicle technology to provide feedback for enhancing driving safety and to understand the relative effectiveness of different ways of providing performance-based feedback to them. To do so, nine focus groups were conducted with a total of 66 participants, and the qualitative data on attitudes toward feedback technology were used to develop a survey, which was, in turn, used to collect quantitative data from 198 long-haul truck drivers. It should therefore be noted that the truck drivers did not actually receive performance feedback; rather, the study focused on drivers' attitudes toward such technology were it to be employed. Regardless, drivers reported that 1) more feedback was preferred to less feedback; and that 2) feedback from other truck drivers, supervisors, or managers was more desirable than nonhuman, technology-based feedback. However, the majority of drivers were
willing to accept feedback delivered by technology if the program itself was designed properly. Furthermore, the truck drivers expressed no strong preference regarding the best form of performance feedback (i.e., modality, frequency, and/or timing). Therefore, the authors concluded that it is important for technology-based feedback programs to be adaptable to varying preferences (Huang et al., 2005). The limited extant literature explicitly involving technology-based performance feedback provides a contradictory picture regarding attitudes about feedback communication technology. For example, Keil and Johnson (2002) suggest that students perceive e-mail to be a high-quality medium for delivering performance feedback on an exam, primarily due to its text-based nature and the ability to absorb the message content how and when one desires. Similarly, Kluger and DeNisi (1996) posit that computer-administered performance feedback might elicit positive reactions because of the time afforded to the message recipient to process the message without the need to react, become defensive, or manage immediate impressions. Both of these works, however, were theoretical rather than empirical. While some empirical work (e.g., Payne et al., 2009) has also suggested that users did not exhibit negative attitudes toward electronic systems, it is important to note that other research indicates that insufficient training can impair acceptance of such tools (Levensaler, 2008). While the effects of individuals' attitudes toward technology and technology-based appraisal systems are not clear-cut, what is apparent is that differences in attitudes regarding the role of modern technology in the performance feedback process do exist. But what is driving these differences? Despite earlier findings to the contrary (e.g., Northcraft & Earley, 1989), it may be that familiarity, perceived self-efficacy, and perceived competency with computers and related technologies are important moderators of reactions to, acceptance of, and trust in, technology-based performance feedback. Indeed, despite the “basic” technological aptitude often attributed to the average worker in the current digital age, many scholars warn that significant generational differences exist, which may in part be driving such attitudinal differences. Again, Levensaler (2008) identifies inadequate training and limited support for technology as the greatest barrier to successful implementation. While an exhaustive discussion of generational differences, and corresponding attitudes toward performance feedback technology, is outside the scope of this chapter, several fundamental tenets bear mention, in particular that individuals with less experience with technology may need increased training and support in order to feel comfortable using or receiving feedback from such systems.
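For readers who want to see what testing such a moderator could look like empirically, the sketch below simulates data and estimates an interaction term. Everything in it is hypothetical: the data are simulated rather than drawn from any study cited in this chapter, the variable names are our own, and the coefficient values are illustrative assumptions. The point is only that a claim such as "computer self-efficacy moderates reactions to electronically delivered feedback" corresponds, in analytic terms, to a nonzero interaction coefficient.

```python
import numpy as np

# Simulated illustration (not real data): does computer self-efficacy moderate
# reactions to electronically delivered performance feedback?
rng = np.random.default_rng(0)
n = 500
electronic = rng.integers(0, 2, n)     # 1 = feedback delivered via an electronic system
efficacy = rng.normal(0.0, 1.0, n)     # standardized computer self-efficacy
# Assumed data-generating model: efficacy buffers a negative effect of electronic delivery
acceptance = (5.0 - 0.6 * electronic + 0.2 * efficacy
              + 0.5 * electronic * efficacy + rng.normal(0.0, 1.0, n))

# Ordinary least squares with an interaction term; the last coefficient is the moderation effect
X = np.column_stack([np.ones(n), electronic, efficacy, electronic * efficacy])
beta, *_ = np.linalg.lstsq(X, acceptance, rcond=None)
for name, b in zip(["intercept", "electronic", "efficacy", "electronic_x_efficacy"], beta):
    print(f"{name:>22}: {b:+.2f}")
```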
In fact, the idea that individual differences, such as those highlighted above, influence feedback receptivity and use is not a new one (e.g., Ilgen, Fisher, & Taylor, 1979; Northcraft & Earley, 1989). As Northcraft and Earley (1989) suggest, familiarity or facility with computers may prove to be a key moderator of reactions to computers as a feedback source. More recently, the notion of generational differences as a driving force behind such a moderating effect was proposed by Prensky (2001) in his work delineating digital immigrants from digital natives. Prensky (2001) focuses primarily on the impact of such generational differences from a pedagogical perspective, suggesting that while some refer to modern students as the “N” (for Net) or “D” (for digital) generation, the most appropriate designation is digital natives, given that they are all “native speakers” of the digital language of computers, video games, and the internet (p. 1). Those born before the digital age are designated digital immigrants and continue to learn and adapt to technology (some better than others) but never lose their “accent” from the past. According to Prensky (2001), this “accent” is manifested in behaviors such as turning to the internet for information as a backup rather than by default, or reading an instruction manual for a given technology rather than assuming that the technology itself will teach one how to use it. In this way, older generations were socialized differently from subsequent generations, and are now in the process of learning a new language (Prensky, 2001). This phenomenon may, at least in part, help explain why there may be generational differences in attitudes toward the use of technology to deliver performance feedback, given that different generations may be speaking different “languages.” This same notion is echoed in a recent discussion by Baker (2010), who suggests that performance feedback needs to be tailored to the recipient, with great consideration given to the recipient's generation and length of employment. Given that generation “Y” (i.e., digital natives) is a technologically savvy group accustomed to receiving instant performance feedback via technology, managers and supervisors should consider that these same individuals are quite comfortable with instant messaging, e-mail, texting, and other means of feedback provision, while employees from other generations may consider such means less “personal” and prefer more traditional face-to-face feedback interactions (Baker, 2010). Ultimately, while technology can play a critical role in the speed and effectiveness of information processing, organization, and delivery, the final product still involves a human operator. As such, to fully develop human capital, as Baker (2010) suggests, the best method and amount of performance feedback need to suit the recipient
from a technological perspective and not be based solely on the preferences of the feedback provider.
Feedback and Computerized Performance Monitoring (CPM)
While the most common feedback technologies include traditional face-to-face assessments, multi-source feedback (e.g., 360 feedback), and coaching, a particularly intriguing performance management tool, computerized performance monitoring (CPM), is becoming increasingly popular. At the most fundamental level, CPM technology facilitates performance data collection by counting the number of work units completed during a specified time period and may include: tracking the length of time a computer terminal idles; the number of keystrokes pressed; an employee's working pace and/or degree of accuracy; log-in and log-off times; and customer orientation at any moment (Aiello & Kolb, 1995). In this way, CPM allows both for supervisor control over employee performance data for evaluation purposes and for employees to track their own progress and generate feedback data (Aiello & Kolb, 1995; Miller, 2003). A unique organizational context where CPM has been especially popular and effective in recent years is the e-service industry, or service that is provided via electronic means such as websites, e-mail, or the internet (Ehrhart & Chung-Herrera, 2008). In their recent examination of human resources management (HRM) practices in this environment, Ehrhart and Chung-Herrera (2008) point out that some companies combine traditional face-to-face client interactions with e-service (e.g., Barnes & Noble, Eddie Bauer), while other companies rely exclusively on e-service (e.g., Amazon.com). Furthermore, several companies, including IBM Corporation, offer CPM systems that can be customized to a particular organization's needs for those e-service companies that do not have the skills or resources to create their own CPM systems. This new and emerging technology provides a wealth of potential benefits, and it has been suggested that coupling CPM with Management-by-Objectives (a goal-setting technique) might be a particularly effective performance management approach for technology-laden jobs (Ehrhart & Chung-Herrera, 2008; Miller, 2003). However, this technological approach to performance management and feedback has also generated controversy because of its shortcomings, including the potential for abuse (Hawk, 1994). Foremost among these, according to Miller (2003), is trust, such that an
Technology and Performance Appraisal • 91 overreliance on CPM often reduces face-to-face interactions, weakening trust, and, in turn, decreasing productivity (Ilgen, Fisher, & Taylor, 1979). Furthermore, such close monitoring can impart “Big Brother” overtones, making employees uncomfortable or suspicious of their managers or the organization and further reducing trust (Cardy & Miller, 2005). Therefore, it is essential that supervisors continue to communicate with their employees in person, and convey that the systems (and associated feedback) will be used for developmental purposes. Furthermore, as Miller (2003) suggests, trust may also decrease when an individual believes that the intent of CPM is to monitor and control rather than to coach and develop, which also generates controversy with regard to worker privacy and security. On the contrary, Miller (2003) and others have suggested that employee self-efficacy may increase when one is permitted control to generate their own feedback via CPM technology as opposed to reliance on a supervisor. Another potential problem related to CPM is the likelihood that employees will over-emphasize the behaviors that they feel are being closely monitored or recorded (Johnson & Gueutal, 2011). If these monitored behaviors in fact cover the extent of an employee’s responsibilities, then such emphasis may be beneficial. However, it is much more likely that these criteria represent a small subset of employees’ responsibilities, and that over-emphasizing closely monitored criteria can lead to employee performance suffering in other, valued areas (Johnson & Gueutal, 2011). As such, when implementing an electronic performance appraisal system, detailed feedback on all job-relevant aspects of employee performance becomes even more valuable; by providing more holistic evaluations, supervisors can discourage employees from focusing too closely on any one aspect of their jobs. This recommendation assumes that supervisors will be effective in their use of the information maintained by an electronic performance appraisal system. However, this assumption does not always hold true. As previously discussed, online performance appraisal systems often make it easy to track and maintain data on a large variety of performance criteria (Ehrhart & Chung-Herrera, 2008). While such data can be collected regularly and used to provide frequent feedback to employees, such easy data collection can lead supervisors to focus on minutiae that may not be important in the long run. Similarly, if not properly trained, managers could fall into the trap of using the system as a “checklist,” crossing off tasks when their employees complete them, or making a note when they fail to do so (Cardy & Miller, 2005; Johnson & Gueutal, 2011).
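As a concrete, deliberately simplified illustration of this criterion-deficiency concern, the sketch below rolls hypothetical monitoring logs up into the kind of narrow metrics a CPM dashboard might report. The field names and numbers are assumptions of ours rather than any vendor's actual specification; the point is how readily such a system reduces a job to a few countable behaviors, leaving unmonitored contributions invisible.

```python
from dataclasses import dataclass

@dataclass
class WorkSession:
    minutes_logged: int
    minutes_idle: int
    keystrokes: int
    units_completed: int
    errors: int

def cpm_metrics(sessions):
    """Roll raw monitoring logs up into the narrow counts a CPM dashboard might show."""
    logged = sum(s.minutes_logged for s in sessions)
    idle = sum(s.minutes_idle for s in sessions)
    units = sum(s.units_completed for s in sessions)
    errors = sum(s.errors for s in sessions)
    return {
        "idle_pct": round(100 * idle / logged, 1),
        "units_per_hour": round(60 * units / (logged - idle), 1),
        "error_rate": round(errors / max(units, 1), 3),
        # Note what is not captured here: helping coworkers, creative problem solving,
        # customer rapport (the unmonitored criteria this section warns about).
    }

week = [WorkSession(480, 60, 21000, 95, 3), WorkSession(450, 45, 19500, 88, 2)]
print(cpm_metrics(week))
```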
92 • James L. Farr et al. In order to prevent such misuse, and to ensure that employees receive meaningful performance feedback that engenders trust, the organization must communicate to managers that the appraisal system should be used to support the values of the organization (Levensaler, 2008). The discussion above regarding generational differences in acceptance of, and trust in, emerging feedback technologies has implications for CPM as well. As Earley (1988) has suggested, employees with less experience and familiarity with computers and related technology may be more likely to reject performance feedback generated via CPM systems. The first two of Miller’s (2003) 12 research propositions regarding CPM effectiveness offer support for this idea: 1) computer literacy will influence satisfaction with appraisal in CPM environments, such that individuals with greater computer literacy will experience higher levels of appraisal satisfaction; and 2) appraisal satisfaction in CPM environments will rise concurrently with greater investments in employee literacy and training. Virtual Teams and Performance Management Another sector of the modern workplace where the union of rapidly emerging technology with performance management has important implications is virtual teams. In response to an increasingly decentralized and global work environment, many organizations have implemented virtual teams, whereby group members who are geographically or temporally dispersed work together. One of the consequences of this loss in proximity is a heavy reliance on modern technologies, such as e-mail, instant messaging, “smart” phones, and video-conferencing, to communicate and deliver feedback. The confluence of technological advancement and the need to adjust to the modern work environment has created a situation whereby most large organizations, to varying degrees, employ virtual teams (e.g., Gibson & Cohen, 2003; Hertel, Geister, & Konradt, 2005). However, despite the rapid growth of virtual teams and supporting technologies, relatively little is known about how to effectively manage the human resource components, including performance feedback, within these teams (Hertel, Geister, & Konradt, 2005; Kirkman et al., 2004). One area that has received some empirical attention is Electronic Performance Monitoring (EPM), which, for the purposes of this chapter, is synonymous with CPM (addressed above) in that both involve the use of computer-based systems to record various aspects of performance. In their recent review of empirical research on managing virtual teams, Hertel, Geister, and Konradt (2005) claim that most studies reveal some evidence
Technology and Performance Appraisal • 93 that EPM is associated with increased stress among employees. However, the authors note that such EPM effects are also highly variable and buffered to a great extent when employee participation, system input, team cohesion and individual differences such as locus of control are taken into account. Despite mixed evidence, Hertel, Geister, and Konradt (2005) conclude that EPM, overall, is not well suited for virtual team management but favor a more delegative system whereby certain managerial functions—to potentially include performance feedback—are shifted to team members. Of notable promise and compromise, however, might be the provision of certain EPM data to team members as part of a multi-source/360 feedback scenario, which may serve to foster trust among virtual team members. In fact many scholars, including MacDuffie (2008), suggest that peer assessments as part of the feedback may also prove particularly important for virtual teams in terms of building a stronger sense of team unity and identity, as well as trust among team members. As noted above, this may be an area where EPM (or CPM) may be useful for virtual teams in terms of providing team members with peer-related performance data for multisource feedback. MacDuffie (2008) explains that virtual teams benefit as much as non-distributed teams from certain fundamental performance management concepts such as the provision of clear goals and objectives, participation in setting these goals, and receipt of performance-related feedback. However, he also suggests that, due to the decreased likelihood of informal feedback during direct personal encounters, explicit performance feedback may be particularly helpful and important for members of distributed teams. Furthermore, MacDuffie (2008) explains that trust is emergent and evolutionary for all teams, but that teams with distributed members often experience greater difficulty in establishing trust, which may, in part, be a function of experiencing greater conflict and mistaken attributions. The empirical literature on trust and virtual teams presents a mixed picture with regard to trust and virtual teams, but generally points to the notion that trust, while clearly attainable for virtual teams, may arise differently than it does for traditional teams, and is heavily dependent on electronic communication (e.g., Child, 2001; Zheng et al. 2001). In the modern workplace, electronic communication is possible via numerous methods (e.g., video-conferencing, texting, instant messaging) that provide real-time communication as well as virtual face-to-face encounters. As such, it is likely that such technology will continue to become increasingly sophisticated and, as a result, provide virtual teams with access to powerful communication tools that will undoubtedly impact trust among group members.
What is apparent from the preceding discussion is that technology provides a mixed bag of benefits and drawbacks for managing virtual teams, including the provision of performance feedback, and that there is likely no single technological advancement that uniformly presents a best-practice solution for organizations. As Fligo, Hines, and Hamilton (2008) suggest, every job and every company is unique, and there are likely no “cookie cutter” tools in the assessment world that can function to promote and evaluate virtual team performance across all jobs and cultures (p. 542).
TECHNOLOGY AND PERFORMANCE APPRAISAL: SOME CONCLUDING REMARKS
A simple internet search is all that is needed to discover the dizzying array of off-the-shelf performance appraisal and feedback software options that exist in today's digital age and, while there are clear benefits to such technological advancement, there are also potential drawbacks. Indeed, decisions regarding which technologies might be used should be weighed carefully and tailored to the organization in question. As Cardy and Miller (2005) suggest, technology does provide great positive possibilities (i.e., the "light side"), but negative outcomes, while typically unintended, can be part of the advancement (i.e., the "dark side"). The key, of course, is to maximize the benefits while minimizing the drawbacks and negative outcomes. To do so, Cardy and Miller (2005) and Pulakos (2009) provide numerous recommendations for the implementation and maintenance of technology-based performance management systems, including:
1. Monitor employee satisfaction with appraisal via regular surveys or focus groups.
2. Bolster trust in feedback by permitting employees direct access to feedback data and some level of control in system processes.
3. Remain cognizant of relationships between demographic and personality factors and workplace technology use, and be willing to adapt as appropriate.
4. Provide manager and employee training for performance appraisal software and for understanding the potential advantages of the performance management system if properly used.
5. Avoid allowing technology to intervene between a manager and employee by replacing face-to-face discussions with sole reliance on computer-generated feedback.
6. Evaluate with multiple metrics (important to various stakeholder groups) and continually improve the system.
We do not disagree with any of these recommendations, although we are somewhat uncomfortable with the "thin" research base that supports some of them. In particular, we know little about the specific contextual factors that moderate the details of how we implement them. As performance management systems become more fully integrated both technologically and functionally, doing rigorous research on their effectiveness becomes more challenging but also more critical. Quasi-experimental designs that utilize the sequential application of system features have the potential to add much to our knowledge base, and we encourage their use. See Mayer and Davis (1999) for an example of such a design related to performance appraisal system implementation and Grant and Wall (2009) for a general discussion of these designs.
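To illustrate, very roughly, the logic of sequentially applying system features, the sketch below compares pre-post changes in unit-level appraisal-satisfaction means for units that have already received a new feature against units still waiting for it, in the spirit of a difference-in-differences contrast. The units, numbers, and names are entirely hypothetical, and this is a sketch of the analytic idea only, not a substitute for the design guidance in Grant and Wall (2009).

```python
from statistics import mean

# Hypothetical unit-level survey means: (unit, received_feature, satisfaction_pre, satisfaction_post)
units = [
    ("plant_a", True,  3.1, 3.6),
    ("plant_b", True,  3.3, 3.7),
    ("plant_c", False, 3.2, 3.3),
    ("plant_d", False, 3.0, 3.0),
]

def mean_change(rows, received):
    """Average pre-to-post change for units that did (or did not yet) receive the feature."""
    return mean(post - pre for _, r, pre, post in rows if r == received)

treated = mean_change(units, True)
waiting = mean_change(units, False)
# The contrast nets out whatever shifted satisfaction for everyone during the same period.
print(f"treated change={treated:+.2f}, waiting change={waiting:+.2f}, "
      f"estimated feature effect={treated - waiting:+.2f}")
```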
AUTHOR NOTE The authors thank Allan Church, Russell Lobsenz, and Nathan Mondragon for very helpful discussions of their experiences with technology-based performance appraisal and management systems.
REFERENCES Aiello, J. R., & Kolb, K. J. (1995). Electronic performance monitoring and social context: Impact on productivity and stress. Journal of Applied Psychology, 80, 339–353. Annett, J. (1969). Feedback and human behavior. Baltimore, MD: Penguin. Atkins, P. W. B., Wood, R. E., & Rutgers, P. J. (2002). The effects of feedback format on dynamic decision making. Organizational Behavior and Human Decision Processes, 88, 587–604. Baker, N. (2010). Employee feedback technologies in the human performance system. Human Resource Development International, 13(4), 477–485. Bourne, L. E. (1966). Comments on professor I. M. Bilodeau’s paper. In E. Bilodeau (Ed.), Acquisition of skill. New York: Academic Press. Bracken, D. W., Summers, L., & Fleenor, J. (1998). High-tech 360. Training & Development, 52, 42–45.
96 • James L. Farr et al. Bracken, D. W., Timmreck, C. W., & Church, A. H. (2001). The handbook of multisource feedback. San Francisco: Jossey-Bass. Cardy, R. L., & Miller, J. S. (2005). eHR and performance management: A consideration of positive potential and the dark side. In: H. G. Gueutal & D. L. Stone (Eds.), The brave new world eHR: Human resources management in the digital age (pp. 138–165). San Francisco: Jossey-Bass. Child, J. (2001). Trust-the fundamental bond in global collaboration. Organizational Dynamics, 29, 274–288. Earley, P. C. (1988). Computer-generated performance feedback in the magazinesubscription industry. Organizational Behavior and Human Decision Processes, 41, 50–64. Ehrhart, K. H., & Chung-Herrera, B. G. (2008). HRM at your service: Developing effective HRM systems in the context of e-service. Organizational Dynamics, 37, 75–85. Fligo, S. K., Hines, S., & Hamilton, S. (2008). Using assessments to predict successful virtual team collaboration performance. In J. Nemiro, M. M. Beyerlin, L. Bradley, & S. Beyerlin (Eds.), The handbook of high performance virtual teams: A toolkit for collaborating across boundaries, pp. 533–552. San Francisco: Jossey-Bass. Gibson, C. B., & Cohen, S. G. (2003). Virtual teams that work. Creating conditions for virtual team effectiveness. San Francisco: Jossey-Bass. Grant, A. M., & Wall, T. D. (2009). The neglected science and art of quasi-experimentation: Why-to, when-to, and how-to advice for organizational researchers. Organizational Research Methods, 12, 653–686. Greengard, S. (1999). How to fulfil technology’s promise. Workforce, 4, 10–18. Gueutal, H. G., & Falbe C. M. (2005). eHR: Trends in delivery methods. In: H. G. Gueutal, & D. L. Stone (Eds.), The brave new world eHR: Human resources management in the digital age (pp. 190–225). San Francisco: Jossey-Bass. Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16, 250–279. Hawk, S. R. (1994). The effects of computerized performance monitoring: An ethical perspective. Journal of Business Ethics, 13, 949–957. Herbert, B. G., & Vorauer, J. D. (2003). Seeing through the screen: Is evaluative feedback communicated more effectively in face-to-face or computer-mediated exchanges? Computers in Human Behavior, 19, 25–38. Hertel, G., Geister, S., & Konradt, U. (2005). Managing virtual teams: A review of current empirical research. Human Resources Management Review, 15, 69–95. Huang, Y. H., Roetting, M., McDevitt, J. R., Melton, D., & Smith, G. S. (2005). Feedback by technology: Attitudes and opinions of truck drivers. Transportation Research, 8, 277–297. Hunt, S. T. (2011). Technology in transforming the nature of performance management. Industrial and Organizational Psychology, 4, 188–189. Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on behavior in organizations. Journal of Applied Psychology, 64, 349–371. Johnson, R. D., & Gueutal, H. G. (2011). Transforming HR through technology: The use of E-HR and HRIS in organizations. Society for Human Resource Management Effective Practice Guidelines Series. Alexandria, VA. Keil, M., & Johnson, R. D. (2002). Feedback channels: Using Social Presence Theory to compare voice mail to e-mail, Journal of Information Systems Education, 13, 295–302. Kirkman, B. L., Rosen, B., Tesluk, P. E., & Gibson, C. B. (2004). The impact of team empowerment on virtual team performance: The moderating role of face-to-face interaction. 
Academy of Management Journal, 47, 175–192.
Technology and Performance Appraisal • 97 Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284. Kurtzberg, T. R., Belkin, L. Y., & Naquin, C. E. (2006). The effect of e-mail on attitudes towards performance feedback. International Journal of Organizational Analysis, 14(1), 4–21. Kurtzberg, T. R., Naquin, C. E., & Belkin, L. Y. (2005). Electronic performance appraisals: The effects of e-mail communication on peer ratings in actual and simulated environments. Organizational Behavior and Human Decision Processes, 98, 216–226. Landy, F. J., & Farr, J. L. (1983). The measurement of work performance: Methods, theory, and applications. New York: Academic Press. Levensaler, L. (2008). The essential guide to employee performance management systems (Part 2). Bersin & Associates Research Report. MacDuffie, J. P. (2008). HRM and distributed work: Managing people across distances. In J. P. Walsh & A. P. Brief (Eds.), The academy of management annals, pp. 549–615. New York: Taylor & Francis Group/Lawrence Erlbaum Associates. Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology, 84, 123–136. Miller, J. S. (2003). High tech and high performance: Managing appraisal in the information age. Journal of Labor Research, 24(3), 409–424. Mohrman, A. M., Resnick-West, S. M., & Lawler, E. E. (1989). Designing performance appraisal systems. San Francisco: Jossey-Bass. Morgenson, F. P., Mumford, T. V., & Campion, M. A. (2005). Coming full circle: Using research and practice to address 27 questions about 360-degree feedback programs. Consulting Psychology Journal: Practice and Research, 57, 196–209. Neary, D. B. (2002). Creating a company-wide, on-line, performance management system: A case study at TRW, Inc. Human Resource Management, 41(4), 491–498. Northcraft, G. B., & Earley, P. C. (1989). Technology, credibility, and feedback use. Organizational Behavior and Human Decision Processes, 44, 83–96. O’Leary, R. S., & Pulakos, E. D. (2011). Managing performance through the manager– employee relationship. Industrial and Organizational Psychology, 4, 208–214. Payne, S. C., Horner, M. T., Boswell, W. R., Schroeder, A. N., & Stine-Cheyne, K. J. (2009). Comparison of online and traditional performance appraisal systems. Journal of Managerial Psychology, 24(6), 526–544. Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, NCB University Press, 9(5), 1–6. Pulakos, E. D. (2009). Performance management: A new approach for driving business results. Chichester, UK: Wiley-Blackwell. Pulakos, E. D., & O’Leary, R. S. (2011). Why is performance management broken? Industrial and Organizational Psychology, 4, 146–164. Riccomini, P. (2002). The comparative effectiveness of two forms of feedback: Web-based model comparison and instructor delivered corrective feedback. Journal of Educational Computing Research, 27(3), 213–228. Rummler, G. A., & Brache, A. P. (1995). Improving performance: How to manage the white space on the organizational chart (2nd edn). San Francisco: Jossey-Bass. Smith, P. L., & Ragan, T. J. (1999). Instructional design (2nd edn). New York: John Wiley and Sons.
Summers, L. (2001). Web technologies for administering multisource feedback programs. In D. Bracken, C. W. Timmreck, & A. H. Church (Eds.), The handbook of multisource feedback (pp. 165–180). San Francisco: Jossey-Bass. Sussman, S. W., & Sproull, L. (1999). Straight talk: Delivering bad news through electronic communication. Information Systems Research, 10, 150–166. Taleo Corporation (2008). Taleo case study: Freeport-McMoRan Copper & Gold, Inc. Dublin, CA. Van Velsor, E., Leslie, J. B., & Fleenor, J. W. (1997). Choosing 360: A guide to evaluating multirater feedback instruments for management development. Greensboro, NC: Center for Creative Leadership. Weick, K. E. (1985). Cosmos vs. chaos: Sense and nonsense in electronic contexts. Organizational Dynamics, 14, 50–65. Weisband, S., & Atwater, L. (1999). Evaluating self and others in electronic and face-to-face groups. Journal of Applied Psychology, 84, 632–639. Zheng, J., Bos, N., Olson, J. S., & Olson, G. M. (2001). Trust without touch: Jump-start trust with social chat. Paper presented at the CHI-01 Conference on Human Factors in Computing Systems, Seattle, WA.
5 Teams and Technology Jonathan Miles and John R. Hollenbeck
This chapter expands the dimensional scaling approach used for team description by Hollenbeck, Beersma, and Schouten (2012) by adding two dimensions to deal with team communication: interaction frequency and interaction quality. Beginning with a thorough review of research on technology and teams, the authors identify the key fundamental constructs that have been used to separate virtual teams from their face-to-face counterparts, and point out that this separation is ultimately harmful to further research in this area. The chapter instead proposes using a set of dimensional scales to define teams, regardless of their virtual or face-to-face nature. Using examples from existing literature, the authors examine the moderating effects of interaction frequency and interaction quality. Finally, the chapter discusses the contribution of this new dimensional scaling approach to future research on virtual teams.
INTRODUCTION The use of work teams in contemporary organizations has been well-publicized over the last two decades. Companies in the United States, encouraged by the success of Japan’s team-oriented workplaces, began basing their compensation systems on team-based pay as early as the 1970s. In 1990, 59 percent of companies surveyed used such team-based work systems, a number that rose to 80 percent by 1999 (Garvey, 2002). Teams have also played an increasingly large role in the way that organizations compete with one another, and research consensus indicates that team-based structures make organizations better able to adapt to novel events and create meaningful, effective roles for individual employees (Ilgen et al., 2005).
100 • Jonathan Miles and John R. Hollenbeck More recently, team-based structures have been shown to have another advantage: through the use of new technologies, these structures are not limited by spatial or temporal requirements. While it was at one time impossible for employees in far-flung locations and time zones to work effectively with one another, improvement in communication technology has significantly lessened such concerns. A 2008 I4CP Taking the Pulse survey of managers in 250 companies found that 67 percent thought virtual teams would skyrocket in importance in the future. For companies in the survey with more than 10,000 employees, the number was 80 percent (Stillman, 2008). This explosion in the importance of virtual teams is often due to the need for global organizations to populate teams with the best employees possible regardless of their location. This growth of use in industry has been paralleled by a growing interest in the research community on virtual teams. Although early research made simple gross distinctions between virtual and face-to-face teams (Guzzo & Dickson, 1996), increasingly refined differentiations have resulted in a view that suggests that “virtualness” may be a useful dimension on which all teams could be rated (Griffith & Neale, 2001; Griffith, Sawyer, & Neale, 2003). Most recently, researchers have begun to look more deeply into the differences between the many different types of teams that could be qualified as virtual teams (Martins, Gilson, & Maynard, 2004). While the recognition of the varying types of virtual teams is certainly an improvement over the gross characterization of teams into virtual vs. face-to-face categories, the number of varying taxonomic approaches for virtual teams continues to grow. Instead of continuing to add to this number of different taxonomies of virtual teams, it would be best to eliminate much of the confusion by applying a dimensional scaling approach. Hollenbeck, Beersma, and Schouten (2012) recently provided just such a framework for work teams, which proposed the possibility of a dimensional scale to accommodate the special differentiation common in virtual teams. The purpose of this chapter is to provide a brief overview of the current research on virtual teams, concentrating on those areas in which such teams behave differently than traditional face-to-face teams. After this background is in place, we will provide a set of two dimensional scales (interaction frequency and interaction quality), which should be used to specify how often and how well interaction occurs between members of a team. These dimensional scales, when combined with three others, will then represent a method of classifying teams that will be inclusive of whether the team in question is face-to-face or virtual. Finally, the chapter will propose how
this new dimensional scaling approach lends itself to providing future research directions for studying teams in an environment of constant technological advancement.
THE IMPACT OF DISTANCE AND TECHNOLOGY ON TEAMS While the concept of a virtual team is not new, producing a single definition of the term from the literature is difficult if not impossible. A recent review of the literature on virtual teams was forced to state that “the foundation for the majority of definitions is the notion that [virtual teams] are functioning teams that rely on technology-mediated communication while crossing several different boundaries” (Martins, Gilson, & Maynard, 2004, p. 807). In this context, the boundaries that have received the most research are geographical, temporal, and organizational, with the majority of attention being paid to the first two. The two aspects that seem constant for all definitions of virtual teams are (a) that they are made up of members who are spatially or temporally distributed; and (b) that the use of communication technology is required to bridge this spatial or temporal gap. It is this inclusion of distance and technology that ultimately creates the main differences between face-to-face teams and virtual teams. In many ways, the MIS literature on technology adaptation has been the most useful in understanding how the introduction of technology to team interactions can produce new challenges. In general, the phenomenon of technological adaptation has been examined from an organizational standpoint, beginning with a model of mutual adaptation where the misalignments between the demands of the technology and the routines and norms of the organization are slowly rectified by a series of changes in each (Leonard-Barton, 1988). More recently, however, adaptive structuration theory holds that the features and spirit of the technology, in concert with the structure of the organization and group, create pressures to modify existing routines or develop new social structures. Generally, these structures result in modifying organizational policies first, because the technological elements are thought to be more difficult to change (DeSanctis & Poole, 1994). A later study found that the technology might be changed at a later time, when it is found too inflexible to appropriate into reasonable social structures (Majchrzak et al., 2000). In any case, however, the temporal window in which these changes must be made before
behavioral routinization makes any change difficult is relatively small (Tyre & Orlikowski, 1994). These studies point out that adding a technological adaptation layer onto the standard interpersonal socialization processes involved in teamwork can produce quite different outcomes than would be experienced without this additional burden. Essentially, by adding technology to the mix, managers are imposing a learning and socialization task on employees above and beyond any actual task work that may be required. Straus and McGrath (1994) demonstrated that this added burden is not generally a problem for low complexity tasks, but that it becomes a serious issue for tasks with high complexity, such as those that normally would require a virtual team. Overcoming the additional task of technology adaptation generally requires a level of organizational support, especially in the areas of motivation and psychological safety (Gibson & Gibbs, 2006; Edmondson, Bohmer, & Pisano, 2001). The differences between face-to-face and virtual teams do not cease once team members are willing and able to make use of the communication technology, however. Driskell, Radtke, & Salas (2003) provide an excellent breakdown of the inherent problems the literature has found with computer-mediated communication (CMC). Their paper places the limitations of CMC into four categories: cohesiveness, status relations, counternormative behavior, and communication. These four categories, however, are separated by the outcomes, rather than the underlying causes, of differences between virtual and face-to-face teams. Each of these four categories of outcomes can be explained by one or both of the two most common differences found between CMC and face-to-face communication: reduction in mutual knowledge and lack of social and status cues. When information is exchanged over a medium that does not have the richness of face-to-face communication, there is bound to be some loss. These losses may seem slight at first, but they can cause subtle changes that become big problems. Researchers have found that, in CMC, team members did not communicate local context to others, failed to distribute the same information to all team members, had difficulty understanding and communicating the salience of information, accessed information at different speeds, and had difficulty interpreting the meaning of silence. The result of these five communication failures was a fundamental lack of mutual knowledge in the team, which tends to result in members making internal (rather than external) attributions for the behavior of their peers (Cramton, 2001). This lack of mutual knowledge was confirmed in a later study, which found that virtual teams had higher levels of confusion and
Teams and Technology • 103 lower levels of satisfaction than their face-to-face counterparts, as well as less accuracy recording their decisions (Thompson & Coovert, 2003). These findings point out a large gap between face-to-face and virtual teams, especially considering the recent support for shared mental models as keys to team learning and performance outcomes (Marks, Zaccaro, & Mathieu, 2000; Mathieu et al., 2000). The second important loss of richness from face-to-face communication to CMC is that of social and status cues. During a standard face-to-face conversation, thousands of subtle cues are exchanged between members that show their attentiveness, approval level, relative status, desire to be next to contribute, etc. These social cues allow for a tremendous amount of information exchange, which occurs nonverbally within the conversation context. Some consequences of the lack of social and status cues are longer time periods required to come to a decision and more uninhibited behavior (such as inflammatory comments) in virtual teams (Siegel et al., 1986). Interestingly enough, however, a lack of social and status cues also reduces the chances that team members are distracted or biased in their treatment of each other. In other words, virtual teams have more social and status equalization, with decisions being spread more equally among all members, regardless of their social standing (Dubrovsky, Kiesler, & Sethna, 1991; Siegel et al., 1986). When presented with a decision subject to possible escalation of commitment, virtual teams are more likely than face-to-face teams to avoid continuing adherence to a failing course of action (Schmidt, Montoya-Weiss, & Massey, 2001). Even when virtual teams make less accurate decisions than their face-to-face counterparts, the lack of status cues makes it easier for leaders to be unbiased in their assessment of team member effectiveness (Hedlund, Ilgen, & Hollenbeck, 1998). A lack of social and status cues is thus both a positive and negative force when it comes to team effectiveness. While there are many other issues that have been raised as possible shortcomings or disadvantages for virtual teams when compared to face-to-face teams, they generally appear to be problems that occur for all teams, regardless of their level of “virtualness.” For example, both cultural differentiation and short team tenure have been provided as possible definitional elements of virtual teams, but these are problems that simply occur more frequently in teams with spatial and temporal differentiation. When such situations occur in face-to-face teams, they cause the same problems and lack of coordination. In fact, as the use of virtual teams becomes more common, a smaller and smaller percentage of virtual teams will be subject to cultural differentiation and short team
tenure. We will deal more with this later in the chapter. As seen thus far, the two parts of our definition of virtual teams (spatial or temporal differentiation and CMC technology) lead to three distinct differences between them and their face-to-face counterparts: technology adaptation, lack of mutual knowledge, and reduction in social or status cue information. Before we can begin to categorize virtual teams, it will be important to first understand how to categorize face-to-face teams. Next, we will examine a method for the dimensional scaling of team attributes. By examining how these dimensions were derived, we can use a similar method to develop additional dimensions that will include virtual teams.
A DIMENSIONAL SCALING APPROACH TO TEAM ATTRIBUTES For many years, teams were defined by a set of taxonomies, each of which sought to be the most inclusive method of dividing teams into convenient categories. In a recent article, Hollenbeck, Beersma, and Schouten (2012) identified several issues with the use of a categorical framework that make it inferior to a dimensional scaling approach. Categories force taxonomic variables into an either/or, dichotomous relationship regardless of whether the actual construct has such a relationship, a practice that can cause serious methodological problems (MacCallum, Zhang, Preacher, & Rucker, 2002). Adding new variables to such a categorical framework results in a dramatic increase in complexity (from a 2x2x2 to a 2x2x2x2, etc.), which makes working with such frameworks limiting. Finally, even if the underlying dimensions in such a framework are not dichotomous, they are likely to be normally distributed, so imposing categorical cut points leaves most of the sample bunched around the mean and reduces variance in the measure. Beyond the methodological limitations of the existing taxonomies, they were also often in competition with one another, lacking any sort of theoretical or contextual consensus. Initially, teams were organized into four types based on a set of nine underlying dimensions (Sundstrom, De Meuse, & Futrell, 1990). This was followed by a series of studies and meta-analyses, many of which chose to provide their own, new categorization of team types based on categories of their choosing (Ancona & Caldwell, 1992; Cohen & Bailey, 1997; Devine et al., 1999; De Dreu & Weingart, 2003; Klein et al., 2006; Salas et al., 2008). This constant re-invention of team typologies is covered in detail in Hollenbeck, Beersma, and Schouten
Teams and Technology • 105 (2012), who pointed out that the result was a literature unable to come to satisfying theoretical consensus about its central topic. The use of typologies for such a wide-ranging area as team research had led to a confusing collection of differing viewpoints. To break free from this cycle of typology generation and provide a more useful method for categorizing teams, Hollenbeck, Beersma, and Schouten (2012) examined the existing team typologies for common underlying dimensions. Instead of typing teams with a categorical framework, they looked for dimensional scales that can be expressed with continuous variables. The authors surveyed the existing literature and found a common set of themes that ran through the various taxonomies. By examining how groups differ in their horizontal interdependence, their vertical interdependence, and the strength of their in-group/out-group boundary, it was possible to classify groups based on a scale with three dimensions. First, teams differ in skill differentiation, the degree to which members have specialized knowledge or functional capacities that make it difficult to substitute one member for another. Teams also have different levels of authority differentiation, the degree to which decision-making responsibility is vested in individual members versus the collective as a whole. Finally, teams exhibit a level of temporal stability, the degree to which membership in the team is stable over time and marked by a lack of team member turnover (Hollenbeck, Beersma, & Schouten, 2012). By assigning a continuous value to each of these three dimensional scales, it is possible to categorize any face-to-face team, and the scale should allow for more productive theory-building in the future. To begin with, these scales have research consensus behind their definitions, as elements of one or more of these dimensions were present in each of the team types that had been specified in the literature. In addition, on a practical note, these dimensions are the three main drivers of structure, and can be used as the main dimensions of organizational charts (Hollenbeck, Beersma, & Schouten, 2012). Unfortunately, this model was created without taking virtual teams into account, and thus will need to be modified slightly to include the rich and varied types of virtual teams in the literature. Luckily, the method used to determine these different underlying dimensional scales should work just as well on the numerous articles attempting to define and classify virtual teams as it did when used on the face-to-face team literature. By starting with a review of the attempts to categorize virtual teams, it should be possible to determine what dimensions are fundamental to describing them.
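To make the contrast with categorical typologies concrete, the sketch below is our own illustration, not part of Hollenbeck, Beersma, and Schouten's framework: it records a team as a point on the three continuous scales rather than assigning it to a type. The 0-to-1 scoring range and the example values are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class TeamProfile:
    """Illustrative record of a team scored on the three dimensional scales.

    Each scale is treated as a continuous value in [0, 1]; the range is an
    assumption made for this sketch, not a prescribed metric.
    """
    skill_differentiation: float      # specialized, hard-to-substitute members
    authority_differentiation: float  # decision authority vested in individuals vs. the collective
    temporal_stability: float         # stable membership, little member turnover

# A long-standing surgical team might score high on all three scales,
# whereas a short-lived ad hoc task force might score low on stability.
surgical_team = TeamProfile(skill_differentiation=0.9,
                            authority_differentiation=0.8,
                            temporal_stability=0.7)
task_force = TeamProfile(skill_differentiation=0.4,
                         authority_differentiation=0.3,
                         temporal_stability=0.2)
```

Because every team receives a value on every scale, no team has to be forced into an either/or category, which is the methodological point made above.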
ADDING DIMENSIONAL SCALES: INTERACTION FREQUENCY AND INTERACTION QUALITY Much like the literature on teams in general, the literature on virtual teams has featured many attempts to define the dimensions of its core construct. (For reference, the various virtual team dimensions proposed by the literature have been collected in Table 5.1.) These attempts have generally sought to identify and define virtual teams by the features that either enhance or limit the members’ abilities to communicate effectively with one another. Recall that a virtual team is defined as a team with members who are distributed temporally and/or spatially, and who use CMC to bridge those temporal or spatial gaps. Based on this, the choice of communication as a focus makes sense both because communication seems to be the main difference between face-to-face and virtual teams and because different virtual teams vary widely in the effectiveness of their communication.
TABLE 5.1 Summary of Virtual Team Dimensions

Jarvenpaa & Leidner (1999)
  Context (T, F, Q): Similarity or difference in team member culture and geographical location
  Interaction Mode (Q): Team communication via face-to-face, CMC, or both
  Type of Group (T): Presence or absence of common team history and common team future

Griffith & Neale (2001)
  Time Spent Working Apart (F, Q): Percentage of work done with members distributed in time and space
  Technological Support (F, Q): Level of communication and documentation support used by team

Bell & Kozlowski (2002)
  Spatial Distance (F, Q): Degree to which team crosses spatial boundaries
  Information, Data, and Personal Communication (F, Q): Richness of communication medium used for team interaction
  Temporal Distribution (F, Q): Degree to which team crosses temporal boundaries
  Boundary Spanning (T): Degree to which team crosses functional, national, or organizational boundaries
  Lifecycle (T): Stability of team membership and length of team tenure
  Member Roles (S): Number of different roles each team member is required to fill

Griffith, Sawyer, & Neale (2003)
  Time Apart (F, Q): Percentage of work done with members distributed in time and space
  Level of Tech Support (F, Q): Communication, documentation, and/or decision support used by team
  Physical Distance (F, Q): Distribution of physical locations occupied by team members

Shin (2004)
  Spatial Dispersion (F, Q): Extent to which team members are distributed spatially
  Temporal Dispersion (F, Q): Degree to which team members operate asynchronously
  Cultural Dispersion (T): Extent to which teams are made up of members from different cultures
  Organizational Dispersion (T): Extent to which team members exist outside the organizational boundary

Kirkman & Mathieu (2005)
  Extent of Reliance on Virtual Tools (F, Q): Degree to which team uses virtual tools to communicate and perform tasks
  Informational Value (Q): Richness of communication medium between team members
  Synchronicity (Q): Degree of synchronicity in communication between team members

Gibson & Gibbs (2006)
  Geographic Dispersion (T, F, Q): Physical distance between team members
  Electronic Dependence (F, Q): Relative amount of CMC versus face-to-face communication
  Dynamic Structural Arrangements (T, S): Rate of change in participants, roles, and relationships
  National Diversity (T): Ratio of team members from different national cultures

Note: Scale codes in parentheses indicate the dimensional scales reflected: S = Skill Differentiation, A = Authority Differentiation, T = Temporal Stability, F = Interaction Frequency, Q = Interaction Quality.
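For readers who prefer to work with Table 5.1 programmatically, the snippet below is our own illustrative encoding of the table, not material from the original chapter. It simply maps each dimension proposed in the literature onto the scale codes used above and tallies how often each code appears.

```python
from collections import Counter

# Illustrative encoding of Table 5.1: each proposed virtual-team dimension
# is mapped to the set of dimensional scales it reflects (S, A, T, F, Q).
VIRTUAL_TEAM_DIMENSIONS = {
    "Jarvenpaa & Leidner (1999)": {
        "Context": {"T", "F", "Q"},
        "Interaction Mode": {"Q"},
        "Type of Group": {"T"},
    },
    "Griffith & Neale (2001)": {
        "Time Spent Working Apart": {"F", "Q"},
        "Technological Support": {"F", "Q"},
    },
    "Bell & Kozlowski (2002)": {
        "Spatial Distance": {"F", "Q"},
        "Information, Data, and Personal Communication": {"F", "Q"},
        "Temporal Distribution": {"F", "Q"},
        "Boundary Spanning": {"T"},
        "Lifecycle": {"T"},
        "Member Roles": {"S"},
    },
    "Griffith, Sawyer, & Neale (2003)": {
        "Time Apart": {"F", "Q"},
        "Level of Tech Support": {"F", "Q"},
        "Physical Distance": {"F", "Q"},
    },
    "Shin (2004)": {
        "Spatial Dispersion": {"F", "Q"},
        "Temporal Dispersion": {"F", "Q"},
        "Cultural Dispersion": {"T"},
        "Organizational Dispersion": {"T"},
    },
    "Kirkman & Mathieu (2005)": {
        "Extent of Reliance on Virtual Tools": {"F", "Q"},
        "Informational Value": {"Q"},
        "Synchronicity": {"Q"},
    },
    "Gibson & Gibbs (2006)": {
        "Geographic Dispersion": {"T", "F", "Q"},
        "Electronic Dependence": {"F", "Q"},
        "Dynamic Structural Arrangements": {"T", "S"},
        "National Diversity": {"T"},
    },
}

# Count how many proposed dimensions reflect each scale code.
scale_counts = Counter(code
                       for dims in VIRTUAL_TEAM_DIMENSIONS.values()
                       for codes in dims.values()
                       for code in codes)
print(scale_counts)  # F and Q dominate, echoing the discussion that follows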
Each of the studies summarized in Table 5.1 has proposed a set of constructs to be used to define virtual teams and make it possible to study their differences. By looking at these dimensions collected in one place, it becomes clear that the common denominator for all proposed dimensions of virtual teams is communication. Communication is what separates virtual teams from their face-to-face counterparts, as well as what differentiates one virtual team from another. As seen in Table 5.1, communication is broken down into two dimensional scales: interaction frequency and interaction quality. These two scales, along with skill differentiation, authority differentiation, and temporal stability, should together form a method of accurately describing any work team, whether face-to-face or virtual. While simple to explain and measure, the frequency with which teams communicate is quite important to the study of work teams, especially virtual teams. In general, interaction frequency is a continuum that represents the amount of communication between the members of the team, whether it be one-on-one or among the entire team as a whole. Teams that are high on this continuum would be face-to-face teams that have the opportunity to work in each other’s presence and thus communicate at will throughout the performance of their duties. On the other hand, virtual teams with members spread across widely different time zones would tend to communicate less with one another due to non-overlapping work schedules, and would be relatively low on this continuum. It is important to point out
Teams and Technology • 109 that this scale is independent of quality of communication, so sending many e-mails a day to each of the other team members, who answer them on their own time, might still result in a relatively high interaction frequency. Interestingly enough, interaction frequency itself has rarely been studied without an attached level of interaction quality or message context. For example, the effects of feedback interventions on performance have generally been considered inconclusive, indicating that increasing the number of feedback interactions would have a negligible (or even slightly negative) effect on the team’s performance (Kluger & DeNisi, 1996). On the other hand, a lack of communication between team members would hamper the team’s ability to cooperate on tasks and properly transition from one phase of the project to the next. It appears that a moderate level of interaction frequency would be ideal for performance, and that too little or too much would result in process losses for the team, whether face-toface or virtual. The most obvious difference between face-to-face and virtual teams is the quality of communication exchanged between team members. Communication between two parties can take many forms, and has a distinct set of characteristics that determine the amount and quality of information exchanged. At the top of the communication characteristics are the dimensions of copresence, visibility, and audibility, the ability of group members to occupy the same physical location, see one another, and hear one another, respectively. Additionally, time plays a key role in communication quality. Cotemporality (the ability to receive messages at approximately the same time they are sent), simultaneity (the ability of group members to send and receive simultaneously), and sequentiality (the ability of members’ speaking turns to stay in sequence) all provide important explanatory value to a measure of interaction quality (Clark & Brennan, 1991; Driskell, Radtke, & Salas, 2003). For each of these characteristics that is present, the quality of the communication between the members increases. To this list, it is also necessary to add the idea of shared context, as teams form a shared context over time, which helps to facilitate their communication. Teams with a strong set of mutually shared context among all members will by nature have higher quality communication and fewer instances of misunderstanding than those that do not. Thus, the more communication characteristics and shared context that is present in the team, the higher the team’s interaction quality. As with interaction frequency, face-to-face teams will generally be higher on this continuum than virtual teams due to the number of communication characteristics inherent in face-to-face communication.
110 • Jonathan Miles and John R. Hollenbeck The current research on the differences between face-to-face and virtual teams has focused extensively on the differences in interaction quality between the two. As Driskell, Radtke, and Salas (2003) point out, much of the problems with virtual teams tend to stem from fundamental lapses in communication as it relates to the quality of information exchanged. The research on trust in virtual teams has concentrated heavily on the communication between members, but is still inconsistent, with some low interaction quality teams forming “swift trust” that fades over time (Jarvenpaa & Leidner, 1999; van der Kleij et al., 2009), others being very slow to build trust in comparison to high interaction quality teams (Siegel, et al., 1986; Wilson, Straus, & McEvily, 2006), and others developing initial trust normally, but losing trust more quickly than their high interaction quality counterparts (Kanawattanachai & Yoo, 2002). The role of communication in conflict among virtual teams is more certain, as teams with low interaction quality suffer from more inflammatory remarks and uncivil behavior, leading to more relationship conflict (Siegel, et al., 1986; Wilson, Straus, & McEvily, 2006), and lack shared context and synchronous communication, increasing team conflict (Hinds & Mortensen, 2005). One other dimension that is often used to define virtual teams is a measure of the cultural or national dispersion of the team members (Jarvenpaa & Leidner, 1999; Bell & Kozlowski, 2002; Shin, 2004; Gibson & Gibbs, 2006). This focus on cultural differences between virtual team members is likely a result of multinational organizations’ initial use of CMC to form virtual teams that span national boundaries, something that was previously very rare for face-to-face teams. In the team literature, these cultural differences have been studied as causes of team faultlines (Lau & Murnighan, 1998; Cramton & Hinds, 2005; Lau & Murnighan, 2005). These types of faultlines, however, are also likely to occur in face-to-face teams. When Lau and Murnighan (1998) first proposed the faultline phenomenon, they were working in the context of face-to-face teams and organizations. Mere demographic or status differences between team members often cause faultlines to form, and such differences are common in face-to-face groups. In fact, in face-to-face groups, such differences are more salient to group members (Dubrovsky, Kiesler, & Sethna, 1991). The dimensional scales of temporal stability and information quality are already based on these forces and their effects, so there is little reason to produce any sort of specific scale to address cultural differences directly. A final non-communication characteristic that appears in the dimensions of virtual teams is the relatively short tenure that such teams tend to have. The relative newness of the use of virtual teams, combined with their use
Teams and Technology • 111 being confined generally to limited life-span project teams and managerial teams has resulted in the impression that virtual teams must by nature be defined as short in tenure. Short tenure has been a common feature of many types of face-to-face teams as well, but rarely is short team tenure seen as a fundamental part of their makeup. As virtual teams are used more frequently and for more diverse functions, it seems likely that they will exist for longer periods of time, eventually occupying all parts of the team tenure spectrum. Even if virtual teams must by nature be short in tenure and suffer a lack of member commitment, the existing dimensional scale of temporal stability already provides a good place to record the results of such a paradigm. Teams that are low in tenure will by definition have low temporal stability, as the members are all relatively new to the team. By adding interaction frequency and interaction quality to the existing dimensional scales of skill differentiation, authority differentiation, and temporal stability, it should now be possible to compare teams to one another regardless of whether the team is face-to-face or virtual. In addition, this model now makes it possible for any of the dimensional scales to be examined for their possible moderating effects on the relationship between the others and some outcome of interest. Such a set of dimensional scales also makes it possible to study face-to-face and virtual teams together all in the same data group, without forcing them to be split into an unnecessary dichotomous relationship. This ability will make it possible to study teams in a more meaningful fashion rather than worrying about how the differences between them might divide the sample. The interaction of the two new dimensional scales with skill differentiation provides an opportunity to explore the flow of learning within the team. Teams with high skill differentiation have members whose skillsets do not generally overlap, meaning that the members are each specialists in their area. In teams that have low interaction frequency and low interaction quality, it is much more difficult to share skills and information between group members, and it also requires more effort and time to perform task switches from one group member to another. Groups with high skill differentiation will require higher levels of interaction frequency and interaction quality to achieve the same performance level as a group with low skill differentiation. This relationship would indicate a need for virtual teams to value cross-training and other strategies to reduce skill differentiation in order to compete favorably with face-to-face teams on the same task. Interaction frequency and interaction quality also have a much greater effect on groups with low authority differentiation. In teams with low
112 • Jonathan Miles and John R. Hollenbeck authority differentiation, decisions are made by many different team members working together and debating the available options. This is significantly more difficult to accomplish in an environment of low interaction frequency and low interaction quality, as experienced in many virtual teams. In contrast, teams with high authority differentiation, with an established leader who controls most decision-making, aren’t likely to be significantly hindered by a lack of interaction frequency and quality. These relationships suggest that virtual teams responsible for quick decisions are likely to be more effective if they have a strong, hierarchical leader. Finally, teams with high temporal stability have built a strong set of group roles and norms, as well as a shared communication context, which lessens the need for communication between members. In teams that have high temporal stability, and thus have been together for some time and intend to stay together, it is likely that members require very little information from each other to be effective in fulfilling group goals. Such groups should operate effectively with much lower levels of interaction frequency and interaction quality than teams with low temporal stability. As seen above, the addition of interaction frequency and interaction quality makes it possible for the existing dimensional scaling framework to be used to examine how virtual teams might differ from face-to-face teams. More importantly, however, it is now possible to compare virtual teams with different CMC technologies and different team profiles to one another in a meaningful fashion. These two dimensional scales now make it possible to study how small changes in the makeup of teams, whether they be face-to-face or virtual, result in changes in team effectiveness or team performance.
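The moderation arguments above can be tested without ever splitting a sample into virtual and face-to-face subgroups. The sketch below is a minimal illustration of that analytic strategy, not a reproduction of any study discussed in this chapter: the data are simulated, the variable names are hypothetical, and a real application would substitute validated measures of the five scales.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Simulated team-level data: every team, virtual or face-to-face, receives a
# continuous score on each of the five dimensional scales (0 = low, 1 = high).
teams = pd.DataFrame({
    "skill_diff": rng.uniform(0, 1, n),
    "authority_diff": rng.uniform(0, 1, n),
    "temporal_stability": rng.uniform(0, 1, n),
    "interaction_freq": rng.uniform(0, 1, n),
    "interaction_quality": rng.uniform(0, 1, n),
})

# Toy outcome built to mimic the prediction that skill differentiation hurts
# performance mainly when interaction quality is low (a positive interaction term).
teams["performance"] = (
    0.5 * teams["interaction_quality"]
    - 0.6 * teams["skill_diff"]
    + 0.8 * teams["skill_diff"] * teams["interaction_quality"]
    + rng.normal(0, 0.1, n)
)

# Moderated regression keeps the dimensions continuous; no virtual/face-to-face split.
model = smf.ols("performance ~ skill_diff * interaction_quality", data=teams).fit()
print(model.params)  # the skill_diff:interaction_quality term carries the moderation
```

The same specification extends to any pairing discussed above, such as authority differentiation by interaction frequency, which is precisely the kind of test a split-sample design cannot run without discarding variance.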
FUTURE RESEARCH DIRECTIONS The current research on teams and technology is dynamic, with many articles in recent years working hard to determine the underlying effects. A few areas, however, present themselves as particularly interesting for study, given the direction of the literature thus far. First, the divergent results on how trust forms in virtual teams provide an opportunity for clarification of the moderating factors responsible not only for trust in a virtual environment, but also for the development of trust relationships as a whole. Second, it will be important to determine if many of the negative
Teams and Technology • 113 results seen in virtual teams are merely artifacts of short team tenure, which will disappear as these teams work together for longer periods of time. Finally, communication among team members, especially in a lab environment, is often solely task-focused, which means that the effect of social communication and building of social bonds between team members on virtual team cohesiveness and communication has yet to be fully understood. While the conventional wisdom would state that virtual teams should lag behind their face-to-face counterparts in trust formation, the results of current research on the topic is mixed at best. As stated before, there are at least three different models of trust formation in virtual teams, which have all received at least some empirical support, with little understanding of why their results are in opposition to one another. The moderators of the relationship between interaction frequency, interaction quality, and trust will need to be examined carefully, in hopes of piecing together the true relationship between these constructs. As Jarvenpaa & Leidner (1999) point out, it was at one point considered unlikely that virtual teams could form trust bonds at all. Now that we know that it is possible, it would behoove us to understand what conditions make trust more likely or more lasting. The vast majority of studies on virtual teams thus far have either been lab studies or concentrated on teams whose members had little expectation of working together for a long time. As we pointed out in this chapter, this identification of short team tenure as a characteristic of virtual teams unnecessarily limits the research that can be done on the effects of interaction frequency and interaction quality. In fact, some research has shown that virtual teams with a common history and an expectation of long team tenure have similar levels of trust and cooperation as do faceto-face teams (Alge, Wiethoff, & Klein, 2003). It would be very interesting to determine if team tenure, as well as constructs similar to it (team member commitment, shared team identity, temporal stability) have the effect of substituting for high interaction quality when it comes to team processes. This is an area of research that could question the notion that virtual teams are significantly different from their face-to-face counterparts after all. Finally, the role of social ties and non-work-related communication in the differences between face-to-face and virtual teams is an area in need of further study. As team members are co-located in space, they would naturally be more likely to exchange non-work-related communication with one another as part of the normal work day. These social
114 • Jonathan Miles and John R. Hollenbeck communication events could have some explanatory power over the differences seen between face-to-face and virtual teams. Several studies have pointed out that face-to-face teams report higher levels of satisfaction than their virtual counterparts, regardless of the results of the team task (Warkentin, Sayeed, & Hightower, 1997; Thompson & Coovert, 2002, 2003). This could indicate that members of the face-to-face teams received some sort of social outcome that members of the virtual teams did not. By looking into the moderating effects of social communication and social ties in groups, it may be possible to determine a key element that is lacking in virtual teams.
CONCLUSION Treating virtual teams and face-to-face teams as two sides of a coin has led to some interesting findings in the past, but future research on teams must consider virtual and face-to-face teams as all part of the continuum that is work team research. Using dimensional scales like interaction frequency and interaction quality allows researchers to study how incremental changes in team virtuality fundamentally change how team members interact and produce quality outcomes. This move from a split sample approach to a dimensional approach results in more accurate methodology (MacCallum et al., 2002) and clearer theory-building (Hollenbeck, Beersma, & Schouten, 2012). By returning to a consideration of all teams as fundamentally similar, researchers can continue to build on the decades of strong research on teams while examining how advances in technology and management practice change team processes and outcomes.
REFERENCES Alge, B. J., Wiethoff, C., & Klein, H. J. (2003). When does the medium matter? Knowledge building experiences and opportunities in decision-making teams. Organizational Behaviors and Human Decision Processes, 91, 26–37. Ancona, D. G., & Caldwell, D. F. (1992). Demography and design: Predictors of new product team performance. Organization Science, 3(3), 321–341. Bell, B. S., & Kozlowski, S. W. J. (2002). A typology of virtual teams: Implications for effective leadership. Group and Organization Management, 27(1), 14–49.
Teams and Technology • 115 Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). Washington, DC: American Psychological Association. Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23(3), 239–290. Cramton, C. D. (2001). The mutual knowledge problem and its consequences for dispersed collaboration. Organization Science, 12(3), 346–371. Cramton, C. D., & Hinds, P. J. (2005). Subgroup dynamics in internationally distributed teams: Ethnocentrism or cross-national learning? Research in Organizational Behavior, 26, 231–263. De Dreu, C. K. W., & Weingart, L. R. (2003). Task versus relationship conflict, team performance, and team member satisfaction: A meta-analysis. Journal of Applied Psychology, 88(4), 741–749. DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5(2), 121–147. Devine, D. J., Clayton, L. D., Philips, J. L., Dunford, B. B., & Melner, S. B. (1999). Teams in organizations: Prevalence, characteristics, and effectiveness. Small Group Research, 30(6), 678–711. Driskell, J. E., Radtke, P. H., & Salas, E. (2003). Virtual teams: Effects of technological mediation on team performance. Group Dynamics: Theory, Research, and Practice, 7(4), 297–323. Dubrovsky, V. J., Kiesler, S., & Sethna, B. N. (1991). The equalization phenomenon: Status effects in computer-mediated and face-to-face decision-making groups. HumanComputer Interaction, 6, 119–146. Edmondson, A. C., Bohmer, R. M., & Pisano, G. P. (2001). Disrupted routines: Team learning and new technology implementation in hospitals. Administrative Science Quarterly, 46, 685–716. Garvey, C. (2002). Steer teams with the right pay. Workforce, 47(5), 70–78. Gibson, C. B., & Gibbs, J. L. (2006). Unpacking the concept of virtuality: The effects of geographic dispersion, electronic dependence, dynamic structure, and national diversity on team innovation. Administrative Science Quarterly, 51, 451–495. Griffith, T. L., & Neale, M. A. (2001). Information processing in traditional, hybrid, and virtual teams: From nascent knowledge to transactive memory. Research in Organizational Behavior, 23, 379–421. Griffith, T. L., Sawyer, J. E., & Neale, M. A. (2003). Virtualness and knowledge in teams: Managing the love triangle of organizations, individuals, and information technology. MIS Quarterly, 27(2), 265–287. Guzzo, R. A., & Dickson, M. W. (1996). Teams in organizations: Recent research on performance and effectiveness. Annual Review of Psychology, 47, 307–338. Hedlund, J., Ilgen, D. R., & Hollenbeck, J. R. (1998). Decision accuracy in computermediated versus face-to-face decision-making teams. Organizational Behavior and Human Decision Processes, 76(1), 30–47. Hinds, P. J., & Mortensen, M. (2005). Understanding conflict in geographically distributed teams: The moderating effects of shared identity, shared context, and spontaneous communication. Organization Science, 16(3), 290–307. Hollenbeck, J. R., & Spitzmuller, M. (2010). Team structure: Tight versus loose coupling in task-oriented groups. Handbook of Industrial and Organizational Psychology, London: Oxford Press.
116 • Jonathan Miles and John R. Hollenbeck Hollenbeck, J. R., Beersma, B., & Schouten, M. (2012). Beyond team types and taxonomies: A dimensional scaling conceptualization for team description. Academy of Management Review, 37, 82–106. Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From I-P-O Models to IMOI models. Annual Review of Psychology, 56, 517–543. Jarvenpaa, S. L., & Leidner, D. E. (1999). Communication and trust in global virtual teams. Organization Science, 10(6), 791–815. Kanawattanachai, P., & Yoo, Y. (2002). Dynamic nature of trust in virtual teams. Journal of Strategic Information Systems, 11, 187–213. Kirkman, B. L., & Mathieu, J. E. (2005). The dimensions and antecedents of team virtuality. Journal of Management, 31(5), 700–718. Klein, K. J., Ziegert, J. C., Knight, A. P., & Xiao, Y. (2006). Dynamic delegation: Shared, hierarchical, and deindividualized leadership in extreme action teams. Administrative Science Quarterly, 51(4), 590–621. Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284. Lau, D. C., & Murnighan, J. K. (1998). Demographic diversity and faultlines: The compositional dynamics of organizational groups. Academy of Management Review, 23(2), 325–340. Lau, D. C., & Murnighan, J. K. (2005). Interactions within groups and subgroups: The effects of demographic faultlines. Academy of Management Journal, 48(4), 645–659. Leonard-Barton, D. (1988). Implementation as mutual adaptation of technology and organization. Research Policy, 17, 251–267. MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7(1), 19–40. Majchrzak, A., Rice, R. E., Malhotra, A., King, N., & Ba, S. (2000). Technology adaptation: The case of a computer-supported inter-organizational virtual team. MIS Quarterly, 24(4), 569–600. Marks, M. A., Zaccaro, S. J., & Mathieu, J. E. (2000). Performance implications of leader briefings and team-interaction training for team adaptation to novel environments. Journal of Applied Psychology, 85(6), 971–986. Martins, L. L., Gilson, L. L., & Maynard, M. T. (2004). Virtual teams: What do we know and where do we go from here? Journal of Management, 30(6), 805–835. Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85(2), 273–283. Orton, J. D., & Weick, K. E. (1990). Loosely coupled systems: A reconceptualization. Academy of Management Review, 15(2), 203–223. Paul, D. L., & McDaniel, R. R. (2004). A field study on the effect of interpersonal trust on virtual collaborative relationship performance. MIS Quarterly, 28(2), 183–227. Salas, E., DiazGranados, D., Klein, C., Burke, C. S., & Stagl, K. C. (2008). Does team training improve team performance?: A meta-analysis. Human Factors, 50(6), 903–933. Schmidt, J. B., Montoya-Weiss, M. M., & Massey, A. P. (2001). New product development decision-making effectiveness: Comparing individuals, face-to-face teams, and virtual teams. Decision Sciences, 32(4), 575–600. Shin, Y. (2004). A person-environment fit model for virtual organizations. Journal of Management, 30(5), 725–743.
Teams and Technology • 117 Siegel, J., Dubrovsky, V., Kiesler, K., & McGuire, T. W. (1986). Group processes in computermediated communication. Organizational Behavior and Human Decision Processes, 37, 157–187. Stillman, J. (2008, September 8). Managers fret as virtual teams grow with globalization. Retrieved from www.bnet.com/blog/bnet1/managers-fret-as-virtual-teams-growwith-globalization/582 Straus, S. G., & McGrath, J. E. (1994). Does the medium matter? The interaction of task type and technology on group performance and member reactions. Journal of Applied Psychology, 79(1), 87–97. Sundstrom, E., De Meuse, K. P., & Futrell, D. (1990). Work teams: Applications and effectiveness. American Psychologist, 45(2), 120–133. Thompson, L. F., & Coovert, M. D. (2002). Stepping up to the challenge: A critical examination of face-to-face and computer-mediated team decision making. Group Dynamics: Theory, Research, and Practice, 6(1), 52–64. Thompson, L. F., & Coovert, M. D. (2003). Teamwork online: The effects of computer conferencing on perceived confusion, satisfaction and postdiscussion accuracy. Group Dynamics: Theory, Research, and Practice, 7(2), 135–151. Tyre, M. J., & Orlikowski, W. J. (1994). Windows of opportunity: Temporal patterns of technological adaptation in organizations. Organization Science, 5(1), 98–118. Van der Kleij, R., Schraagen, G. M., Werkhoven, P., & De Dreu, C. K. W. (2009). How conversations change over time in face-to-face and video-mediated communication. Small Group Research, 40(4), 355–381. Warkentin, M. E., Sayeed, L., & Hightower, R. (1997). Virtual teams versus face-to-face teams: An exploratory study of a web-based conference system. Decision Sciences, 28(4), 975–996. Weick, K. E. (1976). Educational organizations as loosely coupled systems. Administrative Science Quarterly, 21(1), 1–19. Wilson, J. M., Straus, S. G., & McEvily, B. (2006). All in due time: The development of trust in computer-mediated and face-to-face teams. Organizational Behavior and Human Decision Processes, 99, 16–33.
6 Leadership and Technology: A Love–Hate Relationship Denise Potosky and Michael W. Lomax
LEADERSHIP AND TECHNOLOGY: A LOVE–HATE RELATIONSHIP Imagine our technologically advanced society 30 to 50 years from now. As a result of automation and reduced demands for human labor, each person will spend, at most, 1,100 of the 8,760 hours in a year at work. We will work seven-and-a-half hours per day, four days per week, thirty-nine weeks per year. Work–life balance will probably still be a problem, but the problem will shift from trying to manage one’s personal life amid overwhelming work hours to determining what to do with an abundance of leisure time. This affluent, leisure society will arise from the unlimited productive capacity associated with advancements in technology. These are not our predictions; they were anticipated 50 years ago. David Riesman (1958), describing “post-industrial society,” wrote about the sociological problems that would likely surface when people confronted so much leisure time instead of the drudgery of work. In their book, The Year 2000, Kahn and Wiener (1967) described an affluent, leisure society in which per capita income doubles every 18 years, and they estimated 39 working weeks and 13 weeks of vacation for almost everyone in the modern world. Not everyone in the 1960s looked forward to the new technological age. For example, in 1964 a group of social activists, professors, and technologists (referred to as “the Ad Hoc Committee”) prepared and sent a memorandum called “The Triple Revolution” to U.S. President Lyndon B. Johnson and other government leaders. The triple revolution consisted of 1) the cybernation revolution; 2) the weaponry revolution; and 3) the human rights revolution, but the “cybernation revolution” was
Leadership and Technology • 119 a primary focus of the document. Cybernation, as defined in a report by Donald Michael (1962), was a combination of computers and other automated, self-regulating machines. Although Michael (1962) anticipated that people would eventually find new purpose in a cybernated world, the Ad Hoc Committee argued that replacing people with machines at the rapid pace of technological advancement would have dire implications for the economy and welfare of America’s citizenry unless the government acted to constrain deployment of new technologies while social support systems were put into place. Whether a technology-facilitated leisure society was envisioned or a cybernated economy of displaced workers without purpose was anticipated, these prophets were wrong. As Daniel Bell (1973) pointed out, scarcity of labor was replaced with scarcity of information, coordination, and time. Ten years after Bell described this new scarcity in post-industrial society, John Naisbitt (1982) commented in his book, Megatrends, “We are drowning in information but starved for knowledge” (p. 24). Advancements in technology created more work, more information, more data, and an overwhelming exigency for leadership, organization, and knowledge management in the information age. Our focus in this chapter is on the challenges, frustrations, and opportunities for research regarding leadership and technology, specifically information technology. Leaders in our society have at times hailed the advancements technology has offered and at other times struggled to comprehend and lead them. Some of the things we might love about technology (e.g., rapid access to information and expanded cellular communication) are the very things we could hate about it (e.g., viral spread of information we want to limit and 24/7 access to us). Scholarship within industrial and organizational psychology has only intermittently incorporated “technology” in theory development and research. In addition, “technology” seems to mean at least two different things in research relevant to leadership and I-O psychology. On one hand, we can talk about technology in terms of the fundamental shifts that have occurred in the way we organize ourselves and the leadership exigencies these changes have created. From this perspective, technology provides a contextual and historical influence on organizations and work processes and an arena for organizational change. On the other hand, we can treat technology as an application used within leadership and social exchange processes. From this standpoint, technology is a tool or medium to be used within specific situations. This chapter includes both approaches to technology, as relevant to leadership. We focus on the influence of rapidly changing information and communication
120 • Denise Potosky and Michael W. Lomax technology on leadership in practice and on the development of leadership theory and research. In an effort to describe what it means to lead in the information age, we also consider how leaders have shaped the evolution of technology in the structure of relationships between leaders and followers. We sought to verify our observations from the research literature by asking some leaders in practice to describe their own leadership experiences with respect to information technology, and we weave this practice perspective into our review. Overall, our examination suggests what could be described as a “love–hate relationship” between leaders and technology. This chapter is organized as follows. First, we examine technology’s influence on leadership processes in the information age. Second, we examine research regarding the influence of leaders and leadership on technology. Third, we consider aspects of technology that disappoint or frustrate leaders, a subject that has not received much research attention in I-O psychology. Finally, we refer to three contemporary, implicit leadership theories (i.e., contextual leadership theory, relational leadership theory, and transformational leadership theory) to suggest some ways future research might theoretically ground the exploration of leadership and technology within the field of I-O psychology.
TECHNOLOGY’S INFLUENCE ON LEADERSHIP PROCESSES Several scholars have observed that the broader, dynamic influences of technology on leadership processes represent a noteworthy gap in the leadership research literature (Avolio, Kahai, & Dodge, 2000; Gardner et al., 2010; Lowe & Gardner, 2000). For example, in their review of the first ten years of publication of The Leadership Quarterly (LQ), a well-regarded international journal that is dedicated to advancing leadership theory, research, and development, Lowe and Gardner (2000) identified technology as one of eight content directions for future leadership research. In their more recent analysis of the reputation and content of LQ, however, Gardner et al. (2010) observed that “a disappointing amount of attention” was paid to “the impact of technology on leadership” since Lowe and Gardner’s (2000) suggestions 10 years earlier.
Leadership and Technology • 121 Technology as a Contextual Influence Our own search of LQ between 2000 and 2010 produced a total of four articles published that included the word “technology” in the title or the abstract. In these LQ articles, technology was conceptualized as an environmental backdrop providing contextual influences on leadership processes. Makri and Scandura (2010) examined CEO leadership in 77 high-technology firms. In this study, a high technology firm referred to firms that are technology intensive or research and development intensive, and/or firms whose purpose is to foster invention and generate new knowledge. Howell and Boies (2004) identified organizations that had implemented new information technology designed for use by managers and/or professionals, and then examined the role of champions in the innovation process. Halbesleben et al. (2003) mention the rapid pace of technology in their abstract, and their article considers the social aspects of time and temporal complexity in relation to leadership and innovation. It is interesting to note that these three of the four “leadership” articles that referenced technology focused on creativity and innovation. We wondered why information technology that was anticipated in the 1950s should be more associated with innovation than routine processes in leadership research in the 2000s. Yet, clearly, the adoption of new technology continues to provide a catalyst for innovation and change in organizations, and these changes require leadership. In the fourth LQ article since 2000 that referenced technology, Avolio, Kahai, and Dodge (2000), proposed a framework for studying how “Advanced Information Technology” (AIT) influences and is influenced by leadership. AIT is defined as the “tools, techniques, and knowledge that enable multiparty participation in organizational and inter-organizational activities through sophisticated collection, processing, management, retrieval, transmission, and display of data and knowledge” (Avolio, Kahai, & Dodge, 2000, p. 616; see also DeSanctis & Poole, 1994). Avolio, Kahai, and Dodge (2000, p. 624) explained that “a leadership system may be enabled, undermined, or completely disabled by the introduction of AIT.” In their article, they proposed adaptive structuration theory as a framework to illustrate how technology and organizational structures influence each other and co-evolve. It is not evident in LQ or other I-O psychology research, however, that Avolio et al.’s (2000) proposed research agenda focused on leadership and technology has been thoroughly pursued. A fifth LQ article did not mention the word technology in the title or abstract, but was clearly focused on leadership in the information age. Brown and Gioia (2002) conducted an in-depth study of leaders in an
122 • Denise Potosky and Michael W. Lomax e-business venture, and reported that the executive team in this “bricks and clicks” organization had to learn to function in a “disorienting context.” They observed that two contextual features, the rapid speed and the ambiguity of an online business environment, profoundly affected leadership and managerial processes. They concluded that e-business contexts require shared/relational leadership and distributive leadership (Gronn, 2002) practices. One aspect of leadership and technology that is not well understood concerns the impact that followers’ rapid, often unfiltered access to leaders has on leaders’ behaviors. Mathieu, Ahearne, and Taylor, (2007, p. 528) noted that “Advanced informational technologies enable individuals to communicate more easily, rapidly, and less expensively across time and space compared to traditional work designs. These capabilities facilitate storage, retrieval, access, and synthesis of large amounts of information so as to create new information insights.” The response of leaders to technology has not been thoroughly researched, however. “It is now possible at most companies for anyone at any level anywhere in the world to send an e-mail to the upper echelon of the organization; yet the literature is silent on whether (and how) this unparalleled access impacts what leaders do” (Gardner et al., 2010, p. 950). Clearly, leadership behavior can no longer be understood as more or less one-way communication from leaders (e.g., articulating a vision) to followers. One effect of IT on leadership is that it has altered expectations about leaders’ responsiveness. For example, whether, how, and how quickly leaders respond to followers may be an important aspect of leadership attribution and/or effectiveness. We spoke with Howard Sundwall, Chief Information Officer for CTDI Communications Test Design, Inc. to get his opinion on the most significant advancements of the information age from the perspective of leading an organization. He stated, “the two most significant IT advancements from my perspective would be empowerment and an explosion of knowledge.” He explained that the key change in terms of empowerment is that all end users are now able to use the computer, access information, and dictate their needs and expectations. He added that: The access to knowledge has altered the end user’s expectations. There is an increasing demand for more information at an even quicker speed. How the information is used has become a greater topic for review. There is so much information now that it is difficult to manage the volume and the speed of delivery. (H. Sundwall, personal communication, November 12, 2010)
Leadership and Technology • 123 One wonders if leaders feel very “leaderly” while trying to keep up with the influx of information and followers’ expectations about responses to electronic communication. Several research questions about the rapid flow of communication and technological advancements surface from the form that access to and communication with leaders can take. How does the definition of followers change when electronic communication channels and social networking sites are considered? How does rapid, unfiltered, and voluminous communication access to leaders affect leaders’ behaviors? Have leaders’ own perceptions changed in terms of what they are supposed to do as a result of the information age? The availability of real-time information compels leaders to be more responsive to all of their stakeholders (Avolio, Kahai, & Dodge, 2000). This might sound obvious or easy to those who are accustomed to instant-messaging and web-based interaction, but the learning curve for establishing visions and goals, motivating work effort, directing global resources, and remotely guiding behavior has probably been steeper than most managers and leaders would like to admit. As Avolio, Kahai, & Dodge (2000) noted more than 10 years ago, leaders need to proactively participate in the creation of IT dependent social structures. To date, individual leaders’ acceptance and accomplishment of this challenge has received only piecemeal research attention. Leaders influence followers in a process of co-construction (cf., Smircich & Morgan, 1982). From the perspective of technology as a contextual influence, leadership theory could do more to address how leaders shape or should shape this context. Do leaders command communication technology or are they overwhelmed by it? Command of the technological context may be a key factor of leadership effectiveness. Technology as a Tool Used in Leadership In addition to the treatment of technology as a contextual influence on leadership processes, the influence of specific types of information technology (especially internet technology) on certain leadership and management activities has also been examined in the research literature. From a “technology as a tool” perspective, several studies refer to technology-related terms such as “computer” or “virtual” or “online.” Some studies have focused on leadership in relation to virtual workers (e.g., Golden & Veiga, 2008; Purvanova & Bono, 2009; Sosik et al., 2005). Other studies reflect more specialized interests such as how simulations could be used in causal analysis training to develop understanding of leadership roles
124 • Denise Potosky and Michael W. Lomax (Marcy & Mumford, 2010). Hazy (2007) described 14 different approaches for using computer simulation in leadership research. Gardner et al. (2010) noted that technology has influenced and altered the way leadership development is conducted: “Simulations and virtual reality may well be the next step in extending leadership development beyond the current action-learning model” (p. 951). Technology tools and applications have changed the nature of work and restructured traditional workplaces into virtual workspaces. Using internet and group conferencing tools, for example, individuals are able to see, discuss, and solve complex problems jointly. We asked former Vanguard CEO, Jack Brennan, what he thought were significant advancements at the intersection of IT and leadership, and he identified personal computers and web/collaboration tools as most significant: Personal computing has really changed the nature of work in a dramatic way. Personal computing improved the quality of work and reduced the risk of human error. [As a result of this technology,] organizations invested in training people, e.g., phone associates, to work differently. We improved quality by having intelligence at the desktop. Collaboration technology allows workforce productivity to grow in leaps and bounds. [We can] work without physical proximity, across time zones, and leverage synergy. (J. Brennan, telephone interview, December 13, 2010)
Leadership in virtual work environments using technology-mediated communication tools is an increasingly important topic for research. We asked Fireman’s Fund Insurance President, Darryl Page, about his views on how leaders use technology to set organizational expectations. He responded that: Today, our challenge is to get people to see a future that is not in sight. We need to leverage technology to bring information together so we can better understand our audiences. Technology is a vehicle to bring people together . . . Communication is critical. Many fail in mobilizing others because they haven’t given enough thought to how they will use technology tools to move people from where they are today to where you’d like them to be in the future. (D. Page, telephone interview, December 10, 2010)
An individual who cannot reach others through electronic channels and/or who cannot be accessed by these channels may be less likely to be identified by group members as a leader.
Maruping and Agarwal (2004) called attention to the advantages for virtual teams in "brainstorming and decision-making" (p. 975) tasks. Virtual teams can cross geographical, cultural, and organizational boundaries in order to leverage a larger pool of intellectual resources that yield multiple solutions, as compared to face-to-face or intra-departmental groups, which have more limited resources and diversity. The roles and activities of leaders in virtual teams have also received some research attention, especially in terms of the effectiveness of transformational leadership behaviors for virtual teams (e.g., Purvanova & Bono, 2009). Further, the notion of leader–follower relationships has expanded to include technology-facilitated interaction. Relationships are created and maintained through information technology tools, and leaders and managers have had to learn to manage interpersonal relationships and communicate with followers using technology-mediated channels. For example, Golden and Veiga (2008) found that the quality of leader–follower relationships impacted the commitment, satisfaction, and performance of virtual workers. One implication here is that leaders need to figure out how to establish and maintain productive relationships with virtual employees. In addition, accounts of how leaders emerge and "prove" themselves in organizational contexts need to include leaders' command of communication technology tools and their use of information (as a resource to bring to followers).
LEADERS' INFLUENCE ON TECHNOLOGY

The influence of leaders and leadership processes on the evolution of information technology has received less research attention than the influence of IT on leadership processes. Although some research has explored the role of leadership in the diffusion of innovation, little research has applied specific leadership theories or prototypical leadership practices to the technological transformation that has occurred across organizations and whole societies. Some research has examined the leader's role in implementing specific technology projects. For example, Potosky and Olshan (2008) provide a detailed case example of a business process manager's role as a champion in the rollout of a global enterprise resource planning (ERP) project. In this paper, the leader's role in integrating a new system across an organization operating in 115 countries was likened to leading people through the "valley of despair" (Kübler-Ross, 1969).
Former CEO of Vanguard Investments, Jack Brennan, personally demonstrated the leader-as-technology-champion role and the value of creating buy-in with new technology implementation. He emphasized his personal commitment to technology, and he explained his belief that the standard for the adoption of new technology should be established at the most senior levels of organizations.

In my case, I was functionally the CIO in our early days and we sold the idea that committing to being a technology-driven company would make a difference. Our entire senior leadership team went to training to understand this idea and to understand the ROI (return on investment). We were scale driven and quality driven, and technology is the best way to address both. People who became invested in technology radically changed their thoughts [about technology]. (J. Brennan, telephone interview, December 13, 2010)
Citing a Conference Board report, Sosik et al. (2005) noted that 40 percent of information technology development projects are cancelled before completion, and the primary reason for this failure is “a lack of strategic leadership” (p. 48). Tarafdar and Qrunfleh’s (2009) study of tactical IT–Business alignments revealed some startling revelations of wasted IT investments and failed projects. For example, during 2002–2004, over $100 billion worth of IT projects failed and an estimated 68 percent of IT projects did not fulfill originally stated business goals or deliver envisioned business benefits. This research reported a lack of alignment between IT and business processes. Communication processes were inadequate to address appropriate matching of resources, objectives, and priorities between IT and the individual business units. Throughout the technological revolution, it seems that leaders in most organizations have been regarded as “end users” rather than drivers or champions of technological change. Very little research in IT refers to the psychology literature. For example, Fjermestad and Hiltz (1999) reviewed 230 articles on group support systems (GSS) and observed that leadership has been ignored in the GSS literature. IT leadership means acquiring and allocating resources to bring technological advancements to a group, but such leadership necessarily requires change and conflict management skills as well as coordination and communication skills. One consistent theme that runs through many of the research and/or case studies on the problems with IT design and implementation processes is communication (or the lack thereof) between the end-user and the designers. Yet, IT research and
practice tend not to use psychological lenses in order to "see" the leadership and communication issues involved in IT management. For example, similar to the Tarafdar and Qrunfleh (2009) study noted above, Smith, Koohang, and Behling (2010) discussed the adverse effects of poor communications between IT and the business units, but did not equate this with a lack of leadership processes. Smith, Koohang, and Behling (2010) surveyed IT managers to determine the "greatest challenges" to effectively managing information technology, and leadership or people management processes were not very high on the list of challenges. According to the IT managers, key challenges were data privacy, data management, meeting legal requirements, and protecting systems from hackers. Less than 50 percent of the IT managers perceived employee adoption of new technologies or staff training as very important. Our initial reaction to this report was "What do they know?" as it seems obvious that leadership should play a key role in the management of IT. But it was humbling to then ask ourselves "What do we know?" about the activities and actual role of leaders during the introduction or updating of IT in organizations. There simply is not enough research to support our implicit assumptions about leadership's importance or effects on technology. We asked the leaders contacted for this chapter about organizational leaders' role and influence on the technological changes implemented in their organizations over the past 10 years. Specifically, we asked about the acceptance of new IT as presented, "off the shelf," versus their self-perceived role and the role of line managers in requesting specific IT innovations.

Typically, the designers decide the default settings. They are meeting what they believe are the requirements of the buyer. Certainly, the end user is not aware of the default settings. The end-user is more concerned with ease of use; the designer may focus on transparency of the default settings. In most cases, the end-user does not understand the process and will usually agree to whatever settings the designer has established. (H. Sundwall, personal interview, November 12, 2010)
The notion that organizational leaders are end-users, just like everyone else, at the mercy of software developers and technical experts presents several intriguing research questions regarding leadership processes. Does perceived technical competence enhance attributions of overall leadership competence or trustworthiness? Do people expect leaders to be competent at using technology? Do they expect leaders to be at the helm of bringing new technology to employees’ work efforts and capabilities? Research in
128 • Denise Potosky and Michael W. Lomax I-O psychology could do more to elaborate leaders’ expertise, self-efficacy beliefs, and sense of responsibility for the technological advancements that continue to alter organizational structures and work contexts. There have been a number of studies on the acceptance and the adoption of new technologies by employees. The technology acceptance model (TAM; Davis, Bagolli, & Warshaw, 1989) is frequently used to explain individual technology use, as it suggests that psychological factors, perceived usefulness, and ease of use are central to influencing the use of technology. Schepers, Wetzels, and Ruyter (2005) analyzed transactional and transformational leadership styles to determine if these influence technology acceptance by employees. They hypothesized that both leadership styles positively relate to perceived usefulness and perceived ease of technology use. Their findings suggest that the relationship between transformational leadership and perceived usefulness was significant. In particular, intellectual stimulation, whereby a leader encourages new ways of thinking and enabling subordinates to analyze problems from a variety of viewpoints, was associated with perceived usefulness and ultimately intended use by employees. Transactional leader behaviors (i.e., setting targets and objectives that require the use of technology) were not related to technology acceptance in their model. In addition to the intellectual stimulation component of transformational leadership, Schepers, Wetzels, and Ruyter (2005) commented that organizational support can also be a strong influence on employees’ acceptance of technology. The introduction of formal training sessions, help desk support, on-site support as well as less tangible activities such as internal marketing campaigns and word of mouth communications can demonstrate an organization’s (and leaders’) earnest commitment to new technology. There is an important role for leaders and leadership processes in the rollout of new technologies. A leader’s support for the use of technology and the introduction of technological advancements may directly affect the intended use of technology deployed in an organization. Darryl Page, President of Fireman’s Fund Insurance, pointed out to us that line leaders have an important role to play in the design stages of new technology: Line leaders have a role in the design process and new technology rollouts. They must think about this differently, in terms of change management. Line leaders need to work to prepare the organization to embrace the future. Just having or acquiring new technology is important, but less important than having a plan for how we use and leverage the technology we have. (D. Page, telephone interview, December 10, 2010)
Leadership and Technology • 129 Study results from Mathieu, Ahearne, and Taylor (2007) showed that leaders’ commitment to sales technology as well as leaders’ empowering behaviors enhanced salespersons’ technology self-efficacy and usage. Mathieu et al. monitored the introduction of new technology tools on social-psychological factors relevant to the sales performance of 592 sales persons. Their findings suggest that leaders play an important role throughout the process by which individuals approach new technology, use the technology, and work toward organizational goals. The Mathieu et al. work is particularly interesting in light of the earlier references (e.g., Mills, 1995; Parthasarathy & Sohli, 1997), which suggested that salespeople are among the most technophobic and resistant of all white-collar workers. In addition, prior research (Igbaria & Iivari, 1995; Pearson et al., 2002) suggested that work experience, job tenure, and age are negatively correlated with enthusiasm about adopting new technologies. Mathieu et al.’s research adds the potential influence of empowering leadership styles on users’ adoption of new technology. They proposed, for example, that highly-experienced sales people with low technology self-efficacy might need less empowerment and more direction from leaders. In Mathieu, Ahearne, and Taylor’s (2007) study, salespeople reported greater technology self-efficacy and use when they worked for leaders who emphasized the use of the new system. These effects echo the long-standing belief from organizational change and development literature that “getting leaders on board” is critical for the success of any organizational intervention. (p. 535)
These findings might generalize to any introduction of new technology in organizations, such that leaders must not only buy in to and support technology development, but they must also be cognizant of their influence as role models and advocates for change as employees come to accept and adopt new ways of doing things. As former Vanguard CEO Jack Brennan pointed out, cross-development processes that include both business leaders and technical experts are important because they foster collaboration during early design phases, reduce finger pointing as IT projects are rolled out, and encourage mutual accountability for IT results (J. Brennan, telephone interview, December 13, 2010). In summary, the information age has been marked by profound changes in leadership and in ways of organizing people and work. In some ways, the development of new information technologies seems to have surpassed our
expectations, but the rapid pace of advancement has also been somewhat disorienting, perhaps especially for individuals in leadership positions. As Darryl Page, Fireman's Fund President, put it,

We got what we expected, but we did not expect enough. In many cases, leaders can identify needs in specific ways, and they can find the technology that fills that need. However, leaders need to change the way they think about what is possible. Doing this depends on the leader and whether the leader is open to the possibilities. Many leaders say, "Here's the business process I want to execute again, here's the technology to meet that need." Maybe we should be less specific about what we need and have more dialogue about what could be. Needs may be met, but technology is usually not fully utilized. (D. Page, telephone interview, December 10, 2010)
Indeed, the 2008 annual report of the American Psychological Association observed that “dramatic changes in human behavior rarely follow immediately from the introduction of new technologies” (p. 461). Research in I-O psychology could do more to examine the nature, reach, and evolution of a leader’s IT vision for an organization. Research could also use the technological change context to elaborate the role of leaders as change agents and engineers of new organizational structures that facilitate leader–follower relationships.
FRUSTRATION WITH TECHNOLOGY In addition to considering what technology has contributed to leadership and managerial processes and what we like about it, this chapter also considers what technology has not done and/or how it has disappointed us. Our purpose is not to promulgate cynicism or nostalgia, but to offer a pragmatic view of the relationship between advancements in information technology and the development of management and leadership as fields of practice and areas of research. One area in which I-O psychology researchers have been particularly silent concerns resistance to technology and dissatisfaction with the information age. There has been some research regarding computer anxiety and resistance to change, but very little research has examined the emotional frustration that leaders and followers experience when using technology, the circumstances in which people fairly or unfairly blame technology for communication breakdowns or other problems, and the
broader stressors associated with real and perceived aspects of technology in organizations. We asked Howard Sundwall, CIO of CTDI Communications, to describe what frustrates leaders and managers when it comes to technology. His response focused on the increasing demands that followers place on leaders as new technologies are adopted:

The area of frustration for leaders with technology is the degree of involvement end-users attempt to exert in managing the system. As the number of end-users expands, there is a corresponding demand for more simplicity with technology. The question is "Why is it so difficult to manage?" As technology evolves, it becomes more complicated, expensive and may take longer to implement or change. (H. Sundwall, personal interview, November 10, 2010)
According to Pfeffer (1994), there is a common (but flawed) assumption that because production processes have become more sophisticated, high technology can substitute for skill in managing a workforce. He argues that advanced technology actually demands more, not less, of a workforce. As in the faulty predictions about the information age made in the 1950s, leaders (and followers) who assume that new technology will provide a substitute for effort and leadership may become very frustrated indeed. In describing his theory of sustaining technological change in the disk drive industry, Christensen (2000) called attention to the stress associated with leading in a technological environment. According to Christensen, "Coping with the relentless onslaught of technology change was akin to trying to climb a mudslide raging down a hill. You have to scramble with everything you have to stay on top of it, and if you ever once stop to catch your breath, you get buried" (p. 8). We asked Fireman's Fund President, Darryl Page, the same question about what frustrates leaders with respect to technology. His response emphasized the pressure leaders are under to manage resources while keeping systems up-to-date and responding to followers' enthusiasm:

We feel excited and overwhelmed at the same time. In today's world, the gap is significant. There's a reason why certain technology investments weren't made . . . What you build, because of the time it takes to build it, may be already surpassed by the time it is done. (D. Page, telephone interview, December 10, 2010)
According to IBM’s 2008 global survey, 83 percent of 1,130 CEOs indicated that their companies faced substantial or very substantial change
over the next three years. Two years prior, 65 percent reported facing substantial change. Many of the CEOs surveyed in 2008 admitted they were struggling just to keep up with the rate of change, and technology was given as one of the three major drivers of change. According to the CEOs, "technology advances are reshaping value chains, influencing products and services and changing how their companies interact with customers" (The Enterprise of the Future, 2008, p. 16). The notion that the marriage of leadership and technology can be stressful is important to keep in mind. For example, although people might enjoy, appreciate, or otherwise prefer using web technology (e.g., Potosky & Bobko, 2004, reported that people enjoyed taking web-based tests more than paper-and-pencil versions), some studies have suggested that many people dislike or are reluctant to use technology (e.g., Thompson & Surface, 2007, found that people do not feel comfortable responding to web-based inquiries). Also, Ferran and Watts (2008) reported that working in virtual teams increases team members' cognitive load beyond the requirements of meeting face to face. Research in leadership could seek to provide recommendations for leader behaviors in situations where followers are not comfortable or capable with web-facilitated communication, as well as in situations where the technology in use is more taxing than enabling. Further, the larger issue of the stressors that technology places on leaders as individuals or on leader–follower relationships seems to be an area where psychology can contribute important insight. For example, it is not clear whether use of web-based technology automatically increases cognitive load during interaction, or if the structural and dynamic characteristics (cf., Barry & Fulmer, 2004; Potosky, 2008) of the specific web medium can be effectively altered to become more transparent. Although technology has not been a primary area of focus for I-O psychologists, several contemporary leadership theories, especially complexity, relational, and transformational leadership theories, seem well-positioned to include research at the leadership–technology juncture. In the next section, we briefly describe these frameworks and suggest specific ways they might be used in research on leadership and technology.
ROBUST LEADERSHIP THEORIES FOR A HIGH-TECH WORLD Contemporary leadership theories emphasize the connection between leaders and followers, as well as contextual features constructed within the
Leadership and Technology • 133 frame of leader–follower relationships. As such, modern frameworks of the leadership process can be regarded as implicit theories of leadership, which describe how followers construct a prototype of leaders as they interact with others in social situations (Lord, Foti, & DeVader, 1984; Lord & Maher, 1991). As Lord and colleagues observed, individuals whose behaviors match the prototype and who meet followers’ expectations tend to be perceived and described as leaders. Although most leadership theories do not specifically preclude virtual or technology-facilitated leadership, they do not address the unique demands of leading in IT contexts. Most leadership theories are framed on assumptions of face-to-face interaction between leaders and followers. The structural and dynamic features of technology-facilitated interaction have been articulated in some theoretical frameworks (e.g., Barry & Fulmer, 2004; Potosky, 2008), but these have not been fully explored within the context of leadership processes. Rather than propose new leadership theories here, we encourage new research that extends or refines current leadership frameworks to include either technology as a contextual influence or the use of technology as a tool or medium of interaction in leadership relationships. In an effort to inspire some new considerations for studying leadership and technology, we chose three frameworks that could theoretically ground investigation of some of the research questions raised in this chapter. Complexity leadership theory examines leadership processes within organizational systems. Relational leadership theory presents an evolution beyond Leader-Member Exchange theory (LMX; Gerstner & Day, 1997; Graen, Novak, & Sommerkamp, 1982; Graen & Uhl-Bien, 1995; Liden, Sparrowe, & Wayne, 1997) to examine the dynamic relationships constructed in the leadership processes of organizing and change (Uhl-Bien, 2006). Transformational leadership theory describes individuals who inspire followers to transcend their own self-interests, to buy in to the vision articulated by the leader, and to adopt organizational goals as their own (Bass, 1985; Bass & Avolio, 1993; Burns, 1978). We believe that these three models offer robust theoretical frameworks that can be used in future research on leadership and technology. Complexity Leadership Theory In their article on complexity leadership theory, Uhl-Bien, Marion, and McKelvey (2007) point out that most leadership theory and research has focused on the actions of individual leaders rather than “the dynamic, complex systems and processes that comprise leadership” (p. 299). They
134 • Denise Potosky and Michael W. Lomax also argue that I-O psychologists have not addressed the “new age” knowledge and innovation challenges that are discussed in the management and business school literature. Reminiscent of the angst associated with the cybernation revolution, Osborn, Hunt, and Jauch (2002) noted that current strategy and organization theory tends to minimize the influence of human agency in favor of a more mechanistic view of the organization, but they also pointed out that leadership scholars can make important contributions to organizational theory by demonstrating the role of leadership in the larger context of complexity and change. They observed that “one of the most profound consequences of changes in information technology and globalization has been the recognition that the firm and its executives do not operate as an independent bureaucratic entity” (Osborn, Hunt, and Jauch, 2002, p. 817). New theories of organizing not only need to think in terms of networks within the environment, but also choices about activities inside the firm and about how these activities are managed. Older models of leadership dealt with a very different set of circumstances than those experienced in the contemporary work environment (Davenport, 2001), and few contemporary leadership theories provide a perspective that is relevant in the Knowledge Era (Osborn, Hunt, and Jauch, 2002; Marion & Uhl-Bien, 2001; Schneider & Somers, 2006). Whether technology is treated as a contextual backdrop or a set of tools introduced to an organizational system, complexity theory offers one vantage point for considering the intersection of leadership processes and technological innovations. Uhl-Bien, Marion, and McKelvey (2007) explain that: a complexity leadership perspective requires that we distinguish between leadership and leaders. Complexity Leadership Theory will add a view of leadership as an emergent, interactive dynamic that is productive of adaptive outcomes (which we call adaptive leadership, cf., Heifetz, 1994). It will consider leaders as individuals who act in ways that influence this dynamic and the outcomes. (p. 299)
Complexity leadership theory may provide an appropriate, robust framework for studying the leader’s role in influencing technology-facilitated activities in organizations. Operationally, research might explore “detailed records of how managers network, gather information, plan, think, and communicate as reflected in the form of the technology they use, the websites they visit, and the e-mails they send” (Gardner et al., 2010, p. 950).
Relational Leadership Theory

Uhl-Bien (2006) described relational leadership theory (RLT) as a framework for studying leadership as a social influence process involving co-constructed coordination and change. This RLT framework includes leadership research from the perspective of studying the attributes and interpersonal relationships of individuals who are or who become leaders (an entity perspective) as well as from the perspective of studying leadership as a process of social construction through which understandings about leadership surface (a relational perspective). RLT views leadership as it emerges throughout the relational dynamics within an organization, and as such does not limit the study of leadership to the examination of individuals in assigned leadership roles or positions within an organization's hierarchy (Uhl-Bien, 2006). Given its inclusion of context in its consideration of relational dynamics, the broader framework of RLT is useful for research at the juncture of leadership and technology. For example, Uhl-Bien (2006) points out how RLT extends beyond network theory, which often implies an entity perspective and an examination of "who talks to whom," to focus on dynamic interactions between people and between people and technology. Rather than focus on what leaders do, RLT provides a framework for examining technology-facilitated social networks as organizations in which leadership emerges (and contracts) as people interact. "A key question asked by RLT is: How do people work together to define their relationships in a way that generates leadership influence and structuring?" (Uhl-Bien, 2006, p. 668). This relational perspective lends itself nicely to examining leadership processes in virtual contexts, as it focuses on interaction, conversation, and dialogues (Dachler & Hosking, 1995; Uhl-Bien, 2006). Noting a study by Abel (1990) in which the use of rich audio-visual media facilitated the construction of cohesive teams, Avolio et al. (2000) proposed several research questions related to technology-facilitated leadership that could potentially span great, global distances. For example, they asked how virtual contact throughout the day with others who are geographically far away will alter interactions and expectations. Other research has suggested that technology-mediated communication can actually improve leader–follower relations with regard to followers' apprehension about being evaluated or feelings of domination (Avolio & Kahai, 2003; Kahai, Sosik, & Avolio, 2003). These studies represent a starting point for understanding roles and relationships in technology-facilitated and technology-dependent interactions.
By focusing on "the social dynamics by which leadership relationships form and evolve" (Uhl-Bien, 2006, p. 672), RLT-framed technology studies could examine how people in an organization interact and negotiate leadership roles. Also, RLT lends itself to examining the role of aesthetics or sensory experiences of leaders and followers. This presents interesting new avenues for research on virtual leadership and interaction processes. For example, one might anticipate that leading from a geographical distance using virtual communication technology might encourage autonomy among individual followers. But such arrangements could also foster feelings of remoteness as opposed to interconnectedness. The notion that technology itself has enabled people all around the world to connect with each other belies the social-psychological processes involved in "connecting." The application of RLT to technology-facilitated communication and leadership processes may provide a suitable theoretical framework for future investigations of technology-facilitated leadership phenomena.

Transformational Leadership Theory

Given the pervasiveness of technology-mediated communication within and between all sorts of organizations, contemporary leaders must leverage information technology in the process of building relationships and communicating with followers. Avolio, Kahai, and Dodge (2000) proposed that e-leadership, which they defined as a social influence process mediated by IT tools, techniques, and knowledge, would "transform our models of leadership, and ultimately the way it is measured and developed in organizations, even though many aspects of leadership will also remain the same" (p. 660). Avolio et al. also pointed out that although the specific behaviors needed to influence others will likely require skillful use and knowledge of IT, "leaders who are more inspirational, caring, intellectually challenging, credible, honest, goal-oriented, and stable will still be seen as more effective" (p. 660). Transformational leadership theory and its full range of leadership development framework (Avolio, 1999, 2011; Avolio & Bass, 1994; Bass, 1985; Bass & Avolio, 1993) identify key leadership behaviors that are essential to productive leadership communication and development of followers, presumably in every context: transformational leaders provide idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration (Bass & Riggio, 2006). More than a decade ago, researchers were encouraged to examine how transformational leaders prepare organizations to change along with information technology advancements (Avolio, 1999; Avolio, Kahai, &
Dodge, 2000; House & Aditya, 1997). Few studies, however, have illustrated the influence of transformational leadership on the development, implementation, or acceptance of new technology within organizations. Sosik et al. (2005) reported that successful executives leading in technologically driven environments "possess a high level of cognitive complexity [and] are adept at obtaining, storing, retrieving, categorizing and using new information and integrating a variety of perspectives into strategic solutions that blend technology, people and ideas" (p. 49). Carpenter, Fusfeld, and Gritzo (2010) considered leadership skills and styles for an R&D work population and suggested that transformational leadership (especially inspirational and intellectual stimulation) was an effective approach for radical innovations. Transactional leaders, who were more adept at initiating structure, assigning tasks, and defining subordinates' roles, were more effective with incremental innovations and modifications of existing products. These authors pointed out that these findings are critical for managing expectations and aligning leadership approaches for various team projects. Overall, the full range of leadership model and transformational leadership theory seem well-suited to inform and guide an organization through technological transformation, and future research might verify this expectation. The effectiveness of the transformational leadership framework in accomplishing goals and achieving performance has occasionally been tested in situations involving the use of technology. For example, Purvanova and Bono (2009) used a repeated measures study design to examine the effectiveness of leaders' transformational behaviors in both face-to-face and virtual teams. They found that the most effective leaders (in terms of team performance) were those who increased their transformational leadership behaviors when communicating with the virtual teams. The effect of transformational leadership on team performance was stronger in virtual than in face-to-face teams. Perhaps virtual teams have a greater need for leadership. Thompson and Coovert (2003), for example, found that computer-conferencing teams were less satisfied and more confused than face-to-face teams, and were less content with the outcomes generated by the team. The computer-mediated teams spent more time making decisions, and they had more inaccuracies in independent recordings of the team's decisions. A number of other studies have suggested that computer-mediated teams struggle with communication and coordination (e.g., Dennis, Hilmer, & Taylor, 1997; Straus & McGrath, 1994; Thompson & Coovert, 2003). The need for communication and coordination suggests a need for leadership. More
broadly, these types of studies provide some insight into the influence of technology on leadership processes. Yet, more theoretically grounded research is needed to understand whether advancements in information and communication technology have fundamentally changed leadership processes and the expected behaviors of transformational leaders. For example, research on inspirational leadership, which relates to research on charismatic leadership (e.g., Conger & Kanungo, 1987), originally emphasized the importance of eye contact, nonverbal communication, and the manipulation of symbols and artifacts from the situation in the process of connecting with followers (Gardner & Avolio, 1998; Holladay & Coombs, 1994; Howell & Frost, 1989; Richardson & Thayer, 1993). We found no research that described if and how leaders could earn attributions of charisma through technology-mediated communication with followers. Balthazard, Waldman, and Warren (2009) compared 127 members of virtual decision-making teams with 135 members of traditional face-to-face teams in terms of the relationship between aspects of personality and the emergence of transformational leadership. While personality characteristics such as extraversion and emotional stability were relevant to the emergence of transformational leadership in face-to-face teams, these characteristics were not related to leadership in the virtual teams. After analyzing the content of interactions in the virtual context, Balthazard, Waldman, and Warren (2009) reported that the linguistic quality of a person's written communication predicted the emergence of transformational leadership in virtual teams. It is interesting to note the implication, however, that the use of technology in leadership communication may be a way to mimic or substitute for transformational leadership rather than another representation of authentic transformational leadership behaviors (Balthazard, Waldman, & Warren, 2009). Research from the transformational leadership perspective could compare competing hypotheses: that communication technology provides a substitute for transformational leadership, versus that technology is a communication medium exploited by individuals who might not otherwise emerge as leaders in face-to-face contexts. Overall, transformational leadership theory lends itself to the examination of leadership behaviors in the context of technological innovation as well as with regard to the application and use of advancements in communication technology. In addition, transformational leadership is understood as a reciprocal process whereby both leaders and followers transform each other (Burns, 1978). Research has been remarkably silent about the potential reciprocal influence processes between leaders and
followers within the context of technology as well as with regard to the way technology tools are used. It would be interesting to push further in leadership research to identify not only ways in which leaders have transformed organizations through the use of technology, but also ways in which leaders themselves have been transformed as they interact with followers using technology. Research could also more carefully examine the role of leaders in the evolution of technology itself. Finally, in support of future leadership development efforts, it would be helpful to refine and update transformational leadership theory to define and elaborate technology-facilitated transformational behaviors within the framework of the four I's (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration; Bass & Riggio, 2006).
CONCLUSION

A love–hate relationship is characterized by idealization and admiration on the one hand, and devaluation, disdain, or perhaps even indifference on the other (cf., DSM-IV, 1994). Our examination of leadership psychology research, our consideration of a broader range of scholarship about leadership and technology, and our conversations with business leaders suggest a love–hate relationship between leadership and technology. In practice, leaders in many instances have embraced new technology and have championed IT advancements within their organizations. At the same time, based on our conversations with business and IT leaders, leaders have not always succeeded in obtaining what they had hoped for from technology (as in IT failures) and have sometimes received more than they were prepared for (as in fundamental change in communication channels and follower expectations). Further, one wonders whether individuals recognized within their organizations as leaders with respect to managing and influencing people have indeed been more reactive than proactive when it comes to technology. The field of industrial and organizational psychology anticipated and acknowledged the profound effects of technology on leadership processes and relationships, and some researchers have enthusiastically endeavored to evaluate the effects of leadership on technology adoption, use, and development. That said, despite repeated calls for more research on leadership and technology, the relatively small amount of study on the huge phenomenon that brought us into the "information age" has been rather
140 • Denise Potosky and Michael W. Lomax scattered, often side-stepping refinements to leadership theory in favor of accepting the notion that just like everyone else, leaders are end-users of new technological tools. Research at the junction of leadership and technology has tended to treat technology either as a contextual aspect relevant to the leadership process or as a set of tools that leaders and followers can use to communicate with each other. From the technology-as-context perspective, technological advancements alter the field in which leaders and followers interact. Walsh, Kefi, and Baskerville (2010) suggest that the use of IT entails an acculturation process. Framing the acceptance and use of technology as a cross-cultural phenomenon creates a role for leaders (and managers in organizations) to influence different types of IT users (and potential users and non-users) to participate in an IT cultural group. Leaders themselves might need to assimilate into an IT culture, and ultimately they may need to encourage large in-groups around the culture of IT. From this perspective, the use of IT is less contingent on a technical skill set and more determined by one’s motivation and mindset as well as group- and organizational-level influences. Further, as noted by Avolio, Kahai, and Dodge (2000), leadership can not only promote adaptations to IT-related organizational change, but it is also possible for leadership to ensure that new information technology does not adversely affect the existing sociocultural organizational system. From the technology-as-a-tool vantage point, “the extent to which leadership researchers have incorporated technology into the study of leadership has been disappointing” (Gardner et al., 2010, p. 950). As Avolio et al. (2000) pointed out, the implementation of advanced information technology (AIT) tools and systems requires leadership, specifically to construct and influence social structures: “One of the main challenges leaders face today is how optimally to integrate human and information technology systems in their organizations to fully leverage AIT” (Avolio, Kahai, & Dodge, 2000, p. 617). Perhaps, as with other learning curves associated with communication and learning a new language, the issue of leaders’ ability to embrace technology-facilitated communication relates to learning and fluency in the use of technological tools. That is, maybe leaders (and followers) who are fluent in the use of technology in fostering leader–follower relationships have developed expertise regarding technology-facilitated exchanges that novice “users” might envy—or resent. Leadership research has not carefully examined the extent to which the vision of the future that leaders engender depends on current or future IT. It is not clear to us that new leadership theory is needed to address the
Leadership and Technology • 141 aforementioned gaps in leadership and technology research. We proposed three existing leadership theories that could be used when investigating some of the substantive research questions brought forth in this chapter. Researchers interested in examining leadership processes within social networks or the activities of emergent or identified leaders within organizational systems might use complexity leadership theory as a guiding framework. Those interested in studying technology-facilitated communication within groups or leadership processes in virtual contexts might begin with relational leadership theory in order to understand how people connect with each other and with their leaders. And although transformational leadership theory has already been applied to investigations of virtual teams and comparisons with face-to-face leadership, future research might explore the transformational process experienced by leaders and followers within the context of technological change. Research from any of these frameworks might consider leaders’ perceptions about technology—not only the technology they have and use, but the technology they envision—and how well these perceptions align with followers’ experiences and technological realities. In an open-ended manner, we asked the executives we interviewed to “make a wish” regarding the next generation of information technology. Their responses included the desire for some technical solutions such as advancements in the application of radio frequency identification (RFID), cloud computing, and data storage capabilities within their organizations. But their responses also include socio-technological changes that I-O psychologists might consider. For example, one executive responded, I would like to see the social network platform as the primary communication vehicle for all of the company’s activities (E-mail, instant messaging, etc.). This forum blends social and business transactions to develop a collaborative network organization. That is the future for us. (H. Sundwall, personal interview, November 12, 2010)
Another executive wished for “tools that show how people’s actions are connected to their needs. For example, surveys and customer research tell us what customers are saying, but this does not always match their behavior. I would like tools that narrow the gap between what people say and what they do” (D. Page, telephone interview, December 10, 2010). These responses reflect wishes for technology as a “means” to something greater. Taken together, these wishes reflect optimism about technological advancement that is reminiscent of the aspirations about the technological age and
142 • Denise Potosky and Michael W. Lomax the anticipated leisure society from the 1950s, without even a hint of the cynicism associated with the cybernation revolution. Yet, clearly, leaders in practice seem aware of their role and accountability in relation to where technology takes us. One executive from our interviews said that “technology improves the realm of what’s possible,” but he went on to caution that it does not relieve leadership from the responsibility of improving people and processes (D. Page, telephone interview, December 10, 2010). Research on leadership and technology has, perhaps as an overarching goal, a mandate to examine not only how IT facilitates the emergence of leaders, but also how leaders use technology to create, organize, and connect potentially vast groups of followers. We need greater understanding of how leaders influence and integrate technology and human development. Even though we might hate the inevitable frustrations and constraints that we will undoubtedly encounter, we anticipate that leaders and researchers alike are going to love these exciting new directions.
Section II
Human Factors
7 Human Factors Peter A. Hancock
INTRODUCTION Human Factors and its comparable European antecedent, Ergonomics, have primarily been concerned with people in conjunction with the work they perform. In this chapter, I am going to provide a brief introduction to these two, now coincident scientific disciplines and look to explain how they currently impinge upon the world of work. However, I am also going to look at the nature of work itself and how technology, and our interactions with it, are beginning to change the way we view and conceive work. As I will look to conclude, technology is the most powerful shaping influence on our planet. Those who mediate between people, technology and their work thus exert a highly influential effect on the future. In light of sequential waves of evolutions in technology, I look to consider what futures are therefore probable, possible and even feasible. I want to begin my chapter with a brief overview of the issue of work. In particular, I want to illustrate how work is not only about the sustenance of life but how it plays a crucial role in the social organizations of life as well. Our view of work is therefore not simply a pragmatic and utilitarian exercise in continued physical existence but it is much more fundamentally a political force. In light of this examination, I look to explain the role of Human Factors (HF) in the modern work setting. I seek to show how the actions of those involved in advancing the HF agenda are actually instrumental in an intrinsic, and sometimes explicit, process of social change (Hancock & Drury, 2011). Changing the nature of how people work, especially in an information-processing dominant world, changes the very nature of society itself. As a result, Human Factors and Ergonomics (HF/E) are not merely an academic wedding of knowledge from the psychological and engineering sciences. Rather, they are crucial pursuits at the heart of
a technical and moral revolution (Hancock, 2009a). However, as I have noted before, in order to see clearly into the future, we have to see well into the past. Thus, I will start this chapter with a necessary but limited excursion into history.
TRIPALIARE: ON THE ORIGIN AND CONCEPTION OF WORK The term tripaliare is a very interesting one. It is derived from the Latin word tripalium, which originally described a three-pronged instrument used by farmers to help in threshing during the harvesting process. The word is, to some degree, the basis of the French word travail1 as well as the Spanish word trabajar and these both link to our modern English term for work and thus to workers themselves. However, even more interesting is that the tripalium itself became primarily known, not as an instrument of labor, but rather as an instrument of torture. Indeed, the use of the French term travail when used in English still retains a frisson of the idea of hard and potentially tortuous physical labor. From Victorian times up until the middle of the twentieth century, prisoners were often sentenced to “hard physical labor,” and this harsh penalty was to be served in the penitentiary. To be put to “hard labor” was redolent of the idea of social retribution and a degree of what was viewed as “acceptable” institutionalized torture. When, for example, the Irish playwright Oscar Wilde was sentenced to such physical labor for his purportedly scandalous and society-threatening behavior (Ellman, 1987), the general consensus was that no nominal “gentleman” would be able to survive such an ordeal. In polite society, the fate of non-gentlemen was not discussed. This thread of lexical origins begins to show us that our idea of work is linked originally to the notion of tortuous physical effort and also that inherent social class divisions often derived from the nature of the work that an individual was required to perform. Unending and mindless physical effort was the lot of either animals or the lowest of human classes. We still have remnants in our language which reference “beasts of burden” of both human and animal form. The next level up was the artisan with their greater level of skill, but still primarily featuring physical actions. Beyond the skilled artisans we find the mercantile and business classes whose activities mixed both physical and more mental endeavours. The time-honored tradition of the son of the family “starting at the bottom”
Human Factors • 151 features this sort of intimate, special apprenticeship. Intellectuals such as teachers, the clergy and indeed professors stood one notch above “trade.” Military forces represented the last intermediate class in which the nobility could “buy” a commission, such that the officer class often had little or no actual martial experience, often with disastrous results. Finally, there were the true nobility in which resided the class of gentlemen. Of course, true gentleman simply did not work! (Waugh, 1945). Work then was not simply a necessity for survival but throughout history the nature of one’s obligatory activity has served an incredibly important marker function in connoting and sustaining the fabric of all class-ridden societies. Intrinsic also to that social structure was the necessity to formalize a person’s moral obligation to acquiesce in the status quo. In this matter formal religion exerted a critical influence. Many of our phrases concerning work derive from moralizations backed by elements of JudeoChristian (and indeed other) religious dogmas: “The devil makes work for idle hands”; “Working our fingers to the bone.” In many cultures, idleness and sloth are considered not a blessing but, as their semantic overtones still imply, as sins and promulgated as such. These ideas of sin and the associated epithets each frame the notion that people had their “place” in society and that place was fixed not by power and convenience of the ruling (and indeed leisured class) but by God himself. It was a powerful soporific that still echoes through our world today. Indeed, many of our ideas about work and its nature derive from these foundations and antecedents. But the edifice began to crack when technology began to exert and accelerate its more convivial influences (Illich, 1973). For it is with the advance of technology that we begin to see not simply the greater differentiation of work categories but the emergence of some fundamental questions about the nature of work itself. As work changes, so too do the ways in which it serves as a framework for social organization, albeit slowly and interdependently with other primary factors such as wealth. However, again we have to delve into the past to find the roots of such an evolving revolution.
THAUMATURGIKE: THE LOWEST FORM OF MAGIC Hard, unremitting, grinding physical labor does not serve simply to exhaust; it serves also to inhibit the concomitant use of mental faculties.
152 • Peter A. Hancock Strangely but interestingly, this might just simply be a matter of energy balance. The human brain takes up an enormous number of calories, consuming something like one-third of the resting metabolic rate even though it represents something less than 10 percent of the total body volume. Put in these simple terms—thinking costs! An individual who has been out all day spending their hard earned calories on daunting physical tasks must spend a large part of their time seeking to replenish those calories, not contemplating the fundamental nature of society and the universe. One is reminded of Mr. Bumble in Oliver Twist who eschewed the provision of anything but gruel for the orphans to eat in case it fomented rebellion! Thus, technology that replaces that need for human motive power does not only change the immediate, proximal task, it also provides other avenues and vistas on which the “worker” can now expend their recovered calories. Some of those workers will use those resources to think, and thinking can be a dangerous thing. Like animals, machines can be used to replace human power. However, unlike animals, machines can be both invented and rapidly redesigned and reconfigured for specific work purposes. Creating tools (and by extension machine technology) might indeed be one, if not the, basic marker of the human species (Hancock, 2009a). The conjunction of human intention and the magnification of capacity via physical tools is indeed the very foundation of the discipline of Ergonomics (Jastrzebowski, 1857). However, the recent path of evolution of machine systems has been very much along a vector of dissociating intention from action. That is, originally in tools the individual expressing intention was necessarily the same individual who wielded the tool. However, with the growth of automation especially, the designer of the system could become progressively more remote from the instrument itself. With the inflational rise of calculational complexity, it is the case that today we often cannot be exactly sure who is the designer of some particular system. For highly complex systems such as microprocessors, the designer may even be one step further removed as machines are themselves now required to help design these complex technologies. Who the originators of such systems are in these cases begins to vanish in the mists of impenetrable obscurity. One of the most interesting examples at the beginning of the more practical uses of automation is the story of Dr. John Dee and the “Wheeling Beetle.” Here Dee, then a sixteenth-century English university student, created the “Beetle,” which was an automaton that flew about the stage without any apparent support or human interference. Such was the astonishment that Dee was taken as a magician (Roberts, 2004), and indeed
Human Factors • 153 was known as a necromancer throughout his later life. Such wonder and suspicion serves to remind us of Arthur C. Clarke’s affirmation that any sufficiently advanced technology will, to the untrained eye, appear simply to be magic (http://en.wikipedia.org/wiki/Clarke’s_three_laws). Dee was actually part of a long line of such individuals who created marvellous innovations in automation-based technologies. In writing in a very important text, Dee referred to this level of “magic” as thaumaturgike or what today we would recognize as mechanical engineering. The point here is not so much the specific form of the technology but the fact that its creation allows work to be done independent of the momentary necessity for human interference of either the physical or cognitive kind. Little wonder that in his own world Dee was, at one time, highly regarded and was, for a while, accorded great respect but always treated with great suspicion.2 Like many inventors and innovators, Dee finally ended up penniless and in the gutter—it is a moral tale for us all. The conclusion here is that change in the nature of work is not merely a matter of the direct interface between human and machine, nor is it simply an alteration and amendment of the socio-technical system to hand, it is crucially a lever on the nature of society itself. That is why, when work is changing so quickly, we see the volatilities and uncertainties in society that accompany it. This vector of evolution, and indeed the rate of such change, is liable to continue to accelerate. In respect of Human Factors, it is useful to consider in which direction the future of computationally-supported work may take us, and what role such changes will make in individual and collective lives. Here, we can begin to ask revolutionary questions of the sort: If a job, task, or occupation is thoughtfully-enough designed, is it work at all? If automation is sufficiently advanced—would any human necessarily expend effort on an activity they find aversive? And, perhaps more provocatively—do humans need work for their own well-being? As technology continues to blur the boundaries between work and non-work (leisure?), we have to examine our possible working futures.
WHAT WORK MAY COME? Let us proceed to examine what work may come. To do this, we have to make explicit what we presently see as work. For most people, but crucially not for most of the people who would read this chapter, work is and has always been an aversive and obligatory activity. Most people work
154 • Peter A. Hancock not because they want to but because they have to. As part of an economic framework, people are constrained to provide their labor in order to obtain resources for existence. However, one empirical question can be advanced in respect of this perception of work. That is, does work necessarily have to be aversive in nature? The interesting thing is that for most people who read this chapter, work is hardly aversive at all. For the current reader, what they term work is actually more of a vocation and avocation, and therefore much closer to what the wider community sees as vacation rather than work. The traditional apperception of work is very different. In the sciences of Human Factors and Ergonomics, we have, for several decades, been strongly involved with the transformation in the content of work. We have an enormous number of papers, presentations, and protestations about the transition between physical work and cognitive work. Having contributed extensively to this discussion (e.g., Hancock & Meshkati, 1988), I do not wish to resurrect or reiterate these arguments here. However, one very important issue does arise. While we have focused almost exclusively on the content of work, we have often neglected to consider the nature of work. Our disciplines have discoursed extensively on job design and re-design, the measurement of mental workload, etc, what we have failed to explore is whether it needs to be “work” at all. To develop this argument we need to consider the nominal antithesis. That is, we need to look at what is typically termed “leisure.” Leisure has many of the attributes of work. People often engage in physical and cognitive activity and on many occasions this can be very challenging on each dimension (e.g., mountain climbing, chess, complex video games, etc.). Certainly, calories are expended in the pursuit of a particular goal in the same way as occurs in traditional “work.” But leisure is not “work” in the way we traditionally view work. Consider, for example, the poor office workers who sit before a computer screen each day and are forced to watch a series of images passing across in front of them in order to distil certain specific information. They end up tired, bored, frustrated, stressed and desperate for 5 o’clock to come around. Wearily, those same individuals go home to relax where they crumple up in front of the television and now gladly watch a series of images passing across a TV screen in front of them in order to distil certain specific information! Just as piquantly, we might think of the New York bicycle messenger tracking all over the metropolis all day, who goes home at night only to compete in a BMX competition! Indeed, modern executive fitness centers with their stationary treadmills actually bear a very strong resemblance to the “hard labor” that Oscar Wilde was put to (Hancock, 2009a). The central point
Human Factors • 155 is that the content of work can be very much the same as the content of leisure and it is much more crucially the nature of the compulsion that forms the terms of work. So, let us look at this issue of compulsion. The primary reason why most individuals reading the present chapter do not “hate” their work in the traditional manner is that, although subject to some degree of compulsion, they have much freedom, latitude, and autonomy in their choice of the timing and strategy by which they accomplish their work. For many individuals this is not so, technology dictates all such dimensions to them—in short they have no choice. Therefore, in the future evolution of work, it is not simply enough to change the content of the displayed information and the associated and required responses, we must be much more flexible about the nature of work compulsion. Sadly, as the world goes ever faster (Gleick, 1999), the natural as opposed to social compulsion gets ever more constricting itself. Let me explain. When we meet our bosses for dinner the choice is often theirs as to time and place and we feel constrained to adhere to their wishes. However, their wish is only one social compulsion. Thus, if we arrive at the restaurant an hour late, it is unlikely that the establishment will have run out of food. We may have annoyed our bosses but no one died. If, on the other hand, we are flying an aircraft and it has only one hour’s worth of fuel left, however much we might desire it to be otherwise, we cannot now proceed to an airport two hours away (Hancock, 2009b). The former is a social imperative; the latter is a natural imperative. Much of modern day work is so closely interconnected that failure to perform at the right place and at the appropriate time can leave a whole system vulnerable to failure. So, while we can to some degree re-design our work (Hancock, 2009a), we really need to begin to understand how to re-design time itself. This latter challenge is indeed a difficult one and is one that has largely been neglected in HF and the associated technological sciences. But these are matters of strategy and scheduling for which our colleagues in operations research live. It should not be beyond the realms of conception or even achievement to automatically interconnect operators with the necessary tasks at the right time and place. This is especially true if the workforce of individuals is truly global. The overlying layer of complexity is the economic one which dictates that we must determine how to “value” an individual’s contribution (work) and how to factor this into a profit-driven structure. Interestingly, social imperatives certainly vary by culture (and can indeed be a large source of friction when intercultural/global actions have to be integrated). The empirical question is whether a global work system could be integrated so as to incorporate and
harmonize both these cultural and natural imperatives. If this is possible, it might truly be feasible to provide all individuals with their own personal "teletic" form of "work" (Csikszentmihalyi, 1990). But what of people who wish to engage in no activity whatsoever? How are they to be accommodated in such a system? How do we incorporate issues such as personal ambition, idiosyncratic decisions, and individual expression? Each of these is a fundamentally human challenge in relation to structured and socially beneficial activity. However they are answered, if indeed they can be answered, the solution can and will be expressed through a technological conduit. So, while some see HF in a very limited light, I see HF at the heart of all human existence and indeed central to the expression of aspirations for that existence, individually and collectively. One important question thus emerges concerning the nature of the intimacy that we have with our burgeoning technologies and what the future of, what I have termed, this "self-symbiosis" might look like.
DISAPPEARING DEMARCATIONS: PEOPLE AND TECHNOLOGY One can strike deadly fear into the heart of any modern under-30-year-old just by threatening to remove their cell phone, PDA, iPad, or whatever other technical appendage that today they simply cannot live without. This observation is only partly in jest, for there are many individuals who most certainly rely directly on technology for their continued existence. Primarily in the area of medical devices, these are usually appendages that help with either monitoring or ongoing maintenance of necessary biological functions. A most interesting intermediary case is that of the heart pacemaker, which in some cases is permanently active while in others only serves a monitoring role. Here, the technology lives within the individual. It is no longer an external appendage that one can pick up or put down at will; it is co-resident with the person. These medical devices are thus often obligatory in nature, but the transition to elective, in-dwelling technologies is surely approaching. As I write this passage, there are companies and organizations planning any number of such transitional technologies! First, there are and will be waves of technology that will live in our clothing and in our jewelry as our attachment becomes progressively ever more intimate. Of course, this stage is already well upon
Human Factors • 157 us. Quickly, this intimacy will be replaced by truly “intimate” technologies (i.e., internal) that will live directly on us and within us. The medical device innovations will only be the first wave because they will easily be represented as vitally necessary but already people are starting to live with elective technologies embedded in them. This line of progress, particularly focusing on the promise of nano-technology, does not appear to have any insurmountable barriers across its immediate path. But where does this line of evolution leave HF? Much of the “work” in HF, Human-Computer Interaction (HCI), Usability concerns the “interface” between humans and technology. How many forests have we destroyed talking of menu formats, font, and screen legibility? If there is no screen to link us to the machine, what of the whole science of HF and all that has gone into it? The simple answer is that HF will itself have to adapt to the new technologies. One such example can be garnered from the idea of multi-modal cueing (Merlo, Duley, & Hancock, 2010) in which traditional visual displays are augmented and occasionally replaced with increasingly sophisticated auditory and tactile representations. Indeed, for many in-dwelling technologies, tactile-kinaesthetic communication may very well be the preferred mode of interaction. While these new forms of human-machine feedback loop pose a number of very practical and interesting design challenges, the real question is what human beings are becoming when technology is a resident and growing part of their “personhood.” Once again, questions such as this confirm that HF is not simply a pleasant adjunct to the technology bandwagon but rather HF must be at the forefront of posing, discussing, and resolving crucial issues about the future of humanity itself.
HF AND MORALITY If technology challenges us to understand who we are becoming as individuals, it must even more urgently address the question of how we organize ourselves as communities and more, as a global society. Over the immediate past decades in HF we have seen the development of a focus on an ever larger and larger “unit of analysis.” For both HF and the European form Ergonomics, the middle of the twentieth century featured a primary (and almost exclusive) focus on the individual. How one person performed either physical or cognitive work was the central issue. However, over the intervening decades we have seen the emergence of new
perspectives such as "macroergonomics," which have been much more oriented towards a larger "systems-based" approach. Often formalized as socio-technical systems studies, this burgeoning effort has seen a much greater focus on teams and even "teams of teams." We now hear discussion of "systems of systems" as the level of analysis expands toward even greater horizons. The natural extension of this line of progress is to consider the "system of all human systems," namely society itself. Obviously HF and its traditional pursuits have to now look to understand and incorporate new information from traditional sciences such as sociology and political science, which have spent many decades looking at this particular level of analysis. Our contribution should contain our understanding of how humans work with technology but even more critically should be cognizant of the goals of global society, i.e., the why of existence, not merely the how of technological innovation. The latter requires a much more intimate embrace with the philosophy of moral purpose. Typically, as scientists, we do not see ourselves as engaged in Politics with a capital "P." It might be that we are members of specific parties, and are even involved in some active manner, but this is viewed as "apart" from our science, not integral with it. As I do not wish to commit "la trahison des clercs," I do not believe that science should be involved with Politics at the party level. Nevertheless, HF (and arguably virtually all of science) does act as an explicit agent of change and therefore is necessarily involved with politics, with a small "p" (Hancock, 2011). Thus, I believe that we have to begin to engage in a process in which the science of people and technology plays a much larger role in the way in which we organize and govern ourselves both as nations and as a world. Fortunately, in this view I am not alone and the same general sentiments have been expressed by others in differing scientific disciplines such as neuroscience (Harris, 2010) and systems theory (Ackoff, 2004). What has yet to emerge clearly is a formal, theory-based branch of science specifically dedicated to this issue. One obvious empirical question is whether science can indeed direct society or whether science can and should act only as one partner in a collective that exercises mutual, cybernetic "checks and balances." It is very evident that the founding fathers of the United States, for example, in using this mechanism to ensure the mitigation of totalitarianism, did not foresee the enormous influence that technological advance would wield. Although leading technologists of their own time, they could not predict the constantly accelerating effect of modern computational systems on future social organization. Thus, what we presently possess in
Human Factors • 159 the United States is essentially a palimpsest of eighteenth-century social organization now colliding against the highly technologically-contingent world of the twenty-first century. On a global level it is arguable that such tensions mandate a fundamental revision of structural governance. Problematically, such organizational change does not appear to be evolving at a rate commensurate with the emerging global challenges that are now imposing world-wide compulsions on survival. Even the most cursory glance at international responses to issues such as global warming confirms these impasses. It is here, where people meet technology, that larger-scale HF efforts should have a critical role. However, this will never happen if small-minded visions of HF persist.
SUMMARY AND CONCLUSIONS We are engaged in a most interesting race for survival. On the one hand, technology has served as the boon that has enabled the continuing growth of the human species. On the other hand, technology has served as the curse that has supported the continuing growth of the human species. Technology, then, is our species' conduit to both power and destruction. Both statistically and rationally, it is far more probable that we shall exterminate ourselves, but this in itself is not a certainty, and if answers are to be found that permit survival they will almost certainly be technology-based in nature. Where does HF fit in this welter of growth and self-destruction? Initially, it might seem that HF is one of those minor, interdisciplinary pursuits that attaches itself, remora-like, to the more central and profound efforts in psychology, engineering, and design. However, this is to see HF as fundamentally a pragmatic and contingent pursuit. While this is indeed how some leading individuals have cast HF (Norman, 2010), I cannot agree (Buckle, 2011). While this professional aspect of HF is indeed an important pursuit, HF is at the center of a quincunx of integration between engineering, design, usability, and psychology. It is here that HF must present a fundamental philosophical lead to emphasize the central role of both theory and morality in guiding our future (Hancock, 2009a; Harris, 2010). As I noted above, I am not sanguine about the odds for the survival of the human race in general. However, if we do not learn to actively control, direct, and cogitate about the form, function, and purpose of the technology we create, we are surely lost indeed. In this fateful race for the continuance of
the human species, HF is neither a small matter nor on the edge of things. Whether recognized as such or not, HF is at the very heart of our possible future and our potential for survival. It demands our best and mandates our moral and rational exegesis. I hope that the present chapter can persuade enterprising and ascendant minds toward this challenge.
NOTES
1. Travail: "labor, toil," mid-13c., from O.Fr. travail "suffering or painful effort, trouble" (12c.), from travailler "to toil, labor," originally "to trouble, torture," from V.L. *tripaliare "to torture," from tripalium (in L.L. trepalium) "instrument of torture," probably from L. tripalis "having three stakes" (from tria/tres "three" + palus "stake"), which sounds ominous, but the exact notion is obscure. The verb is recorded from c. 1300. www.etymonline.com/index.php?search=transmutation+of+sounds+&searchmode=none&p=1
2. Dee, for example, was permitted to cast Queen Elizabeth I's horoscope, an act which, without permission, was considered treason as it potentially foretold the demise of the Sovereign.
REFERENCES
Ackoff, R. L. (2004). Transforming the systems movement. Third International Conference on Systems Thinking in Management. Retrieved November 29, 2010, from www.acasa.upenn.edu/RLAConfPaper.pdf
Buckle, P. (2011). The perfect is the enemy of the good: Ergonomics research and practice. Ergonomics, 54(1), 1–11.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper & Row.
Ellman, R. (1987). Oscar Wilde. New York: Vintage Books.
Gleick, J. (1999). Faster: The acceleration of just about everything. New York: Pantheon.
Hancock, P. A. (2009a). Mind, machine, and morality. Chichester, England: Ashgate.
Hancock, P. A. (2009b). Human factors and ergonomic issues involved in the disappearance and search for Amelia Earhart. Ergonomics in Design, 17(4), 19–23.
Hancock, P. A. (2011). Notre trahison des clercs: Implicit aspiration, explicit exploitation. In R. W. Proctor & E. J. Capaldi (Eds.), Psychology of science: Implicit and explicit reasoning (pp. 479–495). New York: Oxford University Press.
Hancock, P. A., & Drury, C. G. (2011). Does Human Factors/Ergonomics contribute to the quality of life? Theoretical Issues in Ergonomics Science, 12, 1–11.
Hancock, P. A., & Meshkati, N. (Eds.). (1988). Human mental workload. Amsterdam: North-Holland.
Harris, S. (2010). The moral landscape: How science can determine human values. New York: Free Press.
Illich, I. (1973). Tools for conviviality. New York: Harper & Row.
Jastrzebowski, W. B. (1857). An outline of ergonomics, or the science of work (2000 ed.). Warsaw, Poland: Central Institute for Labour Protection.
Merlo, J. L., Duley, A. R., & Hancock, P. A. (2010). Cross-modal congruency benefits for combined tactile and visual signalling. American Journal of Psychology, 123(4), 413–424.
Norman, D. (2010). www.7.nationalacademies.org/dbasse/30th%20Anniversary%20of%20the%20Board%20on%20Human%20Systems-Integration.html
Roberts, R. J. (2004). Dee, John (1527–1609). Oxford Dictionary of National Biography. Oxford: Oxford University Press.
Waugh, E. (1945). Brideshead revisited: The sacred & profane memories of Captain Charles Ryder. Boston: Little, Brown & Co.
8 Usability Science II: Measurement Douglas J. Gillan and Randolph G. Bias
MEASURING USABILITY Engineering requires a science base. Accordingly, just as mechanical engineering builds upon the science of mechanics in physics, usability engineering must build on usability science (Gillan & Bias, 2001). The pursuit of good usability in any human artifact—an electronic device, a website, a workflow, a gardening tool—can be demonstrated to lead to positive and often robust return-on-investment for the hours and dollars any organization spends on usability engineering practice (e.g., Bias & Mayhew, 2005). Indeed, in the Bias and Mayhew (2005) book on cost-justifying usability there are chapters addressing the pursuit of usability practice in startups (Crow, 2005), in vendor companies making software for sale (Rohn, 2005), in large corporations developing internal applications for their own use (Mauro, 2005), and in organizations developing web-based applications (Karat & Lund, 2005). Here, one-eighth of the way through the new century, every computer user would easily be able to come up with enough personal anecdotes of recent frustrations with user interfaces to suggest that not every organization has taken this message of usability engineering to heart, or has figured out how to successfully implement a program of usability engineering. The use of machine-centered terminology ("Fatal error #27."), the pretty but unreadable white text on a yellow background, the second (and third, and fourth) request for the same information during the completion of an online form, the need to attend a one-hour in-house training session in order to be able to complete the annual employee benefits selection process, all sit as silent but maddening signposts of the need for more, and better, usability engineering.
Usability Science II: Measurement • 163 Our previous paper on the science base of usability (Gillan and Bias, 2001) attempted to identify some of the content-related foundations of usability science. That science base for usability can be found in research from human factors, cognitive psychology, social psychology, anthropology, and other social, cognitive, and information sciences. The present paper serves as a sequel and addresses issues of measurement in usability science, with a strong focus on psychological measurement. Philosophers and sociologists of science have recently suggested that defining science in a way that excludes pseudoscience and other non-science endeavors is very difficult (e.g., Taylor, 1996). However, a reasonable characterization of science suggests that it has the goals of (1) describing past and current phenomena; (2) predicting future phenomena; and (3) providing testable mechanistic explanations of past, current, and future phenomena (Leary, 1985). The tools related to these goals include the scientific method, descriptive statistics, inferential statistics, scientific laws, models, and theories. One feature central to all of these tools is measurement. Although measurement may not be unique to science, it does serve as its cornerstone.
MEASUREMENT THEORY AND MEASUREMENT MODELS One key feature of measurement in science is that often the explicit measure serves as a pointer to an implicit construct that is the real object of conceptual interest (for example, see Blalock, 1979; Kuhn, 1961). For example, the timing or amplitude of an event-related potential in the brain is neither brain activity nor attention; rather it is a representation of brain activity that may be correlated with attention. Measurement theory is the field of applied mathematics that is concerned with the basic principles for assigning numerical values to quantifiable characteristics of an implicit construct such as attention. Our goal in this chapter is to examine issues related to the measurement of usability, not to provide a general review of measurement theory; excellent reviews can be found in Krantz et al. (1971) and Allen and Yen (2002). One version of measurement theory, representational measurement theory, focuses on relating quantitative structures (the representation) with quantifiable empirical observations. Luce and Suppes (2001) suggest that ordinary real numbers are the most common representing structure.
164 • Douglas J. Gillan and Randolph G. Bias We use real numbers because they have many valuable properties—they can be used to convey cardinality, ordering of amounts, differences between amounts, and ratios of amounts. When we use real numbers to represent observable empirical entities (like the number of jelly beans spread on a plate), the application of the numbers to the empirical entity is apparently direct. However, if we use real numbers to represent unobservable empirical entities (like the number of jelly beans in a jar), we need a principled way of relating the numbers to the entity. Most psychological research involves constructs like learning, perceptual processes, or cognitive processes that are even less observable than the jellybeans in the jar—we can sense only a motor outcome (perhaps in the form of a verbal report) of the operation of the construct. So, an important issue is how we relate the construct to the numbers used to measure it. Theories and hypotheses suggest mechanisms that relate changes in a predictor variable (e.g., the number of items in a set of stimuli to be remembered) to the measured value of a criterion variable (e.g., time to respond to a target item), which are then related by inference to a covert construct (e.g., working memory). However, to be complete the theory or hypothesis should include a set of ancillary assumptions that include a measurement model. The purpose of the measurement model is to account for how the unobservable construct is translated into the observable behavior that is measured. Research by Donders (1868/1969) on the subtractive approach and Sternberg (1969) on the additive factors approach supply examples of measurement models. Briefly, Donders (1868/1969) developed a set of elemental tasks that he believed consisted of a sequence of discrete and independent cognitive operations. The simplest task, which Donders called the reaction time task, required a participant to make a simple response to a specific stimulus. For example, the participant might press a key with a right-hand index finger when a red light turned on, and only the red light was ever presented in the reaction time task. The response time measure was theorized to involve the operations of detecting the stimulus and making the response. The next most complex task, a stimulus discrimination task, required the participant to respond to the stimulus but not to a different stimulus. Continuing with the example, the participant would press the key when the red light turned on, but not press the key when the green light turned on, with the red and green lights equally likely to occur. The time to respond to the red light in this task was theorized to consist of processing the stimulus, discriminating the presented stimulus from the other stimulus, and making the response. Thus, the difference in time between the stimulus discrimination task and the simple reaction time task gives
a measure of the time required to discriminate between the two stimuli. The third task, the choice task, required the participant to respond to one stimulus with one response and to a different stimulus with a different response. Extending the example, the task would involve responding to the red light with a press of the right key and to the green light with a press of the left key. If a red light were turned on, the operations involved would be detecting the red light, discriminating the red from the green light, selecting the correct response to make for the red light, and making the response. So, the difference in response time between the choice task and the discrimination task gives an estimate of the time to choose between two responses. Thus, Donders' subtractive method was based on a measurement model that explicated the precise relation between the response time measure and the covert construct of the human information processing step. A century later, Sternberg (1969) described another approach to using response time to measure unobservable mental events. His paradigm involved developing a task that varied the levels of two separate variables theorized to affect independent mental processes in a factorial design experiment. He proposed that if the processes are independent, then varying the level of one variable should have no influence on the relation between the other variable and the time to complete the total task. That is to say, the two variables should have no significant interaction. In contrast, if the tasks share processing steps, then varying the levels of one variable should affect the relation between the other variable and the task completion time. Sternberg was interested in whether the stages of an item recognition task—perceptual encoding of a stimulus and retrieving the meaning of that stimulus from memory—are independent. For example, one can independently manipulate (1) perceptual encoding by degrading the readability of a target probe letter (e.g., by obscuring features) and (2) retrieval from memory by varying the size of a memory set (e.g., by asking people to keep in mind a set of two, three, four, or five letters). The experiment to test the independence of the processing stages would have a 2 × 4 design. Observing an interaction between the readability of the probe and the memory set size would indicate that the stages in item recognition shared processing steps. Both Donders' subtractive method and Sternberg's additive factors method have come under attack for a variety of reasons. For example, both require that processing steps be carried out in sequence. However, with certain assumptions, parallel processes can produce results predicted from the Sternberg approach (e.g., Corcoran, 1971; see also Townsend, 1990).
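The logic of these two chronometric approaches can be made concrete with a short sketch. The response times, condition names, and the use of simple cell-mean slopes in place of a formal analysis of variance are all illustrative assumptions, not data or procedures from either study.

```python
# A minimal sketch of the subtractive and additive-factors logic, using
# invented mean response times (in milliseconds).

# --- Donders' subtractive method ---
rt_simple = 220.0          # detect the stimulus + make the response
rt_discrimination = 290.0  # detect + discriminate + respond
rt_choice = 360.0          # detect + discriminate + choose a response + respond

discrimination_stage = rt_discrimination - rt_simple   # time to discriminate
choice_stage = rt_choice - rt_discrimination           # time to select a response
print(f"Estimated discrimination stage: {discrimination_stage:.0f} ms")
print(f"Estimated response-choice stage: {choice_stage:.0f} ms")

# --- Sternberg's additive-factors logic ---
# Hypothetical cell means for a 2 (probe readability) x 4 (memory set size) design.
rt_means = {
    ("intact", 2): 450, ("intact", 3): 490, ("intact", 4): 530, ("intact", 5): 570,
    ("degraded", 2): 520, ("degraded", 3): 560, ("degraded", 4): 600, ("degraded", 5): 640,
}
set_sizes = [2, 3, 4, 5]

# The effect of memory set size (ms per added item) at each readability level.
slopes = {}
for readability in ("intact", "degraded"):
    rts = [rt_means[(readability, s)] for s in set_sizes]
    slopes[readability] = (rts[-1] - rts[0]) / (set_sizes[-1] - set_sizes[0])
print(f"Set-size effect, intact probe: {slopes['intact']:.0f} ms/item")
print(f"Set-size effect, degraded probe: {slopes['degraded']:.0f} ms/item")

# If readability and set size affect separate, sequential stages, the set-size
# effect should be the same at both readability levels (no interaction).
if slopes["intact"] == slopes["degraded"]:
    print("Additive effects: consistent with independent processing stages.")
else:
    print("Interaction: the two factors appear to share processing steps.")
```

In an actual additive-factors experiment, of course, the presence or absence of the interaction would be tested inferentially on trial-level data rather than read off two cell-mean slopes.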
It should be clear that, if a given theory and associated measurement model accurately predicts a particular experimental outcome, a different theory and measurement model combination might also predict that set of results. Results in support of a theory do not compel scientists to accept the theory in the way that results that are counter to a theory can compel us to reject the theory. Our goal in this section of the chapter is not to argue that developing a measurement model as an explicit part of a theory or hypothesis will guarantee the uniqueness of that theory to account for a phenomenon. Rather, our goal is to show how, in certain research, a measurement model is an integral and explicit part of the theory. Another frequent type of measure in psychological research is scaling. Scales involve having participants assign numerical values to a property of a psychological state (often the magnitude of that state). Typically, that state has been evoked by a question or a stimulus. The old folk saying is that we should not compare apples and oranges. But, anyone on a budget who has gone to the grocery store has compared apples and oranges on a common mental scale of value. If I wanted to investigate how you compared apples and oranges, I might approach you at the grocery store and ask you to rate the value of an apple on a scale from 1 to 10 and to rate the value of an orange using that same 10-point scale. A typical consumer would be able to make both ratings quite easily. Further, a substantial difference in those ratings would be likely to correspond to the consumer's choice behavior. Embedded in this story of a simple consumer behavior study are yet other measurement models: the model relating choice to the value scale ratings and models relating the scale value ratings to the mental representation of the value of apples and oranges. Modern scaling is typically traced to Stevens' (1946) paper in which he classified scales as nominal, ordinal, interval, and ratio on the basis of the scale transformations, mathematical structures, and statistics that could be applied. This approach is an example of measurement theory as applied to quantitative judgments by humans. Stevens also developed scaling methods, including magnitude estimation (1957). Magnitude estimation is a procedure in which participants are instructed to judge the strength of a stimulus using a ratio scale. Magnitude estimation has been especially popular in the study of suprathreshold sensory stimuli. These studies have typically not included an explicit measurement model, perhaps because the use of magnitude estimation implied a model in which the covert construct (a set of relations among sensations) and the measure (the scale) appeared to be isomorphic. However, Birnbaum and Elmasian (1977) have found evidence that the cognitive operation underlying magnitude
Usability Science II: Measurement • 167 estimation involves determining differences in the magnitudes of stimuli in a set rather than their ratios. Their findings show the need for developing an explicit measurement model, rather than accepting an implicit one. In summary, we consider a measurement theory to be a general-purpose set of principles and axioms about how measures relate to empirical observations. In contrast, a measurement model, as we conceive of it here, involves a specific purpose model that describes a relation between the measure used as the dependent variable in an experimental paradigm and the construct being tested in the experiment. As such, the measurement model might be thought of as an enhanced version of an operational definition of a criterion variable. The rest of this chapter will focus on measurement models for usability.
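Before turning to usability, the concern raised by Birnbaum and Elmasian (1977) can be made concrete with a small numerical sketch: if judges actually compute differences between subjective values and a later output stage maps those differences onto numbers exponentially, the resulting reports mimic ratio judgments exactly. The subjective values and the exponential output rule below are assumptions made purely for illustration; they are not parameters or procedures from their study.

```python
import math

# Hypothetical subjective magnitudes for four stimuli (arbitrary internal units).
subjective = {"A": 1.0, "B": 1.6, "C": 2.3, "D": 3.1}
standard = "A"

def ratio_process(stimulus):
    # A "true" ratio operation on exponentiated internal values.
    return math.exp(subjective[stimulus]) / math.exp(subjective[standard])

def difference_process(stimulus):
    # A subtractive operation whose output stage maps the difference onto
    # the number scale with an exponential response function.
    return math.exp(subjective[stimulus] - subjective[standard])

for stimulus in subjective:
    print(f"{stimulus}: ratio model = {ratio_process(stimulus):.2f}, "
          f"difference-plus-output model = {difference_process(stimulus):.2f}")

# The two columns agree, so the reported "magnitude estimates" alone cannot
# reveal which covert operation produced them; only an explicit measurement
# model, and experiments designed to separate the two accounts, can.
```

The point is the one made above: without an explicit measurement model, the apparent isomorphism between the scale and the covert construct is an assumption, not an observation.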
WHAT DO WE MEAN WHEN WE TALK ABOUT “USABILITY”? In an everyday discussion, we might tell a neighbor that a certain computer is very usable or is easy to use. The neighbor would likely have a general understanding of what we meant by that statement. That everyday use of the term “usability” may suffice for the purposes of talking with neighbors or of marketing technological artifacts, but is not adequate for scientific investigations or for the purposes of engineering-related measurement. We would go further to argue that, in the absence of a measurement model as an explicit part of usability studies, usability researchers and engineers are engaging in little more than marketing. (Indeed, one of us has opined frequently on the costs of this lack of rigor—Bias, 2003, 2008, and 2011.) Various definitions have been proposed for usability. For example, the International Organization for Standardization (commonly called the ISO) has proposed efficiency, effectiveness, and user satisfaction as the component parts of usability (ISO 9241–11). Lund (2001) used these components to create the Usefulness, Satisfaction, and Ease of use (USE) questionnaire. Nielsen has added two components to the definition of usability—learnability and memorability—and changed effectiveness into errors (actually, lack thereof). Other components, such as aesthetically pleasing and confidence inspiring and so on, might be added to the list. McGee, Rich, and Dumas (2004) identified 25 usability components from the literature, then asked 46 users to rate how integral each of the components was to their definition of usability. They analyzed the ratings
168 • Douglas J. Gillan and Randolph G. Bias using hierarchical cluster analysis and factor analysis and identified five general groupings, which fell into two large categories of Usability and Not Usability. Under usability were (1) core characteristics, including consistent, efficient, organized, easy, and intuitive; (2) secondary characteristics, such as effective, familiar, controllable, complete, beneficial, and useful; and (3) tertiary characteristics like expected, natural, worthwhile, and flexible. The not usability groupings were focused on (1) satisfaction; and (2) style. So, McGee et al., would eliminate satisfaction, one of the core constituents of the other definitions. Merely listing components does not produce a measurement model. These lists leave us with many critical questions—for example, is usability the simple summation of the values of the components as they are found in a technological artifact? Or perhaps a summation of weighted components? Must all components be present for an artifact to be deemed usable? Or could a very high effectiveness rating overcome the lack of efficiency? Do the components interact such that there is a multiplicative component based on the product of two or more individual components? Or might a combination of components create an emergent property? Sauro and Kindlund (2005) have used a principal components analysis to produce a model of usability that results in a single score from a weighted combination of efficiency (measured as task completion time), effectiveness (measured as number of errors and task completion amount), and satisfaction (measured by rating of satisfaction). This type of measurement model comes from a psychometric/classical test theory approach to measurement theory (Allen & Yen, 2002). In addition to the questions raised by the listing of components, the definitional approach to measuring usability seems dissatisfying because it appears to place usability in the technological artifact. Might the definition vary depending on the product (e.g., e-commerce website vs. information-only site), the user type (power user vs. neophyte), or the environment (emergency situation vs. not)? For example, we would say that the interface is organized or the display is familiar. However, an alternative is that usability does not exist in either the technology or the user, but in the interaction between the user and technology. It is that the interaction of a user with the artifact, not the artifact per se, is efficient or effective or satisfying. This is not unlike current views of information and the recognition that “. . . information can result, it can evolve, or it can emerge but only from the interaction of the entity with a human or intelligent agent” (Dillon, 2005, p. 311). Taking an interactional approach leads us to the view that usability depends on the context in which the interaction occurs, particularly the context of the user and task. A user with such a visual disability as red-green color deficiency
Usability Science II: Measurement • 169 would have a very different interaction with most modern visual displays than would a person with full color vision. Likewise, people who are novices with an interface differ in their interaction compared to experts. Our responses to pull-down menus—the ones that seemed so helpful when we were first using a system—change as we gain experience and knowledge (and yearn for shortcuts). Or one user might have very different needs as he or she moves from the interface for word and graphics processing at work to playing a game or searching for a restaurant on Google. An adequate measurement model should take user and task and task context (environment) into account.
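To make these measurement-model questions more concrete, consider a minimal sketch of the weighted-combination approach exemplified by Sauro and Kindlund (2005), shown below in Python. It standardizes task time, error counts, and satisfaction ratings and sums them with weights; the weights, the standardization, and the sign conventions are illustrative assumptions, not the values or procedures reported by Sauro and Kindlund.

```python
# Illustrative sketch only: combining efficiency, effectiveness, and
# satisfaction into a single usability score. Weights and scaling are
# assumptions for demonstration, not Sauro and Kindlund's published values.
from statistics import mean, stdev

def standardize(values):
    """Convert raw scores to z-scores so different metrics share a scale."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def single_usability_score(task_times, error_counts, satisfaction_ratings,
                           weights=(1/3, 1/3, 1/3)):
    """Weighted sum of standardized components; higher = more usable."""
    # Reverse the sign of time and errors so that larger values mean better.
    efficiency = [-z for z in standardize(task_times)]
    effectiveness = [-z for z in standardize(error_counts)]
    satisfaction = standardize(satisfaction_ratings)
    w_eff, w_err, w_sat = weights
    return [w_eff * a + w_err * b + w_sat * c
            for a, b, c in zip(efficiency, effectiveness, satisfaction)]

# Example: three participants tested on the same task.
scores = single_usability_score(
    task_times=[95, 120, 60],        # seconds
    error_counts=[2, 5, 1],
    satisfaction_ratings=[4, 2, 5],  # 1-5 rating scale
)
print(scores)
```

Even this toy version forces the analyst to answer measurement-model questions explicitly—how each component is scaled, how components are weighted, and whether simple summation is the right combination rule—before a single "usability" number can be interpreted.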
MEASURES OF USABILITY Time and Accuracy as Measures of Usability The review of the components of usability above suggests that there is near universal agreement that efficiency and effectiveness are two components that define usability. Common measures of efficiency and effectiveness are task completion time and error rate, respectively. One potential benefit of working on issues concerned with human interaction with technology and usability is that the measurement model appears to be quite simple. Of course, appearances can be deceptive. Defining efficiency as time to complete a task and effectiveness as a measure of errors made in interacting with a specific technological artifact, then, one could argue that no inferential leap is required from the overt measures to a covert construct. In other words, the measured time and accuracy of the interaction with the artifact to perform the task refer directly to the constructs of interest. Contrast this with the research in cognitive psychology by Sternberg (1969) described above in which the measure of response time serves as an observable marker for cognitive processing stages. As a consequence, cognitive psychology research requires that complex measurement models relating the measured variables to the true variables of interest accompany the cognitive research (or at least they should). Unfortunately, this apparent benefit may be illusory. For example, even in usability testing, the researchers want to generalize from the specific test conditions and participants to other environments and users. Such generalizations require at least some degree of inference to translate between the conditions of measurement and those of application. Those inferences would certainly be aided by developing a measurement model.
In addition to the above concerns, if usability is fully defined by task completion time and error rate, why bother with the construct of usability? Why not simply refer to task completion time and error rate, which are more direct? The construct of usability would no longer be needed. But if we could identify examples in which time was not synonymous with usability, that would indicate that there is something more to usability than time. We will attempt this by means of a thought experiment. Imagine that you have to select which of two new statistical software systems is most usable in completing statistical analyses. Also, imagine that you are moderately expert in statistics, so you want to understand how the software works for complex analyses. One product, StatKnow, allows you to track the assumptions that the system makes as it performs analyses, whereas the other product, StatNo, just performs the analyses without explanation. Both give you the same analyses, but StatNo is markedly faster than StatKnow. Based solely on the finding of comparable accuracy and lower response times, StatNo is the obvious choice as the most usable. But the example suggests that time and errors may not be the only factors to consider when measuring usability. The well-documented phenomenon of the speed-accuracy trade-off (e.g., Woodworth, 1899) suggests another issue related to time and errors as measures of usability. Imagine that Designer A improves the interface of a computer system so that the information is easier to perceive and to relate to the task. Designer A's improvement would be likely to increase the speed at which the task could be accomplished and to reduce errors in performing it. However, Designer B might modify the interface so that users could complete the task faster, but only by trading accuracy for increased speed. Using the new and improved Design B interface would reduce task completion time but increase errors. The examples in this section indicate that, although time and errors may appear to be direct measures of a desired outcome for usability-centered design, there is more to these measures than meets the eye. As a consequence, explicit measurement models, which relate time and errors to "usability," will be necessary to interpret results from usability tests.
Rating Usability
Another common way to measure usability or user satisfaction is to have users (or "participants") perform a task with a technological artifact (or maybe multiple artifacts) and rate the usability of the artifact. Traditionally,
Usability Science II: Measurement • 171 the rating scales used in these studies are limited-point category scales, often with labels and/or numerical values for each category. These kinds of scaling methods are referred to under a variety of names, such as Likert scales, Stapel scales, and semantic differential scales that differ according to the number of values in the scale and the types and numbers of labels of the categories. For example, a Likert scale (Likert, 1932) is usually considered to be a five- to nine-point scale in which each point relates to the amount of agreement with a statement. The values on the scale are symmetrical around a neutral point, with degree of disagreement on one side and degree of agreement on the other side. Category scales of these types are subject to systematic sources of distortion as users select a response that represents their actual judgment. Common sources of distortion or bias are greater likelihood of agreeing with statements (acquiescence bias); the tendency to try to look good to others (social desirability bias); and use of certain numbers or avoidance of others typically avoidance of the values at either end (central tendency bias). In addition, these scaling techniques are very sensitive to the range of stimuli to be judged and the frequency with which various stimuli are judged (e.g., Parducci, 1965). An alternative approach to rating usability or user satisfaction using category scales is the use of magnitude estimation (e.g., McGee, 2003). Magnitude estimation of usability involves (1) instructing participants in the use of a ratio scale and giving them practice using the scale; (2) defining usability (for example, by the ISO 9241 international standard); (3) providing multiple artifacts to be rated, typically one at a time (presenting the artifacts would usually include having the participant interact with the artifact); and (4) having the participant use the ratio scale to rate the usability of the artifact or their satisfaction with their interaction with the artifact (the ratings would typically be made shortly after interacting with each artifact, as opposed to rating all of the artifacts following the entire session). Many of the sources of distortion or bias that affect category ratings—number selection biases, and Parducci’s range and frequency effects—influence magnitude estimation, as well. Unfortunately, research using rating scales does not typically provide measurement models that explicate how people translate covert states, attitudes, and judgments into overt numbers. This is a problem with much of the literature on rating scales in general, but the problem includes the use of ratings to measure usability. Research on distortion or biases in how people use rating scales provides a start on this issue. However, usability researchers and engineers have typically treated rating scale data as if the
ratings were a perfect reflection of the covert constructs that they are meant to measure. Before accepting this proposition, which at face value seems unlikely given the known influences of bias, research is needed on how ratings relate to the construct of usability.
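To illustrate why such research is needed, the following toy simulation (not an established model) treats an observed five-point rating as a biased, noisy transformation of a latent satisfaction value; the acquiescence and central-tendency parameters are invented solely for demonstration.

```python
# Toy simulation: an observed Likert-style rating as a biased, noisy
# transformation of a latent satisfaction value. Parameters are invented.
import random

def observed_rating(latent, acquiescence=0.4, central_pull=0.3, noise_sd=0.5):
    """latent is on a 1-5 scale; returns an integer rating from 1 to 5."""
    value = latent + acquiescence               # tendency to agree
    value = value + central_pull * (3 - value)  # avoidance of the endpoints
    value = value + random.gauss(0, noise_sd)   # unsystematic error
    return max(1, min(5, round(value)))

random.seed(1)
for latent in [1.5, 3.0, 4.8]:
    ratings = [observed_rating(latent) for _ in range(1000)]
    print(latent, sum(ratings) / len(ratings))
```

With even these made-up distortions, mean observed ratings no longer track the latent values in any simple way, which is precisely the mapping a measurement model would need to specify before ratings can be read as direct indices of usability.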
THE DANGER OF PROLIFERATION OF CONSTRUCTS AND MEASURES In an editorial in the American Journal of Public Health titled “When to welcome a new measure,” Kasl (1984) stated, “it is difficult and expensive to assemble such cross-disciplinary expertise and to create a smoothly functioning, multifaceted research team” (p. 106). Although Kasl applied this suggestion to epidemiology, it seems to apply just as well to usability research in the early part of the twenty-first century. He further proposed that one popular approach to bridging the gaps created by failures to adequately represent various cross-disciplinary components is “THE MEASURE: an instrument for assessing some construct or other which is so irresistibly useful that it guarantees interpretable and fascinating results no matter how or where it is used” (p. 106). Kasl (1984) goes on to suggest that a new measure should be welcomed into a discipline only if the researchers (1) continue to develop the measure; (2) relatedly, recognize the limitations of the measure and the assumptions on which it is based; and (3) do not use the measure as a means of overcoming poor or absent theories, research designs and methods, or failures of making use of transdisciplinary knowledge. In addition, a new measure should add to the ability to describe, predict, or explain phenomena. Implicit in Kasl’s suggestions is a critique concerning the unnecessary proliferation of constructs. In the study of interaction with usability, a cluster of constructs relate to interactions with technology: “user friendly,” “usability,” “user experience,” “utility,” “accessibility,” “usercentric,” “learnability” among others. Can we differentiate among these constructs? If so, can we identify the unique elements as well as the common elements among the constructs? If they are not conceptually different, why do we have multiple constructs? Let’s take, for example, usability and user experience. The evaluation of user experience and the measurement of usability have been proposed to differ due to a greater focus of user experience on emotional response as a function of experience (e.g., Hassenzahi & Tractinsky, 2006). However, with satisfaction as a component
of usability measurement, it would appear that emotional response is included in usability. One might argue that satisfaction does not cover the full range of emotions, such as fear, surprise, sadness, joy, and disgust. However, measures of user experience do not appear to provide a broad-range measure of emotion either. Moreover, rather than creating competing measures, it would be clearer and more parsimonious to expand an existing measure, like usability, to include the appropriate types and ranges of emotional response. Among the dangers of the proliferation of constructs is a lack of conceptual clarity leading to miscommunication. If researchers, engineers, and designers in the fields interested in human-technology interaction use several different terms to communicate about the same things—efficiency, effectiveness, and/or satisfaction, for example—they may not realize that they are all referring to the same thing. In addition, progress in the scientific basis of usability can be delayed by having researchers argue over the meaning and usefulness of multiple constructs that largely overlap. Kasl's suggestion to continue to refine, develop, and understand what and how we are measuring would seem to be the best advice at this stage in the development of usability science.
A MODEL OF INTERACTION WITH TECHNOLOGY AS A BASIS FOR A MEASUREMENT MODEL OF USABILITY A Description of the Model We have argued above that the measurement of usability for scientific purposes requires a measurement model that specifies the relation of the covert construct of usability to a measure, whether it be task completion time, errors, or ratings. In order to produce the measurement model, a model of the processes involved in human interaction with technology would be useful. Our intention for this section is to propose such a model and to identify how it might be turned into a measurement model for usability. Norman (1988) proposed a seven-stage action cycle that moves from a user’s goal to execution to an affect on the world to feedback from the world to evaluation of the feedback and finally back to the goal. More specifically, in the action cycle, (1) forming a goal leads to (2) an intention followed by (3) a sequence or plan to act which results in (4) the execution of the
174 • Douglas J. Gillan and Randolph G. Bias action sequence. The action sequence is the point of interaction with the world, and the world is typically a technological artifact with which the user is interacting to perform a task. The conceptual distance between the goal and the world is known as the Gulf of Execution (Norman, 1988). The response of the world would typically involve feedback from the technological artifact—it might produce a document or change the information on a display. The user then (5) perceives the world, (6) interprets that perception, and (7) evaluates the interpretation in terms of the original goal. If the goal is met, the user moves to a new goal (the next goal in a sequence, or a completely novel goal); if the goal was not met, then the user may continue to keep the original goal active, if it continues to be appropriate. The distance between the feedback from the world and the evaluation of the feedback is called the Gulf of Evaluation. This model of interaction with the world is analogous to early post-Chomsky models of language production and comprehension in which a kernel sentence or idea is transformed into a linguistic utterance through a series of stages (for example, see Garrett, 1984). Similarly, the evaluative side of the action cycle is analogous to early post-Chomsky language comprehension models by which a heard utterance is transformed into a semantic representation in a sequence of stages (e.g., Fodor, Bever, & Garrett, 1974). More recent theories of response production have moved from a hierarchical series of transformative stages to sets of processing cycles that include both feedforward (to permit prediction of action) and feedback (to allow for monitoring of actions; see McLaughlin, Simon, and Gillan, 2010, for a historical review of action production models). However, nested within those cycles are the backbone of a hierarchy of stages. Figure 8.1 adopts this type of hybrid model of action production to human-technology interaction. As in the Norman seven-stage model, our hybrid model starts with the formation of a goal. That goal has explicit provenance in the knowledge that the user has of the task and system. A user’s goal has a basis, likely in his/her knowledge of the task and the requirements of the task, as well as his/her knowledge of the system and the constraints that the system exerts on the user. The goal elicits an intention, while at the same time provides anticipatory input to a comparator (C1). The comparator checks the intention to make sure that it matches the goal. A mismatch at this point would indicate an error in intention formation, such as a mode error (Norman, 1981), for example, when a user does not recognize that a keyboard is in the caps lock mode and types in CAPS UNINTENTIONALLY. If the comparator catches the mismatch, the mistake could be stopped before it occurred, either by modifying the intention so that it more closely maps onto the goal or by terminating the process. The goal would not have been
FIGURE 8.1 A model of human–technology interaction showing hierarchical stages of processing, feedforward cycles, feedback cycles, and error monitoring via comparison of anticipated inputs. (The figure traces the flow from task and system knowledge through goal, intention, action plan/selection, and action to the system's acceptance and interpretation of the action, its change of state, and the display feedback, which the user then perceives and interprets; comparators C1–C5, an efference copy, an action predictor, predictive feedback, and expectations about feedback type and location support error monitoring.)
met, so it would produce a new intention. In a catalogue of the types of errors that users could recall making, Mentis (2003) reports no errors related to the formation of a goal or an intention. It may be that, rather than there being no errors of this type, errors involving goal or intention formation (1) do not reach the level of consciousness and so are not reported; (2) are described in ways that result in their classification as other types of errors; or (3) are caught by the error monitoring system by means of comparison in C1 and are prevented before becoming overt responses. The next stage involves the translation of the intention into a plan; concurrently, the intention is fed forward to the comparator C2. After the plan is developed, C2 compares it to the intention and, if there is a mismatch, the plan can be corrected. This stage moves from an abstract representation of an action to a more concrete representation that includes the functional motor systems and types of actions (e.g., reaching, pressing a button, grasping), as well as their sequence.
176 • Douglas J. Gillan and Randolph G. Bias A common type of error that would result from this stage would be a capture error. A capture error occurs when a new sequence of actions, e.g., A-B-C-X-Y, are supplanted by a similar, but much more familiar sequence of actions, e.g., A-B-C-D-E. An example might involve an experienced Macintosh user interacting with a Windows machine, then forgetting and using Macintosh command sequences. Mentis (2003) found approximately 21 percent of the errors reported were at this stage. The occurrence of these errors suggests that the comparator did not catch the mismatch between intention and plan. The action plan is transformed into motor acts, with environmental stimuli providing triggers for the appropriate action (e.g., Rumelhart & Norman, 1982), as well as affordances that guide that specific action (Norman, 1988). As the plan is transformed into action, a copy of the action (the efference copy) is created as a prediction of the action and is compared to the intention in comparator, C3. If the action that is about to be performed does not map onto the plan, then the plan can be modified to either terminate the action early or to ameliorate the effects. So, a misordering of actions—for instance in typing one hand may get ahead of the other, leading to a transposition error—might be averted. If the comparator does not catch the mismatch, the action proceeds. An example of catching the slip would be if a user intended to buy a cola from a vending machine, but pushed the button prior to putting money into the machine, then recognized that he/she needed to put the money in first. Errors involving translation of action plans into the response were relatively rare in the Mentis (2003) study, with just 4.5 percent of errors. However, production of the response, which is also covered in this stage, resulted in 68 percent of the errors. In addition to the feedforward loop, the prediction of the action plays a role in a feedback loop. When the action is performed the user experiences motor and proprioceptive/kinesthetic/tactile feedback from the action. This actual feedback and the predicted feedback based on the predicted action are compared in C4. This comparison allows users to identify an error immediately after committing it, before the feedback from the system has occurred. In a successful interaction with technology, the technological system accepts the user’s input action and interprets that action appropriately. This leads to a change of system state, which the system’s feedback reflects, often in a visual and/or auditory form, such as a change in a display. Ideally, the user will perceive the feedback; the task and system knowledge may help users to direct attention to certain locations or to anticipate certain types
Usability Science II: Measurement • 177 of system feedback. Once the user perceives the system feedback, he/she needs to interpret it in terms of the intended action, which occurs in C5. Mentis (2003) classified 7 percent of the errors that users recalled as having occurred in the stages in which feedback from the system is perceived or interpreted. Finally, if the system feedback is consistent with the intention, the user can move to the next goal, but if it is inconsistent with the intention, the goal will be reactivated, and a somewhat modified intention will be produced. Implications of the Model for Usability Measurement As the above discussion of the hybrid model of human-technology suggests, the model has clear predictions about the types of errors that should occur as a function of the stage of processing. Errors in the formation of an intention differ from those in the translation of an intention to a plan. For example, failure to recognize the current mode would indicate that the user had made an error early in processing, leading to formation of the wrong intention. In contrast, a capture error is one in which two tasks share a number of responses at the start of each task, but then diverge; the error occurs when the user should follow one sequence after the point of divergence, but follows the other sequence, usually from the more familiar or currently more salient task. Thus, classification of errors and the rates of error within each type could be as important as a measure of overall error rate. In addition to the type of errors, the model suggests ways to reduce errors both by designing to prevent errors and designing to enhance the ability to catch errors through the complex error monitoring system. In terms of measurement of usability, a new measure of the difficulty of designing ways to prevent errors might be recorded from designers. This measure would not be without problem as individual differences in designers’ abilities and creative ways of thinking about error prevention could lead to differences in the measure that would be unrelated to usability. Also, an innovation from this model is that users catch errors during the processes involved in transforming goals into actions and system feedback into an interpretation of goal-related outcomes of the action. Accordingly, designers could develop systems that enhance these abilities, for example by leading the user into a unique action sequence such that the sequence would be highly predictable. A new measurement here might be to get users to articulate caught errors during their interaction with a technological artifact. The overall number of made errors and caught errors, as well as the ratio of
made to caught errors would be informative concerning the usability of the artifact. Task completion time, based on the hybrid model, would be a function of the number of goals required to complete the task multiplied by (1) the time to form each goal; and (2) the time to complete each processing step and the feedback and feedforward cycles for each goal. However, just as the model makes predictions concerning the type rather than the rate of errors, the model makes predictions that go beyond simple total task completion time. For example, if the processing stages are independent, then manipulating them should lead to additive effects. In contrast, if the stages share processing steps, then they should have a multiplicative relation, as well as any additive ones. So, for example, a usability researcher might manipulate the number of available modes to increase the time to form intentions. The researcher might also manipulate the difficulty of processing the feedback concerning the action performed and the artifact's response to the input. Those two factors should be additive and independent (no interaction between them). Another way to think about the various relations in this thought experiment would be that the task completion time would be a simple linear function of the difficulty of forming the intention and the difficulty of interpreting the artifact's perceptual feedback concerning the input action. Finally, if they were separate processes with no shared elements, the relation between forming the intention and task completion time would not be moderated by the difficulty of interpreting the feedback. A finding of an interaction between two of the most distant processing steps in the model would result in a reassessment of the model. One might expect that processing steps close together in the model would be most likely to share processing, and that sharing processing could increase workload and lead to decreased usability. So, task-artifact combinations that had the most distinct processing steps might be high in usability due to less competition for resources by concurrent steps. Accordingly, another new measure related to usability might be to manipulate the features of the interface to vary the processing in adjacent steps in the model. For example, manipulating (1) the number of available modes (to affect the formation of an intention); and (2) the number of steps shared by two different tasks that users perform with the artifact (to vary the likelihood that the wrong task could capture the sequence of responses) would involve adjacent processing steps. An interaction between those two factors would suggest shared resources. We believe that user satisfaction might be related to several features within the model. For example, the rating of satisfaction should depend
on the consistency between the goal and the interpreted feedback at the end of each processing step. Inconsistent goal-feedback relations would indicate that the actions were not leading to the goals being met, which would be unsatisfying. One way to investigate this prediction would be to sometimes provide users with misleading interpretations of the consequences of their action, such that the relation between action and feedback was random. Satisfaction might also be expected to be related to the ability to form clear intentions from the goals, to develop an action plan that leads directly to actions, and to interpret feedback from the artifact's display with limited cognitive effort. Difficulty with any one of these would reduce user satisfaction independently of the others, with overall satisfaction being an additive function of the successful negotiation of each processing step. Finally, although one might anticipate that performance measures and user satisfaction would be highly correlated, many instances of dissociation between performance and satisfaction exist (see Dillon, 2001). The present model, and any model of user interaction with technology, should be able to account for this dissociation among measures. The hybrid model presented here makes interesting predictions concerning the measurement of usability. It is presented not as a finished product but as a work in progress. We hope that it will serve as an impetus to research centered on what measurement means and how it is performed in the science and engineering of usability, and that the research will help us to refine the model. We started with the proposition that usability engineering requires usability science. Often, science should precede engineering to provide a knowledge base for the development of engineering concepts, methods, and measures. However, at this point in the development of usability, usability engineering methods and phenomena will often precede science, so that the purpose of the science will also be to describe and account for phenomena discovered during engineering activities (e.g., Gillan & Bias, 1992; Gillan & Schvaneveldt, 1999).
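As a hedged illustration of the additive-factors logic described above, the sketch below assumes hypothetical stage times for a 2 × 2 manipulation of intention-formation difficulty and feedback-interpretation difficulty and computes the interaction contrast on total task completion time; the numbers are invented and are not derived from the model itself.

```python
# Hypothetical illustration of the additive-factors logic: if two processing
# stages are independent, their effects on task completion time should add,
# and the interaction contrast should be zero.
def task_time(intent_hard, feedback_hard, shared_stage=False):
    base = 10.0                                  # seconds for all other processing
    intent = 4.0 if intent_hard else 2.0         # intention-formation time
    feedback = 3.0 if feedback_hard else 1.5     # feedback-interpretation time
    total = base + intent + feedback
    if shared_stage:                             # shared resources add a joint cost
        total += 0.5 * intent * feedback
    return total

def interaction_contrast(shared_stage):
    # (hard, hard) - (hard, easy) - (easy, hard) + (easy, easy)
    return (task_time(True, True, shared_stage)
            - task_time(True, False, shared_stage)
            - task_time(False, True, shared_stage)
            + task_time(False, False, shared_stage))

print(interaction_contrast(shared_stage=False))  # 0.0 -> purely additive effects
print(interaction_contrast(shared_stage=True))   # > 0 -> interaction, shared processing
```

When the two stages are independent, the contrast is zero; introducing a shared-processing term produces a nonzero interaction, which is the empirical signature of shared resources proposed in the text.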
BURNING ISSUES
1. Kasl's (1984) suggestion to continue to refine, develop, and understand what we are measuring and how we go about measuring it would seem to be the best advice at this stage in the development of usability science. It follows from that suggestion that more work on both the psychometrics of usability measures and the theories that lead to usability measures should be central foci of work on usability.
2. For the usability/HCI researcher, this is a call for empiricism regarding user-centered design methods. We need more research regarding which methods yield the highest return on investment for the usability hour and dollar invested.
3. For the usability practitioner in an organization, this "refine, develop, and understand what and how we are measuring" will be best realized by staying abreast of the research on user-centered design methods and by insisting on excellence in the design of their own evaluation studies, including avoidance of confounds, attention to bias and other threats to validity, and selection of methods based on the research findings noted above.
4. In addition, usability practitioners in an organization should attempt to serve as information gatekeepers who import methods and measurement techniques into the organization from the scientific discipline outside the organization, while also interpreting and championing those methods and measures for potential users (e.g., Gillan & Bias, 1992; Gillan & Schvaneveldt, 1999).
5. Continual changes in technology should lead us to identify the core issues in usability to ensure that we measure those, while at the same time updating the measures to reflect the new needs of users.
REFERENCES Allen, M. J., & Yen, W. M. (2002). Introduction to measurement theory. Prospect Heights, IL: Waveland Press. Bevan, N. (2009). What is the difference between the purpose of usability and user experience evaluation methods? UXEM’09 Workshop, INTERACT 2009, Uppsala, Sweden. Bias, R. G. (2003). The dangers of amateur usability engineering. In S. Hirsch (chair), Usability in practice: Avoiding pitfalls and seizing opportunities. Annual meeting of the American Society of Information Science and Technology, October, Long Beach Bias, R. G. (2008). Panelist. Discount testing by amateurs: Threat or menace? Steve Krug (Chair). Usability Professionals’ Association annual meeting, Baltimore, June. Bias, R. G. (2011). The importance of rigor in usability studies. In A. Marcus (Ed.), Design, user experience, and usability: Theory, methods, tools, and practice (HCI 2011 International) (pp. 255–258), Berlin: Springer. Bias, R. G., & Mayhew, D. J. (Eds.) (2005). Cost-justifying usability, 2nd Edition: Update for the Internet age. San Francisco: Morgan Kaufmann.
Usability Science II: Measurement • 181 Birnbaum, M. H., & Elmaisan, R. (1977) Loudness “ratios” and “differences” involve the same psychophysical operation. Attention, Perception, and Psychophysics, 22, 383–391. Blalock, H. M. (1979) The presidential address: Measurement and conceptualization problems: The major obstacle to integrating theory and research. American Sociological Review, 44(6), 881–894. Corcoran, D. W. J. (1971). Pattern recognition. Middlesex, PA: Penguin. Crow, D. (2005). Valuing usability for startups. In R. G. Bias & D. J. Mayhew (Eds.), Costjustifying usability, 2nd Edition: Update for the Internet age. San Francisco: Morgan Kaufmann. Dillon, A. (2001). Usability evaluation. In W. Karwowski (Ed.), Encyclopedia of Human Factors and Ergonomics. London: Taylor and Francis. Dillon, A. (2005). So what is this thing called information? In H. van Oostendorp, L. Breure, & A. Dillon (Eds.), Creation, deployment and use of digital information. Mahwah, NJ: LEA, pp. 307–316. Donders, F. C. (1868/1969). On the speed of mental processes. Republished in W. G. Koster (Ed.), Attention and Performance II. Acta Psychologica, 30, 412–431. Fodor, J. A., Bever, T. G., & Garret, M. F. (1974). The psychology of language: An introduction to psycholinguistics and generative grammar. New York: McGraw-Hill. Garrett, M. F. (1984). The organization of processing structure for language production: Application to aphasic speech. In D. Caplan (Ed.), Biological perspectives on language (pp. 173–193). Cambridge, MA: MIT Press. Gillan, D. J., & Bias, R. G. (1992). The interface between human factors and design. Proceedings of the Human Factors Society 36th Annual Meeting (pp. 443–447). Santa Monica, CA: Human Factors and Ergonomics Society. Gillan, D. J., & Bias, R. G. (2001). Usability science. I: Foundations. International Journal of Human-Computer Interaction, 13, 351–372. Gillan, D. J., & Schvaneveldt, R. W. (1999). Applying cognitive psychology: Bridging the gulf between basic research and cognitive artifacts. In F. T. Durso, R. Nickerson, R. Schvaneveldt, S. Dumais, M. Chi, & S. Lindsay (Eds.), The handbook of applied cognition (pp. 3–31). Chichester, UK: Wiley. Hassenzahi, M., & Tractinsky, N. (2006). User experience—a research agenda. Behaviour and Information Technology. 25, 91–97. ISO FDIS 9241–210:2009. Ergonomics of human system interaction—Part 210: Humancentered design for interactive systems (formerly known as 13407). International Organization for Standardization (ISO). Switzerland. Karat, C-M., & Lund, A. (2005). The return on investment in usability of web applications. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability, 2nd Edition: Update for the Internet age. San Francisco: Morgan Kaufmann. Kasl, S. V. (1984). When to welcome a new measure. American Journal of Public Health, 74, 106–108. Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement: Vol. 1. Additive and polynomial representations. New York: Academic Press. Kuhn, T. S. (1961). The function of measurement in modern physical science. Isis, 52, 161–193. Leary, R. A. (1985). A framework for assessing and rewarding a scientist’s research productivity. Scientometrics, 7, 29–38. Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1–55.
182 • Douglas J. Gillan and Randolph G. Bias Luce, R. D., & Suppes, P. (2001). Representational measurement theory. In H. Pashler (Series Ed.) & J. Wixted (Vol. Ed.) Stevens’ handbook of experimental psychology, Vol. 4, Methodology in experimental psychology, 3rd ed. (pp. 1–38). New York: John Wiley & Sons. Lund, A. M. (1997). Expert ratings of usability maxims. Ergonomics in Design, 5(3), 15–20. Lund, A. M. (2001). Measuring usability with the USE questionnaire. STC Usability SIG Newsletter, 8:2 (available online at www.stcsig.org/usability/newsletter/0110_ measuring_with_use.html). McGee, M. (2003). Usability magnitude estimation. In Proceedings of the Human Factors and Ergonomics Society, 47th Annual Meeting (pp. 691–695). Santa Monica, CA: HFES. McGee, M., Rich, A., Dumas, J. (2004). Understanding the usability construct: Userperceived usability. In Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting (pp. 907–911). Santa Monica, CA: HFES. McLaughlin, A. C., Simon, D. A., and Gillan, D. J. (2010). From intention to input: motor cognition, and the control of technology. In D. H. Harris (Ed.), Reviews of Human Factors and Ergonomics, Vol. 6 (pp. 123–171). Santa Monica, CA: HFES. Mauro, C. L. (2005). Usability science: Tactical and strategic cost justifications in large corporate applications. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability, 2nd Edition: Update for the Internet age. San Francisco: Morgan Kaufmann. Mentis, H. (2003). User recalled instances of usability errors: Implications on the user experience. In Proceedings of the Conference in Human Factors in Computing Systems (CHI 2003) (pp. 736–737). New York: ACM. Nielsen, J. www.useit.com/alertbox/20030825.html Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1–15. Norman, D. A. (1988). The design of everyday things. New York: Doubleday. Parducci, A. (1965). Category judgment: A range-frequency model. Psychological Review, 72, 407–418. Reason, J. (1990). Human error. Cambridge, UK: Cambridge University Press. Rohn, J. A. (2005). Cost-justifying usability in vendor companies. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability, 2nd Edition: Update for the Internet age. San Francisco: Morgan Kaufmann. Rumelhart, D. E., & Norman, D. A. (1982). Simulating a skilled typist: A study of skilled cognitive-motor performance. Cognitive Science, 6, 1–36. Sauro, J. & Kindlund E. (2005). A method to standardize usability metrics into a single score. In Proceedings of the Conference in Human Factors in Computing Systems (CHI 2005) (pp. 401–409). New York: ACM. Sternberg, S. (1969). The discovery of processing stages: Extensions of Donders’ method. Acta Psychologica, 30, 276–315. Stevens, S. S. (1946). On the theory of the scales of measurement. Science, 103(2684), 677–680. Stevens, S. S. (1957). On the psychophysical law. Psychological Review, 64, 153–181. Taylor, C. A. (1996). Defining science: A rhetoric of demarcation. Madison, WI: Univerisity of Wisconsin Press. Townsend, J. T. (1990). Serial vs. parallel processing: Sometimes they look like Tweedledum and Tweedledee but they can (and should) be distinguished. Psychological Science, 1, 46–54. Woodworth, R. S. (1899) Accuracy of voluntary movements, Psychological Review, 3, 1–101.
Section III
Emerging Areas
9 Robots: The New Teammates Elizabeth S. Redden, Linda R. Elliott, and Michael J. Barnes
INTRODUCTION
In the past, robots in the workplace were primarily static machines that loaded and unloaded stock, assembled parts, transferred objects, and performed unsafe, manually intensive, highly repetitive tasks. Today, however, robots are being used in many new and exciting ways by more and more organizations. The U.S. Army is an example of an organization that has rapidly acquired and successfully incorporated robots into its ranks. The contribution of robots to Army operations is evidenced by their widespread use, with many thousands of assets deployed for diverse combat missions—such as collecting reconnaissance information, supporting logistics (e.g., carrying materiel), executing combat operations, protecting personnel, and retrieving the combat wounded (Axe, 2008). Their effectiveness has made robotic assets ubiquitous and extremely diverse. As Army robotic missions have become more multifarious, their areas of operation and their scope have expanded. The widespread prevalence and increasing variety of these robot assets and their controllers create a major challenge for robot operators, who must learn to manage a spectrum of assets. There is a corresponding challenge to human factors specialists to design human–robot control interfaces that are simple and easy to use. This is particularly crucial for the warfighters who must use these systems while maintaining awareness of their surroundings at all times. There is also a challenge to designers to understand how warfighters think and how they may react to new robotic technology, especially when bullets are flying and decisions are many and must be made quickly. The competition for warfighter attention is high, particularly for visual and audible attention (Mitchell, 2005, 2008, 2009). Warfighters must
interpret their immediate surroundings in light of their mission and remain attentive to the threat, while maintaining awareness of their own resources, fellow warfighters, civilians, and potential combatants. Within this context, operators must be able to control robot performance with minimal cognitive workload. This chapter will discuss three approaches for improving robotic performance in the workplace that have proven fruitful during military experimentation—multisensory and telepresence displays for teleoperation, semi- or fully-autonomous capabilities, and control of multiple robots. While this discussion is focused on Army infantry use, generalizations are easily made to similar contexts. Army infantry missions involve highly coordinated and hierarchical decision-making and action teams, often working under high stress, uncertainty, and time pressure, and are thus similar to other occupational settings such as those of emergency first responders, firefighters, police, and search-and-rescue teams. The first approach involves the use of multisensory and telepresence displays to immerse the operator in the environment of the teleoperated robot. A major challenge in designing interfaces for teleoperation is maximizing the amount of information processed and acted on by the operator while minimizing cognitive workload. One prominent method for improving task information delivery is the use of multisensory devices. These are devices that convey task information through multiple or alternative sensory channels. This multisensory approach includes the concept of telepresence displays, which are multisensory in nature, to enable the perception of "being there"; such displays usually include naturalistic controls to reduce cognitive workload (Wickens, 2008) and to induce more expert-based naturalistic cognition (Klein, 2008; Schraagen et al., 2008). For example, head-mounted camera controls allow the operator to view and control camera movements intuitively by means of a head-mounted display worn like a pair of glasses. Head movements to control the camera are instinctual because they are identical to the movements one makes when turning the head to look at objects outside the field of view. The second approach is the development of semi- or fully-autonomous capabilities to perform facets of relatively complex missions while the operator is performing other aspects of the mission. While autonomy sounds like a panacea for increasing efficiency, decreasing labor costs, and decreasing workplace hazards, it must be implemented using a systematic, thoughtful approach. Allocation of functions between humans and robots is an area that needs careful consideration. Also, automation has been shown
to create its own set of problems, such as decreased situation awareness (SA), distrust of automation, misuse and disuse, complacency, vigilance decrements, and adverse impacts on other facets of human performance; these problems must be taken into consideration during task allocation (Chen, Barnes, & Harper-Sciarini, 2011). The third approach is to enable a single operator to control multiple robots. The future workplace will be inundated with robots, and one-to-one control will be impractical considering the multitude of systems involved. Many-to-one supervision will require increased autonomy, which in turn introduces problems such as those described above. The use of intelligent agents is one way to focus the individual's attention on multiple robots while maintaining SA. Each of these three approaches is valid depending upon the task and, often, an integrated approach is most appropriate. It is important to determine which approach is best for each specific task. Thoughtful implementation will ensure that the workplace of the future is more productive, less costly, and safer for the humans who will inhabit it.
MULTISENSORY AND TELEPRESENCE DISPLAYS FOR ROBOT CONTROL Multisensory and telepresence displays comprise two approaches to robot controller design with intuitive control as the goal. Multisensory displays are designed to offload visual processing task demands to other sensory channels (e.g., tactile, auditory, olfactory, and taste). Such devices typically augment visual information through auditory and/or tactile senses, and they facilitate information-processing primarily by easing visual workload and guiding user attention. This multisensory approach creates a novel and intuitive means to aid warfighters in general. Here, we discuss issues and provide examples of how a multisensory system can enhance robot control operations in the military. Telepresence is usually multisensory; however, in telepresence there is an overarching goal of enabling an operator to feel “present” in the robot situation, and to be able to control the robot as naturally as possible (van Erp et al., 2006). For example, a telepresence controller often has a head-controlled camera, where the operator’s head movements control the gaze direction of the robot sensor. This naturalistic approach to robot control enables more intuitive control of robot movement and sensor-driven perceptions.
188 • Elizabeth S. Redden et al. Multisensory Displays Advancements in technology enable information exchange at unprecedented levels; however, the warfighter can become overwhelmed creating a cognitive burden, particularly under conditions of stress and fatigue. Under these circumstances, poor decision making, slower response times, and generally poor performance can result because the individual is too focused on processing information rather than performing tasks (Wickens, 2008). Thus it is critical for designers to ensure future information displays are designed to optimize information perception, interpretation, and decision making. Because many emerging Army systems provide visual information (e.g., maps, diagrams, photographs, text, information system displays, etc.), there are multiple demands for focal visual attention in addition to the need for general SA (Mitchell, 2005, 2008, 2009). Two theory-based arguments support the use of multisensory displays to help alleviate the cognitive burden. One approach, Multiple Resource Theory (MRT) (Wickens, 2002, 2008), is based on distribution of information processing and task behaviors across different sensory channels, in order to offload workload when a particular channel is overloaded. MRT defines these attentional resources, resource interactions, and situational constraints to predict the degree to which information from a particular sensory channel can be effectively offloaded to another channel. While MRT emphasizes the benefit of information processing that is distributed through different cognitive resources, a second argument is based on reduction of cognitive effort through more intuitive displays and controls. The Prenav model (van Erp, 2007) posits that sensory input can bypass effortful deliberation and, instead, lead to more intuitive response and automated performance. Consider the use of a steering wheel for vehicle navigation—it provides a natural, intuitive means for vehicle control. In the same way, sensory cues can elicit a natural alerting response (e.g., reaction to a tap on the shoulder or a loud noise). Thus, multisensory displays can be designed to not only capitalize on offloading visual workload onto alternate sensory information processing channels; the cues themselves may be designed to elicit a more directly intuitive response. Meta-analyses of empirical data support these predictions (Coovert et al., 2008; Elliott, Coovert, & Redden, 2009; Prewett et al., 2012). Given the accumulation of empirical and theory-based support for multisensory displays in general, Redden et al. (2009) investigated the potential of a multisensory display for robot control through consideration of additional tactile direction cues. Tactile vibratory cues (e.g., eight tactors
FIGURE 9.1
Tactile belt associated with multisensory display condition and picture of single C-2 tactor
mounted on a torso belt) have been used effectively by warfighters for navigation and communication, resulting in performance that they have described as “eyes-free,” “hands-free,” and “mind-free.” Warfighter feedback on these tactile systems was extremely positive, and warfighters were able to use the tactile systems very well after only 15 minutes of training (Elliott et al., 2010). In this particular experiment, they compared a 6.5" split screen display, providing camera and map information at the same time with a smaller 3.5" display that provided either camera or map information, one at a time. It was reasonable to expect better performance with the split screen display; however, it is very desirable to have smaller, more lightweight displays for warfighters. The Global Positioning System (GPS)-driven tactile belt (Figure 9.1) was integrated with the smaller single screen system, so that the operator could view the camera while getting direction information from the belt, thus reducing reliance on map-based information. Results indicated that the operator performed more poorly with the single screen compared to the split screen and that the addition of the tactile belt to the single screen condition enhanced performance so that it was comparable to the split screen condition. Warfighter feedback indicated lower workload when the tactile belt was used with the smaller system and was generally very positive for the multisensory display (Redden et al., 2009). Thus, the addition of the tactile direction cues enabled a reduction in the size of the display, without loss in performance.
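As a rough sketch of how a GPS-driven tactile direction display might operate in principle, the code below computes the bearing from the operator's position to a waypoint and selects the nearest of eight tactors spaced evenly around the torso. The tactor numbering, the update logic, and the coordinates are illustrative assumptions, not the design of the belt evaluated by Redden et al. (2009).

```python
# Illustrative sketch: map a GPS-derived waypoint bearing to one of eight
# torso-mounted tactors. All parameters are hypothetical.
import math

def bearing_to_waypoint(lat, lon, wp_lat, wp_lon):
    """Approximate initial compass bearing (degrees) from position to waypoint."""
    d_lon = math.radians(wp_lon - lon)
    lat1, lat2 = math.radians(lat), math.radians(wp_lat)
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def tactor_for_direction(bearing_deg, heading_deg, n_tactors=8):
    """Pick the tactor (0 = front, numbered clockwise) closest to the waypoint direction."""
    relative = (bearing_deg - heading_deg) % 360
    return round(relative / (360 / n_tactors)) % n_tactors

# Example: waypoint roughly due east of an operator who is facing north.
b = bearing_to_waypoint(35.0, -79.0, 35.0, -78.99)
print(b, tactor_for_direction(b, heading_deg=0.0))  # ~90 degrees -> tactor 2 (right side)
```

The point of the sketch is simply that direction information can be delivered entirely through the tactile channel, leaving the eyes free for the camera view, which is the offloading logic the study exploited.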
TELEPRESENCE DISPLAYS FOR ROBOT CONTROL In contrast to regular ground teleoperation of robots using a joystick or gaming controller, telepresence capabilities incorporate features such as
190 • Elizabeth S. Redden et al. immersive three-dimensional (3-D) vision, 3-D audio, and head-driven camera movement controls. Normally in telepresence, the operator wears a head-mounted display and head tracker to allow the operator to move the camera through head movements, as if they were seeing through the robot’s “eyes.” Immersive telepresence has also been demonstrated for situations more directly relevant to military operations, such as search and rescue, forestry, mine operations, remote security guard, and reconnaissance. To date, results are mixed but show promise. For example Halme, Suomela, and Savela (1999) investigated several levels of telepresence for field tasks, ranging from “full” telepresence (i.e., stereovision, sound, two degrees of freedom head tracking (up-down and left-right)) to partial combinations of individual telepresence features. They applied these controller conditions to several different mission-relevant tasks. Overall, results showed that effectiveness of capabilities depended on the task demands, the opportunities for training and practice (operators improved considerably with all conditions after practice), and the degree of novelty of the task (e.g., operators facing unknown territory or performing a task as a novice). Generally, performance was better when the head tracking was included because it assisted during various tasks, while stereo vision may need additional improvements to aid performance. Further prototypes for military applications are being developed for a number of uses such as remote sentry with directional 3-D spatial audio to detect, localize, and track vehicle targets (Overland, 2005). In a study by Yamauchi and Massey (2008), immersive teleoperation, based on a headmounted display and head-aimed cameras, was expected to enable higher driving speeds than current small Army unmanned ground vehicles (UGVs). Researchers reported that this system enabled operators to drive at full speed (estimated top speed of 30 mph) even while making turns and that warfighters reported increased SA. Researchers speculated that the telepresence, along with semi-autonomous driver assist capability (e.g., automation to help the robot stay in the lane), will prove more effective as higher speeds are attained. Again, there is further need for systematic comparison of telepresence features and specific combinations of those features, as they apply to specific tasks. Elliott et al. (2012) describe two experiments comparing telepresence with standard robot controllers. In the first experiment, while the telepresence system was rated highly by the warfighter operators, objective results were not conclusive. In addition, several warfighters reported headaches or nausea from the telepresence system. Engineers refined the
Robots: The New Teammates • 191 telepresence system to better address the issue of warfighter discomfort and researchers revisited task demands to develop and refine a follow-on experiment that also added the 3-D audio capability and corresponding audio search task demands. The full telepresence condition consisted of stereo vision, 3-D audio, and head tracked camera controller (see Figure 9.2). The baseline telepresence used mono-vision, mono audio, and a joystick camera controller. A third condition used mono vision, mono audio, and the head tracked camera controller. This second telepresence experiment yielded significant differences among conditions. The telepresence condition was associated with the fastest mean times in two different search tasks and with higher mean percentage of correct identifications, with these differences in time and identifications being more pronounced for the more difficult targets. In addition, mean ratings of NASA TLX workload were significantly lower for the telepresence condition. Thus the telepresence condition was associated with faster and better performance and lower experience of workload. The headtrack condition was associated with improved performance and lower workload compared to the mono-joystick. The addition of stereo audio and stereo vision further improved performance and lowered workload when added to the headtrack capability (Elliott et al., in review). There was also evidence for the role of individual differences; in this case, for differences in spatial ability as measured by the Cube Comparisons Test (Ekstrom et al., 1976). Spatial ability was found to be a direct contributor to robot control performance and also a moderator of the effects of display on performance. Spatial ability correlated significantly with audio search measures and when they were entered as a covariate in repeated measures analyses of display and audio search time, there was a
FIGURE 9.2
Unmanned Vehicle “Generaal” stereo vision (left) and 3D audio features (right)
192 • Elizabeth S. Redden et al. significant interaction between display and spatial ability. This interaction was reflected in the different correlation values between spatial ability and audio search times for the different display conditions. The correlation with performance was lowest for telepresence, with the implication that telepresence allowed participants with lower spatial ability to perform somewhat better than the other conditions. A continuing problem for telepresence is that of motion sickness. While refinements to software were conducted to minimize discomfort, some warfighters reported symptoms of eyestrain, headaches, motion sickness, nausea, and dizziness. These symptoms may be due to factors related to stereo vision (e.g., competition between eyes, time lag in camera movements) or mild simulator sickness (Elliott et al., 2012). In response to questionnaire based inquiries, warfighters also commented on the weight and fit of the headset, reporting that it was too heavy/hard/tight and contributed to their discomfort, and the need for increased resolution in the video display. When asked which controller condition was preferred, 13 of 18 responses were in favor of the telepresence condition. Reasons provided for the preference included overall ease of use, and in particular, ease of visual search and target localization. These comments were consistent with overall ratings of controller characteristics (Elliott et al., 2012). In summary, multisensory and telepresence controls and displays are designed to allow the operator greater ease of use by lowering demands for effortful deliberation. Multisensory displays that allocate information across visual, auditory, and tactile sensory channels have been shown to be effective as a means of reducing operator workload by reducing the size and complexity of visual displays. Similarly, telepresence displays provide easy control of sensors and heightened capabilities to perceive and understand the robot surroundings. While results demonstrate high effectiveness and potential, further research is needed to generate systematic theory-driven principles for multisensory integration with telepresence capabilities.
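A minimal sketch of the head-tracked camera idea is shown below: head yaw and pitch from a tracker are mapped to pan/tilt commands for the robot's camera and clamped to hypothetical gimbal limits. The interface, gain, and limits are assumptions for illustration and do not describe any fielded controller.

```python
# Illustrative sketch: convert head orientation into camera pan/tilt commands.
# The gain and gimbal limits are hypothetical.
from dataclasses import dataclass

@dataclass
class PanTiltCommand:
    pan_deg: float   # positive = camera turns right
    tilt_deg: float  # positive = camera tilts up

def clamp(value, low, high):
    return max(low, min(high, value))

def head_pose_to_camera(yaw_deg, pitch_deg,
                        pan_limits=(-170.0, 170.0),
                        tilt_limits=(-45.0, 60.0),
                        gain=1.0):
    """Map head yaw/pitch (from a hypothetical head tracker) to camera commands."""
    return PanTiltCommand(
        pan_deg=clamp(gain * yaw_deg, *pan_limits),
        tilt_deg=clamp(gain * pitch_deg, *tilt_limits),
    )

# Operator looks 30 degrees left and slightly down; the camera follows.
print(head_pose_to_camera(yaw_deg=-30.0, pitch_deg=-10.0))
```

Because the mapping mirrors the natural head movement a person would make to look at an object, the control action requires little deliberate translation, which is the intuitive-control rationale behind telepresence displays.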
SEMI- OR FULLY-AUTONOMOUS ROBOTS The previous section discussed how operator workload during teleoperation can be alleviated through means of advanced designs in robot controllers and displays. In this section, we discuss a complementary
Robots: The New Teammates • 193 approach to ease operator workload and enhance performance. Semi- and fully-autonomous robots support operators through varying levels of artificial intelligence, thus relieving the operator from tasks ranging from simple (e.g., monitoring, tracking, driving) to complex (e.g., planning, advanced decision making). Maes (1995) proposed a continuum of robot control that ranges from teleoperation to full autonomy. Robots can be placed on this continuum based on the level of human–robot interaction that is required with robotic autonomy being inversely proportional to human control involvement. Teleoperation is the lowest level of automation on the continuum because it requires the most intervention from the robot operator. A teleoperated robot is totally under control of the operator who uses a joystick or other control device to command the robot. This requires constant interaction between the robot and the operator. For example, the da Vinci Robot, which enables surgeons to make tiny incisions and provides greater precision and control during laparoscopic operations, is teleoperated because the surgeon is actively controlling the robotic arm. While teleoperation has proven successful in myriad circumstances, many teleoperation tasks are repetitive and boring and the work requires constant attention on the part of the operator (Chen, Haas, & Barnes, 2007). Between the extremes of teleoperation and full autonomy lies the continuum of semi-autonomous control (often called supervisory control), which requires the operator to provide an instruction or portion of a task that can safely be performed by the robot on its own. Two types of semiautonomous control are often identified—shared control (also called continuous assistance) and control trading. Shared control requires the teleoperator to delegate a task for the robot or to accomplish it via direct control of the robot. If the operator delegates control to the robot, he or she must still monitor the robot to ensure that it is performing the task correctly. A guarded teleoperated robot can be placed on this portion of the continuum because it has the ability to sense and avoid obstacles but will otherwise navigate as instructed by the operator. During control trading, the human only interacts with the robot to give it a new command or to interrupt it and change its orders. A line-following robot can be placed on this portion of the continuum if it simply follows something painted, embedded or placed on the floor and does not have the ability to circumnavigate obstacles on its own. After receiving instructions, autonomous robots (those that perform tasks independently and/or have the capacity to choose goals such as determining whether to turn left or right by themselves) operate under all
194 • Elizabeth S. Redden et al. reasonable conditions without recourse to an outside operator, and can handle unpredictable events (Haselager, 2005). Huang et al. (2005) state that a fully-autonomous robot has the ability to gain information about the environment, work for an extended period without human intervention, move itself through its environment without human assistance, and avoid situations that are harmful to people, property, or itself unless those are part of its design specifications. An autonomously guided robot can be classified as fully-autonomous because it has information about where it is and how to reach waypoints (Murphy, 2000). Knowledge of its current location is determined by using sensors such as lasers and GPS. Positioning systems determine the location and orientation of the platform, so the robots can plan a path to their next set of waypoints or goals. Simply stated, autonomous robots make their own choices instead of following goals set by other agents. The incorporation of autonomous robotic systems into military schema involves more than just effective technical engineering. System designers must understand which autonomous capabilities match or surpass human abilities and when, during scenario accomplishment, the human needs assistance or is overloaded. In fact, the cognitive task analysis techniques developed to optimize human performance are critical to the design of optimal human-machine systems (e.g., Crandall, Klein, & Hoffman, 2006). Robot control competencies and inefficiencies must first be identified and then they must be understood in relation to specific task performance. Trade-offs among levels of autonomy must be identified. For example, an autonomous system may have a slower reaction time to difficult problems than a system being teleoperated by a human, but latency involved in teleoperation communication between the controller and the robot could render the human’s reaction time ineffective. Also, moderators of robot control performance must be identified, such as workload and situational task demands. The requirement for the operator to perform simultaneous tasks such as controlling multiple robots or performing local security tasks could have a detrimental effect on human intervention in robotic task outcomes for reasons such as operator availability and cognitive overload. Many studies have demonstrated that operators’ SA was higher when they were controlling robots with semi-autonomous or autonomous capabilities than when they were controlling them using teleoperation (Chen et al., 2008; Dixon, Wickens, & Chang, 2003; Luck et al., 2006; Prewett et al., 2010). However, increased autonomy is not a panacea. In a two-year study of a collaborative human-robot system, Stubbs, Hinds, and Wettergreen (2007) found that as autonomy increased, users’ inability to
Robots: The New Teammates • 195 understand the reasons for the robot’s actions disrupted the creation of common understanding, decreasing team performance. Additional problems found through research ranged from loss of operator skills when they are needed most, to both over and under reliance on machine intelligence during crises (Chen, Barnes, & Harper-Sciarini, 2011; Parasuraman, Sheridan, & Wickens, 2000). Also, many fear that providing more and more autonomy to armed robots could result in collateral damage or fratricide (Singer, 2009). Pettitt et al. (2010) conducted an experiment to examine the effect of levels of autonomy on robot control in reconnaissance missions during which the operator is fully engaged in additional high cognitive load activities. Thirty warfighters completed reconnaissance exercises using three different levels of robotic automation that were identified as being along the continuum between teleoperation and autonomy. During teleoperation trials, the robot was manually driven using the video feed displayed from the onboard vehicle camera and a joystick gimbal. During semi-autonomous trials, the robot’s obstacle avoidance, mapping, and return-to-start behaviors were enabled, and the robot self-directed itself through the building to the next available open space. The warfighter was able to stop the robot from self-directing itself and take control at any time by pressing a button on the control panel and using the joystick gimbal to fully control the robot’s direction. During the fully-autonomous trials, the robot’s obstacle avoidance, mapping, and return-to-start behaviors were turned on, as well as an exploration behavior that helped the robot selfdirect itself in places to which it had not yet driven, to minimize the overall time to explore the building while maximizing coverage area. Table 9.1 displays the robotic behaviors used for each level of autonomy. Warfighters were instructed that their primary tasks were to provide a map of the floor plan of the building, to identify objects of interest, and to TABLE 9.1 Autonomous behaviors used for each level of autonomy
Mode of operation    Self-Directing   Obstacle Avoidance   Mapping   Return-to-Start   Exploration
Teleoperated         No               No                   No        No                No
Semi-autonomous      Yes              Yes                  Yes       Yes               No
Fully-autonomous     Yes              Yes                  Yes       Yes               Yes
196 • Elizabeth S. Redden et al. return the robot to the starting point as quickly as possible once the entire building had been reconnoitered and mapped. Their secondary tasks were to answer questions concerning details of their mission and to identify items such as IEDs when they appeared in their local areas. Accomplishment of secondary tasks is important as warfighters must multitask (drive the robot while responding to radio requests, looking for enemy, etc.) during robotic operation. Total vehicle reconnaissance times and driving errors were significantly better when the robot was in the autonomous and semi-autonomous modes than when it was in the teleoperation mode. The consensus of the warfighters was that mapping the room, driving the robot, and answering the questions in the teleoperation mode created task demands that were too difficult. On the NASA-TLX workload scale, the warfighters rated teleoperation as creating a higher cognitive and overall workload, higher stress (frustration), more effort, and higher time pressure as compared to the other two modes of operation. This is consistent with Dixon and Wickens’ (2003) suggestion that automation would relieve cognitive overload and with Schipani (2003) who stated that cognitive workload is increased for higher levels of operator involvement. Map accuracy in the teleoperation mode was much poorer than in the autonomous and semiautonomous modes. It was clear that the operators’ mental models of the environment, based upon viewing it through a robotic driving camera, were fairly inaccurate. This is consistent with the findings of Fong, Thorpe, and Baur (2003) who indicated that operators using teleoperation have difficulty building mental maps of remote environments. Results in the literature concerning SA and targets (objects) identified while the robot is moving are mixed concerning whether higher levels of autonomy result in better SA or vice versa. Chen and Joyner (2009) found that participants detected fewer targets when their robot was operating in the semi-autonomous mode rather than the teleoperation mode, due to the low reliability level of the robot as well as the high taskload demand the operators had to deal with concurrently. However, automation seemed to benefit unmanned aerial systems (UAS) pilots’ target detection performance (Dixon et al., 2003). The results from the Pettitt et al. (2010) study were different from both the Chen and Joyner (2009) robotic study and the Dixon et al. (2003) UAS study in that no significant differences were found between the levels of automation. However, if SA is more broadly defined in the Chen and Joyner (2009) study to include the gunner task and the communication task, participants did not have better SA in the teleoperation condition. Thus, the complex
Robots: The New Teammates • 197 nature of SA and how it is defined makes it important to specify the definition of SA when making comparisons between studies. The rapid pace of technology growth will allow robots to perform more and more autonomous tasks in the near future, which theoretically should decrease the operator’s workload, freeing him or her to perform other tasks. It is imperative, however, that autonomous features be thoroughly examined before they are added because not all autonomous features enhance total performance.
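As a rough illustration of how the behavior sets in Table 9.1 separate the three control modes used in the Pettitt et al. (2010) study, the sketch below encodes each mode as a set of behavior flags. The class and function names are shorthand invented for this example rather than software from that study.

```python
# Illustrative encoding of the autonomy levels in Table 9.1 as behavior flags.
# Field names are shorthand for this sketch, not an API from the cited study.

from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyProfile:
    self_directing: bool
    obstacle_avoidance: bool
    mapping: bool
    return_to_start: bool
    exploration: bool

TELEOPERATED = AutonomyProfile(False, False, False, False, False)
SEMI_AUTONOMOUS = AutonomyProfile(True, True, True, True, False)
FULLY_AUTONOMOUS = AutonomyProfile(True, True, True, True, True)

def operator_must_drive(profile: AutonomyProfile) -> bool:
    """Manual driving is required whenever the robot cannot direct itself."""
    return not profile.self_directing

if __name__ == "__main__":
    for name, p in [("teleoperated", TELEOPERATED),
                    ("semi-autonomous", SEMI_AUTONOMOUS),
                    ("fully autonomous", FULLY_AUTONOMOUS)]:
        print(name, "operator drives:", operator_must_drive(p))
```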
SUPERVISORY CONTROL AND INTELLIGENT AGENTS While the previous section discussed autonomous control as a means to reduce operator workload and perhaps enable additional duties while operating one robot, this section will examine the implications of intelligent systems for future military applications and the efficiencies created when one operator is able to supervise multiple systems at once. It is impossible to predict how future systems will be used but we can capture some of the complexity involved by reviewing experimental results from active programs, especially those programs investigating simulated systems emulating future unmanned vehicles (UVs) (Barnes et al., 2006). In particular, we will review research related to applications of intelligent agents, supervisory control, and hybrid systems combining human and machine intelligence (Okamoto, Scerri, & Sycara, 2008; Chen, Barnes, & Harper-Sciarini, 2011).
INTELLIGENT AGENTS The precise definition of an intelligent agent varies among research groups. For example, Russell and Norvig (2003) in their textbook on artificial intelligence emphasized that an intelligent agent is an autonomous entity that operates in its environment and is rational to the extent that it directs its resources to achieving a prescribed objective. This definition covers a wide range of implementations from distributed agents that evince intelligence holistically, to hybrid systems that include agents of varying capabilities and may include human agents. Swarm technologies are a special type of distributed intelligence wherein each component has
198 • Elizabeth S. Redden et al. a limited capacity to respond to its environment, and the intelligence resides within the combined behavior of the group. Swarms are an example of bio-inspired engineering that use techniques that mimic collective behaviors of organisms such as birds, ants, and bees. Another class of intelligent systems involves specialized agents that solve problem sets in a manner that emulates human intelligence. The more sophisticated agent can use algorithms that adjust its behaviors for contingencies. Multi-agent architectures were developed to deal with more complex environments; these multi-layer systems use teams of specialized agents to solve more general problems such as controlling UVs in real world situations. The more intelligent agents are located at the apex of the hierarchy soliciting information and directing lower level agents that perform specific tasks (e.g., control one type of sensor). The multiagent technology has proven to be successful in a number of applications. For example, Carnegie Mellon University conducted simulation experiments indicating that agents operating multiple UASs could locate emitters using trade–off algorithms that maximize triangulation accuracy, minimize number of messages (communications), and minimize flight paths among the set of UASs (Scerri et al., 2008). Human–Agent Teaming Similar to the tradeoffs that must be undertaken when developing the appropriate levels of autonomy for various robotic tasks, the crucial issue underlying human–agent teaming will be the prospective roles of the team members. On a superficial level, there are human processing capabilities such as pattern-recognition that may be better assigned to the human, whereas other tasks related to computational speed may be more effectively assigned to agents. At a deeper level, human understanding is more varied and more intuitive than the type of algorithmic or rulebased understanding underlying agents (Klein, 1998; Searle, 1980; Zsambok & Klein, 1997). Even when human-like processes are mimicked using neural net modeling, agents will have a limited ability to react to environmental or goal changes. This is partly due to the huge number of human neural interconnections (Damasio, 2010) that are available to address past experiences and their contextual cues and partly due to human moral and empathic understanding permitting humans to deal with ambiguous real world problems especially those having ethical or political ramifications (Barnes & Evans, 2010; Damasio, 1995). Thus, humans, particularly experts or experienced operators, will have a more global understanding of the real
Robots: The New Teammates • 199 world environment and its implications, whereas agents may be better able to solve complicated technical problems related to specific mission objectives. In the future, undoubtedly, there will be situations where robots will have the capacity to operate autonomously. However, this is exactly the eventuality that will require human supervision in order to ensure safety, maintain moral integrity, and to react to unforeseen events (Barnes & Evans, 2010; Chen, Barnes, & Harper-Sciarini, 2011). RoboLeader: A Hybrid System RoboLeader was designed by researchers at the U.S. Army Research Laboratory (ARL) and the University of Central Florida (Chen, Barnes, & Qu, 2010) as a research tool to understand the limitations and advantages of combined human–agent teams for controlling multiple robots. The underlying rationale for the research is attentional demand theory that posits diminishing residual cognitive capacity as workload increases. Barnes, Parasuraman, & Cosenzo (2006) argued, based on previous research, that the relationship would be an inverted u-shaped curve with sub-optimal performance at both ends of the activation spectrum. With too little activation, humans are not sufficiently engaged in the task and show complacency effects. With too much activation, humans have too little spare cognitive capacity to attend to the task. The experimental objective of the RoboLeader experiments was to determine the efficacy of the RoboLeader agent to act as an intermediary between the operator and multiple unmanned systems without overloading the operator or diminishing situation awareness (Chen, Barnes, & Qu, 2010). The RoboLeader scenario consisted of an operator controlling from four to eight simulated robots conducting an urban reconnaissance mission in order to find routes for the robots to cover as much of the simulated terrain as possible while searching for potential targets. These simulations were conducted under two conditions: a) robots were controlled by an unassisted human (manual condition); or b) robots were controlled with the assistance of a virtual agent (RoboLeader condition). The simulations also required participants to perform secondary tasks related to equipment monitoring and radio communications while responding to situation awareness probes. In the final experiment, the simulated robots were engaged in the more demanding task of entrapping a moving vehicle by cutting off the vehicle’s escape route in the urban terrain. In all cases, RoboLeader suggested plans for the individual robots but always allowed the human to make the final decision.
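The inverted u-shaped relationship described by Barnes, Parasuraman, and Cosenzo (2006) can be pictured with a toy function such as the one below. The quadratic form and the numerical values are assumptions made purely for illustration and are not fitted to any of the data discussed here.

```python
# Toy illustration of an inverted u-shaped relationship between activation
# (workload) and expected performance. Purely illustrative numbers.

def expected_performance(activation: float, optimum: float = 0.5) -> float:
    """Peak performance at moderate activation; falls off at both extremes.

    activation is scaled 0.0 (disengaged, complacency-prone) to 1.0 (overloaded).
    """
    return max(0.0, 1.0 - 4.0 * (activation - optimum) ** 2)

if __name__ == "__main__":
    for a in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"activation={a:.1f} -> performance={expected_performance(a):.2f}")
```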
200 • Elizabeth S. Redden et al. The first experiment (Chen, Barnes, & Qu, 2010) varied the number of robots requiring supervision (four or eight) and the role of RoboLeader (present or absent). The task was to supervise robots for a reconnaissance mission during which the human supervisor had to identify potential targets remotely from the robot’s video. The robot’s progress was interrupted occasionally by intelligence reports that required the robots to find new paths in order to continue the mission. There was no difference in reconnaissance effectiveness comparing human operator only conditions to conditions wherein RoboLeader suggested a new route. However, there was a 13 percent decrease in time to complete the mission when the operator was assisted by RoboLeader. Not surprisingly, conditions with eight robots were more difficult than four robot conditions. The second RoboLeader experiment investigated the effects of imperfect automation during multitasking robotic mission segments. False alarms had different and less deleterious effects in this experiment than has been shown in previous research (Chen & Terrence, 2009; see also Meyer, 2001). Participants performed better in false cue conditions because the map display was readily available for rapid verification; however, RoboLeader omissions (misses) could not be as easily checked, forcing operators to continually scan the map and causing them to miss targets on the robot’s videos (Chen, Barnes, & Kenny, 2011). Individual differences proved to be important as the data showed that those with low attentional control evidently over relied on the miss-prone RoboLeader. The research taken together paints a complicated picture indicating that type of errors, individual differences, and interface design must be all taken into consideration when setting sensitivity levels for false alarms and misses for automation that is not perfectly reliable (Wickens, Levinthal, & Rice, 2010). The third study showed the advantage of using an intelligent agent during more difficult taskings (Chen et al., 2011). The agent aided the operator by computing the best route for each of the four robots to entrap moving targets. Results showed that RoboLeader was more effective in encapsulating the moving targets than were the human operators. It appears that even semi-autonomous assistance of RoboLeader was sufficiently beneficial for the encapsulation task. This finding is consistent with prior research findings that sharing decisions with automated systems relating to targeting with ground and aerial unmanned systems improves both situation awareness and actual performance most likely because it keeps humans in the decision loop (Parasuraman, Barnes & Cosenzo, 2007). Also, it is important to note that frequent video gamers demonstrated
Robots: The New Teammates • 201 significantly better encapsulation performance than did infrequent gamers; they also had better situation awareness of the mission environment (Chen et al., 2011). Based on the above human–robot interaction research, we established preliminary ground rules for using intelligent agents as intermediate supervisors: •
Especially in combat environments, the agent should be subservient to the human operator; only in the most time-constrained situations should agents make important decisions without consulting the human supervisor. In these cases, authority needs to be prescribed prior to the mission.
• For simple tasks, intelligent agents may have minimal impact but still may be useful for situations when workload is uneven, thus permitting the operator to attend to more crucial tasks during high-stress mission segments.
• Agents that share problem-solving responsibility with humans are preferred over fully-autonomous agents for two reasons. First, greater synergy among human–agent teams allows human team members to contribute unique human capabilities to the mix. Second, an active partnership improves the human's ability to overcome complacency effects and to maintain situation awareness.
• Interface improvements can mitigate the effects of false alarms. If false alarms are easily checked, then reliance on the agents' suggestions will increase because of the low cost of verification, whereas agent omissions may cause the operator to continually check raw data at the expense of other tasks.
• Individual differences among the human team members contributed to the success or failure of the partnership. Specifically, human confidence levels, spatial abilities, and gaming experience proved to be important determinants of human–agent teaming performance.
In summary, we discussed the problems related to supervisory control of aerial and ground robots. In particular, we addressed the many robots-to-one operator paradigm because of the complexity inherent in future environments. We concluded that autonomy by itself raised as many problems as it solved. We suggested using intelligent agents to monitor and to suggest courses of action for multiple robots with the caveat that ultimate authority must always reside with humans. We reported
202 • Elizabeth S. Redden et al. experimental results of one such paradigm (RoboLeader) to illuminate issues involved in human/agent teams. Based on these and other results, we suggested guidelines for using intelligent agents as intermediaries between human operators and multiple robotic systems.
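A minimal sketch of this "suggest, but let the human decide" arrangement appears below. Because the RoboLeader software itself is not reproduced here, the planner, the message format, and the approval callback are hypothetical stand-ins.

```python
# Minimal sketch of an intermediary agent that proposes plans for several
# robots and defers the final decision to a human supervisor. All names are
# hypothetical; this is not the RoboLeader implementation.

from typing import Callable, Dict, List

def plan_route(robot_id: str, event: str) -> List[str]:
    # Stand-in planner: a real system would re-plan around the reported event.
    return [f"{robot_id}: reroute around {event}", f"{robot_id}: resume reconnaissance"]

def supervise(robots: List[str], event: str,
              approve: Callable[[str, List[str]], bool]) -> Dict[str, List[str]]:
    """Collect agent suggestions and execute only those the human approves."""
    accepted: Dict[str, List[str]] = {}
    for robot in robots:
        suggestion = plan_route(robot, event)
        if approve(robot, suggestion):   # the human retains final authority
            accepted[robot] = suggestion
    return accepted

if __name__ == "__main__":
    auto_ok = lambda robot, plan: True   # replace with real operator input
    print(supervise(["UGV-1", "UGV-2"], "blocked street", auto_ok))
```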
DISCUSSION Incorporation of robots into the military to remove warfighters from harm’s way makes sense. However, as robots are becoming more capable, it is clear that there are also many other benefits of including robots in the military and these benefits can potentially be realized in the civilian workplace. Examination of robotic design approaches for future military environments foreshadows many design approaches that will be applicable to civilian applications. For example, it is not too difficult to envision future highway systems with a mixture of autonomous commercial vehicles, private semi-autonomous automobiles, unmanned aerial sentinels redirecting traffic, and humans trying to react to and prevent accidents and gridlock during a typical Los Angeles’ rush hour. Something similar is being proposed by the Federal Aviation Administration for future air traffic control (Willems & Hah, 2008). It is also not too difficult to envision multiple robots in office environments performing routine tasks under a supervisor or a robot performing more physical tasks while its human partner makes decisions about construction work. Results to date are very promising for all three of the approaches to robotic design that we discussed. Multisensory and telepresence emerging capabilities have enhanced performance, reduced workload, and gained operator appreciation. The goal is indeed to develop systems that are “eyesfree, hands-free, and mind-free.” There is no doubt that multisensory and telepresence systems bring us closer to this goal; however, improvements are needed. Further study is needed to improve display capabilities and also to understand the effects of multisensory displays on sustained operations, conditions of fatigue, and conditions of stress. For the telepresence system we examined, there are a few design changes that need to be made for optimal performance. Issues with user discomfort due to motion sickness and also discomfort from the weight and bulk of the display system continue to exist. Soldiers suggested that the 3-D audio could be further improved and additional capabilities should be added, such as a rear vision camera, a more effectively placed probe, crash/collision
Robots: The New Teammates • 203 sensors, greater field of view, and camera zoom capabilities. They also noted that the system would have to be integrated with a combat helmet to be effective in operations. Many civilian applications for telepresence capabilities have been developed and demonstrated. Telepresence technologies have been instantiated in medical surgical operations (Alexander & Maciunas, 1999; Schostek, Schurr, & Buess, 2009; Haidegger & Benyo, 2008; Sankaranarayanan et al., 2007) and for remote patient care (Hu, 2008; Schmidt & Holmes, 2007). They have also been demonstrated in a broad variety of settings, for purposes as diverse as service to the public (Gong, 2008), construction (Sasaki & Kawashima, 2008), ocean exploration (Manley, 2008; Martinez & Keener-Chavis, 2006), NASA team member support (Goza et al., 2004), museum learning tools (All & Nourbakhsh, 2001), and space operations (Foing & Ehrenfreund, 2008; Landis, 2008). Semi- and fully-autonomous system teams provide the capability to free humans for jobs that they do best while increasing work output. However, it is important to effectively allocate tasks between the humans and their robotic team members. Effective task distribution is not static. For example, when a human team member is overloaded with outside jobs, the robotic team member might take over a task that is best handled by a human. Once the workload becomes manageable, it might be more efficient for the human team member to take over. Like teleoperated robots, semi- and fully-autonomous robots have issues that need to be addressed. One such issue is social acceptance of autonomous technologies. If robots are truly going to become team members, humans must accept them, communicate effectively with them, develop shared mental models with them and last but not least, trust them. As robots become more autonomous it is critical to understand this and other issues that may affect the human-robot teams in order to fully exploit both human and technical capabilities. Autonomous robots are quickly becoming a reality. For example, the University of Pennsylvania’s flying robotic platform, the “Pelican,” perfectly navigated a course on its own, going under, over and around obstacles. The research is part of ARL’s Micro Autonomous Systems Technologies (MAST) Collaborative Technology Alliance (CTA). The Pelican is equipped with sensing technologies, a laser camera, and a light-weight, low-power computer to interpret what the robot “sees” and “feels” and it is able to fly both in and outdoors—a feat that takes very complex computing. It determines its own route by taking into account the environment, what it needs to do to fly, and even its battery power.
204 • Elizabeth S. Redden et al. The reported examples for supervisory control were mainly related to the military. However, our contention is that the approaches we addressed are similar to approaches for complex civilian applications such as future vehicular and air traffic control environments. We focused on intelligent agents because as complexity increases, the number of entities that humans can supervise will quickly surpass both economic and safety limits (Lewis & Wang 2010). However, intelligent agents by themselves will not overcome the associated risks (Parasuraman & Riley, 1997). It will be essential to create architectures with redundancy and software that acts as a safety valve for agent systems during unexpected events. But in the final analysis it will be the human supervisor who will be responsible for system safety as well as effectiveness. This is not only because of the human’s greater adaptability but also the human’s moral sense and global understanding (Barnes & Evans, 2010; Searle, 1980).
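As a loose illustration of the safety valve idea, the sketch below wraps agent-issued commands in a check that withholds anything outside preset limits and flags it for the human supervisor. The command fields and the limits are invented for the example.

```python
# Loose illustration of a "safety valve" around agent-issued commands: commands
# outside preset limits are blocked and escalated to the human supervisor.
# Limits and field names are invented for this sketch.

from dataclasses import dataclass
from typing import List

@dataclass
class Command:
    robot_id: str
    speed_mps: float
    waypoint: tuple  # (x, y) in metres from the start point

MAX_SPEED_MPS = 3.0
OPERATING_RADIUS_M = 500.0

def safety_valve(commands: List[Command]) -> List[Command]:
    approved, escalated = [], []
    for cmd in commands:
        x, y = cmd.waypoint
        in_bounds = (x * x + y * y) ** 0.5 <= OPERATING_RADIUS_M
        if cmd.speed_mps <= MAX_SPEED_MPS and in_bounds:
            approved.append(cmd)
        else:
            escalated.append(cmd)   # held for human review
    if escalated:
        print(f"{len(escalated)} command(s) held for the human supervisor")
    return approved

if __name__ == "__main__":
    cmds = [Command("UGV-1", 2.0, (100.0, 50.0)), Command("UGV-2", 6.5, (900.0, 0.0))]
    print(safety_valve(cmds))
```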
REFERENCES Alexander, E., & Maciunas, R. (1999). Advanced neurosurgical navigation. New York: Thieme Publishing. All, S., & Nourbakhsh, I. (2001). Insect telepresence: Using robotic tele-embodiment to bring insects face to face with humans. Autonomous Robots, 10(2), 149–161. Axe, D. (2008). War Bots: How U.S. military robots are transforming war in Iraq. AnnArbor, MI: Nimble Books LLC. www.nimblebooks.com/ Barnes, M. J., & Evans A. W. (2010). Soldier-Robot Teams in future battlefields: An overview. In M. Barnes, & F. Jentsch (Eds.), Human-Robot Interactions in Future Military Operations (pp. 9–29). Hampshire, UK: Ashgate. Barnes, M., Cosenzo, K., Jentsch, F., Chen, J. Y. C., & McDermott, P. (2006). The use of virtual media for military applications. In Proceedings of the NATO HFM-136 workshop, 13–15 June 2006. West Point, NY. Barnes, M. J., Parasuraman, R., & Cosenzo, K. A. (2006). Adaptive automation for military robotic systems. In NATO Technical Report RTO-TR-HFM-078, Uninhabited Military Vehicles: Human Factors Issues in Augmenting the Force (pp. 420–440). Chen, J. Y. C., & Joyner, C. T. (2009). Concurrent performance of gunner’s and robotics operator’s tasks in a multi-tasking environment. Military Psychology, 21(1), 98–113. Chen, J. Y. C., & Terrence, P. (2009). Effects of imperfect automation on concurrent performance of military and robotics tasks in a simulated multi-tasking environment. Ergonomics, 52, 907–920. Chen, J. Y. C., Barnes, M. J., & Harper-Sciarini, M. (2011). Supervisory control of multiple robots: Human performance issues and user interface design. IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, 41(4), 435–451. Chen J. Y. C, Barnes M. J., & Kenny, C. (2011). Effects of unreliable automation and individual differences on supervisory control of multiple ground robots. In Proceedings of the 6th ACM/IEEE International Conference on Human Robot Interaction, 6–9 March 2011 (pp. 371–378), Lausanne, Switzerland.
Robots: The New Teammates • 205 Chen, J. Y. C., Barnes, M. J., & Qu, Z. (2010). RoboLeader: A surrogate for enhancing the human control of a team of robots (Technical Report No. ARL-MR-0735). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate. Chen, J. Y. C., Barnes, M. J., Quinn, S. A., & Plew, W. (2011). Effectiveness of RoboLeader for dynamic re-tasking in an urban environment. In Proceedings of Human Factors and Ergonomics Society 55th Annual Meeting, 19–23 Sep. 2011 (pp. 1501–1505), Las Vegas, NV. Chen, J. Y. C., Durlach, P. J., Sloan, J. A., & Bowens, L. D. (2008). Human-robot interaction in the context of simulated route reconnaissance missions. Military Psychology, 20, 135–149. Chen, J. Y. C., Haas, E., & Barnes, M. (2007). Human performance issues and user interface design for teleoperated robots. IEEE Transactions on Systems, Man, and Cybernetics— Part C: Applications and Reviews, 36(6), 1063–1076. Coovert, M. D., Prewett, M. S., Saboe, K. N., & Johnson, R. C. (2008). Development of principles for multimodal displays in army human-robot operations (Technical Report No. ARL-CR-651). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate. Crandall, B., Klein, G., & Hoffman, R. (2006). Working minds: A practitioner’s guide to cognitive task analysis. Cambridge, MA: MIT Press. Damasio, A. (1995). Descartes error: Emotion, reason and the human brain. New York: Quill. Damasio, A. (2010). Self comes to mind: Constructing the conscious mind. New York: Pantheon Books. Dixon, S. R., & Wickens, C. D. (2003). Control of multiple-UAVs: A workload analysis. In Proceedings of the 12th International Symposium on Aviation Psychology, 14–17 April, Dayton, OH. Dixon, S. R., Wickens, C. D., & Chang, D. 2003. Comparing quantitative model predictions to experimental data in multiple-UAV fight control. In Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting, 13–17 October (pp. 104–108), Santa Monica, CA. Ekstrom, R. B., French, J. W., Harman, H. H., & Dermen, D. (1976). Manual for kit of factorreferenced cognitive tests. Princeton, NJ: Educational Testing Service. Elliott, L. R., Coovert, M. D., & Redden, E. S. (2009). Overview of meta-analysis investigating vibrotactile versus visual display options. In Proceeding of the 13th International Conference on Human-Computer Interaction, 19–24 July (435–443), San Diego, CA. Elliott, L., Jansen, C., Redden, E., & Pettitt, R. (2012). Robotic telepresence: Perception, performance, and user experience (Technical Report No. 5928). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate. Elliott, L., van Erp, J., Redden, E., & Duistermaat, M. (2010). Field-based validation of tactile display for dismount soldiers. IEEE Transactions on Haptics, 3(2), 78–87, doi:10.1109/TOH.2010.3 Foing, B., & Ehrenfreund, P. (2008). Journey to the Moon: Recent results, science, future robotic and human exploration. Advances in Space Research, 42(2), 235–237. Fong, T., Thorpe, C., & Baur, C. (2003). Multi-robot remote driving with collaborative control. IEEE Transactions on Industrial Electronics, 50(4), 699–704. Gong, L. (2008). How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Computers in Human Behavior, 24(4), 1494–1509.
206 • Elizabeth S. Redden et al. Goza, S., Ambrose, R., Diftler, M., Spain, I. (2004). Telepresence control of the NASA/ DARPA Robonaut on a mobility platform. In Proceedings of ACM International Conference on Human Factors in Computing Systems, 6, 24–29 April (pp. 623–629), Vienna, Austria. Haidegger, T., & Benyo, Z. (2008). Surgical robotic support for long duration space missions. Acta Astronautica, 63(7–10), 996–1005. Halme, A., Suomela, J., & Savela, M. (1999). Applying telepresence and augmented reality to teleoperate field robots. Robotics and Autonomous Systems, 26(2–3), 117–125. Haselager, W. F. G. (2005). Robotics, philosophy and the problems of autonomy. Pragmatiacs and Cognition, 13(3), 515–532. Hu, J. (2008). An advanced medical robotic system augmenting healthcare capabilities (Report No. ADB340156). Boxborough, MA: HSTAR Technologies. Huang, H., Pavek, K., Albus, J., & Messina, E. (2005). Autonomy levels for unmanned systems (ALFUS) framework: An update. In Proceedings of the 2005 SPIE Defense and Security Symposium, 27 May (pp. 439–448), Orlando, FL. Jansen, C., van Breda, L., & Elliott, L. (2012). Remote auditory target detection using an unmanned vehicle – comparison between a Telepresence headtracking 3D audio setup and a joystick-controlled system with a directional microphone. NATO Technical Report No. RTO-TR-HFM-170. Klein, G. (1998) Sources of power: How people make decisions. Cambridge, MA: MIT Press. Klein, G. (2008). Naturalistic decision making. Human Factors, 50, 456–460. Landis, G. (2008). Teleoperation from Mars orbit: A proposal for human exploration. Acta Astronautica, 62(1), 59–65. Lewis, M., & Wang, J. (2010). Coordination and automation for controlling robot teams. In M. Barnes & F. Jentsch (Eds.), Human-Robot Interactions in Future Military Operations, (pp. 397–418). Burlington, VT: Ashgate. Luck, J. P., McDermott, P. L., Allender, L., & Russell, D. C. (2006). An investigation of real world control of robotic assets under communication latency. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 2–3 March (pp. 202–209), Salt Lake City, UT. Maes, P. (1995). Artificial life meets entertainment: Life like autonomous agents. Communications of the ACM, 38(11), 108–114. Manley, J. (2008). New tools for ocean exploration, equipping the NOAA Ship Okeanos Explorer (Report No. DTIC ADA502120). Duxbury, MA: Battelle Applied Coastal and Environmental Services. Martinez, C., & Keener-Chavis, P. (2006). NOAA Ship Okeanos Explorer: Telepresence in the service of science, education and outreach (Report No. DTIC ADA500905). Narrangansett, RI: National Oceanic and Atmospheric Administration. Meyer, J. (2001). Effects of warning validity and proximity on responses to warnings. Human Factors, 43, 563–72. Mitchell, D. K. (2005). Soldier workload analysis of the Mounted Combat System (MCS) platoon’s use of unmanned assets (Technical Report No. ARL-TR-3476). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate. Mitchell, D. K. (2008). Predicted impact of an Autonomous Navigation System (ANS) and Crew-Aided Behaviors (CABS) on soldier workload and performance (Technical Report No. ARL-TR-4342). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate.
Robots: The New Teammates • 207 Mitchell, D. K. (2009). Workload analysis of the crew of the Abrams V2 SEP: Phase I baseline IMPRINT model (Technical Report No. ARL-TR-5028). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate. Murphy, R. R. (2000). Introduction to AI Robotics. Cambridge, MA: MIT Press. Okamoto, S., Scerri, P., & Sycara, K. (2008). The impact of vertical specialization on hierarchical multi-agent systems. In Proceedings of AAAI, 13–17 July (pp. 138–143), Chicago, Il. Overland, J. (2005). Enhanced Acoustic Remote Sentry (EARS) (Report No. AFRL-HE-WPTR-2005–0086; DTIC ADB312981). Wright-Patterson AFB, OH: Air Force Research Laboratory. Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 38, 665–79. Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man & Cybernetics, 30, 286–297. ISSN 0018–9472. Parasuraman, R., Barnes, M., Cosenzo, K. (2007). Adaptive automation for human-robot teaming in future command and control systems. The International Command and Control Journal, 1(2), 43–68. Pettitt, R., Redden, E., Pacis, E., & Carstens, C. (2010). Scalability of robotic controllers: Effects of progressive levels of autonomy on robotic reconnaissance tasks (Technical Report No. ARL-TR-5258). Aberdeen Proving Ground, MD: U.S. Army Research Laboratory, Human Research and Engineering Directorate. Prewett, M. S., Johnson, R. C., Saboe, K. N., Coovert, M. D., & Elliott, L. (2010). Managing workload in human-robot interaction: A review of empirical studies. Computers in Human Behavior, 26(5), 840–856. Prewett, M. S., Elliott, L. R., Walvoord, A., & Coovert, M. D. (2012). A meta-analysis of vibrotactile and visual information displays for improving task performance. Manuscript in review for IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42(1), 123–132. Redden, E., Elliott, L., Pettitt, R., & Carstens, C. (2009). A tactile option to reduce robot controller size. Journal of Multimodal User Interfaces. Springer Berlin/Heidelberg. www.springerlink.com/content/r735q86146218446/. Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd edn). Upper Saddle River, NJ: Prentice Hall. Sankaranarayanan, G., King, H., Ko, S., Mitchell, J., Friedman, D., Rosen, J., & Hannaford, B. (2007). Portable surgery master station for mobile robotic telesurgery (Report No. DTIC ADA490371). Seattle, WA: Washington University, Seattle Biorobotics Laboratory. Sasaki, T., & Kawashima, K. (2008). Remote control of backhoe at construction site with a pneumatic robot system. Automation in Construction, 17(8), 907–914. Scerri, P., Von Gonten, T., Fudge, G., Owens, S., & Sycara, K. (2008) Transitioning multiagent technology to UAV applications. In Proceedings of the 7th Annual AAMAS Conference, 12–16 May (pp. 89–96), Estoril, Portugal. Schipani, S. P. (2003). An evaluation of operator workload during partially-autonomous vehicle operations. In Proceedings of Performance Metrics for Intelligent Systems (PerMIS) 2003, 16–18 September, retrieved 23 February 2004: www.isd.mel.nist. gov/research_areas/research_engineering/Performance_Metrics/PerMIS_2003/ Proceedings/Schipani.pdf (accessed 2010).
208 • Elizabeth S. Redden et al. Schmidt, J., & Holmes, E. (2007). Enhancing technologies to improve telemedicine and surgical technology (Report No. XA-USAMRMC; DTIC Report No. ADB328085). Fort Detrick, MA: U.S. Army Medical Research and Materiel Command. Schostek, S., Schurr, M., & Buess, G. (2009). Review on aspects of artificial tactile feedback in laparoscopic surgery. Medical Engineering & Physics, 31(8), 887–898. Schraagen, J. M., Militello, L., Ormerod, T., & Lipshitz, R. (2008). Naturalistic decision making and macrocognition. London: Ashgate. Searle, J. R (1980) Minds, brains and programs. Reprinted in John Haugeland (Ed.), Mind design II. (1997). Cambridge, MA: MIT Press. Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. New York: The Penguin Press. Stubbs, K., Hinds, P. J., & Wettergreen, D. (2007). Autonomy and common ground in human-robot interaction: A field study. IEEE Intelligent Systems, 22(2), 42–50, doi:10.1109/MIS. 21 van Erp, J. B. F. (2007). Tactile displays for navigation and orientation: Perception and behavior. Leiden, The Netherlands: Mostert & Van Onderen. van Erp, J. B. F., Duistermaat, M., Jansen, C., Groen, E., & Hoedemaeker, M. (2006). Telepresence: Bringing the operator back in the loop. In Virtual Media for Military Applications (pp. 9–1–9–18). Meeting Proceedings RTO-MP-HFM-136, Paper 9. Neuilly-sur-Seine, France: RTO. Available from: www.rto.nato.int/abstracts.asp. Wickens, C. D. (2002). Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2), 159–177. Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50(3), 449–454. Wickens, C. D., Levinthal, B., & Rice, S. (2010). Imperfect reliability in unmanned aerial vehicle supervision and control. In M. J. Barnes & F. Jentsch (Eds.), Human-Robot Interactions in Future Military Operations (pp. 397–418). Farnham, Surrey, UK: Ashgate. Willems, B. F., & Hah, S. (2008). Future En Route Workstation Study: Part 1-Automation Integration Research DOT/FAA/TC-08/14. Yamauchi, B., & Massey, K. (2008). Stingray: High-speed teleoperation of UGVs in urban terrain using driver-assist behaviors and immersive telepresence. In Proceedings of the 26th Army Science Conference, 1–4 December, Orlando, FL (DTIC Report No. ADM002187). Zsambok, C., & Klein, G. (1997). Naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum Associates.
10
Workplace Monitoring and Surveillance Research since “1984”: A Review and Agenda
Bradley J. Alge and S. Duane Hansen
ABSTRACT Although monitoring or the observation of performance against some standard has long been recognized as a critical function of managers, the last quarter century has seen an explosion in the use of technologies as a means through which organizations can monitor their employees. Much progress has been made in our understanding of how such technologically driven approaches to monitoring have impacted workers in areas such as stress, task performance speed and quality, psychological control, security, fairness, and privacy. Given this accumulation of knowledge, the time is ripe for a review and integration of this body of work. In this chapter, we review this literature with an eye on the future as we chart a direction for future research.
INTRODUCTION Organizations have long been concerned with implementing organizational controls targeted at employees to ensure that employee behaviors coincide with the interests of the organization (e.g., Alge, Greenberg, & Brinsfield, 2006; Barnard, 1938; Lawler, 1976; Tenbrunsel & Messick, 1999). A critical component of control that helps ensure this alignment is organizational monitoring. Although monitoring has been studied as part of systems of control since the industrial revolution, it is fair to ask, what is it about
210 • Bradley J. Alge and S. Duane Hansen monitoring that has changed in the last 25 years or so, specifically since 1984, that warrants the accelerated level of inquiry we have seen in monitoring research? The answer lies in the means through which monitoring and control of employees can occur. The focus of this chapter is to review research on monitoring and surveillance since 1984, with a particular emphasis on electronic monitoring. We chose the year 1984 for both symbolic and practical reasons. Symbolically, it was Orwell’s book, 1984, which raised societal conscience toward monitoring, but more broadly, made society aware of the level of control that organizations can enact. Also, 1984 saw one of the more famous television commercials in Super Bowl history, by Apple Computer, depict a menacing “Big Blue” (IBM Corporation) as an Orwellian-like “Big Brother” control machine, leading the masses (i.e., consumers) to an almost mindless allegiance, with Apple, of course, serving as the liberator (see Friedman, 2005). Notes Friedman: The ad garnered millions of dollars worth of free publicity, as news programs rebroadcast it that night. It was quickly hailed by many in the advertising industry as a masterwork. Advertising Age named it the 1980s Commercial of the Decade, and it continues to rank high on lists of the most influential commercials of all time. (http://tedfriedman.com/electric-dreams/ chapter-5-apples-1984/)
Practically, it was in the mid-80s that we saw the use of more sophisticated technologies being employed to monitor employees. Indeed, a pivotal 1987 U.S. Congress, Office of Technology Assessment (USOTA) report entitled ‘The Electronic Supervisor: New Technologies, New Tensions’ recognized the growing trend in sophisticated forms of electronic monitoring, and scholarly research quickly followed suit (e.g., Zuboff, 1988). The USOTA report was the culmination of work by an interdisciplinary group of scholars and policy-makers, including noted privacy expert, Alan Westin (cf., Westin, 1967; 1992) and electronic monitoring researcher, Michael J. Smith (cf., Smith et al., 1992; Smith, Carayon, & Miezio, 1986). This report proved to be a catalyst for increasing research on electronic monitoring. In the pages that follow, we define workplace monitoring, including electronic monitoring, review monitoring research since 1984 with a particular focus on technologically-based forms of electronic monitoring in organizations, and conclude with an agenda for future electronic monitoring research.
Workplace Monitoring and Surveillance since “1984” • 211
DEFINITION AND TYPES OF ELECTRONIC MONITORING The terms “monitoring” and “surveillance” are treated synonymously, and are interchangeable throughout this chapter. Monitoring is a central component to any system of control (Alge et al., 2006; Green & Welsh, 1988; Tenbrunsel & Messick, 1999). Here, we define monitoring, consistent with prior research, as the systems, people, and processes used to collect, store, analyze, and report the actions or performance of individuals or groups on the job (Alge, 2001; Ball, 2010; Nebeker & Tatum, 1993). Although traditional supervision often entails management-by-walking around or physically observing employee behaviors and outputs, our primary focus in the present chapter is on the organizational use of technologies to perform monitoring, referred to here as electronic monitoring and surveillance systems (e.g., Riedy & Wen, 2010). These include any technology that aids in the monitoring of employees. Monitoring does not require technology to have a controlling effect. For example, management-by-walking around is a low-tech form of monitoring often associated with traditional supervision. However, it appears that when technology is introduced as an instrument to support monitoring, the nature of monitoring may change and how monitoring is perceived may change as well. As we have documented in our earlier work (Alge & Hansen, 2008), according to a 2005 American Management Association survey on electronic monitoring, we see a rising trend in the use of sophisticated technologies to monitor and control employees. Consider: Web and Internet Monitoring • • • • • •
• 65 percent of employers block certain websites—a 27 percent increase from 2001.
• 76 percent of employers monitor employee web surfing.
• 26 percent of employers have fired employees for inappropriate use of the Web or e-mail.
• 36 percent of employers track computer key-strokes.
• 50 percent of employers regularly review total computer content—this is up from 36 percent in 2001.
• 55 percent of employers retain employee e-mails and review them regularly—this is up from 8 percent in 2001.
Telephone Monitoring
• 57 percent of employers now block certain lines on their employees’ phones.
• 51 percent of employers now keep track of how long their employees talk on the phone, and about half of these tape and review employee voice mail—this is up from about 12 percent in 2001.
• 6 percent of employers have fired employees for phone misuse.
Video Monitoring
• More than 50 percent of employers video monitor their employees (up from 33 percent in 2001).
• 10 percent of employers video monitor for performance purposes.
• 6 percent videotape all of their employees.
Global Positioning Systems (GPS) Monitoring
• 8 percent of employers use GPS to track employee ID cards.
• 8 percent of employers use GPS technology to track employer-owned cars.
Other estimates of organizations that monitor employee internet use indicate as many as 14 million American workers and 27 million workers globally have their internet activities monitored while at work (Alder et al., 2008; Firoz, Taghi, & Souckova, 2006). In particular, we are seeing more firms monitor employee locations using GPS (global positioning technology) and monitoring employee behavior outside of work and this can affect employee perceptions such as their perceived control (e.g., McNall & Stanton, 2011). For example, companies now have policies on the use of personal blogs and social media (e.g., Facebook). These policies pose new challenges for organizations (e.g., employee grievances, potential legal claims) because unlike prior monitoring policies, these newer policies are beginning to extend into the private lives of their employees (i.e., what they do outside of work hours). In the past, courts have generally protected organizations that have chosen to electronically monitor, arguing that because monitoring takes place during work, using organizational assets (i.e., corporate networks, electronic mail, etc.), it is acceptable to monitor
Workplace Monitoring and Surveillance since “1984” • 213 employees. An in-depth review of the laws surrounding organizational use of electronic monitoring is outside the scope of the current chapter, but we refer our readers to Kidwell and Sprague (2009) as a starting point for understanding the legal ramifications. But, imagine electronic monitoring taken to the extreme. In discussions we have had with a provider of location-based software services, we have learned that the following scenario is within the realm of possibility. Consider the following hypothetical example of electronic monitoring: Tom, Jennifer, and Mark work for an agriculture supply company, AgriSeed, as field sales agents and report to Eddie, the Midwest General Manager. Each salesperson is primarily responsible for sales and service to clients in one of three border states: Tom (Indiana), Jennifer (Ohio), and Mark (Michigan). All have been issued a company smart-phone with location-based services, equipped with software and global positioning systems (GPS) that allow managers from any location worldwide to track employee locations.
Tracking can be an important business function; however, employee reactions to the specific application of technologies for tracking can range from relatively benign to quite aversive. For example, Eddie may wish to examine salesperson time spent within each region of each state to ensure that equal time and attention is being devoted to key markets throughout each salesperson’s territory. Knowledge of a salesperson’s location can also be used to provide the salesperson accurate directions to a potential new client, ensuring timely customer service. Both of these examples might seem quite reasonable to the salespersons involved. In perhaps more extreme examples, however, managers can create geographic, invisible fences using monitoring software to bound a salesperson’s territory and receive alerts when a salesperson physically moves beyond his or her territory (e.g., if Mark were to venture into Tom’s territory, Eddie could receive an e-mail alert indicating Mark’s breach into Tom’s territory). Or consider this, suppose Eddie observes that both Mark and Jennifer are pinpointed at the same location for several hours, at a hotel close to the border of their territories. Both of these more extreme examples raise questions as to the appropriateness of some forms of electronic monitoring. The above example should give pause to managers who must decide whether to monitor and to what extent to monitor. Electronic monitoring research is beginning to help address these important decisions.
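The invisible fence alert in the scenario above can be pictured in a few lines of code. The sketch below is illustrative only; the rectangular territories, coordinates, and alert wording are invented and do not reflect any particular location-based service.

```python
# Illustrative geofence check for the hypothetical AgriSeed scenario: raise an
# alert when a salesperson's reported position falls outside his or her
# assigned territory. Bounding boxes and coordinates are invented values.

TERRITORIES = {
    # (min_lat, min_lon, max_lat, max_lon) -- crude rectangular stand-ins
    "Tom":      (37.7, -88.1, 41.8, -84.8),   # roughly Indiana
    "Jennifer": (38.4, -84.8, 41.9, -80.5),   # roughly Ohio
    "Mark":     (41.7, -90.4, 48.3, -82.4),   # roughly Michigan
}

def outside_territory(name: str, lat: float, lon: float) -> bool:
    min_lat, min_lon, max_lat, max_lon = TERRITORIES[name]
    return not (min_lat <= lat <= max_lat and min_lon <= lon <= max_lon)

def check_positions(positions: dict) -> list:
    """Return alert messages for anyone located outside their own territory."""
    return [f"ALERT: {name} reported at ({lat:.2f}, {lon:.2f}) outside territory"
            for name, (lat, lon) in positions.items()
            if outside_territory(name, lat, lon)]

if __name__ == "__main__":
    # Mark's reported position is inside Indiana in this made-up reading.
    print(check_positions({"Tom": (39.8, -86.2), "Mark": (39.9, -86.1)}))
```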
214 • Bradley J. Alge and S. Duane Hansen
RESEARCH SINCE 1984 Electronic monitoring research prior to 1984 was virtually non-existent, and it wasn’t until the mid-1980s when research began in earnest. A number of environmental forces played a role in triggering the mid-1980s as the birth of electronic monitoring research. First, prior to this time, the lack of available technologies precluded most companies from using technology to monitor employees. Second, even if technically possible, other feasibility concerns such as the cost to monitor factored into the decision (computing costs, for example, were much higher, and outside the reach of many companies). Third, the types of work were beginning to change. Manual systems were being replaced with automated systems. Unskilled and semi-skilled work was being replaced by knowledge work that often employed computing technologies and required that critical information be stored on corporate networks and databases. Indeed, work was becoming increasingly “textualized” (Zuboff, 1988); that is, work was easier to record electronically, creating a permanent audit trail of one’s actions. With the onset of more advanced technologies including decentralized computing systems, networks, personal computers, spy cameras, internet, and global positioning systems (GPS), organizations’ capabilities to electronically monitor employees, and indeed, their temptation to do so, has significantly increased. Early research tended to examine electronic monitoring with a “broad stroke.” For example, research would often treat electronic monitoring dichotomously by comparing companies that monitor against those that do not monitor (e.g., Irving, Higgins, & Safayeni, 1986)—disregarding the type of electronic monitoring that may be occurring. This undifferentiated approach led to less granular predictions on both the positive and negative effects of monitoring. In other words, electronic monitoring (in any form) generated certain positive and negative outcomes. The question researchers asked, was “what would happen if organizations monitor?” with less attention to the question of how they monitor. Much of the early electronic monitoring research is limited in a number of ways. First, early research tended to focus heavily on case studies and anecdotal evidence. Second, this rather inductive approach was more descriptive and atheoretical. Third, as mentioned, this early research failed to capture variation in ways electronic monitoring can be implemented across organizations. Since this early research, our understanding of electronic monitoring has evolved to consider the types of monitoring and how the types, features,
Workplace Monitoring and Surveillance since “1984” • 215 and characteristics combine to affect outcomes. Aiello and his colleagues were among the first scholars to examine a differentiated perspective on electronic monitoring (Aiello, 1993; Aiello & Kolb, 1993). These and other scholars viewed electronic monitoring as neutral, but argued that the form in which electronic monitoring is enacted and how such systems are used play a significant role in understanding the positive and negative effects of electronic monitoring (see also, Ambrose & Alder, 2000; Griffith, 1993; Kidwell & Bennett, 1994a; 1994b). Our goal here is not to provide an exhaustive review of electronic monitoring research, but rather, to provide a representative review. In reviewing the literature, we first examine the sparse research that has looked at electronic monitoring as a dependent variable. Then, we turn to research that has examined electronic monitoring as an independent variable.
ELECTRONIC MONITORING AS A DEPENDENT VARIABLE In considering the full body of electronic monitoring research, relatively little attention has been given to the front-end monitoring decision. That is, scholars have been more concerned with understanding employee reactions to electronic monitoring and less interested in understanding the factors that lead a manager or organization to engage in electronic monitoring in the first place. There are but a few exceptions. In a virtual lab simulation, Alge, Ballinger, and Green (2004) manipulated leader trust in team members and leader dependence on team members to test if these variables affected a leader’s choice to electronically monitor. Results indicated that leaders were more likely to electronically monitor when they were both dependent on their subordinate and when subordinate reliability was low (a proxy for trust). This tendency to initiate electronic monitoring increased over time for consistently unreliable subordinates. These results suggest under conditions of heavy dependence, cognition-based trust (i.e., a belief in one’s reliability and trustworthiness) may be an important factor that determines whether managers engage in electronic monitoring of subordinates. Theoretically, as well, trust has been identified as an important cognition in a manager’s psyche that may predict the decision to monitor electronic-
216 • Bradley J. Alge and S. Duane Hansen Chen and Ross (2005) developed a model for the managerial decision to monitor. They posited that trust in employees is an important theoretical driver of the decision to monitor, consistent with Alge, Ballinger, and Green (2004). Moreover, in addition to trust, they outlined a managerial calculus that included other cognitive evaluations. For example, they suggest that managers would consider the costs of their actions in terms of employee reactions and other positive and negative outcomes that might result. Managers will also consider the past history and reliability of potential targets of monitoring. Finally, they point to the role of organizational culture as a contextual factor that predicts managerial monitoring. Unfortunately, with few exceptions (e.g., Alge, Ballinger, & Green, 2004), there have been no attempts to validate the relationships proposed by Chen and Ross (2005). Indeed, the bulk of research prior to and since Chen and Ross (2005) has focused almost exclusively on employee reactions to electronic monitoring.
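To make this managerial calculus more concrete, the sketch below translates the kinds of evaluations Chen and Ross (2005) describe into a toy decision rule. It is a hypothetical illustration only: the variable names, weights, and decision rule are invented for exposition and are not drawn from Chen and Ross (2005) or from any validated measure.

```python
# Hypothetical illustration only: the factor names, weights, and decision rule
# below are invented for exposition and are not part of Chen and Ross's (2005)
# model or of any validated instrument.

def decide_to_monitor(trust, past_reliability, culture_favors_monitoring,
                      anticipated_reaction_cost, monitoring_cost):
    """Return True if the assumed net benefit of monitoring is positive.

    All inputs are assumed to be scaled to the 0-1 range.
    """
    perceived_benefit = (0.4 * (1 - trust)               # low trust raises the perceived need
                         + 0.3 * (1 - past_reliability)  # unreliable targets raise it further
                         + 0.3 * culture_favors_monitoring)
    perceived_cost = 0.5 * anticipated_reaction_cost + 0.5 * monitoring_cost
    return perceived_benefit > perceived_cost

# Example: a low-trust, historically unreliable subordinate in a monitoring-friendly culture
print(decide_to_monitor(trust=0.2, past_reliability=0.3, culture_favors_monitoring=0.7,
                        anticipated_reaction_cost=0.3, monitoring_cost=0.2))  # True
```

The point of the sketch is simply that the monitoring decision can be framed as a subjective weighing of trust, target reliability, culture, and anticipated costs, which is the form of calculus the empirical work reviewed below has yet to validate.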
ELECTRONIC MONITORING AS AN INDEPENDENT VARIABLE In considering potential positive and negative outcomes of electronic monitoring, research has tended to focus on a few key outcomes including stress, job performance, attitudes (e.g., satisfaction, attachment), and perceptions related to fairness, privacy, and personal control. We review research for each of these areas below. Stress One of the more robust findings in the organizational control literature generally, and the electronic monitoring literature specifically, is that increased monitoring is associated with increased stress for those targeted (e.g., Schleifer, Galinsky, & Pan, 1996). Close supervision is associated with increased stress (e.g., Lu, 2005), and electronic monitoring takes close supervision a step further insofar as a manager need not be physically present in order to monitor. Thus, the potentially unceasing or continuous capability of electronic monitoring, even when the supervisor is not physically collocated, creates an unrelenting form of control that employees find particularly stressful.
Workplace Monitoring and Surveillance since “1984” • 217 Electronic monitoring in general has been shown to cause arousal and lead to higher blood pressure (Henderson et al., 1998). Moreover, when monitoring systems are control-based rather than developmental, employees will likely experience greater negative outcomes, including increased burnout (Castanheira & Chambel, 2010; Griffith, 1993). One reason why electronic monitoring is associated with increased stress and burnout is that it changes the nature of work itself. For example, scholars have suggested that electronic monitoring elevates stress because it increases employees' perceived job demands, reduces their perceived control over their work, and reduces social support (Carayon, 1993; Amick & Smith, 1992; Smith et al., 1992). Organizationally imposed control reduces autonomy and increases perceived job demands, factors that contribute to burnout. The effects of electronic monitoring on stress may be qualified by whether the monitoring system targets employees individually or as a group. Aiello and colleagues have provided valuable insights into the effects of electronic monitoring on a number of outcomes. In one study, Aiello and Kolb (1995) found that electronic monitoring increased stress for employees, but that the effect was stronger when individual employees were targeted than when a group of employees was the target. When groups are the focus (e.g., department output is tracked rather than individual output), the social impact of the monitoring is diffused across the group, and the deindividuation associated with group monitoring is generally less stressful than monitoring that singles out individuals. It is worth noting that not all empirical studies have found a positive relationship between electronic monitoring and stress. For example, Nebeker and Tatum (1993) conducted an experiment employing a simulated organization. They found no difference in stress between groups that were electronically monitored (under varying levels of performance goals) and an unmonitored control group. Monitored participants performed at a higher rate than unmonitored participants, though there was no difference in quality or satisfaction. This study is noteworthy in that it was one of the first empirical studies to show that electronic monitoring (at least when coupled with goal setting) may not always lead to negative consequences such as increased stress. Nebeker and Tatum (1993) suggested that stress might only result when electronic monitoring is paired with goals and performance-contingent rewards. Indeed, electronic monitoring does not exist in a vacuum, and studies like Nebeker and Tatum's began to look at electronic monitoring as embedded in organizational systems such as goals and rewards.
218 • Bradley J. Alge and S. Duane Hansen Attitudes Many factors can lead employees to be dissatisfied on the job, but it appears that electronic monitoring may be one factor that drives dissatisfaction (e.g., Irving et al., 1986). Smith and colleagues conducted a large survey of telecommunications workers and found that, compared to non-monitored employees, monitored employees experienced more adverse perceptions of working conditions, in addition to increased stress and strain (Smith et al., 1992). Monitored workers reported higher workloads, less variety in their work, less job control, and greater dissatisfaction. They also experienced greater boredom, anxiety, depression, and anger. Finally, despite receiving more feedback than non-monitored employees, monitored employees felt that work standards were more unfair (we will expand on fairness below). Chalykoff and Kochan (1989) proposed and tested a model that examined the relationship of electronic monitoring systems with attitudes and turnover propensity. How electronic monitoring systems are designed in terms of the characteristics of the associated feedback system, they argue, will affect satisfaction (both with monitoring and with the job in general) and turnover intentions. To test their model, Chalykoff and Kochan (1989) surveyed 744 automated collection workers from the IRS. Factors such as immediacy and clarity of feedback, expertise of the supervisor, consideration shown by the supervisor, and the sign of feedback affected job satisfaction through their influence on satisfaction with monitoring. This study was also important because it recognized that individuals might see electronic monitoring as an invasion of privacy. Also, this study, like research by Aiello and colleagues, recognized that the aversiveness of electronic monitoring was less a function of whether electronic monitoring was implemented and more a function of how it was implemented. For example, insofar as electronic monitoring provides feedback to employees in a timely manner and supervisors show consideration, positive attitudes will be enhanced. Ultimately, attitudes in general, and attitudes toward monitoring specifically, will be more positive when organizations conduct monitoring within a supportive culture. In the context of electronic monitoring, supportive cultures would reflect monitoring systems that: a) allow employees input into the monitoring system's design; b) focus on groups of employees rather than singling out individuals; and c) focus on performance-relevant activities (see Alder, 2001). Finally, although positive cultures can help ensure more positive attitudes in the context of electronic
Workplace Monitoring and Surveillance since “1984” • 219 monitoring, organizations need to recognize that individual differences (e.g., ethical orientations) can combine with the environment to predict employee attitudinal reactions to monitoring (Alder et al., 2008). Performance What is the relationship between electronic monitoring and performance? Research suggests that the answer is complex. Organizations often use performance as a justification for why electronic monitoring is utilized. The assumption underlying this perspective is that increased monitoring will lead to increased performance. However, performance can be defined quite subjectively. For example, does performance include only task performance? Does it include quality, in addition to quantity of output? Does it include in-role and extra-role behaviors? Does it include rule adherence or deviation from rules? Depending on whom you ask, the answer is all of the above. Let us look at the various theoretical perspectives that have been employed to understand the effects of electronic monitoring on performance and related outcomes. Proponents of electronic monitoring point to its motivational benefits in terms of tracking and providing feedback on the attainment of goals (cf. Aiello & Shao, 1993). According to this perspective, electronic monitoring can be good for both the individual employee and the organization, as research on goal setting points to the benefits of feedback combined with goals. Thus, insofar as an electronic monitoring system identifies clear, attainable goals and provides specific feedback to individuals on the attainment of those goals, motivation, and consequently performance, ought to be enhanced. Earley (1988) found that computer-based performance feedback for magazine subscription employees enhanced performance when the feedback received from the computer monitoring system was specific (vs. general) or self-generated (vs. provided by a supervisor). One potential advantage of computer-based monitoring and feedback over supervisory feedback is that it can be, and often is, perceived as more objective. Thus, employees may see automated systems of monitoring and feedback as more accurate and, therefore, fair. Moreover, they may take this information more seriously because the information or feedback provided can be trusted (e.g., Earley, 1988). Finally, employees may benefit from the ability to receive more timely feedback. A recent study of warehouse employees examined how the implementation of a wireless electronic system that monitored employee goals
220 • Bradley J. Alge and S. Duane Hansen and provided them with performance feedback would affect performance (see Goomas & Ludwig, 2009). Results indicated that the new monitoring and feedback technology led to an increase in performance of 24 units picked per person, or a 12.9 percent boost in performance. Stanton and Sarkar-Barney (2003) found no overall difference in performance quantity across three monitoring conditions (no monitoring, computer monitoring, human supervision), but did observe that both no monitoring and computer monitoring led to higher performance quality than human supervision. Sometimes, however, electronic monitoring systems not only provide feedback to employees directly, they also provide information to managers who must interpret and control the performance of their subordinates. Supervisors, for example, will utilize electronically monitored performance data to enhance the accuracy of performance evaluations (Fenner, Lerch, & Kulik, 1993; Kulik & Ambrose, 1993). Aiello and colleagues have focused research attention less on whether electronic monitoring enhances performance and more on when electronic monitoring enhances performance (Aiello & Kolb, 1995; Aiello & Svec, 1993). Applying a social facilitation theoretical lens (see Zajonc, 1965), these scholars argue that electronic monitoring (as an extension of physical supervision) can enhance performance on simple or well-learned tasks and impede performance on novel or complex tasks. Thus, type of task serves as an important moderator of the electronic monitoring-performance relationship. Several studies have supported the extension of social facilitation effects to electronic supervision (Aiello & Kolb, 1995; Aiello & Svec, 1993; Griffith, 1993). For example, Douthitt and Aiello (2001) found that performance on complex tasks was lower for electronically monitored workers than for non-monitored workers. In examining participants performing a relatively simple task, Aiello and Kolb (1995) found that high-ability workers who were electronically monitored outperformed high-ability workers who were not electronically monitored. Conversely, low-ability workers who were not monitored outperformed low-ability workers who were electronically monitored, in support of social facilitation effects. As in other studies of stress, individuals in the monitored conditions reported higher levels of stress than those in the non-monitored condition. Interestingly, there were no reported differences in performance when the focus of monitoring was on the group versus the individual. However, individually monitored participants reported higher stress than group-monitored participants.
Workplace Monitoring and Surveillance since “1984” • 221 It appears that individuals respond to the electronic presence of a supervisor in much the same way they would if the supervisor were physically present. Using a complex task, Aiello and Svec (1993) found that individually monitored subjects in both the physical monitoring and electronic monitoring condition performed at the same level and this performance was significantly lower than non-monitored subjects. A different theoretical bent toward the monitoring-performance relationship derives from theories of organizational control, agency, reinforcement, and deterrence. Agency theory, for example, assumes that employees will behave opportunistically, and that monitoring mechanisms need to be in place in order to ensure performance and adherence to organizational rules (Jensen & Meckling, 1976; Eisenhardt, 1989). Reinforcement theories, too, are predicated on the notion that if an undesirable behavior is detected (from a monitoring system) and subsequently punished, misbehavior will decrease or desirable behavior will increase (if rewarded) (Skinner, 1953). In the information systems literature, deterrence theory suggests that the salience and presence of organizational controls (including monitoring and sanctioning) increases the perceived severity of misbehavior and therefore such systems of control are seen as a preventative measure or deterrent to misbehavior (e.g., Straub & Welke, 1998). Indeed, studies have found that when employees are aware of computer monitoring it can elevate the perceived risk of misbehavior and reduce individuals’ intentions to misbehave (D’Arcy, Hovav, & Galletta, 2009). Studies have also shown that having a positive attitude toward surveillance can increase employee intentions to comply (Spitzmuller & Stanton, 2006). Relational factors may also combine with electronic monitoring to predict performance. For example, how people respond to monitoring by their leaders may depend in part on whether those being monitored are in the leader’s in-group or outgroup. Subasic et al. (2011) found that surveillance of outgroup members tended to lead to conformance, whereas the ability of a leader to influence followers was attenuated for followers in the leader’s in-group. Electronic monitoring appears to have an effect of focusing employees’ attention on some dimensions of job performance, while undermining others. For example, employees being monitored for performance may focus their attention on production quantity to the exclusion of quality (Grant, Higgins, & Irving, 1988; Irving et al., 1986). Moreover, although electronic monitoring may focus employee attention on the objectives being monitored, potentially increasing production of monitored metrics, insofar as privacy or personal freedom is impinged, employees may react
222 • Bradley J. Alge and S. Duane Hansen against the organization through deviance (Bennett, 1998). Indeed, this type of reactance behavior may manifest itself through subtle or difficult to detect deviance (Alge et al., 2010). A recent case study by Duane and Finnegan (2007) highlights the potential unintended effects of monitoring. These authors examined a 1,200 employee Irish division of a multinational healthcare organization that was implementing electronic monitoring of employee e-mail. They tracked email usage longitudinally over a 12-month period and observed that electronic monitoring of e-mail reduced the amount of non-work related e-mail. However, it also created an environment of mistrust that affected both the work and non-work communication climate. In some instances, workers refused to use e-mail for business communication as it was perceived as a less trusted medium. When they did use the medium, employees were much more measured, and took longer to craft their messages.
ETHICAL IMPLICATIONS: FAIRNESS AND PRIVACY A significant body of research on electronic monitoring has focused on the perceived fairness and invasiveness associated with these systems (Alge, 2001; Alge et al., 2006; Ambrose & Alder, 2000; Douthitt & Aiello, 2001; Eddy, Stone, & Stone-Romero, 1999; Hovorka-Mead et al., 2002; Kidwell & Bennett, 1994a, 1994b; Stanton, 2000; Westin, 1992; Zweig & Webster, 2002). The 1987 USOTA report raised the importance of understanding differences and variations in how systems of monitoring may be received by employees. This report was important because it provided a framework for understanding reactions to electronic monitoring. The report suggested that future research and applications of electronic monitoring in the workplace should consider the purpose of the monitoring (why is monitoring being implemented in a particular situation), the method (how is monitoring being implemented), and the effect (what are the effects of monitoring on employees). In particular, both privacy and fairness were key considerations in the USOTA report, particularly with respect to purpose and method. Since then, theoretical and empirical work has attempted to identify the monitoring design features or rules that contribute to employee perceptions of fairness and invasiveness (e.g., Ambrose & Alder, 2000; Kidwell & Bennett, 1994a, 1994b), many of which mirror rules of justice
Workplace Monitoring and Surveillance since “1984” • 223 in general. For example, electronic monitoring systems that enable targets of monitoring to participate (e.g., by having input into the design and implementation of monitoring), that are consistent (e.g., in how data is collected and used), that are free from bias (e.g., avoiding the selective administration of monitoring), and that are accurate in terms of the data collected are generally deemed to be more fair. A person's sense of control seems to be one of the reasons why monitoring may affect fairness and privacy perceptions. People have a basic need for control. Monitoring is thought to usurp control, but research has shown that when monitoring systems restore control by providing targeted employees advance notice or voice in the system, fairness and privacy perceptions are enhanced (Alge, 2001; Douthitt & Aiello, 2001; Hovorka-Mead et al., 2002). When people can affect their own outcomes, control perceptions are often enhanced, and this holds true for electronic monitoring (Stanton & Barnes-Farrell, 1996). People also want to understand the decisions and systems that affect them. Greater understanding is associated with a greater sense of control as uncertainty is reduced. Monitoring systems in which people receive a justification or adequate explanation enhance positive reactions, including fairness (Hovorka-Mead et al., 2002; Stanton, 2000; Zweig & Webster, 2002). When electronic monitoring is perceived as unfair or invasive, organizations run the risk that employees may not comply with rules and procedures, may slack on the job, or may engage in deviant behaviors (Alge et al., 2010; Zweig & Scott, 2007). Moreover, electronic monitoring of employee communication such as e-mail or instant messaging has been shown to curtail some forms of communication (Botan, 1996; Duane & Finnegan, 2007; Holton & Fuller, 2008). Privacy and fairness reactions are also affected by context (Alge et al., 2006). Whereas the electronic monitoring of employees in bureaucratic cultures may be construed as standard operating procedure and therefore expected by many employees, in supportive cultures electronic monitoring may be seen as unnecessary and lead to higher perceptions of unfairness (Alder, 2001). Organizations that adopt restrictive policies with respect to organizational assets such as computers and e-mail systems (under the assumption that employees believe their utilization of these assets will be electronically monitored to detect use) risk increasing the perceived invasiveness and unfairness of such policies (Paschal, Stone, & Stone-Romero, 2009). Electronic monitoring may be possible without negative outcomes when it is embedded in a high-trust context and is seen as fair (Westin, 1992). Of course, the very act of monitoring can potentially undermine organizational trust. Indeed, trust appears to be a recurring
224 • Bradley J. Alge and S. Duane Hansen theme when analyzing electronic monitoring-related behavior. In the next section we present a framework for understanding the interplay between electronic monitoring and trust, which we refer to here as the trust-monitoring cycle.
CONCEPTUAL FRAMEWORK AND FUTURE AGENDA The Trust-Monitoring Cycle: A Framework In the above review, we have attempted to provide the reader with a representative overview of the antecedents and outcomes of electronic monitoring. In this section, we develop a general framework to understand electronic monitoring, which we hope will serve as a guide for researchers moving forward. We propose a framework that begins to holistically address the upstream factors that drive managerial decisions to electronically monitor, as well as the downstream effects of electronic monitoring on those targeted. As we turn toward the future, trust is central to our conceptualization of electronic monitoring. Trust Up to this point, trust has played a cursory role in our discussion of electronic monitoring. Now, it plays a pivotal role as we develop a framework to understand monitoring. Trust refers to the willingness of one party to be vulnerable to another party in situations that are risky (Mayer & Gavin, 2005; Mayer, Davis, & Schoorman, 1995; McAllister, 1995). A number of factors can increase risk in a situation, thereby increasing the importance of trust. For example, uncertainty or lack of information increases managerial risk as favorable outcomes become less predictable. Trust may be seen as unnecessary in high monitoring contexts because monitoring reduces the uncertainty or risk that is needed for there to be trust. Monitoring and trust, therefore, are often viewed as substitutes or alternative mechanisms of control (Luhmann, 1979; Schoorman, Mayer, & Davis, 2007). Several factors that we have identified in our review above have been correlated with trust. For example, as we have noted, monitoring affects employee performance. Yet, employee performance, or more specifically
Workplace Monitoring and Surveillance since “1984” • 225 reliable performance, enhances managerial trust in employees. Thus, there is a reciprocal nature to the monitoring–trust relationship. Fairness is also correlated with trust (e.g., DeConinck, 2010). Employees who see their managers or organizations as fair will have greater trust. Indeed, fairness often serves as a heuristic from which one can infer trust, absent other information (Lind & van den Bos, 2002). Moreover, overly oppressive systems of control, characterized by high monitoring, may be seen as unfair, lowering employee trust in managers and organizations. It is because of the interplay of trust with the antecedents and outcomes of monitoring reviewed above that we believe it to be a central construct that will help scholars and practitioners best understand electronic monitoring. Electronic monitoring is not a single decision, but many decisions made over time. These decisions have important effects on employees, and employee reactions affect future decisions. We will now introduce our framework by discussing some of these pivotal decisions. Pivotal Decisions As can be seen in the Trust-Monitoring Cycle in Figure 10.1, trust appears at several pivotal junctures. First, managers' initial trust in employees is a key cognition that affects managerial decision-making. That is, when managers have low trust in employees, both the likelihood of a decision to monitor and the intensity of monitoring will increase. Once the decision to monitor has been made, individual employees will likely become aware of the monitoring. This awareness may cause them to align their actions with the rules and performance standards of the organization, to focus on the task, and to improve performance. Employees who respond in this manner will likely increase managers' trust in them. However, the monitoring decision also conveys critical cues or social information about employees' identities and their value to their groups (Alge et al., 2006), and as a consequence, employees will question managers and their organizations. In particular, it is expected that employee trust in managers and the organization will be lower as monitoring of employees increases (e.g., Stanton & Weiss, 2003). Finally, to complete the trust-monitoring cycle, when employees mistrust their managers and organizations, negative consequences will follow. In particular, as the psychological contract of trust between manager and employee deteriorates, employees will be less motivated to engage in behaviors that support the source of their mistrust. In fact, we contend that they may actively seek to undermine the source of their mistrust. Mistrust can motivate employees
226 • Bradley J. Alge and S. Duane Hansen
FIGURE 10.1 Trust-monitoring framework. [The figure depicts the cycle described in the text. Its elements include: initial managerial trust in employees; managerial differences (propensity to trust, ethical orientation); the cost/benefit analysis of monitoring and breaches; the decision to monitor; the "How?" (Part I) choices of secrecy, technology (task, frequency), and purpose and scope; awareness and follow-through (monitoring capability, standards/rules, sanctions); potential adherence to tasks within scope; managerial trust in employees; the "How?" (Part II) features of fairness, privacy, and control; employee trust in the manager; and reactance/employee breach. A key distinguishes trust cognitions, design features and cognitions, employee behaviors, and manager differences in decision-making.]
Workplace Monitoring and Surveillance since “1984” • 227 to engage in seemingly minor infractions (e.g., taking longer breaks, slacking on the job) or major infractions (e.g., stealing company trade secrets, sabotage). All of these behaviors will fuel managers' mistrust in employees, increasing, once again, the likelihood that managers will engage in (or continue) monitoring. The cycle has the potential to escalate and continue into a downward spiral of increased monitoring, followed by mistrust, followed by more monitoring. Questions to consider when examining the trust–monitoring relationship include the following. First, we have to consider the different types of trust. McAllister (1995), for example, distinguishes between cognition-based trust, which rests on the belief that another party is reliable, and affect-based trust, which rests on the emotional bond one has toward another. Lewicki and colleagues distinguish between knowledge-based trust (similar to McAllister's cognition-based trust), identity-based trust (similar to McAllister's affect-based trust), and calculus-based trust (see Lewicki & Bunker, 1995). This latter form, sometimes referred to as deterrence-based trust, refers to trust that exists because of the controls in place to reward trusting behavior and punish behaviors that breach trust. Second, we must ask, from whose vantage point is trust being examined? That is, who is the trustor and who is the trustee? As others have noted, the focal trustor or trustee can be the manager, the organization, the top management team, or an individual employee (Ferrin, Bligh, & Kohles, 2007). Managers who monitor to enhance their own "trust" in employees may also create the conditions that undermine employees' trust in them. Indeed, whereas Lewicki and colleagues might suggest that such increased monitoring creates a high degree of calculus-based trust, Schoorman and colleagues would argue that control-based forms of "trust" do not represent trust at all (Schoorman, Mayer, & Davis, 2007). Part of the logic for this position is that, with controls in place, the risk of someone violating expectations in a relationship is diminished, and without risk, trust is irrelevant. Rather, trust would be reflected in the decision not to monitor, because by eschewing monitoring the manager is signifying his or her reliance on another in a situation that is risky. The decision not to monitor would be more of a manifestation of trust than the decision to monitor. Putting the debate on calculus-based trust aside, what this discussion suggests is that monitoring may increase some forms of trust (calculus-based trust) while lowering other forms of trust (identity-based or affect-based). Indeed, as Ferrin, Bligh, and Kohles (2007) recognized, the relationship between monitoring and trust is complex.
228 • Bradley J. Alge and S. Duane Hansen Several studies have examined the relationship between monitoring and trust. Strickland (1958) found that subjects playing the role of superordinate held less trust in those "subordinates" they could monitor, compared to those they did not monitor. Moreover, subsequent decisions to monitor were influenced by trust, with a higher tendency to monitor when trust was low. McNall and Roch (2009) studied call center representatives and found that when electronic monitoring was seen as developmental (versus coercive), interpersonal justice was enhanced; when an adequate explanation was provided, informational justice was enhanced; and both interpersonal justice and informational justice were positively related to employee trust in management. Alder, Ambrose, and Noel (2006) conducted an experiment and found that advance notice and organizational support enhanced trust following the implementation of electronic monitoring. As we consider our trust-monitoring framework in Figure 10.1, there are three critical junctures or decision points that require some further explanation. These are reflected by the decision triangles in Figure 10.1 and include "The Decision to Monitor," "How to Monitor (Part I)," and "How to Monitor (Part II)." We expand on these three decision points by identifying factors, in addition to trust, that affect them. As noted earlier, and as supported by research (Alge, Ballinger, & Green, 2004; Chen & Ross, 2005), a manager's decision to monitor will depend on the level of trust the manager has in his or her subordinates. In addition, however, we posit that there are individual differences among managers that, beyond trust, will affect the decision to monitor. These differences include both dispositional differences, such as propensity to trust and ethical orientation, and subjective differences in the perceived costs and benefits of monitoring. First, dispositional differences among managers will affect the decision to monitor. Specifically, we contend that a manager's propensity to trust and ethical orientation will predict monitoring decisions. Propensity to trust refers to one's innate tendency to view others as trustworthy across a wide variety of situations (McKnight, Cummings, & Chervany, 1998). Individuals who have a high propensity to trust will be less likely to enact electronic monitoring (Alge et al., 2004). Moreover, we suspect that propensity to trust will affect the decision to monitor directly, as well as indirectly (through its effect on trust). Additionally, managers (and employees) may differ in their ethical orientation. For example, Alder et al. (2008) distinguish formalism (actions are viewed as ethical to the
Workplace Monitoring and Surveillance since “1984” • 229 extent that they adhere to accepted rules) and utilitarianism (actions are ethical if they lead to the greatest good) as two distinct ethical belief systems that can affect one’s reactions to monitoring, but, we posit here, may also influence decisions to monitor. Not only might these belief systems influence decisions, but the absence of these belief systems will influence decisions as well. These dispositional tendencies will also help shape how managers will ultimately design the monitoring system (addressing the “how” questions in Figure 10.1). Additionally, managers process information subjectively. Thus, managers are likely to vary in terms of the relative weights they put on the costs to monitor (e.g., the cost to purchase monitoring software, the time and effort to process the information gathered, etc.) or not monitor (e.g., increased employee deviance), and the benefit of monitoring (e.g., employee does what he or she is supposed to; improved productivity) or not monitoring (e.g., free time to do something else). Managers who view the benefit of monitoring as outweighing the cost to monitor will be more likely to engage in monitoring (Chen & Ross, 2005). The cost/benefit analysis and decision to monitor implies a set of system goals that would be achieved if monitoring were enacted. These goals drive the “how” decision. Once the decision to initiate monitoring has been made, managers must decide how monitoring will be conducted. Although our framework in Figure 10.1 breaks the “how” decision into two separate decisions, in reality, these decisions may occur as part of an overall decision. The reason we separate them in the framework is to highlight the different behavioral responses (i.e., adherence and reactance) that monitoring might invoke. The first “how” question deals with many of the initial design parameters that managers must determine as they implement electronic monitoring. Managerial and organizational goals are implied in the cost/benefit analysis, and these goals will affect the design of the monitoring system. Some of the initial decisions will include whether to electronically monitor with transparency (where the targets are made aware that they are being monitored, versus secret monitoring), which technology to use, and the purpose and scope of monitoring. Some managers, in an attempt to minimize cost, may believe that monitoring in secret is less costly. However, managers that choose to monitor in secret risk backlash from employees who eventually will discover the monitoring. We posit that monitoring transparently will have less of a long-term negative effect on trust than monitoring in secret. Managers must also decide on the types of technologies they will use to monitor, what the purpose of monitoring will be, and the frequency at
230 • Bradley J. Alge and S. Duane Hansen which they will monitor. When managers choose technologies with the purpose of developing their employees, employees will react favorably, leading to greater adherence to desired organizational goals. However, if the monitoring system is viewed as coercive, or if it is seen as overly oppressive (e.g., continuous, unrelenting monitoring), it can set the stage for employees to react against managers and their organizations. As has been shown, the intent or purpose (i.e., developmental or coercive) of the monitoring system itself can affect how employees respond (Allen et al., 2007; McNall & Roch, 2009; Sewell & Barker, 2006). Secrecy, technology, and scope are intertwined components. The miniaturization of technology, for example, increases the ease with which technologies can secretly observe others' actions. Technologies also do not require the presence of a supervisor and, therefore, electronic monitoring can be conducted continuously. Together, secrecy, technology, and scope affect awareness. Awareness is critical for organizational control. For control to exist, there must be a monitoring mechanism to detect deviations from desired behaviors or outcomes. There must be standards or rules against which employee behavior is judged (e.g., performance goals), and there must be sanctions. If any of these conditions is lacking, organizations run the risk that monitoring will not achieve any of its goals. That is, absent these control dimensions, adherence to tasks is less likely. Absent monitoring and control mechanisms, individuals are more likely to behave opportunistically in their own interest rather than the interest of their organization. How much employees adhere to or deviate from desired organizational goals depends, in part, on their awareness of the monitoring system and the controls that are a part of it. For example, employees will adhere to organizational goals and norms under electronic monitoring systems that clarify those goals and norms, identify when performance falls short of those goals, and sanction behaviors deemed unsatisfactory in the pursuit of those goals. Following theories of agency, reinforcement, and deterrence, monitoring without "teeth" or consequences will be less effective in ensuring alignment of employee behavior with the interests of the organization. In sum, the Part I "How" decision suggests that managers can make choices concerning secrecy, technology, and purpose and intent. These choices affect employees' awareness of monitoring. However, in order for behaviors to be controlled in such a way that supports organizational
Workplace Monitoring and Surveillance since “1984” • 231 goals, the control system must be able to monitor the appropriate behaviors and standards, clearly identify what those standards are, and have meaningful sanctions for underperformance. This will help ensure adherence to an organization's goals. It must be noted, however, that monitoring will only be effective in ensuring adherence to tasks within the scope of monitoring. Not every behavior can necessarily be tracked. Managers make choices about which goals are important, thereby focusing employee attention. Studies have shown, for example, that electronic monitoring systems focused on quantity have led to increases in the quantity of production, but sometimes at the expense of quality (Grant et al., 1988; Irving et al., 1986). When employees follow the rules, reach performance goals, and adhere to the norms of the organization, managerial trust in employees will likely increase. On the flip side of the coin, the mere awareness of monitoring risks undermining trust on the part of employees. Monitoring increases self-awareness, causing individuals to examine their own identity (Alge et al., 2006) and their value to identified groups, such as their organization. This self-evaluation raises the salience of identity-related concerns of fairness, privacy, and personal control. The second part of the "How" question refers to the design features that affect identity. Many of these features have been identified in prior electronic monitoring research (Alge, 2001; Alge et al., 2006; Ambrose & Alder, 2000; Kidwell & Bennett, 1994a; Zweig & Webster, 2002), and scholars and practitioners would be well served to design monitoring systems that adhere to fundamental rules safeguarding fairness and privacy. Knowledge of monitoring raises self-awareness such that people will now care about how the monitoring system affects them in terms of fairness, privacy, and autonomy; lacking that self-awareness, individuals might remain indifferent. The key point, however, is that monitoring systems that are deemed unfair, invasive, or autonomy-reducing will be rejected by those targeted: their trust in managers (and the organization) will deteriorate. Indeed, those targeted are likely to engage in reactance against the source of trust deterioration and seek to reestablish control and repair the perceived damage to their identity (Alge et al., 2010). This reactance can take the form of a wide variety of counterproductive work behaviors, deviance, and withdrawal. Of course, such behaviors fuel manager mistrust in employees, starting the cycle over again. Let us now look ahead toward some key issues that will drive electronic monitoring research as we move toward the future.
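The escalation dynamic at the core of the trust-monitoring cycle can also be illustrated with a small simulation. The sketch below is a hypothetical illustration only: the linear update rules, coefficients, and starting values are invented assumptions rather than estimates derived from the framework or from any study reviewed here; it simply shows how mutually reinforcing declines in trust and increases in monitoring can spiral.

```python
# Hypothetical illustration of the trust-monitoring spiral; the update rules and
# coefficients are invented assumptions, not estimates from the framework.

def simulate_cycle(manager_trust=0.6, employee_trust=0.6, periods=6):
    history = []
    for period in range(periods):
        # Lower managerial trust is assumed to produce more intense monitoring
        # (0 = no monitoring, 1 = continuous monitoring).
        monitoring = 1 - manager_trust
        # Monitoring is assumed to erode employee trust in the manager ...
        employee_trust = max(0.0, employee_trust - 0.3 * monitoring)
        # ... and lower employee trust is assumed to raise breach/reactance,
        # which in turn erodes managerial trust in the employee.
        breach = 1 - employee_trust
        manager_trust = max(0.0, manager_trust - 0.2 * breach)
        history.append((period, round(monitoring, 2),
                        round(employee_trust, 2), round(manager_trust, 2)))
    return history

for period, monitoring, emp_trust, mgr_trust in simulate_cycle():
    print(f"period {period}: monitoring={monitoring}, "
          f"employee_trust={emp_trust}, manager_trust={mgr_trust}")
```

Under these assumed parameters, monitoring intensity rises and both parties' trust declines each period, mirroring the downward spiral described above; different assumptions (e.g., an intervening trust repair effort) would dampen or break the pattern.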
232 • Bradley J. Alge and S. Duane Hansen
FUTURE DIRECTIONS Much has been accomplished in our understanding of electronic monitoring since 1984. However, critical questions remain that we call upon scholars to address. It is hopefully clear that relatively little is understood about why some managers choose to implement electronic monitoring while others do not, and more specifically, what factors shape the form electronic monitoring takes once the decision has been made. Research on monitoring has addressed reactions to various characteristics of monitoring design (e.g., Ambrose & Alder, 2000), but we must delve deeper to identify and address the factors that affect these design choices. The psychology of the manager, as it pertains to the choices surrounding electronic monitoring, is fertile ground for future research. A second area of electronic monitoring research should focus on identifying how to break the downward spiral in which increased monitoring leads to mistrust, which in turn leads to still more monitoring and mistrust. In this regard, the trust repair literature may be particularly insightful (e.g., Dirks & De Cremer, 2011). For example, theorizing by Kim, Dirks, and Cooper (2009) identified three key areas to focus on in understanding trust repair: 1) the realization that trustees want to be trusted; 2) trustors' inclination to believe that greater trust is unwarranted; and 3) the efforts of trustors and trustees to resolve these discrepant beliefs. Finally, as a third area of focus, both researchers and practitioners should seek to understand the costs and benefits of "reach" when it comes to electronic monitoring. That is, what are the boundaries of electronic monitoring? Is it fair game to monitor employee behaviors outside of work? For example, is it acceptable to monitor social media outside the control of the organization and sanction employees for behavior deemed inappropriate? Organizations may have the ability to track or monitor employees outside of work, but should they? What are the appropriate policies governing employee behavior outside of work? For example, what principles and practicalities should guide policy surrounding the use of electronic resources and the monitoring of those resources? Is it, for example, within the rights of the organization to prohibit blogging, even outside of work? The boundaries between work and non-work will continue to blur, further complicating this issue. At the same time, technologies are becoming increasingly invasive and controlling, raising ethical concerns. Indeed, as we move toward greater technological advancement, Zweig and
Workplace Monitoring and Surveillance since “1984” • 233 Webster's (2002) question, "Where is the line between invasive and benign?", will continue to be relevant. The breadth and depth of an organization's "big brother" footprint will remain a societal-level debate on which legal experts, sociologists, political scientists, psychologists, and managers will continue to weigh in.
REFERENCES Aiello, J. R. (1993). Computer-based work monitoring: Electronic surveillance and its effects. Journal of Applied Social Psychology, 23, 499–507. Aiello, J. R., & Kolb, K. J. (1993). Electronic performance monitoring: A risk factor for workplace stress. In S. Sauter & G. P. Keita (Eds.), Job stress 2000: Emergent issues. Washington, DC: American Psychological Association. Aiello, J. R., & Kolb, K. J. (1995). Electronic performance monitoring and social context: Impact on productivity and stress. Journal of Applied Psychology, 80, 339–353. Aiello, J. R., & Shao, Y. (1993). Electronic performance monitoring and stress: The role of feedback and goal setting. In M. J. Smith & G. Salvendy (Eds.), Human-computer interaction: Applications and case studies. Amsterdam: Elsevier Science Publishers. Aiello, J. R., & Svec, C. M. (1993). Computer monitoring of work performance: extending social facilitation framework to electronic presence. Journal of Applied Social Psychology, 23, 537–448. Alder, G. S. (2001). Employee reactions to electronic performance monitoring: A consequence of organizational culture. Journal of High Technology Management Research, 12, 323–342. Alder, G. S., & Ambrose, M. L. (2005). An examination of the effect of computerized performance monitoring feedback on monitoring fairness, performance, and satisfaction. Organizational Behavior and Human Decision Processes, 97, 161–177. Alder, G. S., Ambrose, M. L., & Noel, T. W. (2006). The effect of formal advance notice and justification on internet monitoring fairness: Much ado about nothing? Journal of Leadership and Organizational Studies, 13, 93–107. Alder, G. S., Noel, T. W., & Ambrose, M. L. (2006). Clarifying the effects of internet monitoring on job attitudes: The mediating role of employee trust. Information & Management, 43, 894–903. Alder, G. S., Schminke, M., Noel, T. W., & Kuenzi, M. (2008). Employee reactions to internet monitoring: The moderating role of ethical orientation. Journal of Business Ethics, 80, 481–498. Alge, B. J. (2001). Effects of computer surveillance on perceptions of privacy and procedural justice. Journal of Applied Psychology, 86, 797–804. Alge, B. J., & Hansen, S. D. (2008). Information privacy in organizations. In C. Wankel (Ed.), 21st century management: A reference handbook (Vol. 2, pp. 380–390). Thousand Oaks, CA: Sage Publications. Alge, B. J., Anthony, E. A., Rees, J., & Kannan, K. (2010). Controlling A, while hoping for B: Deviance deterrence and public versus private deviance. In L. Neider & C. Schriesheim (Eds.), The Dark side of management (pp. 115–141). Charlotte, NC: Information Age Publishing.
234 • Bradley J. Alge and S. Duane Hansen Alge, B. J., Ballinger, G. A., & Green, S. G. (2004). Remote control: Predictors of electronic monitoring intensity and secrecy. Personnel Psychology, 57, 377–410. Alge, B. J., Ballinger, G. A., Tangirala, S., & Oakley, J. L. (2006). Information privacy in organizations: Empowering creative and extra-role performance. Journal of Applied Psychology, 91 221–232. Alge, B. J., Greenberg, J., & Brinsfield, C. T. (2006). An identity-based model of organizational monitoring: Integrating information privacy and organizational justice. In J. J. Martocchio (Ed.), Research in personnel and human resource management (Vol. 25, pp. 71–135). Oxford, UK: Elsevier. Allen, M. W., Coopman, S. J., Hart, J. L., & Walker, K. L. (2007). Workplace surveillance and managing privacy boundaries. Management Communication Quarterly, 21, 172–200. Ambrose, M. L., & Alder, G. S. (2000). Designing, implementing, and utilizing computerized performance monitoring: Enhancing organizational justice. In G. R. Ferris (Ed.), Research in Personnel and Human Resources Management (Vol. 18, pp. 187–219). Greenwich, CT: JAI Press. American Management Association (2005). Electronic monitoring and surveillance survey. New York: Author. Amick, B. C. III, & Smith, M. J. (1992). Stress, computer based monitoring and measurement systems: A conceptual overview. Applied Ergonomics, 1, 6–16 Ball, K. (2010). Workplace surveillance: An overview. Labor History, 51, 87–106. Barnard, C. I. (1938). The functions of the executive. Cambridge, MA: Harvard University Press. Bennett, R. J. (1998). Perceived powerlessness as a cause of employee deviance. In R. Griffin, A. O’Leary-Kelly, & J. Collins (Eds.), Dysfunctional behavior in organizations: Violent and dysfunctional behavior (pp. 221–239). Stamford, CT: JAI. Botan, C. (1996). Communication work and electronic surveillance: A model for predicting panoptic effects. Communication Monographs, 6, 293–313. Carayon, P. (1993). Effect of electronic performance monitoring on job design and worker stress: Review of the literature and conceptual model. Human Factors, 35, 385–395. Castanheira, F., & Chambel, M. J. (2010). Reducing burnout in call centers through HR practices. Human Resource Management, 49, 1047–1065. Chalykoff, J., & Kochan, T. A. (1989). Computer-aided monitoring: Its influence on employee satisfaction and turnover. Personnel Psychology, 42, 807–829. Chen, J. V., & Ross, W. H. (2005). The managerial decision to implement electronic surveillance at work: A research framework. The International Journal of Organizational Analysis, 13, 244–268. D’Arcy, J., Hovav, A., & Galletta, D. (2009). User awareness of security countermeasures and its impact on information systems misuse: A deterrence approach. Information Systems Research, 20, 79–98. DeConinck, J. B. (2010). The effect of organizational justice, perceived organizational support, and perceived supervisor support on marketing employees’ level of trust. Journal of Business Research, 63, 1349–1355. Dirks, K. T., & De Cremer, D. (2011). The repair of trust: Insights from organizational behavior and social psychology. In D. De Cremer, R. van Dick, & J. K. Murningham (Eds.), Social psychology and organizations (pp. 211–230). New York: Routledge/ Taylor & Francis Group.
Workplace Monitoring and Surveillance since “1984” • 235 Douthitt, E. A., & Aiello, J. R. (2001). The role of participation and control in the effects of computer monitoring on fairness perceptions, task satisfaction, and performance. Journal of Applied Psychology, 86, 867–874. Duane, A., & Finnegan, P. (2007). Dissent, protest, and transformative action: An exploratory study of staff reactions to electronic monitoring and control of e-mail systems in one company based in Ireland. Information Resource Management Journal, 20, 1–13. Earley, P. C. (1988). Computer-generated performance feedback in the magazine subscription industry. Organizational Behavior and Human Decision Processes, 41, 50–64. Eddy, E. R., Stone, D. L., & Stone-Romero, E. F. (1999). The effects of information management policies on reactions to human resource systems: An integration of privacy and procedural justice perspectives. Personnel Psychology, 52, 335–358. Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14, 57–74. Fenner, D. B., Lerch, F. J., & Kulik, C. T. (1993). The impact of computerized performance monitoring and prior performance knowledge on performance evaluation. Journal of Applied Social Psychology, 23, 573–601. Ferrin, D. L., Bligh, M. C., & Kohles, J. C. (2007). Can I trust you to trust me? A theory of trust, monitoring, and cooperation in interpersonal and intergroup relationships. Group & Organization Management, 32, 465–499. Firoz, N. M., Taghi, R., & Souckova, J. (2006). E-mails in the workplace: The electronic equivalent of ‘DNA’ Evidence. Journal of American Academy of Business, 8, 71–78. Friedman, T. (2005). Electric dreams: Computers in American culture. New York: NYU Press. See also, http://tedfriedman.com/electric-dreams/chapter-5-apples-1984/. Goomas, D. T., & Ludwig, T. D. (2009). Standardized goals and performance feedback aggregated beyond the work unit: Optimizing the use of engineered labor standards and electronic performance monitoring. Journal of Applied Social Psychology, 39, 2425–2437. Grant, R. A., Higgins, C. A., & Irving, R. H. (1988). Computerized performance monitors: Are they costing you customers? Sloan Management Review, 29, 39–45. Green, S. G., & Welsh, M. A. (1988). Cybernetics and dependence: Reframing the control concept. Academy of Management. The Academy of Management Review, 13, 287–301. Griffith, T. L. (1993). Teaching big brother to be a team player: Computer monitoring and quality. Academy of Management Executive, 7, 73–80. Henderson, R., Mahar, D., Saliba, A., Deane, F., & Napier, R. (1998). Electronic monitoring systems: An examination of physiological activity and task performance within a simulated keystroke security and electronic performance monitoring system. International Journal of Human-Computer Studies, 48, 143–157. Holton, C., & Fuller, R. (2008). Unintended consequences of electronic monitoring of instant messaging. IEEE Transactions on Professional Communication, 51, 381–395. Hovorka-Mead, A. D., Ross, W. H., Whipple, T., & Renchin, M. B. (2002). Watching the detectives: Seasonal student employee reactions to electronic monitoring with and without advance notification. Personnel Psychology, 55, 329–362. Irving, R. H., Higgins, C. A., & Safayeni, F. R. (1986). Computerized performance monitoring systems: Use and abuse. Communications of the ACM, 29, 794–801. Jensen, M. C., & Meckling, W. H. (1976). Theory of firm: Managerial behavior, agency costs and ownership structure. 
Journal of Financial Economics, 3, 305–360.
236 • Bradley J. Alge and S. Duane Hansen Kidwell, R. E., & Bennett, N. (1994a). Electronic surveillance as employee control: A procedural justice interpretation. Journal of High Technology Management Research, 5, 39–57. Kidwell, R. E., & Bennett, N. (1994b). Employee reactions to electronic control systems: The role of procedural fairness. Group and Organization Management, 19, 203–218. Kidwell, R. E., & Sprague, R. (2009). Electronic surveillance in the global workplace: Laws, ethics, research and practice. New Technology, Work, and Employment, 24, 194–208. Kim, P. H., Dirks, K. T., & Cooper, C. D. (2009). The repair of trust: A dynamic bilateral perspective and multilevel conceptualization. Academy of Management Review, 34, 401–422. Kulik, C. T., & Ambrose, M. L. (1993). Category-based and feature-based processes in performance appraisal: Integrating visual and computerized sources of performance data. Journal of Applied Psychology, 78, 821–830. Lawler, E. E. (1976). Control systems in organizations. In M. Dunnette (Ed.), Handbook of Industrial and Organizational Psychology (pp. 1247–1292). Chicago: Rand McNally. Lewicki, R. J., & Bunker, B. B. (1995). Trust in relationships: A model of development and decline. In B. B. Banker & J. Z. Rubin (Eds.), Conflict, cooperation, and justice (pp. 133–173). San Francisco, CA: Jossey-Bass. Lind, A. B., & van den Bos, K. (2002). When fairness works: Toward a general theory of uncertainty management. Research in Organizational Behavior, 24, 181–223. Lu, J. L (2005). Perceived job stress of women workers in diverse manufacturing industries. Human Factors and Ergonomics in Manufacturing, 15, 275–291. Luhmann, N. (1979). Trust and power. New York: Wiley Mayer, R. C., & Gavin, M. B. (2005). Trust in management and performance: Who minds the shop while the employees watch the boss? Academy of Management Journal, 48, 874–888. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709–734. McAfee, Inc. (2009). Unsecured economies: Protecting vital information. Available at: http://resources.mcafee.com/content/NAUnsecuredEconomiesReport. McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38, 24–59. McKnight, D. H., Cummings, L. L., & Chervany, N. L. (1998). Initial trust formation in new organizational relationships. Academy of Management Review, 23, 473–490. McNall, L. A., & Roch, S. G. (2009). A social exchange model of employee reactions to electronic performance monitoring. Human Performance, 22, 204–224. McNall, L. A., & Stanton, J. M. (2011). Private eyes are watching you: Reactions to location sensing technologies. Journal of Business & Psychology, 26, 299–309. Nebeker, D. M., & Tatum, B. C. (1993). The effects of computer monitoring, standards, and rewards on work performance, job satisfaction and stress. Journal of Applied Social Psychology, 23, 508–536. Paschal, J. L., Stone, D. L, & Stone-Romero, E. F. (2009). Effects of electronic mail policies on invasiveness and fairness. Journal of Managerial Psychology, 24, 502–525. Riedy, M. K., & Wen, J. H. (2010). Electronic surveillance of Internet access in the American workplace: Implications for management. Information & Communications Technology Law, 19, 87–99. Schleifer, L. M., Galinsky, T. L., Pan, C. S. (1996). 
11 The Impact of Technology on Employee Stress, Health, and Well-Being Ashley E. Nixon and Paul E. Spector
THE IMPACT OF TECHNOLOGY ON EMPLOYEE STRESS, HEALTH, AND WELL-BEING
Ever-advancing technological developments are altering the very nature of work life, impacting how, when, and where we work. Traditional work schedules, established during the industrial revolution, are beginning to be left behind in favor of more efficient, modern alternatives. In particular, information and communication technologies (ICTs) such as laptops, cellular phones, Blackberries®, smart phones, tablet computers, and PDAs have made it possible for employees to conduct their work tasks while remaining in touch with a central office from remote locations. Remote work has been examined under a variety of names, such as virtual work, telework, telecommuting, and technology-assisted supplemental work (TASW), to name a few. Although collaborative work conducted across different locations is not a recent development, the sophistication and speed at which it now occurs, as well as the extent to which it is practiced, far exceed what remote workers could have expected prior to modern technological advances (e.g., Brooks, 1976; Chandler, 1962). Along with these changes in the nature of work, we must update our understanding of how employees’ stress, health, and well-being are affected by remote work conducted through ICTs. The increasing use of ICTs, and the unique manner in which today’s workforce uses them, allows employees to be constantly connected to their work and employers to be constantly connected to their employees. Moreover, employees work in an array of settings, often physically and psychologically
isolated from their central offices. Our current understanding of work stressors and employee well-being must be expanded to account for these increasingly prevalent experiences. Furthermore, our theoretical understanding of how work experiences affect employee health, stress, and well-being must be refined to adequately capture and represent the current state of work. The goal of this chapter is to describe a revised theoretical model of the job stress process based on the control theory of job stress (Spector, 1998). To this end, we discuss prominent ways in which ICTs are used in traditional and nontraditional work arrangements, how these modern adaptations relate to employee control and well-being, and important directions for future research.
ICTS AND THE CHANGING NATURE OF WORK Telecommuting Telecommuting broadly refers to alternative work arrangements in which employees perform work tasks in locations other than a central office, facilitated by the use of electronic media to interact with co-workers and clients, for some portion of their work schedule (Bailey & Kurland, 2002; Baruch, 2001; Gajendran & Harrison, 2007; Feldman & Gainey, 1997). This definition is inclusive of several distributed work constructs, including telework and virtual work. Telecommuters may work in remote offices or telework centers, but they often work from home (Davis & Polonko, 2001; Hill et al., 1998) with the proportion of telecommuters working from home increasing as this form of distributed work increases (WorldatWork, 2009). Telecommuting arrangements can vary in the amount of time the telecommuters spend outside of the office setting. For instance, fewer than 10 percent of telecommuters reported working from a remote location on a full-time basis (Davenport, 2005), with most telecommuters engaged in part-time arrangements (Qvortrup, 1998; Standen, Daniels, & Lamond, 1999). Furthermore, telecommuting is often engaged in on a voluntary basis, and is often considered a benefit or reward for employees (WorldatWork, 2011). Employers who utilize telecommuting arrangements report that they only offer the option of telecommuting to just over one-third of their employees (WorldatWork, 2011). Although involuntary telecommuting is expected to increase along with voluntary telecommuting, this proportion of the telecommuting population is
240 • Ashley E. Nixon and Paul E. Spector expected to remain minimal compared to voluntary telecommuting (Mokhtarian & Salomon, 1996). The use of telecommuting arrangements has previously grown rapidly throughout the United States and across the globe (Davis & Polonko, 2001). In the United States alone, an estimated 33.8 million employees telecommuted for some portion of their scheduled work time in 2008, although this number fell to 26.2 million in 2010, potentially due to increased unemployment rates and job insecurity during this time frame (WorldatWork, 2011). Worldwide, senior-level executives report that their staff was engaged in telecommuting in approximately two-thirds of their global offices (AT&T, 2004). There are several purported benefits to telecommuting that contribute to its widespread use. First, telecommuting allows for organizations to reduce costs associated with real estate and office overhead expenses (e.g. Apgar, 1998; AT&T, 1997; Dannhauser, 1999). Second, providing telecommuting arrangements can assist organizations with the recruitment and retention of high quality employees (Davenport & Pearlson, 1998). Some employees might prefer the flexibility of telework and see it as a benefit of working for a particular company. Furthermore, employees able to telecommute would not have to relocate to take a position with an out-of-town company. Third, telecommuting can serve to facilitate organizations’ ability to comply with government regulations, legislation, and memorandums aimed at improving employee opportunities (e.g., the Americans With Disabilities Act of 1990; U.S. Equal Employment Opportunity Commission, 2005), transportation difficulties (e.g., Department of Transportation and Related Agencies Appropriations Act of 2000), and work–family balance (e.g., U.S. Office of Personnel Management, 2005). Finally, telecommuting has been touted as a way to enable employees to save commuting costs and adjust their schedule to better meet their non-work and family needs (HR Focus, 2002; Nickson & Siddons, 2004). Technology-Assisted Supplemental Work Technology-assisted supplemental work (TASW) has been defined “as the performance of role-prescribed job tasks by full-time employees with the aid of advanced information and telecommunications technology” (Fenner & Renn, 2004, p. 179). This can occur away from the central office, during or after work hours, and even on vacation. Like telecommuting, TASW represents a form of distributed work conducted through electronic means. However, while telecommuting refers to employees spending a certain
amount of their scheduled work time away from a central office, TASW more specifically targets how employees discretionarily extend their workday at home after hours. The TASW literature is an extension of Venkatesh and Vitalari’s (1992) research on supplemental work, specifying the role that technology will have in altering how and when supplemental work will occur, as well as the type of supplemental work that can be completed away from work. TASW is conducted through advanced digital and information technologies, such as laptops, cellular phones, Blackberries®, smart phones, tablet computers, and PDAs. Accompanying these technologies is the expectation that employees will work longer hours, whenever and wherever they are required. The growing adoption of TASW is associated with changes in organizational norms and climate, requiring the constant connectivity of employees to achieve organizational goals (Fenner & Renn, 2010). Indeed, 10.3 million employees in the U.S. reported performing around seven hours per week of additional job-related work at home, without a formal arrangement such as telecommuting (United States Department of Labor, 2005). The use of TASW is rapidly increasing as these technologies become more accessible and ubiquitous. For example, a study with a large nationally representative sample found that 62 percent of employees reported being connected while away from a central office by cellular phones, laptops, Blackberries®, or similar devices, while 45 percent reported using these devices to work at home during the evenings or on the weekends (Madden & Jones, 2008). Furthermore, 49 percent of this sample reported that TASW contributed to their overall stress and work–family conflict. There are several important distinctions between telecommuting and TASW that have implications for how employees will be affected by these forms of distributed work. First, telecommuting is often an arrangement that employees in particular jobs are offered, often voluntarily, whereas TASW is becoming an expectation of all employees through changing psychological climates. Second, telecommuting refers to a portion of an employee’s scheduled work time that is conducted in a remote location, while TASW refers to work that is conducted outside of an employee’s scheduled work hours. Whereas telecommuting can offer the employee more flexibility in the place and time of work, which can reduce some of the burden of work, TASW expands working time and can increase that burden. Furthermore, telecommuting can in many cases increase the employee’s autonomy by allowing employees more choice of working place and time, and taking them out from under a supervisor’s direct vision. Of course,
242 • Ashley E. Nixon and Paul E. Spector there are systems that do the opposite by monitoring employee actions, such as software that keeps track of time spent logged into a computer system, or supervisors who listen in on telephone conversations with customers. In the remainder of this chapter we will discuss features of technology that can serve to either increase or decrease the stressful nature of work and how that might affect health and well-being. Much of this concerns the extent to which technology can enhance or inhibit employee autonomy and control, which are vital elements in the stress process.
THE CONTROL MODEL OF JOB STRESS Researchers have long understood the centrality of control in the job stress process that links environmental conditions and events to physical health and psychological well-being (e.g. Evans & Carreré, 1991; Ganster & Fusilier, 1989). The control theory of stress was proposed by Spector (1998) to depict a more complex role for control in the job stress process, as the more simplified moderating role of control in earlier models such as Karasek’s (1979) demand/control model had received inconsistent support (de Lange et al., 2003). The job stress process that underlies Spector’s model, and indeed most theoretical frameworks, proposes a stimulus-response process in which job stressors lead to strains. The control theory posited that control has effects in several stages in the job stress process. The lack of control itself can act as a job stressor that leads to employee strain directly. Additionally, control may have a moderating effect on employee health and well-being as it could affect how an employee perceives the work environment, thus operating to mitigate or exacerbate an employee’s strain response to a stressor (Spector, 1998). This model is depicted in Figure 11.1. The Job Stressor-Strain Process Job stressors were initially conceptualized as conditions or situations at work that require an adaptive response on the part of the employee (Beehr & Newman, 1978), although this definition is unclear as to what an adaptive versus non-adaptive response would be, nor does it address how to differentiate between stressors at work and any other situations at work. The control model of the job stress process addresses these concerns by narrowing this definition to include only those conditions or situations at work that elicit negative emotional responses from employees (Spector,
FIGURE 11.1 The control model of the job stress process. (The figure links environmental stressors to perceived stressors, emotional responses, and strains (behavioral, including cyberdeviancy; physical; and psychological), with environmental control, the nature of connectivity, and perceived control shaping these links, and with locus of control, self-efficacy, boundary management, and affective disposition influencing perceptions and reactions.)
1998). Common emotional reactions to perceived job stressors can include anger, frustration, and anxiety. The positive relationship between perceived job stressors and negative emotional reactions has been well supported empirically (e.g., Jackson & Schuler, 1985; Jex & Beehr, 1991; Spector & Jex, 1998), although the type and intensity of the emotion felt may vary by individual (Keenan & Newton, 1985). An important element of this model is the distinction that is made between objective environmental and perceived job stressors (Parasuraman & Alutto, 1981). Not all employees will perceive the same environmental conditions as job stressors, a process that is impacted by employees’ appraisal of environmental conditions (Lazarus, 1991). The appraisal process requires that the employee must perceive the environmental condition, as an unperceived environmental condition is not likely to invoke a negative emotional reaction. Furthermore, during the appraisal process, the perceived environmental conditions must also be interpreted as stressful by the employee. Thus, not all environmental conditions that could be stressors will be stressors for all employees, making objective measures of environmental stressors more difficult to assess and interpret than perceived job stressors. For example, some employees might enjoy
244 • Ashley E. Nixon and Paul E. Spector being able to work at home because they appreciate the increased autonomy and freedom from direct supervision, whereas others might find working at home to be an unwanted intrusion into their personal lives and have difficulties staying on task without having a supervisor to provide direction and structure. The distinction between environmental and perceived stressors is apparent in the research literature, with the convergence between incumbent perceived stressors and objective or even others’ reports of job stressors varying widely based on the job stressor assessed. This would be expected based on the subjective versus factual nature of specific job stressors. For example, for workload, which is a relatively factual and potentially directly observable aspect of work, incumbent reports of quantitative workload correlated .59 with an objective measure of work quantity (Kirmeyer, 1988), whereas for role ambiguity (i.e., uncertainty about what one’s function and purpose is in the organization), which is an abstract and mostly subjective interpretation of the environment, incumbent and supervisor ratings correlated only .08 (Spector et al., 1988). However, overall there has been at least some degree of convergence identified between incumbent perceptions and other measures of stressors, suggesting that perceptions are linked to objective experience. Negative emotional reactions are thought to mediate the relationship between perceived job stressors and employees’ reactions to these job stressors, referred to as job strains. Job strains can vary in timeframe, including immediate short-term strains that happen with little time lag, intermediate-term strains that might take a few hours or days, and longterm strains that can unfold over months and even years. Furthermore, strain responses have been conceptualized as three broad categories of reactions, specifically psychological, behavioral, and physical (Jex & Beehr, 1991). Psychological strains refer to emotional responses to stressors, including immediate short-term strain reactions such as anxiety or frustration, as well as intermediate-term attitudinal responses to stress, such as job dissatisfaction and lower organizational commitment. Due to the immediate nature of emotional responses to a job stressor, this model posits that these strains occur prior to and mediate the relationship between job stressors with all other intermediate and long-term strains, including attitudinal reactions in addition to behavioral and physical strains. Empirical support has been found for the relationship between negative emotions and long-term psychological, behavioral and physical strains (Spector & Jex, 1998), as well as for the mediating role of negative emotions on the job stressor–strain relationship (Fox & Spector, 1999; Yang & Diefendorff, 2009).
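To make the mediational structure just described concrete, the following is a minimal, purely illustrative sketch rather than an analysis from this chapter or the studies cited: it simulates data in which a perceived job stressor influences strain only through negative emotion, then recovers the total, indirect, and direct effects with ordinary least squares. The variable names and effect sizes are hypothetical.

```python
# Illustrative simulation of the mediation hypothesis: perceived job stressors
# relate to strains through negative emotional reactions. All data are synthetic;
# variable names and coefficients are placeholders, not estimates from the
# studies cited in the chapter.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

stressor = rng.normal(size=n)                      # perceived job stressor
neg_emotion = 0.6 * stressor + rng.normal(size=n)  # emotion responds to the stressor
strain = 0.5 * neg_emotion + rng.normal(size=n)    # strain responds to emotion only

def slopes(y, predictors):
    """OLS slope estimates for y regressed on the given predictors (intercept included)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

c_total = slopes(strain, [stressor])[0]                 # total effect of stressor on strain
a = slopes(neg_emotion, [stressor])[0]                  # stressor -> emotion path
b, c_direct = slopes(strain, [neg_emotion, stressor])   # emotion -> strain path, and direct path

print(f"total effect:   {c_total:.2f}")
print(f"indirect (a*b): {a * b:.2f}")
print(f"direct effect:  {c_direct:.2f}  (close to zero under full mediation)")
```

In studies of the kind cited above, the same logic is typically applied to measured scales, usually with confidence intervals around the indirect effect, rather than to simulated values.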
Behavioral strains are actions or instances of behavior elicited in response to a job stressor. These strains can include behaviors that are an immediate response to a stressor, such as an act of aggression or violence. Behavioral strains can also be long-term in nature, such as leaving the organization for another position. These behavioral strains represent coping behaviors on the employee’s part, which can be emotion-focused or problem-focused in nature (Lazarus & Folkman, 1984). Emotion-focused coping refers to behaviors that reduce the emotional reaction to a stressor without directly addressing the stressor, such as emotional venting to a colleague or withdrawing from work by being absent. Technology has made it possible to complain to a colleague in real time with instant messaging, text messages, and even venting to entire Facebook communities. Likewise, employees may withdraw from work while being present, through online communications or “surfing the web.” Problem-focused coping refers to behaviors that are intended to reduce the job stressor. These behaviors may include seeking alternative employment or working with other organizational members to try to resolve stressful situations. Technological advances have improved employees’ ability to obtain new employment, and organizations have the opportunity to use electronic means to help employees resolve situations they find stressful. Behavioral strains can also represent counterproductive work behaviors (CWB; Spector & Fox, 2005) or organizational citizenship behaviors (OCB; Organ & Konovsky, 1989). CWBs are acts at work that interfere with organizational functioning and efficiency, such as gambling online while at work, sabotaging another employee by locking his or her computer, or sabotaging the organization by downloading a computer virus. OCBs are extra-role behaviors that contribute to organizational goals and objectives, including behaviors such as helping a co-worker with new software or working late to complete work. Emotion-focused coping behavior tends to be counterproductive to organizational goals, although this is not always the case. For instance, withdrawing from work by navigating to non-work-related websites is generally thought of as counterproductive, as employees are not contributing to organizational functioning. However, employees may use the brief time not working to rest and recover, returning to their work more capable of performing their tasks than if they had not taken a brief internet break. Problem-focused coping often aligns with OCBs, as employees attempting to resolve stressful situations can improve overall organizational functioning. However, employees leaving the organization, a problem-focused coping strategy for the individual, can have a hindering effect on productivity for the organization, thus functioning as
246 • Ashley E. Nixon and Paul E. Spector a CWB. The relationships between job stressors with OCB/CWBs, as well as the mediating role of negative emotions, have received empirical support (Bruk-Lee & Spector, 2006; Spector & Fox, 2002; Fox, Spector, & Miles, 2001; Yang & Diefendorff, 2009). Physical strains can include a large number of both short-term and longterm physiological responses (Frese & Zapf, 1988). Short-term strains are physiological reactions, such as increases in blood pressure or suppressed immune system functioning (Cohen & Herbert, 1996; O’Leary, 1990), that result from the experience of a job stressor immediately following exposure to the stressor (Nixon et al., 2011). Intermediate-term physical strains can include noticeable symptoms such as headaches, gastrointestinal problems, or difficulty sleeping. Long-term strains refer to physical illnesses such as cardiovascular disease or irritable bowel syndrome (Landsbergis et al., 2003). The Role of Control Control refers to an employee’s ability to select their actions from two or more potential options (Ganster & Fusilier, 1989). The control theory of the job stress process particularly focuses on behavioral control, as opposed to cognitive control (Averill, 1973), specifying employees’ ability to control work conditions that directly relate to perceived job stressors as critical to understanding the job stress process (Spector, 1998). As with job stressors, it is important to distinguish between environmental and perceived control. Environmental control is the autonomy employees are technically given by their work situation or supervisors, whereas perceived control is the amount of choice the employees believe that they have. Technology can increase control by enabling employees to accomplish tasks from almost any location at any time, but an employee who is uncertain about how to use that technology might not perceive himself or herself to have that control. Perceived control is expected to moderate the relationship between environmental stressors and perceived stressors, such that, when an employee has control over an environmental condition that could act as a stressor, it is less likely to be perceived as a stressor by that employee. For example, if an employee is assigned a task on a tight deadline, having access to needed information through technology might enhance feelings of control over the workload and thus reduce the extent to which the deadline is perceived as a stressor. Likewise, when an employee does not have control over an environmental condition that could act as a stressor, it is
more likely to be perceived as a stressor by that employee (see Figure 11.1). For example, not having access to required information to complete a task would increase the extent to which a deadline is perceived as a stressor. It is important to note that environmental control, or the actual control that an employee may have over their work and work tasks, and perceived control are expected to be imperfectly related to one another. As a further caveat, the perceived control must be specifically over the job stressor to effectively moderate the relationship between environmental and perceived stressors. For example, the use of telecommuting, representing a form of scheduling control, may help reduce conflict arising from employees’ work–family interface, but it will probably not help reduce role ambiguity. Indirect evidence for the moderating role of perceived control on the environmental–perceived job stressor relationship can be drawn from research testing the demand/control model. Specifically, examinations of this model that have drawn on subjective appraisals of job stressors tend to find less support for the model than research examining objective measures, which are intended to capture environmental stressors (Landsbergis et al., 1995; Wall et al., 1996). The moderating role of control between the objective and perceived job stressor measures would explain these findings. Despite this indirect evidence, research directly examining the moderating role perceived control plays in the job stress process is scant (Fox & Spector, 2006). Even though control is most frequently thought of as a mechanism through which employees can minimize stressors at work, in some cases control also functions as a job stressor (Houston, 1972; Frankenhaeuser & Lundberg, 1982). For instance, the added responsibility that often accompanies control can be perceived as a job stressor by many employees. For example, when employees initiate a telecommuting arrangement, they increase their control over work by gaining flexibility in where and, quite often, when they work. However, these employees also acquire the responsibility of effectively managing their work time and pace, without the formal parameters established in an office setting. Additionally, these telecommuters may experience more work–life conflict because their work is now intruding into their home life, which will be further discussed later. Furthermore, when employees experience a problem over which they have some control (e.g., a computer malfunction), they will often make attempts to overcome the problem. If these attempts are unsuccessful, employees must then deal with the initial job stressor, the effort required to use their control, as well as the failure of those efforts, increasing their perceived
stressors and negative emotional responses. This problem might be reduced in an office setting in which nearby colleagues might be more readily available to assist in fixing the problem, or where one might access another machine while the malfunctioning computer is repaired by a technician. In scenarios where control is ineffective, control would seemingly increase employee strains, rather than function as a buffer of strains. Beyond its role in moderating the relationship between environmental and perceived stressors or contributing directly to employee strains, control further helps explain how employees will react to perceived stressors. Specifically, control is expected to impact employees’ choice between emotion-focused and problem-focused coping behavior, such that employees who perceive high control are more likely to engage in problem-focused coping (e.g., fixing an equipment malfunction), with the intent of reducing strains through actually reducing or removing the job stressor. When employees feel they have little control, they are expected to engage in more emotion-focused coping, which will not productively alter their environment and can have counterproductive outcomes, such as withdrawal or acts of aggression at work (Spector, 1997). Related research evidence broadly supports these propositions, in that individuals reporting lower control were more likely to respond to stressors with unproductive or counterproductive behaviors, such as acts of aggression, sabotage, or complaining (e.g., Hurrell & Murphy, 1991; Perlow & Latham, 1993; Storms & Spector, 1987); although the majority of research in this area examines the related dispositional construct of locus of control, which is discussed next, rather than situation-based perceived control.
The Role of Individual Differences
There are two individual difference variables that are important for understanding perceived control: locus of control and self-efficacy. As Spector (1998) points out, employees’ locus of control and self-efficacy impact perceptions of control independent of the actual environmental control. Locus of control refers to people’s tendency to believe that they have control over their rewards or punishments (Rotter, 1954). Individuals with external locus of control, or externals, tend to believe that rewards and punishments are more the consequence of fate or the actions of powerful others than of their own actions; thus they are likely to perceive limited control across a variety of situations. Conversely, individuals with internal locus of control, or internals, tend to believe that they can directly impact their rewards and punishments through their own actions, leading to
stronger perceptions of control. Thus, when individuals are presented with situations in which they have equivalent environmental control, internals are more likely to perceive control than externals. Empirical evidence can be found to support the relationship between perceived control and locus of control, particularly research examining autonomy, a form of workplace control. Researchers have identified that internals report greater autonomy than externals (Spector, 1988; Spector & O’Connell, 1994) and have more positive attitudes about computing technology (Coovert & Goldstein, 1980), which may have important implications for strain responses to telecommuting and TASW. Self-efficacy refers to the belief that one is capable of effective performance in specific domains (Bandura, 1977), such that individuals may have high self-efficacy for some of their work tasks (e.g., data analysis), but low self-efficacy for other tasks (e.g., public speaking). In this way, self-efficacy is more specific than the broad tendencies captured by locus of control, yet it is theoretically expected to affect individuals’ perceptions of control in a similar manner. The belief that one cannot effectively perform a task essentially aligns with the belief that one cannot control one’s own task performance; it is a perception of low control in this particular domain of work. Consequently, high and low self-efficacy function similarly to internal and external locus of control, respectively. As with internal locus of control, high self-efficacy has been found to be associated with higher reports of autonomy (Cohrs, Abele, & Dette, 2006). Within the control model of the job stress process, both internal locus of control and high self-efficacy serve to increase employees’ perceived control, thus reducing the likelihood that an environmental condition will be perceived as a stressor by the employees. Conversely, employees with external locus of control and low self-efficacy will feel less capable of successfully handling environmental stressors and therefore react to these environmental conditions with negative emotions and strains. Beyond individual differences in locus of control and self-efficacy, the entire job stress process will be affected by individual differences in affective dispositions such as negative affectivity, which is the tendency to experience negative emotions across situations and time. For instance, individuals with dispositions toward negative affectivity in general (Watson & Clark, 1984) or trait anxiety (the tendency to experience anxiety; Spielberger, 1972) in particular may have more negative emotional reactions to environmental conditions than individuals who are low on these traits. According to the control model of the job stress process, the heightened negative emotional reactivity of employees with higher negative affectivity and trait anxiety will
lead to increased intermediate and long-term strains. These types of affective dispositions function independently of control in that they do not affect employees’ perceptions of control; rather, affective dispositions influence employees’ interpretations of and reactions to perceived job stressors. While negative affectivity and trait anxiety have been found to be particularly relevant to the job stress process (Spector et al., 2000), many affective disposition constructs could play a role in how stressors are perceived and responded to.
Integrating ICTs into the Control Model of the Job Stress Process
Technological advancements have affected several aspects of the control model of the job stress process. We will discuss three of these changes that have important implications: how ICTs have impacted the distinction between environmental and perceived control, how ICTs have opened the door to new forms of CWBs, and how additional individual differences (i.e., boundary management) must be taken into account when considering how modern work is conducted. First, we posit that ICT use and the nature of connectivity directly impact environmental control, as the integration of ICT use in organizations has simultaneously augmented (i.e., telecommuting) and restricted (i.e., TASW) employees’ control over work-related conditions, particularly with regard to scheduling and work–life balance. Telecommuting, as a form of distributed scheduled work, typically represents increased control for an employee over scheduling by providing flexibility in the specific places and times an employee can work. In some situations, however, the employee’s use of technology to accomplish tasks is closely monitored, which can decrease control. In terms of work–family issues, a meta-analytic examination of 19 studies suggested that telecommuting is negatively related to work–family conflict (Gajendran & Harrison, 2007); in other words, individuals who telecommute at home experience fewer problems coordinating the demands of work and family. In fact, telecommuting has a beneficial bidirectional effect, such that it seems to help employees reduce work demands interfering with family demands as well as family demands interfering with work demands (Gajendran & Harrison, 2007). Conversely, the unscheduled supplemental work conducted through TASW is likely to reduce employees’ control over scheduling and total working hours, as well as reduce employees’ control over restricting
work from interfering with family obligations. Indeed, researchers have consistently found positive relationships between TASW and subjective stress (Duxbury, Higgins, & Thomas, 1996) as well as incumbent and other reports of work–family or work–life conflict (Boswell & Olson-Buchanan, 2007; Duxbury, Higgins, & Thomas, 1996; Fenner & Renn, 2010). Thus, this research seems to support the proposition that the nature of ICT connectivity will directly affect employees’ perceived control over how and when they do work. In regard to how technology has broadened the scope of employee strains, the increasing efficiency and accessibility of ICTs has expanded the domain of counterproductive behavior to include cyberdeviant behaviors that can be enacted remotely or at work. Cyberdeviant behavior, or cyberdeviancy, is generally conceptualized as forms of ICT misuse, which can be further broken down into behaviors that range from withdrawal from work, such as spending work time surfing the internet instead of performing work tasks (e.g., cyberloafing; Lim, 2002), to interpersonal behaviors, such as sexual harassment or fraud (e.g., cyberaggression; Weatherbee & Kelloway, 2006). Cyberloafing, along with other terms used to describe production deviance conducted through ICTs (e.g., cyberslacking; Marron, 2000; cyberbludging; Mills et al., 2001), has been further categorized based on the types of behavior engaged in and the potential harm to the organization (Mastrangelo, Everton, & Jolton, 2006). Counterproductive computer use refers to behaviors that can expose an organization to liability, such as downloading illegal software or distributing pornography. Non-productive computer use refers to behaviors that do not expose organizations to liability, but lead to a loss of productivity, as they are not work related, such as sending personal e-mails. Additionally, cyberloafing has been associated with one’s perceived ability to hide cyberloafing, which could have important implications for teleworkers and those engaged in TASW (Askew et al., 2011). While cyberdeviance is a relatively new area of research, there is some initial evidence that the antecedents and outcomes of these behaviors align with those of similar, non-ICT-based CWBs (Blau, Yang, & Ward-Cook, 2006; Lim, 2002; Weatherbee, 2007). Therefore, cyberdeviancy, like CWBs, seems likely to be associated with job stressors (e.g., Bruk-Lee & Spector, 2006; Spector & Fox, 2002; Fox, Spector, & Miles, 2001; Yang & Diefendorff, 2009). However, this contention still needs to be assessed empirically. Finally, boundary management is an individual difference variable that is related to ICT use and must be considered in the job stress model. Boundary management broadly refers to how individuals navigate the
252 • Ashley E. Nixon and Paul E. Spector multiple roles they hold in various domains of life. Boundary theory posits that individuals occupy these various roles and that these roles can be separated by physical, temporal, or psychological boundaries (Ashforth, Kreiner, & Fugate, 2000; Clark, 2000, Nippert-Eng, 1996). Boundaries can vary in the amount of flexibility and permeability that exists between roles (Ashforth, Kreiner, & Fugate, 2000; Clark, 2000). When an employee engages in telecommuting so that they can determine when and where they work, the employee is said to have a flexible work and non-work boundary. Alternatively, when an employee responds to work e-mails while at home with their family, the work and non-work boundary is permeable. Permeable boundaries increase the likelihood of interference between work and non-work roles (Ashforth, Kreiner, & Fugate, 2000; Kossek, Lautsch, & Eaton, 2006), which leads to role stress through work–life conflict. When individuals’ boundaries are inflexible and impermeable, they tend to segment their work and non-work roles, while individuals whose boundaries are flexible and permeable tend to integrate their work and nonwork roles. ICTs allow for greater integration of work and non-work roles, allowing the boundaries between these roles to blur (Batt & Valcour, 2003; Chesley, Moen & Shore, 2003; Fenner & Renn, 2004; Valcour & Hunter, 2005). Thus, individuals’ boundary management dispositions can impact how use of ICTs will impact their perceived control over job stressors, particularly work–family and work–life stressors. Currently, a limited amount of research has examined the role boundary management may play in understanding how ICT use impacts work–family balance. In one study, which examined integrating and segmentation boundary management approaches as a continuum, Kossek, Lautsch, and Eaton (2006) found that telecommuters who tend to integrate their home and work boundaries had greater family to work conflict. Boundary management also affects how TASW relates to work–family conflict. Particularly in the TASW literature, boundary creation around the use of ICTs has recently become an area of investigation, moving more specifically into examining how TASW may relate to work–family conflict. While this is a new area of investigation, researchers have found that employees who integrate work and nonwork or family roles do not establish boundaries regarding the use of ICTs at home while not working, and experience more work and family or non-work conflict (Olson-Buchanan & Boswell, 2006; Park, 2009). Likewise, employees who segregate their work and family or non-work lives are more likely to create boundaries about ICT usage away from work and experience less work and non-work conflict.
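As a rough illustration of the moderation logic described in this section (again, a sketch on synthetic data rather than an analysis from the studies cited), the snippet below generates data in which after-hours ICT use is more strongly tied to work–family conflict for employees who integrate, rather than segment, their work and non-work roles, and then recovers that pattern with a moderated regression. The variable names (ict_use, integration, wfc) are hypothetical.

```python
# Synthetic moderated-regression sketch: boundary integration strengthens the
# link between after-hours ICT use and work-family conflict. Names and effect
# sizes are illustrative assumptions, not findings from the cited research.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

ict_use = rng.normal(size=n)      # after-hours ICT use (standardized)
integration = rng.normal(size=n)  # boundary integration (high) vs. segmentation (low)

# Conflict rises with ICT use, and the rise is steeper for integrators.
wfc = 0.3 * ict_use + 0.2 * integration + 0.4 * ict_use * integration + rng.normal(size=n)

X = np.column_stack([np.ones(n), ict_use, integration, ict_use * integration])
b0, b_ict, b_int, b_interaction = np.linalg.lstsq(X, wfc, rcond=None)[0]

print(f"interaction coefficient: {b_interaction:.2f}")  # a nonzero value indicates moderation

# Simple slopes: effect of ICT use for segmenters (-1 SD) and integrators (+1 SD)
for z in (-1, 1):
    print(f"slope of ICT use at integration = {z:+d} SD: {b_ict + b_interaction * z:.2f}")
```

The same interaction-term logic underlies the survey-based moderation tests discussed above, with measured boundary-management and conflict scales in place of the simulated variables.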
IMPLICATIONS AND FUTURE RESEARCH DIRECTIONS
In this chapter, we have expanded the control model of the job stress process to include constructs that have become relevant to understanding employee well-being and health as technology continues to impact employees’ daily work life. Specifically, we identified two prominent ways in which technology impacts the daily work life of employees: allowing employees increased flexibility in their work schedule and location (i.e., telecommuting) and extending employees’ workday through expectations of constant connectivity (i.e., TASW). Additionally, we have extended the control model of the job stress process to include individual differences (i.e., boundary management) and strains (i.e., cyberdeviancy) that are particularly relevant to technological advancements. However, while prior research provides support for the proposed role of these variables, their inclusion in the job stress model requires further investigation. It is critically important that we, as occupational researchers, fully understand how the effects of these technological advancements on employee well-being may differ from those of traditional workplace stressors, as these technology-supported alternative work arrangements are becoming increasingly ubiquitous globally. As we noted earlier, technology is allowing increased flexibility in how, when, and where people work, and it is enabling employees to be constantly connected to their work. In some workplaces, virtually all employees have personal cell phones that can be used for work-related communication. The spread of technology is occurring not only in the developed world, but in the developing world as well, as access to cell phones and the internet continues to expand. As the world becomes increasingly connected, questions about the effects this will have on health and well-being become a pressing concern. Clearly, we need to better understand the conditions under which technology might have positive versus negative effects. Control is a particularly important element that has the potential to both increase and decrease stressors at work. In the future, researchers must further examine distinctions between telecommuting and TASW with regard to job stress and strain, beyond the impact on work–family balance, as employees may experience a multitude of other stressors by engaging in telework and TASW that have previously been neglected. For example, it would be important to determine the impact of these technology-enabled practices on workloads and the impact
those workloads might have on strains. Likewise, boundary management should be further examined to see if it has a direct influence on perceived control, and, if so, whether this influence buffers employees from a variety of job stressors, as this construct has been examined primarily within the work–family conflict domain. Finally, cyberdeviancy represents a broad field of novel employee misbehavior, and this area requires further investigation to establish a more thorough understanding of the types of behaviors that fall under the umbrella of cyberdeviancy, as well as to better understand the antecedents and consequences of these deviant behaviors. The model presented in this chapter is not exhaustive; rather, it is merely a snapshot of what is currently proposed and substantiated with regard to research concerning ICT-supported alternative work arrangements. In the future, there are numerous opportunities for investigating other potential stressors that have not received much attention in the occupational stress literature, including the social effects of distributed work. For example, research has recently emerged connecting social isolation theory to distributed work (Marshall, Michaels, & Mulki, 2007), which offers a promising area for further research. Likewise, understanding how cyberdeviancy conducted in the form of cyberbullying or cyberincivility impacts other employees engaged in teleworking arrangements could provide valuable information about job stressors related to telework aggression spirals, or incidents where reciprocity norms encourage employees to respond to acts of aggression with further acts of aggression, leading to escalated aggression over time. An additional area of potential future research could include investigating how technology, through telecommuting and TASW, may impact an individual’s coping behavior with regard to technology-based job stressors. Technology has the potential to enhance both emotion-focused and problem-focused coping. On the emotion-focused side, technology can facilitate several forms of behavior that may be beneficial to employees, but often at the expense of the organization. First, enhanced communication technology can allow an employee to seek emotional support by sharing feelings and venting with family and friends. This could occur through e-mail, phone calls, and texting during work while the employee is experiencing emotional strain. Occasional use of technology for this purpose might be productive if it enables an employee to better cope with stressful conditions and maintain productivity, but if overdone it could take away from time spent on job tasks and adversely affect job performance. Second, technology might provide additional avenues for work withdrawal. The increased autonomy from telecommuting might allow an employee
The Impact of Technology on Employees • 255 to more easily avoid working than when at an employer’s facility with other employees and supervisors. Furthermore, an employee could use the technology for nonwork activities in order to escape the demands of the job, such as playing games or web surfing. Finally, an employee might cope with emotional strain by engaging in cyberdeviance as a means of venting anger and other negative feelings. Problem-focused coping can be facilitated by technology. Enhanced communication can provide needed information to aid in problem solving, and allow better coordination of activities among work colleagues. Technology can increase efficiency, for example, by allowing one to find needed information on the web rather than having to physically go to a library, or by enabling a video conference rather than having a physical meeting that might require travel. Of course, an employee’s decision to use technology for problem-focused rather than emotion-focused coping is largely affected by perceived control of the situation and technology. To remain problem-focused employees need more than access to technology, but also the appropriate skill and training to use it effectively. Beyond that, they need high levels of technology self-efficacy so they are not reluctant to choose problem-focused approaches in which technology is used to facilitate performance rather than escape it. As technology continues to rapidly advance, the global workforce will continue to experience change, not just in how we conduct work at the office, but in the very nature of our daily work life. In the future, we will see even greater shifts toward telecommuting and TASW. It is imperative that we understand the job stress process for employees engaged in work supported by ICTs. Through our research efforts to understand how technology relates to employee health and stress, we can offer leadership and guidance for organizations, providing the knowledge necessary to achieve a healthy and productive workforce.
REFERENCES Americans With Disabilities Act of 1990, Pub. L. No. 101–336, 104 Stat. 327 (1991). Apgar, M. (1998). The alternative workplace: Changing where and how people work. Harvard Business Review, 76, 121–136. Ashforth, B., Kreiner, G., & Fugate, M. (2000). All in a day’s work: Boundaries and micro role transitions. Academy of Management Review, 25, 472–491. Askew, K. L., Coovert, M. D., Vandello, J. A., Taing, M. U., & Bauer, J. A. (2011). Work environment factors predict cyberloafing. Paper presented at the 23rd Annual Convention for the American Psychological Society, Washington, DC.
256 • Ashley E. Nixon and Paul E. Spector AT&T. (1997). The 1997 EH&S report. Retrieved November 28, 2010, from www.att.com/ ehs/annual_reports/ehs_report/report97/common/l.html AT&T. (2004). The remote working revolution. Retrieved November 28, 2010, from www. business.att.com/resource.jsp?&rtype=Whitepaper&rvalue=the_remote_working_ revolution Averill, J. (1973). Personal control over aversive stimuli and its relationship to stress. Psychological Bulletin, 80, 286–303. Bailey, D. E., & Kurland, N. B. (2002). A review of telework research: Findings, new directions and lessons for the study of modern work. Journal of Organizational Behavior, 23, 383–400. Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215. Baruch, Y. (2001). The status of research on teleworking and an agenda for future research. International Journal of Management Reviews, 3, 113–129. Batt, R., & Valcour, P. M. (2003). Human resource practices as predictors of work–family outcomes and employee turnover. Industrial Relations, 42, 189–220. Beehr, T. A., & Newman, J. E. (1978). Job stress, employee health, and organizational effectiveness: A facet analysis, model and literature review. Personnel Psychology, 31, 665–699. Blau, G., Yang, Y., & Ward-Cook, K. (2006). Testing a measure of cyberloafing. Journal of Allied Health, 35, 9–17 Boswell, W., & Olson-Buchanan, J. (2007). The use of communication technologies after hours: The role of work attitudes and work-life conflict. Journal of Management, 33, 592–610. Brooks, J. (1976). Telephone: The first hundred years. New York: HarperCollins. Bruk-Lee, V., & Spector, P. E. (2006). The social stressors-counterproductive work behaviors link: Are conflicts with supervisors and coworkers the same? Journal of Occupational Health Psychology, 11, 145–156. Bruk-Lee, V., Nixon, A. E., & Spector, P. E. (Under review). Conflict at work and employee well-being: Good for business? Not for me. Work & Stress. Chandler, A. D. (1962). Strategy and structure: Chapters in the history of the American industrial enterprise. Cambridge, MA: MIT Press. Chesley, N., Moen, P., & Shore, R. P. (2003). The new technology climate. In P. Moen (Ed.), It’s about time: Couples and careers (pp. 220–241). Ithaca, NY: Cornell University Press. Clark, S. C. (2000). Work/family border theory: A new theory of work/family balance. Human Relations, 53, 747–770. Cohen, S., & Herbert, T. B. (1996). Health psychology: Psychological factors and physical disease from the perspective of human psychoneuroimmunology. Annual Review of Psychology, 47, 113–142. Cohrs, J. C., Abele, A. E., & Dette, D. A. (2006). Integrating situational and dispositional determinants of job satisfaction: Findings from three samples of professionals. The Journal of Psychology, 140, 363–395. Coovert, M. D., & Goldstein, M. (1980). Locus of control as a predictor of users’ attitude toward computers. Psychological Reports, 47, 1167–1173. Dannhauser, C. L. (1999). Who’s in the home office? American Demographics, 21, 50–56. Davenport, T. (2005). Rethinking the mobile workforce. Retrieved November 28, 2010, from www.optimizemag.com/article/showArticle.jhtml?printableArticle_true&articleId_ 166402970
The Impact of Technology on Employees • 257 Davenport, T. H., & Pearlson, K. (1998). Two cheers for the virtual office. Sloan Management Review, 39, 51–65. Davis, D. D., & Polonko, K. A. (2001). Telework in the United States: Telework America survey2001. Retrieved from www.workingfromanywhere.org/telework/twa2001.htm de Lange, A. H., Taris, T. W., Kompier, M. A., Houtman, I. L., & Bongers, P. M. (2003). “The very best of the millennium”: Longitudinal research and the demand-control(support) model. Journal of Occupational Health Psychology, 8, 282–305. Department of Transportation and Related Agencies Appropriations Act, Pub. L. No. 106–346, 114 Stat. 1356 (2000). Duxbury, L. E., Higgins, C. A., & Thomas, D. R. (1996). Work and family environments and the adoption of computer-supported supplemental work-at-home. Journal of Vocational Behavior, 49, 1–23. Evans, G. W., & Carreré, S. (1991). Traffic congestion, perceived control, and psychophysiological stress among urban bus drivers. Journal of Applied Psychology, 76, 398–407. Feldman, D., & Gainey, T. (1997). Patterns of telecommuting and their consequences. Human Resource Management Review, 7, 369–388. Fenner, G., & Renn, R. (2004) Technology-assisted supplemental work: Construct definition and a research framework. Human Resource Management, 43, 179–200. Fenner G. H., & Renn, R. W. (2010). Technology-assisted supplemental work and workto-family conflict: The role of instrumentality beliefs, organizational expectations and time management. Human Relations, 63, 63–82. Fox, S., &. Spector, P. E. (1999). A model of work frustration and aggression. Journal of Organizational Behavior, 20, 915–931. Fox, S., & Spector, P. E. (2005). (Eds.), Counterproductive work behavior: Investigations of actors and targets. Washington, DC: APA. Fox, S., & Spector, P. E. (2006). The many roles of control in a stressor-emotion theory of counterproductive work behavior. In P. L. Perrewé & D. C. Ganster (Eds.), Research in Occupational Stress and Well-Being, 5 (pp. 171-201). Greenwich, CT: JAI. Fox, S., Spector, P. E., & Miles, D. (2001). Counterproductive work behavior (CWB) in response to job stressors and organizational justice: Some mediator and moderator tests for autonomy and emotions. Journal of Vocational Behavior, 59, 291–309. Frankenhaeuser, M., & Lundberg, U. (1982). Psychoneuroendocrince aspects of effort and distress as modified by personal control. In W. Bachmann and I Udris (Eds.), Mental Load and Stress in Activity (pp. 97–103). Amsterdam: North-Holland. Frese, M., & Zapf, D. (1988). Methodological issues in the study of work stress: objective vs. subjective measurement of work stress and the question of longitudinal studies. In C. L. Cooper and R. Payne (Eds.), Causes, Coping and Consequences of Stress at Work (pp. 375–410). Chichester, UK: John Wiley. Gajendran, R. S., & Harrison, D. A. (2007). The good, the bad, and the unknown about telecommuting: Meta-analysis of psychological mediators and individual consequences. Journal of Applied Psychology, 92, 1524–1541. Ganster, D. C., & Fusilier, M. R. (1989). Control in the workplace. In C. L. Cooper and I. T. Robertson (Eds.), International Review of Industrial and Organizational Psychology 1989 (pp. 235–280). Chichester, UK: John Wiley. Hill, J. E., Miller, B. C., Weiner, S. P., & Colihan, J. (1998). Influences of the virtual office on aspects of work and work/life balance. Personnel Psychology, 51, 667–683. Houston, B. K. (1972). Control over stress, locus of control, and response to stress. 
12 Global Development through the Psychology of Workplace Technology Tara S. Behrend, Alexander E. Gloss, and Lori Foster Thompson
Recent years have witnessed an increasing focus among Industrial-Organizational (I-O) psychologists and other management scholars on and in developing regions of the world. This trend may stem from one or more of a variety of motivations, such as a desire to use the organizational sciences for the greater good, a recognition of the untapped potential of emerging economies, and an interest in testing the universality of prominent theories of behavior developed in predominantly Western, educated, industrialized, rich, and democratic contexts (Henrich, Heine, & Norenzayan, 2010). Regardless of the motivation, studying and applying I-O psychology in lower-income regions of the world entails engagement with a host of factors related to not only organizational development, but also regional, social, and economic development. Examples include topics such as corruption, poverty, malnutrition, gender inequality, the spread of HIV/AIDS, access to education, loss of environmental resources, and other components of international development, which are inextricably linked to the work and well-being of individuals and the regions in which they reside. Information and Communication Technology (ICT) has an important role to play as I-O psychology and related disciplines begin to interface with international development. ICT has shaped the world of work in many ways. It has introduced not only new ways of working, but also new forms of work, new workers, and new workplaces. Together, work and technological innovation are critical to solving some of the most significant challenges in our world and are key drivers of economic development (United Nations, 2011). Yet a surprisingly high percentage of technological interventions meant to enhance socioeconomic development fail (Dodson, Sterling, & Bennett, 2012). The reasons for these failures are diverse, but one key issue is an incomplete understanding of human behavior, particularly in the context of work. In this chapter, we discuss how the efforts of I-O psychologists, ICT innovators, and international development professionals can complement each other, making the whole greater than the sum of its parts. In particular, this chapter considers how the psychology of workplace technology can be used to improve the success rate of global development initiatives designed to address the world's most pressing problems.
AN INTRODUCTION TO THE FIELD OF ICTD Many researchers and professionals who create and use ICTs for the purposes of socioeconomic development are part of a community referred to as Information and Communication Technologies for Development (ICTD or ICT4D). The ICTD field emerged in a modern form after the widespread adoption of computers and the internet. Between 2000 and 2010, substantial growth in the field occurred, as marked by a dramatic increase in publications, journals, and conferences devoted to ICTD (Gomez, Baron, & Fiore-Silfvast, 2012). The field of ICTD is rather amorphous and overlaps with closely related disciplines (e.g., development informatics; see Heeks, 2010). In general, ICTD in its present form can be understood as a synthesis of two discipline clusters. One cluster includes various social science disciplines such as development studies, economics, and sociology; whereas the other cluster includes computer science and communication/information studies. ICTD researchers and practitioners use information and communication technologies to address a wide range of development goals—from enhancing the quality and effectiveness of government to supporting micro- and small-business entrepreneurs. For example, ICTs can be developed and deployed to help small-scale farmers in rural areas network with each other, obtain information from suppliers, and communicate with prospective customers. If successful, such an ICTD initiative has the potential to improve the work of the individual farmer and the welfare of his or her community, which realizes developmental gains when members of that community thrive and flourish. According to a recent review by Gomez, Baron, and Fiore-Silfvast (2012), the field of ICTD has prioritized several issues including private business growth, empowerment, education, and e-government. The field often assumes a country, organizational, or
multiple-country level of analysis in its research and projects and frequently engages in studies that describe best practices and field experience or that make policy recommendations. Information and communication technology is especially relevant to the welfare of lower-income (so-called "developing") societies. Despite the tremendous growth of ICTs, and the fact that there are now nearly two billion internet users in the world, a "digital divide" still exists. This divide is constituted by global disparities in access to, usage of, and motivation to use ICTs (United Nations, 2011; Van Dijk, 2006). Indeed, while internet penetration is 72 percent in the developed world, it is only 21 percent in the developing world (United Nations, 2011). Nevertheless, many ICTs continue to grow at an enormous rate. Of particular note is the growth and potential of mobile phones in lower-income settings. Consider that in 1998, 2 percent of the world's population had a mobile phone subscription, while in 2008, that rate had risen to 55 percent (Heeks, 2010). If access to mobile phones via sharing is included (e.g., sharing phones with friends and family), mobile phone usage rates likely exceed 80 percent of the population of developing countries (Heeks, 2010). For this reason, mobile phones have been viewed as particularly promising in their potential to enhance economic development and well-being in lower-income settings. For example, they can be used to support healthcare work in rural, developing regions of the world, allowing for remote diagnosis of illnesses, assistance with adherence to medical advice, remote monitoring, and the mass dissemination of public health information (International Telecommunications Union, 2010). Despite ICTD's growth and potential, a salient concern pertains to the routine failure of ICTD projects aimed at facilitating work performed for and/or by members of the world's most vulnerable populations. Dodson, Sterling, and Bennett (2012) argue that most ICTD interventions fail. For this reason, a "FAILfaire" was held at the 2012 International Conference on Information and Communication Technologies and Development, which highlighted failed initiatives and attempted to build dialogue and awareness around the issue. Dodson, Sterling, and Bennett (2012) noted several important reasons for the failure of ICTD projects, including: not establishing baseline metrics upon which to judge success; taking a "top-down" instead of "bottom-up" approach that does not emphasize local context and priorities; engaging in projects that are "technology-centric" instead of "community-centric," thereby seeking to match a solution to a problem instead of the other way around; not having clear and specific goals for projects; and finally, ignoring the effects of workers' "mental barriers" and motivation on the success of ICTD initiatives.
Our aim in this chapter is to demonstrate some ways that I-O psychology and the psychology of workplace technology can assist in addressing these challenges. We elaborate below.
ICTD AND I-O PSYCHOLOGY “There is nothing so practical as a good theory” (Lewin, 1951, p. 169). A theory-driven approach to ICTD can facilitate an understanding of failed initiatives and promote successful ICTD interventions. This is one area where I-O psychology has a particularly important role to play. Early efforts in ICTD were not commonly based upon firm theoretical and conceptual foundations. They often neglected to incorporate insights into the psychological aspects of human behavior at work (Heeks, 2010). The growing popularity of ICTD studies focused upon theory, however, suggests a shift toward ICTD projects rooted in a firmer understanding of human behavior (Gomez, Baron, & Fiore-Silfvast 2012). Prominent commentators in the field have emphasized the need for a greater role for the organizational and management sciences in particular (see Heeks, 2007). Accordingly, I-O psychology likely has something meaningful to add, particularly to discussions focused on the sources of failure in ICTD projects. From performance evaluation to training needs analyses and goal-setting, I-O psychology tools and theories are quite relevant and can be employed to help the field of ICTD to improve its success rate. The potential for I-O psychology to help ICTD scientists and practitioners understand and address the aforementioned “mental barriers” and motivation issues that arise when ICT is used to perform work in developing regions of the world is especially great. It is worth pointing out that while the aforementioned dialogue has developed within the field of ICTD, parallel discussions have unfolded in the psychological sciences, which focus on I-O psychology’s relevance to international development (e.g., Gloss & Thompson, 2013). Berry et al. (2011) point out that I-O psychologists are particularly well suited to assess the needs and capacity of developing communities, to use past behavioral history to predict the success of development projects, and to ensure the psychometric validity of information collected about development projects. Carr and Bandawe (2011) offer both encouragement and caution. They describe how I-O psychology can help to assist international
development through research and practice in areas such as performance management and counterproductive work behavior (e.g., school performance and teacher attendance). They caution, however, that in order to be effective in this sphere, I-O psychology must remain attuned to other disciplines and aligned with local stakeholders' perspectives. Finally, Pick and Sirkin (2010) point out that psychology brings into consideration a wide range of human factors that, together with environmental context, help to shape societal development. These human factors are often relatively overlooked by the heavily economic mode of analysis that is dominant in the field of international development.
THE INTERSECTION OF WORK, PSYCHOLOGY, AND TECHNOLOGY, AND INTERNATIONAL DEVELOPMENT Taken together, therefore, cross-disciplinary discussions and analyses suggest that a clear understanding of the psychology of workplace technology is needed in order to effectively address the most pressing problems facing our world today. To illustrate the potential for work, psychology, and technology to come together for the purpose of international development, we next provide a detailed analysis of five projects or cases, which consider issues of health, economic development, and education. Though they were not designed with I-O psychology in mind, each of these cases demonstrates the importance of psychological principles in the workplace for ICTD. For each case, we introduce the context of the initiative, including the problem to be solved. We then describe the solution as it was implemented. Finally, we analyze the case through the lens of I-O psychology, demonstrating how I-O psychology themes are present, either explicitly by design or in ways that, if considered, could facilitate the success of future ICTD interventions.
CASE 1: MONITORING COMMUNITY HEALTH WORKERS IN TANZANIA Problem/Context: Routine visits by community health workers (CHWs) to patients have been shown to improve community health (e.g., by reducing maternal and infant mortality and improving medication adherence).
CHWs are community members given varying degrees of professional training to provide basic medical care as either paid workers or volunteers. These visits reduce strain on local healthcare systems by acting as a form of preventative care and by reducing the workload for overburdened doctors and nurses. The work performed by CHWs is consistent with trends toward "task shifting," which entails the redistribution of tasks from highly qualified health workers to people with less training and fewer qualifications, in order to more efficiently use the human resources available for healthcare (World Health Organization, 2007). DeRenzi et al. (2012) describe an intervention focusing on a group of CHWs in Tanzania. The goal of this intervention was to improve the timeliness of CHW client visits; at the time, only 60 percent of visits were timely. Missed visits are often attributed to illness of the CHW, phone trouble, forgetting, busyness, a belief that the visit was not needed, and many other reasons. However, the program staff believes many missed visits may be due to low CHW motivation. Intervention: DeRenzi and colleagues (2012) built a phone-based Short Message Service (SMS) reminder system that incorporated real-time tracking of CHWs and sent text message reminders on the day before and the day of a scheduled visit. After three days without checking in with an assigned client, a notice was sent to someone in a supervisory role who could follow up about the missed appointment. In this way, the mobile phone was used as both a job aid and a tracking, feedback, and performance monitoring tool. The system was evaluated with two studies that varied the frequency of reminders and whether the supervisor was notified, showing that overall the reminders were effective in increasing timeliness (reducing lateness from an average of 9.7 days to 1.4 days). However, when the supervisor was not included, the reminders were not effective. How can psychology assist in this effort? This intervention has a number of connections to the domain of I-O psychology. The use of a tracking and reminder system like this one falls into the general category of feedback and performance monitoring. In thinking about what makes a system like this effective, one can draw from the extensive literature on work motivation. Motivation is complex and can come from various sources both internal and external to a person. The following examples illustrate how motivation theories can shed light on the problems the CHWs are experiencing and inform the development of technology to best support them. Need theories of motivation propose that people are motivated by a set of needs, and that they will be motivated to the extent that these needs are fulfilled.
Self-determination theory (Deci & Ryan, 1985; Ryan & Deci, 2000), a prominent theory in this category, proposes that motivation comes from autonomy (discretion over decision-making), relatedness (meaningful connections to others), and competence (experiencing mastery of work-related tasks) needs. It is possible that the CHWs' work and technology could be restructured to better meet these needs. For example, the reminder system may be a threat to autonomy in its current form, as it might be interpreted as an intrusion on the discretion a CHW feels should exist in doing her or his job. However, a lack of relatedness might also be at play. Isolation is a salient problem for CHWs; many of them interact socially, but much of the work is done independently, with little social interaction or feedback coming from a centralized supervisor or the clients they visit. In contrast, trait theories of motivation suggest that people possess a set of stable traits that determine the extent to which they will be motivated to perform. This approach would imply that CHWs will vary in their motivation to complete visits on time, regardless of the reminder system used. Thus, selecting people who have the right set of traits to meet deadlines and goals should yield higher performance. If this approach is chosen, conscientiousness and goal orientation are motivational traits that could be fruitful to consider (Latham & Pinder, 2005). Arguably the most useful approach in this case, however, is not to focus on either the job itself or the CHWs themselves, but to consider how the two may interact. Trait activation theory suggests that considering the combination of personality and job design features is critical. Conscientiousness, for example, has been shown to be especially important for high-autonomy jobs (Mount & Barrick, 1995). To motivate the CHWs, it may be important to critically examine the features of the job and identify potential CHWs who will be the best fit for that environment. If this is not possible, the situation must be altered to fit the needs of the CHWs. It is possible that the technological reminder system could be more effective if it were tailored to the CHW population. To do this, one can consider cultural values that determine the effectiveness of particular feedback strategies. In general, feedback must be valued to be useful (Tuckey et al., 2002). If the CHWs come from a collectivist culture, group-based feedback may be especially valued (e.g., teams of CHWs could be given feedback on their performance as a whole; Leung, 2001). Further, power distance values may affect the usefulness of feedback and reminders. In a high power-distance context, feedback is much more likely to be valued when it comes from a supervisor; self-set goals have a much less powerful effect than they do in low power-distance cultures (Leung, 2001).
This cultural value could explain why the reminder system failed to increase performance in the DeRenzi et al. (2012) case when the supervisor was not involved. Finally, cognitive approaches, such as goal-setting and social cognitive theories, can be used. Given the nature of the challenge, these approaches are likely to be the most effective: program staff might need to increase the timeliness of client visits without changing caseloads, restructuring the work, or hiring more CHWs. Goal setting, a theory with a great deal of empirical support, may suggest a way forward. Goal-setting theory proposes that to be effective, goals should be specific and challenging. CHWs can be given clear benchmarks for timeliness and any other indicators of value (e.g., referrals to care providers), and the goal should be challenging enough that reaching it requires effort (Locke & Latham, 2002). Currently no specific goal is assigned to CHWs; rather, a "do your best" approach is used, which has been shown to be far less effective (Locke & Latham, 1990, 2002). Further, acceptance of goals is crucial. If the CHWs do not see the value of attaining the goal, they will not be motivated to attain it. In this case, it is possible that the CHWs do not value the goals; this is indicated by the fact that when reminders were sent to the CHW only and no follow-up was sent to the supervisor, no change in timeliness occurred. A lack of goal acceptance on the part of the CHWs might be explained by social cognitive theory. Social cognitive theory emphasizes the fact that people are motivated by goal foresight, which allows individuals to anticipate and change their behavior. In the case at hand, reminders are only sent when the visit is overdue or nearly overdue. This might be problematic because self-efficacy, a key component of social cognitive theory, is built up over time as people attain goals, causing them to set still-higher goals in the future. Low self-efficacy will cause the CHWs to give up completely if they feel the goal is impossible (Stajkovic & Luthans, 1998). Receiving negative feedback repeatedly when timeliness goals are not met can decrease a CHW's self-efficacy, whereas building upon past success and anticipating manageable goals in the future is likely to increase self-efficacy and performance. In summary, various theories relating to motivation and goal-setting can provide a better understanding of the performance of CHWs, helping to inform the design and facilitate the success of technological innovations to improve their performance and well-being. This example helps to show how I-O psychologists could work closely with those interested in ICT and international development to assess personal characteristics, job design, and situational constraints to develop culturally appropriate recommendations in a wide range of similar ICTD projects.
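To make the reminder-and-escalation logic of this case concrete, the sketch below shows, in simplified form, how such a daily reminder routine might be structured. It is a hypothetical reconstruction rather than the DeRenzi et al. (2012) implementation: the data fields, scheduling rules, and the send_sms placeholder are assumptions of ours, and a real deployment would read visit schedules from the program's records and route messages through an actual SMS gateway.

```python
from datetime import date

ESCALATION_DAYS = 3  # days without a CHW check-in before the supervisor is notified


def send_sms(phone_number: str, message: str) -> None:
    """Stand-in for an SMS gateway call (assumed, not a real API)."""
    print(f"SMS to {phone_number}: {message}")


def run_daily_reminders(visits: list, today: date) -> None:
    """Send visit reminders and escalate overdue check-ins.

    Each visit is a dict with keys: client, chw_phone, supervisor_phone,
    scheduled (a date), and last_checkin (a date or None).
    """
    for visit in visits:
        days_until = (visit["scheduled"] - today).days

        # Remind the CHW on the day before and the day of the scheduled visit.
        if days_until in (0, 1):
            when = "today" if days_until == 0 else "tomorrow"
            send_sms(visit["chw_phone"],
                     f"Reminder: please visit {visit['client']} {when}.")

        # Escalate to the supervisor after ESCALATION_DAYS with no check-in.
        last = visit["last_checkin"]
        days_silent = (today - last).days if last else ESCALATION_DAYS + 1
        if today > visit["scheduled"] and days_silent > ESCALATION_DAYS:
            send_sms(visit["supervisor_phone"],
                     f"No check-in recorded for {visit['client']}; please follow up.")


if __name__ == "__main__":
    # Illustrative data only.
    visits = [{
        "client": "Client A",
        "chw_phone": "+255 700 000 001",
        "supervisor_phone": "+255 700 000 099",
        "scheduled": date(2012, 3, 14),
        "last_checkin": None,
    }]
    run_daily_reminders(visits, today=date(2012, 3, 15))
```

Even in this simplified form, the sketch highlights the design choice that proved decisive in the evaluation: it is the escalation message to the supervisor, not the reminder to the CHW alone, that links the technology to the social context of feedback.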
CASE 2: DEVELOPMENT OF MOBILE PHONE-BASED JOB APPLICATION SYSTEM IN INDIA Problem/context: A rapidly growing economy in India has created the need to efficiently match qualified workers with employers and jobs. Prospective hires typically visit employment centers, but these centers are often not able to meet the demand, resulting in excessive wait time and limited access to information about job opportunities. White et al. (2012) report on an intervention to increase access to information, especially for rural candidates who currently use the newspaper to search for jobs, while providing an efficient matching service for employers and workers. Intervention: White et al. (2012) developed an automated phone-based system to assess KSAOs and match applicants with employer needs. The system is designed to allow candidates to post profiles, search for job postings, and apply for open positions. It also allows employers to post ads and search for candidates whose profiles match their needs. Candidate profiles include age, gender, skills, career interests, and experiences. Job profiles include the job title, location, required skills and experiences, and the time frame required for the position (e.g., six months). To implement this system, the tool was first tested for usability. The goal of this project was to assess adoption intentions, employer and candidate willingness to use the system, and feasibility. Measures of success were based on the quality of an employee–employer match according to the criteria in the job profile. How can psychology assist in this effort? The use of technology-based selection systems has been on the rise due to their capability to process applications efficiently and accurately. However, these systems also have the potential to introduce technology-related anxiety and privacy concerns (Mead et al., this volume). Equity of access is also a concern. Consider that in the U.S., most job search and screening systems are computer-based, while a nontrivial percentage of the population does not have online access. This could introduce unintentional biases against African-Americans, for example, 56 percent of whom have internet access at home compared with 66 percent of the overall population. Biases related to age and socioeconomic status can also occur, thus employers need to consider the population they want to reach before deploying a technology-based system. In the White et al. (2012) case described above, phones are arguably more appropriate than computer-based systems because mobile access is more common in the region studied than is computer access.
The system described in the White et al. (2012) case is meant to be used not only for initial screening, but also for recruitment and placement. Effective recruitment and placement systems need to take into account a number of additional factors such as person–organization fit, values, and candidate preferences. The most appropriate variables to include in a placement system will depend entirely on the goal of the system. System efficiency, match quality, or employee turnover or performance, for example, all require different considerations. For instance, if short-term performance is most important, an employer may wish to look for candidates who already have the skills required, whereas an employer looking for long-term employees may focus more on fit and provide on-the-job training. The White et al. (2012) selection system makes use of a "learning network," an algorithm that takes multiple variables into account to automate matching decisions, learning over time from user input about the quality of a given match. The network was used to make matches instead of relying on employers to choose. The use of a learning network like this one represents a distinct technology beyond the phone-based system, and it carries its own set of challenges. For instance, the employers taking part in the process may not always trust the quality of the recommendations. However, the learning network approach could be enhanced if it included additional psychological variables. Perhaps most importantly, criterion variables such as performance and turnover should be collected to assess the quality of the matching algorithm and adjust it over time. Additional predictor variables might also be recommended, including personality, integrity, and cognitive ability, as well as values, goals, and interests, for the purposes of determining the potential "fit" between the applicants and the organization for which they would be working. I-O psychology research and best practices related to recruitment could be further integrated into job selection systems such as the one described by White et al. (2012). Personal referrals have long been known to be a good source of job applicants. Referrals have the benefit of providing realistic job information from the referrer to the applicant. Because of this information, referred applicants are better able to assess whether they would be a good fit in the organization. The referring employee also helps in the screening process, as it can typically be assumed that an employee would not choose to refer a destructive or incompetent candidate (Premack & Wanous, 1985; Wanous, 1992). The phone-based selection system in the White et al. (2012) case could benefit from a greater utilization of referrals. Even though referrals were not explicitly included in the selection system,
the program designers astutely acknowledge the importance of social networks, which, as demonstrated by Landers and Goldberg (this volume), can be an invaluable source of information about jobs, leading to better matches and more realistic job perceptions for prospective workers. Altogether, there are multiple opportunities for I-O psychology to enhance phone-based selection systems such as the one seen in White et al. (2012) by bringing decades of research on selection and recruitment to bear on laudable and promising innovations in relatively resource-constrained environments.
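To illustrate the general idea behind a feedback-driven matching algorithm of this sort, the sketch below approximates one minimal form a "learning network" could take. The feature set, weighting scheme, and update rule are illustrative assumptions on our part, not the algorithm used by White et al. (2012); a production system would operate over far richer candidate and job profiles.

```python
# Features compared between a candidate profile and a job posting (illustrative).
FEATURES = ["skills", "experience", "location", "availability"]


class MatchScorer:
    """Scores candidate-job pairs and adjusts feature weights from employer feedback."""

    def __init__(self, learning_rate: float = 0.1):
        self.weights = {f: 1.0 for f in FEATURES}  # start with equal weights
        self.learning_rate = learning_rate

    def _overlap(self, candidate: dict, job: dict) -> dict:
        # 1.0 when the candidate profile satisfies the job requirement, else 0.0.
        return {f: float(candidate.get(f) == job.get(f)) for f in FEATURES}

    def score(self, candidate: dict, job: dict) -> float:
        """Weighted proportion of requirements met, scaled to the range 0-1."""
        overlap = self._overlap(candidate, job)
        total = sum(self.weights.values())
        return sum(self.weights[f] * overlap[f] for f in FEATURES) / total

    def update(self, candidate: dict, job: dict, employer_rating: float) -> None:
        """Nudge weights toward features present in matches the employer rated highly.

        employer_rating is assumed to be scaled to the range 0-1.
        """
        error = employer_rating - self.score(candidate, job)
        overlap = self._overlap(candidate, job)
        for f in FEATURES:
            self.weights[f] += self.learning_rate * error * overlap[f]


# Example usage with illustrative profiles.
scorer = MatchScorer()
candidate = {"skills": "welding", "experience": "2 years",
             "location": "rural", "availability": "6 months"}
job = {"skills": "welding", "experience": "2 years",
       "location": "urban", "availability": "6 months"}
print(round(scorer.score(candidate, job), 2))       # initial match score
scorer.update(candidate, job, employer_rating=0.9)  # employer reports a good match
```

As argued above, the employer rating used in the update step could eventually be supplemented with criterion data such as subsequent performance or turnover, so that the weights come to reflect the quality of matches rather than initial impressions alone.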
CASE 3: TECHNOLOGY-ASSISTED MEDICAL DECISION-MAKING AND RECORD KEEPING IN KENYA Problem/context: There is a severe shortage of medical providers in sub-Saharan Africa, which, among other challenges, makes it difficult to collect and maintain accurate patient health records. An intervention by Anokwa et al. (2012) focused on 26 USAID-funded clinics in Kenya. All clinics have cellular coverage (meaning some internet access), but only 15 have reliable power; 10 have computers. Physical movement of paper medical records is unreliable or impossible during periods when many patients visit the clinic at once. Electronic records have been suggested as a way to reduce patient wait time, improve the accuracy of records, and support the management of chronic conditions (by prompting periodic reminders for lab results and tests). Currently, clinicians do not enter data directly into a medical record system; they complete paper forms and hand them off to medical records clerks who enter the data into a computer. The paper forms are then retained in the patients' charts. Summaries of the computer data are printed by nurses and placed into charts. These multiple steps constitute a significant workload and may partially explain the high frequency of errors made in the process of medical record creation and updating. Intervention: Anokwa and colleagues (2012) developed ODK Clinic, a computerized decision support system (CDSS). This CDSS is a mobile phone application that providers can use to access patient records, receive reminders about lab tests and results, schedule appointments, and view diagnostic recommendations. The intervention involved providing smartphones to doctors, only some of whom already had a phone for personal use.
Training was provided, and the CDSS designer worked with users for initial testing. How can psychology assist in this effort? The design of ODK Clinic followed a traditional human factors development process. An examination of this process provides a good example of how human factors, I-O psychology, and ICTD complement one another. Further, the case highlights how issues of technology acceptance, trust, and social norms are relevant in the successful implementation of technology. We elaborate below. Technology Acceptance: The two key user perceptions that predict technology adoption are usability and usefulness (Venkatesh et al., 2003; Venkatesh & Bala, 2008). These perceptions come from the tool itself, in this case the features of the CDSS, as well as the context. Adoption also depends on a person's assessment of the tool's comparative value (which may affect usefulness judgments) and on the availability of training and support (which may affect usability). For example, in this case clinicians who had prior experience with smartphones were able to use the CDSS more easily and fluidly, although novices reported needing only a few days to get accustomed to the system. In other cases, extensive training and support may be required before users are ready to adopt a tool. Usability/Human Factors: Anokwa et al. (2012) used a think-aloud protocol, part of a cognitive task analysis, to assess perceptions of the tool's usability and usefulness. This technique allows developers to identify the cognitive processes behind a user's interaction with a tool to pinpoint any problems. As a hypothetical example, a user might talk through the process of turning on the phone, opening the application, and finding the menu needed to answer a question: "Let's see, it should be over on the left, then I click here, it's giving me an error message but I see the information; then I need to copy it over to the other screen . . ." Portability was an important feature of the CDSS, as the think-aloud revealed that doctors needed to move around the clinic to obtain information as they entered it. A key issue in human factors is whether the new technology has the same affordances as the old technology; indeed, some tasks can be more difficult to accomplish electronically (e.g., note taking in margins, place marking). Anokwa and colleagues took this into account by explicitly modeling the new system on the paper-based system it was meant to replace. To further improve usability, steps were taken to remove any functions that duplicated work (e.g., entering the same information in multiple places). As noted above, designers should
explicitly ask, "What does the old technology (e.g., paper records) allow that the new technology (e.g., cell phones) might not?" For example, passing a paper record between providers is a signal of whose responsibility the record/patient is; if this becomes more difficult to track with electronic records, mistakes can occur. One author of this chapter experienced this challenge recently when her department switched from paper graduate applications to electronic applications. Though the e-records have some clear advantages, it became very difficult to refer a candidate to another faculty member (a referral that previously entailed simply passing along the candidate's paper folder). The e-application system could be built to include this affordance easily, for example by adding a drop-down menu with faculty names. But because the designers of the system did not have a clear understanding of the faculty's workflow and usage habits, this menu was not included and much confusion ensued. It is also important when designing a tool like the one described by Anokwa and colleagues to understand exactly how a person really uses something, which can only be done by asking, observing, and incorporating iterative design cycles. Anokwa et al. did this well; for example, they note, "Clinicians are often interrupted (usually by other providers asking questions) while seeing patients, so we use no transient status or error messages. The application always requires some user input before it will proceed" (p. 9). In another context, such a design feature might be viewed as cumbersome, but in this context it is likely essential. A tool may be perfectly designed from a human factors perspective and still fail to be adopted if social and contextual norms prevent it. For example, in the group training sessions, clinicians brought up concerns about theft of the relatively expensive phones; these clinicians were also worried about possible legal or financial ramifications of losing a phone with patient data on it. Furthermore, phones are assigned to clinics, not individuals, a fact that could raise questions about fairness if some clinicians have greater access than others. Another possible concern is that clinicians may not wish to appear dependent on a phone to assist in diagnostic decision-making. This could negatively affect their reputation as an expert in the eyes of patients or other clinicians. According to the technology acceptance model, concerns like these can prevent the adoption of a technological tool even if it is perceived to be useful and easy to use (Venkatesh & Bala, 2008). To the extent that clinicians interact and discuss these issues with each other, social norms may develop that discourage the use of the tool.
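The design principle quoted earlier in this case, that no message should disappear on its own, can also be illustrated briefly. The function below is a hypothetical example of a blocking acknowledgment prompt, not code from ODK Clinic; it simply demonstrates the behavior Anokwa and colleagues describe, in which the application waits for explicit user input rather than relying on transient status or error messages that an interrupted clinician might never see.

```python
def acknowledge(message: str) -> bool:
    """Show a message that persists until the user explicitly confirms or cancels."""
    while True:
        reply = input(f"{message} [ok/cancel]: ").strip().lower()
        if reply in ("ok", "cancel"):
            return reply == "ok"
        # Any other input leaves the prompt in place; nothing times out on its own.


if __name__ == "__main__":
    if acknowledge("A lab result is pending for this patient. Continue with the visit?"):
        print("Proceeding with the visit workflow.")
    else:
        print("Returning to the patient record.")
```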
CASE 4: INTERACTIVE RADIO INSTRUCTION (IRI) FOR TEACHERS IN GUINEA Problem/context: Guinea faces a severe teacher shortage. In particular, there is a critical need to provide math, science, and French instruction to students. Teachers are often underprepared or undertrained to perform their jobs, especially in areas that have difficulty attracting qualified teachers (e.g., locations that are remote, dangerous, or socioeconomically disadvantaged). Schoolrooms seldom have electricity and infrequently support internet or computer resources. Radio, on the other hand, is ubiquitous and teachers are familiar with it. Intervention: IRI is a USAID-funded program that reaches 22,000 K-6 students with a limited number of teachers. The goal of IRI is to provide expert guidance for novice teachers when in-person training/mentoring is impossible. IRI has the secondary goal of improving teacher quality and instructional quality. The desired outcomes of the IRI program include: improved teacher content knowledge and use of high-quality instructional strategies; improved student attendance and learning; diffusion of treatment (e.g., use of IRI strategies in non-IRI classrooms); and improved teacher attitudes. The IRI program involves communication from an experienced educator to students and an in-class teacher who may have little training or experience. The “master teacher” gives the in-class teacher instruction about how to facilitate a given lesson, for example “please walk to the center of the room and call on a female student to provide the answer to this problem.” The master teacher also engages the students directly, leading activities and songs. It is worth noting that the communication is one-way, meaning that the novice teacher cannot communicate with the master teacher. Further, the master teacher gives standardized instructions to all novice teachers and no customization is possible. The program was met with initial resistance from teachers: for example, Burns (2006) notes that one teacher remarked, “I thought, I am the man of the class, I am the teacher. Why would I use the radio? The students will laugh at me (Primary School Teacher, Conakry, Guinea)”(p. 10). Over time, however, the evaluators report that the program has become quite popular. How can psychology assist in this effort? The shortage of qualified workers is a problem faced by both Fortune 500 companies (see Ford and Meyer, this volume) and impoverished rural schools. In the case described above, IRI is used as a substitute for lengthy and expensive training and
development programs that might not be possible in settings of low socioeconomic development. However, despite its potential and advantages, the program does not systematically incorporate features that encourage the transfer of skills from the master teacher to the teachers listening. In light of this fact, lasting improvements in teacher skills are unlikely. The IRI program staff are aware of this issue, and note, "Radio—while an effective tool to help teachers gain the basics in curriculum, content and instructional skills—soon exhausts its capabilities . . . If this TPD [teacher professional development] is to involve ICTs, those technology tools (video, for example) must be used in concert with TPD approaches that promote more advanced instructional skills" (Burns, 2006, p. 12). I-O psychology can contribute to overcoming this challenge by identifying ways to encourage the development of higher-level skills that do not rely on difficult-to-obtain technology or unavailable resources. With such an approach, IRI could eventually be used for lasting teacher development instead of as an emergency measure. Expanding IRI for this purpose, however, would require more resources and greater attention to encouraging transfer of training. The next case describes what a program like that might entail.
CASE 5: NFL EARBUD TECHNOLOGY FOR TEACHER COACHING IN TENNESSEE Problem/Context: Resource constraints, poverty, and ensuing problems related to socioeconomic development are not limited to countries classified as "developing." Like many school districts in the United States, Memphis City Schools in the state of Tennessee has struggled to meet federal educational benchmarks, owing to a number of financial and contextual factors. District officials implemented a Teacher Effectiveness Plan meant to give teachers the support they need to be successful. However, the district needed a way to deliver feedback quickly and consistently to low-performing teachers. Principals varied widely in the support they gave to teachers, and some teachers were not being observed regularly. Feedback was often delivered months after the performance observation. Intervention: District officials, working with the Gates Foundation and a program called Teach for America, developed a plan to use earbud technology similar to that used by National Football League (NFL) players and coaches to remotely advise teachers, in real time, as they deliver instruction.
The goal of this intervention was to begin delivering teacher feedback that was meaningful and useful, thus improving teacher performance and adherence to best practices. Prior to the intervention, a performance management rubric existed; however, it was not as effective as it could have been. Thus, a second goal of the intervention was to develop a performance management rubric that was based on accurate performance data. A final goal of the program was to bring expert teaching knowledge to schools whose locations made them less attractive to top-performing teachers, a common problem in many rural and urban schools. The intervention was implemented in three phases. In Year 1, the district installed one mobile video camera setup per school to allow teachers to film themselves teaching and to submit the video to the district for evaluation. This video data was used to identify highly effective and ineffective teaching behaviors, which in turn formed the basis of the performance management framework. The video-based innovation was an improvement over the former rubric in that it accurately reflected what teachers were doing in the classroom and focused on specific behaviors. In Year 2, the district began a pilot program that used the aforementioned earbud technology to allow coaches to provide teachers with real-time feedback when they deviated from the behaviors on the rubric. The monitoring occurred during a series of pre-arranged lesson periods. For example, if a teacher was experiencing a classroom management problem during the monitored class, the coach would recommend the correct phrases to use when addressing the problem, and would provide feedback on the action taken by the teacher. The coach and teacher met before each session to decide on a plan for the class, and the coach intervened during class if the teacher deviated from the plan. For this initial pilot test, teachers self-selected into the program based on their self-assessed coaching needs. In Year 3, the district expanded the program by identifying teachers who were receiving low scores on the observation rubric and recommending them to the principal for coaching, addressing potential problems with self-selection. Evaluation data showed that 76 percent of evaluations were completed on time in 2011–2012 (24,000 observations completed by 600 observers, far more than would have been completed in person). Seventy percent of teachers reported being satisfied with the feedback they received under the new system. The distribution of performance ratings shifted from 99 percent effective to a more realistic 65 percent effective (Memphis City Schools, 2012). Teachers reported that honest, constructive dialogue was taking place and setting the tone for continuous improvement.
Global Development • 277 How did psychology assist in this effort? This case demonstrates an intervention that was carefully planned and executed with attention to many of the human issues involved in the implementation of workplace technology. Initial evaluation data suggest that the intervention has been successful thus far, partially because of this planning and attention to psychological factors. As described next, this program’s handling of performance management, electronic monitoring, and transfer of training are particularly noteworthy. Performance Management: The benefits and challenges associated with performance management from a distance are described earlier in this book (Farr, Fairchild, & Cassidy, this volume). Farr et al. note that electronic performance management systems have the potential to save an organization time and money in collecting performance data about individual employees and also allow much more real-time feedback to be delivered. This is important because to the extent that feedback is specific and timely, it is much more likely to be accepted and acted upon (Atkins, Wood, & Rutgers, 2002). In addition, frequent and specific feedback is more likely to be viewed as developmental (Kluger & DeNisi, 1996). Pitfalls of electronic performance management can include problems with user acceptance and adoption: in order for employees to use a system, they must view it as fair and easy to use. In the case described above, each school had only one camera, requiring a significant amount of scheduling and coordination. If teachers who want to use the camera cannot use it, there is the potential that it will be seen as cumbersome and unfair. However, the effects of such problems could be offset by positive views fostered by a system that enables teachers to be highly supported, receiving more frequent, objective, and developmental feedback than they did previously. Electronic Monitoring: Electronic performance monitoring should be transparent and clearly job-related if it is to be accepted by employees. In the case described above, teachers appeared to welcome the monitoring, largely because of the way in which it was used. Control over the observation is very important; teachers knew exactly when they were being monitored, what was being captured, and why. This policy meets nearly all of the best practices for electronic monitoring, as Alge and Hansen describe (this volume), in that it is fair, clear, accurate, and job-related. Monitoring was also used not to surveil and catch bad behavior, but to convey the message that the district cared about helping teachers improve their performance and wanted to support them. The developmental as opposed to administrative orientation of the program was probably part of what made it successful (McNall & Roch, 2009). Prior to this, teachers
were only observed every five years and received feedback only at yearly evaluations, which is too late to be useful for changing behavior. Transfer of Training: Supervisor support is essential for ensuring the transfer of trained behaviors such as those identified in the performance rubric described above. The 200 individual principals in the district involved in this case varied in their ability and willingness to set a supportive tone for the feedback given, and to reinforce the behaviors learned in coaching. In schools with effective leaders, teachers are likely to be more open to feedback and to improving their performance, whereas teachers with less effective leaders might be more defensive. Effective leaders create what Ford and Meyer (this volume) refer to as a "learning culture," where teachers are encouraged to seek out feedback and learn from others, including peers and principals, and where teachers feel comfortable collecting data about their own performance for the purposes of self-improvement. This culture helps prevent skill decay or mismatches between trained behaviors and those required by supervisors. Overall, this case demonstrates that technology can be used in a resource-constrained setting to address performance management and training problems in an effective way when psychological factors are taken into account.
SUMMARY/FUTURE AGENDA As demonstrated in the preceding case studies, an understanding of human behavior at work is essential to the success of ICTD interventions designed to use technology to enhance socioeconomic development around the world. I-O psychology’s engagement with technology in the workplace is well established, as is ICT’s engagement with international development through the field of ICTD. However, I-O psychology’s overt and organized consideration of issues related to international development is less established. While there have always been I-O psychologists who dedicate their science and practice to the welfare of workers and their communities, their efforts have typically not been coordinated and have seldom systematically interfaced with larger trends and initiatives in the international development community. However, this is beginning to change. In 2009, a group of I-O psychologists founded the Global Task Force for Humanitarian Work Psychology, which is devoted to enhancing organizational psychology’s engagement with the humanitarian arena. The
Global Development • 279 establishment of the Global Task Force was presaged by repeated calls for I-O psychology’s greater collective engagement with pro-social and humanitarian issues in the Society for Industrial and Organizational Psychology’s (SIOP’s) quarterly publication TIP (e.g., Berry, Reichman, & Schein, 2008; Carr, 2007; Lefkowitz, 1990). These calls have begun to be answered by SIOP and other I-O psychology organizations. For example, consider SIOP’s organized efforts to assist relief efforts in the wake of Hurricane Katrina (Rogelberg, 2006). In addition, the International Association of Applied Psychology Division 1 (IAAP-1) has established its own task force to support the development of the nascent sub-field of humanitarian work psychology. SIOP’s new role as a nongovernmental organization with consultative status at the United Nation’s Economic and Social Council (Scott, 2011) also illustrates I-O psychology’s increasing voice in global arenas related to international development. Despite this progress, there is a great deal of room for expansion in the way I-O psychologists work towards the goals of global development. First, it is our belief that I-O psychology can be more proactive in advocating for a better understanding of human behavior in international development initiatives, many of which entail technological innovations, as demonstrated in this chapter. I-O psychologists can help, for example through an analysis of person- and team-level factors that facilitate or hinder the adoption and use of technological tools, including ability, motivational, and situational factors. Second, I-O psychologists should have greater visibility and establish better connections to policy makers and funding/development agencies, including those focused on ICTD. This chapter demonstrates that I-O psychology theory, research, and practice have the potential to contribute substantially to ICTD. Turning this potential into a reality is a pressing matter for the field of I-O psychology to address, moving forward. Another pressing matter, however, entails a serious consideration of what ICTD might have to contribute to I-O psychology. What could we learn from the world of technology and global development? We offer three possible best practices related to ICTD, which could benefit I-O psychology practice and research, namely: being problem-focused instead of technology-focused; viewing participants as stakeholders; and considering the sustainability of interventions. Problem focus: We should resist the temptation to let the technology drive the research question. The needs of the workers and stakeholders affected should drive the research question, and technology should be developed/identified accordingly. As an example, research in the arenas of training and selection often focuses on new, expensive assessments and
280 • Tara S. Behrend et al. training methods. It might be that social problems are best solved through the use of “old” technologies like land-line phones and radios, or creative uses of scaled-back technologies such as SMS text messaging. Participants as stakeholders: Research conducted in developing settings requires careful consideration of the participants asked to provide data. Participants who feel ignored or taken advantage of, or who see no benefits stemming from their participation in research, will be hesitant to participate again, regardless of whether the new project is conducted by the same researchers. It is also important to remember that power dynamics exist between researchers and participants, for example when educated researchers from a high-income setting enter a lower-income community to collect data on an innovation designed to improve work. These power dynamics can affect the quality of data obtained (e.g., social desirability to the extreme). A related issue is the danger of asking for opinions and not acting on them. Survey respondents who perceive action was taken based on survey results are more open to participating and responding positively in future research endeavors (Church & Oliver, 2006). Sustainability of interventions is a common theme underlying the various cases presented in this chapter. What happens to a development innovation designed to improve work when the funding runs out? To be sustainable, an intervention has to focus on changing behavior in a way that does not depend on a given technology if the technological innovation will not be available when the project at hand comes to a close. This relates to the question of stakeholder involvement. Anokwa and colleagues described their experience with a group of doctors while attempting to introduce a new program: “We interviewed doctors about what their hopes were for ‘telemedicine’ but found a few who were quite bitter about their previous experiences. Many of these consisted of one-off, short-term deployments that had subsequently broken down unannounced” (Anokwa et al., 2009, p. 15).
CONCLUSION Historically, I-O psychology’s understanding of how technology affects workers and the nature of work has focused predominantly on corporate and military settings, which have significantly different situational constraints compared to the settings targeted by global development initiatives. This is not a new observation, but if we wish to contribute in a
meaningful way to solving problems of global development, we must begin to explicitly consider the financial and cultural context of our science. Our hope is that the science of workplace psychology, technology, and global development will become integrated over time so that each discipline can benefit from the others. This integration will be essential in working towards solving some of the biggest challenges our world faces today.
REFERENCES Alge, B. J., & Hansen, S. D. (2013). Workplace monitoring and surveillance research since “1984”: A review and agenda. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York: Routledge Academic. Anokwa, Y., Smyth, T., Ramachandran, D., Sherwani, J., Schwartzman, Y., Luk, R., Ho, M., Moraveji, N., & DeRenzi, B. (2009). Stories from the field: Reflections on HCI4D experiences. Information Technologies and International Development, 5, 101–115. Anokwa, Y., Ribeka, N., Parikh, T., Borriello, G., & Were, M. (2012). Design of a phonebased clinical decision support system for resource-limited settings. Information and Communication Technologies and Development (ICTD). Atkins, P. W. B., Wood, R. E., & Rutgers, P. J. (2002). The effects of feedback format on dynamic decision making. Organizational Behavior and Human Decision Processes, 88, 587–604. Berry, M. O., Reichman, W., & Schein, V. E. (2008). The United Nations Global Compact needs I-O psychology participation. The Industrial-Organizational Psychologist, 45(4), 33–37. Berry, M. O., Reichman, W., Klobas, J., MacLachlan, M., Hui, H. C., & Carr, S. C. (2011). Humanitarian work psychology: The contributions of organizational psychology to poverty reduction. Journal of Economic Psychology, 32, 240–247. doi: 10.1016/ j.joep.2009.10.009 Burns, M. (2006). Improving teaching quality in Guinea with interactive radio instruction. InfoDev Working Paper #2, Retrieved from http://www.infodev.org/en/Publication. 500.html. Carr, S. C. (2007). I-O psychology and poverty reduction: Past, present, and future? The Industrial-Organizational Psychologist, 45(1), 43–50. Carr, S. C., & Bandawe, C. R. (2011). Psychology applied to poverty. In P. R. Martin, F. M. Cheung, M. C. Knowles, M. Kyrios, J. B. Overmier, & J. M. Prieto (Eds.), IAAP Handbook of Applied Psychology (pp. 639–662). Oxford, UK: Wiley-Blackwell. doi: 10.1002/9781444395150.ch28 Church, A. H., & Oliver, D. H. (2006). The importance of taking action, not just sharing survey feedback. In A. I. Kraut (Ed.), Getting action from organizational surveys: New concepts, technologies, and applications (pp. 102–130). San Francisco: Jossey-Bass. Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determinaton in human behaviour. New York: Plenum. DeRenzi, B., Findlater, L., Payne, B., Mangilima, J., Parikh, T., Borrielo, G., & Lesh, N. (2012). Improving community health worker performance through automated SMS. Proceedings of ICTD ’12, March 12–15, Atlanta GA.
282 • Tara S. Behrend et al. Dodson, L. L., Sterling, S. R., & Bennett, J. K. (2012). Considering failure: Eight years of ICTD research. Paper presented at the meeting of the International Conference on Information and Communications Technologies and Development (ICTD), Atlanta, GA. Farr, J. L., Fairchild, J., & Cassidy, S. E. (2013). Technology and performance appraisal. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York: Routledge Academic. Ford, J. K., & Meyer, T. (2013). Advances in training technology: Meeting the workplace challenges of talent development, deep specialization, and collaborative learning. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York: Routledge Academic. Gloss, A. E., & Thompson, L. F. (2013). I-O psychology without borders: The emergence of humanitarian work psychology. In J. B. Olson-Buchanan, L. K. Bryan, & L. F. Thompson (Eds.), Using I-O psychology for the greater good: Helping those who help others (pp. 353–393). New York: Routledge Academic. Gomez, R., Baron, L. F., & Fiore-Silfvast, B. (2012). The changing field of ICTD: Content analysis of research published in selected journals and conferences, 2000–2012. Paper presented at the meeting of the International Conference on Information and Communications Technologies and Development (ICTD), Atlanta, GA. Heeks, R. (2007). Theorizing ICTD research. Information Technologies and International Development, 3(3), 1–4. Heeks, R. (2010). Do information and communication technologies (ICTs) contribute to development? Journal of International Development, 22, 625–640. Henrich, J., Hein, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Science, 33, 61–135. International Telecommunications Union (2010). World telecommunication/ICT development report 2010: Monitoring the WSIS targets. Retrieved from www.itu.int/dms_ pub/itu-d/opb/ind/D-IND-WTDR-2010-PDF-E.pdf Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284. Landers, R. N., & Goldberg, A. S. (2013). Online social media in the workplace: A conversation with employees. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York: Routledge Academic. Latham, G. P., & Pinder, C. C. (2005). Work motivation theory and research at the dawn of the twenty-first century. Annual Review of Psychology, 56, 485–516. Lefkowitz, J. (1990). The scientist-practitioner model is not enough. The IndustrialOrganizational Psychologist, 28(1), 47–51. Leung, K. (2001). Different carrots for different rabbits: The effects of individualismcollectivism and power distance on work motivation. In M. Erez, U. Kleinbeck, & H. Thierry (Eds.), Work motivation in the context of a globalizing economy (pp. 29–39). Mahwah, NJ: Erlbaum. Lewin, K. (1951). Field theory in social science: Selected theoretical papers. New York: Harper & Row. Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice-Hall. Locke, E., & Latham, G. (2002). Building a practically useful theory of goal-setting and task motivation: A 35-year odyssey. American Psychologist, 57(9), 705–717.
Global Development • 283 McNall, L. A., & Roch, S. G. (2009). A social exchange model of employee reactions to electronic performance monitoring. Human Performance, 22, 204–224. Mead, A. D., Olson-Buchanan, J. B., & Drasgow, F. (2013). Technology-based selection. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York: Routledge Academic. Memphis City Schools. (2012). Teacher effectiveness initiative stock take progress report. Memphis, TN: Author. Mount, M. K., & Barrick, M. R. (1995). The Big Five personality dimensions: Implications for research and practice in human resource management. Research in Personnel and Human Resource Management, 13, 153–200. Pick, S., & Sirkin, J. T. (2010). Breaking the poverty cycle: The human basis for sustainable development. New York: Oxford University Press. Premack, S., & Wanous, J. P. (1985). A meta-analysis of realistic job preview experiments. Journal of Applied Psychology, 70, 706–719. Rogelberg, S. G. (2006, April). Katrina Aid and Relief Effort (KARE). The IndustrialOrganizational Psychologist, 43 (4), 117–118. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68–78 Scott, J. C. (2011, October). SIOP granted NGO consultative status with the United Nations. The Industrial-Organizational Psychologist, 49 (2), 111–113. Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance. Psychological Bulletin, 124, 240–261. Tuckey, M., Brewer, N., & Williamson, P. (2002). The influence of motives and goal orientation on feedback seeking. Journal of Occupational and Organizational Psychology, 75, 195–216. United Nations. (2011). Millennium development goals report 2011. Retrieved from www.un.org/millenniumgoals/11_MDG%20Report_EN.pdf Van Dijk, J. A. G. M. (2006). Digital divide research, achievements and shortcomings. Poetics, 34, 221–235. Venkatesh, V., & Bala, H. (2008). Technology Acceptance Model 3 and a research agenda on interventions. Decision Sciences, 39, 273–315. Venkatesh, V., Morris, M. G., Davis, F. D., & Davis, G. B. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 425–478. Wanous, J. P. (1992). Organizational entry: Recruitment, selection, orientation and socialization of newcomers (2nd edn.). Reading, MA: Addison-Wesley Publishing. White, J., Duggirala, M., Srivastava, S., & Kummamuru, K. (2012). Designing a voice-based employment exchange for rural India. Proceedings of ICTD ’12, March 12–15, Atlanta GA. World Health Organization (2007). Task shifting: Global recommendations and guidelines. Geneva, Switzerland. From www.who.int/healthsystems/TTR-TaskShifting.pdf. Accessed May 11, 2012.
13 Online Social Media in the Workplace: A Conversation with Employees Richard N. Landers and Andrea S. Goldberg
Recruiters now routinely check job candidates’ online social networking profiles, like those available through Facebook and LinkedIn, before extending offers (PR Newswire, 2010). Employees routinely check their own social networking profiles and news feeds throughout the workday, taking time away from their primary job responsibilities, despite policies prohibiting this behavior (Cisco, 2010). Some companies have even created private international social networks in which their employees can interact, find internal experts with specific skill sets, and learn collaboratively across traditional organizational and geographic barriers (e.g., IBM, 2011). Customers gain a global audience when complaining about an organization, and employees may feel compelled to respond publicly to such accusations via social media (Weber, 2010). Employees have made offhand comments via social media and were later dismissed after their supervisors discovered the content of those comments (Valentino et al., 2010). Social media is increasingly permeating the daily operations of most organizations, across a wide range of functional areas. And yet there is little or no prior academic research on social media in most of these domains, creating a problem for the industrial-organizational psychologist trying to make organizational decisions or conduct research in this new area. The purpose of this chapter is thus twofold: 1) to introduce and define the concept of social media at work in relation to industrial and organizational (I/O) psychology, alongside descriptions of some of the currently popular technologies in this domain; and 2) to present case studies from organizations innovating with social media in order to lay a foundation for current practice and future research.
DEFINING SOCIAL MEDIA
Broadly defined, the term “social media” refers to any internet- or intranet-based system where users of that system can share content they generate with each other (Kaplan & Haenlein, 2010). This user-generated content can take many forms, including text, images, and videos. These systems evolved most directly from the “Web 2.0” movement, which was itself an evolution of the early internet toward a user-centered experience. In the early 1990s, most websites were informational in nature. Marketing and public relations professionals would decide what information best represented their organization and would create a virtual storefront (i.e. a web page) to deliver this information. By 2010, a dramatic shift had occurred; with over 500 million members, the most popular website on the internet was not an e-commerce website or even a search engine like Google (www.google.com); instead, it was Facebook (www.facebook.com), a website focusing on the online representation of and connection to a person’s social network. Facebook functions primarily by encouraging its members to provide information about themselves, which is then shared with their circle of “Facebook friends,” as well as with any organizations to which they choose to release that information. This focus on content generation by independent individuals (rather than by often ambiguous corporate entities) is a defining feature of modern social media (for a more complete history of social media through the technologies that inspire it, see Kaplan & Haenlein, 2010). Facebook is most specifically an example of a social network site (SNS), which is only one type of social media. A SNS has three defining features: 1) users can create a profile to describe themselves, which may or may not be accessible publicly; 2) users can specify others to whom they wish to be connected; and 3) users’ social networks are explicitly articulated and often made visible to others (boyd & Ellison, 2008). Typically, all members of a SNS are on the same access level; no user of the system has more control over content than any other user. On Facebook, for example, users create personal profiles that contain various pieces of biographical information, including educational history, work history, likes and dislikes, and interpersonal relationships. They specify to whom they wish to connect, which in Facebook parlance is called “friending.” This friending process results in easier information sharing between “Facebook friends” and the creation of a virtual representation of that user’s real-life social network, which is also visible on their profile. SNSs may be public (like Facebook)
286 • Richard N. Landers and Andrea S. Goldberg or private; the Academy of Management promotes the use of its own private SNS, AOM Connect, in order to provide SNS services for its membership, whereas the Society for Industrial and Organizational Psychology (SIOP) promotes its own SNS, my.SIOP. As the public has become more aware of the challenges associated with online privacy, and as SNSs have increased their focus on organizations, even public SNSs like Facebook now allow for invitation-only groups and closed communities. In an I/O context, SNSs might be used to connect employees and management virtually in order to communicate policies and practices and organizational vision, or to share opinions, experiences, and advice. These conversations may range from private one-to-one exchanges to open communications that are visible to the entire community. Blogs (a portmanteau of “web log”) are a second popular type of social media. Blogs follow a model of content distribution quite unlike a SNS. In a blog, the author or authors typically publish articles to the internet at large, rather than to a closed social network. Content is controlled by a specific individual or group of individuals who have ultimate authority over the blog’s content. Other internet users participate primarily by reading and/or leaving comments on specific articles (or “blog entries”), which are also visible to all future visitors to the blog. For example, SIOP maintains a blog called the SIOP Exchange, which it uses to communicate socially with its membership. Invited authors produce new articles periodically, and anyone who visits the blog can comment on these articles, though it presumably targets members of SIOP. This can be a valuable tool to get informal feedback within an organization, as demonstrated by a series of posts in 2009 regarding a potential name-change for SIOP; 129 comments were made in roughly a week in a discussion on the various pros and cons supported by each participant (see The SIOP Executive Board, 2009 and associated posts). In an organizational context, blogs are most commonly used to create an official online “voice” of the organization to customers, and for employees to publicly comment on the internal daily goings-on within their organization. In 2006, as many as 5 percent of American workers authored personal blogs, but merely 15 percent of organizations had policies regarding blogs (Sogn & Pfeifle, 2006). Only a few years later, each of these estimates doubled, with 11 percent of adult Americans writing blogs (Mendelson, 2011) but only 29 percent of organizations articulating social media policies (McCollum, 2010). Given the popularity and high visibility of blogging, one might have expected a significant number of companies to have developed formal social media guidelines over this timeframe; however, this was not the case.
Online Social Media in the Workplace • 287 The content of employee blogs varies greatly, with some bloggers posting information potentially damaging to their employers. It is unclear what motivates employees to post such information. On one side, it is possible that employees are engaging in what they perceive to be “water-cooler talk,” and do so simply as a way to vent, without considering the global audience to which they are venting. On the other hand, such comments may be a type of whistle-blowing, which is motivated by many different psychological states and traits (Miceli, Near & Dworkin, 2008). For a summary of known employee dismissals related to blogging and a discussion of the legality and ethicality of this practice, see Valentino et al. (2010). Considering the cases they report as well as the many embarrassing employee actions and lawsuits reported on in the press, it is clear that too few companies have articulated social media guidelines. Even among those companies that do have these types of policies, it is unknown how clearly they are communicated and understood by employees or by management. A third type of social media, micro-blogs, gained popularity with Twitter (www.twitter.com), which by 2010 boasted over 175 million users (Twitter, n.d.). Micro-blogs differ from blogs in that the length and complexity of article content is severely limited by some arbitrary restriction. In the case of Twitter, microblog entries (called “tweets”) are limited to 140 characters of text, including spaces. This limit was used to encourage tweets sent from mobile devices; the 160-character limit of text messages allows the content of one tweet plus identifying information (for discussion of these seemingly random limits, see Milian, 2009). Some researchers have examined the value of Twitter for improving the quality of collaborative work with promising results (Zhao & Rhosson, 2009), but Twitter is most commonly used today as a way for organizations to make informal announcements to and connect with their customers. Wikis are a fourth common type of social media. Wikis, generally, are web pages with content editable by any user of that web page, regardless of the user’s credentials or relationship to the wiki’s owner. A prominent example is Wikipedia (www.wikipedia.org), the goal of which is to provide a freely accessible user-contributed encyclopedia rivaling those currently published. This approach has been successful by many metrics, with research indicating that the major encyclopedic content of Wikipedia is nearly as accurate in describing phenomena of interest as is Encyclopedia Britannica (Giles, 2005), a particularly impressive feat considering its writers are entirely unpaid volunteers. The use of such effort to meet organizational goals is most commonly called crowdsourcing. Evidence is beginning to mount that this technique is a valid method by which
to gather information from the masses for a wide variety of purposes, from the collection of innovative business ideas (Kozinets, Hemetsberger, & Schau, 2008) to the recruitment of research participants (Behrend et al., 2011). Social media is not limited to text-based content on web pages; a fifth type of social media is the multi-user virtual environment (MUVE; sometimes called a virtual world). MUVEs are text-based, graphical 2D, or rendered 3D navigable environments where users can interact with each other over an intranet or the internet. Each user currently participating in a MUVE is represented to other users by his or her agent in real-time; if two users were using a MUVE simultaneously and their agents were collocated in the virtual space, their agents would be able to interact with each other (although the degree to which they could interact depends upon the features of the MUVE). When an agent is graphical or a 3D model, it is called an avatar. For example, in a text-based MUVE, a user might type “go north” or “say hello” to navigate and communicate, respectively. Other users would see a text-based announcement of this action (e.g. “John entered this room from the south”). In a 2D MUVE, the user might click on destinations to indicate where his or her avatar should move, and others would see a graphic representing that user move from location to location. In a 3D MUVE, the user most often presses the arrow keys while controlling a 3D camera with the mouse, and others would see the user’s 3D representation (typically a 3D model of a person) move. The most popular commercial 3D MUVE, Second Life, allows users to create virtual objects and experiences with few limits placed on what might be accomplished. For example, the first author of this chapter runs a virtual assessment and training research laboratory in Second Life, complete with a conference room, a virtual projector for delivering presentations, and a virtual office for private meetings (Landers, n.d.). This 3D virtual space can be reconfigured at will, making it a relatively high-fidelity but inexpensive alternative to almost any traditional meeting space, including training rooms and assessment centers. If a new virtual object is needed, it can often be created and added to that environment within minutes to be experienced by all users currently in the virtual vicinity. Like the other social media platforms described here, Second Life is open and allows anyone to join the community, but also provides for different levels of access so that companies or individuals seeking secure, proprietary spaces can be served. The second author of this chapter, for example, regularly attended confidential meetings in Second Life while she was
employed at IBM. IBM had purchased space in Second Life, and only those with IBM.com identifiers were permitted to enter. The list of media above is neither parsimonious nor exhaustive; new types of social media are invented regularly, and new systems often blur the lines between previously defined technologies. For example, YouTube (www.youtube.com) centers on user-posted videos; however, it also includes personal profiles, friend lists, interpersonal communications, and other characteristics of a traditional SNS. Yelp (www.yelp.com) also has these SNS features but is focused on the provision of anonymous, unbiased reviews of local restaurants, hotels, tourist attractions, and so on. Angie’s List (www.angieslist.com) shares anonymous, unbiased reviews of housing contractors, and users have personal profiles, but does not release this information publicly; instead, a paid membership is required to ensure the integrity of reviews. The most appropriate categorizations of these technologies are unclear, and no comprehensive taxonomy exists so far to distinguish them. Perhaps more central to the mission of this book, there is also no clear understanding of the specific psychological effects of including or excluding any particular characteristics of each of these social media. Perhaps the best attempt so far is Kaplan and Haenlein’s (2010) six-category classification of technologies based upon their media richness and the degree of self-presentation they enable, but this is far from definitive. Miles and Hollenbeck (this volume), in their review of teams and technology, found that although interaction frequency and quality were of key importance to both face-to-face and virtual teams, interaction frequency and interaction quality had a much greater effect on groups with low authority differentiation. They suggested that virtual teams responsible for quick decisions are likely to be more effective if they have a strong, hierarchical leader. They also found mixed results as to whether virtual teams lag behind in trust formation. Given the less hierarchical nature of social media and the peer-to-peer trust inherent in social networks, it remains to be seen whether the factors that influence virtual team effectiveness operate in the same way in social media communities. While reduced media richness, with effects like the absence of social and status cues, is problematic for members of small teams, these issues may be amplified in larger groups of often casual acquaintances. In summary, the term “social media” does not refer to a specific technology, but rather to a family of technologies with a common set of ideals at the core of their design: 1) users should be able to generate their own content to share as they wish; 2) information should be free and honestly provided; 3) personal opinions from unbiased persons can be
trusted; and 4) the mob is wise. Anyone attempting to deploy a social media policy or solution within their organization should be careful to heed these ideals when planning. Without guidance, the burden placed on employees navigating this brave new world of social media can be quite high. Workers must consider how their personal social media presences impact their work, their reputation at work, and their organization’s reputation. Should an employee’s personal social media presence be separate from their professional presence? Without careful consideration of these issues, even seemingly forward-thinking social media efforts can backfire spectacularly (for a prime example, see Hafner’s (2007) discussion of the scandal triggered by an employee of Anheuser-Busch editing the organization’s Wikipedia entries unethically).
THE IMPACT OF SOCIAL MEDIA OUTSIDE OF I/O PSYCHOLOGY
The introduction of social media has fundamentally changed the way people interact with information (Solis & Breakenridge, 2009). Most evidence of this is quite recent, coming from scattered interdisciplinary research and from simple observation of world news. To illustrate this, brief descriptions follow of some of the major world events in which social media has recently played a central role. In 2008, social media played a role in the U.S. presidential election primarily through increased political participation among young adults. Survey research conducted by Pew Internet Research revealed that 42 percent of adults aged 18–29 sought campaign information through the internet while 27 percent sought it on social network sites, making social media the second most popular source of information for this demographic. Although social media content is not generally produced by trained journalists and is thus in many ways one of the most biased potential sources of political information, this survey also revealed that adults viewed internet-based sources as less politically biased than coverage by traditional news organizations (Kohut, 2008). Almost all political candidates used social media in this election to communicate with the public, and social media became an easy way to actively participate in a political campaign (Hayes, 2008). While correlational evidence is available tying political use of social media to a variety of outcomes, including political participation (Kushin & Yamamoto, 2010), the direction of causality is unclear.
Online Social Media in the Workplace • 291 Yates and Paquette (2011) provide a case study on the use of social media in response to the 2010 Haiti Earthquake, as U.S. government relief efforts between the U.S. Agency for International Development, the U.S. State Department, and the U.S. military were coordinated via two social media platforms. Information during a disaster tends to be made available to relief personnel piecemeal, so social media’s flexibility made it an ideal medium by which to spread that information among the many relief forces that needed it. This drove the relief efforts to coordinate using two platforms: 1) SharePoint, a collaborative tool with many “social” features; and 2) MediaWiki, a customizable wiki platform where individual informational web pages could be created on each element of the relief effort by anyone involved. While a great deal of information was added to the wiki (over 1,000 updates), the people generating the content often did not take the time to categorize, label, and organize their contributions, making it a rich but difficult-to-navigate resource. As relief personnel increased their expertise with both platforms, the value of each improved. In 2011, an 8.9-magnitude earthquake off the coast of Japan and the resulting tsunami caused hundreds of deaths, as well as severe damage to residential and commercial structures, including a nuclear power plant (Buerk, 2011). During the relief efforts, social media played a key role in two ways. First, it provided a quick and convenient medium by which to collect monetary support. Zynga, Inc., a company that creates social games that are played in the context of a SNS (e.g. FarmVille, CityVille, Mafia Wars), used its connection to its player base to collect $2.5 million for Save the Children and the American Red Cross in less than two weeks (Lipton, 2011). Second, Twitter specifically played a major role in the crisis, as 1) a source of first-hand accounts of the disaster as it happened; 2) an official source of information from the crisis management office in Kesennuma, one of the hardest-hit cities; and 3) a lifeline for residents trapped by flood waters (Dugan, 2011). Also in 2011, social media played a role in the political uprisings in the Middle East, including those in Egypt, Tunisia, and Libya. While social media certainly did not create revolution, it was an important tool in the arsenal of those seeking change (Villarreal, 2011). The perceived impact of social media can most clearly be observed by the decision of these governments to shut down internet access for the population as a means by which to slow protester progress (Kanalley, 2011; Olivarez-Giles, 2011a). Social media has been credited by some as the most important news source in Tunisia during regime change despite attempted censorship by those in power at the time (Olivarez-Giles, 2011b).
292 • Richard N. Landers and Andrea S. Goldberg Just as the face of world affairs is changing due to social media, the structure of work and work processes are beginning to change. Rapid communication and easier deployment across physical and temporal boundaries enabled by social media change how work can be conducted. Not only must organizations engage with customers via social media to remain competitive, but they must also adapt to new employee expectations. As “digital natives,” those who grew up with social media, continue to enter the workforce, they bring with them expectations for easy online collaboration with all those around them in both virtual and brick and mortar organizations. With this shift, there will be increased pressure to re-examine traditional I/O knowledge about job roles, teaming, training and leadership in these new frontiers of employee behavior.
THE POTENTIAL OF SOCIAL MEDIA WITHIN I/O PSYCHOLOGY Despite the great deal of power and influence evidently wielded by those using social media, corporations have been somewhat slow to adopt social media policies and strategies. This is peculiar, especially considering the human tendency towards fads and fashion (Dunnette, 1966), as there is little currently as fashionable as social media. Some argue that this resistance is because the introduction of social media, and the resulting shift in the control of information from the rich and powerful to the average person, is fundamentally opposed to the goals of many traditional organizations (Solis & Breakenridge, 2009). From the perspective of the present authors, this wave of change is inevitable; for an organization to maximize its chances for survival, it is far better to embrace it as early as possible. To illustrate the potential of social media in the context of I/O psychology, we will describe two case studies of organizations that have jumped in head first.
CASE 1: RECRUITMENT AND RETENTION AT THE ROYAL BANK OF CANADA One exceptionally innovative use of social media was the deployment of a private online social network by Royal Bank of Canada (RBC), intended
Online Social Media in the Workplace • 293 to improve recruitment and retention among Aboriginal Canadians, part of RBC’s effort to meet the requirements of Canada’s federal Employment Equity Act. The path toward social media as a potential solution began with a consideration of how technology might be leveraged to improve RBC’s relationship with Aboriginal employees. There was a long-term retention challenge among this group of employees. It was thought that one of the reasons for this was that Aboriginal employees had a limited view of how financial services contributed to the Aboriginal community. The head of diversity at RBC felt they needed to do a better job communicating how helping the Aboriginal people to be financially secure did make a significant contribution to the Aboriginal community. Another, perhaps greater issue, was the fact that despite RBC employing approximately 1,500 Aboriginal Canadians, these employees were spread throughout the country. There was concern that these individuals might feel isolated and out of place within RBC, as each would often be the only Aboriginal person within a branch or office. Around the same time that these investigations were being made, the senior project manager for talent management in HR had been asked to assess the degree to which older generations at RBC would be willing to adopt new technologies. She found that if they were trained properly and saw a benefit to a technology, they would be quick to adopt it, thus opening the door for a technology initiative to solve HR problems. The need to enhance communication among socially and physically isolated employees in a population receptive to new technologies created a clear solution—a social network site, which was later named RBC One Heart. Interestingly, this was not the first time a SNS was suggested at RBC. Previously, one of the business groups that ran the contact centers was facing challenges with information exchange among employees. Creating a SNS was a potential solution, enabling employees to keep track of both repetitive and unique customer issues and to share their solutions to those issues. Ultimately, the group did not want to create a separate source of information from what they already had, instead choosing to implement several workarounds. Given the prior reluctance of one of RBC’s business units to create a SNS, HR decided to sponsor the Aboriginal-focused network themselves and recruited personnel from Diversity and RBC’s IT function to collaborate on the project. Sponsorship of the project was also gained from the Royal Eagles Aboriginal Leadership Council, an internal group concerned with management of Aboriginal markets and oversight of the
294 • Richard N. Landers and Andrea S. Goldberg promotion and employment of Aboriginal Canadians at RBC. The Council had previously made a commitment to the Canadian government to improve Aboriginal goals and had difficulty achieving this objective. This motivated them to try something more innovative like the One Heart project. The Council liked the idea of a SNS and felt that it would connect the Aboriginal population across the country. Both the council and RBC HR were concerned with the implications if this new platform were to fail, as this was the first major attempt to implement an employee SNS within RBC and would, as a result, be closely scrutinized. Thus, it was critical to get commitments from all those involved to persevere until a workable solution was found. Support from IT was critical, as the need to install new software to set up the network necessitated a separate program manager to be available as support. One Heart launched in April 2010 on the front page of RBC’s employee website alongside the promotional content, including a quote from the CEO. Information about it was included in the monthly manager newsletter. Although One Heart was focused on the Aboriginal population, anyone could join. The team reached out to the groups who could leverage the site and provide content. Despite these efforts, the initial adoption rate was low. The targeted groups it was intended to help were not participating. It simply was not part of their routine. They ran a contest for the name and to design the logo, which helped, but not as much as was hoped. They improved on the value of the content and increased the aggressiveness of their promotional campaign by expanding the core team to six people, including an Aboriginal employee responsible for Aboriginal recruiting. They also requested and gained increased involvement of the Royal Eagles Council. Finally, they asked this new team to write new more relevant material for the network while seeking commitments from other RBC groups. As information on the adoption of One Heart began to come in, the team discovered that many managers had not understood the purpose of the network and had told their Aboriginal employees that they could not be on the network during company time. This prompted yet another promotional effort, concentrating on communicating that One Heart was an acceptable use of work time: “It’s not Facebook. It’s for work.” While this effort did improve usage, the conversation was primarily social—just what the managers feared. The team quickly realized that if they wanted One Heart to be a business-focused SNS, they would need to make a specific effort to make it that way. They decided to recruit guest authors
to write meaningful and sometimes even controversial pieces to stimulate discussion related to work; for example, a piece was written on racism and discussed on the network. After these changes, usage increased. The team next engaged their 30 Aboriginal university student interns by asking them to create a group on the site that encouraged them to talk and connect with one another. The students were asked to start blogging about their experiences at RBC. As usage of the network increased, other parts of the RBC HR organization began using the site for their own purposes, including the Recruitment and Talent Management teams. As of this writing (2011), there are slightly under 1,000 members, although the activity levels of these members vary greatly, as is seen with other social network applications. The implementation team continues to monitor the site and advocate for it within RBC. They still feel the level of dialog is too safe, and they want to promote mentoring but are concerned that the mentoring model that works in person may not work online. One effort that is going as hoped is participation by the Royal Eagles, who are required to use the site for coordination and work purposes. This requirement has served to increase general traffic, making use of One Heart more routine. The Royal Eagles leadership team has also taken up writing more impactful and meaningful pieces, and the quality of the dialog is increasing. In terms of measurable results, the biggest outcomes so far have been for those in recruitment; there is a buzz about One Heart in the larger community. The content provided by Aboriginal employees on One Heart has also improved onboarding for new Aboriginal employees, although it has not yet measurably improved retention. Expansion of the system to improve socialization for all new employees is being actively considered. This case can thus be regarded as a mixed success, with several unexpected benefits and challenges as it evolved. Its value extended far beyond the original expectations for it, but organizational climate played an unexpected role in limiting its initial effectiveness. Employees in general saw value in being connected, and enthusiasm for the system was high. Although the site was targeted at Aboriginal employees, there is potential value for many groups. For example, there might be only one new hire at an RBC location, and an online new hire community could serve to reduce that person’s feelings of isolation and improve socialization into RBC.
CASE 2: LEARNING AND ORGANIZATIONAL DEVELOPMENT AT PEPSICO At PepsiCo, a major organizational goal was established to improve communication within the research and development (R&D) community. In 2010, the original concept was to build a global R&D university with both in-person and online instruction. At the time, each unit/country had its own website and set of both in-person and online courses, creating many redundancies across the organization, with limited collaboration between units. The project team felt that by updating this model and launching a more centralized e-learning initiative, there would be more participation across geographic and functional boundaries. More central to our purpose here, social media elements are being added as the platform evolves to better meet these communication goals. One of the most critical social media elements added was video. Because video is media rich, allowing for the communication of non-verbal cues such as facial expressions and gestures, it was decided that video would have the best odds of helping learners to overcome the various language and cultural barriers involved in connecting around 3,000 worldwide PepsiCo R&D associates and HR partners. Video was thus given a central focus. To promote the project, the team launched a video contest called “Testtube” and modeled it as a virtual international science fair. Employees were asked to develop short films, approximately three minutes in length, focused on learning, with content appropriate to the workplace. Userfriendly video cameras were mailed to 15 of the largest sites and to individual employees who requested one. A vendor was also available to edit participant videos, and this vendor could perform a wide variety of editing tasks for them, including embedding graphics and animations. One video, for example, had a 3D animated character with audio synced to its lip movement. This editing process was also implemented to ensure that videos were not uploaded directly by the employees. There was some initial concern that employees might share inappropriate or confidential content; however, no inappropriate content was ever sent to the editor. While some employees simply shared the work tasks in which they were involved, others focused on new technologies and explored what these technologies could be used for in the R&D context. Program creators initially expected approximately 10 videos to be submitted but ultimately received 15 entries from 7 countries. To encourage
Online Social Media in the Workplace • 297 communication between the entrants, employees were encouraged to comment on the submitted videos, and contact information was shared among entrants. Although some comments were posted online, most employees contacted each other via e-mail. At the end of the contest, several videos were randomly selected, and prize money was provided to charities based on this selection. Follow-up videos are planned to showcase how the charities are using this money and how the employees are currently engaged with these charities. The overall reaction to the video contest was very positive. The project team hoped that in addition to changing how training is to be delivered, this type of platform would enable an organizational culture that was more collaborative. There were requests to repeat the contest, and the team is planning a yearly competition. In the interim, the deans of the R&D University will be adding video blogging as a way to communicate with employees. There is still reluctance, however, among management for employees to be able to directly upload video footage, viewable by all employees without screening. Management was pleasantly surprised at how positive the reaction was to the program and how eager employees were to participate in this new R&D learning community. Seeing this success, other parts of PepsiCo are now looking to launch their own universities to build on this unit’s work. PepsiCo’s IT group is also considering a change in software that would improve collaboration and employee sharing. Measurable outcomes have not yet been collected to assess the overall effectiveness of this system, but surveys, interviews and focus groups are planned. Although the overall project was considered a success, there were many lessons learned, to be taken into consideration for future projects of this type. Although participation in the video contest exceeded expectations, the team discovered that it might have been even more successful if not for its timing. The team started the contest in October, with a deadline at the New Year. Some employees were more technologically sophisticated than expected and did their own editing, which meant that the videos came to the team in many different levels of polish. There was a great deal of variance in terms of both time and money spent on video production; fortunately, there was no backlash about the differences in quality in relation to who won the contest. There were also several technical issues, some of which could not be resolved. Bandwidth was limited in some locations, which made it difficult to view submitted videos and participate in video-based collaborative activities. Firewalls prevented some countries from participating at all. The
contest was launched with a web conference, but not all countries could participate; those that could not were instead mailed informational DVDs. This delay in receiving materials meant that those employees’ experience with the contest was out of sync with the efforts to promote it. There were also language issues; although all the videos were in English, some participants found some of the videos difficult to understand. Customs was another challenge; not all cameras arrived at their destinations. Because of all of these challenges, it became very clear that demonstrated support from leaders within each country was crucial. When those leaders actively participated themselves, video content improved. According to the project team, there has been increased discussion at PepsiCo about international collaboration and shared learning as a result of this initiative. There was a recognition that software developed for one purpose or function could provide value to another part of the organization. It was also clear that employees wanted to collaborate and only needed the tools to enable them to do so. The project team made a somewhat risky decision to develop a new platform to do this rather than to refine preexisting processes. Fortunately, senior management was generally supportive of this decision, having regular conversations with the project team and making positive comments about publicity efforts. A few who perceived the project as cutting edge and trendy were exceptionally supportive. At the time of this project, PepsiCo did not have an overall organizational vision of how social media should be used, although they have now hired someone to manage this effort. While they had received a great deal of positive press regarding their use of social media to reach out to customers (i.e. the Pepsi Refresh Project), internal efforts at exploring social media had been slow. Senior management was not negative toward internal use of social media, but they did not see it as an avenue for improving effectiveness or productivity, making it a lower priority than social media marketing. A few peer-to-peer employee communities existed throughout PepsiCo, but there were no large-scale efforts outside of R&D.
LESSONS LEARNED, QUESTIONS REMAINING Because there are so many unanswered questions surrounding social media, it is challenging to pick a particular place to start. Research on social media in organizations is in its infancy. So rather than invent an entire research
Online Social Media in the Workplace • 299 area from the ground up, the remainder of this chapter will relate the lessons learned from these case studies back to concepts common to I/O psychology. The use of social media in this context is, after all, fundamentally about people at work. We hope that this foundation plus our analysis provide a reasonable starting point for future I/O psychologists researching and practicing in this domain. Two important distinctions must be made when discussing social media in an I/O context. First, social media as it is used to connect with customers and social media as it can be used to motivate employees are different phenomena. As demonstrated by the PepsiCo case, these efforts are not necessarily linked. It is somewhat natural to consider a company’s “social media strategy” as a single plan, but it is more appropriate to consider these strategies as separate but linked by a shared technology, especially considering the different organizational goals each is attempting to address. Traditionally, social media strategies are intended to alter consumer behavior, increase brand awareness, and improve customer loyalty (Ang, 2011). To highlight the distinction described above, we will hereafter refer to these efforts as customer-centric or external social media strategies. In contrast, employee-centric or internal social media strategies can be developed to address employee development and motivation needs, as demonstrated by the case studies provided here. Internal and external social media strategies can be linked as loosely coupled systems (see Orton & Weick, 1990)—e.g., employees might be recruited to put together promotional materials for an internal social media-based contest, with the winning video used as part of an external social media strategy—but this connection is not required to be successful at either. Second, social media/social network sites and social network theory are not directly related. Social media refers to the family of technologies discussed here, while social network theory refers to the mapping and exploration of interpersonal social networks. There are likely many interesting uses of social network theory in the context of social media, but these have not yet been explored. For example, are friends and acquaintance networks in interpersonal social networks replicated in SNSs? Early exploration of SNSs as used for leisure purposes indicates that they are quite similar. Online networks are replicated so effectively that most people make no attempt through social media to meet people outside of their in-person social network. This, in fact, is the rationale for defining SNS as “social network site” rather than “social networking site,” as very little networking seems to occur on such sites (boyd & Ellison, 2008). Is this true for employee-centric social media as well? People have different
motivations for expanding their social networks in the context of their work, including career advancement. Would employees use an internal SNS to increase collaboration with co-workers? This is a question not yet answered. Social media refers to a family of technologies; it is critical to understand that the use of social media is not itself an organizational intervention but a method by which to deliver such interventions. Arguments like this have occurred in a variety of areas within I/O psychology in the past. Many of us cringe at phrases like “assessment centers are excellent predictors of job performance” and “e-learning is better than traditional learning” because each of these phrases describes technologies that are mere vehicles for the measurement and improvement of other psychological traits and behaviors. We might use an assessment center because we believe it will be better able to elicit leadership behaviors than a paper-and-pencil test, but this does not mean that “assessment center performance” is itself a meaningful construct. We may use e-learning because it is more convenient and more cost-effective than in-person training, but this does not mean that e-learning itself teaches anything to anyone. Whether the medium is a PowerPoint presentation or lecture videos, it is the training designer behind those materials that drives learning. In a social media context, no one should say “we need to develop an internal social media strategy.” Instead, social media should be used when it is the best tool available to solve a particular organizational challenge. In the RBC case described above, social media was developed specifically to improve retention among the Canadian Aboriginal population; the team tasked with addressing this challenge had specific reasons to develop an internal social media strategy. Although RBC had not yet been successful in reaching this goal, all decisions made along the way were shaped by it, and the effort continued. Without this end-goal in sight, it is unlikely the RBC team would have persevered long enough to discover One Heart’s value. In the PepsiCo case, social media was a solution to a communications problem. An organization should never simply decide, “let’s have an internal employee social network site.” The lack of direction behind such a declaration would be clearly communicated to employees through the design of, and publicity surrounding, the SNS. If management does not know why the company is creating employee-centric social media, the employees will not know either, nor will they ever use it. This does not necessarily mean that a social media strategy should be restricted to its initial purpose; instead, it means that a social media strategy should be a specific and targeted solution to a problem from the very beginning, revised as needs change.
As organizations begin to develop internal social media strategies, it will be increasingly important to explore issues surrounding employee privacy. Electronic monitoring of employee behavior is already a controversial issue (Zimmerman, 2002; also see Alge & Hansen, this volume), and monitoring of employee internet behavior is even less explored. Although the implementation of monitoring policies may undermine employee trust (Stanton & Weiss, 2003), many organizations choose to do so regardless. When the technology being monitored is part of an internal social media strategy, what changes? When does employee behavior cross the line from a legitimate use of time collaborating with co-workers into production deviance, and how should that line be defined? If an employee is punished for inappropriate activities on a work-sponsored website, how does that affect the justice perceptions of that employee and his or her coworkers, and ultimately, behavior on the site? Any employee who has previously used social media will also have pre-existing beliefs about what social media are and what type of online behavior is permitted, which will not necessarily align with organizational guidelines. Support from upper management appears critical to the success of internal social media projects like those described in these case studies. In the PepsiCo case, participation by management apparently had an important influence on participation by employees. Leadership plays an important role here, but to what extent is online leadership the same as traditional leadership as I/O psychologists study it now? For example, the web and the communication methods it typically employs (i.e., text) are not media rich; as a result, it may be more difficult for charisma to be communicated via SNSs. Some researchers have suggested that transformational leadership is more strongly related to job performance in computer-mediated/virtual teams (Purvanova & Bono, 2009), and this may be because of a lack of strong online leaders. However, this area of research is nearly as new as the study of employee-centric social media; the extent to which leadership is “different” in the context of an internal social media strategy is mostly unknown. Some evidence comes from Li (2010), a prominent researcher and consultant in the social media space, who, while exploring organizations that had implemented social media solutions externally and internally, observed that the most successful organizations could be characterized as having “open leadership styles.” In contrast to traditional command-and-control leadership, these organizations supported participative approaches to information gathering, information dissemination, and decision-making. The leaders of these organizations shared information and power laterally,
effectively using social media tools and participative techniques such as blogs, crowdsourcing, and consensus building. These open leaders and their organizations demonstrated a much higher level of tolerance for risk taking and failure than traditional organizations. The open leaders were willing to experiment, and their organizations did not punish them for failure; instead, leaders and their subordinates alike saw failure as a sometimes necessary step of the learning process. We see this tolerance for experimentation demonstrated in the RBC case, where One Heart’s initial lack of success triggered not cancellation of the initiative but rather further exploration of why the effort was not working as expected, followed by the development of more creative ways for it to succeed. I/O research on leadership needs to expand to reflect the potentially unique skills needed by leaders in virtual organizations of all types, especially those that are transparent, where employees and other stakeholders play a much larger role. Perhaps the most immediate need in the study of social media is the publication of proof-of-concept studies where internal social media programs have been deployed with carefully measured outcomes (e.g., Landers & Johnson, 2011). Although many organizations are enthusiastic about their own use of internal social media strategies, we had great difficulty getting these organizations to reveal their stories. The use of employee-centric social media is by definition an organization-wide intervention, and thus these organizations need to share their results so that I/O psychology can advance in this area. Once we have established which interventions can meet organizational goals, the crucial next step is to explore and refine these strategies to understand how they work and under what conditions. Only then will I/O psychology establish a science of social media at work.
REFERENCES Alge, B. J., & Hansen, S. D. (2013). Workplace monitoring and surveillance research since “1984”: A review and agenda. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York: Routledge Academic. Ang, L. (2011). Community relationship management and social media. Journal of Database Marketing & Customer Strategy Management, 18, 31–38. Behrend, T. S., Sharek, D. J., Meade, A. W., & Wiebe, E. N. (2011). The viability of crowdsourcing for survey research. Behavior Research Methods, 43, 800–813. doi: 10.3758/s13428-011-0081-0 boyd, D. M., & Ellison, N. B. (2008). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13, 210–230. Buerk, R. (2011, March 11). Japan earthquake: Tsunami hits north-east. BBC News. Retrieved May 31, 2011 from www.bbc.co.uk/news/world-asia-pacific-12709598
Online Social Media in the Workplace • 303 Cisco. (2010). Cisco 2010 midyear security report. Retrieved May 31, 2011 from www.cisco. com/en/US/prod/collateral/vpndevc/security_annual_report_mid2010.pdf Dugan, L. (2011, March 17). One man’s story: How Twitter helped during the tsunami. AllTwitter. Retrieved May 31, 2011 from www.mediabistro.com/alltwitter/one-mansstory-how-twitter-helped-during-the-tsunami_b4615 Dunnette, M. D. (1966). Fads, fashions, and folderol in psychology. American Psychologist, 21, 343–352. Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438, 900–901. Hafner, K. (2007, August 19). Seeing corporate fingerprints in Wikipedia edits. New York Times. Retrieved May 31, 2011 from www.nytimes.com/2007/08/19/technology/ 19wikipedia.html Hayes, R. A. (2008, November 20). Providing what they want and need on their own turf: Social networking, the web, and young voters. Paper presented at the 94th annual convention of the National Communication Association, San Diego, CA. Retrieved online May 31, 2011 from www.allacademic.com/meta/p260797_index.html IBM. (2011). The Greater IBM Connection: The business network for current and former IBM employees and retirees. Retrieved May 31, 2011 from www.ibm.com/ibm/greateribm/ Kanalley, C. (2011, January 27). Egypt’s Internet shuts down, according to reports. Huffington Post. Retrieved May 31, 2011 from www.huffingtonpost.com/2011/01/27/ egypt-internet-goes-down-_n_815156.html Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of social media. Business Horizons, 53, 59–68. Kohut, A. (2008, January 11). Social networking and online videos take off: Internet’s broader role in campaign 2008. The Pew Research Center for the People and the Press. Retrieved May 31, 2011 from www.pewinternet.org/~/media/Files/Reports/ 2008/Pew_MediaSources_jan08.pdf.pdf Kozinets, R. V., Hemetsberger, A., & Schau, H. J. (2008). The wisdom of consumer crowds: Collective information in the age of networked marketing. Journal of Macromarketing, 28, 339–354. Kushin, M. J., & Yamamoto, M. (2010). Did social media really matter? College students’ use of online media and political decision making in the 2008 election. Mass Communication and Society, 13, 608–630. Landers, R. N. (n.d.). Old Dominion University Second Life Research Facility [ODU-SLRF]. Accessed May 31, 2011 at http://slurl.com/secondlife/Zamyatin/113/155/62/ Landers, R. N., & Callan, R. C. (2011). Casual social games as serious games: The psychology of gamification in undergraduate education and employee training. In M. Ma, A. Oikonomou, & L. C. Jain (Eds.), Serious Games and Edutainment Applications (pp. 399–424). Surrey, UK: Springer. Li, C. (2010). Open Leadership. San Francisco: Jossey-Bass. Lipton, A. (2011, March 28). Lady Gaga donates $1.5 million in total to Zynga’s Japan earthquake relief campaign and the American Red Cross. Retrieved May 31, 2011 from www.zynga.com/about/article.php?a=20110328 McCollum, J. (2010, February 4). 29% of Companies Have a Social Media Policy. Retrieved June 25, 2011 from www.marketingpilgrim.com/2010/02/29-of-companies-have-asocial-media-policy.html Mendelson, L. (2011, January 21). Pew Research Center Study Finds Older Americans are Becoming More Active on the Internet through Social Networking and Blogging. Retrieved June 25, 2011 from www.digitalworkplaceblog.com/internet/blogs/pewresearch-center-study-finds-older-americans-are-becoming-more-active-on-theinternet-through-soc/
304 • Richard N. Landers and Andrea S. Goldberg Miceli, M. P., Near, J. P., & Dworkin, T. M. (2008). Whistle-blowing in organizations. New York: Routledge. Miles, J., & Hollenbeck, J. (2013) Teams and Technology. In M. D. Coovert & L. F. Thompson (Eds.), The psychology of workplace technology. New York: Routledge Academic. Milian, M. (2009, May 3). Why text messages are limited to 160 characters. Los Angeles Times. Retrieved May 31, 2011 from http://latimesblogs.latimes.com/technology/ 2009/05/invented-text-messaging.html Olivarez-Giles, N. (2011a, February 18). Libya’s Internet reportedly down as violence against anti-government protesters continues. Los Angeles Times. Retrieved May 31, 2011 from http://latimesblogs.latimes.com/technology/2011/02/libya-has-shutdown-the-internet-in-light-of-protests-reports-say.html Olivarez-Giles, N. (2011b, January 15). In Tunisia, social media are main source of news about protests. Los Angeles Times. Retrieved May 31, 2011 from http://articles. latimes.com/2011/jan/15/business/la-fi-tunisia-internet-20110115 Orton, J. D., & Weick, K. E. (1990). Loosely coupled systems: A reconceptualization. Academy of Management Review, 15, 203–223. PR Newswire. (2010, June 1). 86% of recruiters use social media to research applicants citing importance of social media etiquette. The Street. Retrieved May 31, 2011 from www.thestreet.com/story/10771136/1/86-of-recruiters-use-social-media-to-researchapplicants-citing-importance-of-social-media-etiquette.html Purvanova, R. K., & Bono, J. E. (2009). Transformational leadership in context: Face-toface and virtual teams. The Leadership Quarterly, 20, 343–357. Sogn, J., & Pfeifle, J. W. (2006). News from employment law alliance blogging and the American workplace as work-related web blogs proliferate. Retrieved May 31, 2011 from www.lynnjackson.com/news/article.php?news_id=55 Solis, B., & Breakenridge, D. (2009). Putting the public back in public relations: How social media is reinventing the aging business of PR. Upper Saddle River, NJ: Pearson Education. The SIOP Executive Board. (2009). From the Executive Board: Should SIOP change its name? The SIOP Exchange. Retrieved May 31, 2011 from http://siopexchange. typepad.com Stanton, J. M., & Weiss, E. M. (2003). Organizational databases of personnel information: Contrasting the concerns of human resource managers and employees. Behavior & Information Technology, 22, 291–304. Twitter. (n.d.). Retrieved May 31, 2011 from http://twitter.com/about Valentino, S., Fleischman, G. M., Sprague, R., & Godkin, L. (2010). Exploring the ethicality of firing employees who blog. Human Resource Management, 49, 87–108. Villarreal, A. (2011, March 1). Social media a critical tool for Middle East protesters. Voice of America. Retrieved May 31, 2011 from www.voanews.com/english/news/middleeast/Social-Media-a-Critical-Tool-for-Middle-East-Protesters-117202583.html Weber, T. (2010, October 3). Why companies watch your every Facebook, YouTube, Twitter move. BBC News. Retrieved May 31, 2011 from www.bbc.co.uk/news/ business-11450923 Yates, D. & Paquette, S. (2011). Emergency knowledge management and social media technologies: A case study of the 2010 Haitian earthquake. International Journal of Information Management, 31, 6-13. Zhao, D., & Rosson, M. B. (2009). How and why people Twitter: The role that microblogging plays in informal communication at work. Proceedings of GROUP 2009 (pp. 243–252). New York: ACM. Zimmerman, E. (2002). HR must know when employee surveillance crosses the line. 
Workforce, 81(2), 38–45.
Section IV
Reflections and Future Directions Section Introduction Michael D. Coovert and Lori Foster Thompson
A primary goal of this book is to look at the present and into the future in order to see how technology is changing the workplace and the lives of the individuals within it. Having authors look to the future is simultaneously an exciting and intimidating endeavor. This is especially true when dealing with technology. Rapid developments characterize hardware, software, and products, making our personal and work lives more enriched and, as it sometimes feels, more harried. This is because technology is a great enabler, augmenting our intelligence and providing the capacity to produce new and better products and to get more done within a given span of time. The flip side (and sometimes dark side) of this increased capability is the expectation of both doing and producing more. This expectation, depending on how it is managed or interpreted, can lead to increased motivation on the positive side or to role overload on the negative side. Organizations will nevertheless feel ever-increasing pressure to adopt the most efficient technologies in order to remain competitive. I-O psychologists will be called upon to help organizations
306 • Michael D. Coovert and Lori Foster Thompson optimize human–technology symbiosis in order to capitalize on the outcomes associated with effective human–systems integration. The authors of the chapters up to this point are noted experts in their respective areas who have admirably handled the challenge of considering the current state of the art of their topic and providing a look into the future. We believe it is beneficial to consider forward-looking visions from many viewpoints, including those provided by leaders in one’s field with the depth and breadth of perspective gained through experience and seniority. To this end, we have asked three preeminent I-O psychologists to conclude this book by providing their vision of the psychology of workplace technology’s future.
14 Looking Back, Looking Forward: Technology in the Workplace Wayne F. Cascio
There are many intersections between Industrial and Organizational (I/O) psychology and technology, and this book neatly captures many of them. In fact, technology touches almost every aspect of I/O psychology. This is not a new trend. The editors asked me to reflect on my 1995 article in the American Psychologist, namely, “Whither Industrial and Organizational Psychology in a Changing World of Work?” In rereading that article, I found five technology-related themes particularly relevant. I begin the discussion of each theme with material from my 1995 article, and then follow that with brief reflections on the current state of affairs.
GLOBAL LABOR MARKETS In 1995 I identified accelerated global competition as the single most powerful fact of organizational life in the 1990s. I argued that there is no going back, as global labor markets have become a reality. Workers in all developed economies compete regularly with each other for knowledgeintensive jobs. At the same time, it takes more than trade agreements, technology, capital investment, and infrastructure to deliver world-class products and services. It also takes the skills, ingenuity, and creativity of a competent, well-trained workforce. Our competitors know this, and they are spending unstintingly to create one. (p. 928)
Today, cheap labor and plentiful resources, combined with ease of travel and communication, have created global labor markets (World Economic
308 • Wayne F. Cascio Forum and Boston Consulting Group, 2011). This is fueling mobility as more companies expand abroad and people consider foreign postings a natural part of their professional development. To be sure, global labor markets enable employment opportunities well beyond the borders of one’s home country. This means that competition for talent will come not only from the company down the street but also from the employer on the other side of the world. It will be a seller’s market, with talented individuals having many choices. Countries as well as companies will need to brand themselves as employers of choice in order to attract this talent (Cascio & Boudreau, 2012).
TECHNOLOGY IN THE WORKPLACE My 1995 article argued that technology is: breaking down departmental barriers, enhancing the sharing of vast amounts of information, creating “virtual offices” for workers on the go, collapsing product-development cycles, and changing the ways that organizations service customers and relate to their suppliers and to their employees. To succeed and prosper in the new world of work, companies need motivated, technically literate employees. (p. 929) [The new world of work] will require constant learning, more higher-order thinking, and the availability to work outside the standard hours of 9AM to 5PM. (p. 930)
It is no exaggeration to say that the information revolution will transform everything it touches, and it will touch everything (Friedman, 2005; Kessler, 2011). Information and ideas are keys to the new creative economy because every country, every company, and every individual depends increasingly on knowledge. Consider just one I/O-related area—staffing. Mead, Olson-Buchanan, and Drasgow (this volume) present detailed examples of technology-based selection. More broadly, however, HR professionals will need to recognize collaborative technology as a key component of their firms’ global hiring strategy—leveraging social-networking sites and researching which sites are most effective in each market. Technology
Technology in the Workplace • 309 enables virtual workplaces, which have spawned a revolution in telework and global virtual teams (Johnson, 2011; Cascio, 2012). Whether employees work only domestically, or also internationally across multiple time zones, the workday has become seamless and also endless for millions of workers in countries everywhere. The internet, along with mobile devices that permit access to it anywhere and any time, makes such collaboration possible. Indeed, the percentage of the world population with access to the internet has increased from 18 to 35 percent just from 2006 to 2011 (“A Wired World,” 2012).
VIRTUAL ORGANIZATIONS AND VIRTUAL TEAMS
More and more organizations will be “virtual, boundary-less, and flexible, with no guarantees to workers or managers.” “Much of the work that results in a product, service, or decision is now done in teams” (p. 930). In today’s world of fast-moving global markets and fierce competition, the windows of opportunity are often frustratingly brief. Trends such as the following are accelerating the shift toward new forms of organization, including virtual organizations, in the early part of the twenty-first century (Cascio, 2013):
• The shift from vertically integrated hierarchies to networks of specialists.
• The decline of routine work (sewing-machine operators, telephone operators, word processors), coupled with the expansion of complex jobs that require flexibility, creativity, and the ability to work well with people (managers, software-applications engineers, artists, and designers).
• Pay tied less to a person’s position or tenure in an organization and more to the market value of his or her skills.
• A change in the paradigm of doing business from making a product to providing a service, often by part-time or temporary employees.
• Outsourcing of activities that are not core competencies of a firm (e.g., payroll, benefits administration, relocation services).
• The redefinition of work itself: constant learning, more higher-order thinking, less nine-to-five mentality.
The implications of these trends are clear. Leaders need to focus laserlike attention on attracting, deploying, and keeping a workforce that is as
310 • Wayne F. Cascio good as or better than that of the competition. In the long run, all other threats and opportunities pale by comparison. This is where I/O psychologists, aided by technology that takes a variety of forms, as illustrated throughout this book, have a grand opportunity to make signal contributions to the betterment of management practices and employee welfare. Whether the concern is with attracting, selecting, training, or enhancing the performance of individuals or teams, talent is seen more and more as the key to global competitiveness. Fully 97 percent of CEOs in PriceWaterhouseCoopers’ 2011 Global CEO Survey said that having the right talent is THE most critical factor for their business growth. Chapters in this volume address technology-related workplace issues that span a variety of talent-related topics—staffing, training, performance appraisal, teams, robots as teammates, leadership, aging workers, health and stress, and the broad impact of social media. They represent leading-edge thinking at its best.
EMPOWERED WORKERS IN DIVERSE WORKFORCES The empowered worker will be a defining feature [of tomorrow’s workplaces] (p. 930) Demographically [today’s organizations] are highly diverse. They comprise more women at all levels, more multiethnic, multicultural workers, more older workers, workers with disabilities, robots, and contingent workers. Paternalism is out; self-reliance is in. There is constant pressure to do more with less, and steady emphasis on empowerment, cross-training, personal flexibility, self-managed work teams, and continuous learning. (p. 931)
In many ways, technology facilitates worker empowerment, as flexible work arrangements become more and more popular. Since 1995 there has been a massive shift toward managing based on results rather than managing based on the time that employees spend in the office. When the focus is on results, by definition workers are empowered. Worker empowerment has also become a defining feature in manufacturing, but for a different reason, as employers substitute capital for labor. As the 2007–2009 recession (“The Great Recession”) took hold, and businesses laid off employees, corporations were forced to become more
efficient—and they had the technology to do it. Employers are not about to go back to their larger, less-efficient workforces, and that will hit middle-class workers with no special skills the hardest. Fully 95 percent of the net job losses during the recession were in middle-skill occupations such as office workers, bank tellers, and machine operators. Challenges associated with reskilling or upskilling these individuals will present a major public policy issue, and also a significant opportunity for I/O psychologists to contribute to the betterment of human welfare. Workers with special skills are most likely to be empowered. This is not a one-shot opportunity either; the MIT Center for Digital Business predicts that the next 10 years will be more disruptive than the last 10 (Dorning, 2012).
WORKPLACE TRAINING
In the years since I published my 1995 article in the American Psychologist, three macro-level problems with respect to training and development have not changed appreciably: (1) corporate commitment is lacking and uneven; (2) poaching trained workers poses a major disincentive for training at the level of the individual firm; and (3) despite the rhetoric about training being viewed as an investment, current accounting rules (still) require that it be treated as an expense. There are many other things about workplace training that also have not changed. Although we certainly know more today, a perennial question is: “In designing training programs to promote team development and workplace learning, what are the most effective methods for developing skill, knowledge, and attitudinal competencies?” (p. 935). While Ford and Meyer (this volume) address these issues more thoroughly than I can here, there is no question that technology-delivered instruction (TDI) is gaining in popularity. TDI is the presentation of text, graphics, video, audio, or animation in digitized form for the purpose of building job-relevant knowledge and skill (Cascio, 2012). Whether training is web-based or delivered on a single workstation, on a PDA, or on an MP3 player, TDI is catching on. It will continue to grow in popularity because both demand and supply forces are driving it. Thus there is growing demand for:
• just-in-time training delivery;
• cost-effective ways to meet the learning needs of a globally distributed workforce; and
• flexible access to lifelong learning.
On the supply side:
• internet access is becoming standard at work and at home;
• advances in digital technologies now enable training designers to create interactive, media-rich content;
• bandwidth is increasing and delivery platforms are improving; and
• there is a growing selection of high-quality products and services.
Yes, TDI does represent whiz-bang technology, but don’t be fooled into thinking that’s all there is. When it is done well, as at Boeing (in training mechanics and pilots to operate the new 787 Dreamliner), beneath those bells and whistles there is much that the casual observer cannot see. There is extremely thorough, detailed training-needs analysis, coupled with careful consideration of alternative design and delivery options to optimize learning. These classic tools have not changed appreciably in decades, and they will not change in the future. To maximize returns on training investments, the best designs will always incorporate research-based findings regarding some of the classic principles of learning: goal setting, behavior modeling, meaningfulness of material, practice, and feedback. Much of what is known about these principles derives from the decades-long work of I/O psychologists.
CONCLUSION While much has changed since I wrote that 1995 article in the American Psychologist, most of the technology-related trends the article identified have come to pass. If anything, the pace of globalization has accelerated, and global labor markets are a reality. Technology has revolutionized how, when, and where we work. Today, virtual organizations, as in consulting, law, and the movie industry, are routinely assembled to complete a project, and when that project is finished, the assembled talent disperses to work on other new projects. Virtual teams, domestic and global, present ongoing management challenges. Workforces are more diverse than ever, both in terms of characteristics that people can see, such as age, gender, and ethnicity, and also in terms of less visible ones, such as functional expertise, experience, and training. Technology has facilitated the empowerment of millions of workers—teleworkers, those on flexible work schedules, and those with specialized skills. Finally, technology-delivered instruction is
Technology in the Workplace • 313 growing fast, fueled by factors of both demand and supply. Yet, as has often been said, “the more things change, the more they stay the same.” This is especially true of the features that characterize the very best training programs, for they are solidly grounded in classic principles of learning— principles that generations of I/O psychologists have developed. The Psychology of Workplace Technology represents the best current thinking in this area—but stay tuned, for the final chapter has yet to be written.
REFERENCES A wired world. (2012, Jan. 25). Bloomberg Businessweek, p. 11. Cascio, W. F. (1995). Whither industrial and organizational psychology in a changing world of work? American Psychologist, 50(11), 928–939. Cascio, W. F. (2011, Oct.). The virtual global workforce: Leveraging its impact. Keynote address presented at SIOP Leading-Edge Consortium, Louisville, KY. Cascio, W. F. (2012, March). Classic Training Design Meets New Technology: Maximizing Your ROI. Keynote address presented at the 8th Annual International ASTD Conference, Drakensburg, South Africa. Cascio, W. F. (2013). Managing human resources: Productivity, quality of work life, profits (9th edn). New York, NY: McGraw-Hill. Cascio, W. F., & Boudreau, J. W. (2012). Short introduction to strategic human resource management. Cambridge, UK: Cambridge University Press. Dorning, M. (2012, May 13). The stuck-in-the-middle recovery. Bloomberg Businessweek, pp. 33–35. Friedman, T. L. (2005). The world is flat. New York, NY: Farrar, Straus and Giroux. Johnson, S. R. (2011, Oct.). We meet again, though we have never met: Leading effective teams globally. Paper presented at SIOP Leading-Edge Consortium, Louisville, KY. Kessler, A. (2011, Feb. 17). Is your job an endangered species? The Wall Street Journal, p. A19. PriceWaterhouseCoopers. (2011, May). 2011 Global CEO Survey. New York, NY: author. World Economic Forum and Boston Consulting Group. (2011). Global talent risk—Seven responses. Cologny/Geneva, Switzerland: Author.
15 Reflections on Technology and the Changing Nature of Work Ann Howard
Poking through an old AT&T file in the mid-1970s, I discovered the original factor analysis of the Management Progress Study dimensions (Bray & Grant, 1966). A large worksheet made by taping together successive sheets of paper contained rows of numbers penciled in following step-by-step machinations of a desktop mechanical calculator. By comparison, I had few complaints about punching cards with my dissertation data and submitting them to the mainframe computer at the University of Maryland, even if it did mean commuting from New York to Washington on the weekends. A decade later, as SIOP’s Secretary-Treasurer, I transferred the Society’s handwritten financials onto a personal computer, in the process learning how to use electronic spreadsheet software. The gradual progression of workplace technology: I remember it well. By the mid-1990s, as my colleagues and I were collecting our thoughts for The Changing Nature of Work (Howard, 1995), we recognized that computer technology had shifted into high gear and was propelling a major transformation in the workplace—one as momentous as the shift from the farm to the factory in the late nineteenth century. We became excited about the prospects for a revitalized psychology of work, and the book concluded with “We have much work to do.” Today, nearly 20 years later, we can take a fresh look at technology in the workplace. Which technological applications have had the most impact? How have I/O psychologists contributed to understanding and capitalizing on this new work environment?
SIGNIFICANT ADVANCES
This volume documents appreciable steps forward in several arenas. For example, Mead, Olson-Buchanan, and Drasgow (Chapter 2) identify ways that I/O psychologists have helped organizations move candidates through the selection process more quickly and efficiently. Automating traditional administration methods, as by unproctored online testing or video-based interviews, reaped immediate returns. Another step forward was applying computer-adaptive testing to shorten not only cognitive tests but also the often tedious personality and attitude measures, with enhanced reliability. Besides saving administration time, these efforts likely make applicants happier with the process. Computer scoring has also advanced beyond simple computing for fixed-response tests to applying natural language processing, a form of artificial intelligence, to free-response measures, such as fill-in-the-blanks items or essays, which traditionally require human judgment. Many possibilities still remain for creating efficiencies with assessment centers, whose trademark is observing live behavior. In the past, a major way of shortening the time for assessment centers was simply to delete exercises; a likely consequence has been shrinkage of validity. A more strategic approach would be to identify the kinds of cues that are most effective in eliciting different types of key behaviors and to reduce the scope of exercises to these essentials. Administration of assessment centers has definitely benefitted from technology. E-mail delivery has eliminated armloads of paper, and video technologies have facilitated remote administration and scoring. Virtual reality could add another level of psychological fidelity to exercises like group presentations, which are costly to staff. Although it will be difficult for technology to replace human scoring of complex behaviors, it might be feasible to use natural language processing for lower-level assessments, where there are large numbers of candidates to train the software and a more limited range of possible responses (Howard, 1993).
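To make the idea of automated scoring of free responses more concrete, the fragment below sketches one very simple approach: compare a candidate's answer to a small set of human-scored exemplar answers and assign the score of the closest match. This is a toy illustration only; the exemplars, the 0-2 scale, and the function names are hypothetical, and operational scoring engines rely on far more sophisticated natural language processing than bag-of-words similarity.

# Toy sketch (not from the chapter): scoring a free response by its
# similarity to human-scored exemplar answers. All data are hypothetical.
import math
import re
from collections import Counter


def vectorize(text):
    """Convert a response into a bag-of-words frequency vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def score_response(response, scored_exemplars):
    """Assign the score attached to the most similar exemplar answer."""
    vec = vectorize(response)
    best = max(scored_exemplars, key=lambda pair: cosine(vec, vectorize(pair[0])))
    return best[1]


# Hypothetical exemplars rated by human assessors on a 0-2 scale.
exemplars = [
    ("delegates the task and sets a clear deadline with follow-up", 2),
    ("asks the team for ideas but sets no deadline", 1),
    ("ignores the problem and hopes it resolves itself", 0),
]
print(score_response("I would delegate the work and agree on a deadline", exemplars))

Even a crude baseline like this makes the trade-off visible: automated scoring scales to large applicant pools, but its judgments are only as good as the exemplars and language models behind it, which is why human review remains important for complex behaviors.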
The usefulness of selection methods will depend on the availability of talented people, a situation that surveys reveal as increasingly challenging. Some organizations are making up for the talent shortfall through employee training (see Ford & Meyer, Chapter 3). E-learning is increasingly adopted as organizations recognize its advantages, including lower cost, wider reach, and just-in-time delivery. However, other training methods are needed to develop the deep, specialized knowledge that has become an essential organizational advantage. Serious management games as well as virtual reality training and practice are innovative techniques that need to be evaluated for the level of learning they produce and the subsequent contributions trainees make to organizational objectives.
MISSED OPPORTUNITIES In other arenas we’ve learned considerably less about the human impact of technology in the workplace. For example, Potosky and Lomax (Chapter 6) noted that scant attention has been paid to the roles that leaders should take relative to technology. How do they introduce technology into a work group? How do they deal with resistance? How do they use technology to connect with large groups of followers? Managers could become significantly more productive by harnessing new technologies, but it’s unclear if or how they have done so to date. Also receiving little attention, according to Nixon and Spector (Chapter 11), is how to deal with the stress created by technology’s invasion of the boundaries between work and family. Although The Changing Nature of Work (Howard, 1995) foresaw the emergence of networked computing and global communications, it did not anticipate the extraordinary impact of social media, particularly their role in several recent world crises. But Landers and Goldberg (Chapter 13) observed that organizations have been slow to adopt social media for their own purposes (although hiring managers, perhaps misguidedly, use them routinely to check candidates’ profiles). By carrying the messages of disgruntled employees or customers, social media can be a force against authority. But instead of trying to quash them, managers might find social media a useful avenue for promoting collaboration and productive information sharing. In retrospect, perhaps we didn’t fully anticipate the cultural lag that would attenuate the adoption of technological developments. The computer engineers at Google might work at breakneck speeds to meet their competition, but I/O psychologists must wait for the human element to catch up. Organizations can be slow to adopt a new technology unless they are convinced that it will bring them a competitive advantage. Moreover, applicants might not be able to use technology (the digital divide is slowly disappearing) or use it fast enough for speeded tests. Then again, applicants might get ahead of us by using technology to find new ways to cheat.
Technology and the Changing Nature of Work • 317 A cultural lag does not excuse I/O psychologists from taking advantage of technology where we find it. For example, as organizations implement Big Data (mining the diverse, messy piles of information they currently collect), we should embrace the opportunity to use the results. What better way to fill in the later boxes of our measurement models and evaluate the organizational consequences of various human resource initiatives? Looking forward, the growth curve of technological change continues to accelerate in a nonlinear fashion. Within work organizations and society at large there is an increasing expectancy of immediacy in everything. At the same time costs continue to decline for computing, communications, and storing data. As operating departments wake up to technology’s efficiencies, we can reasonably expect that information technology departments will no longer be the sole purchasers of computer applications and organizational adoptions of technology will escalate. If, as expected, artificial intelligence begins to replace white collar jobs like auditors or paralegals, might not some of I/O psychologists’ work (e.g., attitude surveys, test validation) be gobbled up too? Think of that as good news; it will leave us more time for innovation and thought leadership. We still have much work to do.
REFERENCES Bray, D. W., & Grant, D. L. (1966). The assessment center in the measurement of potential for business management. Psychological Monographs, 80(17), 1–27. Howard, A. (1993). Will assessment centers be obsolete in the twenty-first century? Paper presented at the International Congress on the Assessment Center Method, Atlanta. Howard, A. (Ed.). (1995). The Changing Nature of Work. San Francisco: Jossey-Bass.
16 Intersections between Technology and I/O Psychology Walter C. Borman
In this commentary, I present observations on some of the chapters contained in Part 2. In particular, I reflect on certain new technological advances that for the most part could not have been discussed as few as 10–15 years ago, at least in anywhere near the way they are discussed in these chapters. This is remarkable in the sense that I/O psychology has been so greatly impacted by these technologies in so little time. I also comment on some of the issues and potential problems posed by the technologies described.
SELECTION
The greatest impact on selection derives from automation of many processes within selection methods and practices. This can be as simple as saving time and manpower in the selection interview by using video conferencing to administer the interview rather than an on-site interviewer. Another example is videotaping assessment center exercise performance and sending the videos to off-site assessors for scoring. More generally, selection processes can benefit tremendously from gathering applications through an online process, with an automated algorithm perhaps screening applications for minimum qualifications.
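As a concrete illustration of that last point, the snippet below sketches what such an automated minimum-qualifications screen might look like. It is only a sketch under assumed requirements; the qualification rules and applicant fields are hypothetical, and any real screen would be built from a job analysis and checked for adverse impact before use.

# Hypothetical minimum-qualifications screen; rules and fields are illustrative only.

MINIMUM_QUALIFICATIONS = {
    "years_experience": 2,     # at least two years of relevant experience
    "requires_license": True,  # licensure assumed to be required for the role
}


def meets_minimum_qualifications(applicant):
    """Return True only if every minimum qualification is satisfied."""
    if applicant.get("years_experience", 0) < MINIMUM_QUALIFICATIONS["years_experience"]:
        return False
    if MINIMUM_QUALIFICATIONS["requires_license"] and not applicant.get("has_license", False):
        return False
    return True


applicants = [
    {"name": "A. Jones", "years_experience": 4, "has_license": True},
    {"name": "B. Smith", "years_experience": 1, "has_license": True},
]
screened_in = [a["name"] for a in applicants if meets_minimum_qualifications(a)]
print(screened_in)  # names of applicants who pass the automated screen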
Further along in the selection process, unproctored internet testing might be employed to efficiently test large numbers of applicants. Considerable progress has been made recently in experimenting with various types of selection tests (e.g., cognitive, personality, situational judgment tests, and simulations) in this unproctored mode. Of course, potential problems with unproctored testing include the possible lack of standardization of the test conditions across applicants (e.g., some applicants interrupted by kids or others during test administration), possible cheating, and an inability to verify the identity of the test taker. These issues have been addressed by partially retesting applicants on-site, especially if the test has correct answers (e.g., an ability test). Employing a computer-adaptive test format can accomplish this score-verification process more efficiently. Also, Beaty et al. (2011) have shown that unproctored and proctored, on-site test administrations demonstrate close to the same levels of validity. Finally, new technology has led to higher-fidelity simulations that are realistic, usually create favorable applicant reactions, and show relatively low adverse impact. An example is an air traffic controller job simulation that depicted air traffic scenarios appearing on a computer screen, with pilot and controller transmissions further describing the scenario. The assessment was structured as a situational judgment test, with test takers responding according to what they would do next in the scenario (Hanson et al., 1999).
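Because computer-adaptive testing comes up both for initial administration and for score verification, a minimal sketch may help show what "adaptive" means in practice: estimate ability after each response, then pick the unused item that is most informative at that estimate. The item parameters, ability grid, three-item stopping rule, and function names below are assumptions made purely for illustration; operational CAT engines use calibrated item banks, exposure control, and more refined estimators.

# Minimal, illustrative computer-adaptive testing loop under a 2PL IRT model.
# Item parameters and the stopping rule are hypothetical.
import math

ITEM_BANK = [(1.2, -1.0), (0.8, -0.5), (1.5, 0.0), (1.0, 0.7), (1.3, 1.4)]  # (a, b) pairs
THETA_GRID = [g / 10.0 for g in range(-30, 31)]  # candidate ability values from -3 to 3


def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))


def information(theta, a, b):
    """Fisher information of an item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)


def estimate_theta(responses):
    """Grid-search maximum-likelihood ability estimate from (item, correct) pairs."""
    def log_likelihood(theta):
        total = 0.0
        for item, correct in responses:
            a, b = ITEM_BANK[item]
            p = p_correct(theta, a, b)
            total += math.log(p if correct else 1.0 - p)
        return total
    return max(THETA_GRID, key=log_likelihood)


def next_item(theta, administered):
    """Choose the unused item that is most informative at the current estimate."""
    remaining = [i for i in range(len(ITEM_BANK)) if i not in administered]
    return max(remaining, key=lambda i: information(theta, *ITEM_BANK[i]))


# Administer three items adaptively; in practice, responses come from the test taker.
theta, administered, responses = 0.0, set(), []
for simulated_response in (1, 1, 0):
    item = next_item(theta, administered)
    administered.add(item)
    responses.append((item, simulated_response))
    theta = estimate_theta(responses)
print(round(theta, 2))  # ability estimate after three adaptive items

The same logic underlies verification testing: a short adaptive retest can confirm an unproctored score with far fewer items than readministering the full test.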
ROBOTS
Early uses of robots in organizational applications were relatively mundane, as in loading and unloading stock and performing highly repetitive tasks. However, more recent applications are much more sophisticated, with robots and operators sometimes forming a team to accomplish tasks and missions. For example, in a military context, robots and operators have worked together on combat missions such as collecting reconnaissance information and executing combat operations. In these kinds of missions, the operator may guide one or more robots using a telepresence display, allowing him/her to “be there” with the robot, viewing what the robot “sees” and guiding it in the appropriate direction in a mission that may, for example, involve detecting and tracking enemy vehicles. Telepresence has also been useful in civilian applications, where medical doctors have performed remote patient care. A major issue with this kind of robot–operator teaming is to maximize the amount of information from the robot sources that the operator can process, while minimizing the additional cognitive load placed on the operator. Thus, robot–operator teams in which the operator guides the robot can be effective in a variety of applications. However, semi- or fully-autonomous
robots go one step further. Autonomous robots may be able to detect and avoid obstacles without operator assistance and, in general, work for relatively long periods of time without operator guidance. They may maintain knowledge of their location using, for example, lasers and GPS. Used appropriately and effectively, this kind of robot can obviously reduce the cognitive load for an operator. The main concern with the use of robots in work settings is human acceptance of them. Especially with autonomous robots, the control operators have over them is minimal, at least at times, and this creates a need for trust on our part. As robots become more common in the workplace, and we gain more experience with successful outcomes from their use, the probabilities increase that such trust can be developed.
WORKPLACE MONITORING
Technology advances have made possible monitoring and surveillance approaches for evaluating the performance or other work-related behavior of employees. The purpose is usually to assess performance or to monitor employees for improper use of such company equipment as computers, including excessive web surfing and potentially harmful e-mails, and telephones, including improper and excessive personal use. One of the major concerns of researchers in this area is the relationship between electronic monitoring and stress. The general finding is that the presence of this kind of monitoring increases stress and even leads to higher blood pressure and burnout (Castanheira & Chambel, 2010). Also, electronic monitoring has been linked to perceived higher workload, lower job control, and lower job satisfaction (Chalykoff & Kochan, 1989). Recent advances in technology have introduced additional methods of monitoring beyond computer, telephone, and video surveillance. There is increasing use of global positioning system (GPS) technology to monitor employees, especially in company cars. For example, sales people might be monitored in their cars to ensure they are making appropriate sales calls and that they are operating within their assigned sales territory.
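To illustrate the mechanics behind that kind of GPS-based territory monitoring, here is a minimal sketch in which a territory is approximated as a radius around an office and each location ping is checked against it. The coordinates, radius, and function names are hypothetical; real fleet-monitoring systems use richer geofences, and, as discussed next, their acceptability depends heavily on transparency and trust.

# Illustrative territory check for GPS pings; coordinates and radius are hypothetical.
import math

EARTH_RADIUS_KM = 6371.0


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two latitude/longitude points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))


def inside_territory(ping, center, radius_km):
    """Flag whether a GPS ping falls within the assigned (circular) sales territory."""
    return haversine_km(ping[0], ping[1], center[0], center[1]) <= radius_km


office = (43.65, -79.38)                    # assumed territory center
pings = [(43.70, -79.42), (44.39, -79.69)]  # assumed location reports
print([inside_territory(p, office, 50.0) for p in pings])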
In general, it appears that electronic monitoring may improve job performance, particularly if the monitoring is perceived as objective and a fair indicator of the employee’s actual task performance (Goomas & Ludwig, 2009). It is also important to have an organizational climate of high trust for an electronic monitoring system to be successful without negative outcomes for employees. If employees have low levels of trust toward supervisors or the organization, a highly oppressive electronic monitoring system is likely to be seen as even more unfair, reducing satisfaction on the part of employees. Relatedly, having a transparent system, well understood by employees, is preferable to secret monitoring (Alge & Hansen, 2013). Finally, systems used for feedback and development are judged more favorably than those used for administrative purposes, for example, affecting compensation or other rewards or sanctions (McNall & Roch, 2009). The chapters in this section have effectively demonstrated some of the important technological advances that affect I/O-related areas such as selection and assessment, team performance with robot assistance and support, and performance monitoring in the workplace. The work reflected in these chapters and in this commentary supports the theme of the book: technology has had a highly important impact on the science and practice of I/O psychology. This momentum should certainly continue in the foreseeable future.
REFERENCES Alge, B. J., & Hansen, S. D. (2013). Workplace monitoring and surveillance research since “1984”: A review and agenda. In M. Coovert and L. Foster Thompson (Eds.), The psychology of workplace technology. New York, NY: Routledge Academic. Beaty, J. C., Nye, C. D., Borneman, M. J., Kantrowitz, T. M., Drasgow, F., & Grauer, E. (2011). Proctored vs. unproctored Internet tests: Are unproctored tests as predictive of job performance? International Journal of Selection and Assessment, 19, 1–10. Castanheira, F., & Chambel, M. J. (2010). Reducing burnout in call centers through HR practices. Human Resource Management, 49, 1047–1065. Chalykoff, J., & Kochan, T. A. (1989). Computer-aided monitoring: Its influence on employee satisfaction and turnover. Personnel Psychology, 42, 807–829. Goomas, D. T., & Ludwig, T. D. (2009). Standardized goals and performance feedback aggregated beyond the work unit: Optimizing the use of engineered labor standards and electronic performance monitoring. Journal of Applied Social Psychology, 39, 2425–2437. Hanson, M. A., Borman, W. C., Mogilka, H. J., Manning, C., & Hedge, J. W. (1999). Computerized assessment of skill for a highly technical job. In F. Drasgow and J. Olson-Buchanan (Eds.), Innovations in computerized assessment (pp. 197–220). Mahwah, NJ: Lawrence Erlbaum. McNall, L. A., & Roch, S. G. (2009). A social exchange model of employee reactions to electronic performance monitoring. Human Performance, 22(3), 204–224.
Author Index
Abel, M. J. 135 Abrami, P. C. 57 Ahearne, M. 122, 129 Aiello, J. R. 217, 220–221 Alder, G. S. 228 Aldrich, C. 53 Alge, B. J. 209–237 Allen, M. J. 163 Ambrose, M. L. 228 Anokwa, Y. 271–273, 279 Ansburg, P. I. 60 Arendasy, M. 34 Argawal, R. 125 Atwater, L. 80 Avolio, B. J. 121, 123, 135–136, 140 Baker, N. 89 Ballinger, G. A. 215–216 Balthazard, P. A. 138 Bandawe, C. R. 264 Barnes, M. J. 185–208 Baron, L. F. 262 Baskerville, R. 140 Baur, C. 196 Beaty, J. C. 319 Beersma, B. 99–100, 104–105 Behling, R. 127 Behrend, T. S. 11–12, 261–283 Bejar, I. I. 28 Belkin, L. Y. 80, 86–87 Bell, B. S. 61 Bell, D. 119 Bennett, J. K. 262–263 Bernard, R. M. 57 Berry, M. O. 264 Bias, R. G. 162–182 Birnbaum, M. H. 166 Bjork, R. A. 68
322
Boies, K. 121 Boiros, P. 66 Bono, J. E. 137 Borman, W. C. 318–321 Bornstein, B. H. 63 Borokhovski, E. 57 Brehmer, B. 26, 53 Brennan, J. 124, 126, 129 Brown, J. S. 44 Brown, K. G. 55 Brown, M. E. 121 Buchanan, T. 30 Burns, M. 274 Cameron, K. S. 60–61 Cannon-Bowers, J. A. 68 Cardy, R. L. 94 Carpenter, D. J. 137 Carr, S. C. 264 Cascio, W. F. 307–313 Cassidy, S. E. 77–98 Chalykoff, J. 218 Chang, D. 196 Chapman, D. D. 56 Charlier, S. D. 55 Chen, C. 65 Chen, J. V. 216 Chen, Y. J. C. 196 Chomsky, N. 174 Christensen, C. M. 131 Chung-Herrera, B. G. 90 Church, A. 95 Clark, R. C. 58 Clark, R. E. 64 Cohen, W. M. 48 Cooper, C. D. 232 Coovert, M. D. 1–17, 26, 137, 305–306 Cosenzo, K. A. 199
Author Index • 323 Davis, J. H. 95 Davison, L. 44 Dee, J. 152–153 DeNisi, A. 85, 88 DeRenzi, B. 266, 268 Diamandis, P. H. 1 Dirks, K. T. 232 Dixon, S. R. 196 Diziol, D. 65 Dodge, G. E. 121, 123, 136, 140 Dodson, L. L. 263 Dominowski, R. L. 60 Donders, F. C. 164–165 Douthitt, E. A. 220 Downes, S. 55 Drasgow, F. 21–42, 308, 315 Driskell, J. E. 102, 110 Duane, A. 222 Dumas, J. 167 Earley, P. C. 86, 89, 92, 219 Eaton, S. 252 Ehrhart, K. H. 90 Elliott, L. R. 185–208 Ely, K. 60 Ericsson, K. A. 62 Fairchild, J. 77–98 Falbe, C. M. 78 Farr, J. L. 77–98, 277 Ferran, C. 132 Fetzer, M. S. 23 Finnegan, P. 222 Fiore-Silfvast, B. 262 Fjermestad, J. 126 Fleenor, J. W. 85 Fligo, S. K. 94 Fong, T. 196 Ford, J. K. 43–76, 278 Fox, S. 29 Friedman, T. 210 Fusfeld, A. R. 137 Gadd, R. E. 55 Gardner, W. L. 120, 123–124 Geister, S. 92–93 Gillan, D. J. 162–182 Gioia, D. A. 121 Gloss, A. E. 11–12, 261–283
Goldberg, A. S. 271, 284–304, 316 Golden, T. D. 125 Goldstein, I. L. 50 Gomez, R. 262 Grant, A. M. 95 Green, S. G. 215–216 Gritzo, L. A. 137 Gueutal, H. G. 78, 82 Haenlein, M. 289 Hagel, J. III. 44 Hague, S. 56 Halbesleban, M. M. 121 Halme, A. 190 Hamilton, S. 94 Hancock, P. A. 149–161 Hansen, S. D. 209–237 Harders, M. 52 Harris, B. 52 Hazy, J. K. 123–124 Hertel, G. 92–93 Hiltz, S. R. 126 Hinds, P. J. 194 Hines, S. 94 Hollenbeck, J. R. 99–117, 289 Howard, A. 314–317 Howell, J. M. 121 Huang, H. 87, 194 Hunt, J. G. 134 Jarvenpaa, S. L. 113 Jauch, I. R. 134 Jodoin, M. G. 27 Johnson, L. B. 118 Johnson, L. W. 65 Johnson, R. D. 82, 88 Joyner, C. T. 196 Kahai, S. 121, 123, 136, 140 Kahn, H. 118 Kaplan, A. M. 289 Kasl, S. V. 172 Kefi, H. 140 Keil, M. 88 Kennedy, P. 7 Kidwell, R. E. 213 Killian, D. C. 50 Kim, P. H. 232 Kindlund, E. 168
324 • Author Index Kirschner, P. A. 64 Kleine, B. M. 62 Kluger, A. N. 85, 88 Kochan, T. A. 218 Kolb, K. J. 217, 220 Konradt, U. 92–93 Koohan, A. 127 Kossek, E. 252 Kotler, S. 1 Kozlowski, S. W. J. 61 Kraiger, K. 55, 66 Krampe, R. T. 62 Krantz, D. H. 163 Kurtzberg, T. R. 80, 86–87 Kurzweil, R. 1, 7 Landers, R. N. 271, 284–304, 316 Larson, 56, 74 Latham, G. P. 50 Lau, D. C. 110 Lautsch, B. 252 Leidner, D. E. 113 Leslie, J. B. 85 Levensaler, L. 84, 88 Levinthal, D. A. 48 Li, C. 301 Liu, M. T. 65 Lobsenz, R. 95 Lomax, M. W. 118–146, 316 Lord, R. G. 132–135 Lowe, K. B. 120 Luce, R. D. 163 McAllister, D. J. 227 MacDuffie, J. P. 93 McGee, M. 167–168 McGrath, J. E. 102 McKelvey, B. 133–134 McNall, L. A. 228 Maes, P. 193 Mahsud, R. 45 Makri, M. 121 Marion, R. 133–134 Marks, M. A. 68 Maruping, L. M. 125 Massey, K. 190 Massman, A. J. 68 Mathieu, J. E. 68, 122, 129 Mayer, R. C. 95
Mayhew, D. J. 162 Mead, A. D. 21–42, 308, 315 Means, B. 57 Menaker, E. S. 54 Mentis, H. 175–177 Merbold, U. 52 Meyer, T. 43–76, 278 Michael, D. 119 Miles, J. 99–117, 289 Miller, J. S. 90–92, 94 Mondragon, N. 95 Murnighan, J. K. 110, 202 Myers, C. S. 67 Naisbitt, J. 119 Naquin, C. E. 80, 86–87 Nebeker, D. M. 217 Nielsen, J. 167 Nixon, A. E. 238–260, 316 Noel, T. W. 228 Norman, D. A. 173–174 Northcraft, G. B. 86, 89 Norvig, P. 197 O’Connor, D. L. 54 O’Leary, R. S. 84 Olshan, B. 125 Olson-Buchanan, J. B. 21–42, 308, 315 Oostrom, J. K. 26 Orwell, G. 210 Osborn, R. N. 134 Overton, R. C. 36 Page, D. 124, 128, 130–131 Paquette, S. 290 Parasuraman, R. 199 Payne, S. C. 80, 83 Pettitt, R. 195–196 Pfeffer, J. 131 Pick, S. 265 Pierotti, A. 55 Potosky, D. 118–146, 316 Prensky, M. 89 Pulakos, E. D. 84, 94 Purvanova, R. K. 137 Qrunfleh, S. 126–127 Quinn, R. E. 60–61
Author Index • 325 Radtke, P. H. 102, 110 Ratan, R. 53 Redden, E. S. 185–208 Reeves, T. C. 51 Reisman, D. 118 Rich, A. 167 Richman, W. L. 32 Rickel, J. 65 Riddle, D. L. 26 Ritterfield, U. 53 Roch, S. G. 228 Ross, W. H. 216 Russell, S. J. 197 Ruyter, K. D. 128 Salas, E. 102, 110 Sauro, J. 168 Savela, M. 190 Scandura, T. A. 121 Schepers, J. 128 Schipani, S. P. 196 Schmidt, A. 63–64 Schmidt, R. A. 68 Schouten, M. 99–100, 104–105 Schroeders, U. 35 Sendelbach, N. B. 61 Sirkin, J. T. 265 Sitzmann, T. 56–57, 64 Smith, R. W. 31 Smith, T. 127 Sommer, M. 34 Sonnentag, S. 62 Sosik, J. 126, 137 Spector, P. E. 238–260, 316 Sprague, R. 213 Srinivasan, S 56 Stanton, J. M. 220 Stein, J. 35 Sterling, S. R. 263 Sternberg, S. 164–165, 169 Stevens, S. S. 166 Stone, S. J. 56 Straus, S. G. 102 Strickland, L. 228 Stubbs, K. 194 Subasic, E. 221 Sundwall, H. 122, 131 Suomela, J. 190 Suppes, P. 163
Svec, C. M. 221 Sweller, J. 64 Szulanski, G. 49 Tarafdar, M. 126–127 Tatum, B. C. 217 Taylor, S. 122, 129 Tesch-Romer, C. 62 Thompson, L. F. 1–17, 137, 261–283, 305–306 Thorpe, C. 196 Uhl-Bien, M. 133–135 Valentino, S. 287 Veiga, J. F. 125 Velsor, E. V. 85 Venkatesh, A. 241 Vitalari, N. 241 Vogel, J. J. 56 Waldman, D. A. 138 Wall, T. D. 95 Walsh, I 140 Warren, J. E. 138 Watts, S. 132 Wayne, D. B. 62 Webster, J. 233 Weick, K. E. 85 Weiner, A. J. 118 Weisband, S. 80 Wettergreen, D. 194 Wetzels, M. 128 Wexley, K. 50 White, J. 269–271 Wickens, C. D. 196 Willhelm, I. 35 Winter, S. G. 49 Yamauchi, B. 190 Yates, D. 290 Yen, W. M. 163 Yu, P. T. 65 Yukl, G. 45 Zaccaro, S. J. 60, 68 Zickafoose, D. J. 63 Zollo, M. 49 Zweig, D. 232
Subject Index
Aboriginal peoples 292–295 absorptive capacity 48–49 accepting uncertainty 45–46 accessibility 28–30 accountability 129, 141 adaptability 45–46, 60–61 adaptive structuration theory 101 adaptive training 50, 64, 67 additive factors method 165 Advanced Information Technology (AIT) 121–122, 140 African-Americans 29–30 agency theory 221 American Red Cross 53, 291 anonymity 81, 83, 289 Apple 35, 210 appraisal 77, 88; electronic 78–84, 86, 89, 91–92, 94 artificial intelligence (AI) 193, 197, 315, 317 attitudes 218–219, 244–245 augmented reality 10 authority differentiation 105, 108, 111–112, 289 automation 22, 28, 118, 153, 315, 318 autonomy 3, 155, 202, 231, 244, 249, 254; robots 186, 192–197, 199, 202–203, 319–320 behavioral strains 244–246 blogs 286, 302 boundary management 251–254 Canada 292–295 cheating 30–31, 33–34 cloud computing 141 co-construction 123 cognitive approaches 268
326
cognitive frame changing 45–46, 59, 61 cognitive loads 132, 179, 186, 196 cognitive psychology 169 collaboration 43, 45, 59, 124, 296–298, 301 communication 108–110, 255; computermediated communication (CMC) 102–103, 106, 109–112, 123, 136; social 113–114; team 99; technologies 44, 93, 141 community health workers (CHWs) 265–268 competing values framework 46 computer anxiety 32 computer competency 88, 92 computer-adaptive tests (CAT) 23–25, 32, 37 computerized decision support system (CDSS) 271–272 computerized performance monitoring (CPM) see monitoring conflict management 26, 126 constructivism 55 control model 242, 246–250, 253; perceptions 223–224, 246, 248; supervisory 197, 201, 204 counterproductive work behaviours (CWBs) 5, 245–246, 250–251 crowdsourcing 9, 301 cultural imperatives 155–156 cyberdeviancy 251, 253–255 cybernation 119 decision-making 78–79, 105, 112, 125, 200, 225, 271–273, 301 declarative knowledge 57, 64 deterrence theory 221 development: socioeconomic 261–265, 278–279, 291 see also training
Subject Index • 327 digital natives 89, 292 digitalization 5–6 dilemma theory 46 dimensional scaling 99, 104–106 disabilities 29–30, 168 discovery learning 64–65 disruptive technology 15 diversity 125, 293, 310–312 e-commerce 122, 285 e-government 262 e-learning 51, 54–55, 57, 59, 64–65, 67–68, 300 e-mail 86–87, 89, 92, 109, 134 education 57, 262, 274–278; online 10–11 efficiency 22, 37, 78, 167–169, 186, 203, 251, 255, 270, 315 electronic performance monitoring (EPM) see monitoring emerging economies 261 emotions 46, 130, 242–245, 248, 255 enterprise resource planning (ERP) 125 environmental conditions 243, 246, 249 equal opportunities 240, 292–295 ergonomics see Human Factors and Ergonomics (HF/E) ethics 8–9, 30, 222–224, 232, 287 event-based approach to training (EBAT) 51 Facebook 245, 284–285, 288, 294 fairness 28–33, 37, 83–84, 209, 216, 222–225, 231, 273 feedback 66, 68, 109, 174, 178–179, 219–220, 277–278; loops 176; online 78, 80–81, 83–84, 88; performance appraisal 82, 85–87, 89–90, 92–93 fill in the blank (FITB) testing 27–28, 35 formalism 228 games 52–54, 56–57, 59–60, 66–68 Germany 52 Global Positioning System (GPS) 189, 194, 212, 214, 319–320 Global Task Force for Humanitarian Work Psychology 278–279 globalization 23, 44, 134, 312; labor markets 307–308 Google 1, 169, 285, 316
government 119, 262, 291, 294; regulations 240 group support systems (GSS) 126 Guinea 274–275 health 238–239, 242, 253, 255 hierarchies 112, 135, 186, 198, 289, 309 human capital 45, 89 Human-Computer Interaction (HCI) 157 Human Factors and Ergonomics (HF/E) 3, 149, 152–160, 272 human resources (HR) 22, 37, 78, 293–296, 308; electronic 77–78, 82, 84; management 90 human-systems integration (HSI) 13, 306 IBM 25, 67, 131, 210, 289 India 269–271 individual differences 89, 93, 177, 191, 200–201, 219, 228, 248–250 informal learning 59 information age 120, 129–130, 139 Information and Communication Technologies for Development (ICTD) 262–264, 279–281; Guinea 274; India 269–270; Kenya 271–273; Tanzania 265–268; Tennessee 275–278 Innov8 67 innovation 49, 55–56, 66–67, 86, 121, 127, 134, 138, 153, 262 instant messaging 92, 123 intelligent agents 197–202, 204 International Organization for Standardization (ISO) 167 IT strategies 44 Japan 99 job performance 21, 77–78, 85, 216, 221, 254, 301, 320 job satisfaction 5, 218, 320 Kenya 271–273 labor costs 186 laptops 238, 241 leadership 9, 43–45, 58, 60–61, 278, 301–302, 317; complexity theory 133–134; distributive 122; influence on
328 • Subject Index technology 125–130; relational leadership theory (RLT) 135; relationship with technology 119–121, 123–124, 131, 140–142; robust theories 132–134; transformational 125, 128, 136–139; trust 215 learning culture 48–49, 278 learning networks 270 leisure 118–119, 153–155 Likert scales 171 LinkedIn 284 Linux 10 machine skills 85–86 magnitude estimation 166–167, 171 manufacturing 43 mastering contradictory demands 45–47, 59 measurement theory 21, 163–169, 172–173, 177, 179 see also usability Mechanical Turk 9 MediaWiki 291 medicine 66, 156, 203, 263, 265–266, 271–273, 279, 319 mental barriers 263–264 mental models 47 metacognition 62–63, 67 micro-blogs 287 Microsoft 27, 85 Middle East 291 military 13, 52, 66, 185–186, 196, 199, 201–202, 291, 319 mobile phones 263, 269–273; smartphones 2, 9, 35–36, 55, 92, 238, 241 monitoring 209, 227, 230, 242; computerized performance monitoring (CPM) 90–94, 277; electronic performance monitoring (EPM) 2–3, 210–225, 228–229, 231–232, 277, 301, 320; tracking 213, 266 Moore’s Law 1, 13 morality 150–151, 157–160, 198–199, 204 motion sickness 191–192 motivation 2, 5, 219, 266–267 multimedia technology 51 multimodal interfaces 5–6 Multiple Resource Theory (MRT) 188 multiple-true-false (MTF) testing 34 multisensory displays 186–189, 192
NASA 6, 13, 52, 58, 191, 196, 203 natural imperatives 155–156 negative affectivity 249–250 network theory 135, 158 neural nets 198 Office of Technology Assessment (USOTA) 210 open-source principles 10 organizations: citizenship 5, 245–246; culture 61, 216–218, 223, 297; development 296–298; goals 230–231; support 82, 102, 128 outsourcing 9, 309 peer-to-peer sharing 68 PepsiCo 296–298, 301 performance 219–222, 225; goals 217 personal digital assistants (PDAs) 35, 238, 241, 311 physical labor 150–151 physical strains 246 politics 158, 290 post-industrial society 118–119 power distance values 267–268 Prenav model 188 privacy 21, 209, 222–224, 231, 301 problem-solving 25, 43, 48, 54, 201 procedural knowledge 57, 64 productivity 5, 44, 86, 91, 118, 255 psychological fidelity 11, 26–27, 51, 315 psychometrics 21, 25, 180 radio frequency identification (RFID) 5, 141 recruitment 2, 240, 269–271, 292–295 research and development (R&D) 137, 296–298 retention 56, 68, 240, 292–295, 300 robots 4, 7, 185–188, 190, 198, 203, 319–320; autonomy 192–196; organizational robotics 4–5; RoboLeader 199–202; swarm technologies 197 role ambiguity 244, 247 Royal Bank of Canada 292–295, 300, 302 scaffold learning 54 Second Life 288
Subject Index • 329 selection testing 21–27, 32–34, 318–319 self-determination theory 2, 7, 267 self-efficacy 4, 56, 88, 91, 128–129, 248–249, 255 self-regulation 48, 61–63 serious games see games Short Message Service (SMS) 266, 279 simulators 50–51, 56, 60, 62, 124 situational awareness (SA) 187, 194, 196–197, 199 skill development 12, 60, 69 skill differentiation 105, 111 smartphones see mobile phones social media 8–10, 36–37, 284–285, 289–292; blogs 286, 302; micro-blogs 287; multi-user virtual environments (MUVEs) 288; PepsiCo 296–298; Royal Bank of Canada 292–295; social networking sites (SNS) 285, 299–301, 308, 310 social network theories 49, 64, 66 social networking 68, 141 see also social media socialization 102, 295 Society for Industrial & Organizational Psychology (SIOP) 286 socio-technical systems 8, 153, 158 soft skills 43, 45 spatial ability 191–192 special populations 11–13 specialization 47, 58, 61, 105 standardization 29–31, 33, 319 StatKnow 170 status cues 103–104 stress 5, 132, 196, 209, 216–217, 220, 238–239, 241–244, 246–254, 320 subtractive method 165 tablet computers 36, 238 tactile devices 6, 188–189 Tanzania 265–268 teamwork 43, 45, 102, 195 see also virtual teams technology acceptance model (TAM) 128, 272 technology-assisted supplemental work (TASW) 238, 240–241, 249–255 telecommuting 238–241, 247, 250, 252–254, 312
teleoperation 193, 196, 203 telepresence 10, 186–187, 189, 192, 202–203, 319; immersive 190–191 temporal stability 110–112 Tennessee 275–278 training 10–11, 26, 43–45, 52, 54, 58, 74–75, 315; appraisal 81, 92, 94; classroom instruction 49, 55–57; intelligent tutoring systems (ITS) 50–51, 65; interactive radio instruction (IRI) 274–275; specialization 63–64; technology-delivered instruction (TDI) 311–312; transfer 67–68 trait anxiety 249–250 Triple Revolution, The 118–119 trust 93, 110, 112–113, 127, 215, 223–225, 227–228, 231–232, 321 Twitter 287, 291 unmanned vehicles (UVs) 190, 196–198 unproctored Internet testing (UIT) 23, 31 U.S. Army Research Laboratory (ARL) 199, 203 U.S. Congress Office of Technology Assessment (USOTA) 210, 222 USA 11, 29, 52, 99, 119, 158–159, 212, 240, 269, 275, 291 usability 162–163, 167–170, 177–178, 272; rating 171–173, 178 USAID 271, 274 Usefulness, Satisfaction, and Ease of use (USE) questionnaire 167 utilitarianism 149, 229 videoconferencing 23, 92 virtual reality (VR) 10, 51–52, 56–57, 59, 61–62, 67–68, 124 virtual teams 92–93, 99–106, 108–114, 125, 132, 137–138, 141, 289, 309–310, 312 virtual work environments 124 warfighters 185, 187–192, 195–196, 202 wearable computing 6 web 2.0 64, 285 web-based technology 55–57, 65, 67–68, 132, 162, 311 well-being 2, 7–8, 238–239, 242 Western countries 44, 261
330 • Subject Index Wikipedia 10, 287 Windows 176 work-life balance 2, 118; family 240–241, 250–252, 254
Yelp 289 YouTube 289 Zynga 291