COMPUTATIONAL MODELING FOR INDUSTRIAL-ORGANIZATIONAL PSYCHOLOGISTS
This collection provides a primer to the process and promise of computational modeling for industrial-organizational psychologists. With contributions by global experts in the field, the book is designed to expand readers' appreciation for computational modeling via chapters focused on key modeling achievements in domains relevant to industrial-organizational psychology, including decision making in organizations, diversity and inclusion, learning and training, leadership, and teams. To move the use of computational modeling forward, the book includes specific how-to chapters on two of the most commonly used modeling approaches: agent-based modeling and system dynamics modeling. It also gives guidance on how to evaluate these models qualitatively and quantitatively and offers advice on how to read, review, and publish papers with computational models. The authors provide an extensive description of the myriad values computational modeling can bring to the field, highlighting how models offer a more transparent, precise way to represent theories and can be simulated to test the internal consistency of a theory and allow for predictions. This is accompanied by an overview of the history of computational modeling as it relates to I-O psychology. Throughout, the authors reflect on computational modeling's journey, looking back to its history as they imagine its future in I-O psychology. Each contribution demonstrates the value and opportunities computational modeling can provide the individual researcher, research teams, and the fields of I-O psychology and management. This volume is an ideal resource for anyone interested in computational modeling, from scholarly consumers to computational model creators.

Jeffrey B. Vancouver is Professor and Byham Chair in Industrial-Organizational Psychology at Ohio University, USA.

Mo Wang is University Distinguished Professor and Lanzillotti-McKethan Eminent Scholar Chair at Warrington College of Business at the University of Florida, USA.

Justin M. Weinhardt is Associate Professor of Organizational Behavior and Human Resources at the University of Calgary, Canada.
SIOP Organizational Frontiers Series

Series Editors
Angelo DeNisi, Tulane University, USA
Kevin Murphy, University of Limerick, Ireland

Editorial Board
Derek R. Avery, Wake Forest University, USA
Jill Ellingson, University of Kansas, USA
Franco Fraccaroli, University of Trento, Italy
Susan Jackson, Rutgers University, USA
Paul Sparrow, Lancaster University, UK
Hannes Zacher, Leipzig University, Germany
Jing Zhou, Rice University, USA

The Organizational Frontiers Series is sponsored by the Society for Industrial and Organizational Psychology (SIOP). Launched in 1983 to make scientific contributions accessible to the field, the series publishes books addressing emerging theoretical developments, fundamental and translational research, and theory-driven practice in the field of Industrial-Organizational Psychology and related organizational science disciplines, including organizational behavior, human resource management, and labor and industrial relations.
Books in this series aim to inform readers of significant advances in research; challenge the research and practice community to develop and adapt new ideas; and promote the use of scientific knowledge in the solution of public policy issues and increased organizational effectiveness. The Series originated in the hope that it would facilitate continuous learning and spur research curiosity about organizational phenomena on the part of both scientists and practitioners.

The Society for Industrial and Organizational Psychology is an international professional association with an annual membership of more than 8,000 industrial-organizational (I-O) psychologists who study and apply scientific principles to the workplace. I-O psychologists serve as trusted partners to business, offering strategically focused and scientifically rigorous solutions for a number of workplace issues. SIOP's mission is to enhance human well-being and performance in organizational and work settings by promoting the science, practice, and teaching of I-O psychology. For more information about SIOP, please visit www.siop.org.

Vocational Interests in the Workplace: Rethinking Behavior at Work
Edited by Christopher D. Nye and James Rounds

Creativity and Innovation in Organizations
Edited by Michael D. Mumford and E. Michelle Todd

Social Networks at Work
Edited by Daniel J. Brass and Stephen P. Borgatti

The Psychology of Entrepreneurship: New Perspectives
Edited by Michael M. Gielnik, Melissa S. Cardon, and Michael Frese

Understanding Trust in Organizations: A Multilevel Perspective
Edited by Nicole Gillespie, Ashley Fulmer, and Roy J. Lewicki

Age and Work: Advances in Theory, Methods, and Practice
Edited by Hannes Zacher & Cort W. Rudolph

Data, Methods and Theory in the Organizational Sciences: A New Synthesis
Edited by Kevin R. Murphy

Neurodiversity in the Workplace: Interests, Issues, and Opportunities
Edited by Susanne M. Bruyère and Adrienne Colella

Expatriates and Managing Global Mobility
Edited by Soo Min Toh and Angelo DeNisi

Senior Leadership Teams and the Agile Organization
Edited by Stephen J. Zaccaro, Nathan J. Hiller and Richard Klimoski

Tackling Precarious Work: Toward Sustainable Livelihoods
Edited by Stuart C. Carr, Veronica Hopner, Darrin J. Hodgetts and Megan Young

Computational Modeling for Industrial-Organizational Psychologists
Edited by Jeffrey B. Vancouver, Mo Wang and Justin M. Weinhardt
For more information about this series, please visit: www.routledge.com/Routledge-Handbooks-in-Religion/book-series
COMPUTATIONAL MODELING FOR INDUSTRIAL-ORGANIZATIONAL PSYCHOLOGISTS Edited by Jeffrey B. Vancouver, Mo Wang and Justin M. Weinhardt
Designed cover image: © Getty Images

First published 2024
by Routledge
605 Third Avenue, New York, NY 10158

and by Routledge
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2024 selection and editorial matter, Jeffrey B. Vancouver, Mo Wang
and Justin M. Weinhardt; individual chapters, the contributors
The right of Jeffrey B. Vancouver, Mo Wang and Justin M. Weinhardt
to be identified as the authors of the editorial material, and of the authors
for their individual chapters, has been asserted in accordance with
sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
With the exception of Chapter 1, no part of this book may be reprinted
or reproduced or utilised in any form or by any electronic, mechanical,
or other means, now known or hereafter invented, including photocopying
and recording, or in any information storage or retrieval system, without
permission in writing from the publishers.
Chapter 1 of this book is available for free in PDF format as Open
Access from the individual product page at www.taylorfrancis.com.
It has been made available under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC-BY-NC-ND) 4.0 license.
Trademark notice: Product or corporate names may be trademarks
or registered trademarks, and are used only for identification and
explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Names: Vancouver, Jeffrey B., editor. | Wang, Mo (Industrial psychologist), editor. | Weinhardt, Justin M., editor.
Title: Computational modeling for industrial-organizational psychologists / edited by Jeffrey B. Vancouver, Mo Wang and Justin M. Weinhardt.
Description: New York, NY : Routledge, 2024. | Series: SIOP organizational frontiers | Includes bibliographical references and index.
Identifiers: LCCN 2023034067 (print) | LCCN 2023034068 (ebook) | ISBN 9781032483757 (hardback) | ISBN 9781032483856 (paperback) | ISBN 9781003388852 (ebook)
Subjects: LCSH: Psychology, Industrial—Mathematical models. | Organizational behavior—Mathematical models.
Classification: LCC HF5548.8 .C5827 2024 (print) | LCC HF5548.8 (ebook) | DDC 158.7—dc23/eng/20230721
LC record available at https://lccn.loc.gov/2023034067
LC ebook record available at https://lccn.loc.gov/2023034068

ISBN: 978-1-032-48375-7 (hbk)
ISBN: 978-1-032-48385-6 (pbk)
ISBN: 978-1-003-38885-2 (ebk)

DOI: 10.4324/9781003388852

Typeset in Times New Roman by Apex CoVantage, LLC
CONTENTS
List of Contributors
Preface

PART I: The Call for Computational Modeling in I-O

1 Better Theory, Methods, and Practice Through Computational Modeling
Jeffrey B. Vancouver, Justin M. Weinhardt, and Mo Wang

2 Toward Integrating Computational Models of Decision-Making Into Organizational Research
Shannon N. Cooney, Michelle S. Kaplan, and Michael T. Braun

3 Computational Modeling in Organizational Diversity and Inclusion
Hannah L. Samuelson, Jaeeun Lee, Jennifer L. Wessel, and James A. Grand

4 Computational Models of Learning, Training, and Socialization: A Targeted Review and a Look Toward the Future
Jay H. Hardy III

5 Models of Leadership in Teams
Le Zhou

6 Using Simulations to Predict the Behavior of Groups and Teams
Deanna M. Kennedy and Sara A. McComb

PART II: Creating and Validating Computational Models

7 Agent-Based Modeling
Chen Tang and Yihao Liu

8 Computational Modeling With System Dynamics
Jeffrey B. Vancouver and Xiaofei Li

9 Evaluating Computational Models
Justin M. Weinhardt

10 Fitting Computational Models to Data: A Tutorial
Timothy Ballard, Hector Palada, and Andrew Neal

11 How to Publish and Review a Computational Model
Andrew Neal, Timothy Ballard, and Hector Palada

Index
CONTRIBUTORS
Dr. Timothy Ballard is Lecturer at the School of Psychology at the University of Queensland. His research focuses on using computational models to understand how humans make decisions in dynamic, complex, and uncertain environments. Dr. Ballard has published in many top I-O and psychology journals, including the Journal of Applied Psychology, Organizational Research Methods, Leadership Quarterly, Psychological Science, and Psychological Review.

Dr. Michael T. Braun is Associate Professor in the Department of Management & Entrepreneurship at DePaul University. Dr. Braun's research focuses on emergent leadership, team cohesion, team knowledge emergence and decision making, and modeling multilevel dynamics. His research articles have been published in numerous journals, including the Journal of Applied Psychology, Organizational Research Methods, and the Journal of Business and Psychology, among others. He is also the recipient of several awards, including the 2013 Organizational Research Methods Best Paper Award and the 2015 Owens Scholarly Achievement Award, and is a winner of the 2016 Emerald Group Publishing Citations of Excellence, all for work integrating multilevel theory and computational modeling.
Shannon N. Cooney is a Ph.D. student in Industrial-Organizational Psychology at the University of South Florida.

Dr. James A. Grand (Ph.D., Michigan State University) is Associate Professor in the Social, Decision, and Organizational Sciences Program at the University of Maryland. His main research interests focus on the interplay of knowledge-building, decision making, collaboration, and performance at the individual and team levels. A significant theme of his work lies in exploring and understanding the emergent processes underlying these mechanisms by theoretically, computationally, and experimentally investigating how the behaviors and cognitions of individuals create, change, and/or maintain dynamics at collective levels (i.e., teams, multi-team systems, and organizations) over time. His research has been published in the Journal of Applied Psychology, Leadership Quarterly, Perspectives on Psychological Science, Journal of Business and Psychology, Organizational Behavior and Human Decision Processes, and Organizational Research Methods, among others.

Dr. Jay H. Hardy is Associate Professor in the College of Business at Oregon State University. Dr. Hardy's research is in the field of human resource management (HR). His recent work has focused on (a) understanding how self-regulated learning processes can be leveraged for improving dynamic training and development interventions, (b) exploring the implications of the job applicant experience for shaping applicant behavior, and (c) applying simulation and computational modeling methodologies to understanding practical HR phenomena, such as learning and systematic bias in the workplace. His research has been published in the Journal of Applied Psychology, Personnel Psychology, Journal of Management, Journal of Experimental Social Psychology, Organizational Behavior and Human Decision Processes, and Human Resource Management Review, among others.
Michelle S. Kaplan is working on her Ph.D. at the University of South Florida.

Dr. Deanna M. Kennedy is the Associate Dean and Associate Professor in the School of Business at the University of Washington Bothell. Dr. Kennedy conducts behavioral operations research about groups and teams. She is interested in the use of interventions, technologies, and tools that facilitate group/team processes and lead to better task outcomes. Her research has been published in the Journal of Engineering and Technology Management, Decision Sciences, Production Planning and Control, Journal of Applied Psychology, and Journal of Organizational Behavior, among others. Dr. Kennedy, along with her colleague Dr. McComb, has a book titled Computational Methods to Examine Team Communication: When and How to Change the Conversation.
Dr. Jaeeun Lee (Ph.D., University of Maryland, College Park) is Assistant Professor of Psychology at Augsburg University. Her research interests include prejudice, discrimination, intersectionality, feminism, and other diversity-related topics. She is particularly interested in investigating the experiences of women of color, as seen in a recent Psychology of Women Quarterly article focused on feminist identity and career aspirations.
Dr. Xiaofei Li is Research Analyst at Toastmasters International. She received her Ph.D. from Ohio University in 2017. Her research focuses on using computational modeling methodologies to examine dynamic decision making, work motivation, and self-regulation in the workplace. She has published in Organizational Research Methods and Personnel Psychology.
Yihao Liu is Associate Professor in the Department of Management at the Terry College of Business at the University of Georgia. He received his Ph.D. in Management from the University of Florida. He primarily studies employees' adjustment and regulation at work, especially in three scenarios: 1) when employees encounter adverse work conditions, 2) when they face critical career challenges, and 3) when they work interdependently with others in teams and social networks. His research has appeared in premier management and applied psychology journals including Academy of Management Journal, Journal of Applied Psychology, and Personnel Psychology.
Dr. Sara A. McComb is Professor of Nursing at Purdue University. She has over 25 years of experience studying team communication and cognition across a variety of domains, including interactions among healthcare professionals, submarine personnel, and project teams; has garnered over $2.7M in external funding from agencies including the National Science Foundation and the Office of Naval Research; and has published in journals ranging from the Journal of Applied Psychology to IEEE Transactions on Systems, Man, and Cybernetics: Systems to the Journal of Advanced Nursing. Dr. McComb, along with her colleague Dr. Kennedy, has a book titled Computational Methods to Examine Team Communication: When and How to Change the Conversation.

Dr. Andrew Neal is Professor of Business and Organizational Psychology at the University of Queensland. Dr. Neal leads a large program of applied research into human performance and safety in complex environments. He has made major scientific contributions related to performance, safety, and effectiveness of people at work as well as workload, decision making, and self-regulation. Dr. Neal's highly cited research and reviews appear in top I-O, psychology, and human factors journals, including the Journal of Applied Psychology, Psychological Review, Annual Review of Organizational Psychology and Organizational Behavior, and the Academy of Management Journal, among others.

Dr. Hector Palada is Postdoctoral Research Fellow at the School of Psychology at the University of Queensland, where he also received his Ph.D. in 2020. His research focuses on decision making and resource allocation in dynamic, complex work environments. Dr. Palada has published in Organizational Research Methods, Journal of Experimental Psychology: Applied, and Journal of Experimental Psychology: Human Perception and Performance.
Dr. Hannah L. Samuelson (Ph.D., University of Maryland) is Senior Research Psychologist at the Department of Defense Office of People Analytics. Her research examines how negative interpersonal relations, workplace incivility, and leadership behaviors impact an individual's ability to thrive in their workplace and throughout their career. Dr. Samuelson has published in Leadership Quarterly and Industrial and Organizational Psychology: Perspectives on Science and Practice. Disclaimer: The views expressed in this publication are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

Chen Tang is Assistant Professor of Management in the Kogod School of Business at American University. Chen recently completed his Ph.D. at the University of Illinois Urbana-Champaign. His research interests primarily include HR analytics and research methods. Chen's work has been published in journals such as Journal of Applied Psychology and Personnel Psychology.

Dr. Jeffrey B. Vancouver is Professor and Byham Chair for Industrial-Organizational Psychology in the Department of Psychology at Ohio University, where he received the Presidential Research Scholar award for 2019. Dr. Vancouver received his degree in 1989 from Michigan State University (Department of Psychology). His research interests include motivation and decision making. Much of his work involves using complex protocols that generate time series data to assess dynamic computational models of a person or persons interacting with the environment (often an experimental protocol) based on a self-regulation perspective. His work has been published in top psychology and management journals such as Psychological Bulletin, American Psychologist, Journal of Applied Psychology, Personnel Psychology, Organizational Behavior and Human Decision Processes, Journal of Management, Applied Psychology: An International Review, Organizational Research Methods, and Human Resource Management Review.

Dr. Mo Wang is University Distinguished Professor and the Lanzillotti-McKethan Eminent Scholar Chair at the Warrington College of Business at the University of Florida. He specializes in research areas of retirement and older worker employment, occupational health psychology, expatriate and newcomer adjustment, leadership and team processes, and advanced quantitative methodologies. He received the Academy of Management HR Division Scholarly Achievement Award (2008); Careers Division Best Paper Award (2009); European Commission's Erasmus Mundus Scholarship for Work, Organizational, and Personnel Psychology (2009); Emerald Group's Outstanding Author Contribution Awards (2013 and 2014); Society for Industrial-Organizational Psychology's William A. Owens Scholarly Achievement Award (2016) and Joyce and Robert Hogan Award (2023); and Journal of Management Scholarly Impact Award (2017) for his research in these areas. He also received the Cummings Scholarly Achievement Award from the Academy of Management's OB Division (2017) and Early Career Contribution/Achievement Awards from the American Psychological Association (2013), Federation of Associations in Behavioral and Brain Sciences (2013), Society for Industrial-Organizational Psychology (2012), Academy of Management's HR Division (2011), Research Methods Division (2011), and Society for Occupational Health Psychology (2009). His research has received financial support from NIH, NSF, CDC, and various other research foundations and agencies. He is an elected Foreign Member of Academia Europaea (M.A.E.) and a Fellow of AOM, APA, APS, and SIOP.

Dr. Justin M. Weinhardt (Ph.D., Ohio University) is Associate Professor of Organizational Behavior and Human Resources at the Haskayne School of Business at the University of Calgary. His research examines motivation and decision making over time. In addition, his research focuses on developing theory through the use of computational models. His work has been published in journals such as Journal of Applied Psychology, Organizational Research Methods, Organizational Behavior and Human Decision Processes, Journal of Management, and Journal of Operations Management, among others.
Dr. Jennifer L. Wessel (Ph.D., Michigan State University) is Associate Professor of Psychology at the University of Maryland in College Park. Her research uses both laboratory and field designs to examine obstacles facing individuals at work who are in the minority or otherwise marginalized, with the overarching goal of providing high-quality research that can help organizations meet the needs of individuals from diverse backgrounds, as well as provide individuals from diverse backgrounds with tools to thrive personally and professionally at work. Her work has been published in peer-reviewed journals such as Journal of Applied Psychology, Academy of Management Review, Psychology of Women Quarterly, Sex Roles, Journal of Social Issues, and others. Her research has been funded by the National Science Foundation, Society for Human Resource Management, the Social Science Research Council, the Hewlett Foundation, and the Democracy Fund.

Dr. Le Zhou is Associate Professor at the Mays Business School of Texas A&M University. Her research interests include leadership, work groups and teams, self-regulation at work, and quantitative research methods. Her work has been published in the Journal of Applied Psychology, Personnel Psychology, Organizational Behavior and Human Decision Processes, Journal of Management, and Organizational Research Methods.
PREFACE
When the SIOP Organizational Frontiers Series was first created, it was intended to be a vehicle for initiating thought leadership with regard to those topics that were just arriving "on scene" but which could be best advanced by a carefully constructed and accessible treatment by well-informed researchers. While this aspiration has been borne out in the many volumes of the Series, a strong argument can be made that this offering on computational modeling might indeed well serve as the exemplar. It is both timely and forward-looking. Jeffrey B. Vancouver, Mo Wang, and Justin M. Weinhardt are not only respected scientists; they also have a way of translating a fairly complex topic into text sections and chapters that can be read and understood by readers who vary widely in background and preparation. Moreover, they have gone a long way to "de-mystify" the domain by providing key insights that both motivate and guide the interested reader. For example, they point out in their introductory chapter that "computational modeling is not a statistical technique, but rather, it's a way of theorizing." Furthermore, they assert that while it is indeed a "new paradigm" for industrial and organizational psychology, it may be the right approach to take at this time in our field's history, given the desire to research the kinds of issues or problems now facing society, especially when it comes to those found in the world of work. Along these lines, they further point out that computational modeling can improve our research even when done at differing levels of sophistication. Importantly, at the highest (most aspirational) level, it can serve to promote such desired outcomes as better theory integration and new ways of conducting experimentation or exploration, and it can facilitate achieving the goals of science, including the increased capacity for falsification and better prediction. An especially nice feature of the volume is that the editors and authors routinely offer caveats as to when and where to use the approach wisely. In doing so, they hope to better ensure that those championing computational modeling do not drift into over-claiming. Thus, for these and other reasons, it is easy to see that this volume has the potential to make a major contribution to improving our theories and our research in I-O Psychology. I hope that you will agree.

Richard Klimoski
George Mason University
PART I
The Call for Computational Modeling in I-O
1 BETTER THEORY, METHODS, AND PRACTICE THROUGH COMPUTATIONAL MODELING

Jeffrey B. Vancouver, Justin M. Weinhardt, and Mo Wang
It has been said that human behavior is much like the weather—totally unpredictable. Indeed, as recently as the 1980s, predicting where, when, and how intense a hurricane might be when it hit land was largely based on regression models that were of little practical use, particularly beyond 24 hours. However, after Hurricane Andrew hit the coast of Florida in 1992, the National Weather Service and other agencies began ramping up and refining computational models of hurricanes. Since then, predictions regarding hurricane trajectories and their intensities over those trajectories have improved dramatically, allowing local authorities to determine evacuation needs with enough certainty to obtain much more compliance than in the past. Meanwhile, climate models, which reside at a much higher level of abstraction and a much grosser timescale (i.e., are less granular) than weather models, have been instrumental in validating the causes and consequences of greenhouse gases, as well as providing a sobering forecast of the future with or without change in human behavior. These models have motivated authorities to make policies and set goals to try to address the issue.

On a different front, in 2020 the United States and many other governments shut down their economies to "flatten the curve" projected by a computational model of disease spread and death due to a coronavirus known as COVID-19. Within a year, a vaccine for the virus was produced, thanks in part to a computational model of coronaviruses. Indeed, the natural sciences rely heavily on computational and mathematical modeling to explicate their theories and provide a basis for applying those theories.

In each of the previous three scenarios, the computational models provided scientists and policy makers with not only better scientific knowledge but also more practical knowledge. However, although scholars got better at predicting
the weather, our ability to predict and understand human behavior has not progressed at the same rate. Thus, although we have useful computational models of climate change, hurricanes, and infectious disease spread, we also need computational models to help us understand and predict human responses to these same events. For example, a computational model of a hurricane and where it will touch down is of limited use if the people in the path of the hurricane do not leave the area. We must not only understand the hurricane, but we must also understand the people subject to the hurricane's wrath.

Thankfully, the quest to understand and predict human behavior with computational models has begun in earnest in psychology (Farrell & Lewandowsky, 2010; Sun, 2008) and other fields relevant to industrial-organizational (I-O) psychology like organizational science (e.g., Lomi & Larsen, 2001) and economics (e.g., Lee et al., 1997). Computational models are a set of rules (e.g., equations or formally specified propositions) used to represent processes. Moreover, the rules in computational models are represented in a way that can be simulated. The simulations reveal how behavior emerges from the processes represented and how those processes interact with one another over time. Fortunately, the ability to simulate computational models means one does not need to solve the equations to determine how the system will behave or what states it will predict. Indeed, one does not need to be a mathematician to work with computational models. However, one does need to be a student of the phenomena and systems one is studying.

One area of inquiry where phenomena arise from single or multiple systems is I-O psychology. Specifically, I-O focuses on the human system operating within an organizational system. Indeed, the field has developed a substantial amount of understanding of the phenomena regarding individuals and sets of individuals (e.g., teams) working within organizations, including a growing recognition of their dynamic, interactive quality. Toward that end, increasing attention has been paid to measurement, design, and modeling issues related to data collection and analysis (Wang et al., 2016). On the other hand, there has been relatively less recognition that dynamics, whether across multiple systems, time, or both, create challenges for theory specification. Fortunately, the field has begun to appreciate the usefulness and value of computational modeling when it comes to developing, representing, and communicating theory and the measurement, prediction, and applications that might emerge from this formal approach to theory representation (e.g., Kozlowski et al., 2013).

Still, that appreciation is more narrowly represented in the field than it should be, given the nature of the phenomena studied by I-O psychologists. To help expand appreciation for computational modeling, we developed this edited book. As discussed in more detail later, the book includes chapters that review key modeling achievements in several domains relevant to I-O psychology, including decision making, diversity and inclusion, learning and training, leadership, and teams. Moreover, the book provides specific how-to chapters on the two
most used modeling approaches in the field: agent-based modeling and system dynamics modeling. It also provides information about how to evaluate models qualitatively and quantitatively. Finally, it provides advice on how to read, review, and publish papers with computational models.

Meanwhile, to help motivate the current readers' interest in computational modeling, we provide a description of the myriad of values computational modeling can bring to our science. We also provide a brief history of computational modeling as it relates to the field of I-O psychology. This is followed by a more complete description of our goals for the book. That is, we describe the learning objectives for various levels of computational model aficionados, from scholarly consumers to computational model creators. Finally, we provide an overview of the chapters.

The Value of Computational Modeling
It is not difficult to articulate the various ways computational modeling can serve a science. Foremost, it is important to understand that computational modeling is not a statistical technique; rather, it is a way of theorizing. Most often, the theorizing represented in computational models began more informally as natural language or verbal theories (Busemeyer & Diederich, 2010). Computational modeling can provide formality and thus discipline to such theories (Farrell & Lewandowsky, 2010). This includes confirmation of a theory's viability and a greater degree of transparency regarding a theory's explanation. Moreover, formal modeling can facilitate integration, generalization, and differentiation of theories and constructs, given the abstract universality of mathematics. Computational models also provide tools for critiquing theory and interpreting findings, discovering new phenomena, guiding empirical needs and protocols, and providing better prediction and prescription (e.g., Harrison et al., 2007). We note that the value of computational modeling is not only via the representations of theoretical processes that can be simulated, given their formal rendering, but also in the process of computational modeling, from start to finish. That is, computational models have value, but computational modeling also has value, given the thinking processes, supported by the modeling platforms, that are needed to create a working model of the entities and phenomena of interest (Farrell & Lewandowsky, 2010; Hintzman, 1990).

However, before extolling the virtues of computational models further, a warning against applying the list of values too strictly is needed. Consider: I-O psychologists attempt to address an array of practical problems via the application of understanding of the phenomena of the individual or individuals in some context (e.g., work). This understanding is often derived from one or more general theories that appear relevant to the particular problem at hand. Additionally, it is possible to articulate multiple understandings or alternative
explanations. To support the application of a particular understanding, it is generally required to identify patterns in the data that are consistent with the understanding (i.e., hypotheses). These patterns must be substantiated by data that reject null hypotheses and be collected in a way that minimizes the presence of alternative explanations. What is generally not required is that the understanding be represented in a form that can generate the patterns of data presumed to follow from the understanding/theory. Rather, one must rely on an understanding of the theory and one's mental model to imagine the patterns that might appear in data (i.e., hypothesis generation) because typical understandings are expressed in informal ways.

This is important because when one is evaluating a computational model, one should be wary of asking the computational model to do more than a verbal theory does. This may seem ironic, given we are about to go into detail extolling the value computational models have over verbal theories. Yet, no computational model will be able to realize all the values. Our greatest fear in articulating a list of values is that reviewers will translate the list into a checklist when critiquing a computational model or modeling paper. Such a checklist is almost certainly going to doom the paper. Indeed, if one is tempted as a reviewer to start a sentence "aren't most computational models supposed to . . ." and end it with ". . . but this one does not," please resist. Many responses to reviewers' comments have been about explaining why some model or set of models in question should not be expected to realize this or that criterion. To be sure, all papers and presumably the models within them should add some value or there is no point to the model/paper. That is, the same criterion applied to all scholarly contributions should be applied to computational models: some value is added. The expectation is not, however, that a scholarly product (e.g., an empirical paper) could maximize all the desiderata (McGrath, 1981). This same affordance should be offered to computational modeling products. Meanwhile, we believe a lot of value will be added by a field that embraces computational modeling in general. We begin our discussion of the value of modeling at this very general level.

The Need for a New Paradigm
Heidegger (1927, 1996) proposed that a field is typically absorbed in its practices without questioning them or their limitations. Similarly, Kuhn (1962, 2012) proposed that most of the time scholars operate under "normal science," solving problems within the dominant paradigm. However, when there are significant anomalies, breakdowns, or disruptions, a field might reflect on its practices more closely. Currently, psychology has been reflecting on its practices in the face of issues related to replication, questionable research practices, lack of testable theories, theory proliferation, construct proliferation, weak prediction, and the generalizability of our theory and findings.
One outcome of such reflection can be a new paradigm. For us, that new paradigm is computational modeling. To be sure, we are not naïve enough to believe that computational modeling is the only solution to all the disruptions noted previously. Indeed, to some extent, some may think that computational modeling is more distinct from current ways of doing science than it in fact is. However, as explained in the following paragraphs, we believe computational modeling provides a path to addressing many of the issues, or at least a new perspective on them.

A Different Way of Thinking: Need for a Working Model
The first major specific benefit of modeling is that it forces one to think more completely about a phenomenon and the explanations forwarded about it. Indeed, Farrell and Lewandowsky (2010) point out that simply trying to construct a computational model may substantially improve one's reasoning about a phenomenon and/or a theory because simulations of the model directly test one's reasoning. That is, rather than just saying that if theory T is true, one ought to see effects E1–Ek, a computational modeler must show that effects E1–Ek occur in simulations of theory T. Such a process is a form of theoretical proof unavailable to natural language theories unless they are translated into computational models. Moreover, Farrell and Lewandowsky note that our higher-level reasoning skills are not typically up to the task of reasoning about how combinations of simple, dynamic processes work (see also Hintzman, 1990). Thus, if one does get a formal representation of a combination of such processes to work, one has at least a viable explanation, if not a valid one.

An example of the aforementioned value of computational modeling can be found in the debates among self-regulation theorists spanning decades. That is, critics of motivational control theories (e.g., Bandura, 1986; Bandura & Locke, 2003) argued that such theories could not explain discrepancy production (i.e., raising one's goal following previously successful performance) because the theory's core process is one of discrepancy reduction (i.e., applying resources until a goal level is reached). In contrast, they claimed, social cognitive theory could explain discrepancy production. Yet, Scherbaum and Vancouver (2010) noted that social cognitive theory only claimed that discrepancies were produced. Other than naming a construct involved (i.e., self-efficacy), it did not explain the mechanism that produced the discrepancies. Meanwhile, Scherbaum and Vancouver built a computational model that produced discrepancies using a combination of a few basic discrepancy-reducing structures (i.e., consistent with motivational control theory, Powers, 1973).
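To give a concrete flavor of what such a structure looks like, the following is a minimal sketch in Python of a single discrepancy-reducing loop. It is our illustration of the general idea, not a reproduction of Scherbaum and Vancouver's (2010) model; the goal level, gain, and number of time steps are arbitrary assumptions.

    # A minimal discrepancy-reducing (negative feedback) loop: the agent
    # compares its current state with a desired state (the goal) and applies
    # effort in proportion to the remaining discrepancy. All values are
    # illustrative assumptions, not parameters from the published models.

    goal = 10.0   # desired state (e.g., a goal level of performance)
    state = 0.0   # current state
    gain = 0.3    # fraction of the discrepancy closed per time step

    for t in range(20):
        discrepancy = goal - state     # compare desired vs. current state
        state += gain * discrepancy    # act to reduce the discrepancy
        print(f"t={t:2d}  state={state:5.2f}  discrepancy={discrepancy:5.2f}")

Simulating the loop shows the state closing in on the goal over time, which is discrepancy reduction; getting discrepancy production (goal raising) out of such a system requires combining further structures of the same kind, which is what Scherbaum and Vancouver demonstrated.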
In another case, Vancouver et al. (2020) translated goal theory, which is a presumably robust, practical theory of motivation (Locke, 1997; Locke & Latham, 2002), from a static, semi-formal model (i.e., a path diagram with its statistical equations at least implied) to (1) a static computational model and (2) a dynamic computational model. In the process of translating the model from static to dynamic, several presumably reasonable process functions at a key point in the dynamics (i.e., at the return in a feedback loop) were tested to see which might be viable, given goal theory was ambiguous on this point. Only one survived. That is, the process of building the models revealed gaps in specification, and simulations of possible formulations allowed the modelers to test what could and could not account for the phenomena the theory presumably explained. Indeed, modeling platforms, particularly ones devoted to theory development, can help substantially in this process, and the chapters in the second half of this book discuss these processes extensively.

The Benefit of Computational Models: Theory Integration
Several of the complaints about the state of I-O psychology and management science relate to theory proliferation. Yet, science is about simplifying nature to facilitate understanding, as well as to help predict and navigate nature better. Toward that end, theory proliferation has arguably become an impediment rather than a facilitator of that process. One way to reduce theory proliferation is theory integration. Fortunately, computational models can often facilitate theoretical integration by reducing theories to sets of mathematical equations and key, recurring structures. For example, Vancouver et al. (2010b) integrated classic theories of behavior (i.e., self-regulation theories) and classic theories of choice (i.e., subjective expected utility/expectancy theories) using one simple information processing structure. To be sure, the structure was repeated with different inputs to address the different purposes each individual structure had (e.g., multiple instances of one structure within one model). Subsequently, Vancouver and Purl (2017) used a variant of the aforementioned model to integrate the forethought processes key to Bandura's (1986) social cognitive theory with the feedback processes central to control theory models of self-regulation.

In another case of high-level integration, Vancouver et al. (2014) showed that the math used to represent the simple structure noted earlier was identical to the math used to represent another psychological process: supervised learning. Supervised learning occurs when one can compare an expected state with an observed state, known as the supervisor signal, to correct the internal, mental model used to create the expected state. Vancouver et al. (2014) added these supervised learning components to the 2010 model and thus provided an integrated model of goal pursuit and learning using only a single, simple, but repeated architecture. Indeed, this simple computational architecture matches, mathematically, connectionist (i.e., neural network) architecture (Rumelhart & McClelland, 1986). Meanwhile, specific neural network models have been applied to lots of specific learning problems. For example, Van Rooy et al. (2003) showed that the outgroup homogeneity effect (i.e., the impression that members of outgroups have less within-group variation than one's in-group) could be modeled as a function of learning and the differences in exposure to members of the respective groups. Higher-order processes (e.g., attributions) were not needed to explain the phenomenon. To be sure, the connectionist architecture likely does not account for all that is involved in human learning, but it does a good job of emulating how we do it most of the time. Indeed, it often learns better (or at least faster) than humans, such that it is now the basis of machine learning and practical artificial intelligence applications.

Continuing the process described earlier, Ballard et al. (2016) integrated a cognitive theory of dynamic decision making into the original goal pursuit model of Vancouver et al. (2010b). This was followed by Ballard et al. (2017) showing how another variant of the structure could account for how humans avoid unwanted states (i.e., avoidance goals) as well as approach desired states (i.e., approach goals). Finally, Ballard et al. (2018) generalized the original model to include the elements needed to explain how humans handle goals with different deadlines.

To be sure, despite the repetition of the simple structure, most of the models built look very complicated. Indeed, it would be foolish to expect anyone to look at the structures and equations and be able to explain what they could do without a detailed explanation or, better yet, interacting with the computational model within the software platform. That is, the core of a theory can be parsimonious, but the scope of the theory comprehensive. This paradox arises because the results of the interaction of the model parts (i.e., subsystems), even if simple, are beyond our imaginations.
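To illustrate the supervised-learning idea in miniature, the following sketch implements a delta-rule update for a one-cue internal model. It is a generic textbook mechanism offered for intuition, not the Vancouver et al. (2014) model; the learning rate and training pairs are arbitrary assumptions.

    # Supervised learning in miniature (the delta rule): an internal model
    # produces an expected state, the environment supplies the observed state
    # (the supervisor signal), and the mismatch corrects the internal model.
    # Learning rate and data are illustrative assumptions.

    weight = 0.0         # internal model: expected = weight * cue
    learning_rate = 0.1

    # (cue, observed state) pairs; the generating relation is roughly 2 * cue
    observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)] * 10

    for cue, observed in observations:
        expected = weight * cue                 # prediction from the internal model
        error = observed - expected             # expected vs. observed comparison
        weight += learning_rate * error * cue   # delta-rule correction

    print(f"learned weight: {weight:.2f}")      # settles near 2, the generating slope

The same compare-and-correct structure, stacked in layers, is essentially what connectionist (neural network) models scale up.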
The Benefit of Computational Models: Experimentation and Exploration

We noted earlier that computational models provide a proof of concept via a working model that produces the behavior one is attempting to explain with a theory. We also noted that to get to a working model, one might need to explore alternative functions or processes to find one that works. This exploration involves experimenting with the model (Davis et al., 2007). Another form of exploration and experimentation with a model can happen after a working model is produced. It often involves determining which subprocesses and parameters are key. This sensitivity analysis involves running simulations with various parameter values to assess the role the parameters play in the behavior of the model. Some might substantially affect the model behavior, and some might not. When the parameters can be linked to construct measures or manipulations, the results of sensitivity analyses can be used for developing hypotheses for empirical studies, including null hypotheses where variation in the parameter is found to have little effect on model behavior.
Beyond parameters, subprocesses can be tested by turning them on or off across simulations of the model to reveal whether they are key or not to some phenomenon. For instance, Vancouver et al. (2016) assessed several possible explanations for positively skewed distributions of performance using a dynamic computational model. The possible explanations were added as they built the model, stopping when they were able to represent a process that reproduced a referent data set (i.e., an empirical observation of the phenomenon). In contrast, Vancouver et al. (2020) removed components that were shown to be unnecessary when a static model was made dynamic. In this way, the addition of a dynamical perspective increased the parsimony of the theory without needing additional empirical work. Indeed, exploration and experimentation are especially useful when empirical data are difficult to acquire. This issue is more common with macro-level models, which Davis et al. (2007) describe to great effect.

Meanwhile, when empirical model testing is available, computational modeling can be very valuable in revealing the empirical patterns to look for when challenging or pitting theories. For example, Vancouver and Scherbaum (2008) used a computational model to first explicate the difference between self-regulating behavior and self-regulating perceptions, including how the models make different predictions, given an experimental protocol. They then implemented the protocol to see what the data revealed. Similarly, Vancouver et al. (2010a) used a computational model to illustrate how the typical empirical protocols used to evaluate a dominant theory of proactive socialization were not up to the task because researchers did not know that they needed to take the rates of processes into account when collecting data. Moreover, they showed how even time-series data might not be diagnostic because an alternative theory, which Vancouver et al. (2010a) also modeled, produced the same pattern of results. However, they subsequently used the models to illustrate what empirical protocols might differentiate the competing theories. These examples show how computational modeling can be used to critique as well as guide empirical protocols.
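As a toy illustration of a sensitivity analysis, the sketch below reuses the discrepancy-reducing loop from earlier and sweeps its gain parameter, recording how long the simulated agent takes to reach the goal. The model, parameter grid, and outcome measure are all arbitrary assumptions chosen for illustration.

    # Minimal sensitivity analysis: run the same toy goal-pursuit model under
    # several values of one parameter (the gain) and compare a summary of the
    # model's behavior (steps needed to nearly reach the goal) across runs.

    def steps_to_goal(gain, goal=10.0, tolerance=0.1, max_steps=1000):
        """Simulate the discrepancy-reducing loop until the goal is nearly met."""
        state = 0.0
        for t in range(max_steps):
            if abs(goal - state) < tolerance:
                return t
            state += gain * (goal - state)
        return max_steps  # goal not reached within the simulated horizon

    for gain in [0.05, 0.1, 0.3, 0.6, 0.9]:
        print(f"gain={gain:4.2f}  steps to goal: {steps_to_goal(gain)}")

In a real analysis, the swept parameter would be tied to a construct measure or manipulation, so that large or negligible effects on model behavior translate into directional or null hypotheses, respectively.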
The Benefit of Computational Models: Falsification and Improved Prediction

According to Popper (1963, 2014), what makes a model scientific is that it can be proven wrong (i.e., falsifiability). Of course, all models, whether verbal or computational, are wrong (Sterman, 2002), so usefulness is a better criterion. Still, unlike verbal theories, computational models must be internally consistent or simulations of them would not run, or they would produce behavior inconsistent with the phenomenon they presumably explain. Thus, the need to produce a working model provides an initial opportunity to falsify a theory (as well as undermine its usefulness). In addition, computational models are more
transparent than verbal theories (Adner et al., 2009; Weinhardt & Vancouver, 2012). Thus, whether mathematical or propositional, the explicit functions provide an opportunity to question the scientific plausibility of a function or process (i.e., set of functions). Moreover, via exploring alternative functions or processes, one might find alternatives that also work, alternatives that might (or might not) change the behavior or predictions of the model as revealed in simulations of the alternatives. Via these differences, one might be able to develop protocols to see which or when (i.e., under what conditions) models make better predictions. We described this process earlier in the section on exploration and experimentation.

Yet another benefit of computational models is improved prediction (Adner et al., 2009). This not only increases the ability to falsify, but it also increases the usefulness and thus reduces the likelihood the model will be discarded. To explain, Meehl (1990) noted that because many constructs tend to be interconnected, it is relatively easy to find support for theories. As such, demonstrating a relationship between constructs is often a straightforward task, and thus theories that simply predict some relationships are likely to be confirmed. However, computational models improve our ability to make predictions (i.e., ones that move beyond predictions of mere association). Computational models can provide predictions not only of the relationships and their direction but also of the levels of variables over time and the shape of the relationship between variables and/or over time. To be sure, this feature depends on the quality of inputs when constructing a model (e.g., are the rates and delays of processes well-known?) as well as the quality of inputs into simulations or model fitting exercises. For instance, computational models of emotion intensity onset and decay are hampered by imprecise knowledge of such rates (Hudlicka, 2023). Still, a computational model will be much better positioned to render precise predictions when the measurement models allow it.

Yet, even when measurement and design are lacking, computational models can improve prediction via competitive prediction. Although our field boasts a wealth of theoretical and empirical contributions, it frequently lacks competition among theories. That is, merely identifying a theory that accounts for a particular phenomenon in isolation does not align with the fundamental principles of scientific inquiry (Gigerenzer & Brighton, 2009). Instead, the pursuit of science requires the identification of theories that surpass existing ones in accounting for a particular phenomenon or expanding the scope of what has been explained (Popper, 1959, 2002). Therefore, it is crucial to evaluate theories by contrasting them with alternative theories to determine their validity and comprehensiveness in explaining a phenomenon.

To realize the benefits of falsifiable models and improved prediction, one needs to look no further than our earlier discussion of the many competing models of multiple-goal pursuit. In less than 10 years, the original multiple-goal pursuit model (Vancouver et al., 2010b) has lost in competition to modifications of itself or has simply expanded its predictive power via integration with other models across six papers, all to better account for human behavior. This rapid progress would not likely have occurred if not for the theories being computational. Meanwhile, computational models clearly eliminated some explanations for phenomena. For example, on the way to providing a sound person-environment interactional explanation for the positive skew typically found in the distribution of performance across individuals, Vancouver et al. (2016) eliminated several possible person-centered explanations.

Meanwhile, pitting theories against each other empirically assumes that the explanations are not both true. For example, Ballard et al. (2018) pitted two explanations for increasing motivation as a deadline approached. They found evidence supporting both explanations. More important, they were able to represent the two explanations within a single, relatively simple (i.e., conceptually parsimonious) computational model of goal choice and striving. As noted earlier, this redundancy of processes is commonly found in biology, but it is rarely recognized in verbal theories in psychology.

Even in cases where alternative descriptions of human behavior presumably predict different effects, computational modeling can provide the path to unification. For example, Vancouver and Purl (2017) used their computational model to explain when one might see a positive, negative, null, or curvilinear relationship between two constructs as a function of what appeared to be two competing theories. The model revealed the logic, and the contingencies, under which the various empirical patterns might arise while also reconciling the theories. More recently, Shoss and Vancouver (2023) presented a model that included three countervailing processes that lead to opposing hypotheses and yet likely work in concert. Such configurations of apparently incongruous parts operating harmoniously within a single system are difficult to describe verbally or to believe workable without demonstration. Computational modeling can address those limitations of conventional theorizing.
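The sketch below illustrates competitive prediction at its simplest: two hypothetical process functions are simulated forward and scored against the same observed trajectory. Both candidate models and the "observed" series are invented for illustration, not drawn from the studies above.

    # Minimal competitive model evaluation: simulate two candidate process
    # functions over the same horizon and ask which better reproduces an
    # observed trajectory. Data and models are illustrative assumptions.

    observed = [0.0, 3.0, 5.1, 6.6, 7.6, 8.3]   # hypothetical performance over time

    def simulate(update, steps=6, goal=10.0):
        """Generate a trajectory by repeatedly applying a candidate update rule."""
        state, trajectory = 0.0, []
        for _ in range(steps):
            trajectory.append(state)
            state = update(state, goal)
        return trajectory

    def model_a(state, goal):   # candidate A: proportional discrepancy reduction
        return state + 0.3 * (goal - state)

    def model_b(state, goal):   # candidate B: constant-rate progress toward the goal
        return min(state + 1.0, goal)

    for name, model in [("A (proportional)", model_a), ("B (constant rate)", model_b)]:
        sse = sum((o - p) ** 2 for o, p in zip(observed, simulate(model)))
        print(f"model {name}: sum of squared error = {sse:.2f}")

Chapter 10 (Ballard, Palada, and Neal) treats the serious version of this exercise, fitting computational models to data with proper estimation and model-comparison criteria.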
The Benefits of Computational Modeling: Final Thoughts

To summarize the earlier arguments, computational modeling is needed to realize the promise of science, which is to simplify nature. Ironically, computational models are often seen as so complex as to be of little scientific value (i.e., the Bonini paradox). However, upon further consideration, the ability of computational models to integrate and generalize via repetitions of simple agents, whether subprocesses or actors with a small rule set, offers the desired advantage of conceptual parsimony. To be sure, sometimes the modeling reveals where differentiation is needed (i.e., self-regulated behavior ≠ self-regulated perception; Vancouver & Scherbaum, 2008) or where systems have functional redundancy
across two or more subprocesses (e.g., Ballard et al., 2018), but that too is part of science. However, more often they reveal conceptual overlap via the abstract math used to build the model.

One Nobel Prize-winning chemist described the desired state of formal modeling as one where "the models become modules in the theoretical erector set, shuttled into any problem as a first (not last) recourse" (Hoffmann, 2003, quoted in Rodgers, 2010, p. 9). The sentiment expressed by this Nobel Prize-winning scientist in a mature field of inquiry is the goal we have for I-O psychology and management science. Like many sciences, psychology and management science focus on and try to understand complex systems interacting over time, nested in a cascade of higher-order systems, and utilizing a cascading set of subsystems. Computational models are built to evaluate explanations of some small part of that network of systems over some presumably relevant timescale, depending on the problem at hand. Eventually, the templates of the models can be used for new parts or problems, or the models themselves can be combined to address more complex parts of some whole. Moreover, because the models are computational, we do not need to rely on humans' limited (but still very impressive) computational prowess because we have built the supporting tools needed to utilize the understandings developed.

Of course, to get there we need to train ourselves and future scholars in our field to understand the tools and what they can do for us. Unfortunately, for the uninitiated, little about computational modeling seems simple. Moreover, no one person will likely be able to understand all modeling approaches, much less all models. As we attempted to articulate earlier, the value of computational modeling is not merely in its products but also in its process. No paper on computational modeling should be expected to realize more than one or two of the values we described earlier, and some papers might not explicitly realize even one. That is, once one learns to computationally model, one might merely use it to check one's thinking about how a theory or process would play out, and thus whether the paradigm one wants to implement to challenge some explanation is up to the task, without ever publishing the model. Indeed, one might not want to share with the world one's own skepticism about one's reasoning prowess, even though most modelers would say such humility is well-placed (Sterman, 2002). That said, in the next section, we review key examples of I-O psychologists laying bare their skepticism regarding their reasoning prowess.

Brief History of Computational Modeling in I-O Psychology
The history of computational modeling is a relatively short one, especially in I-O psychology. In general, computational modeling only emerged in psychology and management science in the second half of the 20th century (Simon, 1969). For example, Forrester (1961) pioneered system dynamics modeling, which was used to address macro-organizational issues like managerial decision making (e.g., Sterman, 1989) and organizational learning (e.g., Senge, 1990). Using a more general modeling approach, March (1991) considered some dynamics related to organizational learning. Lomi and Larsen (1996) examined the dynamics of organizational populations. Hanisch et al. (1996) modeled the macro (i.e., organizational) implications of different possible withdrawal processes. In that same year, Martell et al. (1996) published a paper in American Psychologist describing simulations of a model in which small but accumulating adverse effects of bias on promotions for women could account for the absence of women in the highest offices of organizations. Also around this time, papers related to dynamic decision making and feedback processes at the individual level were being published in Organizational Behavior and Human Decision Processes (e.g., Gibson et al., 1997; Sterman, 1989).

On a more basic, psychological level, John R. Anderson (1983) developed a propositional computational model of human learning and behavior, called ACT theory, that informed theories of learning and training. At an even more granular level, a great deal of attention was paid to neural network models as a way of understanding learning (e.g., McClelland & Rumelhart, 1981; Anderson, 1996). As noted earlier, those models are now the backbone of machine learning algorithms.

Seeing the promising but relatively low use of computational modeling in the field at the turn of the century, Ilgen and Hulin (2000) published an edited book about computational modeling of behavior in organizations. The book included several chapters from well-known researchers presenting relatively simple computational models on many topics of potential interest to I-O psychologists, like the effects of faking on personality measures in selection (Zickar, 2000), effects of pay-for-performance systems (Schwab & Olson, 2000), group decision making (Stasser, 2000), applications of a modeling system (i.e., Petri nets) to various problems (Coovert & Dorsey, 2000), cultural norm formation (Latané, 2000), organizational change (McPherson, 2000), organizational adaptation (Carley, 2000), and other implications of the withdrawal processes model (Hanisch, 2000). Unfortunately, the book had limited impact on the field. Part of the problem may have been that the learning curve seemed too steep and the benefit too obscure to put in the time to learn about computational modeling and what it might do for one's science and career.

In another attempt to promote computational modeling to the mainstream of organizational and management research, Academy of Management Review (AMR) published a special issue on computational modeling in 2007. Two important and highly cited articles focusing on the process of computational modeling were published in that issue (Davis et al., 2007; Harrison et al., 2007). Unfortunately, this did not lead to an increase in computational models in AMR, largely because of a limited audience amenable to appreciating such models.
Indeed, it seems a critical mass of scholars who can understand, appreciate, and develop computational models is needed to realize their potential and mature the discipline. Still, macro computational models were being published in other well-respected journals in management, like Administrative Science Quarterly and Organization Science. Yet most were published in journals specializing in computational or mathematical modeling, like System Dynamics Review, which started in 1985, and Computational and Mathematical Organization Theory, which started in 1995. Indeed, with the occasional exception in OBHDP, top I-O and management journals were not publishing micro- or meso-level models.

However, something changed in 2010. First, Vancouver et al. (2010a) published a computational model in the Journal of Management. In the paper, three related computational models were used to explain a paradox in the socialization literature by highlighting how the empirical paradigms used to evaluate the major theories of proactive socialization were not diagnostic, given the dynamics inherent in the theories. Two of the models were at the individual level, and one was a meso-level model representing an employee-supervisor dyad. Second, Vancouver et al. (2010b) built a computational model of an individual pursuing multiple goals and used it to explain an interesting but unexplained finding in an earlier empirical paper (Schmidt & DeShon, 2007). The model, though it invented little new theory, addressed a large theoretical gap in the literature on work motivation. The model was built using combinations of a simple, single architecture (i.e., the negative feedback control system) that integrated and made dynamic several traditional theories of motivation (e.g., expectancy theory, goal theory). The paper also thoroughly explained the model-building and model-evaluating process. Finally, Dionne et al. (2010) published a computational model in The Leadership Quarterly that focused on leadership, shared mental models, and team performance. This meso-level model also provided a way to address some longstanding theoretical gaps while employing a new, dynamic twist on existing theories of leadership and teams.

Since 2010, many more computational modeling papers have been published in I-O's top journals. Indeed, the first set of chapters in this book reviews computational modeling work in decision making as it applies to organizations (Cooney et al., Chapter 2, this volume), diversity and inclusion (Samuelson et al., Chapter 3, this volume), training and socialization (Hardy, Chapter 4, this volume), leadership in teams (Zhou, Chapter 5, this volume), as well as teams and groups more generally (Kennedy & McComb, Chapter 6, this volume). In some cases, like the work on decision making, most of the modeling has been around for a while but not applied to I-O psychology problems. In other cases, like the work on diversity and inclusion, training, leadership, and teams, much of the modeling is more recent (e.g., within the past 10 years). Still, this work has both computational and theoretical roots that have long been established.
Perhaps most importantly, all the chapters highlight the vast amount of work that needs to be done, given the nascent nature of the computational approach within the field. Fortunately, top I-O journals have now opened their doors to computational modeling papers, making the presumably steep learning curve worth climbing not only for those interested in applying the tool to their projects but also for those reviewing and reading those journals. Still, to get a critical mass of modelers and a sophisticated audience, training is needed.

Computational Modeling Training: Levels of Schooling
As noted earlier, computational modeling has great potential to facilitate our understanding of organizational phenomena. However, the extent to which this potential can be realized depends on whether there exist "well-trained craftspersons" to leverage the strength of the "tool." Apprentices and journey-level workers, in the form of sophisticated readers, reviewers, and researchers, through to masters, in the form of model builders, need to be trained before the trade that is computational modeling can be realized. Indeed, there may be heterogeneous training goals contingent upon one's specialized role within the trade. In the following sections, we summarize what is expected of readers and reviewers, researchers, and modelers to support the development of computational modeling in applied psychology and management.

Apprentices: Sophisticated Readers and Reviewers of Computational Modeling Papers
The first type of craftsperson is the reader or reviewer of computational modeling papers. It might seem odd to include reviewers, who are supposed to be experts, with readers, who are not expected to have such qualifications. However, reviewers are often chosen for their expertise in the topic but not necessarily in the method or analysis used. Still, we want any reviewer, like most readers, to be able to (1) know the conventional vocabulary of computational modeling; (2) understand what goes into creating, validating, and evaluating the model; and (3) understand how the model can help in understanding, studying, predicting, or controlling the phenomenon of interest.

First and foremost, computational modeling involves a systematic repertoire of vocabulary to ensure conceptual ideas and theoretical logic are communicated formally and precisely. Some of this vocabulary speaks to the nature of constructs. For example, dynamic variables (also called level or stock variables) refer to variables that exhibit inertia and only change values when external forces are applied (Forrester, 1968; Vancouver & Weinhardt, 2012; Wang et al., 2017). This feature is extremely important in the modeling of time but is difficult to illustrate precisely in verbal theories.
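To make the stock-variable idea concrete, here is a minimal sketch in R (one of the platforms mentioned later in this chapter). Everything in it, from the variable names to the 0.5 gain, is our own illustrative assumption rather than a published model: the stock carries its value forward from one time step to the next (its inertia) and changes only through the flow that accumulates into it.

```r
dt    <- 0.1                      # time step for the simulation
times <- seq(0, 10, by = dt)
goal  <- 100                      # exogenous variable: set from outside the model
state <- numeric(length(times))   # a dynamic (stock/level) variable, starting at 0
gain  <- 0.5                      # hypothetical rate parameter

for (t in 2:length(times)) {
  flow     <- gain * (goal - state[t - 1])   # flow driven by the goal-state discrepancy
  state[t] <- state[t - 1] + flow * dt       # the stock integrates (accumulates) the flow
}
plot(times, state, type = "l", xlab = "Time", ylab = "State")
```

Setting gain to 0 makes the inertia visible: with no flow acting on it, the stock simply holds whatever value it has.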
Other vocabulary is borrowed from related domains but carries particular importance in computational modeling, such as the exogenous and endogenous variables that delimit the core mechanisms included in the model. That is, endogenous variables are affected by other variables in the model, whereas exogenous variables are not.

Also, like informal theories, computational modeling uses specialized vocabularies and graphics to depict relationships among variables. The central difference is that, in addition to verbal/graphic depictions, variable relationships are also represented in mathematical form. However, the mathematical functions can be more complex than the four simple operations (addition, subtraction, multiplication, and division), and they are conventionally presented such that the variables involved in a function all point at the construct (i.e., the result of the function). Thus, when depicting a computational model, one should not expect to see an arrow pointing at an arrow to depict a multiplicative or moderating function. Instead, consult the actual function. Also, the functions can reveal which variables are dynamic, given that integration and/or differentiation with respect to time is frequently used to capture dynamic processes. Further, there is specialized vocabulary in computational modeling for model evaluation (Taber & Timpone, 1996), including internal validity (i.e., how well the model translates the corresponding verbal theory), outcome validity (i.e., model fit), process validity (i.e., the correspondence between the model's mechanisms and the actual processes being modeled), and sensitivity analysis (i.e., how changes in parameter values affect model predictions).

Arguably, the conventional vocabulary of computational modeling can be an entry barrier that requires some specialized effort, but such effort is worth it not only for those seeking to convey precise, unambiguous, and comprehensive theories but also for those seeking to benefit from, build upon, or question such models. As Vancouver and Weinhardt (2012: 605) stated, "If mathematics is the language of science, and computational models are an expression of that language, scientists should be at least familiar with it," and "the best way to do this is to use the language."

After learning the conventions (e.g., vocabulary), the second objective is to learn the processes. Computational modeling involves systematic procedures for building and evaluating the model. In this regard, Vancouver and Weinhardt (2012) offer a comprehensive summary starting from problem identification and ending with model evaluation. Weinhardt (Chapter 9, this volume) offers a comprehensive evaluation framework that outlines the work one must do to evaluate models. Here we offer a quick overview.

To begin, the problem of interest is mostly based on an existing conceptual framework that depicts a dynamic process (Busemeyer & Diederich, 2010). Further, the problem is usually defined in a narrow but precise way. This reflects the likely complexities that will arise once one begins to represent the processes needed and clarifies how the computational model contributes to theory refinement.
Problem selection is followed by system definition, typically including determining the units of analysis, the time scale and boundary of the dynamic process, and the variables involved. These aspects can be partly determined by the theoretical framework adopted. However, some theories (especially verbal theories) can have much broader scopes than the bounds of typical computational models, requiring decisions about which part(s) of the theories to model. Then the model is built based on the variables and parameters identified. Specifically, relationships between variables are expressed both graphically and mathematically, and multiple software platforms are available to facilitate the process (e.g., C++, Python, MATLAB, Vensim, R).

The model is then evaluated via various approaches and criteria. For example, simulations can be run to see whether variables exhibit reasonable dynamic patterns (e.g., reaching an equilibrium level as opposed to increasing or decreasing infinitely, unless that is what is observed [e.g., wealth accumulation]). The model's predictions can be tested against empirical data qualitatively (e.g., whether the empirical data exhibit the patterns predicted by the model) or quantitatively (e.g., the accuracy or fit of the model after parameter estimation). The model can also be compared to alternative models to identify the relevance of different theoretical mechanisms. In this book, we cover both qualitative (Weinhardt) and quantitative (Ballard et al.) model evaluation, but as is always the case with scholarly work, the evaluation depends on the purpose, and the purpose depends on where value is likely to be added to the existing literature. That said, in general, a formal rendering of theory adds value when so little formal modeling exists.

Third, besides the terminology and the modeling process, it is also important to understand what the model does to illustrate the phenomenon of interest. Generally, as a particular type of formal theory, computational modeling has unique advantages for theorizing about and representing complex systems. This can be illustrated by (1) comparing formal theories to informal theories and (2) comparing computational modeling to analytic formal theories (Adner et al., 2009). For example, compared with informal theories, formal theories enable theoretical clarity and a deeper understanding by providing descriptions of the process details of complex dynamic phenomena. That is, because the set of processes needed to produce the phenomena must be specified at least somewhat, computational models have less "handwaving" (i.e., processes left unexplained) than informal theories. Of course, computational models will often have to live with black boxes and simplifying assumptions, even when attempting to illuminate a black box or test an assumption (e.g., any information processing theory must contend with our vague understanding of how information is represented in the mind). Meanwhile, when compared with analytic formal theories, an advantage of computational models is that they can handle complex dynamic processes with causal loops and nonlinearities. These features of the phenomena that I-O psychologists study are often intractable using pure analytic approaches (Wang et al., 2016).
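As an illustration of both points (checking whether a simulated variable settles into a reasonable pattern, and handling causal loops with nonlinearities numerically), the following sketch, again our own with hypothetical parameter values, simulates a variable whose growth feeds back on itself but saturates at a capacity. The S-shaped trajectory and its equilibrium fall out of a short loop, whereas deriving them verbally, or even analytically in more elaborate versions, is far less straightforward.

```r
dt       <- 0.1
times    <- seq(0, 20, by = dt)
skill    <- numeric(length(times))
skill[1] <- 1                     # small nonzero start so growth can begin
rate     <- 0.6                   # hypothetical growth parameter
capacity <- 100                   # hypothetical saturation level

for (t in 2:length(times)) {
  # causal loop with a nonlinearity: skill drives its own growth, damped near capacity
  growth   <- rate * skill[t - 1] * (1 - skill[t - 1] / capacity)
  skill[t] <- skill[t - 1] + growth * dt
}
plot(times, skill, type = "l")    # S-shaped curve settling at an equilibrium of 100
```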
Thus, to understand what a computational model does, readers and reviewers should pay special attention to the unique and incremental theoretical contribution of computational modeling above and beyond informal theories and analytic formal theories. Relatedly, the processes (i.e., the mechanisms) specified in the model should be carefully examined for scientific plausibility, which sheds light on what the model does, whether the model has fidelity, and how the model refines existing theories.

Of course, a more sophisticated reader and reviewer of papers describing computational models will have experience working with such models. This can include working with existing models to find ways to validate or challenge them; however, to truly understand models, one should build them. This book will help you do that. Starting with some simple examples is necessary to build the skills needed to start creating your own models. Still, most of the work of model building is likely about extending the models of others by generalizing existing models or applying existing architectures to new problems. Eventually, one may feel existing architectures are insufficient for tackling certain kinds of problems and feel inspired to create one's own architecture, but it is more likely that one will use simple functions and modules to represent somewhat standard processes or rules that, when cobbled together into a coherent formal description of a process, will produce a value-added contribution to the field.

For example, I-O psychology and management are replete with process theories of domain-relevant phenomena. A key mission of craftspersons is to vet these process theories computationally. Such vetting requires (1) demonstrating whether the core processes of the model can explain the phenomenon of interest and (2) providing core structures that can be expanded for evaluating interventions and boundary conditions or extensions (i.e., moderators). To find where extensions are needed, one need only consult the discussion sections of published computational models. That is, like empirical papers whose discussion sections talk about needed future theory and research, modeling papers talk about research that might challenge the model and the extensions needed to further the scope of what the model can handle. Of course, researchers are free to identify other substantial theoretical limitations or extensions. For example, presumed moderators should not suggest the bounds of a theory or model, but the territory into which the model can expand. Meanwhile, no modeling endeavor can handle all that needs to be represented, just as no empirical paper should be considered the final say on some empirical question. Modelers need to provide ideas regarding where a model needs to go as well as what research might challenge it. They, and the reviewers of their papers, need to be okay with leaving this work to future research (e.g., Zhou et al., 2019).

At the same time, readers and reviewers need to retain a skeptical attitude. An unsophisticated readership or reviewer may be too easily impressed by a computational model because it is beyond their comprehension. We see this happen too frequently with sophisticated statistical techniques, especially when new (e.g., interpreting causal conclusions drawn from structural equation models as internally valid).
We do not want readers and reviewers rejecting a computational model due to myths regarding computational models (i.e., that they are more complex than the phenomena they seek to explain), but neither do we want them acquiescing to a computational model out of blind faith in the accuracy and integrity of the process. It is just a tool like any other.

Journey-Level: Researchers Who Can Design Studies to Test Models Empirically or Via Simulations
The second type of craftsperson is the researcher who can design studies to test computational models empirically or via simulations. This is needed because science is a system, just like the phenomena scientists study. Just as computational models often depict iterative or circular processes, so too would a good model of the scientific process. Thus, the linear process described by Vancouver and Weinhardt (2012) is merely an arc in a scientific problem-solving process. Few vetted models should be considered settled models. Imperfection should not keep a model from being published, and a published model should not be the last word. Indeed, one reason for a large set of sophisticated readers-to-builders is to make sure the self-correcting nature of science can operate. In between readers and builders are those who vet computational models derived from process theories. Here again, the vocabulary can be daunting because of negative transfer from conventional usage and the use of qualitative terms where clean distinctions are hard to come by.

For example, there exist, and we have chapters devoted to, quantitative (i.e., parameterizing) and qualitative (i.e., pattern-matching) fitting methods for model evaluation (Busemeyer & Diederich, 2010). Quantitative fitting involves estimating the free parameters in the model. In line with common procedures in statistical modeling, parameterizing computational models is usually done by minimizing some type of loss function (e.g., least squares estimation) or maximizing some type of likelihood function (e.g., maximum likelihood estimation). Notably, since each unit (e.g., individual) tends to exhibit a unique dynamic pattern, parameter estimation may be repeated for each individual, though Bayesian methods are now commonly employed to handle this multilevel analysis issue (see Ballard et al., Chapter 10, this volume).
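The basic mechanics of least squares fitting can be sketched in a few lines of R. The simulate_model() helper, the generating value of 0.15, and the noisy "observations" below are hypothetical stand-ins of our own, not any published model; the point is simply that a free parameter is estimated by minimizing a loss function.

```r
# A one-parameter discrepancy-reduction model (hypothetical, for illustration)
simulate_model <- function(gain, n = 50, goal = 100) {
  state <- numeric(n)                       # trajectory starts at 0
  for (t in 2:n) state[t] <- state[t - 1] + gain * (goal - state[t - 1])
  state
}

set.seed(123)
observed <- simulate_model(gain = 0.15) + rnorm(50, sd = 3)   # stand-in "data"

loss <- function(gain) sum((observed - simulate_model(gain))^2)  # least squares
fit  <- optim(par = 0.5, fn = loss, method = "Brent", lower = 0, upper = 1)
fit$par   # estimated gain; should land near the generating value of 0.15
```

Maximum likelihood works the same way, with a negative log-likelihood replacing the sum of squared errors as the function handed to the optimizer.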
Parameters obtained from the optimization algorithm can be examined directly (e.g., the distribution of each parameter, the covariation between different parameters). However, a more important function of parameter estimation is to provide a model fit index capturing the extent to which the model fits the observed data (Vancouver & Weinhardt, 2012). Specifically, by freely estimating some parameters, the optimization algorithm can find the best fit between model-expected outcomes and empirically observed outcomes. A fit significantly better than chance suggests that the model is effective (e.g., Vancouver et al., 2005), especially if the model fitting algorithm can recover the parameters used when data are generated from the model (see Ballard et al., Chapter 10, this volume). That is, best practice includes fitting data generated from a model that represents the nature of the measures, the sample size, and the design elements before attempting to fit observed data. This will show whether the model fitting algorithm will work reliably given the model. This is most important if using the model for application (i.e., for obtaining parameter values for a unit to be used for understanding or prediction regarding that unit). When assessing the validity of the model, knowledge of the reliability of parameter estimates is useful to prevent over- and under-interpretation of the results of model fitting. It can also be useful for assessing the diagnostic power of an empirical design for assessing parameters. Typically, the data generated from a protocol might allow for the reliable estimation of one or two parameters, but it may not do a good job allowing other parameters to be estimated, because some subprocess may or may not be involved in the protocol (e.g., Ballard et al., 2018). In general, typical experimental and passive observational designs, even if longitudinal, will struggle to produce reliable estimates if very many processes are involved. For example, Vancouver et al. (2010b) illustrated how multiple parameters could explain the same individual differences in goal-striving behavior over time.

Another stringent test is to pit the proposed model against alternative models (not limited to a null or saturated model) to demonstrate its superiority. One challenge of this approach is that computational modeling has not been widely adopted in the field of organizational science, so there are hardly any existing models to compare with. Nevertheless, researchers have sought to develop alternative models on their own. For example, Zhou et al. (2019) compared their proposed model to several alternative models derived from different theoretical perspectives. Such comparisons are valuable in deepening our understanding of the core theoretical mechanisms behind observed phenomena. They, like the model fitting studies described earlier, are likely not the last word on a model, given the theoretically infinite number of alternative models and the inability of most data collection enterprises to constrain the possibility of fitting models.
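A minimal version of that parameter-recovery check, reusing the hypothetical simulate_model() helper from the earlier fitting sketch, might look like this: generate many synthetic datasets from known parameter values, refit each one, and inspect the bias and spread of the resulting estimates.

```r
set.seed(456)
true_gain <- 0.15
recovered <- replicate(200, {
  fake <- simulate_model(true_gain) + rnorm(50, sd = 3)   # one synthetic "study"
  loss <- function(g) sum((fake - simulate_model(g))^2)
  optim(par = 0.5, fn = loss, method = "Brent", lower = 0, upper = 1)$par
})
c(bias = mean(recovered) - true_gain, sd = sd(recovered))  # accuracy and precision
```

If the estimates are badly biased or wildly variable under the planned sample size and noise level, fits to the real data should not be trusted, whatever their apparent quality.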
As noted, quantitative model fitting is a sophisticated and often difficult enterprise, given the qualities of the data needed, especially if the model has several unknown (i.e., free) parameters that must be estimated. Fortunately, there is much one can do under the qualitative model fitting moniker to evaluate a computational model. Indeed, qualitative approaches to model fitting are a broad category, with some elements looking quite close to what one would consider a quantitative procedure in typical research projects. For example, in our field, a hypothesis derived from theory and/or past research merely describes a pattern (i.e., some effect exists and is of a certain sign, as in "the correlation should be positive"). That is, the question asked is whether one or more patterns found in the data match what a theory would expect the pattern(s) to look like. Toward that end, some statistical model is used based on the pattern one wants to examine, and then the resulting statistics are compared to the predicted patterns (i.e., the hypothesis). Thus, qualitative model fitting often involves quantitatively estimated effects.

Meanwhile, which comes first, the data or the model, mostly determines only whether the process is called prediction or postdiction (Taber & Timpone, 1996; Vancouver et al., 2010b). Given that theories are derived to explain observations (i.e., inductively or abductively), and computational models are representations of theories, postdiction is often a fine choice, especially if there is a lot of research on the topic. Indeed, model builders often need to confirm that their computational model can reproduce phenomena and effects found in the existing literature (Vancouver et al., 2010b; Vancouver & Weinhardt, 2012; Vancouver et al., 2020). At the journey level, we would expect to see researchers applying existing models to existing data sets to assess the generalizability or applicability of the model to phenomena. This typically requires tweaking little in the model of the unit, which is typically what the model is about, but something in the model of the environment or object (e.g., task) with which the unit is interacting. For example, Vancouver et al. (2014) assessed a model against a dataset described by DeShon and Rench (2009) after tweaking it to represent the DeShon and Rench protocol. The model was built by Vancouver et al. (2010b) and assessed originally against a dataset presented by Schmidt and DeShon (2007). Such testing of the generalizability or applicability of models should help reduce theoretical clutter. Indeed, as more models appear in the literature, the postdiction exercise of matching an existing model with an existing study (actual data from the study not required) could be an experiential learning opportunity for any modeling trainee.

Besides serving as a replication to verify the fidelity or generalizability of the computational model, postdiction can also shed light on the underlying processes through which certain phenomena and effects occur. This can complement informal theories, especially when informal theories fail to recognize dynamic factors at play and cannot account for the phenomenon. In this regard, Vancouver et al. (2010b) offered an example where a poorly understood phenomenon discovered in the existing literature was not only reproduced but also explained in detail, based on the dynamic pattern generated by their model.
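One simple form of such pattern matching can be sketched as follows (again leaning on the hypothetical simulate_model() helper from the fitting sketch): generate the model-implied trajectory and ask whether each observed trajectory relates to it in the predicted direction.

```r
set.seed(789)
predicted <- simulate_model(gain = 0.15)                      # model-implied trajectory
observed  <- t(replicate(20, predicted + rnorm(50, sd = 5)))  # 20 hypothetical "people"

pattern_fit <- apply(observed, 1, cor, y = predicted)  # per-person trajectory match
summary(pattern_fit)  # consistently positive values support the predicted pattern
```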
Meanwhile, prediction is more likely to be used when the existing data may not be up to the task of challenging a model. This can happen because the model produces some surprising pattern no one thought to look for, but more likely it is because computational models produce data not typically considered. For example, computational models can produce trajectories with nonlinearities (e.g., bifurcations, discontinuities), something few verbal theories would dare propose. And though qualitative, such pattern matching can employ quantitative methods. For example, Vancouver et al. (2005) used intraclass correlation coefficients (ICCs) to fit progress trajectories across time for each individual from a study created to assess a computational model of goal striving. They also matched trial performance with two versions of the model for each individual. Finally, they matched the results for each condition that emerged from the model with the data, averaged across individuals and trials, across the ABA design. The model data emerged from a single run of the model, but they could have come from multiple simulations in which one or more parameters are drawn for each run from specified distributions. Either way, this latter type of fit could be considered postdiction, given that the predicted effect is exactly what the model was built to explain. It just happened that a new study was commissioned to acquire the moment-by-moment trajectories that the informal theory did not speak to.

Still, at other times the predicted hypothesis is derived to challenge a parameter related to a mechanism that was added to fill a gap when the theory was formalized into a model, and thus was not previously assessed, or that reveals itself as central to some mechanism heretofore described only verbally. An example of this latter form can be found in Ballard et al.'s (2018) test of two mechanisms thought responsible for increasing motivation for a task as its deadline approaches. In this kind of case, the modeling can identify or confirm a diagnostic test of the explanation depicted in the theory/model. That is, the researcher might "ask" the model to confirm that some effect would arise were the explanation valid, or the researcher might observe an effect by conducting sensitivity analyses that would be unlikely to arise from a different process and that had not yet been systematically observed. Science has a special fondness for confirming such predictions, but they are only as special as the uniqueness of the explanation for them. That is, prediction is not as critical as the absence, to the degree possible, of alternative explanations. Toward that end, the construction of explanatory models is needed. This is the highest level of craftsperson the field needs.

Masters Level: Builders of Computational Models
Most of the time, academics and practitioners know what statistical analysis they need to use to examine some dataset. Still, it is very useful to be able to consult a statistics expert or two in one's department, whether that department is in a university, an organization, or a government agency. Likewise, a mature science will and should have computational modeling experts who specialize in one or more computational architectures and who can help others build, extend, and evaluate computational models. Preferably, they are familiar with the verbal theories and existing empirical work surrounding some phenomenon, but they may also work with others who broaden that expertise.
At the current time, these individuals tend to come from other disciplines or subdisciplines where computational modeling is more prevalent (e.g., cognitive psychology or organizational theory). This provides opportunities for cross-pollination and thus a broader integration of modeling platforms. However, such individuals are typically hired to further their own discipline's computational work, need to be trained in the substantive phenomena, or are wedded to the vocabulary of their subdiscipline. Also, there just are not enough of them.

Given that I-O psychology has always been one of the more quantitatively sophisticated subdisciplines of psychology, we would hope that the field can also begin to produce these model builders. Still, it is not the intent of this book to produce a master class of computational modelers. Rather, it is to illustrate the potential of computational modeling for our field and to begin the apprenticeship of a new set of scholars, some of whom will become master computational model builders. Indeed, we believe that the information one can gain from the chapters in this book will inspire you to pick up a new tool and that the "how to" chapters will reveal that the learning curve may not be as steep as originally presumed. Moreover, building models will provide the best background for reading, reviewing, and assessing models.

Building models related to a topic of interest will also provide a perspective not always appreciated by modelers naïve to the subject matter. For example, modelers from more sophisticated subdisciplines tend to focus on quantitative model fitting. As a result, they emphasize the value of limiting the number of free parameters. In contrast, someone trying to represent a process explanation of a phenomenon is likely to consider several sources of differences among units (e.g., individual differences) or unknowns regarding processes (e.g., rates of change) that might matter in important ways. Assessing the effects of these differences and/or different values for unknown parameters in the model is likely a prudent and useful step. One will likely find, via sensitivity analysis, that some can be removed because they do not affect model behavior much. Still, others are needed to remain true to the phenomenon and mechanisms represented, even if they are more than someone wanting to fit the model to data would like. To be sure, parameter proliferation can easily get out of hand, such that one does not end up with a very useful model in terms of understanding. The question a modeler must answer, which may be unknown at the beginning but often reveals itself in the building process, is what the purpose of the model is, at least for some presentation of it. The purpose, which will need to make some contribution to the literature, will also dictate which free parameters might be set to 1 (if a weight, i.e., a multiplier) or 0 (if a bias, i.e., an additive term), which might be set to some constant (e.g., 0.5, as in a "shoulder shrug") at least temporarily, which are allowed to be free, and which are represented as multiple conditions for some key construct (i.e., high and low on x). How things will shake out in the end (or along the way) may often be hard to tell.
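A sensitivity analysis of the sort just described can be as simple as sweeping a free parameter across a plausible range and seeing what does and does not change. The sketch below (hypothetical values, and again assuming the simulate_model() helper from the earlier fitting sketch) shows a typical outcome: the model's end state barely moves with the gain parameter, while the speed of goal approach is highly sensitive to it.

```r
gains        <- seq(0.1, 0.5, by = 0.05)        # plausible range for the free parameter
trajectories <- sapply(gains, simulate_model)   # one column of states per gain value

end_states  <- trajectories[50, ]                                     # nearly identical
steps_to_90 <- apply(trajectories, 2, function(x) which(x >= 90)[1])  # very different
rbind(gain = gains, end_state = round(end_states, 1), steps_to_90 = steps_to_90)
```

In a case like this, data on end states alone could never inform the parameter; trajectory data could, which is exactly the kind of design insight sensitivity analysis offers.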
Indeed, model builders should be "lazy." That is, one can and should use existing architectures (e.g., neural networks, control system models) when building new models, if possible (Hoffmann, 2003). This will simplify the building process and increase the likelihood that the model will be a special case of a general theory (i.e., a paradigm). Similarly, a modeler can model existing verbal theories applied to the phenomena of interest. Modeling existing theories means that one likely has an empirical and conceptual foundation for the model. It also reduces theory proliferation and can possibly lead to theory trimming if, upon attempting to model a theory, the logic laid bare comes up wanting (see, e.g., Vancouver & Purl, 2017; Vancouver et al., 2020). If multiple theories exist within the domain, one might model more than one to provide a way to pit the theories against each other or to demonstrate the viability of the mechanisms, whether opposing or complementary, coexisting in a reasonable way (e.g., Ballard et al., 2018).

It is important to acknowledge that modeling is not easy. One must translate an explanation into a set of functions and get them all to work together well and in line with the conceptual space that the mathematical symbols and operations are supposed to represent. One also must defend the translation decisions to an audience often unfamiliar with process explanations of phenomena. One likely must identify errors in coding when the model produces peculiar behavior, or clearly track down the source of the behavior to understand and explain why the observed behavior of the model makes sense (i.e., that it is not a coding error but a prediction of the model [though it is usually a coding error]). To accomplish these things with reasonable efficiency, one must know the modeling platform well. It takes time and practice to develop these skills, especially across multiple platforms or in very flexible platforms like R, Python, or MATLAB. Indeed, none of the editors of this book have achieved such a level of proficiency. However, it is a brave new world, and some readers out there will achieve it. The science is depending on it.

Book Chapter Summaries
In total, this book reflects where we have been, where we currently are, and where we are going regarding computational modeling in organizational psychology. Still, where we go is up to you, the reader. We hope that this book begins a new scientific journey for you and for our field. We have broken the chapters of this book into two broad themes. In the first section, "The Call for Computational Modeling in I-O Psychology," the chapters provide overviews of how computational modeling has been and could be applied to a range of organizational issues. In the second section, "Creating and Validating Computational Models," the chapters provide the reader with the how-to knowledge to develop, validate, publish, and review computational modeling papers. Next, we summarize each chapter.
The Call for Computational Modeling in I-O Psychology
Computational models of decision making have been a staple of the applied cognitive literature since the beginning of computational modeling (Simon, 1969). Indeed, the field of decision making, and its embrace of computational models, is a good roadmap for how we can move our science forward. Cooney, Kaplan, and Braun (Chapter 2) provide a review of computational models of decision making and their application to organizational decision making. The authors present a typology of computational models of decision making broken down by whether the model is static or dynamic, whether the model is descriptive or prescriptive, and whether the model is micro, meso, or macro. This typology will be instrumental for researchers seeking to learn which decision-making models might be useful for their research questions.

Samuelson, Lee, Wessel, and Grand (Chapter 3) overview how computational modeling has impacted and can further impact diversity and inclusion research. These models also have a long history within the relatively short history of computational modeling. For instance, one of the first agent-based models was Schelling's (1971) model of residential racial segregation. This model showed that even a slight preference for living near people similar to oneself can lead to severe segregation over time. Chapter 3 reviews past computational modeling work regarding diversity and how these models offer benefits ranging from understanding basic psychological and social processes, such as those underlying stereotypes and stigma, to more applied issues, such as optimizing interventions to reduce bias. Finally, the authors outline how specific modeling approaches (e.g., agent-based and neural network models) can spur future computational research on diversity and inclusion.

Hardy (Chapter 4) reviews applications of computational modeling to the literatures on learning, training, and socialization. Like the fields of decision making and diversity, this is another topic area that has embraced computational modeling. For example, both March's (1991) learning model and Anderson's (1996) adaptive control of thought (ACT-R) model come from this literature and are two of the most influential computational models in psychology and management. After reviewing previous models and the insights they offer regarding learning, training, and socialization, Hardy outlines a future research agenda for computational models focused on adult learning, informal learning, learning interventions, and how models can be used to highlight the utility of training for organizations.

Zhou (Chapter 5) reviews computational models of leadership in teams. Leadership is a complex and dynamic process, which, when mixed with the complexity and dynamics of teams, necessitates computational modeling. Zhou reviews past computational work on topics such as the emergence of leadership structure and group member participation, among other important leadership topics.
Zhou’s review shows that leadership is well suited for computational modeling and that multiple different computational modeling architectures have been applied to understanding leadership. Finally, Zhou shows how computational modeling has implications for advancing empirical and theoretical research on leadership. Moreover, computational modeling of leadership has important prac tical applications such as testing how different leadership structures are likely to affect group performance. Kennedy and McComb (Chapter 6) conducted a systematic review of simula tion research on groups and teams from 1998 to 2018. Their review shows that there has been an increase in simulation studies on groups and teams during this period and that multiple different modeling architectures have been used to study teams and groups. For each simulation paper, they identify the focal variable, the modeling technique, the simulation approach, and the insights gained from the study. The review emphasizes the theoretical considerations and opportunities for pursuing the examination of complex and emergent phenomena, as well as the flexibility simulation offers for tackling different types of research questions. Overall, the review suggests that simulation is a burgeoning approach that has the potential to advance group and team research. Creating and Validating Computational Models
Tang and Liu (Chapter 7) kick off the section on how to create and validate computational models with their chapter on agent-based modeling (ABM). ABM is a powerful computational modeling technique that can help researchers gain insights into the dynamics of interactions among agents in a complex system, such as an organization. ABM involves revealing how the interactions among a collection of heterogeneous and adaptive agents following a (typically) small set of rules lead to complex behavioral patterns. Thus, ABM allows researchers to study emergent phenomena that arise from the interactions among agents. Tang and Liu provide an overview of ABM, including its defining characteristics, strengths, and limitations. They then review articles published in premier organizational and psychological journals that used ABM to model organizational phenomena. Finally, the chapter provides a detailed walkthrough of building a simple ABM for the scenario of newcomers joining a team of seasoned organizational members. This example illustrates how ABM can be used to model the dynamics of organizational phenomena and build organizational theories.
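To give a flavor of the approach before Chapter 7's full walkthrough, here is a deliberately tiny ABM sketch of our own (it is not Tang and Liu's newcomer model, and every name and value in it is an illustrative assumption): agents follow one local rule, partially assimilating toward a randomly encountered partner, and a group-level norm emerges without ever being programmed in directly.

```r
set.seed(1)
n_agents <- 30
attitude <- runif(n_agents, 0, 1)       # heterogeneous starting attitudes

for (step in 1:500) {
  pair <- sample(n_agents, 2)           # two agents meet at random
  gap  <- attitude[pair[2]] - attitude[pair[1]]
  attitude[pair[1]] <- attitude[pair[1]] + 0.1 * gap   # each shifts partway
  attitude[pair[2]] <- attitude[pair[2]] - 0.1 * gap   # toward the other
}
sd(attitude)   # near zero: a shared norm has emerged from purely local rules
```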
Vancouver and Li (Chapter 8) introduce readers to system dynamics modeling, another computational modeling platform useful for a large set of computational modeling opportunities. System dynamics has been around for a long time (e.g., Forrester, 1968), and researchers in this field have developed a user-friendly software platform for rendering computational models. After a short history of system dynamics thinking, the chapter focuses on how to build a model in the platform, as well as on some simple structures responsible for common dynamics, like growth, learning, calibration, and goal striving. They also describe how these structures and other common issues can be applied to address typical theoretical issues and phenomena in I-O psychology.

Switching from how to build models to how to validate them, Weinhardt (Chapter 9) outlines a framework for evaluating computational models called the Model Evaluation Framework (MEF). The MEF specifies that a well-justified and useful model meets three criteria: (1) logical consistency, (2) accurate, crucial predictions, and (3) generalizability. This provides the reader with a guide to qualitatively evaluating computational models. The MEF specifies that the work of evaluation requires two roles: the modeler and the evaluator. The modeler's work is to specify and justify their model to the full extent of their ability. The evaluator's work is to evaluate the usefulness and validity of the model, a process that will likely require iterations with the modeler to accomplish the necessary work. Weinhardt guides the reader not only through the step-by-step process of model evaluation but also through the philosophical underpinnings of this evaluation process.

Ballard, Palada, and Neal (Chapter 10) provide a tutorial on quantitatively fitting computational models to data. Fitting computational models to data can be important not only when testing the validity of a model but also when estimating the free parameters needed to make predictions for the specific system(s) being simulated. That is, the fitting process allows researchers to quantify the degree of correspondence between the model and the observed data, as well as to compare the fit of alternative models. The chapter provides a tutorial on the process of model fitting, using the multiple-goal pursuit model as an example. The tutorial covers the steps required to code a model, estimate its parameters, and assess its fit to the data. It is aimed at readers who have some basic familiarity with computational modeling but who may have limited experience with the R programming language often used for quantitative fitting and parameter estimation. By the end of the tutorial, readers will have gained practical experience in model fitting and be equipped to extend the approach to more complex research questions in their own work.

Finally, Neal, Ballard, and Palada (Chapter 11) provide practical insights and recommendations on how to publish and review computational models. As the practice of computational modeling is still in its infancy within our field, there is often a lack of clear guidance on how to write and assess papers that utilize this approach. The chapter first outlines the steps involved in creating a computational modeling paper and recommendations for how these papers can be published in top journals within the field. They then discuss the key issues that authors and reviewers need to consider when publishing and reviewing these types of papers, with special attention paid to the similarities and differences between traditional papers and computational modeling papers.
By the end of this chapter, both authors and reviewers will have a better understanding of how to approach and evaluate computational modeling research in the organizational sciences.

Conclusion
Computational modeling is exploding within the general field of psychology. In 2008, the Cambridge Handbook of Computational Psychology included 26 chapters on computational modeling. In 2023, the new edition has 38 chapters, including one devoted to I-O psychology. Indeed, like the trajectory illustrated in our brief history of computational modeling in I-O psychology and management, the references in the new handbook include a smattering of classic computational work within each subdiscipline, though going back a bit (i.e., to the 1950s or so). Still, many more of the references are to work done in the past 10 to 15 years, not because the charge was to review recent computational modeling work, but because so much more work is being done. Some of this work is very relevant to I-O psychologists but likely largely unknown to them. For instance, Read et al. (2017) created a neural network model relating to the activation of three motivational systems and used it to recreate the psychometric structure of the Big Five personality dimensions. Indeed, it likely reveals a whole new set of worlds with which few in I-O psychology are familiar.

Likewise, we think this book will surprise and, dare we say, delight readers regarding the value and opportunities computational modeling can provide the individual researcher, research teams, and the fields of I-O psychology and management. Computational modeling provides a way of thinking about phenomena in the field that is refreshing and much needed. The book describes a tool that supports the scientific enterprise to a degree not seen since the introduction of statistical methods. Using the tool provides a way for one to have confidence in the internal consistency of one's mental models and others' verbal models of the way things are thought to work. If all goes well, the field might be able to say that it is as good at what it does as meteorologists!

References

Adner, R., Pólos, L., Ryall, M., & Sorenson, O. (2009). The case for formal theory. Academy of Management Review, 34(2), 201–208.
Anderson, J. R. (1983). The architecture of cognition. Hillsdale, NJ: Lawrence Erlbaum Associates.
Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American Psychologist, 51(4), 355–365.
Ballard, T., Vancouver, J. B., & Neal, A. (2018). On the pursuit of multiple goals with different deadlines. Journal of Applied Psychology, 103(11), 1242–1264.
Ballard, T., Yeo, G., Loft, S., Vancouver, J. B., & Neal, A. (2016). An integrative formal model of motivation and decision making: The MGPM*. Journal of Applied Psychology, 101(9), 1240–1265.
Ballard, T., Yeo, G., Vancouver, J. B., & Neal, A. (2017). The dynamics of avoidance goal regulation. Motivation and Emotion, 41(6), 698–707.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A., & Locke, E. A. (2003). Negative self-efficacy and goal effects revisited. Journal of Applied Psychology, 88(1), 87–99.
Busemeyer, J., & Diederich, A. (2010). Cognitive modeling. Thousand Oaks, CA: Sage.
Carley, K. M. (2000). Organizational adaptation in volatile environments. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 241–273). Washington, DC: American Psychological Association.
Coovert, M. D., & Dorsey, D. W. (2000). Computational modeling with Petri nets: Solutions for individual and team systems. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 163–188). Washington, DC: American Psychological Association.
Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32(2), 480–499.
DeShon, R. P., & Rench, T. A. (2009). Clarifying the notion of self-regulation in organizational behavior. In G. P. Hodgkinson & J. K. Ford (Eds.), International review of industrial and organizational psychology (Vol. 24, pp. 217–248). West Sussex: Wiley-Blackwell.
Dionne, S. D., Sayama, H., Hao, C., & Bush, B. J. (2010). The role of leadership in shared mental model convergence and team performance improvement: An agent-based computational model. The Leadership Quarterly, 21(6), 1035–1049.
Farrell, S., & Lewandowsky, S. (2010). Computational models as aids to better reasoning in psychology. Current Directions in Psychological Science, 19(5), 329–335.
Forrester, J. (1961). Industrial dynamics. Cambridge, MA: The MIT Press.
Forrester, J. (1968). The principles of systems. Cambridge, MA: Wright-Allen Press.
Gibson, F. P., Fichman, M., & Plaut, D. C. (1997). Learning in dynamic decision tasks: Computational model and empirical evidence. Organizational Behavior and Human Decision Processes, 71(1), 1–35.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143.
Hanisch, K. A. (2000). The impact of organizational interventions on behaviors: An examination of models of withdrawal. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 33–68). Washington, DC: American Psychological Association.
Hanisch, K. A., Hulin, C. L., & Seitz, S. T. (1996). Mathematical/computational modeling of organizational withdrawal processes: Benefits, methods, and results. Research in Personnel and Human Resources Management, 14, 91–142.
Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32(4), 1229–1245.
Heidegger, M. (1927). Being and time. Southampton: Basil Blackwell.
Heidegger, M. (1996). The principle of reason. Bloomington, IN: Indiana University Press.
Hintzman, D. L. (1990). Human learning and memory: Connections and dissociations. Annual Review of Psychology, 41(1), 109–139.
Hoffmann, R. (2003). Marginalia: Why buy that theory? American Scientist, 91(1), 9–11.
Hudlicka, E. (2023). Computational models of emotion and cognition-emotion interaction. In R. Sun (Ed.), The Cambridge handbook of computational cognitive sciences (2nd ed., pp. 973–1036). Cambridge: Cambridge University Press.
Ilgen, D. R., & Hulin, C. L. (Eds.). (2000). Computational modeling of behavior in organizations: The third scientific discipline. Washington, DC: American Psychological Association.
Kozlowski, S. W., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2013). Advancing multilevel research design: Capturing the dynamics of emergence. Organizational Research Methods, 16(4), 581–615.
Kuhn, T. S. (1962). Historical structure of scientific discovery: To the historian discovery is seldom a unit event attributable to some particular man, time, and place. Science, 136(3518), 760–764.
Kuhn, T. S. (2012). The structure of scientific revolutions. Chicago, IL: The University of Chicago Press.
Latané, B. (2000). Pressures to uniformity and the evolution of cultural norms: Modeling dynamic social impact. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 189–220). Washington, DC: American Psychological Association.
Lee, H. L., Padmanabhan, V., & Whang, S. (1997). Information distortion in a supply chain: The bullwhip effect. Management Science, 43(4), 546–558.
Locke, E. (1997). The motivation to work: What we know. In M. Maehr & P. Pintrich (Eds.), Advances in motivation and achievement (Vol. 10, pp. 375–412). Greenwich, CT: JAI Press.
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57, 705–717.
Lomi, A., & Larsen, E. R. (1996). Interacting locally and evolving globally: A computational approach to the dynamics of organizational populations. Academy of Management Journal, 39, 1287–1321.
Lomi, A., & Larsen, E. (2001). Dynamics of organizations: Computational modeling and organization theories. Menlo Park, CA: AAAI Press/The MIT Press.
March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.
Martell, R. F., Lane, D. M., & Emrich, C. (1996). Male-female differences: A computer simulation. American Psychologist, 51(2), 157–158.
McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological Review, 88(5), 375–407.
McGrath, J. E. (1981). Dilemmatics: The study of research choices and dilemmas. American Behavioral Scientist, 25(2), 179–210.
McPherson, J. M. (2000). Modeling change in fields of organizations: Some simulation results. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 221–240). Washington, DC: American Psychological Association.
32 Jeffrey B. Vancouver, Justin M. Weinhardt, and Mo Wang
organizations: The third scientific discipline (pp. 221–240). Washington, DC: Ameri can Psychological Association. Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66(1), 195–244. Popper, K. R. (1959). The propensity interpretation of probability. The British Journal for the Philosophy of Science, 10(37), 25–42. Popper, K. R. (1963). Science as falsification. Conjectures and Refutations, 1, 33–39. Popper, K. R. (2002). An unended quest: An intellectual autobiography. New York, NY: Routledge. Popper, K. R. (2014). Conjectures and refutations: The growth of scientific knowledge. New York, NY: Routledge. Powers, W. T. (1973). Behavior: The control of perception. Chicago, IL: Aldine. Read, S. J., Droutman, V., & Miller, L. C. (2017). Virtual personalities: A neural network model of the structure and dynamics of personality. In R. R. Vallacher, S. J. Read, & A. Nowak (Eds.), Computational social psychology (pp. 15–37). London: Routledge. Rodgers, J. L. (2010). The epistemology of mathematical and statistical modeling: A quiet methodological revolution. American Psychologist, 65(1), 1–12. Rumelhart, D. E., McClelland, J. L., & The PDP Research Group. (1986). Parallel dis tributed processing: Explorations in the microstructure of cognition. Cambridge, MA: The MIT Press. Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociol ogy, 1(2), 143–186. Scherbaum, C. A., & Vancouver, J. B. (2010). If we produce discrepancies, then how? Testing a computational process model of positive goal revision. Journal of Applied Social Psychology, 40(9), 2201–2231. Schmidt, A. M., & DeShon, R. P. (2007). What to do? The effects of discrepancies, incen tives, and time on dynamic goal prioritization. Journal of Applied Psychology, 92(4), 928–941. Schwab, D. P., & Olson, C. A. (2000). Simulating effects of pay for performance systems on pay-performance relationships. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 114–134). Washington, DC: American Psychological Association. Senge, P. M. (1990). The art and practice of the learning organization. New York, NY: Doubleday. Shoss, M. K., & Vancouver, J. B. (2023). A dynamic, computational model of job inse curity and job performance. In A. Zhou (Chair), Novel directions in job insecurity research on work and nonwork domains. Symposium conducted at the Society for Industrial and Organizational Psychology, Boston, MA. Simon, J. R. (1969). Reactions toward the source of stimulation. Journal of Experimental Psychology, 81(1), 174–176. Stasser, G. (2000). Information distribution, participation, and group decision: Explora tions with the DISCUSS and SPEAK models. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 135–161). Washington, DC: American Psychological Association. Sterman, J. D. (1989). Misperceptions of feedback in dynamic decision making. Organi zational Behavior and Human Decision Processes, 43(3), 301–335. Sterman, J. D. (2002). All models are wrong: Reflections on becoming a systems scientist. Sys tem Dynamics Review: The Journal of the System Dynamics Society, 18(4), 501–531.
Sun, R. (2008). The Cambridge handbook of computational psychology. New York, NY: Cambridge University Press.
Taber, C. S., & Timpone, R. J. (1996). Computational modeling. Thousand Oaks, CA: Sage.
Vancouver, J. B., Li, X., Weinhardt, J. M., Steel, P., & Purl, J. D. (2016). Using a computational model to understand possible sources of skews in distributions of job performance. Personnel Psychology, 69(4), 931–974.
Vancouver, J. B., & Purl, J. D. (2017). A computational model of self-efficacy’s various effects on performance: Moving the debate forward. Journal of Applied Psychology, 102(4), 599–616.
Vancouver, J. B., Putka, D. J., & Scherbaum, C. A. (2005). Testing a computational model of the goal-level effect: An example of a neglected methodology. Organizational Research Methods, 8(1), 100–127.
Vancouver, J. B., & Scherbaum, C. A. (2008). Do we self-regulate actions or perceptions? A test of two computational models. Computational and Mathematical Organization Theory, 14(1), 1–22.
Vancouver, J. B., Tamanini, K. B., & Yoder, R. J. (2010a). Using dynamic computational models to reconnect theory and research: Socialization by the proactive newcomer as example. Journal of Management, 36(3), 764–793.
Vancouver, J. B., Wang, M., & Li, X. (2020). Translating informal theories into formal theories: The case of the dynamic computational model of the integrated model of work motivation. Organizational Research Methods, 23(2), 238–274.
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15(4), 602–623.
Vancouver, J. B., Weinhardt, J. M., & Schmidt, A. M. (2010b). A formal, computational theory of multiple-goal pursuit: Integrating goal-choice and goal-striving processes. Journal of Applied Psychology, 95(6), 985–1008.
Vancouver, J. B., Weinhardt, J. M., & Vigo, R. (2014). Change one can believe in: Adding learning to computational models of self-regulation. Organizational Behavior and Human Decision Processes, 124(1), 56–74.
Van Rooy, D., Van Overwalle, F., Vanhoomissen, T., Labiouse, C., & French, R. (2003). A recurrent connectionist model of group biases. Psychological Review, 110(3), 536–563.
Wang, M., Beal, D. J., Chan, D., Newman, D. A., Vancouver, J. B., & Vandenberg, R. J. (2017). Longitudinal research: A panel discussion on conceptual issues, research design, and statistical techniques. Work, Aging and Retirement, 3(1), 1–24.
Wang, M., Zhou, L., & Zhang, Z. (2016). Dynamic modeling. Annual Review of Organizational Psychology and Organizational Behavior, 3, 241–266.
Weinhardt, J. M., & Vancouver, J. B. (2012). Computational models and organizational psychology: Opportunities abound. Organizational Psychology Review, 2(4), 267–292.
Zhou, L., Wang, M., & Vancouver, J. B. (2019). A formal model of leadership goal striving: Development of core process mechanisms and extensions to action team context. Journal of Applied Psychology, 104(3), 388–410.
Zickar, M. J. (2000). Modeling faking on personality tests. In D. R. Ilgen & C. L. Hulin (Eds.), Computational modeling of behavior in organizations: The third scientific discipline (pp. 95–113). Washington, DC: American Psychological Association.
2
TOWARD INTEGRATING COMPUTATIONAL MODELS OF DECISION-MAKING INTO ORGANIZATIONAL RESEARCH
Shannon N. Cooney, Michelle S. Kaplan, and Michael T. Braun
Decision-making is both a construct and a perspective, such that one could either study the mechanisms of the decision process, broadly conceived, or one could apply what is known about this process to a specific type of decision within a specified context. Decision-making as a perspective is particularly relevant to the organizational sciences, in that it provides an array of paradigms, complete with alternative theorizations and modeling techniques, through which a wide span of organizational topics might be better understood. For example, while this perspective is most frequently associated with the personnel selection arena within the organizational sciences, it is also relevant to many other subjects, including career and goal choice, turnover, resource allocation, and work-family conflict.

The decision-making perspective provides a roadmap of study that eventually culminates in the provision of actionable insights for improving real-world decisions, a trajectory that is depicted in the development of this area of study over time and across fields. Early decision-making theorists, particularly those hailing from such fields as economics, political science, and sociology, held choice to be a mechanistic process executed by perfectly rational beings. In this perspective, expected utility (EU) theory (Von Neumann & Morgenstern, 1947) and subjective expected utility (SEU) theory (Savage, 1954) are seen as descriptive accounts of decision-making behavior (Goldstein & Hogarth, 1997). Such views are contrasted by those of psychologists, who argue that EU and SEU are normative, or idealized, conceptualizations of decision-making, and who have emphasized and empirically demonstrated the instances in which these models fail to capture the reality of human behavior, as well as the complexity of and variation between decision problems (Hogarth & Reder, 1987).
The latter faction’s emphasis on the human element has allowed for the emergence of a variety of descriptive models that explicitly depict aspects of decision makers’ cognitive processes, with the goal of resolving the discrepancy between normative prediction and observed reality. These models do away with the strict assumptions of their normative progenitors and acknowledge that homo economicus is merely a myth: decision makers simply cannot be perfectly rational, given that their computational power is finite, knowledge is limited, and preferences are in flux. These features are essential in describing the reality of decision-making, as they present persistent sources of systematic error that diminish the validity of the normative model. Furthermore, a model that is well suited to a given type of decision problem and reliably arrives at the correct normative conclusion may become substantially less useful when applied to an unfamiliar problem structure (Luce & Von Winterfeldt, 1994). As such, descriptive models identify how and which elements of the decision-making process are responsible for deviation from the normative ideal, which may include cognitive processes, social factors, or problem characteristics (Goldstein & Hogarth, 1997).

Bridging the divide between the normative and the descriptive is the prescriptive model, which presents decision rules, aids, and tools for the benefit of our admittedly flawed decision maker. While descriptive models are often oriented toward understanding the errors that commonly arise in the decision-making process, prescriptive models propose solutions for these errors, taking as a given the conditions that gave rise to them. The goal of such prescriptions is to reduce the difference in outcomes between what is optimal, as explained by normative models, and what is actual, as depicted by descriptive models. With an eye toward practical application, prescriptive models are often naturalistic, meaning they are molded around a specific type of decision that is made under a given set of real-world conditions (Brown & Vari, 1992). By way of contrast, descriptive models are often divorced from the contexts in which decisions occur and present decision-making as a generic process. The prescriptive model thereby represents the most obviously useful variant for I-O practice, and of the types listed here it is the only one evaluated on the basis of its pragmatic utility, as opposed to its theoretical adequacy or empirical validity (Bell, Raiffa, & Tversky, 1988).

To summarize, the three models of decision-making presented here look at the decision-making process in different ways. The normative model depicts an idealized form of decision-making that is predicated on pure rationality and ideal conditions. Here, decision behavior is solely determined by an underlying principle such as utility maximization, which, if followed perfectly by perfect people in a perfect world, would produce the optimal outcome (Baron, 2012). However, such perfection is unrealistic, and so enters the descriptive model, which specifies what imperfect people actually do, as opposed to what a perfectly rational, yet entirely hypothetical, person would do if they existed (Brown & Vari, 1992). Finally, bridging the gap between the two lies
the prescriptive model, which provides guidelines or tools that aim to reduce the discrepancy between what is normative and what is actual (i.e., prescribe what people should do). But this can only be accomplished following the precise definition of what constitutes the decision maker’s “desiderata or axioms of the form,” that is, their desired end and the means for attaining it. Furthermore, there must exist sufficient understanding of the sources of human error and the effects of problem structure before corrections can be proposed (Bell, Raiffa, & Tversky, 1988).

The three models build upon one another with increasing complexity, culminating in useful tools for improving decision-making within the organizational context. This sequence of development is an embodiment of the scientist-practitioner philosophy, in which theoretical understanding can be transformed into actionable insights for real-world problems. Unfortunately, these tools remain largely underutilized, frequently relegated to the periphery of organizational science, which is unfortunate given that much of workplace behavior can be conceptualized as instances of decision-making (Highhouse, Dalal, & Salas, 2013). Even small, moment-to-moment choices that are not typically seen as decisions are in fact decisions: organizational actors are ceaselessly making choices that manifest as behaviors, such as whom one chooses to interact with or whether one chooses to engage in an organizational citizenship behavior. Instead of viewing behavior as the basic unit of analysis, it is useful to conceptualize decision-making as the cognitive process that precedes, and to a large degree dictates, behavior (Ajzen, 1985). That is, understanding the process by which a decision is made is a means of understanding a significant portion of why people behave the way they do—and to understand the “why” of behavior is the first step in changing it for the better.

The reader whose interest is piqued by the intersection of decision-making and organizational research may now be wondering what use they might have for computational modeling when pursuing this stream of inquiry. Computational models of decision-making aim to reproduce human behavior by depicting in algorithmic form the cognitive and social processes that underlie it. The first reason that a decision-making researcher in the organizational sciences might take an interest in the application of computational modeling to their work is the multifunctionality of these models. Computational models are tools that are applicable to a wide span of research questions, of which the most relevant to the current discussion are prediction, explanation, and prescription (Harrison, Lin, Carroll, & Carley, 2007). Additionally, since the use of computational models is not subject to the availability of participants or to constraints on data collection imposed by organizational leadership, in some cases they can make viable research questions that were otherwise precluded by external limitations (Calder et al., 2018).

This form of modeling is uniquely well suited for use in tandem with decision-making, an area wherein convention dictates the provision of mathematical
scaffolding to structure theoretical explanations. Given that mathematical specification of theory likewise forms the basis of many computational models, there exists an alignment between subject matter and analytical tools. Their shared formality lends itself to a convenient arrangement, wherein computational models can be devised to specify decision-making theories whose mathematical parameters are used to populate said models, ultimately resulting in the creation of internally consistent, precisely specified theory. The precision with which these elements are instantiated in computational models ensures that no part of the theory is left up to interpretation, providing a clarity that lends itself to greater falsifiability (Edwards & Berry, 2010; Gray & Cooper, 2010).

This is particularly important given that decision-making models can be quite expansive and complex, capturing not only the inputs and outputs of the decision-making process but also the processes by which the former are transformed into the latter. Dynamics, in the form of feedback loops and changes in the decision environment or to the decision itself, add an additional layer of complexity (Katzell, 1994). Moreover, prescriptive models that encompass both the normative demands and the descriptive limitations of a decision problem are even more complicated. Fortunately, one of the primary advantages of computational modeling is its ability to handle high-complexity systems whose components interact in such a way as to make the prediction of their outcomes very difficult using standard methods (Calder et al., 2018).

By closely approximating the substantive features of the decision problem and targeting an optimal outcome, computational models can be tuned to generate prescriptions for better decision-making within the context of specific problems. Such prescriptions are not possible given knowledge of only the relations between inputs and outputs in the decision problem; attention must be paid to the process by which the former are transformed into the latter. Correction of a malfunctioning process, or enhancement of a merely adequate one, requires a thorough understanding of how exactly the elements that comprise it are interrelated, as well as how they are affected by the decisional context. Computational modeling is beneficial toward this end in that it necessitates and utilizes a nuanced understanding of process (Kozlowski, Chao, Grand, Braun, & Kuljanin, 2013). Additionally, prescriptive investigations often compare the outputs of multiple models in an effort to determine which are most optimal, and computational modeling allows for this type of direct comparison.

Finally, computational models should be used to study decision-making because they are rapidly becoming more numerous, sophisticated, and usable with each passing year. Decision-making is a vast interdisciplinary field that has benefited from the input of a diverse array of scholars, some of whom possess backgrounds that render them particularly well-equipped to formulate computational representations of behavior. Those who have done so provide a useful template for novice computational researchers to draw from when facing the
admittedly daunting prospect of designing a computational model from scratch. Moreover, the software tools available for the construction and testing of computational models are proliferating quickly. For an introduction to these tools, interested readers may peruse the review by Abar, Theodoropoulos, Lemarinier, and O’Hare (2017).

Throughout this chapter, we discuss how other disciplines have approached the use of computational modeling when studying decision-making by providing a sampling of models found outside the area of organizational science that might have interesting applications inside it. We limited our inclusion to models that we judged to possess clear relevance to topics in the organizational sciences and that were highly cited. To organize them, we outline a typology and relate the models in each typological bucket to one another, identifying characteristic themes and similarities in application. Then, we describe in detail a single exemplar for each. Finally, we discuss how each of these types may be useful in informing topics central to organizational science, providing example research questions that could be addressed using each of these modeling strategies.

A Typology of Computational Models in Decision-Making
A primary objective of this chapter is to provide a simple but useful typology that delineates the different types of computational models of decision-making that exist to date across the social sciences and that have the potential to benefit the organizational sciences. This typology consists of three dimensions (orientation, temporality, and level) that are crossed to organize the wide expanse of available models into a more interpretable framework. Ideally, this will assist the reader in narrowing down the available models to those which best align with their particular research question and might thus be a useful starting point for the integration of decision models into novel areas of organizational research, or for the development of new decision-making computational models. However, as will become evident in the following chapter, the typological boundaries can become blurred when theory cuts across categories to describe more complex decision behavior. Given this, it should be acknowledged that this typology is merely a guide intended to structure our discussion, rather than a rigid theoretical framework that neatly delimits homogenous groups of models.

Caveats aside, the first dimension is that of model orientation, which has been described previously as the division of decision-making models into three camps: normative, descriptive, and prescriptive. For the remainder of the chapter, we focus solely on descriptive and prescriptive models, as they are generally considered the most relevant and informative for understanding and improving real-world decisions (Kahneman & Tversky, 1979; Gigerenzer & Goldstein, 1996). The second primary dimension is that of temporality, which characterizes
models that focus on static, isolated decisions versus models that focus on sequences of decisions, which describe a recursive, interdependent decision-making process that occurs over a span of time. Lastly, we also consider model level, which identifies the vertical position in the organizational structure at which the decision of interest is made (i.e., individual, group, organization). A graphic of the typology, populated with samplings of models of each type, is presented in Table 2.1. The examples assembled here are by no means a comprehensive list but rather are intended to be a representative sampling of what is currently available.

Model Temporality
Time may be a factor of import in the study of decision-making, depending on whether the decisions in question are made in isolation from one another or are temporally connected and thus interdependent. It is important to note that there are many ways in which the temporal nature of a decision-making model can be classified. Dynamics are commonly considered cases in which the principal process under investigation (i.e., the criterion—the decision) is affected by prior realizations of itself (Xu, DeShon, & Dishop, 2020; Dishop, Braun, Kuljanin, & DeShon, 2020). That is, consistent with recent work examining dynamics within organizational science, we classify the temporality of these models based on whether the overall outcome of the model (i.e., the decision) is affected by prior decisions or not. Static decisions are the atemporal variant, in which a single decision is modeled and the impact of prior or subsequent decisions is not included in the model.1 The static decisional process is strictly unidirectional, such that the output of a prior decision has no effect on the input of a future one, necessarily implying independence among decisions. Static models are most often used for modeling the process and outcome of single decisions with no regard for how each is positioned in a larger system of related decisions, nor for how it is situated in time.

Dynamic decision-making is defined by its intertemporal nature, meaning that the broader decision-making process is studied as unfolding over time, rather than the decision being treated as an isolated incident. This allows changes that occur to the surrounding environment as a result of prior decisions, or to the decision problem itself, to be explicitly modeled. Dynamic models consist of either a sequence of structurally identical decisions that are made repeatedly or a sequence of different decisions that are necessarily interconnected. With regard to the former, the model takes the form of a succession of feedback loops, in which the output of previously made decisions becomes an input to later decisions. “Output,” meaning the positive or negative outcomes of making a particular choice, serves as feedback that allows the decision maker to course correct and learn alternative strategies when errors in their existing process become apparent.
Simply put, past decisions and the subsequent outcomes faced influence how the decision maker will act later on, implying decisional non-independence. On the other hand, when a decision is embedded in a sequence of interrelated but nonidentical decisions that comprise a larger decision problem, the choice of an option early in the process can constrain or expand the option set for future decisions. In such cases, the “dynamic” characterization of the decision model arises from the logical interdependencies that exist among the decisions in the sequence. For example, this structure is commonly seen in decision problems regarding resource management, where a decision regarding how to expend time or money determines what quantities of these resources are available for use at a later time, thus constraining the set of feasible choices for future decisions.

It could be argued that the majority of decisions made in the workplace, and in life in general, are of the dynamic sort: many decisions will be made repeatedly, and nearly all will have consequences for how other decisions are made in the future. Given their inclusion of temporal, situational, and cognitive changes, dynamic models are far more complex representations of decision-making than the static sort. A minimal sketch of the feedback-loop variant follows.
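As a purely illustrative sketch, not drawn from any particular model in Table 2.1, consider a decision maker who repeatedly chooses among options with unknown payoffs and revises internal value estimates in light of outcome feedback. All option names, payoffs, and parameter values below are invented for demonstration:

```python
import random

def simulate_repeated_choice(true_payoffs, n_trials=200, alpha=0.2, epsilon=0.1):
    """Repeatedly choose among options, revising value estimates from feedback."""
    estimates = {option: 0.0 for option in true_payoffs}
    choices = []
    for _ in range(n_trials):
        if random.random() < epsilon:
            choice = random.choice(list(true_payoffs))  # occasional exploration
        else:
            choice = max(estimates, key=estimates.get)  # exploit current best estimate
        outcome = random.gauss(true_payoffs[choice], 1.0)  # noisy outcome feedback
        # Delta-rule update: the estimate moves a step toward the observed outcome
        estimates[choice] += alpha * (outcome - estimates[choice])
        choices.append(choice)
    return estimates, choices

estimates, choices = simulate_repeated_choice({"A": 1.0, "B": 2.0})
print(estimates)                  # estimates drift toward the true payoffs
print(choices[-20:].count("B"))   # later choices concentrate on the better option
```

Because each outcome feeds back into the estimates that drive the next trial, later choices depend on earlier ones, which is exactly the decisional non-independence that defines the dynamic category.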
Model Level

Decision-making occurs at all levels of the organization, and the level of analysis chosen for a given study will determine the types of processes being modeled. The typology presented in this chapter segments the collected models into three, mostly distinct, levels commonly considered in the organizational sciences: individual (micro), team/group (meso), and organization (macro) (Kozlowski & Klein, 2000). Models at the individual level typically represent the intra-individual cognitive processes that give rise to a decision, while at the team and organizational levels, interpersonal social processes are examined. Some researchers use the term “meso” to denote a combination of the lower and higher levels into a multilevel model of decision-making, while others use it to describe team or group decision-making. For the purposes of this chapter, we will only be considering the latter and will not consider multilevel topics. However, there are some multilevel models referenced here in which outcomes entirely unrelated to decision-making are shown to emerge at the group level from the collective decisions of individuals (see Table 2.1 for a typology of computational models for decision-making across different levels). For example, different crowd formations have been shown to coalesce on the basis of individuals’ decisions to evacuate or stay in a contained emergency situation (Pan, Han, Dauber, & Law, 2006); in this case, the decisions made at the individual level are of interest, and no combinatory decision process occurs across levels to form a group decision. Rather, the group “decision” manifests as crowd configuration, a purely emergent outcome.
TABLE 2.1 A Typology of Computational Models in Decision-making

Descriptive

Static
Micro:
• Preference Reversal (Johnson & Busemeyer, 2005a)
• Prospect Theory (Kahneman & Tversky, 1979)
• Egress Modeling (Pan, Han, Dauber, & Law, 2006)
• Decision Field Theory (Busemeyer & Townsend, 1993)
• Multialternative Decision Field Theory (Roe, Busemeyer, & Townsend, 2001)
• The Leaky, Competing Accumulator Model (Usher & McClelland, 2001)
• Multialternative Choice (Usher & McClelland, 2004)
• Probabilistic Multidimensional Information (Wallsten & Barton, 1982)
• Statement Verification (Wallsten & González-Vallejo, 1994)
Meso:
• Coalitions and Social Reasoning (David, Sichman, & Coelho, 2003)
• The Impact of Social Influence on Vaccination Decision Making (Xia & Liu, 2013)
• Social Preference Construction (Dietz & Stern, 1995)
Macro:
• A Behavioral Theory of the Firm (Cyert, Feigenbaum, & March, 1959)
• Collective Decision Making (McHugh, Yammarino, Dionne, Serban, Sayama, & Chatterjee, 2016)

Dynamic
Micro:
• Adaptive Character of Thought-Rational (ACT-R) Theory (Anderson, 1990)
• Probability Calibration and Random Support Theory (Brenner, Griffin, & Koehler, 2005)
• Learning in Dynamic Decision Making Tasks (Gibson, Fichman, & Plaut, 1997)
• Instance-Based Learning (Gonzalez, Lerch, & Lebiere, 2003)
• Cognitive and Emotional Dynamics of Decision Making (Grossberg & Gutowski, 1987)
• Multiple-Goal Pursuit (Vancouver, Weinhardt, & Schmidt, 2010)
• Consumer Purchases and the Decoy Effect (Zhang & Zhang, 2007)
Macro:
• The Garbage Can Model of Organizational Choice (Cohen, March, & Olsen, 1972)
• Beyond Garbage Cans (Masuch & LaPotin, 1989)
Multilevel:
• Decision Making in Project Organizations (Jin & Levitt, 1996)

Prescriptive

Static
Micro:
• Two-Alternative Forced-Choice Tasks (Bogacz, Brown, Moehlis, Holmes, & Cohen, 2006)
• Fast and Frugal Heuristics/Bounded Rationality (Gigerenzer & Goldstein, 1996)
• A Recognition Primed Decision Model for Multi-Agent Rescue (Nowroozi, Shiri, Aslanian, & Lucas, 2012)
Macro:
• An Optimization Model for Energy Investment Decisions (Malatji, Zhang, & Xia, 2013)
Multilevel:
• Optimal Team and Individual Decision Rules (Pete, Pattipati, & Kleinman, 1993)

Dynamic
Micro:
• Heuristics and Feedback (Kleinmuntz, 1985)
• Moral Decision Making in Human and Artificial Agents (Wallach, Franklin, & Allen, 2010)
Meso:
• Team-Soar (Kang, 2001)
• Goal-Setting Theory (Locke & Latham, 2002)
• Cooperation in Social Exchange (Macy, 1991)
Macro:
• Markets with Zero-Intelligence Traders (Gode & Sunder, 1993)
• Organizational Simulation and Information Systems Design (Kumar, Ow, & Prietula, 1993)
A Survey of the Model Types and Their Potential Applications
To simplify presentation of the information summarized in Table 2.1, we limit our general discussion to four primary categories of decisions created by crossing the two primary categories of model orientation (descriptive and prescriptive) with the two categories of model temporality (static and dynamic). Ultimately, we chose to collapse across model levels within each category because doing so simplified presentation while minimizing loss of information with regard to how the decision process is studied. Therefore, each following subsection includes a discussion of one of the four main categories in the proposed typology, as well as a description of an exemplar model that is both influential within the category and particularly representative of it. Lastly, we introduce ideas for how computational models in each of these categories may be applied to different topics within the organizational sciences.
Static-Descriptive
The first class of models is both static and descriptive, meaning that they outline the process by which single decisions at a given time point are made by individuals, groups, or organizations. Static-descriptive models serve as the first break from the conceptualization of decisions as strictly normative, with seminal works such as Kahneman and Tversky’s (1979) prospect theory introducing human irrationality as an essential consideration in the study of decision-making. This model type takes the human element into consideration by providing explanations of seemingly universal but non-rational properties of human decision-making, such as the instability of preferences through the passing of time and exposure to information (Johnson & Busemeyer, 2005a), the discrepancy between covert confidence and one’s overt response (Wallsten & González-Vallejo, 1994), and the overreliance on salient information when under time pressure (Wallsten & Barton, 1982).

As computational models, instantiations of the static-descriptive type usually aim to predict what choices people will make (a) under specified circumstances, (b) given a certain set of inputs and (c) expected outcomes for each option, and (d) with a defined process of how these elements are combined in making a choice. A static-descriptive model may emphasize one type of parameter over others, or it might propose a comprehensive theory that incorporates circumstantial, input, outcome, and process factors in turn. For example, David, Sichman, and Coelho’s (2003) model focuses on the input of agents’ cognitive social structures in determining their choice of goals and partners within a system of agent interdependencies but largely ignores the circumstances under which these choices are made and the choice process itself. In contrast, Usher and McClelland’s (2001) model describes the gradual loss of information used to make a choice as a central feature of the decision-making process, specifying time as an important contextual factor and the number of possible alternatives as an input factor. To elaborate fully on all the aforementioned components of decision-making is a substantial objective for any single model, and so most extract a smaller portion of this process to examine.

Some static-descriptive papers contained in Table 2.1 are decades old, alluding to their seniority in the developmental timeline of the study of decision-making. Yet, regardless of age, these models have served as the enduring foundation on which dynamic and prescriptive variants have been built.

Exemplar Model: Prospect Theory
Presented as a critique of expected utility theory, prospect theory (Kahneman & Tversky, 1979) is a foundational model in this perspective, describing how people choose among prospects (i.e., options) under conditions of risk. By altering some of the theoretical assumptions that underlie expected utility theory, this model explains a number of anomalous findings that had accumulated in the field and could not be reconciled within the framework of expected utility theory. Prospect theory puts forth two central tenets that deviate from the typical understanding of the choice process, both of which have been influential in guiding later theory building.

The first important tenet is that decision makers base their decisions on changes in utility from a set reference point, rather than by examining the absolute utilities attached to each prospect. The reference point reflects the decision maker’s current state prior to experiencing gains or losses from the choice. It can also be affected by their preexisting expectations and by how the decision problem is framed. The assumption that change in state is the primary unit of analysis for choice behavior leads to the conclusion that decision makers will be unusually keen to avoid the potential for loss (even loss that is associated with the chance of remunerative gain). This can be seen by portraying the concept of loss aversion graphically as an S-shaped value function that is bisected by the reference point at the graph’s origin. The concave upper half of the value curve represents gains and is shallower than the convex lower half of the curve, which represents losses—the implication being that a loss of a given magnitude is psychologically more consequential than a gain of the same magnitude.

The second tenet of import is the proposed structure of the decision-making process, in which the decision is broken down into two distinct phases: the editing phase and the subsequent evaluation phase. In the former, the decision maker applies heuristic operations to the prospects, either individually or in sets, with the goal of reducing the prospects to more cognitively digestible forms and eliminating those prospects that are dominated by others. These editing operations are thought to be the source of many of the irregularities that are evidenced in the decision-making process, such as intransitivities of choice. In the subsequent evaluation phase, the decision maker chooses between the remaining edited prospects, each of which is associated with a subjective value v(x) and a decision weight π(p). The subjective value function is a mathematical representation of the idea that people attach value to changes in position rather than end states, while the decision weight is proposed as a substitution for the probabilities used directly in expected utility theory.
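These tenets admit a compact formal statement: the overall value of an edited prospect is the decision-weighted sum of the subjective values of its outcomes. The piecewise power form of v shown below is not from the 1979 paper itself; it is the later parameterization estimated by Tversky and Kahneman (1992), included here only as one concrete instantiation:

```latex
V = \sum_i \pi(p_i)\, v(x_i),
\qquad
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0, \\
-\lambda(-x)^{\beta} & \text{if } x < 0,
\end{cases}
```

with estimates of roughly α ≈ β ≈ 0.88 and λ ≈ 2.25. A λ greater than 1 is what makes the loss limb of the S-shaped curve steeper than the gain limb, while exponents below 1 produce diminishing sensitivity in both directions.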
Applications in Organizational Science

By definition, static-descriptive models examine singular decisions, under the assumption that decisions of the same type made previously are either irrelevant to the current situation or nonexistent. Thus, these models are best suited for understanding and predicting choices that are very rare, or at least isolated from the outcomes of similar decisions. Both examination of the decision process as
it manifests in general, irrespective of context (e.g., prospect theory), and individualized applications of such generalized theory to specific subject areas are possible with this type of model. Organizational researchers will typically draw from the former to achieve the latter.

One example of applying this model type to the organizational landscape concerns decisions relating to turnover. Turnover decisions could be studied using a static-descriptive model, given that few people routinely quit their jobs and that the decision to quit one’s current job is not appreciably affected by past decisions to quit prior jobs. Turnover decisions are of particular interest from the organization’s perspective due to their significant financial and logistical implications, warranting additional understanding of how to decrease the likelihood that workers decide to leave. From this view, the ultimate consideration may be less about providing suggestions to individual workers on how to make the best personal decision, and more about predicting which inputs to their decision process are most influential, so that intervening action might be taken. Thus, descriptive power is more relevant in this particular context than prescription.

In making turnover decisions, a number of factors are considered, including current job satisfaction, the quality of relationships with co-workers, and the availability of outside opportunities. The means by which these factors are processed by the employee can be precisely and succinctly expressed in the form of a static-descriptive computational model. For instance, these decisions might be understood through the framework of prospect theory, in which the option to leave and the option to stay are represented as contrasting prospects. Based on the assumption that losses are more salient than gains, it could reasonably be hypothesized that decrements in current satisfaction would manifest as a seemingly disproportionately large decision weight and negative subjective value attributed to the “Stay” prospect, while expectations for gains in satisfaction would manifest in a disproportionately small decision weight and positive subjective value attached to the “Leave” prospect. Given a current employee who is unhappy, the prospect of leaving the organization looks more appealing: this upwardly influences the value of the leave prospect, whereas the value of the stay prospect would be smaller. Individuals see a larger potential for happiness and satisfaction from leaving rather than staying, regardless of the objective gains and losses of each. A key implication for organizations, if this were found to be the case, would be that decrements in satisfaction within one’s current position are potentially more dangerous than tempting offers from outside firms. A sketch of how such a valuation might be computed follows.
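As a minimal sketch, the functions below use the Tversky and Kahneman (1992) forms introduced earlier; the Stay and Leave outcomes and probabilities are entirely invented for illustration:

```python
def v(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function in the Tversky & Kahneman (1992)
    power form: concave for gains, convex and steeper (lam > 1) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def pi(p, gamma=0.61):
    """Probability weighting function (same source); applying one gamma to
    both gains and losses is a simplification made here for illustration."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def value(prospect):
    """Overall value of a prospect: decision-weighted sum of outcome values."""
    return sum(pi(p) * v(x) for x, p in prospect)

# Invented (outcome, probability) pairs, framed as changes in satisfaction
# from the unhappy employee's current reference point.
stay  = [(-2.0, 0.7), (0.5, 0.3)]   # likely further decline, small chance of improvement
leave = [(3.0, 0.5), (-1.0, 0.5)]   # riskier, but offers a larger potential gain

print(f"V(stay)  = {value(stay):.2f}")    # approx. -2.04
print(f"V(leave) = {value(leave):.2f}")   # approx.  0.16: leaving looks more attractive
```

Under these invented numbers, the heavily weighted loss attached to staying dominates the comparison, so the leave prospect wins even though it carries an even chance of loss; varying the outcome values shows how quickly current dissatisfaction tips the balance.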
Static-descriptive models of decision-making can also be incorporated into other, more complex dynamic models of organizational behavior. For example, prospect theory is utilized as part of temporal motivation theory (Steel & König, 2006), a dynamic model that includes prospect theory as one of its inputs. This extension of static-descriptive components into the dynamic space is relatively easily done as different research needs arise. When applying this notion to computational modeling, existing models can be taken and applied directly to new situations, integrating various aspects from their given inputs to create new and different models of organizational phenomena.

Dynamic-Descriptive
While up to this point the focus has been on static models, the majority of decisions made in vivo are neither isolated nor unaltered by the effects of temporal change; rather, they are influenced by prior, interrelated decisions. Dynamic-descriptive models attempt to explain what people actually do, modeling the complexities that arise when the results of prior decisions become critical inputs to subsequent decisions, changing the levels of variables within the model. This is particularly true in the context of the workplace, where job functions remain relatively constant over time and discrete decision tasks are thus conducted on a routine basis by the majority of employees. In some cases, existing static models can be reformulated to incorporate dynamic elements.

For instance, we previously mentioned that decision field theory, as it was originally proposed, presents a unique case of a static-descriptive model: while it was described as a dynamic model that considers temporal elements, it is in fact static, since it does not model similar decisions made in sequence, nor does it invoke updating or feedback mechanisms. That said, a variant of decision field theory proposed by the same authors, called rule-based decision field theory, extends their original ideas to include routine choices that are continuously made, as opposed to singular choices made in isolation (Johnson & Busemeyer, 2005b). By including a secondary time parameter that accounts for the learning process by which decision makers update their decision rules and preferences based on the outcomes of prior decisions, it is possible to examine routine choice as a chain of decisions that begins with the deliberative construction of an initial decisional process, is sustained with updates to that process as new information becomes available, and ends with effortless, rule-based processing.

Similar to static-descriptive models, models of this type might focus on the non-rational elements of human decision-making to demonstrate how such patterns of decision-making might subvert the learning process. To the extent that a model is able to predict empirical data, an argument can be made for its outcome validity; in dynamic-descriptive models, common evidential criteria are different types of patterns of decision-making (e.g., overconfidence, risk aversion). For example, Brenner, Griffin, and Koehler (2005) present Random Support Theory as a means to explain why individuals demonstrate systematic patterns of miscalibration in their judgments of probability outcomes, regardless of whether or not they received feedback. They point to the use of case-based, as opposed to class-based, judgment as the primary source of this error, meaning
that information regarding the particular case at hand is privileged over information regarding the class from which the case originates. Case-based judgment is implicated in the diminished development of decisional expertise, both directly and through its contribution to the decisional errors of overconfidence and conservatism (i.e., the undervaluing of novel information).

While learning is a common element of many dynamic models, it is not a prerequisite for the label. The defining feature of dynamic models lies in their perspective on time and how the decision process comes to be altered by it. Some dynamic-descriptive models are entirely unconcerned with the learning process, instead studying phenomena such as the accumulation of affective cues with repeated exposure to a decision problem, which ultimately guides later decision-making (Grossberg & Gutowski, 1987), and patterns of consumer decision-making following continued exposure to other consumers’ behavior in a market (Zhang & Zhang, 2007). In such cases, dynamics arise through the way the decision process is altered by forces external to the decision maker.

Some dynamic-descriptive models posit that when in time a decision is made relative to other decisions will influence how it is made and which option is chosen. This is typically seen in studies examining resource allocation, where dispensation of a resource at Time 1 necessarily prohibits its dispensation at Time 2. The Garbage Can Model of Organizational Choice (Cohen, March, & Olsen, 1972) is a well-known example that describes decision participants’ time and energy as a resource that is allocated to a decision depending on their expenditure on other decisions in the system. Resource allocation has also been shown to be differentially predicted by the relative discrepancy between current and goal states, depending on when in the decisional timeline the allocation choice is made: discrepancy predicts greater resource allocation early on but diminished allocation later (Vancouver, Weinhardt, & Schmidt, 2010).

Exemplar Model: Adaptive Character of Thought-Rational (ACT-R) Theory
Anderson (1990) posited a model of how high-level cognitive processes interact with one another. Called the Adaptive Character of Thought-Rational (ACT-R) theory, it describes how people think and process different forms of information based on two primary criteria, speed and selectivity, in order to maximize their adaptive capacities given the constraints of the environment. There are several essential components in the ACT-R model: perceptual-motor and memory modules, buffers, and pattern matchers. These components transform the information relayed between them through the use of procedural, goal, retrieval, and imaginal operations. Together, components and operations form a type of production system, wherein if-then contingencies are matched to the information stored in the
perceptual-motor and memory modules to determine which operation is carried out on said information. Using computational modeling, these components can be given as inputs, and the relational aspects of the different components can be viewed and understood as the information is transformed through different operations.

While the original ACT-R model was presented in the context of visual cognition, it has since been extended to many other areas, such as decision-related learning. Using the ACT-R framework as the foundation, it is possible to construct variations that delineate both the basic ACT-R theoretical assumptions about cognition and the parameters of the specific decision problem being modeled. One example of applying such a model to decision-making is to examine how people learn to enact a decision process by experiencing and accumulating information regarding similar decisions, weighing and understanding the utility of previously experienced decision outcomes, and using this information to refine their overarching process (Gonzalez, Lerch, & Lebiere, 2003). A simplified sketch of that instance-based idea follows.
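The sketch below is a heavily simplified gloss on instance-based learning; the class, options, decay parameter, and payoffs are invented, and real instance-based learning models (Gonzalez, Lerch, & Lebiere, 2003) use ACT-R’s full activation equation with noise and similarity-based partial matching rather than the bare recency weighting shown here:

```python
import random

class InstanceBasedChooser:
    """Choose by blending the utilities of remembered instances,
    weighted by a simple recency-based activation."""

    def __init__(self, options, decay=0.5):
        self.options = options
        self.decay = decay
        self.memory = []   # stored instances: (trial, option, observed_utility)
        self.trial = 0

    def _blended_value(self, option):
        instances = [(t, u) for t, o, u in self.memory if o == option]
        if not instances:
            return 2.0   # optimistic default draws each option at least once
        # Recency-based weights: more recent instances carry more weight
        weights = [(self.trial - t + 1) ** -self.decay for t, _ in instances]
        return sum(w * u for w, (_, u) in zip(weights, instances)) / sum(weights)

    def choose_and_learn(self, payoff_fn):
        self.trial += 1
        choice = max(self.options, key=self._blended_value)
        self.memory.append((self.trial, choice, payoff_fn(choice)))  # store the instance
        return choice

payoffs = {"safe": lambda: 1.0, "risky": lambda: random.choice([3.0, -2.0])}
chooser = InstanceBasedChooser(["safe", "risky"])
history = [chooser.choose_and_learn(lambda o: payoffs[o]()) for _ in range(100)]
print(history[-10:])   # with experience, choices typically settle on the higher-utility option
```

The key dynamic-descriptive property is that the decision rule itself changes with experience: every choice deposits an instance in memory that reshapes how the next choice is evaluated.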
Applications in Organizational Science

Dynamic-descriptive models examine how interconnected decisions relate to one another and how this interaction influences future decision-making. As a result, this type of model is best positioned to study decisions where learning can be incorporated into the model and responsiveness to feedback is evident. Research questions could address the structure of the learning process (e.g., how is past experience weighted relative to novel information regarding the current problem?), the exogenous factors that alter how one updates their decision processes (e.g., does the organization’s structure affect the speed of updating?), or inter-individual comparisons of the aforementioned (e.g., are there individual differences that moderate these effects?).

One area in which nearly all individuals are continually engaged in this type of decision-making is that of creating work-family balance. Employees must regularly assess the time and energy resources available to them and the degree of demands placed on them to make resource allocation decisions between the home and work domains. These decisions are interdependent, in that outcomes such as family members’ expressions of (dis)satisfaction with their participation in home life and supervisors’ reports on their performance serve as sources of feedback that they must adapt to. As the employee continues to iterate through the decision-making process—allocating time, receiving feedback, adjusting in response—they should gradually converge on a decision process optimized for their situation. Here, computational modeling can be utilized to predict which employees are likely to experience issues and what form those issues may take. Using computational modeling techniques to examine how these decisions are made, and how they impact other areas of job performance and personal life, can help the researcher better understand which individuals are most at risk
for developing problems and thus provide a means by which to potentially preclude them.

For example, an individual who has just started a new job may be overzealous and regularly make the decision to stay late at work. If this results in stress, burnout, and interpersonal conflicts at home, then they will likely adjust their decision process to accord greater weight to health and family responsibilities. However, if they over-correct and begin to make decisions that result in suboptimal resource expenditure at work, negative feedback from their boss will prompt them to re-adjust. By utilizing a dynamic-descriptive computational model, such as a variant of ACT-R, to enumerate this scenario, it becomes possible to outline in detail how new employees come to devise a decision-making process that is effective in balancing their work and home lives. For example, using ACT-R as a foundation, work and family demands could be expressed within a mathematical representation of ACT-R, and a computational model could then be trained from these representations to derive the relevant parameters. One possible implication of such a model is that the oscillations between extremes will likely be larger and more sustained if the new job is highly dissimilar from any previously held, because the newcomer has little relevant information stored in memory to assist in structuring their decisions. A manager with this knowledge may mitigate this concern by providing prompt and direct feedback to new employees who appear to be caught in this pattern.

Static-Prescriptive
Once irregularities in decision-making are explicitly modeled, improvements in strategy or heuristics can follow. Static-prescriptive models seek to provide decision makers with methods to contend with the non-ideal conditions of the real world so that they can attain greater optimality than would be the case if they were left to their own devices. When testing such models, it is common for multiple decision strategies to be elaborated and compared to determine which produces the best outcome, given a particular pattern of decision-making or common decision constraint. Bogacz, Brown, Moehlis, Holmes, and Cohen (2006) exemplified this structure in their pursuit of the optimal decision-making strategy in two-alternative forced-choice tasks. By including time pressure as a factor that typically inhibits rational choice, they compared six alternative decision models and found that one in particular, the drift-diffusion model, produced the most optimal outcomes across criteria. A minimal simulation of such a process appears below.
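The following sketch simulates a drift-diffusion process for a two-alternative forced choice. All parameter values are invented for demonstration, and this is only an illustration of the model class; Bogacz and colleagues’ treatment of optimal threshold settings is analytical rather than a simulation of this kind:

```python
import random

def drift_diffusion_trial(drift=0.3, threshold=1.0, noise=1.0, dt=0.01):
    """One trial of a drift-diffusion process: noisy evidence accumulates
    until it crosses the upper (+) or lower (-) decision threshold."""
    evidence, elapsed = 0.0, 0.0
    while abs(evidence) < threshold:
        # Euler step: mean drift plus Gaussian noise scaled by sqrt(dt)
        evidence += drift * dt + random.gauss(0.0, noise * dt ** 0.5)
        elapsed += dt
    return ("correct" if evidence > 0 else "error"), elapsed

trials = [drift_diffusion_trial() for _ in range(2000)]
accuracy = sum(outcome == "correct" for outcome, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(f"accuracy = {accuracy:.2f}, mean decision time = {mean_rt:.2f}")
```

Raising the threshold trades speed for accuracy; the prescriptive question Bogacz and colleagues address is where to set that threshold so the trade-off is optimal under time pressure.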
Other static-prescriptive models seek to identify contextual or input variables that affect the performance of the decision-making agent, with the aim of providing prescriptions for their ideal levels or forms. For example, Pete, Pattipati, and Kleinman (1993) found that different methods of aggregating individual judgments to produce a group decision were best depending on the relative competencies of the group members and the fulfillment of certain conditions pertaining to the cost structure and prior probabilities associated with outcome events.

Many static-prescriptive models are defined within the context of a particular decision problem as it exists in a naturalistic setting so that a solution uniquely tailored to the problem may be proposed. Here, the instantiation of elements within the computational code is intended to represent the structure of the decision problem, so that the performance of the proposed solution may be assessed relative to typical performance. For instance, Malatji, Zhang, and Xia (2013) proposed a method for optimizing building energy investment decisions in accordance with multiple objectives (e.g., maximizing energy savings, minimizing payback periods). Upon retrofitting their solution to existing data, they found that implementing it resulted in significantly higher energy savings and shorter payback periods than the conventional method.

Finally, there exist a few static-prescriptive models that raise the question of what it means to be truly “static.” For instance, recognition-primed decision (RPD) models (e.g., Nowroozi, Shiri, Aslanian, & Lucas, 2012) usually require training with human experts for the system to be populated with instances of possible decision scenarios. Although training is incorporated into the model, in that the RPD system must experience a wide range of scenarios and learn the appropriate response via human input, this does not necessarily imply that the model is a dynamic one. In such instances, “training” is more aptly thought of as “knowledge”—an exogenous variable that does not increase in magnitude with additional iterations of the decision process. Therefore, for the purposes of this discussion, RPD models are considered static-prescriptive if they seek to maximize decision-making performance but do not incorporate the process by which prior experience is used to update decision makers’ knowledge structures.

Exemplar Model: Fast and Frugal Heuristics
A well-known static-prescriptive example is the model of bounded rationality described by Gigerenzer and Goldstein (1996), which posits that rules of thumb that are ecologically rational and aligned with the actual, as opposed to ideal, psychological capacities of decision makers are best suited for real-world decision-making. Normative theory typically relegates heuristics to the category of “human error,” but when time is limited, knowledge is scarce, or computational power is finite, the theory of bounded rationality insists that their use is actually adaptive and that purely rational methods will produce less optimal outcomes once the idiosyncrasies of human cognition and real-world conditions are taken into consideration. These conditions and cognitive limitations are not merely irregularities that produce random error; rather, they are consistent patterns that affect the decision process in predictable ways. Although
decision-making would certainly be more rational in their absence, their regularity implies actionability, and fast and frugal heuristics are proposed as a viable solution.

To test these propositions, the authors put forth a decision scenario in which a choice must be made between two alternatives on a single quantitative dimension, but information on the alternatives is not always available. Based on the concept of satisficing, a simple algorithmic heuristic called “take the best” was proposed, in which the simulated decision makers assess the options according to their ability to recognize and match them to information stored in memory. Within this algorithm, more salient memory cues take precedence, and less salient cues are ignored. If the most salient cue is available for only one option, that option is selected; but if the cue relates to both options, the decision maker will determine whether or not the cue discriminates between them, that is, makes one option relatively more preferable. If it does, they will select the better option, but if it does not, they will search for additional cues in memory. Finally, if no cues are available for either option, or if the only available cue fails to discriminate, the decision maker will select at random. This heuristic is decidedly non-optimal, as it does not allow for missing information to be sought out, nor does it include any integration of current information. Yet, when the performance of the “take the best” algorithm was compared to that of five opposing algorithms in a simulation experiment, it was found to perform just as well, and in some cases better. Notably, the five opposing algorithms were designed to integrate the information, which from a rational perspective is a superior strategy. A sketch of the heuristic’s lexicographic logic follows.
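The sketch below follows the verbal description above; the cue names and options are invented, and Gigerenzer and Goldstein’s full algorithm also includes a recognition step that this sketch omits:

```python
import random

def take_the_best(option_a, option_b, cue_order):
    """Sketch of the 'take the best' heuristic: cues are consulted one at a
    time in order of salience/validity, and the first cue that discriminates
    decides. Cue values are True, False, or None (unknown)."""
    for cue in cue_order:
        a, b = option_a.get(cue), option_b.get(cue)
        if a and not b:
            return "A"   # cue known and positive only for A: choose A
        if b and not a:
            return "B"   # cue known and positive only for B: choose B
        # cue ties or is unknown for both options; move on to the next cue
    return random.choice(["A", "B"])   # no cue discriminates: guess

# Invented example: judging which of two cities is larger
cues = ["is_national_capital", "has_major_airport", "has_university"]
city_a = {"is_national_capital": False, "has_major_airport": True, "has_university": True}
city_b = {"is_national_capital": False, "has_major_airport": None, "has_university": True}
print(take_the_best(city_a, city_b, cues))   # "A": the airport cue decides
```

The first discriminating cue settles the matter outright, with no attempt to integrate or seek out the remaining information, which is precisely what makes the heuristic "frugal."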
Applications in Organizational Science

Research questions paired with this model type must be intent on understanding such decisions independent of the output of prior decisions. The types of questions best matched to static-prescriptive models are those that examine decisions known to regularly produce suboptimal outcomes. Hindrances to performance may stem from the nature of the decision problem itself (e.g., high complexity), the context in which the decision problem takes place (e.g., time pressure), or limitations of the decision makers themselves (e.g., limited computational resources). In each case, static-prescriptive models are useful in proposing and testing solutions for the given issue. Furthermore, it is most common for these models to be proposed in response to a decision problem that occurs within a specific context, and computational methods are particularly helpful in such scenarios, due to their ability to generate and modify the precise conditions in which the decision occurs.

One area in the organizational sciences that has frequently enjoyed the interjection of the decision-making perspective is that of selection. The models put forth may be either static or dynamic, depending on whether they consider the
role of prior decisions in informing current ones and the accumulation of expert experience over time. However, the use of this perspective has been largely limited to either the development of descriptive theory or the generation of normative (i.e., ideal) best practices. With regard to the latter, meta-analytic evidence points to mechanistic data combination via regression as a significantly more effective strategy in selection decisions than holistic, experience-based combination (Kuncel, Klieger, Connelly, & Ones, 2013). Yet the clearly less effective strategy remains the preferred method for the majority of decision makers, either because they fail to see the selection problem as a probabilistic one or because they mistakenly believe their holistic judgments improve with experience (Highhouse, 2008). One fruitful avenue of decision-making research may be the exploration of a prescriptive middle ground between normative standard and descriptive reality: if preference for holistic decision-making is taken as a given, what strategies might be proposed to increase its performance to approach the levels achieved by mechanistic decision-making? A static-prescriptive computational model might be useful in comparing different alternatives to the mechanistic gold standard in order to generate workable suggestions for improving the selection process.
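A minimal sketch of how such a comparison might be set up appears below. It assumes an invented linear true-performance model and treats holistic judgment as mechanical combination corrupted by idiosyncratic weights and judgment noise; none of the weights or noise levels are parameters from Kuncel et al. (2013) or Highhouse (2008).

```python
import random

random.seed(1)
TRUE_WEIGHTS = [0.5, 0.3, 0.2]  # assumed validities of three predictors

def simulate_candidate():
    scores = [random.gauss(0, 1) for _ in TRUE_WEIGHTS]
    performance = sum(w * s for w, s in zip(TRUE_WEIGHTS, scores)) + random.gauss(0, 0.5)
    return scores, performance

def mechanical(scores):
    # Fixed, regression-like weighting of the predictors.
    return sum(w * s for w, s in zip(TRUE_WEIGHTS, scores))

def holistic(scores):
    # Idiosyncratic weighting plus random judgment error on each decision.
    weights = [w + random.gauss(0, 0.3) for w in TRUE_WEIGHTS]
    return sum(w * s for w, s in zip(weights, scores)) + random.gauss(0, 1.0)

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def validity(strategy, n=2000):
    candidates = [simulate_candidate() for _ in range(n)]
    predictions = [strategy(scores) for scores, _ in candidates]
    outcomes = [performance for _, performance in candidates]
    return correlation(predictions, outcomes)

print(f"mechanical validity: {validity(mechanical):.2f}")
print(f"holistic validity:   {validity(holistic):.2f}")
```

From here, one could interpolate between the two strategies (e.g., shrinking the judgment noise or anchoring the idiosyncratic weights) to explore the prescriptive middle ground described above.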
Dynamic-Prescriptive

Similarly to their static-prescriptive counterparts, dynamic-prescriptive models provide strategies and recommendations for maximizing decisional optimality. However, these recommendations address sequences of decisions rather than singular ones, wherein the outcomes of initial decisions update the decision rules used or constrict the choices available in subsequent ones. The most commonly studied variant of dynamic decision-making describes a scenario where a single decision is repeatedly made, and the decision maker is able to utilize feedback from prior decisions to inform future ones. When this is the case, decisions are made with greater accuracy, as is evidenced by findings in the learning and expert judgment literatures. But if advancement toward optimality is expected in dynamic decision contexts, then what use are prescriptions? If people will naturally attune their decisional processes to the problem at hand given enough time, it would seem reasonable to judge providing them with remedial guidelines as redundant. To this point, the dynamic-prescriptive model is clearly the most under-represented of the four types, as evidenced in Table 2.1. Yet here we will make the arguments that prescriptive models do in fact have a place in dynamic contexts and that the dearth of literature in this area does not imply the futility of the efforts contained therein but rather a horizon of untapped opportunity.

First, individual-level processing is not the only factor that affects decision outcomes; variables at the group and organizational levels present as prescriptive targets when they impede the adaptation of one's decision-making process
to the decision problem. Thus, dynamic-prescriptive models commonly focus on the impact of higher-level variables on individual-level outcomes: of the six models reviewed here, only one provides exclusively individual-level prescriptions, while the other five address effects originating at the group and organizational levels. Several establish aspects of the organizational structure as a principal force in determining the optimality of decision-making within it (e.g., Gode & Sunder, 1993). In one such study, Kumar, Ow, and Prietula (1993) simulated the decisional processes of hospital units constructing patient schedules over different intervals of time. They found that success was most heavily impacted by features of the organizational structure, namely the degree of centralization and differences in workflow. Which specific type of structure proved most effective depended on parameters of the decision problem that were in flux throughout the simulation, such as patient load and mean interarrival times. Organizations that face similar scheduling problems could benefit from studies such as this by identifying which profile of decision parameters best approximates the one they most frequently face and adopting the corresponding organizational structure to improve their decision-making effectiveness.

Prescriptive work may also identify the elements of and inputs to the dynamic decision-making process that are most impactful on its outcomes, which is useful for informing change efforts. For example, Macy (1991) examined individuals' decisions of whether or not to cooperate in a prisoner's dilemma task. While the size, mobility, density, and anonymity of the group's social network had a large and significant impact on these decisions, this was much less true for the individual-level variables studied. When formulating prescriptive solutions, it should go without saying that only those variables that are both influential and tractable should be targeted for change: if a variable is not influential, the effort is impotent, and if it is not tractable, the effort is impossible. Naturally, the predictors analyzed in Macy (1991) have varying levels of influence and tractability. For instance, network size, while the most impactful of the studied variables on rates of cooperation, is not one that can be unilaterally adjusted; but another influential predictor, anonymity, could be reduced by encouraging interactions and facilitating communication between all group members, particularly for large groups more prone to anonymized interaction. Thus, practitioners might conclude based on this study that group anonymity is a variable ripe for targeting when increased cooperation is the goal.

Finally, prescriptive models are useful for piloting intuitively appealing solutions as a step prior to empirical testing. For instance, Kang (2001) demonstrates the importance for decision accuracy of hierarchical sensitivity, the degree to which a team leader effectively weights each team member's judgments based on relative member competence when aggregating all judgments to form a final decision. Both hierarchical sensitivity and decision accuracy are shown to be negatively affected by the presence of incompetent group members, but one
possible solution, increasing information redundancy (i.e., the amount of information shared by at least two members in the group), did nothing to negate the impact of individual incompetence. Although these findings could be interpreted as a failure to identify a workable solution, this failure is still valuable in that it redirects future prescriptive efforts away from an inevitable dead end.
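The aggregation logic at the heart of this example is easy to sketch. The snippet below contrasts a leader who weights members' judgments by competence with one who weights everyone equally; the competence values and noise model are illustrative assumptions rather than parameters of Kang's (2001) Team-Soar model.

```python
import random

random.seed(2)
TRUTH = 100.0                 # the quantity the team is estimating
COMPETENCE = [0.9, 0.7, 0.2]  # assumed member competencies (0 = incompetent)

def member_judgment(competence):
    # Less competent members produce noisier estimates of the truth.
    return random.gauss(TRUTH, 25 * (1 - competence))

def mean_error(weights, trials=5000):
    total_weight = sum(weights)
    error = 0.0
    for _ in range(trials):
        judgments = [member_judgment(c) for c in COMPETENCE]
        estimate = sum(w * j for w, j in zip(weights, judgments)) / total_weight
        error += abs(estimate - TRUTH)
    return error / trials

# A hierarchically sensitive leader weights judgments by member competence;
# an insensitive leader weights all judgments equally.
print("sensitive leader error:  ", round(mean_error(COMPETENCE), 2))
print("insensitive leader error:", round(mean_error([1.0, 1.0, 1.0]), 2))
```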
Exemplar Model: Cognitive Heuristics and Feedback

Kleinmuntz (1985) demonstrated how the performance of heuristics can vary as a function of certain characteristics of the decision task, namely those that determine the availability and efficacy of the feedback relayed to the decision maker. Based on the rationale that a heuristic is merely a type of decisional strategy that is subject to the same feedback-induced refinements as any other, the author hypothesized that emphasizing certain task characteristics that expand the opportunity for feedback will increase the performance of the studied heuristics. Situating this hypothesis within the context of medical decision-making, the author simulated the treatment decisions of doctors presented with patient cases that differ in initial patient health, the base-rate distribution of the patient's illness, and the diagnostic ability of the tests chosen. Of these task characteristics, initial patient health was the primary determinant of the amount of feedback available, given that very poor initial health implies the patient will not live through many rounds of treatment.

With this simulation, various heuristic strategies are assessed by comparing their performance relative to a benchmark strategy that is maximally optimal (i.e., the statistically derived, normative solution) and one that is minimally optimal (i.e., randomly decided). Generally speaking, the performance of the heuristics fell between these benchmarks at the onset of decision-making, but their relative position within these bounds differed depending on the nature of the task characteristics. By varying the decision-making task on different dimensions, the author was able to determine which factors were most influential in determining the heuristics' performance, as well as which heuristics were most effective under different conditions. Of the task characteristics manipulated, initial patient health was by far the most influential in determining the effectiveness of heuristics. The dominance of this factor was taken to mean that feedback should be considered a central variable in the study of heuristic performance, more so than the more commonly studied variables of base-rates and information availability (i.e., diagnostic ability).
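A stripped-down version of this simulation logic appears below. Initial health caps how many feedback cycles occur, and a belief-updating heuristic is compared against optimal and random benchmarks; the success probabilities, health payoffs, and learning rule are all invented for illustration rather than drawn from Kleinmuntz (1985).

```python
import random

random.seed(3)

def run_case(choose_a_prob, initial_health, rounds=12):
    """Simulate repeated treat-and-observe cycles; return final patient health."""
    health = initial_health
    belief = 0.5  # estimated probability that treatment A is the better one
    for _ in range(rounds):
        if health <= 0:
            break  # patient lost: no further treatment rounds, no more feedback
        used_a = random.random() < choose_a_prob(belief)
        success = random.random() < (0.7 if used_a else 0.4)  # A is truly better
        health += 5 if success else -10
        if used_a:
            # Feedback from observed outcomes refines the belief about A.
            belief += 0.2 * ((1.0 if success else 0.0) - belief)
    return health

strategies = {
    "optimal":   lambda belief: 1.0,     # always pick the truly better treatment
    "random":    lambda belief: 0.5,     # benchmark: choose at random
    "heuristic": lambda belief: belief,  # probability-match on current belief
}

for initial_health in (10, 60):  # poor vs. good initial health
    results = {name: sum(run_case(s, initial_health) for _ in range(3000)) / 3000
               for name, s in strategies.items()}
    print(initial_health, {k: round(v, 1) for k, v in results.items()})
```

In this toy version, poor initial health allows few feedback cycles, so the heuristic hovers near the random benchmark; with better initial health it climbs toward the optimal one, mirroring the qualitative pattern described above.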
Applications in Organizational Science

Goal setting is one topic in the organizational sciences that exemplifies dynamic decision-making and would particularly benefit from prescriptive
modeling. Goal-related decisions are made regarding which goals one should pursue, the order in which one's chosen goals should be prioritized, how resources should be distributed between goals, when goals should be abandoned, and so on. At any time, individuals are usually in pursuit of multiple goals simultaneously, which compete for attention and at times may conflict with one another (Locke & Latham, 2002). Thus, goal-related decisions are dynamic in two senses. First, the successes and failures met in pursuit of past goals can impact how one will choose to set, prioritize, and achieve future goals: if one faces consistent failures in accomplishing a certain type of goal, one will likely learn to avoid pursuing such goals in the future. Second, choices made at earlier points in time can constrain the choices made at later points in time. For example, selecting a goal of pursuing a career as a licensed psychologist will preclude the simultaneous goal of entering a career as a mechanic, as well as any associated goal-setting decisions following thereafter. Dynamic-prescriptive modeling provides a useful tool for optimizing goal-related decisions, accounting for the complex dynamics of how these decisions unfold over time.

A specific example of applying dynamic-prescriptive modeling to goal setting pertains to the question of how feedback provided by organizational leadership influences employees' development of adaptive goal-setting strategies. Which goals an employee chooses to pursue and how they are prioritized should ideally reflect organizational needs and priorities. However, employees may fail to structure their goals accordingly if the organization is not providing effective feedback to guide future behavior. An interested researcher could draw from Kleinmuntz (1985) to construct a model that pits informational availability (the degree to which the organization makes clear its goals and priorities) against level of feedback (amount and latency) in determining which is more influential in guiding employee goal-setting toward better decision-making. Based on the original study's findings, it would be hypothesized that feedback is the more important of the two and thus should be targeted in practitioner efforts.
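A first pass at such a model could look like the sketch below, where an employee's perceived goal priorities start out distorted by unclear organizational signals and are pulled toward the true priorities whenever feedback arrives. The priority vector, learning rate, and parameter grid are illustrative assumptions, not values from Kleinmuntz (1985).

```python
import random

random.seed(4)
ORG_PRIORITIES = [0.6, 0.3, 0.1]  # assumed true organizational goal weights

def misalignment(info_clarity, feedback_rate, periods=200, learning_rate=0.1):
    # Low information clarity means a noisier initial read of the priorities.
    perceived = [max(0.01, w + random.gauss(0, 1.0 - info_clarity))
                 for w in ORG_PRIORITIES]
    for _ in range(periods):
        total = sum(perceived)
        perceived = [w / total for w in perceived]  # renormalize to weights
        if random.random() < feedback_rate:
            # Feedback nudges perceived priorities toward the true ones.
            perceived = [w + learning_rate * (t - w)
                         for w, t in zip(perceived, ORG_PRIORITIES)]
    return sum(abs(w - t) for w, t in zip(perceived, ORG_PRIORITIES))

for clarity, feedback in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    runs = [misalignment(clarity, feedback) for _ in range(500)]
    print(f"clarity={clarity}, feedback={feedback}: "
          f"mean misalignment={sum(runs) / len(runs):.3f}")
```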
Conclusion

Within the organizational sciences, narrative theories (i.e., theories based on the story that can be told about a set of results) are the most often utilized. This manner of generating theory, however, is far less useful for making precise descriptions, predictions, or prescriptions than formal theories (i.e., theories that are mathematical or precisely specified and rule-based; Braun, Kuljanin, Grand, Kozlowski, & Chao, 2022; Grand, Braun, Kuljanin, Kozlowski, & Chao, 2016). Decision-making, in contrast, is an area of research that almost exclusively uses formal models for generating theories. Thus, computation is inherent to this area of study in a way that the rest of the organizational sciences are still catching up with.
Applied to the study of decision-making, computational modeling allows for the creation of internally consistent, precisely specified theory. Such theory is able to capture not only the inputs and outputs of the decision-making process but also the cognitive processes by which the former are transformed into the latter (Kozlowski, Chao, Grand, Braun, & Kuljanin, 2016). Thus, the addition of computational modeling to the study of decision-making provides researchers with a useful tool—a magnifying lens—that produces a fine-grained picture of what choice will be made given a set of inputs (prediction), the process by which decisions are made (description), and how that process can be improved (prescription). These three outcomes are achieved via simulation—not mental simulation, as is the case for informal theories, but computational simulation, which requires that the theory modeled be internally consistent (Weinhardt & Vancouver, 2012).

But why exactly does theoretical precision matter for the study of decision-making? In short, a detailed understanding of existing processes is required for generating prescriptions for how those processes can be improved; a mere sense of how variables in a system are directionally related is insufficient for altering its output. Computational models provide detailed representations of both decisional and contextual factors and, given this enhanced precision, are relatively more falsifiable and less vague (Gray & Cooper, 2010; Edwards & Berry, 2010). Both characteristics contribute to the development of workable solutions.

The types of decision-making computational models developed in the literature have become increasingly complex, each extending the purview of prior models: first came static-descriptive models that examined singular decisions with no consideration of intertemporal dynamics, followed by dynamic-descriptive models that did account for the element of time. Currently, prescriptive models are increasingly put forth to capitalize on the progression of prior descriptive work, providing actionable guidelines for improving decision outcomes and conveying practical understanding of specific cases of decision-making. Computational modeling provides a more practical and less expensive testing ground for these ideas, generating initial evidence that empirical data collection efforts are warranted. If the empirical data support the theory outlined in the computational model, then we become able to contribute evidence-based recommendations for improving real-world decision-making, justifying our efforts as researchers.

Going forward, there is still much to be done in the way of transforming descriptive knowledge into actionable insights for practitioners. While a significant amount of work has been done to expand our collective conception of the decision-making process in all of its complexity, the next step of prescription generation and testing has yet to be taken for many descriptive models. This may in part be due to the relegation of the study of decision-making to the periphery of the organizational sciences—which is ironic, given that this area indisputably has much to gain from the pursuit of prescriptive work. Few would
disagree with the assertion that improving decision-making at all levels of the organization is an impactful and worthy goal. Hopefully, this chapter has made the argument that computational models provide an invaluable tool for generating both the precise theory that is required by scientists and the evidence-based recommendations that are desired by practitioners. It is now up to us to seize this opportunity and make good use of it.

Note

1. Static decisions may include inputs or processes that are, in fact, dynamic. It is likely that static decisions require a decision-making process that spans time rather than existing in a single moment. As such, what differentiates static from dynamic decisions is whether the decision is isolated (not affected by prior decisions) or interdependent (affected by prior decisions). For instance, decision field theory is described as a dynamic model that integrates the length of time taken to make a decision as an essential factor for determining the decision maker's ultimate preference function (Busemeyer & Townsend, 1993). However, since it does not include an updating feature wherein prior decision outcomes are utilized as an input for subsequent decisions, it is deemed a static model in this typology.
References

Abar, S., Theodoropoulos, G. K., Lemarinier, P., & O'Hare, G. M. (2017). Agent based modelling and simulation tools: A review of the state-of-art software. Computer Science Review, 24, 13–33. https://doi.org/10.1016/j.cosrev.2017.03.001
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In K. J. Beckmann (Ed.), Action control: From cognition to behavior (pp. 11–39). Springer. https://doi.org/10.1007/978-3-642-69746-3_2
Anderson, J. R. (1990). The adaptive character of thought. Psychology Press.
Baron, J. (2012). The point of normative models in judgment and decision making. Frontiers in Psychology, 3, 577. https://doi.org/10.3389/fpsyg.2012.00577
Bell, D. E., Raiffa, H., & Tversky, A. (1988). Descriptive, normative, and prescriptive interactions in decision making. In D. E. Bell, H. Raiffa, & A. Tversky (Eds.), Decision making: Descriptive, normative, and prescriptive interactions (pp. 9–30). Cambridge University Press. https://doi.org/10.1017/CBO9780511598951.003
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4), 700–765. https://doi.org/10.1037/0033-295X.113.4.700
Braun, M. T., Kuljanin, G., Grand, J. A., Kozlowski, S. W. J., & Chao, G. T. (2022). The power of process theories to better understand and detect consequences of organizational interventions. Industrial and Organizational Psychology: Perspectives on Science and Practice, 15, 99–104. https://doi.org/10.1017/iop.2021.125
Brenner, L., Griffin, D., & Koehler, D. J. (2005). Modeling patterns of probability calibration with random support theory: Diagnosing case-based judgment. Organizational Behavior and Human Decision Processes, 97(1), 64–81. https://doi.org/10.1016/j.obhdp.2005.02.002
Brown, R., & Vari, A. (1992). Towards a research agenda for prescriptive decision science: The normative tempered by the descriptive. Acta Psychologica, 80(1–3), 33–47. https://doi.org/10.1016/0001-6918(92)90039-G
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100(3), 432–459. https://doi.org/10.1037/0033-295X.100.3.432
Calder, M., Craig, C., Culley, D., de Cani, R., Donnelly, C. A., Douglas, R., Edmonds, B., Gascoigne, J., Gilbert, N., Hargrove, C., Hinds, D., Lane, D. C., Mitchell, D., Pavey, G., Robertson, D., Rosewell, B., Sherwin, S., Walport, M., & Wilson, A. (2018). Computational modelling for decision-making: Where, why, what, who and how. Royal Society Open Science, 5(6). https://doi.org/10.1098/rsos.172096
Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A garbage can model of organizational choice. Administrative Science Quarterly, 17(1), 1–25.
Cyert, R. M., Feigenbaum, E. A., & March, J. G. (1959). Models in a behavioral theory of the firm. Behavioral Science, 4(2), 81–95. https://doi.org/10.1002/bs.3830040202
David, N., Sichman, J. S., & Coelho, H. (2003). Towards an emergence-driven software process for agent-based simulation. In J. S. Sichman, F. Bousquet, & P. Davidsson (Eds.), Multi-agent-based simulation II. Springer. https://doi.org/10.1007/3-540-36483-8_7
Dietz, T., & Stern, P. C. (1995). Toward a theory of choice: Socially embedded preference construction. The Journal of Socio-Economics, 24(2), 261–279. https://doi.org/10.1016/1053-5357(95)90022-5
Dishop, C. R., Braun, M. T., Kuljanin, G., & DeShon, R. P. (2020). Thinking longitudinal: A framework for scientific inferences with temporal data. In Y. Griep, S. D. Hansen, T. Vantilborgh, & J. Hofmans (Eds.), Handbook on the temporal dynamics of organizational behavior (pp. 404–425). Edward Elgar Publishing.
Edwards, J. R., & Berry, J. W. (2010). The presence of something or the absence of nothing: Increasing theoretical precision in management research. Organizational Research Methods, 13(4), 668–689. https://doi.org/10.1177/1094428110380467
Gibson, F. P., Fichman, M., & Plaut, D. C. (1997). Learning in dynamic decision tasks: Computational model and empirical evidence. Organizational Behavior and Human Decision Processes, 71(1), 1–35. https://doi.org/10.1006/obhd.1997.2712
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4), 650–669. https://doi.org/10.1037/0033-295X.103.4.650
Gode, D. K., & Sunder, S. (1993). Allocative efficiency of markets with zero-intelligence traders: Market as a partial substitute for individual rationality. Journal of Political Economy, 101(1), 119–137. https://doi.org/10.1086/261868
Goldstein, W. M., & Hogarth, R. M. (1997). Judgment and decision research: Some historical context. In W. M. Goldstein & R. M. Hogarth (Eds.), Research on judgment and decision making: Currents, connections, and controversies (pp. 3–65). Cambridge University Press.
Gonzalez, C., Lerch, J. F., & Lebiere, C. (2003). Instance-based learning in dynamic decision making. Cognitive Science, 27(4), 591–635. https://doi.org/10.1207/s15516709cog2704_2
Grand, J. A., Braun, M. T., Kuljanin, G., Kozlowski, S. W. J., & Chao, G. T. (2016). The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams [Monograph]. Journal of Applied Psychology, 101, 1353–1385. https://doi.org/10.1037/apl0000136
Gray, P. H., & Cooper, W. H. (2010). Pursuing failure. Organizational Research Methods, 13(4), 620–643. https://doi.org/10.1177/1094428109356114
Grossberg, S., & Gutowski, W. E. (1987). Neural dynamics of decision making under risk: Affective balance and cognitive-emotional interactions. Psychological Review, 94(3), 300–318. https://doi.org/10.1037/0033-295X.94.3.300
Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32(4), 1229–1245.
Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology, 1(3), 333–342. https://doi.org/10.1111/j.1754-9434.2008.00058.x
Highhouse, S., Dalal, R. S., & Salas, E. (Eds.). (2013). Judgment and decision making at work. Routledge.
Hogarth, R. M., & Reder, M. W. (1987). Rational choice: The contrast between economics and psychology. University of Chicago Press.
Jin, Y., & Levitt, R. E. (1996). The virtual design team: A computational model of project organizations. Computational & Mathematical Organization Theory, 2(3), 171–195.
Johnson, J. G., & Busemeyer, J. R. (2005a). A dynamic, stochastic, computational model of preference reversal phenomena. Psychological Review, 112(4), 841.
Johnson, J. G., & Busemeyer, J. R. (2005b). Rule-based decision field theory: A dynamic computational model of transitions among decision-making strategies. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. 3–20). Lawrence Erlbaum Associates.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kang, M. (2001). Team-Soar: A computational model for multilevel decision making. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 31(6), 708–714. https://doi.org/10.1109/3468.983426
Katzell, R. (1994). Keeping up with the world of work. PsycCRITIQUES, 39(1), 65–66. https://doi.org/10.1037/033821
Kleinmuntz, D. N. (1985). Cognitive heuristics and feedback in a dynamic decision environment. Management Science, 31(6), 680–702.
Kozlowski, S. W. J., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2013). Advancing multilevel research design: Capturing the dynamics of emergence. Organizational Research Methods, 16, 581–615. https://doi.org/10.1177/1094428113493119
Kozlowski, S. W. J., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2016). Examining the dynamics of multilevel emergence: Learning and knowledge building in teams. Organizational Psychology Review, 6, 3–33. https://doi.org/10.1177/2041386614547955
Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research and methods in organizations: Foundations, extensions, and new directions (pp. 3–90). Jossey-Bass.
Kumar, A., Ow, P. S., & Prietula, M. J. (1993). Organizational simulation and information systems design: An operations level example. Management Science, 39(2), 218–240.
Kuncel, N. R., Klieger, D. M., Connelly, B. S., & Ones, D. S. (2013). Mechanical versus clinical data combination in selection and admissions decisions: A meta-analysis. Journal of Applied Psychology, 98(6), 1060–1072. https://doi.org/10.1037/a0034156
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57(9), 705–717. https://doi.org/10.1037/0003-066X.57.9.705
Luce, R. D., & Von Winterfeldt, D. (1994). What common ground exists for descriptive, prescriptive, and normative utility theories? Management Science, 40(2), 263–279.
Macy, M. W. (1991). Learning to cooperate: Stochastic and tacit collusion in social exchange. American Journal of Sociology, 97(3), 808–843. https://doi.org/10.1086/229821
Malatji, E. M., Zhang, J., & Xia, X. (2013). A multiple objective optimisation model for building energy efficiency investment decision. Energy and Buildings, 61, 81–87. https://doi.org/10.1016/j.enbuild.2013.01.042
Masuch, M., & LaPotin, P. (1989). Beyond garbage cans: An AI model of organizational choice. Administrative Science Quarterly, 34(1), 38–67. https://doi.org/10.2307/2392985
McHugh, K. A., Yammarino, F. J., Dionne, S. D., Serban, A., Sayama, H., & Chatterjee, S. (2016). Collective decision making, leadership, and collective intelligence: Tests with agent-based simulations and a field study. The Leadership Quarterly, 27(2), 218–241. https://doi.org/10.1016/j.leaqua.2016.01.001
Nowroozi, A., Shiri, M. E., Aslanian, A., & Lucas, C. (2012). A general computational recognition primed decision model with multi-agent rescue simulation benchmark. Information Sciences, 187, 52–71. https://doi.org/10.1016/j.ins.2011.09.039
Pan, X., Han, C. S., Dauber, K., & Law, K. H. (2006). Human and social behavior in computational modeling and analysis of egress. Automation in Construction, 15(4), 448–461. https://doi.org/10.1016/j.autcon.2005.06.006
Pete, A., Pattipati, K. R., & Kleinman, D. L. (1993). Optimal team and individual decision rules in uncertain dichotomous situations. Public Choice, 75(3), 205–230. https://doi.org/10.1007/BF01119183
Roe, R. M., Busemeyer, J. R., & Townsend, J. T. (2001). Multialternative decision field theory: A dynamic connectionist model of decision making. Psychological Review, 108(2), 370–392. https://doi.org/10.1037/0033-295X.108.2.370
Savage, L. J. (1954). The foundations of statistics. Wiley.
Steel, P., & König, C. J. (2006). Integrating theories of motivation. Academy of Management Review, 31(4), 889–913.
Usher, M., & McClelland, J. L. (2001). The time course of perceptual choice: The leaky, competing accumulator model. Psychological Review, 108(3), 550–592. https://doi.org/10.1037/0033-295X.108.3.550
Usher, M., & McClelland, J. L. (2004). Loss aversion and inhibition in dynamical models of multialternative choice. Psychological Review, 111(3), 757–769. https://doi.org/10.1037/0033-295X.111.3.757
Vancouver, J. B., Weinhardt, J. M., & Schmidt, A. M. (2010). A formal, computational theory of multiple-goal pursuit: Integrating goal-choice and goal-striving processes. Journal of Applied Psychology, 95(6), 985–1008. https://doi.org/10.1037/a0020628
Von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behavior (2nd rev. ed.). Princeton University Press.
Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and computational model of moral decision making in human and artificial agents. Topics in Cognitive Science, 2(3), 454–485. https://doi.org/10.1111/j.1756-8765.2010.01095.x
Wallsten, T. S., & Barton, C. (1982). Processing probabilistic multidimensional information for decisions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 8(5), 361–384. https://doi.org/10.1037/0278-7393.8.5.361
Wallsten, T. S., & González-Vallejo, C. (1994). Statement verification: A stochastic model of judgment and response. Psychological Review, 101(3), 490–504.
Weinhardt, J. M., & Vancouver, J. B. (2012). Computational models and organizational psychology: Opportunities abound. Organizational Psychology Review, 2(4), 267–292. https://doi.org/10.1177/2041386612450455
Xia, S., & Liu, J. (2013). A computational approach to characterizing the impact of social influence on individuals' vaccination decision making. PLoS One, 8(4). https://doi.org/10.1371/journal.pone.0060373
Xu, R., DeShon, R. P., & Dishop, C. R. (2020). Challenges and opportunities in the estimation of dynamic models. Organizational Research Methods, 23(4), 595–619. https://doi.org/10.1177/1094428119842638
Zhang, T., & Zhang, D. (2007). Agent-based simulation of consumer purchase decision-making and the decoy effect. Journal of Business Research, 60(8), 912–922. https://doi.org/10.1016/j.jbusres.2007.02.006
3

COMPUTATIONAL MODELING IN ORGANIZATIONAL DIVERSITY AND INCLUSION

Hannah L. Samuelson, Jaeeun Lee, Jennifer L. Wessel, and James A. Grand
Our ability to reach unity in diversity will be the beauty and the test of our civilization. —Mahatma Gandhi
Diversity, inclusion, and equity have been—and remain—among the most energizing, widespread, and challenging social issues faced by humans. The exclusion of, preference for, and distinctions among individuals based on their physical features, group memberships, and/or value and belief systems are intimately interwoven into the legal, moral, and cultural fabric of society. These matters continue to carry significance for the workforce, organizations, and their members. Indeed, greater organizational diversity has been linked to improved organizational performance and employee satisfaction, among other desirable outcomes (Adler, 2001; Catalyst, 2004; Fields & Blum, 1997). Consequently, topics such as understanding and overcoming implicit bias in workplace interactions, integrating and empowering socio-culturally diverse groups, and adapting organizational practices and policies to ensure equal and equitable opportunities for all employees continue to be among the most important workforce topics among organizational researchers and practitioners (SIOP, n.d.).

Although diversity and inclusion have been of interest in I-O psychology for nearly a century, recent reviews of this literature claim that progress in this domain has begun to stagnate. For example, Colella, Hebl, and King (2017) note that while research has helped to establish antecedents and negative consequences of employment discrimination, it has been less successful at providing "a clear direction for its resolution" (p. 507). Furthermore, foundational theory
from the organizational sciences on these topics has remained static and has largely ignored the "black box" regarding how, why, and when exclusionary social contexts and individual behaviors emerge or when the advantages of diversity are likely to be realized (e.g., Colella et al., 2017). Another factor contributing to this perceived stagnation concerns the unique methodological challenges faced by diversity and inclusion researchers that make critically examining explanatory accounts and interventions difficult using conventional approaches. Recruiting samples from minority populations (members of which may be difficult to reach, have concealable identities, and/or be wary due to past mistreatments of their cultural groups by the research community), overcoming social desirability effects around sensitive topics, collecting data across multiple organizational levels, and examining how effects unfold over long stretches of time have all been noted as significant impediments to advancing diversity and inclusion scholarship (Roberson, Ryan, & Ragins, 2017).

In light of these accounts, the central purpose of this chapter is to describe the application of computational modeling and the utilities it can afford to research and practice on organizational diversity, inclusion, and equity. Computational modeling techniques are not new in the diversity sciences. Indeed, some of the most well-known and influential computational models in all the social sciences concern matters relevant to diversity researchers (e.g., Schelling's, 1971, model of residential racial segregation; Axelrod's, 1997, model of culture dissemination). However, the use of computational models for clarifying and testing theory, examining "what ifs" to inspire and probe the plausibility of new ideas, and extrapolating the impact of interventions is still largely a fringe practice that has gained little traction in the mainstream organizational diversity sciences literature. In this chapter, we aim to provide an accessible point of departure for researchers and practitioners interested in learning about and pursuing computational modeling methods for topics germane to the diversity sciences.

Our chapter is divided into three sections. We first briefly make a case for the value of computational modeling to diversity and inclusion researchers and practitioners. To accomplish this goal, we highlight what we see as the "big questions" that orient research and practice in diversity science, the major obstacles to addressing these questions, and the potential for computational modeling techniques to help face those challenges. In the second section of our chapter, we articulate a critical precondition for those interested in integrating computational modeling into diversity science—how to think in computational modeling terms. We root this discussion in substantive content of interest to diversity and inclusion investigators by introducing, organizing, and discussing prominent concepts from the diversity sciences into a framework that we believe facilitates computational model "thinking." The final section of our chapter examines how computational modeling has been applied to explore topics relevant to diversity and inclusion in the
organizational sciences and adjacent fields. Based on our review, we identify the most common computational modeling approaches previously used and the types of questions in diversity science for which those approaches are particularly well suited. We then select and review in greater detail three exemplar models by discussing how each model operates, its key assumptions, and some of the unique insights and predictions it advances. The three models chosen for this purpose were selected to illustrate how different types of modeling approaches can be meaningfully applied as well as the breadth and scope of topics to which computational models can be directed to advance diversity and inclusion research/practice. Lastly, we conclude by considering ways in which these exemplar models could be further developed to encourage the continued pursuit of computational modeling by diversity and inclusion scholars.

Why Computational Modeling for Organizational Diversity and Inclusion Research?
The conceptual and empirical foci of diversity scientists encompass a broad array of topics that span individual, group, and organizational/sociocultural levels of analysis. This work also runs the gamut from "basic" and highly generalized (e.g., theoretical underpinnings of stereotypes) to "applied" and more narrowly tailored (e.g., validating interventions targeting conscious and unconscious bias). At the risk of oversimplifying such a robust and vibrant area of research, we propose three questions that broadly characterize the impetus of theoretical and empirical work on diversity and inclusion in the organizational and social sciences:

1. What are the psychological and social processes underlying the formation of stereotypes and stigmas, the development of prejudicial attitudes, the enactment of discriminatory behaviors, and the occurrence of stratification/segregation?
2. What are the consequences of stereotypes/stigmas, prejudice, discrimination, and stratification/segregation for targets, non-targets, and organizations?
3. How can the impact of stereotypes/stigmas, prejudice, discrimination, and stratification/segregation be mitigated or overcome?

Although the research conducted within and across these thrusts exhibits considerable variability, we believe there are at least three "grand challenges" for diversity researchers that crosscut these foci (cf. Roberson, 2012). First, there are multiple ways to conceptualize and operationalize the fundamental concepts underlying the phenomena of interest. For example, some diversity research and theories emphasize the type or visibility of attributes that distinguish individuals (e.g., Harrison, Price, & Bell, 1998; Pelled, 1996), whereas others stress
the significance of how those attributes are distributed across individuals (e.g., Harrison & Klein, 2007; Lau & Murnighan, 1998). Second, the phenomena of interest to diversity and inclusion researchers are believed to be a function of multiple mechanisms that operate simultaneously and at different levels of analysis. For example, explanations for the underrepresentation of women in leadership positions have cited several possible contributors, including gender role stereotyping, work-family pressures, ambivalent sexism, tokenism, and the nature of mentoring/developmental opportunities (e.g., Eagly & Carli, 2007; Glick & Fiske, 2001; King, Hebl, George, & Matusik, 2009). Lastly, the phenomena of interest to diversity scientists are dynamic, emergent, and unfold as patterns over time. Stereotypes and stigmas are constructed, maintained, and changed as individuals' beliefs and experiences evolve (Colella, McKay, Daniels, & Signal, 2012); social and demographic groups can become segregated and stratified through repeated enactment and enforcement of behaviors, practices, and policies (Schelling, 1971); and the efficacy of interventions aimed at reducing discriminatory actions is reflected in trends, trajectories, and indicators of inclusion that change over time (Roberson, 2012).

These "grand challenges" are formidable. However, they also highlight where and how computational modeling and simulation techniques can provide value to diversity science. With respect to the first challenge, computational models permit one to incorporate, integrate, and examine the implications of different conceptualizations and operationalizations of diversity constructs independently or simultaneously. For example, a computational model can be constructed that allows one to simultaneously explore how differences in the level or type of attributes within individuals interact with differences in the distribution of those attributes across individuals. In relation to the second challenge, computational models are uniquely suited for representing and exploring the effects of multiple and simultaneous processes, including those that operate at different levels of analysis or time scales. Thus, one can construct a computational model in which social categorization processes at the individual level facilitate segregation in an organization that is subsequently exacerbated by structural differences in the opportunities and resources afforded to particular groups. Finally, computational modeling and simulation can help to address the third challenge by permitting researchers to examine and extrapolate how, which, and under what conditions key variables and outcomes might change or unfold as the dynamics of a system play out over time across individual, group, and organizational levels. For example, "what if" scenarios can be constructed that allow one to emulate the impact of introducing different diversity and inclusion initiatives into an organizational system (e.g., changes designed to reduce biases in selection practices versus performance management systems), providing potential insights into bottlenecks, time lags, and critical points of leverage for improving the experiences of historically disadvantaged groups. In sum, the potential for computational
modeling and simulation techniques to address some of the most significant challenges in diversity and inclusion research is substantial.

Thinking Computationally in Organizational Diversity and Inclusion Research
We suspect that most diversity and inclusion scholars perceive the primary barrier to engaging in computational modeling to be the "quantitative" or programming skills needed to create a model. These proficiencies are undoubtedly important. However, we believe the far more critical development is the need to first (re)train oneself on how to "think" computationally. Computational thinking requires moving beyond describing and/or accounting for patterns of covariation between constructs (e.g., "box-and-arrow" path models, statistical mediation/moderation among variables) to elaborating the processes believed to generate an observed phenomenon. There are several excellent general discussions on this topic (e.g., Davis, Eisenhardt, & Bingham, 2007; Harrison, Lin, Carroll, & Carley, 2007; Macy & Willer, 2002; Vancouver & Weinhardt, 2012), and other chapters in this volume guide this process in the context of particular modeling approaches. Here, we wish to situate this conversation more concretely for the diversity and inclusion investigator by introducing a set of terminology we have found useful for prompting "computational thinking" and then using it to organize several fundamental topics commonly discussed in the diversity sciences (see Table 3.1 for a summary).

In our view, all computational models require one to consider three elements: (1) core concepts, (2) process mechanisms, and (3) emergent/dynamic outcomes. The core concepts of a computational model typically entail the fundamental properties, states, variables, attributes, etc., that belong to and/or describe the entities and environment under investigation. In diversity and inclusion research, this will most critically involve the conceptualization and meaning of "diversity" in one's topic of inquiry (cf. Harrison & Klein, 2007). As shown in Table 3.1, diversity researchers have considered several perspectives from which "differences between people that may lead them to perceive that another person is similar to, or different from, the self" (Roberson, 2012, p. 1012) could be defined. Though all of these definitions entail something about differences in the type, level, or distribution of attributes in a group of individuals, different conceptualizations draw attention to varying levels of granularity and operationalizations that carry implications for modeling a given phenomenon. For example, distinguishing between surface- and deep-level attributes may be relevant for modeling how individuals organize into social groups based on their shared features. However, incorporating differences in the observability of individuals' characteristics may not be a core concept in a model examining how competition can contribute to wage discrimination or preferential hiring across demographic groups.
TABLE 3.1 Representative Core Concepts, Mechanisms, and Emergent Outcomes in Organizational Diversity and Inclusion Research

Core concepts
- Diversity as factors/categories: Individual-level characteristics capable of producing between-person identity distinctions and resulting in unique outcomes (Mannix & Neale, 2005; Tsui & Gutek, 1999)
- Compositional diversity: Proportion/variability of specific characteristics within a collective (Tsui & Gutek, 1999)
- Functional/sociocultural diversity: Distribution of task-/job-relevant competencies vs. sociocultural/demographic characteristics in a unit (Pelled, 1996; Pelled, Eisenhardt, & Xin, 1999)
- Surface-level/deep-level diversity: Distribution of easily observable (e.g., demographics, physical features) vs. less easily observable (e.g., values, beliefs, attitudes) attributes in a unit (Harrison et al., 1998; Harrison, Price, Gavin, & Florey, 2002)
- Intersectionality and intrapersonal diversity: Within-person identities that result in unique outcomes vis-à-vis additive, multiplicative, and/or holistic mechanisms (Crenshaw, 1989)
- Separation, variety, and disparity: Degree to which members differ in their relative standing on an attribute, possess different categories/kinds of attributes, or possess different proportions of a socially valued asset/resource (Harrison & Klein, 2007)
- Faultlines: Degree to which a collective unit can be organized into homogenous subgroups based on members' alignment across multiple attributes (Lau & Murnighan, 1998)

Process mechanisms
- Social identification: Individuals seek to enhance self-concept by aligning with valued social groups (Tajfel, 1978)
- Social comparison: Individuals evaluate the attributes and (dis)advantages of social groups when formulating perceptions and judgments (Turner, Brown, & Tajfel, 1979)
- Social categorization: Individuals view self and others in terms of group memberships rather than personal identities (Turner, Hogg, Oakes, Reicher, & Wetherell, 1987)
- Similarity-attraction (homophily): Individuals are more attracted to others perceived to have similar features, values, beliefs, and attitudes (Berscheid & Walster, 1969; Byrne, 1971)
- Ingroup favoritism: Individuals ascribe relatively positive characteristics to individuals with whom they share a common group identity (Jackson & Hunsberger, 1999)
- Intergroup contact: Interaction among members from different social groups increases the likelihood of viewing diverse others in terms of personal vs. group identities (Blau, 1977; Pettigrew, 1982)
- Social competition: Competition over scarce resources perpetuates in-group/out-group distinctions (Blalock, 1967; Bonacich, 1972)

Emergent/dynamic outcomes
- Stereotypes/stigma: Generalized (and often negative) attributions or beliefs about the personal attributes associated with a group and its members (Hilton & von Hippel, 1996)
- Prejudice: Adverse attitudes, negative judgments, or hostile evaluations directed toward one or more individuals because of their group membership (Allport, 1954)
- Discrimination: Enactment of harmful or detrimental behaviors toward a group or individuals belonging to that group (Al Ramiah, Hewstone, Dovidio, & Penner, 2010)
- Segregation/stratification: Clustering of individuals in distinguishable strata, areas, locations, categories, etc., based on group identification or affiliation (Allport, 1954; Schelling, 1971)
The process mechanisms of a computational model describe how, why, and under what circumstances the core concepts in a model function, interact, and change over time. The process mechanisms are the "engine" of a computational model and typically describe a series of emotional, cognitive, and/or behavioral actions that translate inputs into outputs and propel a system from its current state (i.e., how things look at time = t) to a new state (i.e., how things look at time = t + 1). The diversity and inclusion literature is again replete with potential process mechanisms, including social identification, social categorization, and social competition (see Table 3.1 for several examples). However, direct accounts and demonstrations of exactly how these mechanisms are carried out or operate in conjunction to influence outcomes relevant to diversity and inclusion phenomena are often lacking (e.g., What are the differences between social comparison and social identification processes? How do environments, individuals, collectives, etc., change in response to different mechanisms? What conditions influence whether a particular mechanism versus another is likely to be employed?). Striving to precisely represent and work through the details of such process mechanisms is a distinguishing feature and major contribution of most computational models. Consequently, this focus is where diversity and inclusion scholars interested in modeling should expect to direct a significant portion of their attention.

Lastly, the emergent/dynamic outcomes of a computational model describe the patterns, properties, trajectories, and configurations of variables that come into being as a result of enacting a model's process mechanisms over time. In some respects, these can be thought of as the "dependent variable" in a computational model—with the important caveat that such outcomes nearly always exhibit recurrent effects that simultaneously serve as inputs into how a process plays out over time. Given this definition, some diversity and inclusion researchers may be surprised by (or even disagree with) our characterization of stereotypes/stigma, prejudice, discrimination, and segregation/stratification as emergent/dynamic outcomes in Table 3.1, as these are more commonly treated as causal/feed-forward factors in many empirical studies and theories. Indeed, these phenomena can serve this role in computational models as well. However, we purposefully elected to treat these topics as emergent/dynamic outcomes to emphasize that they are inherently the result of some previous and/or ongoing set of processes and therefore can exhibit dynamic properties. Stereotypes and stigmas are acquired, transmitted, and reinforced through intrapersonal experiences and interpersonal interactions (Cuddy, Fiske, & Glick, 2008); prejudicial attitudes are formulated and maintained through cognitive appraisals and evaluations (Allport, 1954); engaging in discriminatory behavior is a function of felt emotions and intentions (Talaska, Fiske, & Chaiken, 2008); and groups can become segregated and stratified as a result of perceptions, policies, and practices (Schelling, 1971). The extent to which a diversity and inclusion researcher is explicitly interested in treating these elements as more endogenous/dynamic versus exogenous/fixed in a computational model will
likely depend on a model's focus and purpose. Nevertheless, we believe the recognition of these phenomena as more than fixed and "feed-forward" factors is useful for stimulating future modeling efforts within this domain.
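To see the three elements working together before turning to published models, consider the minimal Schelling (1971)-inspired sketch below: agents carry a binary group attribute (core concept), relocate when too few neighbors share it (process mechanism), and neighborhood segregation emerges over time (emergent/dynamic outcome). The one-dimensional layout, threshold, and swap rule are simplifications chosen for brevity rather than features of Schelling's original model.

```python
import random

random.seed(5)
N = 60            # number of agents arranged on a ring
THRESHOLD = 0.5   # minimum share of similar neighbors an agent tolerates

# Core concept: each agent's group identity, assigned at random.
agents = [random.choice("AB") for _ in range(N)]

def unhappy(i):
    # Process mechanism: categorize immediate neighbors as similar/dissimilar.
    neighbors = [agents[i - 1], agents[(i + 1) % N]]
    similar = sum(neighbor == agents[i] for neighbor in neighbors)
    return similar / len(neighbors) < THRESHOLD

def same_group_adjacency():
    # Emergent outcome: share of adjacent pairs that belong to the same group.
    return sum(agents[i] == agents[(i + 1) % N] for i in range(N)) / N

print("before:", round(same_group_adjacency(), 2))
for _ in range(5000):  # propel the system from state t to state t + 1
    i = random.randrange(N)
    if unhappy(i):
        j = random.randrange(N)
        agents[i], agents[j] = agents[j], agents[i]  # relocate via a swap
print("after: ", round(same_group_adjacency(), 2))
```

Even this toy version illustrates Schelling's core insight: mild individual-level preferences can aggregate into pronounced collective segregation.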
Computational Models Related to Organizational Diversity and Inclusion

Having articulated the value proposition of computational modeling as well as some foundational concepts for thinking computationally in organizational diversity and inclusion research, the remainder of our chapter illustrates several published computational models relevant to diversity and inclusion science. Rather than conduct a systematic review, our primary goal was to collect a sample of published models to serve as a resource for those interested in learning more about how computational modeling has been applied to examine diversity-related topics. Given that computational modeling has not been widely adopted in the mainstream organizational psychology literature, we broadened our search to include published models from general psychology, sociology, economics, organizational behavior, and computational and mathematical outlets. Overall, our review revealed many excellent use cases of computational modeling applied to diversity and inclusion topics. Nevertheless, there are also myriad untapped opportunities to integrate modeling methodologies in this research domain.

In order to draw attention to the modeling methodology rather than the substantive foci of the research per se, we organized the results of our review according to the type of modeling approach utilized in past research (see Harrison et al., 2007, for a list of computational modeling techniques). There is seldom a single "right" or best way to model a phenomenon, and different modeling techniques tend to draw attention to different aspects, perspectives, and inferences of a problem (Page, 2018). Nevertheless, our review revealed that existing computational models of diversity and inclusion have primarily relied on two distinct modeling approaches: neural network/connectionist models and agent-based models.1 In the following sections, we briefly describe each of these modeling approaches and their applicability to questions and topics of interest in organizational diversity and inclusion research. Additionally, we present a more detailed account of a few selected models utilizing these two approaches to highlight the core concepts, process mechanisms, and emergent/dynamic outcomes they considered as well as some of the key takeaways/insights they afford. Table 3.2 summarizes the published computational models identified through our review, categorized by modeling approach.
TABLE 3.2 Representative Examples of Computational Models Relevant to Organizational Diversity and Inclusion

Neural Network
- Individual level; origin of attitudes/stereotypes: Ehret, Monroe, and Read (2015); Freeman and Ambady (2011); Freeman, Penner, Saperstein, Scheutz, and Ambady (2011); Quek and Ortony (2012)

Agent-Based
- Individual level; origin of attitudes/stereotypes, consequences of stereotypes: Grand (2017); Lagos, Canessa, and Chaigneau (2019); Liu, Datta, Rzadca, and Lim (2009); Schröder, Hoey, and Rogers (2016)
- Group level; opinion formation/spread, group genesis/intergroup relations, faultlines/composition: Alvarez-Galvez (2016); Flache and Mäs (2008a, 2008b); Flache and Macy (2011); Gawronski, Nawojczyk, and Kulakowski (2015); Gray, Rand, Ert, Lewis, Hershman, and Norton (2014); Hong and Page (2004); Joseph, Morgan, Martin, and Carley (2014); Mäs, Flache, Takács, and Jehn (2013); Sohn and Geidner (2016)
- Organizational level; organizational segregation & stratification: Abdou and Gilbert (2009); Martell, Lane, and Emrich (1996); Robison-Cox, Martell, and Emrich (2007); Samuelson, Levine, Barth, Wessel, and Grand (2019)

Neural Network/Connectionist Models
Neural network models are composed of nodes linked together through a configuration of weighted connections. The nodes in a neural network computational model often represent core concepts from a theory (e.g., perceptions about a target's cognitive ability), properties of the context being modeled
(e.g., a target's gender, situational cues), or actions that the modeled system could take (e.g., select the male candidate). At any given time, nodes in a neural network exist in varying degrees of "activation," reflecting the extent to which the concept, property, or action represented by that node is present and/or currently operating. The pattern of interconnections among nodes allows their activation to propagate throughout the neural network and affect the activation of other nodes. Connections between nodes may be directional (i.e., unidirectional influence from one node to another) or bidirectional (i.e., parallel/simultaneous influence between nodes), and excitatory (i.e., the activation of a node increases the likelihood of activating other nodes that it targets) or inhibitory (i.e., the activation of a node decreases the likelihood of activating other nodes that it targets). The connections feeding into a node are processed through an activation function that combines the strength of all incoming "signals" from the environment and other nodes to determine that node's activation strength (e.g., the activation strength for the node "target has high math ability" depends on the activation strengths of the nodes "target is male" and "target is female," which feed into it). Random noise/error is also often incorporated into these activation functions to represent imperfections or other unregulated errors in the processing unit. In sum, the propagation of and competition among activation strengths across the nodes of a neural network and the network's pattern of interconnections allow unique and dynamic activation patterns to emerge as signals flow between nodes.

These properties of neural networks can be configured and leveraged in specific ways to model several interesting types of phenomena. For example, recurrent neural network models are commonly used for classification applications in which the goal is to represent how a stimulus should be assigned to one or more categories based on its features. A recurrent neural network model could thus be used to model how individuals make attributions about a target based on that target's demographic characteristics or how different situational and environmental cues influence social categorization. In contrast, feed-forward neural networks are commonly used for predictive or decision modeling in which the goal is to represent how a selection among alternatives is made. A feed-forward neural network could thus be used to model an individual's preference for affiliating with others based on their group membership or how he/she allocates resources to others based on different social factors.

Applications of neural network models in diversity and inclusion research have primarily represented dynamic outcomes related to stereotyping and prejudicial attitudes. For example, Ehret et al. (2015) present a neural network model that describes the cognitive processes underlying the emergence of stereotypes about oneself and others. Similarly, Freeman and colleagues (e.g., Freeman & Ambady, 2011; Freeman & Johnson, 2016; Freeman et al., 2011; Freeman, Stolier, Brooks, & Stillerman, 2018) have conducted an impressive
stream of research that combines neural network modeling with physiological, behavioral, and neuroimaging data to examine the formation and representation of, and influences on, stereotype attributions. Next, we summarize the neural network model constructed by these authors and its application to demonstrate the types of insights this modeling approach can afford.

Exemplar Model
Freeman and Ambady (2011) discuss how neural network models can be used to represent person construal (i.e., how individuals develop evaluative perceptions of a target and its attributes) and, more specifically, the processes associated with stereotyping. Figure 3.1 presents a simplified depiction of one such neural network model from Freeman et al. (2011), illustrating how stereotypes, visual cues, and situational demands may interact to influence individuals' social categorization judgments.
FIGURE 3.1 Neural network model of social categorization based on visual and occupational/status cues.

Source: Figure reproduced from Freeman, J. B., Penner, A. M., Saperstein, A., Scheutz, M., & Ambady, N. (2011). Looking the part: Social status cues shape race perception. PLoS One, 6(9), e25107. https://doi.org/10.1371/journal.pone.0025107. Image copyright 2011 Freeman et al. and distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY) License (https://creativecommons.org/licenses/by/4.0/). No modifications were made to the original figure.
The nodes in this neural network represent core concepts proposed by the authors as relevant to how people interpret and use information from the environment to infer the social category (e.g., race, occupation) of a target. The nodes are further organized into distinctive clusters (i.e., cue level, category level) that reflect their functional role in this process.

The pattern of excitatory and inhibitory connections among nodes in Freeman et al.'s (2011) model reflects how signals/information from the environment and their interpretation are proposed to influence the activation of these concepts to dynamically generate perceptions about a target. For example, visual input (cue level signals, bottom row in Figure 3.1), such as the skin color and type of clothing worn by a target, is proposed to activate nodes representing the perceiver's beliefs about the target's race and occupation, respectively (category level signals, second row from bottom in Figure 3.1). Of note, nodes between functional levels in this neural network model are linked via excitatory connections (e.g., activating the "White" node at the category level increases the likelihood of activating the "High-Status" node at the stereotype level), whereas nodes within functional levels are linked via inhibitory connections (e.g., activating the "White" node at the category level decreases the likelihood of activating the "Black" node at this same level). The excitatory vertical connections reflect the theoretical proposition that particular beliefs/sources of information tend to positively correlate (e.g., a person wearing a suit is likely to work in business), whereas the inhibitory lateral connections reflect that different attributes within a given functional cluster tend to be negatively correlated and/or mutually exclusive (e.g., a person who works in business is not likely to also work as a janitor). Furthermore, these connections possess weights that reflect the strength of activation between any two nodes; the stronger the association between nodes at the category level (e.g., "White") and nodes at the stereotype level (e.g., "High-Status"), the more likely it is that perceptions of a target's race will activate related stereotypical attributions about that target.2
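To make these propagation dynamics concrete, the following Python sketch implements a small interactive activation network of the kind just described. The node set, connection weights, update rate, and noise level are assumptions chosen purely for exposition; they are not Freeman et al.'s (2011) parameterization (cf. note 2).

    import numpy as np

    # Hypothetical two-level network: category nodes ("Black", "White") and
    # stereotype nodes ("LowStatus", "HighStatus"). All weights are illustrative.
    nodes = ["Black", "White", "LowStatus", "HighStatus"]
    ix = {n: i for i, n in enumerate(nodes)}
    W = np.zeros((4, 4))
    W[ix["Black"], ix["White"]] = W[ix["White"], ix["Black"]] = -0.8          # within-level inhibition
    W[ix["LowStatus"], ix["HighStatus"]] = W[ix["HighStatus"], ix["LowStatus"]] = -0.8
    W[ix["Black"], ix["LowStatus"]] = W[ix["LowStatus"], ix["Black"]] = 0.6   # between-level excitation
    W[ix["White"], ix["HighStatus"]] = W[ix["HighStatus"], ix["White"]] = 0.6

    cues = np.array([0.9, 0.1, 0.0, 0.0])  # visual input strongly favoring the "Black" category node
    act = np.zeros(4)                      # initial activation of each node
    rng = np.random.default_rng(1)
    for _ in range(100):                   # let activation propagate until the network settles
        net = W @ act + cues + rng.normal(0.0, 0.01, 4)    # weighted incoming signals plus random noise
        act = np.clip(act + 0.2 * (net - act), 0.0, 1.0)   # gradual update; activation bounded in [0, 1]
    print(dict(zip(nodes, act.round(2))))

With the cue pattern above, the network settles into comparatively high activation for the "Black" and "LowStatus" nodes while their within-level competitors are suppressed, mirroring the qualitative dynamics described next.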
connections between White ↔ High-Status ↔ Low-Status (i.e., the target is not likely to be White, thus making them less likely to be High-Status and more likely to be Low-Status). The pattern of connections that exists between those stereotype attributions and occupational categories also makes it more likely that the "Janitor" node will be activated (via the excitatory connection from Low-Status ↔ Janitor and the sequence of excitatory and inhibitory connections between High-Status ↔ Business Person ↔ Janitor). Consequently, Freeman et al.'s (2011) neural network model both describes and predicts why individuals would be more likely to associate certain social categories with particular demographic groups even in the absence of information about that target's social category (e.g., a target perceived as Black is more readily classified as a Janitor rather than a Business Person)—a commonly observed empirical finding in research examining race-based stereotyping and categorization (e.g., Eberhardt, Goff, Purdie, & Davies, 2004).

The neural network model shown in Figure 3.1 also provides an account for the reverse—and arguably more counterintuitive—inference that social status cues can systematically bias perceptions of a target's race. To demonstrate, Freeman et al. (2011) discuss findings from two empirical studies in which participants were tasked with categorizing the race of different faces whose features were morphed along a continuum from prototypically White to prototypically Black and shown as wearing either high-status (i.e., suit and tie) or low-status (i.e., janitor jumpsuit) attire. Findings from their research revealed that (a) depicting a face with low-status attire tended to increase the likelihood of categorizing that face as Black, (b) depicting a face with high-status attire tended to increase the likelihood of categorizing that face as White, and (c) the influence of attire on racial categorization grew stronger as the facial features of the target became more racially ambiguous (i.e., faces less prototypically Black or White). The authors subsequently recreated this experimental paradigm using the neural network model shown in Figure 3.1 and observed that their simulation results replicated the empirical result patterns almost perfectly (R2 between model and empirical data = .99). Further, they were able to use their model to elaborate how seemingly innocuous social context cues related to attire could influence perceptions of race when facial features were ambiguous. In terms of the neural network model, the presentation of racially ambiguous facial features means that the "White" and "Black" category nodes are activated to nearly equal levels. Thus, neither category emerges as dominant based on those cues alone. Consequently, the sequence of excitatory and inhibitory connections that links social context cues (i.e., attire) to occupation to social status stereotypes and eventually to demographic categories (attire → occupation ↔ status ↔ race) results in social context cues playing a more decisive role in determining the categorization of a face as White or Black. In sum, Freeman et al.'s (2011) neural network model provides a compelling demonstration of
how even a relatively simple computational model can offer a powerful investigatory tool for unpacking a central topic of interest to organizational diversity and inclusion researchers.

Agent-Based Models
While neural network models tend to focus on within-individual processes, agent-based models (ABMs) tend to emphasize how the interactions between individuals in a social system give rise to emergent patterns and structures at a collective level (Wilensky & Rand, 2015). All ABMs are composed of three fundamental elements: agents, environments, and rules. Agents are the focal units/entities of interest in a phenomenon (e.g., individuals, teams, organizations) and possess attributes or states that may be either static (e.g., race, sex, personality) or allowed to change over time (e.g., perceptions, goals, motivation levels). The properties of agents or their distribution in a population of agents often represent core concepts of a theory (e.g., race, status, group membership). Environments in an ABM represent the embedding contexts in which agents exist and interact. Depending on the phenomena of interest, attributes of an environment may change over time and/or as a result of agent behavior (e.g., as individuals use environmental resources). In many cases, environmental properties in an ABM include constraints that influence what behaviors or interactions an agent can perform (e.g., interdependence networks that determine who works with whom, positions/roles that agents may occupy).

Lastly, rules in an ABM describe the procedures that agents and environments follow or enact over time. The rules instantiated in an ABM typically reflect the core process mechanisms of a theory and are represented in the form of logic and/or simple mathematical functions that determine how, when, and to what extent agents act, interact, and change (e.g., if two interacting agents hold differing opinions, then their perceptions toward one another change by X). Through repeated enactment of rules by the agents in an ABM, unique structures/properties at the collective level (e.g., segregation of agents into distinct clusters, group norms) can emerge "bottom-up" through agent-agent and agent-environment interactions. These emergent properties can also exert a "top-down" influence on future behavior/interaction, thus reflecting the reciprocal micro ↔ macro relationship inherent in complex social systems (Page, 2018).

Given that many phenomena of interest in the diversity sciences involve social interaction, it is not surprising that our review revealed ABMs as the most frequently used modeling technique by diversity and inclusion researchers. Extant ABMs have spanned several conceptual levels and foci of interest, including the consequences of stereotypes at the individual level (e.g., Schröder et al., 2016), in-group/out-group formation (e.g., Gray et al., 2014; Flache & Macy, 2011), and organizational segregation/stratification (e.g., Abdou & Gilbert, 2009; Martell et al., 1996).
ABMs have also been used to supplement social network data/methodologies in the study of diversity-related topics. For example, both Alvarez-Galvez (2016) and Sohn and Geidner (2016) used ABMs to elaborate on how social network structure and related contextual factors influence the spread of minority opinions throughout a social system, a phenomenon commonly discussed in the literature on social inclusion, voice, and multiculturalism as the "spiral of silence" (Bowen & Blackmon, 2003; Gawronski et al., 2015; Ringelheim, 2010).

Owing to the larger breadth of existing diversity-related ABMs and the recognition that between-person processes often lie at the core of diversity and inclusion theories and research, we elected to elaborate in greater detail on two ABMs that focus on different phenomena and levels of analysis. The first is a meso-/group-level ABM developed by Flache and Mäs (2008a, 2008b) examining the emergence of team consensus as a function of team diversity composition. The second is a macro-/organization-level ABM developed by Samuelson et al. (2019) that focuses on the emergence of gender disparities within senior organizational leadership positions.

Group-Level Exemplar Model
Lau and Murnighan’s (1998) seminal work on team faultlines posits that teams in which members can align themselves into demographically homogenous sub groups are at greater risk for divisiveness, disagreement, and poor communica tion patterns that hold negative implications for team performance. Lau and Murnighan (1998) suggest that these outcomes emerge because (a) individu als prefer to interact with similar others (i.e., homophily) and (b) the opinions/ beliefs of individuals tend to adapt to one another following interactions (i.e., social influence). Played out over time, these mechanisms can “fracture” a team with strong faultlines into subgroups wherein members tend to primarily inter act with and learn from those who share similar demographics, beliefs, and perspectives. These fractured teams are subsequently less likely to benefit from or leverage the unique resources or capabilities afforded by their diverse mem bers when carrying out job tasks and goals (e.g., Hong & Page, 2004; Lau & Murnighan, 2005). Although Lau and Murnighan (1998) offer a narrative account of how team faultlines can breed polarization in groups, Flache and Mäs (2008a, 2008b) note that several assumptions of their theory were not well specified. Most notably, Flache and Mäs (2008b) contend that for the proposed homophily and social influence mechanisms to create dissensus in teams along demographic faultlines, it must also be true that demographically similar individuals hold similar beliefs and opinions.3 Flache and Mäs (2008b) thus suggest that a correlation between demographic membership and beliefs may be a sufficient but not
necessary condition for belief polarization to emerge in teams. To this end, the authors described two additional processes that can occur in parallel with homophily and social influence and contribute to belief polarization: heterophobia (individuals actively avoid/dislike individuals who are not like them) and rejection (individuals change their opinions/beliefs in ways that make them less similar to those they do not like). Together, these four mechanisms are proposed to operate such that individuals are drawn to and adopt beliefs similar to like others (homophily + social influence) while being simultaneously repelled from and adopting beliefs different from unlike others (heterophobia + rejection). Importantly, these mechanisms would not require different demographic affiliations to be associated with different beliefs/opinions for fracturing to emerge along team faultlines.

To evaluate their propositions, Flache and Mäs (2008b) developed an ABM to examine the extent to which (a) their additional mechanisms were sufficient to generate faultline-induced polarization of beliefs in simulated agent teams and (b) stronger faultlines tend to lead to stronger within-team dissensus as observed in existing empirical data (Lau & Murnighan, 1998, 2005). Table 3.3 summarizes the pseudocode of their model (i.e., a non-technical summary of the steps/computations carried out in a computational model).
TABLE 3.3 Pseudocode for Flache and Mäs (2008b) Computational Model of Team Faultlines

Step  Action
1     Initialize iteration timer t = 0
2     Create team with N members
3     Assign D demographic attributes to each team member such that aggregate team faultline strength = f
4     Randomly assign K work-related opinions to each team member
5     Compute initial interpersonal influence weights (w) between all members
6     Set counter to m = 0
7     Randomly select one team member i and randomly do ONE of the following:
        A. Update all K work-related opinions for member i
        B. Update all interpersonal influence weights w for member i
8     Increment counter to m = m + 1
9     If m < N, return to Step 7
10    Compute aggregate team outcomes for iteration t
11    Increment iteration timer to t = t + 1
12    If t < tstop, return to Step 6
13    End

Note: Flache and Mäs (2008b) do not provide an overview of the pseudocode for their model. The order and description of steps are based on our interpretation of the model description provided in the original publication. t = iteration number; tstop = iteration number at which to stop the simulation.
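To show how Table 3.3 translates into working code, the following is a minimal Python sketch of the simulation loop. The team size, similarity metric, and update rates are assumptions made for exposition, and the sketch hard-codes a maximally strong faultline rather than varying faultline strength f; it illustrates the logic of the model rather than reproducing Flache and Mäs' (2008b) actual implementation (the division by 2 in the opinion update follows note 4).

    import numpy as np

    rng = np.random.default_rng(7)
    N, D, K, T_STOP = 10, 2, 3, 500   # assumed team size, attribute counts, and run length

    # Steps 2-4: strong faultline -- demographics split the team into two
    # homogenous blocks, while opinions are random (D and K uncorrelated).
    demog = np.array([[0] * D] * (N // 2) + [[1] * D] * (N - N // 2))
    opinions = rng.uniform(-1, 1, (N, K))

    def sim(i, j):
        # Illustrative similarity on the D and K attributes, scaled to [-1, 1] (Step 5)
        d = 1 - 2 * np.mean(demog[i] != demog[j])
        k = 1 - np.mean(np.abs(opinions[i] - opinions[j]))
        return (d + k) / 2

    w = np.array([[sim(i, j) if i != j else 0.0 for j in range(N)] for i in range(N)])

    for t in range(T_STOP):                  # Steps 6-13
        for _ in range(N):
            i = rng.integers(N)
            if rng.random() < 0.5:           # Step 7A: update member i's opinions
                pull = w[i] @ (opinions - opinions[i]) / (2 * (N - 1))
                opinions[i] = np.clip(opinions[i] + pull, -1, 1)
            else:                            # Step 7B: update member i's influence ties
                w[i] = [sim(i, j) if i != j else 0.0 for j in range(N)]

    # Aggregate outcome (Step 10): mean opinion gap between the two demographic blocks
    print(np.abs(opinions[: N // 2].mean(0) - opinions[N // 2:].mean(0)))

Because the influence weights can turn negative for dissimilar others, the same update rule that pulls opinions toward similar agents (homophily + social influence) pushes them away from dissimilar agents (heterophobia + rejection).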
In brief, a simulated team of N individuals is constructed in which each agent contains D "fixed" variables (representing categorical demographic characteristics) and K "flexible" variables (representing work-related opinions/beliefs). A method for initializing teams with differing levels of faultline strength is then implemented. Most significantly, this operationalization is such that increasing faultline strength increases the extent to which the distribution of the D categorical variables across agents produces more demographically homogenous clusters within a team, while ensuring the distribution of the K continuous beliefs across agents is random (i.e., D and K are uncorrelated). Lastly, all simulated team members are made to exist in a fully connected, directional, asymmetric social influence network such that the strength of influence between any two agents is proportional to the similarity between those agents' standing on their D and K variables.

The effects of interactions within the agent team of Flache and Mäs' (2008b) model are then simulated by randomly selecting a single agent and having that agent either (a) update all of its K work-relevant beliefs or (b) update all of its influence ties. The extent to which an agent changes its standing on a given belief K was made proportional to the sum of the differences between the selected agent's and each other agent's standing for that attribute, weighted by the strength of the influence ties linking agents.4 An agent's dyadic influence ties were changed in a similar fashion, such that each tie changed in proportion to the similarity between that agent's and a given target agent's current standing on the D and K variables. Taken together, both of these formulations represent agents tending to have their K beliefs "pulled" into alignment with those of agents whom they perceive as similar/influential (i.e., homophily + social influence) and "pushed" out of alignment from those they perceive as dissimilar/uninfluential (i.e., heterophobia + rejection) as a result of their interactions. In each model step, this simulated interaction process and updating of agent beliefs or influence ties are carried out N times, after which several aggregate variables representing the team's polarization across the K opinion/belief variables are recorded. This entire procedure is then repeated several hundred times to provide all agents in the team sufficient opportunity to interact and allow any potential patterns of consensus and/or dissensus to emerge dynamically.

In their initial simulations, Flache and Mäs (2008b) demonstrated that their model specification produced results qualitatively consistent with those posited by Lau and Murnighan (1998). Agent teams with stronger faultlines tended to produce stronger belief polarization along demographic clusters (i.e., the emergence of subgroups composed of agents whose D attributes and K beliefs were diametrically opposed), thus demonstrating that a correlation between demographic membership and beliefs was not necessary for faultlines to fracture a team. Perhaps more importantly, the authors were also able to use the model to probe how this fracturing unfolded. Flache and Mäs (2008b) observed that the polarization
dynamics produced under the assumptions of their model tended to be strongly driven by agents with different demographic profiles that initially held more extreme (as opposed to more moderate) beliefs. These "opposing extremists" had the effect of disproportionately pulling other demographically similar agents toward their extreme views (homophily + social influence) while simultaneously widening the gulf that existed between their views and those of demographically different agents (heterophobia + rejection). In tandem, these forces created a positive feedback loop that eventually led to the emergence of demographically homogenous subgroups with highly shared but opposite views.

Given these observations, the authors posited that it might be possible to counteract these polarization dynamics by attempting to control which agents interacted and when. Flache and Mäs (2008a) pursued this question in a separate simulation study by modifying the structure of agent interactions. In one simulated condition, agents were initially allowed to interact only in small demographically homogenous subgroups for a period of time before being allowed to interact with all other agents on the team. In a second simulated condition, agents were initially allowed to interact only in small demographically heterogeneous subgroups for a period of time before being allowed to interact with all other agents on the team.5 Counter to prevailing predictions of intergroup contact theory, which suggest that differences between groups can be reduced through interaction (e.g., Allport, 1954; Pettigrew, 1998), the results of Flache and Mäs' (2008a) simulation revealed that agent teams initially structured into demographically homogenous subgroups achieved complete belief consensus even under very strong faultlines, whereas agent teams initially organized into demographically heterogeneous subgroups resulted in near-total belief polarization even under very weak faultlines.

The explanation for this counterintuitive result pattern is rooted in Flache and Mäs' (2008b) previous observations regarding how "extremist" agents tend to influence the beliefs of other team members. In the simulated conditions where agents were restricted to first interacting in demographically homogenous subgroups, the initial lack of any highly dissimilar interaction partners meant that agents with more extreme views had no visible "opponents" to push further away from. As a result, the beliefs of those extreme agents could be pulled toward the (usually more moderate) views of their demographically similar counterparts. Over time, this process resulted in a set of localized beliefs emerging within each subgroup that tended to be more moderate in position. When the subgroup structures were disbanded and the full team finally allowed to interact, the agents now possessed sets of beliefs that were likely to be only moderately different and therefore more easily overcome despite any demographic dissimilarities. In contrast, initially organizing agents into more demographically heterogeneous subgroups ensured that agents with more extreme views would have a demographically dissimilar opponent to
begin pushing away from immediately, thus exacerbating any preexisting differences and creating polarized clusters of beliefs localized within each subgroup. Once the subgroup structure was disbanded, agents could find others within the broader team environment who shared their now entrenched and more extreme beliefs, leading to rapid team-wide dissensus. In sum, interacting in smaller but more demographically homogenous subgroups before interacting in the larger and more demographically diverse team setting tended to "temper" the views of agents with more extreme positions who would have otherwise created a divisive wedge within the team. In contrast, interacting in smaller but more demographically heterogeneous subgroups tended to "radicalize" the views of agents toward more extreme positions that virtually ensured a fractured team. Although these patterns of simulated findings require empirical examination before their validity can be established, Flache and Mäs (2008a, 2008b) provide a compelling demonstration of the power of ABMs for probing theory, unpacking complex social processes, and advancing intriguing new predictions relevant to diversity researchers.

Organization-Level Exemplar Model
The lack of female representation among senior organizational leadership has long been cited as an area of concern in the contemporary workforce (Silva, Carter, & Beninger, 2012), and significant theoretical and empirical attention has been directed toward describing and rectifying its believed root causes. A significant challenge for researchers, practitioners, and policymakers in addressing this issue, however, is that the obstacles likely to impede female employees' efforts to equitably rise through the organizational ranks exist and dynamically interact across several levels of analysis (i.e., individual, organizational, sociocultural) in ways that are difficult to study empirically (Eagly & Carli, 2007). To this end, Samuelson et al. (2019) describe an ABM intended to serve as a computational framework and testbed for exploring the simultaneous impact of multiple factors across different system levels that could plausibly impede female representation in organizational leadership positions.

The pseudocode for Samuelson et al.'s (2019) model is summarized in Table 3.4.6 At its core, the ABM models a simple performance → turnover → selection → promotion cycle for employee agents within a single hierarchically arranged organization. A group of agents is first created and assigned several attributes, such as gender, ability, and age; these agents represent an organization's "original" employee population. Once constructed, the agents are then simulated as accumulating job performance/experience by completing tasks assigned to them each month. At the end of a simulated year, some agents may decide to voluntarily turn over from the organization as a function of several factors (e.g., age, time since last promotion).
TABLE 3.4 Pseudocode for Samuelson et al. (2019) Computational Model of Gender Stratification in Organizational Leadership

Step  Action
1     Initialize time clock t = 0
2     Create organizational structure and populate with initial employees
3     Increment time clock t = t + 1
4     Assign developmental opportunities and determine which employees take assigned opportunities based on risk-taking propensity
5     Calculate base performance score, add opportunity values to employees' performance scores, accumulate total performance scores
6     If remainder of t/12 ≠ 0, return to Step 3
7     Assign career delays, deduct performance rounds from delay takers, and assign turnover to specified percentage of delay takers
8     Update employee tenure at level and age, calculate likelihood of turning over due to level tenure, age, and tokenism (for women only)
9     Invoke voluntary turnover based on total turnover likelihood
10    Fill specified percentage of open positions with external hires
11    Promote employees into remaining open positions
12    Fill open positions in lowest level of organization with external hires
13    If the number of original employees is greater than 0, return to Step 3
14    End

Note: t = time period.
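As with the previous exemplar, a deliberately stripped-down Python sketch can convey the flavor of Table 3.4's performance → turnover → selection → promotion cycle. Every parameter value below (organizational shape, hiring rate, turnover probabilities, opportunity boosts) is invented for illustration, and several mechanisms (career delays, aging, the original-employee stopping rule) are omitted entirely; the authors' full model code is available at the repository cited in note 6.

    import random
    random.seed(1)

    LEVELS = [50, 20, 8, 2]              # hypothetical positions per level, bottom to top
    HIRE_FEMALE_PROB = 0.4               # assumed external hiring rate of women
    OPP_BOOST = {"F": 1.0, "M": 1.5}     # assumed developmental opportunity value by gender

    def new_employee():
        sex = "F" if random.random() < HIRE_FEMALE_PROB else "M"
        return {"sex": sex, "perf": 0.0}

    org = [[new_employee() for _ in range(n)] for n in LEVELS]

    for year in range(30):
        for level in org:                            # accumulate yearly performance
            for e in level:
                e["perf"] += random.gauss(1.0, 0.2) + OPP_BOOST[e["sex"]]
        for level in org:                            # tokenism-flavored voluntary turnover
            share_f = sum(e["sex"] == "F" for e in level) / max(len(level), 1)
            for e in list(level):
                p_quit = 0.05 + (0.10 if e["sex"] == "F" and share_f < 0.2 else 0.0)
                if random.random() < p_quit:
                    level.remove(e)
        for upper in range(len(org) - 1, 0, -1):     # promote top performers upward
            while len(org[upper]) < LEVELS[upper] and org[upper - 1]:
                best = max(org[upper - 1], key=lambda e: e["perf"])
                org[upper - 1].remove(best)
                org[upper].append(best)
        while len(org[0]) < LEVELS[0]:               # refill the bottom level with external hires
            org[0].append(new_employee())

    # Female share at each level after 30 simulated years
    print([round(sum(e["sex"] == "F" for e in lvl) / len(lvl), 2) for lvl in org])

Even this toy version tends to exhibit the qualitative pattern discussed below: a gendered opportunity boost compounds across promotion cycles, thinning female representation toward the top of the hierarchy.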
A percentage of these recently vacated positions are then filled by hiring new agents into the organization, after which the top-performing incumbent agents within each level of the organization are promoted to fill any remaining positions at the next highest level of the organizational hierarchy. This entire performance → turnover → selection → promotion cycle is repeated until none of the original agent employees remain, thus ensuring that any processes which could influence the distribution of male and female agents within the organization (e.g., promotion, selection) have had sufficient opportunity to play out.

Within these core process mechanisms, Samuelson et al. (2019) incorporated several elements identified in existing research as disproportionately affecting female employees' chances of reaching senior leadership positions in an organization. For example, all simulated agents had the same fixed probability of experiencing a "career delay"—representing that an individual might need to leave their job for medical or family reasons—which would prevent them from accumulating experience/performance important for promotions. However, and consistent with family and medical leave data from the United States (Klerman, Daley, & Pozniak, 2012), the average delay for female agents was made longer than that of male agents, thus creating the potential for female agents to fall slightly behind
their male counterparts with respect to job-relevant experience. Additionally, female agents could experience negative consequences associated with tokenism (the experience of being a member of a social minority amongst a dominant majority; Kanter, 1977), which increased their likelihood of leaving an organization if male agents became disproportionately overrepresented (King et al., 2009).

Beyond these core model mechanisms, Samuelson et al. (2019) were particularly interested in examining the effects of two factors that have received significant attention as drivers of organizational gender stratification: (1) differences in the hiring rate of males versus females into an organization and (2) the provision of more valuable developmental opportunities to males versus females (i.e., more visible or important jobs, tasks, and opportunities that tend to correlate with upward organizational mobility; Silva et al., 2012). These factors were subsequently incorporated into Samuelson et al.'s (2019) model specification, and a series of simulation studies that manipulated these core mechanisms was carried out to examine their impact on the distribution of male and female agents within an organization's hierarchy.

The results of Samuelson et al.'s (2019) simulations revealed several insights into the possible dynamics of gender stratification within organizational leadership. For example, although differences in the hiring rates of male and female agents exhibited a clear and obvious impact on which employees entered an organization, they also exerted a sizable but less obvious effect on which employees left an organization. Organizations that tended to hire more male than female agents eventually triggered a "tipping point," wherein experiences of tokenism for female agents became so commonplace that female agents began to turn over from the organization in higher numbers. Paired with the greater likelihood of then hiring new male agents to replace the vacancies created by departing female agents, these dynamics created a positive feedback loop resulting in even lower representation of women in leadership positions than would be expected based on the external hiring rates of males and females alone.

A similar pattern was observed with respect to the impact of developmental opportunity differences for male and female agents. In Samuelson et al.'s (2019) model, agents completing a developmental opportunity received a "boost" to their job performance/experience relative to completing their typical job tasks, reflecting the opportunity's higher relative importance and value to the organization. Because agents with the highest accumulated performance were "first in line" for promotions, completing developmental opportunities was thus crucial in determining which agents advanced up the organizational hierarchy. Although both male and female agents in Samuelson et al.'s (2019) simulations were provided the same number of developmental opportunities, the size of the "boost" received for completing them differed such that those completed by males were more valuable than those completed by females. This difference was intended to emulate empirical findings that male employees
often receive more important, visible, and high-profile performance opportunities than female employees (Catalyst, 2004; Silva et al., 2012). The primary effect of these developmental opportunity differences in Samuelson et al.'s (2019) simulations was that male agents tended to accumulate higher levels of performance/experience more rapidly than female agents over time and thus received more promotions. However, an additional effect was that female agents tended to be held back in the lower levels of the organization for comparatively long periods, a phenomenon some diversity researchers have labeled the "sticky floor" effect (e.g., Booth, Francesconi, & Frank, 2003; Yap & Konrad, 2009). Given that a sustained lack of upward mobility and advancement over time further contributed to an agent's likelihood of turning over in the model, the sticky floor generated by developmental opportunity differences also increased the turnover rate for female agents and thus made it even less likely that female agents would reach senior leadership positions. In sum, Samuelson et al.'s (2019) ABM of gender stratification offers an interesting demonstration of how computational modeling can usefully integrate theoretical concepts, empirical observations, and policies across multiple system levels to examine complex organizational dynamics of relevance to diversity and inclusion researchers.

Extensions and Future Directions
Before concluding our discussion of computational modeling applications in diversity and inclusion research, we use the models by Freeman et al. (2011), Flache and Mäs (2008b), and Samuelson et al. (2019) described earlier to highlight a final advantage of computational modeling: the potential to build upon existing specifications to integrate, advance, and explore new knowledge. Although computational modeling requires researchers to precisely formalize the core concepts, process mechanisms, and emergent/dynamic outcomes integral to their theory, all such models rest on specific assumptions, simplifications, operationalizations, and boundary conditions that necessarily shape the conclusions and insights that can be drawn from them. These choices mark essential elements for future work to examine and improve upon or extend through additional theory development, empirical verification, and model refinement (Grand, Braun, Kuljanin, Kozlowski, & Chao, 2016; Kozlowski, Chao, Grand, Braun, & Kuljanin, 2013). In this spirit, we briefly consider ways in which the previously reviewed models might be further developed to explore additional questions of interest to diversity and inclusion scholars.

Neural Network Models of Stereotyping
Given the considerable empirical validation that has been conducted with Freeman et al.’s (2011) neural network model, we believe this computational
architecture could offer a valuable platform for expanding into additional topics of interest to organizational scientists that involve stereotyping, prejudice, and discrimination. One fruitful pursuit could be integrating a form of Freeman et al.'s (2011) model with theories of leader emergence. The leader emergence literature has long suggested that identifying another individual as a leader and "granting" them influence is consistent with social categorization and confirmation processes (e.g., Acton, Foti, Lord, & Gladfelter, 2019; DeRue & Ashford, 2010; Lord, Foti, & De Vader, 1984; Nye & Forsyth, 1991). In other words, an individual's leadership status is proposed to depend on the extent to which the expression and interpretation of their attributes, behaviors, etc., are consistent with the expectations and stereotypes regarding what followers believe a leader should be like.

Although some leadership scholars have discussed the application of connectionist frameworks as a representation of the leader emergence process (e.g., Lord, Brown, Harvey, & Hall, 2001), Freeman et al.'s (2011) neural network model provides a relatively straightforward and highly generalizable means for representing and empirically examining how particular expressions and features of an individual might impact the categorization of that person as a leader. For example, research has demonstrated that demographic categories, such as race and gender, are incorporated into individuals' expectations about leadership and impact their categorization of others as leaders (e.g., Forsyth, Heiney, & Wright, 1997; Livingston, Rosette, & Washington, 2012; Rosette, Leonardelli, & Phillips, 2008; Rosette & Tost, 2010; Scott & Brown, 2006). These propositions have even been integrated into narrative conceptualizations of the influence of race and gender on leader categorization (Hogue & Lord, 2007; Sy et al., 2010). However, these treatments and discussions have not attempted to formalize these mechanisms into a precise or testable theoretical account. Additionally, attempting to develop such a model would allow a deeper investigation into how the intersectionality of demographic categories (e.g., race and gender) impacts the leadership claiming and granting process. Existing work in the diversity sciences (e.g., Rosette, Koval, Ma, & Livingston, 2016) suggests that such intersections are likely to result in leader emergence effects that are complex and difficult to predict across people and conditions—circumstances for which computational modeling and simulation techniques are often beneficial.

Agent-Based Modeling of Team Faultlines
Flache and Mäs’ (2008a, 2008b) ABMs on team faultlines primarily focus on how visible and recognizable demographic attributes can affect subgroup forma tion within teams. However, we believe the basic processes represented by their model and outlined in Table 3.3 could also provide a foundation for advancing
research and theory development around identity management and disclosure within groups, particularly for those with less visible and stigmatized identities (e.g., sexual minorities, religious minorities, and individuals with a mental illness diagnosis; Ellison, Russinova, MacDonald-Wilson, & Lyass, 2003; Ragins, Singh, & Cornwell, 2007; Sabat et al., 2017).

One particularly intriguing development we could envision is the use of Flache and Mäs' (2008a, 2008b) ABMs to explore how demographic faultlines could interact with individuals' choices to disclose their identity and, in turn, the downstream effects of such disclosure on group cohesion. Emerging empirical evidence suggests that individuals often make very different disclosure decisions to their team and organizational members based on their interaction partners (King, Mohr, Peddie, Jones, & Kendra, 2017; Wessel, 2017), but the mechanisms involved in this process are still not well explicated. However, by building upon the mechanisms specified in Flache and Mäs (2008b) and other models of opinion formation and communication privacy management (e.g., Petronio, 2002), the choice of what information about one's stigmatized identity to share and with whom, as well as how that information might shape team climates, functioning, and performance within an organization, could also be modeled. This possible extension also highlights how efforts to develop and refine a computational model on one particular topic (e.g., identity management) may simultaneously push the state of science and practice in other areas as well (e.g., team effectiveness).

Agent-Based Modeling of Organizational Stratification
Although Samuelson et al.'s (2019) ABM focused on the effects of hiring rates and developmental opportunity differences as explanations for the underrepresentation of female leaders, their basic model architecture is built upon a simple yet highly flexible representation of personnel practices and human capital flow in organizations (e.g., performance → turnover → selection → promotion cycles). Consequently, this core process could be easily expanded to incorporate the role that other personnel management techniques (e.g., recruitment, performance evaluation, training) might play in exacerbating or attenuating gender stratification in organizations. Furthermore, expanding the "ecosystem" represented in Samuelson et al.'s (2019) ABM could also afford unique opportunities to explore additional contributors to and possible remedies for demographic stratification. For example, expanding the model to represent multiple organizations and allowing agents to move between organizations rather than only into or out of a single organization would afford the ability to examine how inter- versus intra-organizational mobility might differentially affect the prospects of male and female employees for attaining leadership positions (e.g., Favaro, Karlsson, & Neilson, 2014; Valcour & Tolbert, 2003).
In addition to the potential to broaden Samuelson et al.'s (2019) model to address moves between organizations, the model could also be expanded to address agent actions prior to organizational entry. A commonly cited reason for the lack of diversity in organizational leadership is the "leaky pipeline," or the notion that members from underrepresented groups are often "lost" at various points along the path from schooling to career development (e.g., Ahmad & Boser, 2014; Blickenstaff, 2005; Gasser & Shaffer, 2014; Monforti & Michelson, 2008). This proposition implies that demographic stratification in organizations is not only a matter of what happens in organizations but also has antecedents that stretch as far back as the formation and maintenance of interests and the recruitment practices that help to develop individuals of diverse races, genders, and socioeconomic statuses (Offermann, Thomas, Lanzo, & Smith, 2019). A computational model of organizational stratification capable of representing these additional mechanisms would not only serve as a useful tool for researchers to integrate the broad streams of work relevant to understanding the leaky pipeline but could also be used to advise policymakers and organizational decision makers about where and how to invest resources to improve demographic representation across all levels of the workforce.

Conclusion
Our primary aims for this chapter were to provide organizational researchers and practitioners interested in diversity and inclusion topics with (a) an understanding of the value of computational modeling for pursuing domain-relevant questions, (b) an entry point for how to approach organizational diversity-related research and practice from a more computational perspective, and (c) examples of computational modeling efforts relevant to organizational diversity and inclusion that highlight the potential for advancing unique insights and predictions. The topics and issues pursued by diversity and inclusion researchers are as varied as the people, groups, and cultures to whom they apply. We see many opportunities to leverage the strengths of computational modeling to aid the study and improvement of these organizationally and societally important issues. We hope this chapter will encourage more organizational diversity researchers and practitioners to both consider and utilize computational modeling techniques as a valuable tool in the pursuit of knowledge and policies that promote fair, equitable, and respectful treatment of all employees and individuals.

Notes

1. Computational models often blend and borrow features of different modeling approaches. For example, an agent-based model can be constructed in which the decision rules governing how agents behave and interact are represented using a neural network. Or a systems dynamics model may be constructed that consists of distinctive agents carrying out simultaneous and interacting feedback loops. To facilitate the present discussion, we attempted to categorize the reviewed studies based on the singular model type we felt each most clearly represented.

2. To simplify the current model description, we do not describe all parameters/features of the activation functions used in the model shown in Figure 3.1 (e.g., resting activation value for nodes, decay parameters, scaling constants). See Freeman et al. (2011) and Freeman and Ambady (2011) for a more complete description of this model's parameterization.

3. Without this condition, demographically similar individuals might still be more likely to interact with one another due to homophily. However, if beliefs were randomly distributed across different demographic groups, there would be no guarantee that social influence processes—which are proposed to "pull" the beliefs of interaction partners together—would lead to multiple demographically homogenous subgroups that hold different beliefs. In other words, it would be just as likely for demographically homogenous subgroups that hold dissimilar beliefs to emerge as those that hold similar beliefs.

4. The actual updating function used by Flache and Mäs (2008b) divided this weighted sum by 2 to represent a more gradual change in opinions. The authors also included a modification to ensure that values for these parameters could not go out of bounds and to smooth the gradient change as they approached the extreme bounds of the scale.

5. Flache and Mäs (2008a) analogized these two scenarios to creating "caves" within which different subsets/subgroups of agents within a team would interact before all the caves were joined into one. However, the researchers still manipulated team-level faultline strength in the same fashion as in the initial simulations (cf. Flache & Mäs, 2008b), thus providing a common base of comparison.

6. The full model code and all simulated data reported by Samuelson et al. (2019) are available for download at https://github.com/grandjam/SamuelsonEtAl_GenderStratModel.
References

Abdou, M., & Gilbert, N. (2009). Modelling the emergence and dynamics of social and workplace segregation. Mind & Society, 8(2), 173.
Acton, B. P., Foti, R. J., Lord, R. G., & Gladfelter, J. A. (2019). Putting emergence back in leadership emergence: A dynamic, multilevel, process-oriented framework. The Leadership Quarterly, 30(1), 145–164.
Adler, R. D. (2001). Women in the executive suite correlate to high profits. Harvard Business Review, 79, 30.
Ahmad, F. Z., & Boser, U. (2014). America's leaky pipeline for teachers of color: Getting more teachers of color into the classroom. Center for American Progress. Retrieved from https://cdn.americanprogress.org/wp-content/uploads/2014/05/TeachersOfColorreport.pdf
Allport, G. W. (1954). The nature of prejudice. Reading, MA: Addison-Wesley.
Al Ramiah, A., Hewstone, M., Dovidio, J. F., & Penner, L. A. (2010). The social psychology of discrimination: Theory, measurement, and consequences. In H. Russell, L. Bond, & F. McGinnity (Eds.), Making equality count: Irish and international approaches to measuring discrimination (pp. 84–112). Dublin: Liffey Press.
Alvarez-Galvez, J. (2016). Network models of minority opinion spreading: Using agent-based modeling to study possible scenarios of social contagion. Social Science Computer Review, 34(5), 567–581.
Axelrod, R. (1997). The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41, 203–226.
Berscheid, E., & Walster, E. H. (1969). Interpersonal attraction. Reading, MA: Addison-Wesley.
Blalock, H. M. (1967). Toward a theory of minority-group relations. New York, NY: Wiley.
Blau, P. M. (1977). Inequality and heterogeneity: A primitive theory of social structure. New York, NY: Free Press.
Blickenstaff, J. C. (2005). Women and science careers: Leaky pipeline or gender filter? Gender and Education, 17(4), 369–386.
Bonacich, E. (1972). A theory of ethnic antagonism: The split labor market. American Sociological Review, 37(5), 547–559.
Booth, A., Francesconi, M., & Frank, J. (2003). A sticky floors model of promotion, pay, and gender. European Economic Review, 47, 295–322.
Bowen, F., & Blackmon, K. (2003). Spirals of silence: The dynamic effects of diversity on organizational voice. Journal of Management Studies, 40, 1393–1417.
Byrne, D. (1971). The attraction paradigm. New York, NY: Academic Press.
Catalyst (2004). The bottom line: Connecting corporate performance and gender diversity. New York, NY: Catalyst.
Colella, A. J., Hebl, M., & King, E. (2017). One hundred years of discrimination research in the Journal of Applied Psychology: A sobering synopsis. Journal of Applied Psychology, 102, 500–513.
Colella, A. J., McKay, P. F., Daniels, S. R., & Signal, S. M. (2012). Employment discrimination. In S. W. J. Kozlowski (Ed.), The Oxford handbook of organizational psychology (Vol. 2, pp. 1034–1102). Oxford: Oxford University Press.
Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989, 139–168.
Cuddy, A. J. C., Fiske, S. T., & Glick, P. (2008). Warmth and competence as universal dimensions of social perception: The stereotype content model and the BIAS map. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 40, pp. 61–149). San Diego, CA: Elsevier Academic Press.
Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32, 480–499.
DeRue, D. S., & Ashford, S. J. (2010). Who will lead and who will follow? A social process of leadership identity construction in organizations. Academy of Management Review, 35(4), 627–647.
Eagly, A. H., & Carli, L. L. (2007). Women and the labyrinth of leadership. Harvard Business Review, 85, 62–71.
Eberhardt, J. L., Goff, P. A., Purdie, V. J., & Davies, P. G. (2004). Seeing Black: Race, crime, and visual processing. Journal of Personality and Social Psychology, 87, 876–893.
Ehret, P. J., Monroe, B. M., & Read, S. J. (2015). Modeling the dynamics of evaluation: A multilevel neural network implementation of the iterative reprocessing model. Personality and Social Psychology Review, 19(2), 148–176.
Ellison, M. L., Russinova, Z., MacDonald-Wilson, K. L., & Lyass, A. (2003). Patterns and correlates of workplace disclosure among professionals and managers with psychiatric conditions. Journal of Vocational Rehabilitation, 18(1), 3–13.
Favaro, K., Karlsson, P. O., & Neilson, G. L. (2014). The 2013 chief executive study: Women CEOs of the last 10 years. Strategy&. Retrieved from https://www.equiposytalento.com/download_estudios/The2013ChiefExecutiveStudy.pdf
Fields, D. L., & Blum, T. C. (1997). Employee satisfaction in work groups with different gender composition. Journal of Organizational Behavior, 18, 181–196.
Flache, A., & Macy, M. W. (2011). Small worlds and cultural polarization. The Journal of Mathematical Sociology, 35(1–3), 146–176.
Flache, A., & Mäs, M. (2008a). How to get the timing right. A computational model of the effects of the timing of contacts on team cohesion in demographically diverse teams. Computational and Mathematical Organization Theory, 14(1), 23–51.
Flache, A., & Mäs, M. (2008b). Why do faultlines matter? A computational model of how strong demographic faultlines undermine team cohesion. Simulation Modelling Practice and Theory, 16, 175–191.
Forsyth, D. R., Heiney, M. M., & Wright, S. S. (1997). Biases in appraisals of women leaders. Group Dynamics: Theory, Research, and Practice, 1(1), 98–103.
Freeman, J. B., & Ambady, N. (2011). A dynamic interactive theory of person construal. Psychological Review, 118, 247–279.
Freeman, J. B., & Johnson, K. L. (2016). More than meets the eye: Split-second social perception. Trends in Cognitive Science, 20, 362–374.
Freeman, J. B., Penner, A. M., Saperstein, A., Scheutz, M., & Ambady, N. (2011). Looking the part: Social status cues shape race perception. PLoS One, 6(9), e25107.
Freeman, J. B., Stolier, R. M., Brooks, J. A., & Stillerman, B. S. (2018). The neural representational geometry of social perception. Current Opinion in Psychology, 24, 83–91.
Gasser, C. E., & Shaffer, K. S. (2014). Career development of women in academia: Traversing the leaky pipeline. The Professional Counselor, 4, 332–352.
Gawronski, P., Nawojczyk, M., & Kulakowski, K. (2015). Opinion formation in an open system and the spiral of silence. Acta Physica Polonica A, 127(3-A), A-45–A-50.
Glick, P., & Fiske, S. T. (2001). Ambivalent sexism. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 33, pp. 115–188). San Diego, CA: Academic Press.
Grand, J. A. (2017). Brain drain? An examination of stereotype threat effects during training on knowledge acquisition and organizational effectiveness. Journal of Applied Psychology, 102(2), 115.
Grand, J. A., Braun, M. T., Kuljanin, G., Kozlowski, S. W. J., & Chao, G. T. (2016). The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams [Monograph]. Journal of Applied Psychology, 101, 1353–1385.
Gray, K., Rand, D. G., Ert, E., Lewis, K., Hershman, S., & Norton, M. I. (2014). The emergence of "us and them" in 80 lines of code: Modeling group genesis in homogeneous populations. Psychological Science, 25(4), 982–990.
Harrison, D. A., & Klein, K. J. (2007). What's the difference? Diversity constructs as separation, variety, or disparity in organizations. Academy of Management Review, 32(4), 1199–1228.
Harrison, D. A., Price, K. H., & Bell, M. P. (1998). Beyond relational demography: Time and the effects of surface- and deep-level diversity on work group cohesion. Academy of Management Journal, 41(1), 96–107.
Harrison, D. A., Price, K. H., Gavin, J. H., & Florey, A. T. (2002). Time, teams, and task performance: Changing effects of surface- and deep-level diversity on group functioning. Academy of Management Journal, 45(5), 1029–1045.
Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32, 1229–1245.
Hilton, J. L., & von Hippel, W. (1996). Stereotypes. Annual Review of Psychology, 47, 237–271.
Hogue, M., & Lord, R. G. (2007). A multilevel, complexity theory approach to understanding gender bias in leadership. The Leadership Quarterly, 18(4), 370–390.
Hong, L., & Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101, 16385–16389.
Jackson, L. M., & Hunsberger, B. (1999). An intergroup perspective on religion and prejudice. Journal for the Scientific Study of Religion, 38(4), 509–523.
Joseph, K., Morgan, G. P., Martin, M. K., & Carley, K. M. (2014). On the coevolution of stereotype, culture, and social relationships: An agent-based model. Social Science Computer Review, 32(3), 295–311.
Kanter, R. M. (1977). Some effects of proportions on group life: Skewed sex ratios and responses to token women. American Journal of Sociology, 82, 965–990.
King, E. B., Hebl, M. R., George, J. M., & Matusik, S. F. (2009). Understanding tokenism: Antecedents and consequences of a psychological climate of gender inequity. Journal of Management, 36, 482–510.
King, E. B., Mohr, J. J., Peddie, C. I., Jones, K. P., & Kendra, M. (2017). Predictors of identity management: An exploratory experience-sampling study of lesbian, gay, and bisexual workers. Journal of Management, 43(2), 476–502.
Klerman, J. A., Daley, K., & Pozniak, A. (2012). Family and medical leave in 2012: Technical report. Cambridge, MA: ABT Associates Inc.
Kozlowski, S. W. J., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2013). Advancing multilevel research design: Capturing the dynamics of emergence. Organizational Research Methods, 16(4), 581–615.
Lagos, R., Canessa, E., & Chaigneau, S. E. (2019). Modeling stereotypes and negative self-stereotypes as a function of interactions among groups with power asymmetries. Journal for the Theory of Social Behaviour, 1–19.
Lau, D. C., & Murnighan, J. K. (1998). Demographic diversity and faultlines: The compositional dynamics of organizational groups. Academy of Management Review, 23(2), 325–340.
Lau, D. C., & Murnighan, J. K. (2005). Interactions within groups and subgroups: The effects of demographic faultlines. Academy of Management Journal, 48, 645–659.
Liu, X., Datta, A., Rzadca, K., & Lim, E. P. (2009). Stereotrust: A group based personalized trust model. Proceedings of the 18th ACM Conference on Information and Knowledge Management, Hong Kong, China, 7–16.
Livingston, R. W., Rosette, A. S., & Washington, E. F. (2012). Can an agentic Black woman get ahead? The impact of race and interpersonal dominance on perceptions of female leaders. Psychological Science, 23(4), 354–358.
Lord, R. G., Brown, D. J., Harvey, J. L., & Hall, R. J. (2001). Contextual constraints on prototype generation and their multilevel consequences for leadership perceptions. The Leadership Quarterly, 12, 311–338.
Lord, R. G., Foti, R. J., & De Vader, C. L. (1984). A test of leadership categorization theory: Internal structure, information processing, and leadership perceptions. Organizational Behavior and Human Performance, 34, 343–378.
Macy, M. W., & Willer, R. (2002). From factors to actors: Computational sociology and agent-based modeling. Annual Review of Sociology, 28, 143–166.
Mannix, E., & Neale, M. A. (2005). What differences make a difference? The promise and reality of diverse teams in organizations. Psychological Science in the Public Interest, 6(2), 31–55.
Martell, R. F., Lane, D. M., & Emrich, C. (1996). Male-female differences: A computer simulation. American Psychologist, 51(2), 157–158.
Mäs, M., Flache, A., Takács, K., & Jehn, K. A. (2013). In the short term we divide, in the long term we unite: Demographic crisscrossing and the effects of faultlines on subgroup polarization. Organization Science, 24(3), 716–736.
Monforti, J. L., & Michelson, M. R. (2008). Diagnosing the leaky pipeline: Continuing barriers to the retention of Latinas and Latinos in political science. Political Science & Politics, 41(1), 161–166.
Nye, J. L., & Forsyth, D. R. (1991). The effects of prototype-based biases on leadership appraisals: A test of leadership categorization theory. Small Group Research, 22, 360–379.
Offermann, L. R., Thomas, K. R., Lanzo, L. A., & Smith, L. N. (2019). Achieving leadership and success: A 28-year follow-up of college women leaders. The Leadership Quarterly, 101345.
Page, S. E. (2018). The model thinker. New York, NY: Basic Books.
Pelled, L. H. (1996). Demographic diversity, conflict, and work group outcomes: An intervening process theory. Organization Science, 7(6), 615–631.
Pelled, L. H., Eisenhardt, K. M., & Xin, K. R. (1999). Exploring the black box: An analysis of work group diversity, conflict, and performance. Administrative Science Quarterly, 44, 1–28.
Petronio, S. (2002). Boundaries of privacy: Dialectics of disclosure. Albany, NY: State University of New York Press.
Pettigrew, T. F. (1982). Prejudice. Cambridge, MA: Harvard University Press.
Pettigrew, T. F. (1998). Intergroup contact theory. Annual Review of Psychology, 49, 65–85.
Quek, B. K., & Ortony, A. (2012). Assessing implicit attitudes: What can be learned from simulations? Social Cognition, 30(5), 610–630.
Ragins, B. R., Singh, R., & Cornwell, J. M. (2007). Making the invisible visible: Fear and disclosure of sexual orientation at work. Journal of Applied Psychology, 92(4), 1103–1118.
Ringelheim, J. (2010). Minority rights in a time of multiculturalism: The evolving scope of the framework convention on the protection of national minorities. Human Rights Law Review, 10, 99–128.
Roberson, Q. M. (2012). Managing diversity. In S. W. J. Kozlowski (Ed.), The Oxford handbook of organizational psychology (Vol. 2, pp. 1011–1033). Oxford: Oxford University Press.
Roberson, Q. M., Ryan, A. M., & Ragins, B. R. (2017). The evolution and future of diversity at work. Journal of Applied Psychology, 102, 483–499.
Robison-Cox, J. F., Martell, R. F., & Emrich, C. G. (2007). Simulating gender stratification. Journal of Artificial Societies and Social Simulation, 10(3), 8.
Rosette, A. S., Koval, C. Z., Ma, A., & Livingston, R. (2016). Race matters for women leaders: Intersectional effects on agentic deficiencies and penalties. The Leadership Quarterly, 27(3), 429–445.
Rosette, A. S., Leonardelli, G. J., & Phillips, K. W. (2008). The White standard: Racial bias in leader categorization. Journal of Applied Psychology, 93(4), 758–777.
Rosette, A. S., & Tost, L. P. (2010). Agentic women and communal leadership: How role prescriptions confer advantage to top women leaders. Journal of Applied Psychology, 95(2), 221–235.
Rumelhart, D., & McClelland, J. L. (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1). Cambridge, MA: MIT Press.
Sabat, I. E., Lindsey, A. P., King, E. B., Ahmad, A. S., Membere, A., & Arena, D. F. (2017). How prior knowledge of LGB identities alters the effects of workplace disclosure. Journal of Vocational Behavior, 103(Pt A), 56–70.
Samuelson, H. L., Levine, B. R., Barth, S. E., Wessel, J. L., & Grand, J. A. (2019). Exploring women’s leadership labyrinth: Effects of hiring and developmental opportunities on gender stratification. The Leadership Quarterly, 30(6).
Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1, 143–186.
Schröder, T., Hoey, J., & Rogers, K. B. (2016). Modeling dynamic identities and uncertainty in social interactions: Bayesian affect control theory. American Sociological Review, 81(4), 828–855.
Scott, K. A., & Brown, D. J. (2006). Female first, leader second? Gender bias in the encoding of leadership behavior. Organizational Behavior and Human Decision Processes, 101(2), 230–242.
Silva, C., Carter, N. M., & Beninger, A. (2012). Good intentions, imperfect executions? Women get fewer “hot jobs” needed to advance. New York, NY: Catalyst.
SIOP (n.d.). SIOP top 10 workplace trends. Retrieved June 4, 2019, from www.siop.org/Business-Resources/Top-10-Workplace-Trends
Sohn, D., & Geidner, N. (2016). Collective dynamics of the spiral of silence: The role of ego-network size. International Journal of Public Opinion Research, 28(1), 25–45.
Sy, T., Shore, L. M., Strauss, J., Shore, T. H., Tram, S., Whiteley, P., & Ikeda-Muromachi, K. (2010). Leadership perceptions as a function of race-occupation fit: The case of Asian Americans. Journal of Applied Psychology, 95(5), 902–919.
Tajfel, H. (Ed.). (1978). Differentiation between social groups: Studies in the social psychology of intergroup relations. London: Academic Press.
Talaska, C. A., Fiske, S. T., & Chaiken, S. (2008). Legitimating racial discrimination: Emotions, not beliefs, best predict discrimination in a meta-analysis. Social Justice Research, 21, 263–296.
Tsui, A. S., & Gutek, B. A. (1999). Demographic differences in organizations: Current research and future directions. New York: Lexington Books/Macmillan.
Turner, J. C., Brown, R. J., & Tajfel, H. (1979). Social comparison and group interest in ingroup favouritism. European Journal of Social Psychology, 9(2), 187–204.
Turner, J. C., Hogg, M. A., Oakes, P. J., Reicher, S. D., & Wetherell, M. S. (1987). Rediscovering the social group: A self-categorization theory. Oxford: Basil Blackwell.
Valcour, P. M., & Tolbert, P. (2003). Gender, family and career in the era of boundarylessness: Determinants and effects of intra- and inter-organizational mobility. International Journal of Human Resource Management, 14, 768–787.
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15, 602–623.
Wessel, J. L. (2017). The importance of allies and allied organizations: Sexual orientation disclosure and concealment at work. Journal of Social Issues, 73(2), 240–254.
Wilensky, U., & Rand, W. (2015). An introduction to agent-based modeling: Modeling natural, social, and engineered complex systems with NetLogo. Cambridge, MA: MIT Press.
Yap, M., & Konrad, A. M. (2009). Gender and racial differentials in promotions: Is there a sticky floor, a mid-level bottleneck, or a glass ceiling? Relations Industrielles/Industrial Relations, 64, 593–619.
4
COMPUTATIONAL MODELS OF LEARNING, TRAINING, AND SOCIALIZATION
A Targeted Review and a Look Toward the Future

Jay H. Hardy III

Research on employee learning, training, and development has a rich history in industrial-organizational (I-O) psychology, tracing its roots back a hundred years to the field’s earliest days, when articles on topics ranging from skill development (e.g., Chapman, 1919) to instructional design (e.g., Sturdevant, 1918) were first published. Although the literature on employee socialization (training and development’s sister field) is much younger by comparison, a substantial amount of progress has been made since Schein’s formative work on the topic in the late 1960s (Allen, Eby, Chao, & Bauer, 2017; Schein, 1968). In the decades since, a series of significant paradigm shifts dramatically altered the direction and focus of both literatures (Allen et al., 2017; Bell, Tannenbaum, Ford, Noe, & Kraiger, 2017). For instance, novel theorizing during the cognitive revolution infused early behaviorist-inspired thinking with a greater emphasis on the importance of internal learning processes. The modern learner-centric paradigm has since expanded upon this approach by advocating for a greater appreciation of the importance of learner control (Brown, Howardson, & Fisher, 2016), self-regulation (Keith & Frese, 2005; Kozlowski & Bell, 2006), and active, rather than passive, forms of instruction (Bell & Kozlowski, 2008, 2010). Mirroring these developments, research on socialization has expanded over the past few decades to emphasize the active role newcomers play in the socialization process (Ashford & Black, 1996; Ashforth, Sluss, & Saks, 2007) and to acknowledge that socialization is an inherently dynamic phenomenon (Feldman, 1976). Collectively, these paradigm shifts enabled the literatures on training and socialization to match the substantial pace of economic and organizational change that dominated the late 20th century. However, as the world of work continues to evolve, research in these areas is starting to lag behind. To meet
the demands of a continually changing and increasingly complex world, many companies are looking beyond traditional, structured approaches to learning and socialization as a means of transferring knowledge to their workforce (Allen, Finkelstein, & Poteet, 2011; Noe, Clarke, & Klein, 2014). In fact, a recent study of workplace trends estimated that 75% of learning within modern organizations now occurs through informal outlets (Bear et al., 2008). Even formal instruction has seen shifts, with self-paced and online delivery of instruction now accounting for a little under a third of all formal instructional hours offered by organizations (ATD, 2018). New technologies and trends, such as the use of gamification techniques to motivate and maintain a learner’s interest in learning (Landers, Auer, Helms, Marin, & Armstrong, 2019) and machine learning interventions designed to predict learner needs and recommend content based on past behavior (Lamson & von Redwitz, 2018), add another layer of complexity. Changes like these have yet to be sufficiently integrated into modern theories of adult learning and socialization. If I-O psychologists hope to continue to make relevant contributions to practice in these areas, our models must adapt to reflect the more dynamic, complex, and continuously shifting environments of the modern workplace.

As we look forward to the future, the literatures on training and socialization once again rest on the precipice of another foundational paradigm shift—one that will be defined by theories of learning and change that embrace a more contextualized, dynamic, and multilevel perspective. Unfortunately, updating training and socialization models to meet the demands of this new paradigm has thus far proven easier to propose than to implement. For years, researchers have acknowledged the importance of dynamic effects (Allen et al., 2017; Bell et al., 2017). However, studies in these areas often struggle to accurately represent the more nuanced, process-based expression of these dynamic phenomena (Allen et al., 2017; Hardy, Day, & Steele, 2019). A recent meta-analysis lamented that most studies only reported data on key processes at discrete time points (e.g., pre-, mid-, or post-training), which can obscure a complete picture of self-regulation’s cyclical development and expression (Sitzmann & Ely, 2011). Even longitudinal research designs may mislead us in these pursuits (DeShon, 2015). In light of evidence that even highly educated individuals struggle with understanding simple dynamic phenomena (Cronin, Gonzalez, & Sterman, 2009; Kleinmuntz, 1990; Sterman, 1989), such shortcomings should not be surprising. Nonetheless, progress in these areas is contingent on the field’s collective ability to overcome challenges inherent to dynamic thinking.

To help manage the complexities inherent in dynamic phenomena, researchers should actively seek out and implement tools that can help bridge existing, static understandings of learning and socialization with the more dynamic characteristics of modern organizational contexts. In this chapter, I present computational modeling as one such tool that I believe can provide a
foundation for dynamic theories upon which scholars can begin to transition toward this emerging paradigm. To make the case that computational modeling can help reshape thinking in these areas, I start by reviewing established examples of computational models within the learning and socialization literature. I then shift toward a discussion of how this tool can guide theoretical development in the future by highlighting potentially fruitful applications of computational modeling to learning research, with the ultimate goal of further expanding our understanding of learning and socialization phenomena.

Computational Models of Learning, Training, and Socialization: A Review
In this section, I cover a handful of existing applications of computational modeling to the literatures on learning, training, and socialization. In doing so, I aim to give readers a taste of the kinds of contributions these models can make. Thus, this review does not provide exhaustive coverage of all computational learning, training, and socialization models published to date. The focused nature of my coverage here is not meant to imply the area is saturated with modeling work within the organizational sciences—quite the contrary. As I argue later, there remain many promising—albeit largely unexplored—applications of modeling to learning, training, and socialization research that have the potential to reshape thinking in these areas. Nonetheless, this review offers an initial proof of concept regarding the potential of computational modeling for facilitating a deeper understanding of learning phenomena.

A Review of Computational Models in Organizational Learning and Socialization
Although the primary focus of this chapter is on how computational modeling can help researchers develop a deeper, more dynamic understanding of employee learning, training, and socialization, it is nonetheless useful to start by examining how computational models have been used to explore similar dynamic phenomena conceptualized at the organizational level. One of the most influential contributions of this type was James March’s classic Organization Science paper on the tensions between exploring new possibilities and exploiting old certainties (March, 1991). The central thesis of the March (1991) paper was that both exploration and exploitation contribute to organizational effectiveness. However, organizations tend to favor allocating resources toward exploitation- rather than exploration-oriented policies because returns from exploration often take longer to realize and are inherently less certain than returns from exploitation. Exploiters are often rewarded for this preference with immediate, short-term benefits. Unfortunately, “what is good in the long run is not always good in
the short run” (March, 1991, p. 73). As such, March suspected that the overriding tendency to favor exploitation to the neglect of exploration would inevitably result in self-destructive forms of learning stagnation within organizations.

A Model of Exploration and Exploitation
To support these claims, March developed computational models of adaptation that he used to illustrate how the exploration-exploitation paradox might play out. His first model was a mathematical representation of the mutual learning process through which organizational policies and practices adapt to novel information derived from newcomer knowledge structures, which are themselves actively reshaped and normalized through newcomer socialization. Over time, organizational learning and socialization processes converge around an equilibrium, which allows the knowledge structures of the individual and organization to become more closely aligned. In this model, the concept of exploration is encapsulated within the raw knowledge structures of the newcomer, which are initially ignorant of existing policies and practices internal to the organization. It is useful for organizations to integrate these novel knowledge structures into future organizational norms and policies insofar as they offer creative and unique perspectives that can be leveraged to create a competitive advantage. However, newcomer knowledge structures can also expose organizations to certain forms of risk because they contribute to short-term process inefficiencies that emerge from conflicts with established norms. Organizations actively seek to reduce knowledge structure discrepancies via socialization to manage these inefficiencies.

This use of a computational model to explore the implications of these convergent forces highlights several unique insights regarding the dynamics of exploration-exploitation trade-offs in mutual learning. To start, March’s model demonstrated that, under certain conditions, weaker socialization forces (i.e., lower socialization rates) “allow for greater exploration of possible alternatives and a greater balance in the development of specialized competencies” (March, 1991, p. 71). According to March’s simulation results, it may be advantageous for the organization’s long-term learning to slow down the socialization process: because organizations can only learn from newcomers whose knowledge structures are inconsistent with existing codes, slower socialization allows for a more exploration-focused integration of the newcomer’s unique expertise, even if doing so comes at the cost of short-term efficiency losses. Interestingly, additional simulations using March’s model showed that small amounts of turnover could also have a positive effect because turnover increases the influx of new perspectives into the organization. This finding anticipated future research showing that some turnover can actually benefit organizational performance (Glebbeek & Bax, 2004).
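To make the mechanics concrete, the Python sketch below implements a simplified version of this mutual-learning setup. Individuals and an organizational code hold beliefs about an external reality; individuals are socialized toward the code at rate p1, while the code adopts the majority belief of individuals who currently know more than it does at rate p2. The parameterization and the code-update rule here are simplifications of my own, not March’s (1991) exact specification.

```python
import random

def knowledge(beliefs, reality):
    """Proportion of belief dimensions that match external reality."""
    return sum(b == r for b, r in zip(beliefs, reality)) / len(reality)

def simulate(m=30, n=50, p1=0.1, p2=0.5, periods=200, seed=1):
    rng = random.Random(seed)
    reality = [rng.choice([-1, 1]) for _ in range(m)]
    code = [0] * m  # the organizational code starts out naive
    people = [[rng.choice([-1, 0, 1]) for _ in range(m)] for _ in range(n)]
    for _ in range(periods):
        # Socialization: individuals adopt the code's beliefs at rate p1.
        for person in people:
            for d in range(m):
                if code[d] != 0 and rng.random() < p1:
                    person[d] = code[d]
        # Organizational learning: on each dimension, the code moves toward
        # the majority belief of individuals who currently outperform it
        # (a simplified stand-in for March's code-update rule), at rate p2.
        code_k = knowledge(code, reality)
        superior = [p for p in people if knowledge(p, reality) > code_k]
        for d in range(m):
            votes = sum(p[d] for p in superior)
            if votes != 0 and rng.random() < p2:
                code[d] = 1 if votes > 0 else -1
    return knowledge(code, reality)

# Slower socialization preserves the diversity the code can learn from:
for p1 in (0.1, 0.5, 0.9):
    print(f"p1 = {p1}: equilibrium code knowledge = {simulate(p1=p1):.2f}")
```

Even in this stripped-down form, sweeping the socialization rate tends to reproduce the qualitative pattern March reported: when individuals conform to the code too quickly, the code has little left to learn from them.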
Much of the power of the March (1991) models resides in the ability of computational modeling to overcome human limitations in dynamic thinking. Before this paper, many of the concepts included in the March (1991) models (e.g., rates of socialization, turnover, rates of learning) were well understood. However, the joint operation of these dynamic processes was not. By considering the implications of the collective influence of multiple dynamic variables, the model became the foundation for decades of subsequent research on the topic. Insights derived from this model are now widely recognized as one of the central pillars underlying current thinking on organizational learning. Indeed, in the opening paragraphs of one of this literature’s most highly cited general frameworks, Crossan, Lane, and White (1999) acknowledge that “recognizing and managing the tension between exploration and exploitation are two of the critical challenges of renewal,” which makes the tension a “central requirement in a theory of organizational learning” (p. 522). However, the reach of this contribution goes beyond macro-level learning. As evidence of this, at the time of this writing, the March (1991) paper has been cited over 22,000 times by researchers from a wide range of interests and backgrounds.

A Model of Culture Shifts
In the same year March’s paper was published, Harrison and Carroll (1991) developed a computational model of the socialization process to explore the dynamics determining how organizational cultures shift over time. To do this, the authors ran a series of simulations using a program in BASIC in which they varied key model parameters (i.e., hire rate, turnover rate, growth rate, selectiveness, socialization intensity, and the rate of socialization decay) to align with one of seven different organizational styles and structures (e.g., Japanese-style, American manufacturing, entrepreneurial). Consideration of these complex dynamics through the lens of computational modeling enabled Harrison and Carroll (1991) to challenge widely held beliefs within these domains. For instance, their simulation results showed that, under the right conditions, turbulence in organizational staffing practices (e.g., increased hiring and attrition rates) could expedite rather than impede the rate at which an organizational culture stabilizes. The model also advanced a compelling argument that the tendency for organizations on the decline to develop strong cultures may be attributable to shifts in the demographic makeup of the organization rather than a targeted behavioral response on the part of remaining employees toward uniformity in the face of an external threat.

As was the case for March (1991), computational modeling was useful for Harrison and Carroll (1991) because it enabled them to take established mechanisms familiar to most researchers in the area and explore the implications of what might happen when they are allowed to operate over a specified period of
time jointly. The result is a deeper understanding of a set of complex effects and a more thorough consideration of the implications of how simple forces combine that would be difficult to conceptualize using traditional verbal theories. This point was reiterated elegantly by Harrison and Carroll (1991), who noted the formal modeling approach they used was advantageous because it meant they could write “simple but defensible equations of each component of the [cultural transmission] process but concentrate analysis on the joint outcomes” (p. 555). Although not as widely cited as the March (1991) paper, the Harrison and Carroll (1991) simulations nonetheless provided a useful demonstration of the link between personnel decisions and the development of culture. As a result, this study proved to be an inspiration for groundbreaking research on organizational fit.
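To give a flavor of this style of model, the sketch below simulates a stripped-down cultural-transmission process of the kind Harrison and Carroll describe, written in Python rather than the original BASIC. Each employee carries an enculturation score that is pulled toward the organizational ideal by socialization and eroded by decay, while hiring and turnover churn the population. The update equations and parameter values are illustrative assumptions of mine, not Harrison and Carroll’s (1991) specification.

```python
import random

def simulate_culture(periods=100, n0=50, hire_rate=0.1, turnover_rate=0.08,
                     socialization=0.2, decay=0.02, selectiveness=0.6, seed=7):
    """Track mean enculturation (0-1) of a simulated workforce over time."""
    rng = random.Random(seed)
    # Incumbents start with middling enculturation scores.
    staff = [rng.uniform(0.3, 0.7) for _ in range(n0)]
    history = []
    for _ in range(periods):
        # Turnover: each employee may exit; poorly enculturated members
        # are assumed slightly more likely to leave.
        staff = [c for c in staff if rng.random() > turnover_rate * (1.5 - c)]
        # Hiring: selectiveness sets the expected enculturation of recruits.
        n_hires = int(hire_rate * max(len(staff), 1))
        staff += [min(1.0, max(0.0, rng.gauss(selectiveness, 0.15)))
                  for _ in range(n_hires)]
        # Socialization pulls scores toward the ideal (1.0); decay erodes them.
        staff = [c + socialization * (1.0 - c) - decay * c for c in staff]
        history.append(sum(staff) / len(staff))
    return history

trajectory = simulate_culture()
print(f"Mean enculturation after 100 periods: {trajectory[-1]:.2f}")
```

Sweeping the hire and turnover rates in a sketch like this is one way to probe the counterintuitive finding described above, namely that staffing turbulence can sometimes speed, rather than slow, cultural stabilization.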
A Model of Proactive Socialization

Although both the March (1991) and Harrison and Carroll (1991) papers establish the potential for computational modeling to facilitate the discovery of new effects and interactions within established literatures, the promise of computational modeling goes beyond mere theoretical integration. Developing computational theories can also help researchers resolve apparent discrepancies between established verbal theories and observed empirical data. Vancouver, Tamanini, and Yoder (2010) is an excellent example of this type of contribution.

Specifically, Vancouver et al. (2010) developed a dynamic systems model of proactive newcomer socialization built around a simple knowledge acquisition control system. In this case, the use of computational modeling helped reconcile an apparent discrepancy within the literature between the prevailing theoretical paradigm derived from a self-regulatory view of socialization and observed data. For instance, simulations of the Vancouver et al. (2010) model demonstrated that positive, between-person relationships between newcomer information seeking and role clarity can emerge even when a negative within-person relationship is specified. An important implication of this finding is that positive between-person meta-analytic correlations in this literature (e.g., Bauer, Bodner, Erdogan, Truxillo, & Tucker, 2007) are just as likely to be indicative of methodological decisions regarding when data is collected as they are to provide diagnostic information regarding the underlying theoretical relationship itself. For instance, their model predicts that individuals with a high propensity to seek information will gain knowledge (i.e., socialize) faster and therefore have less need to seek information later than individuals with a medium propensity to seek information. The cross-over in information seeking shown at about Week 7 in their simulations implies that collecting data during a newcomer’s second week working within a new organization should result in a positive correlation between information-seeking propensity and information-seeking behavior.
However, collecting that same data at Week 15 will likely result in a negative correlation, even though the fundamental nature of the underlying relationships has not changed. This is an important point, both methodologically and theoretically, that would be difficult to derive without the aid of dynamic thinking supported by the specification of computational models of the underlying processes.

A second contribution of the Vancouver et al. (2010) model is showing how computational modeling can be useful for testing the internal validity of existing theories. This is particularly beneficial when designing longitudinal data collections to study inherently dynamic effects, which are prolific within the literatures on both learning and socialization. The mere act of specifying a computational model of an existing verbal theory can highlight key processes that need empirical attention, which increases the efficiency of the theoretical vetting process. On this point, Vancouver et al. (2010) wrote that “modeling focuses our attention on what data are needed to fill gaps in our knowledge as well as what designs will be required to obtain reasonable estimates of these parameters” (p. 786). By specifying a computational model before collecting data, researchers can cultivate more robust, a priori understandings of (a) what data is needed to test a theoretical proposition, (b) which methods and designs are best suited when collecting this data, and (c) the potential added value of the research effort.

Finally, the Vancouver et al. (2010) paper teaches us that although dynamic effects can be complicated, the underlying structures that produce these effects often are not. Vancouver et al. (2010) demonstrated this by using their model to explain a wide range of dynamic phenomena, including newcomer information seeking, knowledge acquisition, competence beliefs, and perceptions of role clarity, as well as the influence of less dynamic phenomena such as performance standards, differences in initial knowledge, and differences in information seeking/giving propensities, using only the language of a simple control system mechanism. In this vein, I want to emphasize that the decision to embrace computational modeling does not require that researchers sacrifice theoretical parsimony along the way. Indeed, the ability of computational modeling to explain complex effects through the lens of simple theoretical mechanisms should be considered one of the method’s greatest strengths.
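Indeed, the core of the crossover dynamic described above fits in a few lines of code. In the Python sketch below, each simulated newcomer seeks information in proportion to the gap between a role-clarity goal and current knowledge, scaled by a stable information-seeking propensity; correlating propensity with observed seeking across persons early versus late in socialization flips the sign of the between-person relationship. The functional forms and parameter values are illustrative assumptions of mine, not Vancouver et al.’s (2010) published model.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)
n, weeks, goal = 200, 20, 1.0
propensity = [rng.uniform(0.05, 0.30) for _ in range(n)]  # stable individual difference
knowledge = [0.0] * n
seeking_by_week = []

for week in range(weeks):
    seeking = []
    for i in range(n):
        # Within person: seeking is driven by the remaining knowledge gap,
        # so it falls as knowledge (role clarity) accumulates.
        s = propensity[i] * (goal - knowledge[i])
        knowledge[i] = min(goal, knowledge[i] + s)
        seeking.append(s)
    seeking_by_week.append(seeking)

# Between-person correlation of propensity with observed seeking:
print("Week 2: ", round(pearson(propensity, seeking_by_week[1]), 2))   # positive
print("Week 15:", round(pearson(propensity, seeking_by_week[14]), 2))  # negative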
Computational Models in Training and Development

Much of my review thus far has focused on the contributions of computational modeling to the literatures on organizational learning and socialization. However, there are few subject areas for which computational modeling is a seemingly more natural fit than the literature on training and development. For one, the dynamism of the learning process adds a layer of complexity to training research that computational models are uniquely well-equipped to handle.
Furthermore, modern theories of training design and evaluation regularly acknowledge that learning must be understood from a multilevel perspective to account for complex interactions between learners and their environment (Sitzmann & Weinhardt, 2018, 2019). Here again, computational modeling can be easily leveraged to help ensure logical consistency when conceptualizing complex and occasionally emergent effects across multiple levels of analysis and involving multiple actors. In this section, we explore existing contributions to this area that leverage the strength of modeling to provide unique insights into the study of adult learning.

Cognitive Models of Individual Learning
It may surprise you that many of the learning literature’s fundamental concepts were initially derived from computational models. For instance, learning theorists commonly represent skill acquisition as a series of stages in which different types of learning (e.g., declarative knowledge, knowledge compilation, and procedural knowledge) are emphasized (e.g., Kanfer & Ackerman, 1989). These stage-based models borrowed heavily from a computational model of human cognition called the Adaptive Control of Thought (ACT) model (Anderson, 1982, 1996). The ACT framework (along with its successor, the ACT-Rational or ACT-R model) has contributed much to our understanding of how humans learn and subsequently perform complex skills such as driving and flying (Byrne & Kirlik, 2005) and how humans leverage their existing knowledge structures to solve complex problems such as algebraic equations (Anderson, 2005). These models have also been applied to the development of intelligent tutoring systems that track the learning process, anticipate student needs, and tailor training protocols in real time as needed (Anderson & Gluck, 2001). Although basic insights from the ACT-R model are familiar to most training and development researchers, the importance of its computational underpinnings is often overlooked within this literature.

Even less familiar to many I-O psychologists are the other foundational computational architectures preceding ACT-R that were developed with the explicit purpose of shedding light on psychological mechanisms underlying skill acquisition. Cognitive scientists have been making substantial progress on these topics for over 50 years. A 1958 article published in Psychological Review by Newell, Shaw, and Simon was the first to introduce computer simulation as a theoretical tool useful for understanding human problem solving, with explicit analogies to human learning. A few decades later, a paper by Anzai and Simon (1979) presented the first computational model focused more specifically on human learning. This paper demonstrated the feasibility of computer simulation as a useful theoretical tool for understanding the execution of skills and their acquisition (Ohlsson, 2011). Since then, numerous models of skill acquisition have been
developed, including Ron Sun’s CLARION model (Sun, Merrill, & Peterson, 2001), which focuses on bottom-up rule generation; VanLehn’s Cascade model (VanLehn, 1999), which focuses on analogical learning; and Ohlsson’s Heuristic Self-Improvement (HS) model (Ohlsson, 2011), which focuses on how we learn from errors. Although computational models of skill acquisition often speak directly to applications commonly discussed in the training and development literature, such as error management training (EMT; Keith & Frese, 2008), their potential research contributions remain mostly untapped.

An Individual-Level Exploration and Exploitation Model
The scarcity of computational models within the literature on training and development presents an opportunity for researchers looking to expand the collective understanding of training phenomena (Vancouver & Weinhardt, 2012). Not only can computational modeling help researchers develop a more complete picture of the dynamic learning process, but it can also be used to identify significant gaps in our knowledge of these topics that remain to be explored. For instance, a recent theoretical article I published in collaboration with Eric Day and Winfred Arthur Jr. describes the development of a theory of learner-guided knowledge and skill acquisition called the Dynamic Exploration-Exploitation Learning model (DEEL; Hardy, Day, & Arthur, 2019). The DEEL model, shown in Figure 4.1, is an integrated theoretical framework describing how learner perceptions guide learner resource allocation decisions. Our model was inspired heavily by the work of March (1991) because we were also interested in the implications of the exploration-exploitation paradox for organizational learning. However, unlike March, the purpose of the DEEL model was not to make the broader case for exploration over exploitation or to consider the implications of socialization but instead to explore how individual learners who are engaged in skill acquisition make decisions regarding (a) how much effort to allocate to the learning process and (b) how much effort should be allocated toward exploration- versus exploitation-focused strategies. The information-knowledge gap is the central regulatory process we drew upon to represent the mechanisms underlying these decisions in DEEL. The core functions of the information-knowledge gap are similar to the control theory mechanism featured in the Vancouver model. However, in this case, the primary discrepancy is between the learners’ perception of what they believe they know/can do (i.e., competence beliefs) and their novelty-oriented learning goals (i.e., novelty motives).

Simulations of the DEEL provide insights into several aspects of learner behavior, such as how effort allocation strategies shift throughout practice, how learners respond to adaptive changes in their learning environment, and how perceptual biases conspire to shape (and ultimately undermine) the learning process. For instance, as shown in Figure 4.2, DEEL predicts that an influx of novelty in
FIGURE 4.1 A visual representation of the Dynamic Exploration-Exploitation Learning model (DEEL) from Hardy, Day, & Arthur (2019). (Diagram components include motivation to learn, the information-knowledge gap, novelty motives, competence beliefs, competence bias, novelty recognition bias, changes in the learning environment, the strategy trade-off between exploration and exploitation, learning effort, depth and breadth in understanding and their synergy, knowledge and skill, and task performance.)
the task environment causes learners to increase their overall learning-oriented effort, a disproportionate amount of which will initially be devoted to exploration (and then reallocated toward exploitation as the original sources of novelty are resolved). Furthermore, as shown in Figure 4.3, DEEL predicts that perceptual biases that influence the competence and/or novelty perceptions underlying the information-knowledge gap will contribute to reductions in the information-knowledge gap, which ultimately work to slow knowledge and skill acquisition. Drawing from these and other simulations of our model, we derived nine distinct propositions regarding (a) the relative utility of exploration versus exploitation during employee knowledge and skill acquisition, (b) the role of information-knowledge gaps and motivation to learn in driving learner decisions regarding when and where to allocate learning-oriented effort, and (c) how shifts in competence beliefs and novelty motives can further alter self-regulatory processes. The DEEL model also provides insights into potential pathways through which bias can disrupt effective self-regulated learning, leading to suboptimal resolutions of perceived knowledge gaps.
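Although the full DEEL specification is beyond the scope of this chapter, the logic of its central loop can be sketched compactly. In the Python fragment below, the information-knowledge gap is the difference between perceived novelty (what the learner wants to know) and competence beliefs (what the learner believes they know); the gap drives overall effort, larger gaps tilt effort toward exploration, and biases distort the perceptions feeding the gap. Every equation and parameter here is an illustrative assumption of mine, intended only to convey the flavor of the model, not its published specification.

```python
def deel_step(state, motivation=1.0, competence_bias=0.0, novelty_bias=0.0,
              learning_rate=0.2):
    """One learning trial of a toy information-knowledge-gap control loop."""
    # Biases distort the perceptions that feed the gap.
    competence_belief = state["skill"] + competence_bias
    perceived_novelty = max(0.0, state["novelty_motive"] - novelty_bias)
    gap = max(0.0, perceived_novelty - competence_belief)

    effort = motivation * gap            # overall learning-oriented effort
    explore_share = min(1.0, gap)        # larger gaps tilt effort toward exploration
    exploration = effort * explore_share
    exploitation = effort - exploration

    state["skill"] += learning_rate * effort  # learning gradually closes the gap
    return gap, exploration, exploitation

state = {"skill": 0.0, "novelty_motive": 1.0}
history = []
for trial in range(200):
    if trial == 100:
        state["novelty_motive"] += 0.5   # influx of novelty mid-practice
    history.append(deel_step(state))
```

Running this loop with competence_bias > 0 or novelty_bias > 0 reproduces, in miniature, the premature gap closure described above: the system settles with skill stalled below the level the novelty motive would otherwise demand.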
FIGURE 4.2 Results of computational simulations of DEEL reported by Hardy, Day, & Arthur (2019) demonstrating shifts in exploration-exploitation effort allocation as a result of increases in environmental complexity halfway through the learning process (i.e., during the 100th trial). (The plot tracks overall effort, exploitation, and exploration in learning-effort units across 200 simulated learning trials.)
As was the case for the Harrison and Carroll (1991) model of socialization described earlier, many components of the DEEL model originated in theories that are well understood among training and development researchers, such as control theory (Carver & Scheier, 1982; Vancouver, 2005), theories of skill acquisition (Anderson, 1982; Anderson et al., 2004), and macro-level models of organizational learning such as the March (1991) model discussed earlier. However, the foundation of DEEL’s unique contribution to this literature is that it used the power of computational modeling to integrate seemingly divergent theoretical perspectives in a way that can highlight previously unexplored processes that contribute to (or detract from) learning.

The Effects of Stereotype Threat on Learning in Organizations
A final example of how computational modeling can enhance the long-term relevance of research on training and development phenomena is a recent paper published in the Journal of Applied Psychology by James Grand (2017). In this paper, Grand simulated the practical impact of stereotype threat during skill development and knowledge acquisition on organizational performance. The results of these simulations suggest that even small amounts of stereotype threat during employee development can significantly affect long-term organizational
FIGURE 4.3 Results of computational simulations of DEEL reported by Hardy, Day, & Arthur (2019) demonstrating the effect of competence bias (i.e., overestimates of one’s capabilities relative to one’s performance level) and novelty recognition bias (i.e., underestimates of the novelty and complexity of a task to be learned) on information-knowledge gaps. (Both panels plot the information-knowledge gap across 200 simulated learning trials, comparing biased with no-bias conditions.)
performance. As shown in Figure 4.4, Grand’s model demonstrates that differences in organization-level performance can emerge over time as a function of stereotype threats present in learning settings. Moreover, the model shows that this negative impact can result in practically meaningful reductions in organizational performance that are roughly the equivalent of a 5% increase in voluntary turnover rates. Without
FIGURE 4.4 Average effect size (Cohen’s d) of differences in organizational performance potential between simulated organizations with varying amounts of stereotype threat (ST) effects during learning/training over time, as reported by Grand (2017).
computational modeling, the magnitude of potential problems associated with otherwise small effects would be easy to overlook (Martell, Lane, & Emrich, 1996). Computational models like the one reported by Grand (2017) are useful for substantiating arguments regarding the potential downstream effects of observed phenomena. This is particularly relevant to training and development research because training transfer effects tend to be small. For this reason, computational modeling promises to offer a means by which a case can be made for the practical relevance of subtle changes in employee knowledge and skill that have the potential to define an organization’s success. In the case of the Grand (2017) model, computational modeling demonstrated its capabilities not only for theoretical development but also for linking established empirical findings to practical realities.
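The arithmetic behind such compounding arguments is straightforward to sketch. The Python example below, in the spirit of Martell et al. (1996) and Grand (2017), propagates a small per-candidate score decrement for one group (analogous to a small bias, or to a small stereotype-threat-induced training deficit) through promotion decisions in an eight-level hierarchy. The structure and all numbers here are illustrative assumptions, not either paper’s actual model.

```python
import random

def simulate_hierarchy(levels=8, per_level=200, d=0.10, cycles=2000, seed=3):
    """Fill vacancies by promoting the top scorer from the level below.
    Group B candidates carry a small score decrement d (in SD units),
    standing in for a small bias or training deficit."""
    rng = random.Random(seed)
    # Every level starts with a 50/50 mix of groups A and B.
    org = [[("A" if rng.random() < 0.5 else "B") for _ in range(per_level)]
           for _ in range(levels)]
    for _ in range(cycles):
        level = rng.randrange(1, levels)   # a vacancy opens at this level
        pool = org[level - 1]
        best = max(range(per_level),
                   key=lambda i: rng.gauss(0, 1) - (d if pool[i] == "B" else 0.0))
        org[level][rng.randrange(per_level)] = pool[best]
    top = org[-1]
    return top.count("B") / len(top)

# A 0.1 SD decrement, compounded across levels and promotion cycles,
# pushes group B's share at the top below its 50% starting point.
print(f"Group B share at the top level: {simulate_hierarchy():.1%}")
```

The same scaffolding could be inverted to express the benefit side of the ledger, translating a small per-trainee knowledge gain into cumulative organization-level differences, which is the kind of utility argument discussed below.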
Summary

This brief review of computational models within the extant literature on learning, training, and socialization highlights the versatility and power of computational modeling to guide our understanding of dynamic phenomena in ways that are
difficult to achieve using verbal approaches to theory representation alone. Within the literatures on organizational learning and socialization, modeling has been used to integrate multiple streams of research by exploring the implications of key constructs within simulations of dynamic contexts typical of real-world organizations. Modeling has also been used to reconcile perplexing discrepancies between guiding theoretical frameworks and observed data and has helped focus future efforts on fruitful questions needing answers. Within the training and development literature, insights derived from computational models borrowed from the cognitive sciences continue to provide a foundation for understanding the structure and expression of the learning process. Models have also been used to establish the practical relevance of observed phenomena within learning contexts for broader organizational functioning and to integrate divergent theoretical perspectives within a dynamic theoretical framework. Although these contributions are notable, a substantial amount of work remains. In the next section, I shift our focus toward the future by highlighting what I believe are potentially fruitful applications of computational modeling going forward.

A Look Toward the Future
Learning, training, and socialization researchers share several common goals, beliefs, and constraints. We all want to understand how employees seek out, acquire, and apply knowledge and skills within the workplace. All generally accept that job-related knowledge and skills are acquired through formal and informal means (Chao, 2007; Noe et al., 2014). All recognize that context and perceptions of organizational support can influence decisions about whether and/or when employees seek learning opportunities (Kammeyer-Mueller, Wanberg, Rubenstein, & Song, 2013; Sitzmann & Weinhardt, 2019). Moreover, because of the inherent nature of the phenomena, many researchers within these domains are starting to acknowledge that we all need to develop a more robust understanding of within-person dynamics (Allen et al., 2017; Feldman, 1976; Hardy, Day, & Steele, 2019; Sitzmann & Ely, 2011). Furthermore, many researchers in these areas have reported facing challenges in evaluating theories arising from such lines of inquiry, such as in deciding when, where, and how much data to collect to enable more meaningful tests of existing theories (Sitzmann & Ely, 2011; Vancouver et al., 2010).

Given these similarities, it is clear that research on employee development can benefit from more precise specifications of how adult learning unfolds over time; how individual-, group-, and organizational-level factors can alter these trajectories; and the effects of these trajectories on outcomes and auxiliary variables. I believe that computational models can help researchers within the learning and socialization literatures progress toward common goals while overcoming common constraints. Given the inherently difficult questions ahead
of us, researchers willing to implement novel approaches will be rewarded. Indeed, these fields remain ripe with unexplored opportunities for those willing to embrace the potential of computational modeling going forward. In this section, I present four broad topic areas that I hope can serve as inspiration for researchers interested in finding new ways to expand our understanding of learning and socialization processes using computational modeling.

Models of the Adult Learning Process
Translating existing verbal theoretical accounts of learning and socialization at work into more formal computational frameworks is an excellent place to start for those interested in applying computational modeling to develop better theories of learning. At its core, science works best when it builds and improves on the current state of knowledge. Rendering informal, verbal theories into formal, dynamic computational representations can contribute to knowledge in this area in several meaningful ways.

First, the mere process of specifying formal, mathematical representations of verbal descriptions of learning processes can help researchers identify and resolve logical and theoretical inconsistencies that are likely embedded within established verbal theories (Weinhardt & Vancouver, 2012). This is particularly useful when dealing with dynamic phenomena such as learning, which are notoriously difficult to understand using human rationality alone (Cronin et al., 2009). In many ways, this translation process can be viewed as a long-overdue audit of current thinking in this area, which can be challenging to reconcile through traditional methods. Vancouver, Wang, and Li (2018) provide an excellent example of such translational work and the value it can provide.

Second, translating existing theories of learning into computational frameworks will empower researchers to begin integrating the role of time more explicitly into models of the learning process. At the very least, these efforts can guide researchers tasked with making difficult decisions regarding when key constructs should be measured while evaluating a theory or intervention. Moreover, these efforts have the potential to help practitioners determine the optimal timing for targeted learning interventions (Sitzmann & Ely, 2011). They are also likely to highlight the need for empirical work to establish rates, lags, and delays. For instance, we know very little about how long learning and socialization interventions can take to show positive benefits and whether such benefits can be expected immediately. Similarly, despite decades of research, we still know very little about the speed of knowledge change and its long-term effects. The lack of empirical attention to topics such as these can present challenges to subsequent theoretical development. However, as computational models are built and tested, critical gaps in knowledge become more apparent and can be targeted for subsequent research.
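As a concrete illustration of what such translation involves, consider a verbal proposition common to this literature: newcomers seek information in proportion to their remaining uncertainty, and seeking builds knowledge. A minimal formal rendering (with symbols chosen purely for illustration) might be

s(t) = β[k* − k(t)],  k(t + 1) = k(t) + αs(t),

where k(t) is accumulated knowledge at time t, k* is the knowledge the role requires, s(t) is information seeking, and α and β are rates that must ultimately be estimated empirically. Writing even these two equations forces commitments the verbal statement leaves hidden (linearity, constant rates, no lags), and it immediately surfaces the empirical quantities (the rates α and β) that must be pinned down before the theory can be meaningfully tested.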
Finally, developing computational translations of existing theories will facilitate model expansion in ways that enable our theories to represent more directly a comprehensive range of iterative and multilevel interactions between the environment and the learner that continuously shape and reshape the learning process. Much of the research in this area has shied away from pursuing such lofty goals due to the inherent complexities of simultaneously accounting for individual and organizational influences, which are inherently dynamic and constantly in flux (Bliese, Schepker, Essman, & Ployhart, 2019). New models will open up new areas of inquiry that will improve evidence-based recommendations for how organizations should seek to manage learning processes within their workforce. For instance, Vancouver et al. (2010) expanded on their original simulations by considering how newcomer behavior might change when the influence of a developmentally minded supervisor was added to the equation. In this case, computational modeling enabled a preliminary consideration of multilevel nesting factors that moved beyond a rigid focus on the individual.

There are many theories and topics for researchers to choose from that could benefit greatly from applications of computational modeling. One example is Bell and Kozlowski’s active learning theory (Bell & Kozlowski, 2008, 2010), which offers a wealth of cognitive, motivational, and emotional processes to model and raises important questions regarding the interplay among these mechanisms in popular interventions. Furthermore, there is a rich literature on learning techniques, some of which (e.g., feedback, distributed testing) have been shown to benefit learning more than others (e.g., Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013; Soderstrom & Bjork, 2015). An exciting application of computational modeling would be developing models that, for example, could explain the mechanisms by which distributed practice leads to better learning than massed practice. Furthermore, elucidating theoretical principles underlying phenomena such as skill decay (Arthur Jr, Day, Bennett Jr, & Portrey, 2013) represents another potentially fruitful avenue for modeling research.

Although I advocate for theory translation as a first step, it should be noted that this is not the only path forward for future computational modelers. While attempting to translate existing models, researchers may find that existing verbal accounts are insufficient for adequately representing the learning process, the motivations to engage in learning behaviors, or what is learned during formal and informal training and development. In these cases, researchers may decide they need to take a step back and more deeply consider how models of the adult learning process should be designed and developed and how computational modeling can guide these new areas of theory development. This is one area where a return to the foundational work done by cognitive scientists over the last half-century can be particularly informative. Regardless of the strategy selected (be it model translation or new model development), there is a lot to be gained from these efforts. The future of learning research will depend on the
quality of the theories guiding our research and practice. The more thoroughly we can describe the processes that underlie learning and socialization, the better equipped we will become for designing effective interventions that match the complexity of modern organizational life (Bell & Kozlowski, 2002). Along these lines, models that contextualize existing cognitive theories within modern organizations can help provide a more complete picture of how employee learning and development are influenced by and contribute to broader individual, team, and organizational functioning within a more comprehensive systems-level framework.

Intervention-Focused Models
The development of computational models to represent the functioning, strengths, and potential pitfalls of learning interventions can be informative for researchers interested in designing and implementing better evidence-based employee learning interventions. Much of the value of interventions based on the modern, learner-centric paradigms that underlie research on training and development is centered on the proposition that learner control benefits the learning process (Keith & Wolff, 2015). However, surprisingly little theoretical attention has been devoted to answering fundamental questions related to this proposition, such as how or why increased learner involvement can be expected to benefit learning outcomes. Instead, studies in this area often forgo the specification and testing of underlying theoretical assumptions in favor of conducting experiments where learner cognitive, motivational, and emotional self-regulatory pathways are directly targeted through interventions that leverage manipulations of core training design elements (e.g., exploration, framing, and emotion control; Bell & Kozlowski, 2008). Although this approach is not without merit, focusing exclusively on training design without answering more fundamental theoretical questions can make it difficult for researchers and practitioners down the road to diagnose precisely what components of the intervention were contributing to changes in learning outcomes. Even in a tightly controlled experiment, it can be difficult to say for sure whether any particular mechanism was driving changes in learner cognition or behavior. The result is that we have numerous interventions that have been shown to be effective (e.g., error management training, guided exploration, mastery training; Debowski, Wood, & Bandura, 2001; Keith & Frese, 2005; Kozlowski et al., 2001; Wood, Kakebeeke, Debowski, & Frese, 2000) but only a limited understanding of what is driving effects or what the possible long-term consequences of the interventions might be.

To be clear, I am not trying to argue that developing learning interventions designed to target learner self-regulatory processes directly is unwise. The current evidence seems to support the notion that there is much to be gained from shaping core training design elements that can support and enhance learner self-regulation
(Bell & Kozlowski, 2008). However, effective implementation of these techniques requires that we also develop a better understanding of why interventions work and what factors are necessary to realize their benefits. This point was stated elegantly by Repenning and Sterman (2001), who argued that the reason why few organizations can reap the benefits of learning interventions is that few individuals involved in these efforts are willing to take the time to consider how the intervention “interacts with the physical, economic, social, and psychological structures in which implementation takes place” (p. 66). They go on to note that “it’s not just a tool problem . . . it’s a systemic problem, one that is created by the interaction of tools, equipment, workers, and managers” (p. 66). In this regard, computational modeling can help advance a science built around evidence-based learning by enabling (and, in some ways, forcing) researchers to be more explicit regarding the precise mechanisms underlying intervention success.

The first step in building an intervention-focused computational model is to specify how a particular intervention can be expected to benefit learning outcomes within existing models of the learning process. This step provides a unique opportunity for scholars to reach out to industry experts for guidance regarding the specification of key model parameters that are well-grounded in the realities of the modern workplace (for an excellent example of how this can be done, I encourage readers to check out the work of systems modeler John Sterman, who has been successfully building models of organizational learning with industry input for years; e.g., Repenning & Sterman, 2001; Sterman, 2001, 2002). From there, simulations can be conducted to develop propositions regarding how intervention effects change as a function of time, organizational support, and learner motivation. This information can then be leveraged to develop and test recommendations for enhancing learning outcomes while mitigating external contingencies. Moreover, these models can be expanded to pre- and post-training phases, with the goal of developing a broader, systems-based perspective of the learning process by incorporating the factors that motivate learning and the use of what is learned. Eventually, there may be opportunities to leverage model insights to build dynamic, AI-backed interventions that track recurring learning problems and provide real-time employee support in a manner similar to the intelligent tutors built upon the foundation of the ACT-R framework. This technology may be years down the road. However, the opportunities for intelligent applications of computational modeling to understand and improve training and development interventions are quite compelling.

Although I framed this discussion around training interventions, the same rationale applies to organizational approaches to newcomer socialization. Applications of computational modeling that more clearly delineate the facilitators and implications of various socialization strategies have already demonstrated their potential to advance new lines of thinking around the causes and consequences of socialization in organizational contexts (Harrison & Carroll, 1991; March, 1991)
Although I framed this discussion around training interventions, the same rationale applies to organizational approaches to newcomer socialization. Applications of computational modeling that more clearly delineate the facilitators and implications of various socialization strategies have already demonstrated their potential to advance new lines of thinking around the causes and consequences of socialization in organizational contexts (Harrison & Carroll, 1991; March, 1991) as well as the difficulties in designing studies that empirically and fairly challenge theoretical ideas (Vancouver et al., 2010). Although policy-focused research has fallen out of favor within the socialization literature in recent years (Allen et al., 2017), there may be an opportunity to reignite future interest in this topic through the use of more precise and more comprehensive specifications of organizational and individual-level mechanisms within computational models of newcomer learning, socialization, and development.

Models of Relevance and Impact
Although the field has accumulated a substantial amount of evidence over the years around the proposition that a commitment to learning initiatives benefits organizational functioning (e.g., Arthur Jr, Bennett Jr, Edens, & Bell, 2003), a recent survey of 200 senior executives revealed that many organizational leaders still view investments in learning and development as a "token" activity that is not valued in the same way as other functions (Asher, 2017). One reason for this nagging skepticism regarding the financial value of learning initiatives to organizational functioning is that the positive impact of learning outcomes is often distal and hard to conceptualize. Despite significant progress over the past few decades within the domain of training evaluation, much work remains to be done (Bell & Moore, 2018). Along these lines, organizational stakeholders are increasingly demanding evidence that learning initiatives positively impact financial metrics. However, the literature on training and development has yet to develop a useful technique for translating gains in knowledge associated with effective learning initiatives into meaningful estimates of utility. This is one area where computational modeling can make an immediate and widespread impact, not just on academic research within this domain but also in practice. For some time, computational modeling has demonstrated its value for communicating how empirical estimates can result in meaningful organizational outcomes. For instance, Martell et al. (1996) used a simple computational model to make the case that small effects of gender bias, compounded over many years, can explain significant gender discrepancies commonly observed among organizational leadership. This same approach can be used to conceptualize and demonstrate the potential benefits of learning initiatives in the language of financial impact and organizational success most relevant to key decision makers. The Grand (2017) model discussed earlier made the case that small effects derived from sources of stereotype threat on learning outcomes can have dramatic, compounded consequences when conceptualized up to the level of organizational performance. In this regard, Grand's model represents a prime example of how computational modeling can be used to make a case for the relevance of learning and socialization research. In essence, these economies-of-scale-type arguments should resonate with MBAs and other decision makers.
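Martell et al.'s core logic (small per-decision bias, large compounded consequences) is easy to reproduce. The sketch below follows the spirit of their simulation; the six-level pyramid, the promotion rule, and the size of the bias are hypothetical illustrations, not the original parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
LEVELS = [500, 350, 200, 100, 40, 10]  # hypothetical organizational pyramid
BIAS = 0.10                            # hypothetical small score bonus (SD units)

def top_level_share():
    """One organization: promote top scorers level by level; group 1 receives
    a small additive bias on an otherwise random performance score."""
    members = rng.integers(0, 2, LEVELS[0])  # 0/1 group labels, roughly 50/50
    for n_slots in LEVELS[1:]:
        scores = rng.normal(0, 1, members.size) + BIAS * members
        members = members[np.argsort(scores)[-n_slots:]]
    return members.mean()

shares = [top_level_share() for _ in range(2000)]
print("Average share of the favored group at the top:", np.mean(shares))
# With these illustrative values, the favored group ends up holding
# noticeably more than half of the top-level positions, even though the
# per-decision bias is small; the gap widens with more levels.
```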
I am optimistic that if the field chooses to embrace computational modeling for this purpose, we may someday develop a method of utility analysis for estimating the potential costs and benefits of training and development initiatives that is similar in scope and contribution to techniques developed for communicating the relevance of predictive validities in employee recruitment and selection (e.g., Gleser, Cronbach, & Rajaratnam, 1965).
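As a rough indication of the kind of estimate such a method might produce, the sketch below borrows the structure of the classic Brogden-Cronbach-Gleser utility model from selection research and applies it to a training context; the adaptation and every number in it are illustrative assumptions rather than an established technique from this literature.

```python
def training_utility(n_trained, years, effect_size_d, sd_y, cost_per_trainee):
    """Dollar-valued utility of a training program, patterned after the
    Brogden-Cronbach-Gleser utility model used in selection research:
    utility = N * T * d * SDy - N * C. All inputs are hypothetical."""
    benefit = n_trained * years * effect_size_d * sd_y
    cost = n_trained * cost_per_trainee
    return benefit - cost

# Hypothetical example: 100 trainees, a 0.3 SD performance gain lasting
# two years, $10,000 as the dollar value of one SD of job performance,
# and $1,500 in per-person training costs.
print(training_utility(100, 2, 0.3, 10_000, 1_500))  # -> 450000
```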
Models of Informal Learning

As noted in the opening, the future of learning in organizations is shifting toward greater recognition of the central role of the learner in the learning process. As such, recent years have seen a shift toward more nuanced, learner-centric views of employee learning and socialization that acknowledge significant contributions to knowledge and skill acquisition on the job beyond that provided via formal instruction (Brown et al., 2016). However, the challenges that underlie this new perspective are complex and require an understanding of the learning process that is inherently contextual. Moreover, this perspective places a more central role on learner motivation in achieving learning outcomes than is necessary for formal learning. The dynamism of this new approach to learning research makes it a natural fit for computational modeling. Leveraging the power of mathematical language can facilitate the specification of complex interactions between learners and their environment, not just in terms of changes to knowledge and skill but also to changes in direction and level of effort applied to make those changes (Neal, Ballard, & Vancouver, 2017). Furthermore, modeling can help define what types of behavior should be considered under the broad umbrella of informal learning. Informal learning looks to be the next frontier for learning and socialization research. Fortunately, computational modeling is well-equipped to handle the complexity inherent in research on forms of learning that are less structured and more dynamic (e.g., Ballard, Vancouver, & Neal, 2018). The goal in developing these initial computational models of the learning process should be to generate a roadmap for identifying the key questions the field needs to tackle. This will inform research on this topic by enabling studies that quantify and begin to unfold the most central theoretical phenomena underlying this domain.

Conclusion
The future for applications of computational modeling to research on employee learning, training, and socialization is bright. Over the past few decades, multilevel methodologies revolutionized research in the organizational sciences by opening the door for more nuanced and internally valid analyses of dynamic phenomena (Humphrey & LeBreton, 2019). I expect computational modeling
to have a similar paradigm-shifting influence on future theory development. As areas that frequently deal with complex, dynamic, and multilevel phenomena, the literatures on learning, training, and socialization are fertile with opportunities for intrepid researchers looking to make an impact. I hope this chapter can be a source of inspiration and encouragement for future modelers within these domains and beyond.

References

Allen, T. D., Eby, L. T., Chao, G. T., & Bauer, T. N. (2017). Taking stock of two relational aspects of organizational life: Tracing the history and shaping the future of socialization and mentoring research. Journal of Applied Psychology, 102(3), 324.
Allen, T. D., Finkelstein, L. M., & Poteet, M. L. (2011). Designing workplace mentoring programs: An evidence-based approach (Vol. 30). John Wiley & Sons.
Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89(4), 369–406.
Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American Psychologist, 51(4), 355–365.
Anderson, J. R. (2005). Human symbol manipulation within an integrated cognitive architecture. Cognitive Science, 29(3), 313–341.
Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111(4), 1036–1060.
Anderson, J. R., & Gluck, K. (2001). What role do cognitive architectures play in intelligent tutoring systems? In Cognition & instruction: Twenty-five years of progress (pp. 227–262). Psychology Press.
Anzai, Y., & Simon, H. A. (1979). The theory of learning by doing. Psychological Review, 86(2), 124.
Arthur Jr, W., Bennett Jr, W., Edens, P. S., & Bell, S. T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88(2), 234–245.
Arthur Jr, W., Day, E. A., Bennett Jr, W., & Portrey, A. M. (2013). Individual and team skill decay: The science and implications for practice. Routledge.
Asher, P. (2017). The challenges of global L&D survey. Retrieved from www.peoplematters.in/article/strategic-hr/the-challenges-of-global-ld-survey-15417
Ashford, S. J., & Black, J. S. (1996). Proactivity during organizational entry: The role of desire for control. Journal of Applied Psychology, 81(2), 199–214.
Ashforth, B. E., Sluss, D. M., & Saks, A. M. (2007). Socialization tactics, proactive behavior, and newcomer learning: Integrating socialization models. Journal of Vocational Behavior, 70(3), 447–462.
ATD. (2018). Association for Talent Development 2018 State of the industry report. ATD Research.
Ballard, T., Vancouver, J. B., & Neal, A. (2018). On the pursuit of multiple goals with different deadlines. Journal of Applied Psychology, 103(11), 1242–1264.
Bauer, T. N., Bodner, T., Erdogan, B., Truxillo, D. M., & Tucker, J. S. (2007). Newcomer adjustment during organizational socialization: A meta-analytic review of antecedents, outcomes, and methods. Journal of Applied Psychology, 92(3), 707–721.
Bear, D., Tompson, H., Morrison, C., Vickers, M., Paradise, A., Czarnowsky, M., & King, K. (2008). Tapping the potential of informal learning: An ASTD research study. American Society for Training and Development.
Bell, B. S., & Kozlowski, S. W. J. (2002). Adaptive guidance: Enhancing self-regulation, knowledge, and performance in technology-based training. Personnel Psychology, 55(2), 267–306.
Bell, B. S., & Kozlowski, S. W. J. (2008). Active learning: Effects of core training design elements on self-regulatory processes, learning, and adaptability. Journal of Applied Psychology, 93(2), 296–316.
Bell, B. S., & Kozlowski, S. W. J. (2010). Toward a theory of learner-centered training design: An integrative framework of active learning. In S. W. J. Kozlowski & E. Salas (Eds.), Learning, training, and development in organizations (pp. 263–300). Routledge/Taylor & Francis Group.
Bell, B. S., & Moore, O. A. (2018). Learning, training, and development in organizations: Emerging trends, recent advances, and future directions. In D. S. Ones, N. Anderson, C. Viswesvaran, & H. K. Sinangil (Eds.), The SAGE handbook of industrial, work & organizational psychology. Sage Publications.
Bell, B. S., Tannenbaum, S. I., Ford, J. K., Noe, R. A., & Kraiger, K. (2017). 100 years of training and development research: What we know and where we should go. Journal of Applied Psychology, 102(3), 305–323.
Bliese, P. D., Schepker, D. J., Essman, S. M., & Ployhart, R. E. (2019). Bridging methodological divides between macro- and micro-research: Endogeneity and methods for panel data. Journal of Management, 70–99.
Brown, K. G., Howardson, G., & Fisher, S. L. (2016). Learner control and e-learning: Taking stock and moving forward. Annual Review of Organizational Psychology and Organizational Behavior, 3, 267–291.
Byrne, M. D., & Kirlik, A. (2005). Using computational cognitive modeling to diagnose possible sources of aviation error. The International Journal of Aviation Psychology, 15(2), 135–155.
Carver, C. S., & Scheier, M. F. (1982). Control theory: A useful conceptual framework for personality–social, clinical, and health psychology. Psychological Bulletin, 92(1), 111–135.
Chao, G. T. (2007). Mentoring and organizational socialization. In The handbook of mentoring at work: Theory, research, and practice (pp. 179–196). SAGE.
Chapman, C. J. (1919). The learning curve in typewriting. Journal of Applied Psychology, 3(3), 252–268.
Cronin, M. A., Gonzalez, C., & Sterman, J. D. (2009). Why don't well-educated adults understand accumulation? A challenge to researchers, educators, and citizens. Organizational Behavior and Human Decision Processes, 108(1), 116–130.
Crossan, M. M., Lane, H. W., & White, R. E. (1999). An organizational learning framework: From intuition to institution. Academy of Management Review, 24(3), 522–537.
Debowski, S., Wood, R. E., & Bandura, A. (2001). Impact of guided exploration and enactive exploration on self-regulatory mechanisms and information acquisition through electronic search. Journal of Applied Psychology, 86(6), 1129–1141.
DeShon, R. P. (2015). Problems and pitfalls of modeling growth in organizational science. Paper presented at the 30th annual conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58.
Feldman, D. C. (1976). A contingency theory of socialization. Administrative Science Quarterly, 433–452.
Glebbeek, A. C., & Bax, E. H. (2004). Is high employee turnover really harmful? An empirical test using company records. Academy of Management Journal, 47(2), 277–286.
Gleser, G. C., Cronbach, L. J., & Rajaratnam, N. (1965). Generalizability of scores influenced by multiple sources of variance. Psychometrika, 30(4), 395–418.
Grand, J. A. (2017). Brain drain? An examination of stereotype threat effects during training on knowledge acquisition and organizational effectiveness. Journal of Applied Psychology, 102(2), 115–150.
Hardy, J. H., Day, E. A., & Arthur Jr, W. (2019). Exploration-exploitation tradeoffs and information-knowledge gaps in self-regulated learning: Implications for learner-controlled training and development. Human Resource Management Review, 196–217.
Hardy, J. H., Day, E. A., & Steele, L. M. (2019). Interrelationships among self-regulated learning processes: Toward a dynamic process-based model of self-regulated learning. Journal of Management, 3146–3177.
Harrison, J. R., & Carroll, G. (1991). Keeping the faith: A model of cultural transmission in formal organizations. Administrative Science Quarterly, 36, 552–582.
Humphrey, S. E., & LeBreton, J. M. (2019). The handbook for multilevel theory, measurement, and analysis. American Psychological Association.
Kammeyer-Mueller, J., Wanberg, C., Rubenstein, A., & Song, Z. (2013). Support, undermining, and newcomer socialization: Fitting in during the first 90 days. Academy of Management Journal, 56(4), 1104–1124.
Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74(4), 657–690.
Keith, N., & Frese, M. (2005). Self-regulation in error management training: Emotion control and metacognition as mediators of performance effects. Journal of Applied Psychology, 90(4), 677–691.
Keith, N., & Frese, M. (2008). Effectiveness of error management training: A meta-analysis. Journal of Applied Psychology, 93(1), 59–69.
Keith, N., & Wolff, C. (2015). Encouraging active learning. In K. Kraiger, J. Passmore, N. R. dos Santos, & S. Malvezzi (Eds.), The Wiley Blackwell handbook of training, development, and performance improvement (pp. 92–116). Wiley-Blackwell.
Kleinmuntz, B. (1990). Why we still use our heads instead of formulas: Toward an integrative approach. Psychological Bulletin, 107(3), 296–310.
Kozlowski, S. W. J., & Bell, B. S. (2006). Disentangling achievement orientation and goal setting: Effects on self-regulatory processes. Journal of Applied Psychology, 91(4), 900–916.
Kozlowski, S. W. J., Gully, S. M., Brown, K. G., Salas, E., Smith, E. M., & Nason, E. R. (2001). Effects of training goals and goal orientation traits on multidimensional training outcomes and performance adaptability. Organizational Behavior and Human Decision Processes, 85(1), 1–31. doi:10.1006/obhd.2000.2930
Lamson, M., & von Redwitz, A. (2018). The impact of AI on learning and development. Retrieved from https://trainingindustry.com/articles/learning-technologies/the-impact-of-ai-on-learning-and-development/
Landers, R. N., Auer, E. M., Helms, A. B., Marin, S., & Armstrong, M. B. (2019). Gamification of adult learning: Gamifying employee training and development. In R. N. Landers (Ed.), The Cambridge handbook of technology and employee behavior (pp. 271–295). Cambridge University Press.
March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.
Martell, R. F., Lane, D. M., & Emrich, C. (1996). Male-female differences: A computer simulation. American Psychologist, 51(2), 157–158.
Neal, A., Ballard, T., & Vancouver, J. B. (2017). Dynamic self-regulation and multiple-goal pursuit. Annual Review of Organizational Psychology and Organizational Behavior, 4, 401–423.
Noe, R. A., Clarke, A. D., & Klein, H. J. (2014). Learning in the twenty-first-century workplace. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 245–275.
Ohlsson, S. (2011). Deep learning: How the mind overrides experience. Cambridge University Press.
Repenning, N. P., & Sterman, J. D. (2001). Nobody ever gets credit for fixing problems that never happened: Creating and sustaining process improvement. California Management Review, 43(4), 64–88.
Schein, E. H. (1968). Organizational socialization and the profession of management. Industrial Management Review, 9, 1–16.
Sitzmann, T., & Ely, K. (2011). A meta-analysis of self-regulated learning in work-related training and educational attainment: What we know and where we need to go. Psychological Bulletin, 137(3), 421–442.
Sitzmann, T., & Weinhardt, J. M. (2018). Training engagement theory: A multilevel perspective on the effectiveness of work-related training. Journal of Management, 44(2), 732–756.
Sitzmann, T., & Weinhardt, J. M. (2019). Approaching evaluation from a multilevel perspective: A comprehensive analysis of the indicators of training effectiveness. Human Resource Management Review, 253–269.
Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176–199.
Sterman, J. D. (1989). Misperceptions of feedback in dynamic decision making. Organizational Behavior and Human Decision Processes, 43(3), 301–335.
Sterman, J. D. (2001). System dynamics modeling: Tools for learning in a complex world. California Management Review, 43(4), 8–25.
Sterman, J. D. (2002). System dynamics: Systems thinking and modeling for a complex world. Massachusetts Institute of Technology, Engineering Systems Division.
Sturdevant, C. R. (1918). Training course of the American Steel and Wire Company. Journal of Applied Psychology, 2(2), 140–147.
Sun, R., Merrill, E., & Peterson, T. (2001). From implicit skills to explicit knowledge: A bottom-up model of skill learning. Cognitive Science, 25(2), 203–244.
Vancouver, J. B. (2005). The depth of history and explanation as benefit and bane for psychological control theories. Journal of Applied Psychology, 90(1), 38–52.
Vancouver, J. B., Tamanini, K. B., & Yoder, R. J. (2010). Using dynamic computational models to reconnect theory and research: Socialization by the proactive newcomer as example. Journal of Management, 36(3), 764–793.
Vancouver, J. B., Wang, M., & Li, X. (2018). Translating informal theories into formal theories: The case of the dynamic computational model of the integrated model of work motivation. Organizational Research Methods, 23(2), 238–274.
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15(4), 602–623.
VanLehn, K. (1999). Rule-learning events in the acquisition of a complex skill: An evaluation of CASCADE. The Journal of the Learning Sciences, 8(1), 71–125.
Weinhardt, J. M., & Vancouver, J. B. (2012). Computational models and organizational psychology: Opportunities abound. Organizational Psychology Review, 2(4), 267–292.
Wood, R. E., Kakebeeke, B. M., Debowski, S., & Frese, M. (2000). The impact of enactive exploration on intrinsic motivation, strategy, and performance in electronic search. Applied Psychology: An International Review, 49(2), 263–283.
5

MODELS OF LEADERSHIP IN TEAMS

Le Zhou
Leadership is one of the most widely studied concepts in applied psychology and management, and it has been defined in many different ways. As Bass (1990) famously noted, "[T]here are almost as many definitions of leadership as there are persons who have attempted to define the concept" (p. 11). This diversity in leadership concepts and definitions is reflected in verbal theories and, as reviewed later, in computational models of leadership. In the past two decades, leadership research has shifted its focus to process-centered issues (Dinh et al., 2014; Fischer et al., 2017). It has been recognized that leadership is more than the traits and styles of formally appointed managers and that leadership is a complex and dynamic process, which includes interactions between formal or informal leaders and followers within a situational context (for a review of leadership concepts and frameworks, see Day, 2012).

One stream of leadership research is devoted to understanding leadership in the context of work groups and teams (Kozlowski et al., 2016).1 A team is

(a) two or more individuals who (b) socially interact (face-to-face or, increasingly, virtually); (c) possess one or more common goals; (d) are brought together to perform organizationally relevant tasks; (e) exhibit interdependencies with respect to workflow, goals, and outcomes; (f) have different roles and responsibilities; and (g) are together embedded in an encompassing organizational system, with boundaries and linkages to the broader system context and task environment. (Kozlowski & Ilgen, 2006, p. 79)
Central to the study of team leadership is to unpack what leadership is and what role leadership plays with one or more of these team characteristics considered. For example, given the characteristics of the assigned tasks, how do designated formal leaders motivate team members to work together toward task completion? In teams without formal leaders (i.e., self-managing teams), how does an informal leader emerge and what is the emergent leadership structure like? In a dynamic work environment, how do team memberships of leaders and followers shift over time and affect team effectiveness?

Researchers have taken different theoretical and empirical approaches to address these questions about team leadership (Kozlowski et al., 2016). Traditionally, leadership research has primarily focused on the association between formal managers' static individual differences and team outcomes (Day, 2012). Great strides have been made by more recent research to delineate leadership as a dynamic process (Fischer et al., 2017). In this endeavor, computational modeling has served as an important tool (Castillo & Trinh, 2018). A computational model includes some components that are changeable (called "state variables" in the rest of this chapter) and a set of process mechanisms (also called "rules") that define how these malleable components change from one time period to the next. In computational models of team leadership, leadership can be defined as time-varying model components (e.g., leadership structure in self-managing teams; Serban et al., 2015) and/or parts of the rules governing changes in teams (e.g., different communication networks between leaders and followers; Dionne et al., 2010).

Considering the increasing popularity of computational modeling in organizational science (Vancouver & Weinhardt, 2012) and the benefits of this approach for team leadership research, this chapter offers a review of recent computational models of team leadership. I focus on articles published after Hazy's (2007) review. The scope of this chapter is also limited to research using computational models to build theories about leadership in smaller groups embedded within a larger organizational system, which excludes research about leadership of organizations or large entities (e.g., ethical leadership of CEOs; Chen, 2010). In addition, studies that used simulated tasks to test theories but did not create computational representations of their theories are excluded. In the following sections, I first explain why computational modeling can help advance team leadership research. Next, recent computational models developed around five topics are reviewed. For each topic, I summarize the central research question, highlight exemplar models (for the purpose of illustration rather than providing an exhaustive review), as well as describe how the computational models and the simulation results connect to more traditional team leadership studies and contribute to the team leadership literature. In the last section, I discuss methodological issues in model building and model testing that future research should
address as well as potential directions for further developing team leadership theories through computational modeling. I also offer some possible directions for applications of computational modeling in team leadership practices.

Why Computational Modeling in Team Leadership Research
Computational modeling is an important tool for team leadership research for several critical reasons. First, building computational models can help refine team leadership theories by pushing theorists to be more precise (Castillo & Trinh, 2018). When developing a computational model, researchers would need to clearly describe which components in their model are dynamic and what are the rules driving the changes in these dynamic components. If the process mechanisms in a theory cannot be described computationally (i.e., represented by equations or explicit rules), it is a signal that the theory needs to be further refined. In addition, when there is more precision in theorizing, different team leadership theories can be better integrated. One way to integrate theories is to compare the computational representations of different theories and examine how the same leadership construct is described in different computational models. In addition, computational modeling can help reconcile conflicting findings from different studies about the same team leadership construct and the same outcome variable such as team performance. This can be achieved through including model components that explicitly represent hidden assumptions in different verbal theories and then running virtual experiments that manipulate these model components and compare the output (e.g., Tarakci et al., 2016). Improving precision in theorizing and integrating similar theories are important benefits of computational modeling for team leadership research, as the general leadership literature is facing an overflow of imprecise and redundant theories and models (Antonakis, 2017).

Second, computational modeling can help extend team leadership theories by enabling researchers to simultaneously study multiple dynamic processes within the same team and better connecting team leadership theories with theories about other dynamic processes in the larger organizational system. When researchers solely rely on verbal theories, delineating how multiple dynamic processes work together can be extremely challenging (Davis et al., 2007; Harrison et al., 2007). Computational models, in contrast, are particularly adept at describing the interactions among multiple dynamic processes (e.g., interactions among multiple leader-follower and follower-follower dyads). In addition, through simulating a computational model, we can examine outcomes at multiple levels of analysis that emerge from interrelated dynamic processes (e.g., outcomes at the leader, follower, dyadic, and team levels).

Third, computational modeling can also benefit empirical studies of team leadership. Empirical research on leadership in teams can be costly. For example,
a laboratory experiment of leadership emergence would require researchers to repeatedly capture interactions among multiple human participants in a number of teams. Gaining access to teams and collecting longitudinal data can require a substantial amount of human and financial resources. In addition, in order to test the interactive effects among multiple exogenous variables (such as individual differences among team members, team task design factors, and characteristics of the team's external environment), a large sample size might be required to have the desired statistical power. Further, field experiments that examine team leadership in certain conditions can be difficult or impossible, as some contextual factors in the real world cannot be easily controlled or manipulated by researchers (e.g., membership of online groups; Oh et al., 2016). Given these constraints on empirical research, conducting virtual experiments before primary studies or in conjunction with analyses of archival data can supplement and enhance the effectiveness of empirical research (Burton & Obel, 2011). In virtual experiments, researchers can simultaneously manipulate multiple exogenous variables and simulate the computational model to generate values for the endogenous variables. Through studying the association between the input (i.e., values of the exogenous variables) and the output (i.e., values of the endogenous variables) of interest, researchers can identify which exogenous variables, or which specific range of an exogenous variable, are most likely to generate meaningful variance in the endogenous variables and thereby focus on these exogenous variables and their values in subsequent empirical studies.
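The logic of such a virtual experiment is straightforward to express in code. The sketch below crosses three hypothetical exogenous variables, simulates a stand-in model in each cell, and prints the resulting endogenous outcome; the data-generating rule is purely illustrative and corresponds to no published model.

```python
import itertools
import random
import statistics

def simulate_team(team_size, network_density, uncertainty, reps=200):
    """Stand-in computational model for one cell of the virtual experiment:
    returns the mean of an endogenous outcome (e.g., member participation).
    The data-generating rule below is purely illustrative."""
    outcomes = []
    for r in range(reps):
        rnd = random.Random(r)
        base = 0.5 + 0.2 * network_density - 0.1 * uncertainty
        noise = rnd.gauss(0, 0.05) / (team_size ** 0.5)
        outcomes.append(base + noise)
    return statistics.mean(outcomes)

# Fully crossed virtual experiment over three exogenous variables.
for team_size, density, uncertainty in itertools.product(
        [5, 15, 50], [0.2, 0.8], [0.1, 0.9]):
    outcome = simulate_team(team_size, density, uncertainty)
    print(team_size, density, uncertainty, round(outcome, 3))
```

Inspecting which cells produce meaningful variance in the output is precisely the screening step that lets researchers concentrate scarce empirical resources on the most promising exogenous variables and value ranges.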
Computational Models of Leadership in Teams

Considering the complexity of leadership concepts, I first provide an overview of how leadership has been conceptualized in computational models of team leadership and then review exemplar models in each topic.

Overview
Kozlowski and colleagues (2013) proposed a framework for taking the computational modeling approach to investigate emergence processes in teams (for an example of applying this framework, see Grand et al., 2016). This framework can be extended to examine team leadership (illustrated in Figure 5.1), given that leadership is one of many dynamic processes in a team. In this framework, initial states are a team's and its individual member's values of a set of properties at the beginning of a time frame (e.g., at the beginning of a team project). Over time, these properties (i.e., states of the team system) evolve by following some process mechanisms. In a computational model, process mechanisms are represented by equations (e.g., Y(t) = Y(t-1) + k∙X(t-1)) or explicit rules (e.g., if X > k, then Y) that include model parameters (e.g., k).
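To illustrate the distinction, here is a minimal sketch of these two kinds of process mechanisms, with the parameter k fixed at a hypothetical value:

```python
def equation_mechanism(y_prev, x_prev, k=2.0):
    """Equation-based process mechanism: Y(t) = Y(t-1) + k * X(t-1)."""
    return y_prev + k * x_prev

def rule_mechanism(x, k=2.0):
    """Explicit if-then rule: the event Y occurs only when X exceeds k."""
    return x > k

# One simulated time step with hypothetical starting values.
print(equation_mechanism(y_prev=1.0, x_prev=0.5))  # -> 2.0
print(rule_mechanism(x=3.0))                       # -> True
```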
FIGURE 5.1 Team processes and the role of leadership in teams. The figure depicts team members' properties (state variables) evolving over time under contextual factors, with leadership appearing as part of the process mechanisms, as a property of the team (e.g., status), and as a distal factor influencing initial states.

Note: This figure is developed based on the framework proposed by Kozlowski et al. (2013). "A," "B," "C," and "D" represent different team members. Different shapes represent different values of certain properties, such as different types of knowledge and skills. Solid arrows represent interpersonal interactions or relationships. Dashed arrows represent process mechanisms.
Model parameters are given certain values (e.g., fix k at 2). These model parameters can be attributes of individual team members (e.g., personality traits), interpersonal relationships between two members (e.g., frequency of communication), or characteristics of the team (e.g., team incentive structure). In addition, researchers may be interested in describing where a team stands on a given property after team members interact with each other for some time (i.e., emergent team states). There are different ways to describe a team's property based on its individual members' properties (Chan, 1998).

What team leadership is and what role leadership plays in a team can be portrayed in multiple ways using this framework (see Figure 5.1). This is not surprising, since many different definitions of leadership exist in the literature. Existing computational models have mostly described team leadership in two ways. First, some models define different forms of leadership as different process mechanisms (e.g., Dionne & Dionne, 2008). For example, consistent with a relational perspective, leadership can be defined as the relationships between a formal leader and each of their subordinates (e.g., Graen & Uhl-Bien, 1995). The quality (or structure) of leader-subordinate relationships can influence the extent to which changes in a subordinate's properties (e.g., a subordinate's job-related skills) from one moment to the next are impacted by their formal leader.
In a computational model, this process mechanism can be programmed as a mathematical function that takes the characteristics of the leader-subordinate relationship and the subordinate's state at t-1 as the input and generates the subordinate's state at t as the output. Second, some computational models define team leadership as properties of a team that evolve over time (i.e., leadership is a state variable). For example, leadership can be defined as the informal power structure in a self-managing team (e.g., Serban et al., 2015). Differences among team members in malleable and time-invariant properties (e.g., expertise and personality traits) together with their interactions with each other can result in differences in informal power and status, which are emergent properties of a team (e.g., Tarakci et al., 2016). In addition to these two approaches, some research taking the computational modeling approach has considered leadership as a more distal factor that impacts the initial states or the process mechanisms (e.g., Will, 2016). For example, a formal leader's access to resources can affect how they structure their team's tasks and which members they select into the team, which in turn can impact team processes and team emergent states. Further, it is possible to have different components in a single model describe different leadership constructs. For instance, a model can include formal reporting relationships between the formal team leader and their subordinates as part of the process mechanisms, and emergent relationships between an informal leader and other team members as state variables.

To build computational models about team leadership, researchers often draw from verbal theories to define relevant leadership constructs and extend existing theories by delineating more detailed process mechanisms that are often underspecified in prior research. These process mechanisms are formally specified in computational models, which are simulated in virtual experiments. Agent-based modeling has been the most popular technique used to program computational models of team leadership (see Table 5.1 and Hazy, 2007). Through building more detailed process mechanisms, generating predictions through virtual experiments, and comparing simulation results to empirical findings, research using computational modeling has helped advance team leadership literature around several topics. A summary of some exemplar models of team leadership is provided in Table 5.1.
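As a concrete and purely hypothetical illustration of the relational process mechanism described at the start of this overview, the sketch below implements a function that takes the quality of the leader-subordinate relationship and the subordinate's state at t-1 as input and returns the state at t; the functional form and all values are assumptions for illustration only.

```python
def subordinate_state_update(state_prev, leader_state, relationship_quality,
                             self_growth=0.02):
    """One hypothetical relational process mechanism: the stronger the
    leader-subordinate relationship, the more the subordinate's state
    (e.g., a job-related skill) is pulled toward the leader's state."""
    leader_pull = relationship_quality * (leader_state - state_prev)
    return state_prev + self_growth + leader_pull

# Trajectory of a subordinate's skill under a moderately strong relationship.
skill, leader_skill = 0.2, 0.9
for t in range(5):
    skill = subordinate_state_update(skill, leader_skill,
                                     relationship_quality=0.3)
    print(t, round(skill, 3))
```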
Leadership and Team Decision-Making Process

Teams are often tasked with making decisions as a collective. Earlier research has demonstrated that leaders could influence team decision quality (e.g., Peterson, 1997). A major motivation for taking the computational modeling approach to studying the leadership-team decision relationship is to unpack decision-making as a dynamic process, rather than treating it as a black box.
TABLE 5.1 Exemplar Models of Leadership in Teams

Leadership and team decision-making process

Dionne and Dionne (2008). Topic/research purpose: to compare the effectiveness of four team leadership forms in optimizing group decisions. Leadership construct/theory: authoritarian leadership, individualized leadership, LMX, and participative leadership. Simulation approach: Metropolis Monte Carlo technique. Modeling results: group-based team leadership form (i.e., participative leadership) produces decisions that are closer to the optimal decision than dyadic-based leadership forms (i.e., individualized leadership and LMX), which are more effective than individual-based leadership forms (i.e., authoritarian leadership). This pattern holds regardless of group members' cognitive ability and expertise.

McHugh et al. (2016). Topic/research purpose: to understand the process of decision-making and the impact of leadership on this process for larger groups; to examine the relationship among individual intelligence, collective intelligence, and collective decision quality. Leadership construct/theory: collective leadership, participative leadership. Simulation approach: agent-based. Modeling results: collective intelligence is positively related to collective decision quality. Participative leadership does not affect the relationship between collective intelligence and collective decision quality.

Will (2016). Topic/research purpose: to introduce the flocking model and explain how it can be used to represent organizational dynamics; to examine the relationship between interaction norms in a collective and patterns of collective behavior. Leadership construct/theory: flock leadership. Simulation approach: agent-based. Modeling results: flexibility in the consensus norm of a collective can affect the collective's transition between technical capacity and adaptive capacity. Knowing this relationship can help leaders shift a collective toward technical capacity or adaptive capacity as the collective's tasks change.

Leadership and shared team mental model development

Dionne et al. (2010). Topic/research purpose: to examine the role of leadership in developing shared team mental models and improving team performance. Leadership construct/theory: participative leadership, LMX. Simulation approach: agent-based. Modeling results: shared team mental models can be consistently reached in the participative leadership condition but not always in the LMX condition. As for team performance, only when team members are both heterogeneous in domains of expertise and share strong mutual interest is participative leadership more effective than LMX; when team members are homogeneous in domains of expertise or do not share strong mutual interest, LMX is more effective than participative leadership.

Leadership and group member participation

Oh et al. (2016). Topic/research purpose: how leadership (uniform LMX and differential LMX) influences member participation in online collaborative work communities; how contextual factors (environmental uncertainty, group size, group network structure, and stage in group life cycle) moderate the relationship between leadership and member participation. Leadership construct/theory: LMX. Simulation approach: agent-based. Modeling results: when environmental uncertainty is high, differential LMX is more effective than uniform LMX in sustaining member participation regardless of group size and group network structure. When group network structure is decentralized and when the group is in the early stage of the life cycle, uniform LMX is more effective than differential LMX in promoting member participation.

Leadership and team goal pursuit

Zhou et al. (2019). Topic/research purpose: how leaders decide whether and when to intervene by performing the team task; how deadlines, external disturbances, task interdependence, leader time sensitivity, and competing demands from outside the team influence leaders' decisions. Leadership construct/theory: functional leadership theory. Simulation approach: system dynamics. Modeling results: leaders are more likely to perform team tasks when teams are further from (rather than closer to) the deadline or when external disturbances occur. Leaders' intervention on different subordinates is less evenly distributed when the deadline is short (versus long).

Emergence of leadership structure

Park et al. (2015). Topic/research purpose: to examine the reciprocal relationships that job performance and perceived justice have with LMX over time. Leadership construct/theory: LMX. Simulation approach: hybrid. Modeling results: the effect of justice perception on LMX is more positive when dyadic tenure is shorter (versus longer). The effect of LMX on justice perception is less positive when dyadic tenure is shorter (versus longer). Perceived justice change is positively related to subsequent LMX, while job performance change is negatively related to subsequent LMX.

Serban et al. (2015). Topic/research purpose: to examine the impact of cognitive ability, extraversion, conscientiousness, self-efficacy, and comfort with technology on leadership emergence in self-managing teams; to examine the moderating role of team virtuality and density of team network in this relationship. Leadership construct/theory: information-processing theories of leadership categorization, leader cognition theories. Simulation approach: agent-based. Modeling results: virtuality (i.e., virtual versus face-to-face) moderates the effects of cognitive ability, extraversion, and self-efficacy on leadership emergence. Density of the team network moderates the effects of cognitive ability and self-efficacy on leadership emergence.

Tarakci et al. (2016). Topic/research purpose: when power disparity helps or hurts group performance. Leadership construct/theory: power disparity in a team, team leader competence. Simulation approach: agent-based. Modeling results: power disparity is positively related to group performance when it is dynamic and those with more power (i.e., group leaders) are competent. Power disparity is negatively related to group performance when it is static and/or when group leaders are not competent.

Chiang and Hsu (2017). Topic/research purpose: how the leader election system (direct election by majority votes of a group versus indirect election in which the group leader is elected from elected subgroup leaders) influences characteristics of the elected leader and the emergence of cooperation in the group. Leadership construct/theory: election of leaders, democratic leadership, cooperative leadership. Simulation approach: hybrid. Modeling results: a direct election system would lead to a more cooperative leader than an indirect election system when voters' preferences of leadership are heterogeneous.

Castillo and Trinh (2018). Topic/research purpose: how extraversion and agreeableness of a leader and a follower influence the development of their LMX. Leadership construct/theory: LMX. Simulation approach: agent-based. Modeling results: LMX can quickly stabilize in the early phase and then significantly change when critical events occur later.

Note: Studies are listed in chronological order under each topic. LMX = Leader-Member Exchange.
Dionne and Dionne (2008) proposed a computational model to explain how group leaders and their followers interact with each other in a decision-making process and to compare the effectiveness of four different leadership forms in producing high-quality decisions. Taking the computational modeling approach in this research has several benefits. First, prior research has seldom compared the effectiveness of multiple leadership forms, as different leadership constructs are often studied in literatures focusing on different levels of analysis. Taking the computational modeling approach, Dionne and Dionne's research is able to articulate and compare the differences among leadership constructs discussed in different literatures. Computational modeling also enabled this research to examine both the quality of the final decisions made and the dynamic processes that lead to the final decisions, which can better inform practices. In addition, Dionne and Dionne were interested in the joint effects of multiple exogenous variables. Identifying these interactive effects through virtual experiments was a more viable pilot test of their model than collecting data from human participants.

Dionne and Dionne (2008) drew from both decision-making and leadership literatures to develop their model. Informed by the decision-making literature, their model includes three key components. First, each person has memories of all group members' decision points (i.e., their judgments on given issues), which serve as the state variables in the model. Second, each member's own decision point is updated over time based on interactions with other group members. As a rule for updating a focal member's judgments (i.e., a process mechanism), influence from a teammate is proportional to the similarities between the focal member and the teammate in intelligence, expertise, and tenure. Third, the group leader uses a "weighting function" to combine individual members' judgments to form the group's decision. The leader's weighting function is influenced by multiple factors, including similarities among group members' judgments, judgments of experts in the group, and the leader's preference for their own judgment.

Drawing from the leadership literature, Dionne and Dionne's (2008) model describes four team leadership forms, each as a different weighting function. In other words, differences in team leadership are conceptualized as differences in process mechanisms. In the authoritarian leadership condition, the group's decision equates the leader's individual decision (i.e., the weights assigned to group members' judgments were zero). In the individualized leadership and the leader-member exchange (LMX) conditions, a subordinate's judgment has a larger weight on the leader's judgment when their dyadic relationship is stronger. In the individualized leadership condition, the strength of the dyadic relationship is unique for each subordinate. In the LMX condition, the strength of the dyadic relationship is identical for members with the same in- versus out-group status. Drawing from LMX research (e.g., Liden et al., 1993), the quality of the dyadic relationship in this model is a function of leader-subordinate similarities in personality and ability factors. In the participative leadership condition, judgments from all group
members are weighted equally by the leader. Dionne and Dionne's simulation results showed that participative leadership outperformed individualized leadership and LMX, which outperformed authoritarian leadership, in terms of group decision quality. Simulation results also suggested that when a leader has to adopt individualized leadership or LMX forms, having more competent members can produce better group decisions.

Extending Dionne and Dionne's (2008) research, McHugh et al.'s (2016) model describes how collective intelligence and participative leadership together influence decision-making in larger collectives (e.g., collectives of 50 or more members). As a key component in McHugh et al.'s model, collective intelligence is a function of a variety of individual differences, collaboration method, and mutual reliance among team members. McHugh et al.'s simulation results suggested that the relationship between collective intelligence and collective decision quality may not be affected by participative leadership. These results helped clarify a boundary condition of the leadership-team decision relationship. In addition to these studies that directly included leadership as model components, Will (2016) drew from the flocking model and argued that leaders can be seen as the designers of the process rules. By shaping the interaction processes, leaders can guide group members to align with or move away from certain opinions.

The computational models and the simulation results in these studies have important implications for team leadership research. First, Dionne and Dionne's (2008) research compared and integrated multiple leadership constructs, which contributes to reducing construct overflow in the leadership literature (Antonakis, 2017). Second, by incorporating individual differences into the models and manipulating multiple exogenous variables at the same time (e.g., McHugh et al., 2016), these studies helped clarify boundary conditions of the leadership-team decision relationship.
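The idea of leadership forms as weighting functions can be sketched in a few lines. The version below is a simplified reading of the verbal description above, not Dionne and Dionne's (2008) exact equations; the weights and judgments are hypothetical.

```python
import numpy as np

def group_decision(judgments, form, leader_idx=0, lmx_strengths=None):
    """Combine member judgments into a group decision under hypothetical
    weighting schemes (a simplified reading of the four leadership forms;
    not the published model's exact equations)."""
    n = len(judgments)
    if form == "authoritarian":
        weights = np.zeros(n)
        weights[leader_idx] = 1.0            # leader's judgment only
    elif form in ("individualized", "lmx"):
        weights = np.asarray(lmx_strengths, dtype=float)  # dyad strengths
        weights[leader_idx] = weights.max()  # leader keeps a strong voice
    elif form == "participative":
        weights = np.ones(n)                 # equal weights for everyone
    weights = weights / weights.sum()
    return float(weights @ np.asarray(judgments))

judgments = [0.7, 0.4, 0.55, 0.6]  # hypothetical member judgments
print(group_decision(judgments, "authoritarian"))
print(group_decision(judgments, "participative"))
print(group_decision(judgments, "lmx", lmx_strengths=[1.0, 0.8, 0.8, 0.2]))
```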
Leadership and Shared Team Mental Model Development

Besides decision-making, another team process that builds on information exchange and assimilation among team members is the development of shared team mental models. Shared team mental models are similar conceptualizations of various aspects of the team, such as team task requirements, held by team members (Mohammed et al., 2010). Earlier research was unclear about how a "shared" team mental model is reached and what role leadership plays in this process, as describing dynamic processes solely through verbal theories can be vague (Kozlowski et al., 2013). To address this critical gap in the literature, Dionne et al. (2010) took the computational modeling approach to theorize how participative leadership and LMX influence the convergence of mental models among team members. Dionne et al.'s model includes two key state variables: each team member's opinion about a team problem and their confidence about
their teammates and themselves. A team member's opinions can deviate from the correct answer of the team's problem, and those opinions inside the member's domain of expertise are closer to the correct answer. Over time, team members' opinions and their confidence values change as they interact with each other.

To describe the rules in the change process, Dionne et al. (2010) drew from McComb's (2007) three-phase model of team mental model development. In the orientation phase, the most confident member (called the "speaker") shares their opinion. In the differentiation phase, individuals directly connected to the speaker in the team's social network (called the "listeners") evaluate the speaker's opinion. Evaluation is based on the similarity between the speaker's and the listener's opinions and how confident the listener feels about the speaker. Finally, in the integration phase, a listener modifies their confidence about the speaker according to the "mutual interest" of the team. When mutual interest is low, a listener modifies their confidence about the speaker solely based on their own evaluation; when mutual interest is high, a listener modifies their confidence based on the average evaluation of all listeners in their network. When a speaker receives a positive evaluation, listeners also modify their own opinions based on the opinion of the speaker.

Dionne et al.'s (2010) model describes leadership as the structure of a team's social network, which influences the differentiation and integration phases. This way of conceptualizing leadership is consistent with one perspective in the intersection of the leadership and network literatures: leadership is a type of social network of a collective (see discussion in Carter et al., 2015). Similar to Dionne and Dionne (2008), Dionne et al. (2010) examined participative leadership and LMX. When the team leadership form is participative leadership, all members are connected to each other; when it is LMX, members in the in-group are connected to each other and the leader, while members in the out-group only have one-to-one connections with the leader but not with peer members.

Dionne et al. (2010) defined the convergence level of the team mental model as the total amount of differences among team members in their confidence values, and team performance as the difference between the correct answer and the aggregated team member opinions weighted by their confidence values. Dionne et al.'s simulation results showed that a team could always reach a shared mental model when its team leadership form was participative leadership, but convergence did not always occur when taking the LMX form. The association between team leadership and team performance, however, was more complicated and contingent on similarity among members in domains of expertise and the strength of their mutual interest. Only when team members' domains of expertise were heterogeneous (condition one) and when their mutual interest was high (condition two) did participative leadership relate to better team performance than LMX. When either of these two conditions was not met, participative leadership was not superior to LMX in influencing team performance.
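The two outcome definitions in this paragraph translate directly into code. Below is a minimal sketch with hypothetical opinions and confidence values; the published model's exact computations may differ.

```python
import numpy as np

def convergence(confidence):
    """Total pairwise difference in members' confidence values;
    smaller totals mean a more 'shared' mental model."""
    c = np.asarray(confidence, dtype=float)
    return float(np.abs(c[:, None] - c[None, :]).sum() / 2)

def team_performance(opinions, confidence, correct_answer):
    """Distance between the correct answer and the confidence-weighted
    aggregate of member opinions (smaller distance means better
    performance)."""
    w = np.asarray(confidence, dtype=float)
    w = w / w.sum()
    aggregate = float(w @ np.asarray(opinions, dtype=float))
    return abs(correct_answer - aggregate)

opinions = [0.6, 0.8, 0.5, 0.7]    # hypothetical member opinions
confidence = [0.9, 0.4, 0.6, 0.8]  # hypothetical confidence values
print(convergence(confidence))
print(team_performance(opinions, confidence, correct_answer=0.75))
```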
Dionne et al.’s (2010) study advanced team leadership research in several ways. First, their computational model provides a theoretical account of how leadership, members’ domain of expertise, and mutual interest jointly influence team mental model development. Their simulation results were consistent with previous empirical findings that when similarity among team members was high, there could be more group thinking and thereby worse team performance. Extending previous research, their model clarifies the theoretical mechanisms underlying this effect of team member similarity on team performance. Second, by studying LMX as a type of social network structure, Dionne et al. suggested it is important to consider not only leader-subordinate dyadic relationships but subordinate-subordinate peer connections when examining the role of leader ship in influencing team mental models. Investigating such complicated inter action effects among multiple exogenous variables that span multiple levels of analysis can be costly for empirical studies. Dionne et al.’s simulation provides helpful insight to guide future empirical studies. Leadership and Group Member Participation
In addition to the two topics reviewed earlier, LMX was also incorporated in computational models of group member participation (e.g., Oh et al., 2016). Empirical research has yielded conflicting findings about the impact of LMX differentiation on groups. Some studies showed that differential LMX harms teams as it can create feelings of inequality, while other studies showed that differential LMX benefits teams as it rewards more competent members (see a review of this issue in Henderson et al., 2009). This puzzle is not unique to traditional work groups where participating in the group task is part of group members' job requirements. For online collaborative work communities (OCWCs), where members voluntarily join projects initiated by group leaders, it is crucial for group leaders to understand how to manage their relationships with individual members so they can sustain members' voluntary participation. The computational modeling approach is appropriate for building theories on the impact of leadership on OCWCs, considering the following characteristics of these groups. First, members in OCWCs are heavily influenced by the peers with whom they are connected as well as the leaders of the projects that they voluntarily join. In addition, OCWCs are constantly evolving in terms of number of members (i.e., size), social network structure, life cycle, and the external environment. Computational modeling can effectively describe how several dynamic processes work simultaneously under the impact of interrelated contextual factors. Further, it would be extremely challenging to manipulate the design of OCWCs in empirical studies.

Oh et al. (2016) drew from the leadership, social network, and social conformity literatures to develop their model. Based on research on social conformity
and social network, Oh et al.'s model describes that an OCWC member's participation level is influenced by the participation level of other members they are directly connected to (called "neighbors") and environmental uncertainty (a contextual factor). A member tends to change their participation level to match the participation level of their neighbors, and the probability of change is higher when environmental uncertainty is lower (versus higher). The number of neighbors a member has depends on the degree of centralization of the OCWC's network. In more (versus less) centralized networks, the number of a member's connections is distributed less evenly. When describing the role of leadership, an important assumption was held by Oh et al.'s model: within an OCWC, leader role occupancy and the leader's impact are both constrained by the OCWC's network. Specifically, the member with the highest centrality in the social network is identified as the leader. In addition, when an OCWC adopts differential LMX, the leader's impact on a member's participation level is proportional to this member's number of neighbors in the network. In contrast, when an OCWC adopts uniform LMX, the impact of the leader on a member's participation is equal for all members. Finally, in Oh et al.'s model, when an OCWC is in a more (versus less) mature stage, there is a larger proportion of active members.

Simulation results from Oh et al.'s (2016) model showed that regardless of group size and the degree of centralization of the group network, when environmental uncertainty was higher, differential LMX could lead to higher member participation than uniform LMX. However, uniform LMX could lead to higher member participation when the group network was decentralized and the group was in an early phase of the life cycle. This research shed light on the conflicting findings about the impact of differential LMX on teams. Their simulation results suggested that whether differential LMX or uniform LMX is more beneficial for a group depends on a host of contextual factors including group size, shape, and maturity. Therefore, throughout a group's life cycle, leaders may need to flexibly change between differential LMX and uniform LMX.
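A single time step of this kind of conformity dynamic can be sketched as follows; the functional form, the leader_weight parameter, and all values are illustrative stand-ins rather than Oh et al.'s (2016) published equations.

```python
import random

def update_participation(member, neighbors_mean, leader_level, leader_weight,
                         uncertainty, rnd):
    """One hypothetical time step: the member drifts toward the mean
    participation of their neighbors (conformity) and toward the leader,
    with change more likely when environmental uncertainty is low."""
    p_change = 1.0 - uncertainty  # change is likelier under low uncertainty
    if rnd.random() < p_change:
        target = ((1 - leader_weight) * neighbors_mean
                  + leader_weight * leader_level)
        return member + 0.5 * (target - member)
    return member

rnd = random.Random(7)
member = 0.2  # hypothetical starting participation level
for t in range(10):
    member = update_participation(member, neighbors_mean=0.6,
                                  leader_level=0.9, leader_weight=0.3,
                                  uncertainty=0.4, rnd=rnd)
print(round(member, 3))
```

Under differential LMX, leader_weight would vary across members (e.g., scaled by each member's number of neighbors); under uniform LMX it would be the same constant for everyone.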
Leadership and Team Goal Pursuit

Performing the team task has been recognized as an important leadership function that supports team goal pursuit (Morgeson et al., 2010). However, prior research has not been clear on a critical question: how do leaders decide whether to take action on the team's task? This decision process is complicated and dynamic, given that leaders have limited attention and time for responsibilities both inside and outside the team, and that multiple subordinates supervised by the same leader work on different parts of the team task. Drawing from control theory (Carver & Scheier, 1998) and a functional perspective of leadership, Zhou et al. (2019) built a computational model to address this question. Control theory explains how a self-regulatory system strives toward its
goal (i.e., the ideal state of a state variable) through negative feedback loops (i.e., process mechanisms; see Figure 5.2).

FIGURE 5.2 Basic structure of a negative feedback loop (components: desired state, perception of the current state subject to bias, discrepancy, gain, output, rate, and disturbance).

According to control theory, if the system detects a discrepancy between the current state and the desired state, an output is triggered to change the current state. Control theory views a group of individuals who pursue common goals interdependently as subsystems nested in a larger system (Fitzsimons et al., 2015; Vancouver, Tamanini et al., 2010). When there is a discrepancy, all actors in the same larger system can take actions to improve the system's state. In this sense, a leader and a follower can be viewed as two actors controlling a shared system who can both influence the state of that system (e.g., progress on a team task). Aligned with control theory's view on leadership is functional leadership theory (Fleishman et al., 1991; Klein et al., 2006; Morgeson et al., 2010), which posits that leadership is about "to do, or get done, whatever is not being adequately handled for group needs" (McGrath, 1962, p. 5).

Based on these theoretical foundations, Zhou et al.'s (2019) model treats a leader-subordinate dyad as a regulation system that includes two subsystems: a leadership-regulation subsystem and a subordinate-regulation subsystem. A subordinate and a leader control the same state variable, such as the current state of an assigned task. A subordinate's actions (i.e., the subordinate's output) follow their own self-regulation negative feedback loop. The leadership-regulation subsystem also includes a negative feedback loop that takes the leader's perception of the
subordinate as the input and triggers an output (i.e., the leader directly performing the task) when certain conditions in the process mechanisms are satisfied. In addition to these core process mechanisms, Zhou et al.'s model incorporates several contextual factors, including negative external disturbances, deadlines, and task interdependence. These characteristics of the team context are incorporated into the rules within the leadership-regulation subsystem and thereby drive the leader's actions.

Zhou et al. (2019) built their computational model using the system dynamics technique. They ran virtual experiments to examine trajectories of leaders taking action on the team task under different conditions of several exogenous variables: deadlines, external disturbances, task interdependence, leader time sensitivity (an individual difference that affects reactions to deadlines), and the competing demands the leader faces from outside the team. The relationships between the exogenous and the endogenous variables revealed by the simulation results were then tested in a laboratory experiment. Results from the empirical study supported several predictions from the simulation study. For example, both simulation and empirical results showed that, at the team level, leaders' resource allocation was more evenly distributed among team members when the deadline was longer (versus shorter). The idea that leaders should adjust their actions according to subordinates' situations can be traced back to several classical theories of leadership (e.g., Hersey & Blanchard, 1977; House, 1971; Kerr & Jermier, 1978). Zhou et al.'s model advanced prior research by providing a process-oriented account of how leaders decide when to intervene by directly performing the team task, and which subordinate needs such intervention at a given time. In addition, Zhou et al.'s model could be extended to integrate additional within-person and between-person processes that follow negative feedback loops.
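Because the negative feedback loop in Figure 5.2 is fully specified by a handful of components, it can be sketched in a few lines of code. The numeric values below are illustrative assumptions, not Zhou et al.'s calibrated parameters:

```python
def feedback_loop(desired=10.0, steps=25, gain=0.4, bias=0.0,
                  rate=1.0, disturbance=-0.3):
    """One negative feedback loop; component names follow Figure 5.2."""
    current = 0.0
    history = []
    for _ in range(steps):
        perception = current + bias              # possibly biased reading of the state
        discrepancy = desired - perception       # detected gap
        output = gain * discrepancy              # effort scaled by the gain
        current += rate * output + disturbance   # state changes with output and disturbance
        history.append(current)
    return history

# The state settles near (not exactly at) the desired level because of the
# constant disturbance pulling against the loop's output.
print(feedback_loop()[-1])
```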
Emergence of Leadership Structure

Another important question for team leadership researchers is whether and how leadership structure evolves in teams: how formal leaders are elected, whether the leadership position is fixed or floats among different members, how informal leaders emerge in self-managing teams, and how relationships between formal leaders and their followers develop and mature. As illustrated in Figure 5.1, when the dynamics of leadership itself are studied, leadership can be defined as a team property that emerges from team members' interactions (Carter et al., 2015). One issue that has received considerable attention from researchers and practitioners is the emergence of informal leaders in self-managing teams. From a social-cognitive perspective on leadership (Day, 2012), when a team member (or multiple members) is perceived by their teammates as a leader figure based on the teammates' implicit theories of leadership, an informal leader has emerged. This emergent leader has higher status and more informal power
to influence others' attitudes and behaviors than those members who are not perceived as the leader. In empirical research, the emergent leader is often identified by comparing teammates' leadership ratings or rankings of one another. Earlier research mostly examined the emergent leadership structure at one point in time and overlooked the emergence process. Serban et al. (2015) proposed a computational model of the leadership emergence process in virtual as well as face-to-face contexts. Drawing from previous empirical research, Serban et al.'s model includes five personal attributes as inputs to the leadership emergence process: three time-invariant individual differences (cognitive ability, extraversion, and conscientiousness) and two personal attributes that linearly increase over time (self-efficacy and comfort with technology). Over time, team members interact with each other and update their perceptions of each other based on these personal attributes. In Serban et al.'s model, the impact of these attributes on perceived leadership is weighted differently for virtual and face-to-face teams. In addition, members have a higher probability of being perceived as a leader when the density of the team's communication network is higher (versus lower). Serban et al.'s simulation results showed that the interaction effects among team type (i.e., virtual versus face-to-face teams), team network density, and personal attributes on leadership emergence varied over time. Notably, Serban et al. also conducted two empirical studies to test the relationships revealed by the simulation. However, the pattern of the interaction effects varied across the simulation and the two empirical studies. One possible reason is that the operationalization of network density differed between the simulation and the empirical studies: in the simulation, network density captured the average frequency of communication within a team, whereas in the empirical research density captured both communication frequency and the quality of relationships (e.g., trust) among team members. This explanation can be addressed in future research by further refining Serban et al.'s model and conducting additional virtual experiments and empirical tests.
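As a rough illustration of the kind of updating rule Serban et al. formalize, the sketch below scores members on the five attributes with team-type-specific weights and lets network density gate how often a member's signal is noticed. All weights, increments, and the density mechanism here are invented for illustration and are not Serban et al.'s estimated values:

```python
import random

ATTRIBUTES = ["cognitive_ability", "extraversion", "conscientiousness",
              "self_efficacy", "comfort_with_technology"]
# Illustrative weights only; in the actual model the attribute weights differ
# between virtual and face-to-face teams.
WEIGHTS = {"virtual":      [0.2, 0.1, 0.2, 0.2, 0.3],
           "face_to_face": [0.3, 0.3, 0.2, 0.2, 0.0]}

def emergence(team_type="virtual", n=5, steps=20, density=0.6):
    members = [{a: random.random() for a in ATTRIBUTES} for _ in range(n)]
    scores = [0.0] * n
    for _ in range(steps):
        for m in members:  # self-efficacy and comfort increase linearly over time
            m["self_efficacy"] += 0.01
            m["comfort_with_technology"] += 0.01
        for i, m in enumerate(members):
            signal = sum(w * m[a] for w, a in zip(WEIGHTS[team_type], ATTRIBUTES))
            # a denser communication network raises the chance of being noticed
            if random.random() < density:
                scores[i] += signal
    return scores.index(max(scores))  # member most often perceived as the leader

print(emergence("virtual"), emergence("face_to_face"))
```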
It is also important to understand whether disparity in power among group members benefits or harms group performance. Past research on the relationship between power disparity and group performance has reported conflicting findings. Some studies found that power disparity was positively related to group performance due to its positive effect on role clarity and coordination (supporting the functionalist theory of power; e.g., Halevy et al., 2012). Other studies found that power disparity was negatively related to group performance through competition and conflict (supporting the conflict theory of power; e.g., Bloom, 1999). To reconcile these conflicting views and findings, Tarakci et al. (2016) proposed and empirically tested a computational model of power disparity and group performance. Tarakci et al. suggested that the conflicting findings in prior research were probably due to differences in three assumptions held by the functionalist and the conflict views of power: (a) whether who occupies the formal leader role in a team is fixed, (b) whether the leader has the competence required by the group task, and (c) whether it is possible for each group member to hold an equal amount of power. The computational modeling approach could help in better understanding how these three assumptions affect the relationship between power disparity and group performance, as the assumptions can be explicitly included as model components and easily manipulated in simulations.

Tarakci et al.'s (2016) model describes three different forms of power-competence dynamics in a group. In high power disparity and low competence groups, power is concentrated in one leader who has low task competence, and this power structure is stable. In high power disparity and high competence groups, the person with the highest competence has power and, importantly, the power structure changes if the relative competence levels of group members change. In low power disparity groups, members always have an equal amount of power and there is no leader. When a team works on its task, a group member's solution for the team task changes based on the solutions held by others in the group, and members with more power have a larger influence on others in this change process. A member's competence is determined by the quality of their solution, and team performance is determined by the best solution among all members.

Simulation results in Tarakci et al. (2016) showed that the high power disparity and high competence groups had better group performance than the low power disparity groups, which in turn had better group performance than the high power disparity and low competence groups. In addition, the advantage of the high power disparity and high competence groups persisted over time. Tarakci et al. then tested this finding of the simulation study in two empirical studies. The empirical findings generally supported the prediction from the simulation: power disparity could be positively related to group performance when the leadership position rotated and was obtained by the most competent group member, whereas when the leadership structure was static or when the leader lacked task competence, power disparity could be negatively related to group performance. Tarakci et al.'s research helps advance the shared leadership and LMX literatures. In self-managing teams, leader and follower status emerge over time despite the equal amount of formal power shared by all members, whereas in teams with formal leaders, in-group versus out-group status based on LMX often surfaces among peer team members. According to Tarakci et al.'s research, whether these differences in power and status are beneficial or harmful for team performance should be examined by taking into account the characteristics of those occupying formal or informal leadership positions.
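The value of making such assumptions explicit is that they become parameters one can toggle. The toy sketch below encodes the three power-competence conditions as switchable settings (the parameterization is ours; Tarakci et al.'s actual model works in a richer solution space):

```python
import random

def simulate_group(condition, n=6, steps=30, seed=1):
    """Toy sketch of power-weighted solution updating under three power structures."""
    rng = random.Random(seed)
    solutions = [rng.random() for _ in range(n)]     # quality of each member's solution
    for _ in range(steps):
        if condition == "low_disparity":
            power = [1.0] * n                        # equal power, no leader
        else:
            if condition == "high_disparity_high_competence":
                leader = solutions.index(max(solutions))  # power follows competence
            else:
                leader = 0                           # fixed leader, competence ignored
            power = [0.2] * n
            power[leader] = 2.0
        total = sum(power)
        consensus = sum(p * s for p, s in zip(power, solutions)) / total
        # members shift toward the power-weighted consensus, with some exploration
        solutions = [min(1.0, max(0.0, s + 0.3 * (consensus - s) + rng.gauss(0, 0.03)))
                     for s in solutions]
    return max(solutions)                            # team performance = best solution held

for c in ("high_disparity_high_competence", "low_disparity",
          "high_disparity_low_competence"):
    print(c, round(simulate_group(c), 3))
```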
In addition, some computational models have been built to describe how formal leaders are elected (e.g., Chiang & Hsu, 2017) and how the relationship between a formal leader and their subordinates develops over time (e.g., Castillo & Trinh, 2018; Park et al., 2015). Together, these computational models help advance team leadership research by providing a theoretical account of the leadership structure emergence process, moving beyond prior research that takes a snapshot of leadership structure. Moreover, research in this area has demonstrated that by incorporating the hidden assumptions held by different theoretical perspectives into computational models, those perspectives can be integrated.

Future Directions
Following these earlier attempts to understand team leadership through computational modeling, further advances can be made in methodology, theory, and practice.

Methodology Issues
First, in terms of simulation approach, existing computational models of team leadership are mostly built using the agent-based modeling technique (see Table 5.1). Other simulation approaches are available and have been applied in other areas of organizational research, such as system dynamics, genetic algorithms, cellular automata, and hybrid modeling (Davis et al., 2007; Harrison et al., 2007). These alternative approaches can be more broadly adopted in future team leadership research. For example, system dynamics is particularly suitable for modeling feedback processes (Vancouver & Weinhardt, 2012). The basic negative feedback loop (see Figure 5.2) and the more complicated self-regulatory systems built on this basic structure can be readily represented using the system dynamics technique (e.g., Vancouver, Weinhardt et al., 2010). Moreover, different simulation approaches are not categorically different: formal models built using different modeling techniques share the same language (i.e., mathematical equations and rules). Future research building models that integrate multiple dynamic processes related to team leadership can also use the hybrid modeling approach, which is highly customized to the specific research question (also called a "stochastic process" approach; Davis et al., 2007).

Second, future research can further take advantage of virtual experiments to examine boundary conditions and the robustness of relationships of interest. Researchers may wonder whether the relationships of theoretical interest uncovered by a simulation are robust to changes in parameters that are not core to the research question (e.g., the number of iterations or the number of team members). Addressing these concerns through virtual experiments is low-cost and easy to implement. For example, in sensitivity analyses, Oh et al. (2016) found that the number of leaders in the same group did not affect the relationships most central to their inquiry.
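A virtual experiment of this kind can be as simple as a parameter sweep around the focal result. Below is a generic scaffold for such a sensitivity analysis; the model function is a placeholder for whatever simulation is under study, and the parameter values swept are arbitrary examples:

```python
import itertools
import random
import statistics

def run_model(team_size, iterations, seed):
    """Stand-in for any team simulation that returns an outcome of interest."""
    rng = random.Random(seed)
    return statistics.mean(rng.random() for _ in range(team_size * iterations))

# Sweep parameters that are not core to the research question and check that
# the outcome of interest stays stable across them.
for team_size, iterations in itertools.product([5, 10, 20], [100, 500]):
    outcomes = [run_model(team_size, iterations, seed) for seed in range(30)]
    print(team_size, iterations,
          round(statistics.mean(outcomes), 3), round(statistics.stdev(outcomes), 3))
```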
Third, more research that adopts a mixed-method approach is needed to evaluate computational models of team leadership. As shown in Davis et al.'s (2007) roadmap for research using computational modeling, after new theoretical insights are gleaned from virtual experiments, researchers should evaluate the model by comparing simulation results with empirical data. Some team leadership studies have already coupled simulation with empirical studies (e.g., McHugh et al., 2016; Serban et al., 2015; Tarakci et al., 2016; Zhou et al., 2019). Future research can further compare simulation results with findings from existing empirical studies (Vancouver, Weinhardt et al., 2010) or collect primary data using longitudinal designs to test hypotheses revealed by simulations (Wang et al., 2016).
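Quantitative evaluation of this kind can start with a simple discrepancy metric between simulated and observed trajectories, such as root-mean-square error. The trajectories below are invented purely for illustration:

```python
import math

def rmse(simulated, observed):
    """Root-mean-square error between simulated and observed trajectories."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))

# Hypothetical weekly team-performance trajectories
simulated = [0.2, 0.35, 0.5, 0.62, 0.7]
observed  = [0.25, 0.3, 0.55, 0.6, 0.75]
print(rmse(simulated, observed))  # smaller values indicate closer correspondence
```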
Theoretical Advancements

First, more research is needed to further clarify what role leadership plays in a team context. Morgeson et al. (2010) summarized a variety of leadership functions in the team context, such as defining missions, training and developing team members, monitoring progress, and managing team boundaries. From a functional perspective of leadership, these functions can be carried out by a single designated leader or distributed among multiple formal or informal leaders. To better design and distribute leadership responsibilities, it is critical to understand how leadership supports team functioning (i.e., to unpack the dynamic processes through which those in leadership positions influence individual subordinates and the team). Future research can draw from control theory to examine some of the leadership functions discussed by Morgeson et al. For example, to study the training and developing function, future research can examine how formal leaders or more experienced members in self-managing teams train and develop new members while new members influence each other in the learning process. Control theory can also help in understanding time-related concepts in a team context, such as deadlines (e.g., Zhou et al., 2019). For example, future research can examine how the team leadership process unfolds when a team works on multiple tasks with different deadlines and different requirements.

Second, given the emerging interest in process-oriented theories in organizational research, more computational models of team leadership can be developed through integration with other process theories. For example, theories on emergent team processes and team states can serve as the foundation for future research to clarify the role of leadership in these emergent processes. Grand et al. (2016), for instance, developed and tested a computational model of the knowledge emergence process in teams; future research can build on Grand et al.'s model to examine the role of leadership in team knowledge emergence. In addition, team leadership models can be integrated with models of organizational design (e.g., Levinthal & Workiewicz, 2018) to understand
how organizational-level factors influence leadership in smaller organizational units. Likewise, formal models of the emergence of social roles in larger systems (e.g., Eguíluz et al., 2005) can be integrated with leadership theories to further understand the emergence of hierarchies in self-managing teams.

Third, future research can investigate team leadership in new work contexts that use team-based structures. New trends are emerging in the workplace, particularly those propelled by developments in information and communication technologies, such as gig workers supported by online platforms, telework, crowdsourced projects, and online reviews of small businesses. More research is needed to understand the role of leadership in teams working in these new contexts (e.g., Oh et al., 2016). Computational modeling can allow researchers to integrate process mechanisms developed in psychology (e.g., motivations for initiating projects), sociology (e.g., social structures connecting workers), and economics (e.g., game theory on business decisions) to study team leadership in complicated work contexts. Researchers can also use computational modeling to examine how traditional organizations redesign formal leadership structures to support more flexible work schedules, more fluid team membership, and more dynamic career goals of employees (e.g., Levinthal & Workiewicz, 2018).

Applications in Practices
Computational modeling can also serve as an aid for practicing leadership in teams. First, given an endogenous variable of interest to practitioners (e.g., team decision quality) and a set of well-specified process mechanisms, simulation can be used to estimate the values of the exogenous variables that would maximize the value of the endogenous variable. This knowledge about the optimal values of the exogenous variables can help guide practice. For instance, if simulations show which values of certain personality traits (e.g., extraversion) among those in leadership positions maximize team task performance, recruitment and selection practices can target candidates whose personality traits match those optimal values. As another example, if simulation results suggest that certain types of interactions between leaders and the newcomers they supervise (i.e., certain process mechanisms) lead to faster newcomer adjustment, organizations can design socialization programs to stimulate such interactions between leaders and newcomers.

In addition, practitioners can estimate the values of key parameters in a computational model using archival data from a team or a designated leader (for an example of estimating model parameters using empirical data, see Ballard et al., 2016). The estimates can be compared to benchmarks, which can help decide whether interventions on the sources of the data are needed.
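As a toy illustration of parameter estimation, one can search for the parameter value whose simulated trajectory best fits the archival data. The observations and the grid search below are invented for illustration; Ballard et al. (2016) use formal estimation procedures rather than this simple search:

```python
# Hypothetical observations of a team's task progress over time
observed = [0.0, 1.8, 3.1, 4.4, 5.2, 6.1]

def simulate(gain, steps=len(observed), desired=10.0):
    """Trajectory of a simple negative feedback loop (same structure as Figure 5.2)."""
    state, trajectory = 0.0, []
    for _ in range(steps):
        trajectory.append(state)
        state += gain * (desired - state)
    return trajectory

# Grid search for the gain value that best reproduces the observed data
best = min((g / 100 for g in range(1, 100)),
           key=lambda g: sum((s - o) ** 2
                             for s, o in zip(simulate(g), observed)))
print(best)  # estimated gain; compare against a benchmark before intervening
```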
For example, managers leading group decision-making can estimate key parameters in computational models of the group decision-making process (e.g., team members' confidence about teammates) using recordings of team meetings. These estimates can then be compared to benchmarks, that is, values of the key parameters known to be associated with higher-quality team decisions. When the estimated values deviate from the benchmarks (i.e., the optimal values), teams and leaders can be coached.

Finally, through virtual experiments, computational modeling can be used to challenge existing practices or to test new practices before implementing them. For example, Tarakci et al. (2016) used virtual experiments to test the effects of three different leadership structures on group performance, which can be considered three different designs of leadership structure evaluated by an organization. Simulation results in Tarakci et al. suggested that organizations that fix the term of leadership roles risk lowering group performance if those in the leadership roles are not selected based on task competence. In addition, Tarakci et al.'s simulation results suggested that formal leaders should be willing to delegate when a specific task requires competencies possessed by other team members rather than by the formal leaders themselves. Based on such simulation results, organizations can better weigh the costs and benefits of different practices before implementing them.

Note

1. "Group" and "team" are used interchangeably in this manuscript.
References

Antonakis, J. (2017). On doing better science: From thrill of discovery to policy implications. The Leadership Quarterly, 28, 5–21. https://doi.org/10.1016/j.leaqua.2017.01.006
Ballard, T., Yeo, G., Neal, A., & Farrell, S. (2016). Departures from optimality when pursuing multiple approach or avoidance goals. Journal of Applied Psychology, 101, 1056–1066. https://doi.org/10.1037/apl0000082
Bass, B. M. (1990). Bass & Stogdill's handbook of leadership: Theory, research, and managerial applications (3rd ed.). New York, NY: Free Press.
Bloom, M. (1999). The performance effects of pay dispersion on individuals and organizations. Academy of Management Journal, 42, 25–40. https://doi.org/10.2307/256872
Burton, R. M., & Obel, B. (2011). Computational modeling for what-is, what-might-be, and what-should-be studies—and triangulation. Organization Science, 22, 1195–1202. https://doi.org/10.1287/orsc.1100.0635
Carter, D. R., DeChurch, L. A., Braun, M. T., & Contractor, N. S. (2015). Social network approaches to leadership: An integrative conceptual review. Journal of Applied Psychology, 100, 597–622. https://doi.org/10.1037/a0038922
Carver, C. S., & Scheier, M. F. (1998). On the self-regulation of behavior. New York, NY: Cambridge University Press. https://doi.org/10.1017/CBO9781139174794
Castillo, E. A., & Trinh, M. P. (2018). In search of missing time: A review of the study of time in leadership research. The Leadership Quarterly, 29, 165–178. https://doi.org/10.1016/j.leaqua.2017.12.001
Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83, 234–246. https://doi.org/10.1037/0021-9010.83.2.234
Chen, S. (2010). The role of ethical leadership versus institutional constraints: A simulation study of financial misreporting by CEOs. Journal of Business Ethics, 99, 33–52. https://doi.org/10.1007/s10551-010-0625-8
Chiang, Y., & Hsu, Y. (2017). Direct election of group decision-makers can facilitate cooperation in the public goods game. Group Decision and Negotiation, 26, 197–213. https://doi.org/10.1007/s10726-016-9478-6
Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32, 480–499. https://doi.org/10.5465/amr.2007.24351453
Day, D. V. (2012). Leadership. In S. W. J. Kozlowski (Ed.), The Oxford handbook of organizational psychology (Vol. 1, pp. 696–732). New York, NY: Oxford Academic. https://doi.org/10.1093/oxfordhb/9780199928309.013.0022
Dinh, J. E., Lord, R. G., Gardner, W. L., Meuser, J. D., Liden, R. C., & Hu, J. (2014). Leadership theory and research in the new millennium: Current theoretical trends and changing perspectives. The Leadership Quarterly, 25, 36–62. https://doi.org/10.1016/j.leaqua.2013.11.005
Dionne, S. D., & Dionne, P. J. (2008). Levels-based leadership and hierarchical group decision optimization: A simulation. The Leadership Quarterly, 19, 212–234. https://doi.org/10.1016/j.leaqua.2008.01.004
Dionne, S. D., Sayama, H., Hao, C., & Bush, B. J. (2010). The role of leadership in shared mental model convergence and team performance improvement: An agent-based computational model. The Leadership Quarterly, 21, 1035–1049. https://doi.org/10.1016/j.leaqua.2010.10.007
Eguíluz, V. M., Zimmermann, M. G., Cela-Conde, C. J., & San Miguel, M. (2005). Cooperation and the emergence of role differentiation in the dynamics of social networks. American Journal of Sociology, 110, 977–1008. https://doi.org/10.1086/428716
Fischer, T., Dietz, J., & Antonakis, J. (2017). Leadership process models: A review and synthesis. Journal of Management, 43, 1–28. https://doi.org/10.1177/0149206316682830
Fitzsimons, G. M., Finkel, E. J., & vanDellen, M. R. (2015). Transactive goal dynamics. Psychological Review, 122, 648–673. https://doi.org/10.1037/a0039654
Fleishman, E. A., Mumford, M. D., Zaccaro, S. J., Levin, K. Y., Korotkin, A. L., & Hein, M. B. (1991). Taxonomic efforts in the description of leader behavior: A synthesis and functional interpretation. The Leadership Quarterly, 2, 245–287. https://doi.org/10.1016/1048-9843(91)90016-U
Graen, G. B., & Uhl-Bien, M. (1995). Relationship-based approach to leadership: Development of leader-member exchange (LMX) theory of leadership over 25 years: Applying a multi-level multi-domain perspective. The Leadership Quarterly, 6, 219–247. https://doi.org/10.1016/1048-9843(95)90036-5
Grand, J. A., Braun, M. T., Kuljanin, G., Kozlowski, S. W. J., & Chao, G. T. (2016). The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams. Journal of Applied Psychology, 101, 1353–1385. https://doi.org/10.1037/apl0000136
Halevy, N., Chou, E. Y., Galinsky, A. D., & Murnighan, J. K. (2012). When hierarchy wins: Evidence from the National Basketball Association. Social Psychological and Personality Science, 3, 398–406. https://doi.org/10.1177/1948550611424225
Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32, 1229–1245. https://doi.org/10.5465/AMR.2007.26586485
Hazy, J. K. (2007). Computer models of leadership: Foundations for a new discipline or meaningless diversion? The Leadership Quarterly, 18, 391–410. https://doi.org/10.1016/j.leaqua.2007.04.007
Henderson, D. J., Liden, R. C., Glibkowski, B. C., & Chaudhry, A. (2009). LMX differentiation: A multilevel review and examination of its antecedents and outcomes. The Leadership Quarterly, 20, 517–534. https://doi.org/10.1016/j.leaqua.2009.04.003
Hersey, P., & Blanchard, K. (1977). Management of organization behavior: Utilizing human resources (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
House, R. J. (1971). A path-goal theory of leadership effectiveness. Administrative Science Quarterly, 16, 321–339. https://doi.org/10.2307/2391905
Kerr, S., & Jermier, J. (1978). Substitutes for leadership: Their meaning and measurement. Organizational Behavior & Human Performance, 22, 375–403. https://doi.org/10.1016/0030-5073(78)90023-5
Klein, K. J., Ziegert, J. C., Knight, A. P., & Xiao, Y. (2006). Dynamic delegation: Shared, hierarchical, and deindividualized leadership in extreme action teams. Administrative Science Quarterly, 51, 590–621. https://doi.org/10.2189/asqu.51.4.590
Kozlowski, S. W. J., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2013). Advancing multilevel research design: Capturing the dynamics of emergence. Organizational Research Methods, 16, 581–615. https://doi.org/10.1177/1094428113493119
Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7, 77–124. https://doi.org/10.1111/j.1529-1006.2006.00030.x
Kozlowski, S. W. J., Mak, S., & Chao, G. T. (2016). Team-centric leadership: An integrative review. Annual Review of Organizational Psychology and Organizational Behavior, 3, 21–54. https://doi.org/10.1146/annurev-orgpsych-041015-062429
Levinthal, D. A., & Workiewicz, M. (2018). When two bosses are better than one: Nearly decomposable systems and organizational adaptation. Organization Science, 29, 207–224. https://doi.org/10.1287/orsc.2017.1177
Liden, R. C., Wayne, S. J., & Stilwell, D. (1993). A longitudinal study on the early development of leader-member exchanges. Journal of Applied Psychology, 78, 662–674. https://doi.org/10.1037/0021-9010.78.4.662
McComb, S. A. (2007). Mental model convergence: The shift from being an individual to being a team member. In F. Dansereau & F. J. Yammarino (Eds.), Research in multi-level issues (Vol. 6, pp. 95–147). Amsterdam: Elsevier. https://doi.org/10.1016/S1475-9144(07)06005-5
McGrath, J. E. (1962). Leadership behavior: Some requirements for leadership training. Washington, DC: U.S. Civil Service Commission, Office of Career Development.
McHugh, K. A., Yammarino, F. J., Dionne, S. D., Serban, A., Sayama, H., & Chatterjee, S. (2016). Collective decision making, leadership, and collective intelligence: Tests with agent-based simulations and a field study. The Leadership Quarterly, 27, 218–241. https://doi.org/10.1016/j.leaqua.2016.01.001
Mohammed, S., Ferzandi, L., & Hamilton, K. (2010). Metaphor no more: A 15-year review of the team mental model construct. Journal of Management, 36, 876–910. https://doi.org/10.1177/0149206309356804
Morgeson, F. P., DeRue, D. S., & Karam, E. P. (2010). Leadership in teams: A functional approach to understanding leadership structures and processes. Journal of Management, 36, 5–39. https://doi.org/10.1177/0149206309347376
Oh, W., Moon, J., Hahn, J., & Kim, T. (2016). Leader influence on sustained participation in online collaborative work communities: A simulation-based approach. Information Systems Research, 27, 383–402. https://doi.org/10.1287/isre.2016.0632
Park, S., Sturman, M. C., Vanderpool, C., & Chan, E. (2015). Only time will tell: The changing relationships between LMX, job performance, and justice. Journal of Applied Psychology, 100, 660–680. https://doi.org/10.1037/a0038907
Peterson, R. S. (1997). A directive leadership style in group decision making can be both virtue and vice: Evidence from elite and experimental groups. Journal of Personality and Social Psychology, 72, 1107–1121. https://doi.org/10.1037/0022-3514.72.5.1107
Serban, A., Yammarino, F. J., Dionne, S. D., Kahai, S. S., Hao, C., McHugh, K. A., Sotak, K. L., Mushore, A. B. R., Friedrich, T. L., & Peterson, D. R. (2015). Leadership emergence in face-to-face and virtual teams: A multi-level model with agent-based simulations, quasi-experimental and experimental tests. The Leadership Quarterly, 26, 402–418. https://doi.org/10.1016/j.leaqua.2015.02.006
Tarakci, M., Greer, L. L., & Groenen, P. J. (2016). When does power disparity help or hurt group performance? Journal of Applied Psychology, 101, 415–429. https://doi.org/10.1037/apl0000056
Vancouver, J. B., Tamanini, K. B., & Yoder, R. J. (2010). Using dynamic computational models to reconnect theory and research: Socialization by the proactive newcomer as example. Journal of Management, 36, 764–793. https://doi.org/10.1177/0149206308321550
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15, 602–623. https://doi.org/10.1177/1094428112449655
Vancouver, J. B., Weinhardt, J. M., & Schmidt, A. M. (2010). A formal, computational theory of multiple-goal pursuit: Integrating goal-choice and goal-striving processes. Journal of Applied Psychology, 95, 985–1008. https://doi.org/10.1037/a0020628
Wang, M., Zhou, L., & Zhang, Z. (2016). Dynamic modeling. Annual Review of Organizational Psychology and Organizational Behavior, 3, 241–266. https://doi.org/10.1146/annurev-orgpsych-041015-062553
Will, T. E. (2016). Flock leadership: Understanding and influencing emergent collective behavior. The Leadership Quarterly, 27, 261–279. https://doi.org/10.1016/j.leaqua.2016.01.002
Zhou, L., Wang, M., & Vancouver, J. B. (2019). A formal model of leadership goal striving: Development of core process mechanisms and extensions to action team context. Journal of Applied Psychology, 104, 388–410. https://doi.org/10.1037/apl0000370
6

USING SIMULATIONS TO PREDICT THE BEHAVIOR OF GROUPS AND TEAMS

Deanna M. Kennedy and Sara A. McComb
Introduction
Organizations need constituents to work together to achieve strategic goals and missions. The way constituents are organized may depend on which goals the organization needs met, and when. One commonly used organizational form is the group, where groups are "bounded, structured entities that emerge from the purposive, interdependent actions of individuals" (p. 95, McGrath, Arrow, & Berdahl, 2000). Teams are a type of group in which individuals share a common goal for a temporary or limited period of time (Goodman & Goodman, 1976). We define teams as "a small group of people with complementary skills who are committed to a common purpose, performance goals and approach for which they are mutually accountable" (p. 112, Katzenbach & Smith, 1993). The proliferation of groups and teams in industry has increased the pressure to analyze how these organizational forms collaborate to achieve their goals and to develop theoretically driven tactics to improve how, when, and what is achieved.

For over a hundred years, researchers have pursued a greater understanding of work groups and teams (Mathieu, Hollenbeck, van Knippenberg, & Ilgen, 2017). Yet much of the research to date has been cross-sectional and/or focused on specific constructs in isolation; particularly concerning is the treatment of team processes as static constructs (Kozlowski & Chao, 2018). The recent availability of, and accessibility to, computing power is changing how researchers are able to advance our understanding. Indeed, the use of simulation in studying organizational phenomena (Vancouver & Weinhardt, 2012; Weinhardt & Vancouver, 2012) propels the investigation of groups and teams in new directions, for instance, being able to manage large quantities of information about groups and
teams (Kozlowski, Chao, Chang, & Fernandez, 2015), as well as being able to capture and assess the dynamics that occur over time (Kozlowski, Chao, Grand, Braun, & Kuljanin, 2013). As such, researchers can garner insights about the complex dynamics at play (Mathieu, Gallagher, Domingo, & Klock, 2019) and apply theory, measurement, and methodology appropriately (Kozlowski, 2015; Kozlowski & Chao, 2018).

To better understand the power and potential of computational methods that advance the science of groups and teams, we conduct a systematic, narrative review (Siddaway, Wood, & Hedges, 2019), with a focus on studies where simulation techniques are being used to better understand groups and teams. A critical mass of group and team research studies published between 1998 and 2018 has been compiled and summarized herein to describe the development of this line of inquiry and demonstrate how computational modeling and simulation contribute to group and team science. Simultaneously, our efforts provide a summary of the key themes and trends demonstrated in this body of evidence and highlight considerations for future inquiry. The results of our efforts may motivate more researchers to incorporate simulation and computational modeling into their investigations of groups and teams.

Literature Search Procedure
To uncover the types and trends of simulation in groups and teams, we conducted a systematic review of the literature following the PRISMA (i.e., Preferred Reporting Items for Systematic Reviews and Meta-analyses) approach (Moher, Liberati, Tetzlaff, Altman, & The PRISMA Group, 2009). Figure 6.1 provides an overview of our process. We searched within the Web of Science database to ensure broad inclusion of research from relevant fields including business, computer science, engineering, psychology, and social science. Our primary search terms were group, team, simulation, and computational modeling. The initial search resulted in 330 published articles. This corpus was augmented with 22 papers from other sources (e.g., dissertations, book chapters, legacy articles). After duplicates were removed, the resulting 349 papers were filtered to remove those focused on robotic teams, leaving 329 papers. We then reviewed the set of papers and omitted those that met our exclusion criteria, which included any study focused on individuals within groups or teams or on the leaders over groups and teams. A chapter addressing such scenarios can be found in this book (see the chapter by Zhou). We also excluded papers that were not representative of a workplace or organizational setting (e.g., classrooms), that did not apply simulation and computational modeling, that contained behavioral simulations, or that addressed non-human groups and teams (e.g., animals, insects). This screening resulted in 43 papers for inclusion. These included studies were reviewed to identify the research questions, focal variables, simulation approaches, modeling techniques, and insights gained.
FIGURE 6.1 Literature review process for simulation of groups and teams. (PRISMA flow: identification via database search, n = 330, plus records from other sources, n = 22; removal of duplicates, n = 349; filtering out robot teams, n = 329; filtering of papers outside of scope, final n = 43.)
Although our focus is on simulation and computational modeling, we quickly realized that different simulation approaches and modeling techniques are used by researchers. To provide greater clarity, we categorize approaches and techniques. The simulation approaches comprise four categories and are summarized in Table 6.1. The simulation category is applied to studies that take a general approach of imitating the real world in a virtual environment to obtain numerical results. Typically, simulation is used to explore what-if scenarios where the impact of parameter adjustments on outcomes may be examined. For example, a researcher may adjust a parameter controlling task complexity to examine the impact on team performance. The Monte Carlo simulation category refers to the use of repeated random sampling to generate numerical results. This sampling technique is often embedded within the virtual environment to explore the impact across a distribution of an input. For example, a researcher may draw from a distribution of project creativity and a distribution of team cohesion to see how the two inputs together affect team performance. The multi-agent simulation category refers to specific real-world scenarios where multiple agents are programmed with identical protocols. In such environments, the protocols assigned to all agents are explored to understand how the group interacts and/or affects the team's performance. For example, by setting a decision-making protocol, the researcher can explore the effectiveness of a team moving under one decision-making directive in achieving project goals. Finally, the agent-based modeling category refers to the simultaneous simulation of agent actions and interactions where agents may have independent protocols and characteristics. That is, a researcher may represent team members as agents in the model and institute different decision-making protocols to explore how effective some or all of the team members are in generating project outcomes.

TABLE 6.1 Descriptions of Simulation Approaches and Mathematical Techniques Identified

Review category of simulation approach, with description:
- Simulation: the implementation of modeling that produces numerical outcomes.
- Monte Carlo simulation: the generation of random values from a probability distribution.
- Multi-agent simulation: an approach that directs multiple agents with identical protocols.
- Agent-based modeling: an approach that directs agents with potentially uniquely defined protocols.

Review category of mathematical technique, with description:
- Agent operators: rules-based systems used to set up, direct, or evaluate actions or events of the simulation (e.g., if-then scenarios).
- Mathematical functions: equation-based systems used to set up and motivate the activities or events of the simulation (e.g., neural network modeling, NK model, game theory model).
- Optimization: algorithmic formulations used to set up, direct, or evaluate actions or events of the simulation (e.g., genetic algorithm optimization, particle swarm optimization, simulated annealing optimization, mixed integer linear optimization, multidisciplinary optimization, multi-objective optimization).
- Statistical analysis: statistical analysis techniques used to set up, direct, or evaluate actions or events of the simulation (e.g., regression, correlational analysis, Bayesian network analysis, nonlinear difference equations).
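To make the Monte Carlo category concrete, the sketch below implements the creativity-and-cohesion example described above; the distributions, weights, and noise term are our own illustrative assumptions, not drawn from any reviewed study:

```python
import random
import statistics

def team_performance(creativity, cohesion):
    # stand-in outcome function; any model of interest could go here
    return 0.6 * creativity + 0.4 * cohesion + random.gauss(0, 0.05)

# Monte Carlo: repeatedly sample the two inputs from assumed distributions
draws = [team_performance(random.betavariate(2, 5),   # project creativity
                          random.gauss(0.7, 0.1))     # team cohesion
         for _ in range(10_000)]
print(statistics.mean(draws), statistics.quantiles(draws, n=4))
```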
Modeling techniques also comprise four categories, summarized in Table 6.1. The techniques represent the computational model that drives the simulation, specifically giving the formal description of how, what, and when events or agents will occur over time (Grand, Braun, Kuljanin, Kozlowski, & Chao, 2016; Harrison, Lin, Carroll, & Carley, 2007). We use the category of agent operators to represent the rules-based systems used to set up, direct, or evaluate actions or events of the simulation. For example, an operator may establish the decision-making protocol of agents such that if a blue square is found in the environment, then the agents change course by 180 degrees. The mathematical function category refers to the use of equation-based systems to set up, direct, or evaluate actions or events of the simulation. For example, using the Euclidean distance between agents and green squares in the environment, a protocol could direct agents to move toward the closest green square. The optimization category is a particular equation-based approach that seeks to optimize the actions or events of the simulation. For example, seeking to optimize performance through the combination of decision making and team cohesion may drive team member selection. Finally, the statistical analysis category refers to the use of statistical outcomes to set up, direct, or evaluate actions or events of the simulation. For example, regression may be used to examine how decision making and cohesion relate to performance.
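The first two technique categories can be made concrete in a few lines. The sketch below pairs an if-then agent operator with a Euclidean-distance function, following the blue-square and green-square examples above; the coordinates, step size, and encounter probability are illustrative choices:

```python
import math
import random

GREEN = [(2, 3), (8, 1), (5, 7)]   # green squares in a toy 2-D environment

def step(agent):
    # mathematical function: head toward the nearest green square (Euclidean distance)
    target = min(GREEN, key=lambda g: math.dist(agent, g))
    dx, dy = target[0] - agent[0], target[1] - agent[1]
    norm = math.hypot(dx, dy) or 1.0
    move = (0.5 * dx / norm, 0.5 * dy / norm)
    # agent operator (if-then rule): reverse course if a blue square is encountered
    if random.random() < 0.1:                # stand-in for spotting a blue square
        move = (-move[0], -move[1])          # change course by 180 degrees
    return (agent[0] + move[0], agent[1] + move[1])

agent = (0.0, 0.0)
for _ in range(20):
    agent = step(agent)
print(agent)
```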
Literature Review Trends and Observations

The 43 papers identified for this literature review are summarized in Table 6.2. Our results reveal a number of interesting observations about the use of simulation in the study of groups and teams. Herein, we describe five notable observations about the breadth of this work that demonstrate the applicability and versatility of simulation for advancing theory about groups and teams. First, an increasing number of papers apply simulation and computational modeling to study groups and teams, as can be seen in Figure 6.2. Second, the contributions of these papers are impacting several disciplines based on publication field; that is, the papers were published in journals from a variety of fields including cognitive/computer science (4 papers), communication (1), economics (1), engineering (7), engineering management (3), human factors (4), industrial design (1), information science (1), interdisciplinary (5), management (6), management science (5), natural science (1), psychology (3), and sociology (1). Third, the focal variables cover a broad array of topics representing inputs, processes, and outputs. For example, the inputs consider how the team is set up, which can include the characteristics of the members or the organizational structure itself (e.g., human capital (Ashworth & Carley, 2006), power disparity (Tarakci, Greer, & Groenen, 2016), team structure (Carley, Prietula, & Lin, 1998)). In terms of processes, researchers have focused on behavioral processes (e.g., communication (Kennedy, McComb, & Vozdolska, 2011; Kennedy, Sommer, & Nguyen, 2017; Patrashkova & McComb, 2004), cooperation (Kang, 2007), coordination (Christensen, Christiansen, Jin, Kunz, & Levitt, 1999; Rojas-Villafane, 2010; Xu, Liu, & Li, 2014)) and cognitive processes (e.g., mental model convergence (Dionne, Sayama, Hao, & Bush, 2010)).
TABLE 6.2 Reviewed Literature of Simulation About Group and Team Behaviors
Research Question
Focal Variable(s)
Modeling Technique
Simulation Approach
Insights Gained
Ashworth & Carley
2006
How do social position and knowledge held by members affect team performance?
Human capital
Agent operators
Multi-agent simulation
AustinBreneman, Yu, & Yang
2014
Information sharing bias
Optimization
Simulation
Barkoczi & Galesic
2016
What are the effects of challenges (e.g., biased information passing) that design teams face? How do social learning strategies of individual team members affect group performance?
Social learning strategies
Agent operators
Simulation
Bosse, Duell, Memon, Treur, & van der Wal Caimo & Lomi
2015
How do emotion contagion processes unfold in groups?
Emotion contagion
Agent operators
Multi-agent simulation
2015
Under what conditions does the sharing of knowledge occur?
Knowledge sharing
Statistical analysis
Simulation
A member’s knowledge and the task dimension may be more impactful on team performance than social position. Different levels of biased information sharing negatively impacted the time to reach solutions. Different types of network structures matter depending on the learning strategy of the team member. The monitoring of group emotion levels over time can indicate the need for group support actions. Where knowledge exists in an organization may matter and how boundaries across sub units are spanned by managers. (Continued)
151
Year
Using Simulations to Predict the Behavior of Groups and Teams
Author(s)
152
TABLE 6.2 (Continued)
Year
Research Question
Focal Variable(s)
Modeling Technique
Simulation Approach
Insights Gained
Carley & Schreiber
2002
How does the introduction of different types of databases affect team outcomes?
Externalized transactive memory system design
Agent operators
Multi-agent simulation
Carley, Prietula, & Lin
1998
Does group member cognition and organizational design affect organizational performance?
Cognitive capability and organizational design
Mathematical functions
Multi-agent simulation
Carvalho, Oliveira, & Scavarda
2015
Tactical capacity planning
Optimization
Simulation
Chae, Seo, & Lee
2015
Team creativity
Agent operators
Multi-agent simulation
Christensen, Christiansen, Jin, Kunz & Levitt
1999
How does tactical capacity planning help the manufacturing team make decisions? What are the effects of task difficulty and team diversity on team creativity? How is project execution affected by planning and managing activities?
The introduction of different types of databases negatively affects the efficiency of transitionally structured teams. Member cognition and organizational design (i.e., team structure) do interact to affect organizational performance. The team may adapt capacity to address demand or optimally balance demand. Team diversity and managerial facilitation affect team creativity.
Coordination requirements
Agent operators
Simulation
The performance of the team in executing the plan depends on project policies and the project team.
Deanna M. Kennedy and Sara A. McComb
Author(s)
Agent-based model simulation
Team performance sensitivity
Agent operators
Agent-based model simulation
How does shared mental model convergence contribute to team performance?
Mental model convergence and leadership
Agent operators
Monte Carlo simulation
2010
How does individual performance inform formation of crossfunctional teams?
Team formation
Optimization
Simulation
2008a, 2008b
How does the timing of contacts (faultline conditions) affect team cohesion?
Faultlines
Agent operators
Agent-based model simulation
Crowder, Robinson, Hughes, & Sim
2012
Dionne, Sayama, Hao, & Bush
2010
Feng, Jiang, Fan & Fu
Flache & Mäs,
What decision rules explain the cooperation within teams in an intra-team social dilemma where competition is present? How does the interaction of individual- and teamlevel attributes and processes affect team performance?
Self-consistency may explain the way team members decide to help the team cooperate in a social dilemma. The team’s performance (work time, completion time, quality) depends on the difficulty of subtasks, individual characteristics, and teamlevel activities. Leadership and team characteristics affect mental model convergence and performance. Individual performance in combination with collaborative performance is relevant for team selection. There are negative effects due to faultlines because of subgroup formation, and when these subgroups form matters. (Continued)
153
Agent operators
2006
Using Simulations to Predict the Behavior of Groups and Teams
Decision rules
Coen
154
TABLE 6.2 (Continued)
Year
Research Question
Focal Variable(s)
Modeling Technique
Simulation Approach
Insights Gained
Frantz & Carley
2013
Cultural awareness and information sharing
Agent operators
Agent-based model simulation
Gavetti & Warglien
2015
What are the effects at the work-team level of pre existing organization cultures and post-merger integration dynamics? How does heedfulness affect the collective interpretation of an environment?
Collective interpretation
Mathematical functions
Multi-agent simulation
Grand, Braun, Kuljanin, Kozlowski, & Chao
2016
How does collectively held knowledge (i.e., shared knowledge) emerge in teams?
Knowledge emergence
Agent operators
Agent-based model simulation
Huang & Chen
2006
Team performance
Agent operators
Simulation
Juran & Schruben
2004
How can we estimate project completion time based on project task and team information? How can worker personality and demographic information be used for team formation?
Cultural complexity and motivation to share information depend on task and cultural knowledge. Depending on the novelty of the situation, different approaches need to be taken for collective interpretation. Shared knowledge is generated when teams share information efficiently and have communication strategies for information sharing. Task and team attributes contribute to lead-time variability.
Team member characteristics
Mathematical functions
Simulation
Individual worker behavior and personalities contribute to team performance and having more information gives more accurate expectations.
Deanna M. Kennedy and Sara A. McComb
Author(s)
2007
How do activeness and cooperativeness affect team efficiency?
Activeness and cooperativeness
Agent operators
Multi-agent simulation
Kennedy & McComb
2014
When and in what order should team processes unfold during teamwork?
Team processes
Mathematical functions & optimization
Simulation
Kennedy, McComb, & Vozdolska
2011
What is the optimal amount of use for phone, email, and face-to face communication at different levels of project complexity?
Communication
Statistical analysis
Monte Carlo simulation
Kennedy, Sommer, & Nguyen
2017
Communication
Optimization
Simulation
Lunesu, Münch, Marchesi, & Kuhrmann
2018
What is the optimal use of media options at different levels of interdependence and information requirements? How do setup and processes help a distributed software development project team perform?
Project planning policies
Agent operators
Simulation
The amount of information and active style affect cooperativeness and in turn team efficiency. Team performance is improved when certain team processes occur and also how processes occur in a certain order. The amount of use for phone, email, and faceto-face communication depends on the complexity (ambiguity and multiplicity) of the project. The use of media changes based on the interdependence and information required. Performance trade-offs may be brought on by throughput, duration, project size, team size and other work limits.
155
(Continued)
Using Simulations to Predict the Behavior of Groups and Teams
Kang
156
TABLE 6.2 (Continued)
Year
Research Question
Focal Variable(s)
Modeling Technique
Simulation Approach
Insights Gained
MartinezMiranda & Pavon
2012
How can we better compose a virtual team?
Work team configurations and trust
Agent operators
Agent-based model simulation
Mäs, Flache, Takács & Jehn
2013
What are the effects of demographic crisscrossing and faultlines on subgroup polarization and consensus building?
Faultlines
Agent operators
Agent-based model simulation
McComb, Cagan, & Kotovsky
2015
Team performance sensitivity
Optimization
Simulation
Millhiser, Coen, & Solow
2011
How does cognitive aspects of teamwork affect team-based engineering designs? How does interdependence affect team selection?
Team member assignment policies
Agent operators
Simulation
Nieboer
2015
Preference aggregation
Statistical analysis
Simulation
Trust is relevant to team performance and can be used to select an optimal work team. Demographic crisscrossing can help teams with faultlines overcome polarization and, depending on the level of faultlines, consensus may be reached faster. Cognitive phenomena affect the success of teams in solving their tasks. Prior performance can be used to inform team selection when interdependence is needed. The decision scheme preferences by members do not adequately predict group decisions.
How do group member decision scheme preferences relate to group decisions?
Deanna M. Kennedy and Sara A. McComb
Author(s)
2006
How does the setup of work groups impact transactive memory systems?
Transactive memory systems
Statistical analysis
Agent-based model simulation
Patrashkova & McComb
2004
What drives the curvilinearity of the communicationperformance relationship?
Communication
Agent operators
Simulation
Ren, Carley, & Argote
2006
How does transactive memory affect team outcomes?
Transactive memory systems
Agent operators
Multi-agent simulation
Rojas-Villafane
2010
How is coordination and performance affected when teams work in complex job environments?
Coordination mechanisms
Optimization
Agent-based model simulation
Starting knowledge level, accuracy of expertise, team size, and communication affect transactive memory systems. The peak in the inverse curvilinear relationship between communication and performance occurs more quickly via asynchronous media than via synchronous media. Transactive memory can help reduce group response time and improve decision quality. The coordination load and performance of the team change depending on team composition, coordination mechanisms, and job structure. (Continued)
Using Simulations to Predict the Behavior of Groups and Teams
Palazzolo, Serb, She, Su, & Contractor
157
Sinclair, Siemieniuch, Haslam, Henshaw, & Evans (2012). Research question: How do human factors (selection, training, process design, interactions, culture, etc.) influence the likelihood of team success? Focal variable(s): team success predictions. Modeling technique: agent operators. Simulation approach: simulation. Insights gained: Human factors play an important role in team success, but communication and individual skill can help recover some success if lost.

Solow, Vairaktarakis, Piderit, & Tsai (2002). Research question: How do interactions affect a team's expected performance and the number of replacements needed under different replacement policies? Focal variable(s): team member replacement policies. Modeling technique: mathematical functions. Simulation approach: simulation. Insights gained: The amount of interaction generates a trade-off between performance and replacement activities under different replacement policies.

Son & Rojas (2011). Research question: How does collaboration in construction project team networks evolve over time? Focal variable(s): collaboration. Modeling technique: mathematical functions. Simulation approach: agent-based model simulation. Insights gained: The time it takes for collaboration across the network to reach stable states depends on the number of individuals familiar with others and the effort needed to bridge organizations.
Tarakci, Greer, & Groenen (2016). Research question: How do different levels of power disparity affect group performance? Focal variable(s): power disparity. Modeling technique: optimization. Simulation approach: agent-based model simulation. Insights gained: Examining assumptions about the competency of those atop hierarchies at group tasks and about the potential of equality helps explain past findings.

Wong & Burton (2000). Research question: What strategies improve performance in virtual settings? Focal variable(s): team success predictions. Modeling technique: agent operators. Simulation approach: multi-agent simulation. Insights gained: The virtual team context creates varying effects on different facets of team performance.

Xia, Hu, & Jiang (2016). Research question: How does communication affect the relationship between knowledge and task assignment over time? Focal variable(s): learning. Modeling technique: agent operators. Simulation approach: agent-based model simulation. Insights gained: Communication affects team learning earlier rather than later during a project; in addition, random assignment of tasks leads to better performance than other predetermined task assignment strategies.

Xu, Liu, & Li (2014). Research question: How does peer-to-peer coordination improve large-scale teamwork? Focal variable(s): coordination. Modeling technique: agent operators. Simulation approach: multi-agent simulation. Insights gained: Team coordination may be adjusted to speed up team performance.

Yildirim, Ugurlu, Basar, & Yuksekyildiz (2017). Research question: Can we identify important human factor-related problems in maritime accidents? Focal variable(s): human error. Modeling technique: statistical analysis. Simulation approach: Monte Carlo simulation. Insights gained: Major issues arise from human error and team failure.
FIGURE 6.2 Articles by year applying simulation to study groups and teams.
(e.g., Crowder, Robinson, Hughes, & Sim, 2012) but also other aspects of effectiveness such as collective interpretation (Gavetti & Warglien, 2015), knowledge sharing (Caimo & Lomi, 2015), and team creativity (Chae, Seo, & Lee, 2015). The insights garnered from the reviewed research efforts include creating prescriptive considerations for setting up groups and teams for success before they get started on the task. For example, implications suggest that teamwork may benefit from such activities as devising strategies for team formation (Feng, Jiang, Fan & Fu, 2010; Juran & Schruben, 2004; Millhiser, Coen, & Solow, 2011), project planning (Lunesu, Münch, Marchesi, & Kuhrmann, 2018), and identifying how to undertake effective learning (Barkoczi & Galesic, 2016; Xia, Hu, & Jiang, 2016). As well, the research suggests that opportunities may arise to improve group and team processes during the task, for example, managing when team processes shift (Kennedy & McComb, 2014), faultlines are created
(Flache & Mäs, 2008a, 2008b; Mäs, Flache, Takács, & Jehn, 2013), or knowledge emerges (Grand et al., 2016). Thus, the variety of inputs, processes, and outputs under investigation has led to a multiplicity of insights that provide theoretical and practical implications for future research and management opportunities.

Fourth, in examining the aforementioned constructs, researchers took either a controlled or a holistic approach. Using a controlled approach, researchers focus on specific relationships in a targeted manner, which would be analogous to a laboratory experiment that isolates a particular phenomenon of interest. For instance, Kennedy and McComb (2014) employed communication content in a controlled approach to examine how team processes unfold during teamwork, and Tarakci et al. (2016) focused on how power disparity impacts team performance. Alternatively, the holistic approach is used to examine how team activity occurs over time, which would be more analogous to an observational study of process changes over time. For example, Patrashkova and McComb (2004) employed a holistic approach by generating tasks with varying requirements that could be met through communication among team members with different characteristics, as did Grand et al. (2016) in their examination of how knowledge emerges through team member engagement processes.

Fifth, as illustrated by the application of our categories for simulation and modeling techniques, researchers have utilized a variety of tools. Based on the reviewed literature, the employment of simulation approaches over time can be seen in Figure 6.3, where the cumulative numbers of studies using each approach are plotted. As the graph shows, the use of simulation has grown steadily over time. Interestingly, the uptake of agent-based modeling has outpaced the use of multi-agent simulation in recent years.

FIGURE 6.3 Cumulative number of studies using each simulation approach over time.

Specifically, in our dataset we found a number of studies that use simulation to assess the way different conditions affect an outcome like team processes (Kennedy & McComb, 2014; Lunesu et al., 2018) or team formation (Feng et al., 2010; Juran & Schruben, 2004; Millhiser et al., 2011). Researchers were found to use Monte Carlo simulation to bootstrap from a distribution and analyze the generated dataset(s). For example, Yildirim, Ugurlu, Basar, and Yuksekyildiz (2017) parsed a dataset of maritime accidents and used bootstrapping to uncover the differences among accidents that may be attributable to human error. Several papers apply multi-agent simulation to examine how teams collectively achieve outcomes (e.g., multiple agents and knowledge formation, Palazzolo, Serb, She, Su, & Contractor, 2006; multiple agents and knowledge sharing, Ren, Carley, & Argote, 2006). Finally, we found that researchers use agent-based modeling to study the way phenomena emerge, like coordination (Rojas-Villafane, 2010) or knowledge (Grand et al., 2016), as groups or teams interact.

The modeling techniques varied across the dataset. In 24 of the 43 papers (slightly over half), researchers use agent operators to sequence actions or events based on different decision rules. We also found several different optimization techniques utilized to identify boundary conditions or best-case scenarios for comparison. For example, researchers drew insight from the team outcomes generated when assessed or motivated by particle swarm optimization (Tarakci et al., 2016), simulated annealing optimization (McComb, Cagan, & Kotovsky, 2015), multi-objective optimization (Feng et al., 2010), and mixed integer linear optimization (Kennedy et al., 2017). Fewer papers reported statistical analysis as a mechanism for operationalizing the actions or events unfolding in the simulation environment, whether standard (Yildirim et al., 2017) or nonlinear (Kennedy et al., 2011) statistics. Finally, a small number of papers were found that use mathematical functions to motivate actions or events (e.g., cognition and organizational design; Carley et al., 1998).

Discussion
Over the past 20 years, the growing number of articles examining groups and teams via simulation demonstrates the versatility and promise of this approach to theory development. Studies have examined issues ranging from team composition strategies to specific team processes such as communication and cognition to varying support structures like transactive memory systems and coordination mechanisms. The methodologies employed are also diverse and include a range of simulation approaches: simulation in general, Monte Carlo simulation, multi-agent simulation, and agent-based modeling. These approaches have been paired with computational methods including agent operators, mathematical functions, optimization techniques, and statistical analysis to set up, direct, or evaluate actions or events of the simulation. Taken together, this body of evidence exemplifies the efficacy and flexibility of simulation, while offering insights about future opportunities to employ it in the study of groups and teams. Moreover, we note that relevant studies have been published across a variety of fields. Therefore, researchers are encouraged to cast a wide net when identifying relevant studies to inform their own inquiries.

Theoretical Considerations and Opportunities
Researchers opting to employ simulation to study teams must exercise rigor as they construct simulations and computational models to ensure appropriate alignment with real-world phenomena. In addition to the examples provided in the articles reviewed herein, several resources are available to guide study development, including those by Davis, Eisenhardt, and Bingham (2007), Harrison et al. (2007), and Kennedy and McComb (2017). In particular, care is necessary in defining the boundary around the system of interest, aligning the level of interest with the level of theoretical development desired, and verifying that the model is properly validated. Boundary definition is critical to establish clarity about what can, and cannot, be inferred from the results. For instance, Tarakci and colleagues (2016) specified their interest in examining power disparity in their study, whereas Grand and colleagues (2016) focused on knowledge emergence through interactions. Level alignment is a key consideration to ensure that what is being simulated aligns with the theoretical implications being drawn. Indeed, just as the level of measurement needs to be explicitly specified and coordinated with how data are collected in empirical studies (Kozlowski & Klein, 2000), the theoretical level of interest must align with the way in which the simulation and computational models represent behavior. In Kennedy and McComb (2014), for example, the focal point was the specific content of communicated messages, which was used to make theoretical inferences about the flow of team processes over time. A more granular focus may have included the individual-level speakers and facilitated theoretical inferences about power and contribution in team interactions, while a broader focus that included other process-oriented constructs (e.g., coordination, boundary spanning) may have provided insights about the actions in which teams engage.

The opportunities to employ simulation in the study of groups and teams are abundant. Three clear benefits of simulation are the ability to examine longitudinal phenomena (Kozlowski et al., 2013), to inform more effective laboratory and field studies (Kennedy & McComb, 2014), and to compare various scenarios (Davis et al., 2007), particularly those that may be empirically difficult to operationalize or ethically questionable to manipulate. The growing interest in emergence offers an example of the way longitudinal processes may benefit from simulation and computational modeling (Kozlowski & Chao, 2018). Two recent publications highlight how researchers might bridge this gap. In their review, Ramos-Villagrasa, Marques-Quinteiro, Navarro, and Rico (2018) call for a better understanding of teams as complex adaptive systems, which may require comparing continuity and/or changes in team behavior across the system under varying circumstances. This type of investigation is particularly suited to simulation and computational modeling because these methods facilitate such inquiry. For instance, comparing and contrasting changes in many behaviors and/or circumstances, in isolation or concurrently, can be accomplished very quickly once a valid model is constructed.

Finally, Mathieu et al. (2019) call for more nuanced and complex examinations of multilevel teams and their dynamics and suggest that initial theoretical development in this area may come from simulation and computational modeling. By beginning with exploration in a virtual environment, more informed laboratory and/or field research designs can be created, thereby ensuring the most effective use of the limited resources available. For example, in the work of Kennedy and McComb (2014), the virtual environment served to demonstrate the benefits not only of starting certain transition processes earlier, as suggested by Hackman, Brousseau, and Weiss (1976) and Woolley (1998), but also of delaying action processes, as evidenced by DeChurch and Haas (2008) and Katzenbach and Smith (1993). The resulting implications provide a more nuanced perspective on the relationship between these processes that is a forerunner to future experimental research. Thus, such insights can surface subtle connections and serve as a vehicle for testing the need for, and the design of, a subsequent study.
Methodological Considerations and Opportunities
Our review reveals that simulation research on groups and teams is on the rise and will likely increase in the future. Moreover, multiple viable methodological approaches are available to conduct such research. Given all the options, researchers must make three critical decisions at the outset. First, researchers need to decide how much control they need to have over isolating the variables of interest. Second, researchers need to determine the simulation approach that would facilitate the variable investigation. That is, we found that researchers are utilizing simulation, Monte Carlo simulation, multi-agent simulation, and agent-based modeling approaches that descriptively and then mathematically determine how the simulation will unfold. Third, researchers need to decide how they are going to construct their model. We found that researchers described the motivation for the actions and events through a set of agent operators (e.g., if-then scenarios), mathematical functions (e.g., Bayesian networks, nonlinear equations, game theory), optimization (e.g., genetic algorithm optimization, particle swarm optimization, simulated annealing), and statistical analysis (e.g., regression).

Regardless of the approach taken, we assert that researchers should consider proper validation of the simulation. Validation can be accomplished in a variety of ways, including employing existing real-world data to establish the relationships and probabilities required for model development (e.g., Kennedy & McComb, 2014), comparing simulated results to results from real-world investigations (e.g., Patrashkova & McComb, 2004), or conducting validation experiments (e.g., Grand et al., 2016). Guidance for proper validation can be found in multiple sources, including Larson (2012) and Law and Kelton (2000). Yet, we also acknowledge that the validation process might require a program of research in and of itself. As such, some research efforts may be more easily proposed and tested in a single research study, while others result in propositions that launch further testing (i.e., abductive theorizing). Thus, the researcher must be conscientious in the selection of simulation as a methodology and in the proper validation processes necessary to ensure meaningful and realistic outcomes.
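As a hedged illustration of the second of these validation tactics, comparing simulated output against real-world results, the sketch below computes a root-mean-square error between a simulated and an observed team performance trajectory. The observed values, the simulated values, and the acceptance threshold are placeholders invented for the example, not data from any of the cited studies.

```python
# A minimal validation sketch: compare a simulated trajectory against
# empirical observations. All numbers below are invented placeholders.

def rmse(simulated, observed):
    """Root-mean-square error between two equal-length series."""
    return (sum((s - o) ** 2 for s, o in zip(simulated, observed))
            / len(observed)) ** 0.5

observed_performance  = [0.42, 0.55, 0.61, 0.70, 0.74]  # e.g., field data by week
simulated_performance = [0.40, 0.52, 0.63, 0.69, 0.76]  # model output, same weeks

error = rmse(simulated_performance, observed_performance)
print(f"RMSE = {error:.3f}")
# The acceptance threshold should be justified a priori by the researcher.
if error < 0.05:
    print("Simulated trajectory tracks the empirical pattern acceptably.")
```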
Interdisciplinary Considerations and Opportunities

Interestingly, our search process identified 20 research papers that applied computational modeling and simulation to study groups and teams of robots and/or other non-human entities (see summaries in Table 6.3). This unexpected finding suggests an opportunity for fruitful interprofessional collaboration that may simultaneously enhance theoretical contributions about non-human teams and expand the methodological approaches available to examine human teams. Regarding theoretical contributions, several robot team researchers use
TABLE 6.3 Examples of Simulation of Robotic Teams

Alami & Botelho (2002). Research question: How does multi-robot cooperation affect mission decomposition and planning, task allocation, and performance? Focal variable(s): team cooperation. Modeling technique: agent operators. Simulation approach: multi-agent simulation. Insights gained: The multi-robot coordination simulation shows that cooperative issues, behaviors, and skills should inform operations and plans.

Arrichiello, Chiaverini, Indiveri, & Pedone (2010). Research question: How can the execution of tasks be managed in alignment with priorities by multiple mobile robots? Focal variable(s): task organization. Modeling technique: mathematical functions. Simulation approach: simulation. Insights gained: A behavior-based technique is found to improve task organization by multiple mobile robots.

Das, Hu, Lee, & Lu (2007). Research question: How should information be routed in a multi-robot team to improve coordination when a flexible organizing structure is needed? Focal variable(s): information sharing, group communication. Modeling technique: mathematical functions. Simulation approach: multi-agent simulation. Insights gained: Awareness of a robot's current state (i.e., velocity and for what distance) can be used to make route information and group communication more efficient.

Dollarhide & Agah (2003). Research question: What affects control strategies for distributed autonomous robots in search coverage (potentially for search and rescue)? Focal variable(s): control strategies. Modeling technique: mathematical functions, agent operators. Simulation approach: multi-agent simulation. Insights gained: Team size, duration allotted, and team interaction affect how much coverage is achieved by the team.
Duecker, Geist, Kreuzer, & Solowjob (2019). Research question: How do information gain and belief models help mobile underwater robots be more efficient in exploration across different scenarios? Focal variable(s): information, belief models. Modeling technique: mathematical functions. Simulation approach: multi-agent simulation. Insights gained: The belief models incorporate information and improve path planning for greater exploration of a field.

Nelson & Grant (2006). Research question: Do evolved controllers versus knowledge-based controllers improve mobile robot team performance in competitions? Focal variable(s): learning, knowledge. Modeling technique: mathematical functions (swarm). Simulation approach: agent-based model simulation. Insights gained: Evolved controllers can outperform knowledge-based controllers in a competitive game (capture the flag).

Pham & Awadalla (2004). Research question: How does controlling local interactions affect mobile robot team performance in tracking a dynamic target? Focal variable(s): behavior coordination. Modeling technique: mathematical functions (swarm, fuzzy logic). Simulation approach: agent-based model simulation. Insights gained: The use of collective task-achieving behaviors improved performance but was further affected by environmental factors (i.e., obstacles, target size) and team size.

Pitonakova, Crowder, & Bullock (2016). Research question: How does information flow among foraging robot teams affect swarm behaviors? Focal variable(s): information sharing. Modeling technique: mathematical functions (swarm). Simulation approach: multi-agent simulation. Insights gained: The ability of robot teams to perform well depends on how robots obtain and exchange information.

Ramani, Viswanath, & Arjun (2008). Research question: What learning methods help teams of soccerbots (robotic soccer teams) perform better? Focal variable(s): learning, intelligence. Modeling technique: mathematical functions (swarm). Simulation approach: multi-agent simulation. Insights gained: Teams that are evolved by social insect behaviors outperform opponent teams.
Rodriguez & Reggia (2004). Research question: How can reflexive agents improve collective movement and problem solving for teams (i.e., robotic teams)? Focal variable(s): intelligence, memory, problem solving. Modeling technique: mathematical functions (swarm). Simulation approach: multi-agent simulation. Insights gained: Collective intelligence may benefit robotic teams conducting search and collect tasks.

Rodriguez, Grushin, & Reggia (2007). Research question: How can self-organization be guided to improve robotic team coordination in foraging? Focal variable(s): goal-orientation. Modeling technique: mathematical functions (swarm), agent operators. Simulation approach: multi-agent simulation. Insights gained: Trade-offs in flocking approaches affect robotic team coordination and performance.

Rosenfeld (2019). Research question: What communication approach can improve multi-robot collision avoidance? Focal variable(s): team communication. Modeling technique: mathematical functions. Simulation approach: multi-agent simulation. Insights gained: Being adaptive and adapting communication methods help multi-robot teams avoid collisions during foraging and be more productive.

Rubenstein, Sai, Chuong, & Shen (2009). Research question: How do robotic stem cell teams detect and recover or repair damage from unforeseen issues that face their team? Focal variable(s): self-organization, information sharing. Modeling technique: agent operators. Simulation approach: multi-agent simulation. Insights gained: The way robotic stem cell teams communicate to share information may help explain self-organization and how they repair damage.
Schillinger, Bürger, & Dimarogonas (2018). Research question: How can robot agent teams better plan to execute a task under different work scenarios? Focal variable(s): team planning. Modeling technique: mathematical functions. Simulation approach: multi-agent simulation. Insights gained: The decomposition approach provides better solutions across scenarios to the linear temporal logic mission specification.

Spears, Spears, Hamann, & Heil (2004); Spears, Spears, & Heil (2004). Research question: Is physicomimetics an improved way for mobile robots to self-organize? Focal variable(s): self-organization. Modeling technique: mathematical functions (swarm). Simulation approach: multi-agent simulation. Insights gained: Parameter-setting decisions can affect how mobile robot teams self-organize to address their work.

Tambe, Adibi, Al-Onaizan, Erdem, Kaminka, Marsella, & Muslea (1999). Research question: How is team building informed by teamwork and agent learning for multi-agent collaboration on synthetic soccer teams? Focal variable(s): team building, teamwork, team learning. Modeling technique: mathematical functions, agent operators. Simulation approach: multi-agent simulation. Insights gained: The use of team plans and goals, as well as roles and relationships, informs greater team building for multi-agent collaboration.

Winder & Reggia (2004). Research question: What is the benefit of limited distributed memory on self-organizing teams (i.e., robotic teams)? Focal variable(s): memory, problem solving. Modeling technique: mathematical functions (swarm). Simulation approach: multi-agent simulation. Insights gained: Teams with memory had improved performance; when memory was full, random selection proved the best memory removal strategy.
Zheng, Guo, & Gill (2019). Research question: In a multi-satellite system, how can the team communicate and negotiate to keep the system from failing? Focal variable(s): team communication. Modeling technique: mathematical functions. Simulation approach: multi-agent simulation. Insights gained: A hybrid approach distributes the planning workload so that the team can communicate and negotiate a plan that keeps the system from failing.

Zhu, Huang, & Yang (2013). Research question: How does a team of autonomous underwater vehicles (AUVs) manage dynamic task assignment and routing for effectiveness? Focal variable(s): effectiveness. Modeling technique: mathematical functions. Simulation approach: multi-agent simulation. Insights gained: An algorithmic approach is found to balance workload and energy efficiency in goal attainment for AUVs.
theoretical frameworks from the human group and team literatures to direct their inquiries; incorporating a group or team scholar into their research teams may bolster how the results advance these theories. Nevertheless, their results may infuse information into theoretical frameworks employed to examine human teams, such as those focused on task assignment (e.g., Arrichiello, Chiaverini, Indiveri, & Pedone, 2010), effectiveness (e.g., Zhu, Huang, & Yang, 2013), and efficiency (e.g., Das, Hu, Lee, & Lu, 2007). For example, Duecker, Geist, Kreuzer, and Solowjob (2019) provide a relevant investigation of belief systems and information gain that might align with what team researchers think of as team mental models. In their work, the researchers investigated the exploration of an environmental field by autonomous, mobile underwater robots and used a number of techniques (e.g., Gaussian Markov random fields, Kalman filtering) to examine the processing efficiency and effectiveness of the robot teams, as well as constraints on, and stochastic optimal control of, the system dynamics. Perhaps group and team researchers can map such considerations about belief systems and information processing into research that tests assumptions about the establishment and management of team mental models over time.

From a methodological perspective, the researchers examining non-human teams are often from computer science. As such, they may be familiar with a broad array of options for constructing simulation and computational models of systems. These researchers draw heavily on multi-agent approaches to understand how team members execute the assigned tasks. Moreover, many use biomimicry, such as swarm or insect intelligence, to inform or evolve team learning and behaviors (e.g., Pitonakova, Crowder, & Bullock, 2016; Ramani, Viswanath, & Arjun, 2008; Rodríguez, Grushin, & Reggia, 2007). For example, Pham and Awadalla (2004) investigated behavior coordination of mobile robots using fuzzy-logic-based approaches and the modeling of social insects to improve tracking of a dynamic target. Such an approach may be relevant for examining the nonlinear behavior of teams in pursuit of creative outcomes. These approaches are only beginning to make their way into investigations about human teams, as we found only one study, by Tarakci and colleagues (2016), where particle swarm optimization was employed to investigate power disparity in human teams.

Limitations
As with all research, this study has limitations. Our search strategy may have caused us to miss some relevant studies. To mitigate this concern, we have also included studies that were identified in the course of our review. The number of studies identified through these legacy searches was very small, suggesting that our original search was robust. In a small number of identified studies, the approaches and/or findings were not explicitly described, leaving room for interpretation in our review. Our collective expertise in team research,
combined with our experience in using simulation and computational modeling, was employed to extract accurate information and to make inferences where that information was not explicit.

Conclusion
The study of teams has been dominated by static, cross-sectional studies that have provided insights about team inputs, processes, and outputs. These insights, however, are limited, as teams are inherently dynamic entities that deserve richer examinations. The increasing use of simulation to examine team construction and dynamics is addressing this limitation. The variety found in the reviewed studies in how computational models are mathematically formulated and simulated demonstrates the versatility of this approach and hints at the possibilities for future growth. Indeed, simulation represents a methodological approach with the potential to vastly increase theoretical development about groups and teams and, more importantly, their dynamic interactions.

References

Alami, R., & Botelho, S. S. D. C. (2002). Plan-based multi-robot cooperation. Lecture Notes in Computer Science, 2466, 1–20.
Arrichiello, F., Chiaverini, S., Indiveri, G., & Pedone, P. (2010). The null-space-based behavioral control for mobile robots with velocity actuator saturations. The International Journal of Robotics Research, 29(10), 1317–1337.
Ashworth, M. J., & Carley, K. M. (2006). Who you know vs. what you know: The impact of social position and knowledge on team performance. Journal of Mathematical Sociology, 30(1), 43–75.
Austin-Breneman, J., Yu, B. Y., & Yang, M. C. (2014). Biased information passing between subsystems over time in complex system design. In Proceedings of the ASME 2014 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference. New York, NY: The American Society of Mechanical Engineers.
Barkoczi, D., & Galesic, M. (2016). Social learning strategies modify the effect of network structure on group performance. Nature Communications, 7, 13109. doi: 10.1038/ncomms13109
Bosse, T., Duell, R., Memon, Z. A., Treur, J., & van der Wal, C. N. (2015). Agent-based modeling of emotion contagion in groups. Cognitive Computation, 7(1), 111–136.
Caimo, A., & Lomi, A. (2015). Knowledge sharing in organizations: A Bayesian analysis of the role of reciprocity and formal structure. Journal of Management, 41(2), 665–691.
Carley, K. M., Prietula, M. J., & Lin, Z. (1998). Design versus cognition: The interaction of agent cognition and organizational design on organizational performance. Journal of Artificial Societies and Social Simulation, 1(3), 1–19.
Carley, K. M., & Schreiber, C. (2002). Information technology and knowledge distribution in C3I teams. Proceedings of the 2002 Command and Control Research and Technology Symposium. Conference held in Naval Postgraduate School, Monterey, CA (Evidence Based Research, Track 1, Electronic Publication, 17, Vienna, VA). www.dodccrp.org/events/2002/CCRTS_Monterey/Tracks/pdf/032.PDF
Carvalho, A. N., Oliveira, F., & Scavarda, L. F. (2015). Tactical capacity planning in a real-world ETO industry case: An action research. International Journal of Production Economics, 167, 187–203.
Chae, S. W., Seo, Y. W., & Lee, K. C. (2015). Task difficulty and team diversity on team creativity: Multi-agent simulation approach. Computers in Human Behavior, 42, 83–92.
Christensen, L. C., Christiansen, T. R., Jin, Y., Kunz, J., & Levitt, R. E. (1999). Modeling and simulating coordination in projects. Journal of Organizational Computing and Electronic Commerce, 9(1), 33–56.
Coen, C. (2006). Seeking the comparative advantage: The dynamics of individual cooperation in single vs. multiple-team environments. Organizational Behavior and Human Decision Processes, 100, 145–159.
Crowder, R. M., Robinson, M. A., Hughes, H. P. N., & Sim, Y. (2012). The development of an agent-based modeling framework for simulating engineering team work. IEEE Transactions on Systems, Man, & Cybernetics—Part A: Systems and Humans, 42(6), 1425–1439.
Das, S. M., Hu, Y. C., Lee, C. S. G., & Lu, Y. (2007). Mobility-aware ad hoc routing protocols for networking mobile robot teams. Journal of Communications and Networks, 9(3), 296–311.
Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32(2), 480–499.
DeChurch, L. A., & Haas, C. D. (2008). Examining team planning through an episodic lens: Effects of deliberate, contingency, and reactive planning on team effectiveness. Small Group Research, 39, 542–568.
Dionne, S. D., Sayama, H., Hao, C., & Bush, B. J. (2010). The role of leadership in shared mental model convergence and team performance improvement: An agent-based computational model. The Leadership Quarterly, 21, 1035–1049.
Dollarhide, R. L., & Agah, A. (2003). Simulation and control of distributed robot search teams. Computers and Electrical Engineering, 29, 625–642.
Duecker, D. A., Geist, A. R., Kreuzer, E., & Solowjob, E. (2019). Learning environmental field exploration with computationally constrained underwater robots: Gaussian processes meet stochastic optimal control. Sensors, 19, 2094. doi: 10.3390/s19092094
Feng, B., Jiang, Z., Fan, Z., & Fu, N. (2010). A method for member selection of cross-functional teams using the individual and collaborative performances. European Journal of Operational Research, 203, 652–661.
Flache, A., & Mäs, M. (2008a). How to get the timing right. A computational model of the effects of the timing of contacts on team cohesion in demographically diverse teams. Computational and Mathematical Organization Theory, 14(1), 23–51.
Flache, A., & Mäs, M. (2008b). Why do faultlines matter? A computational model of how strong demographic faultlines undermine team cohesion. Simulation Modeling Practice and Theory, 16, 175–191.
Frantz, T. L., & Carley, K. M. (2013). The effects of legacy organizational culture on post-merger integration. Nonlinear Dynamics, Psychology, and Life Sciences, 17(1), 107–132.
Gavetti, G., & Warglien, M. (2015). A model of collective interpretation. Organization Science, 26(5), 1263–1283.
Goodman, R. A., & Goodman, L. P. (1976). Some management issues in temporary systems: A study of professional development and manpower—the theater case. Administrative Science Quarterly, 494–501.
Grand, J. A., Braun, M. T., Kuljanin, G., Kozlowski, S. W. J., & Chao, G. T. (2016). The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams. Journal of Applied Psychology, 101(10), 1353–1385.
Hackman, J., Brousseau, K. R., & Weiss, J. A. (1976). The interaction of task design and group performance strategies in determining group effectiveness. Organizational Behavior and Human Performance, 16, 350–365.
Harrison, J. R., Lin, Z., Carroll, G., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32(4), 1229–1245.
Huang, E., & Chen, S. (2006). Estimation of project completion time and factors analysis for concurrent engineering project management: A simulation approach. Concurrent Engineering: Research and Application, 14(4), 329–341.
Juran, D. C., & Schruben, L. W. (2004). Using worker personality and demographic information to improve system performance prediction. Journal of Operations Management, 22, 355–367.
Kang, M. (2007). The effects of agent activeness and cooperativeness on team decision efficiency: A computational simulation study using Team-Soar. International Journal of Human-Computer Studies, 65, 497–510.
Katzenbach, J. R., & Smith, D. K. (1993). The wisdom of teams: Creating the high-performance organization. New York, NY: McKinsey & Company Inc.
Kennedy, D. M., & McComb, S. A. (2014). When teams shift among processes: Insights from simulation and optimization. Journal of Applied Psychology, 99(5), 784.
Kennedy, D. M., & McComb, S. A. (2017). Simulation and virtual experimentation. In A. Pilny & M. S. Poole (Eds.), Group processes: Data-driven computational approaches. Computational social science series (pp. 181–206). New York, NY: Springer Publishing.
Kennedy, D. M., McComb, S. A., & Vozdolska, R. R. (2011). An investigation of project complexity's influence on team communication using Monte Carlo simulation. Journal of Engineering and Technology Management, 28(3), 109–127.
Kennedy, D. M., Sommer, S. A., & Nguyen, P. A. (2017). Optimizing multi-team system behaviors: Insights from modeling team communication. European Journal of Operational Research, 258(1), 264–278.
Kozlowski, S. W. (2015). Advancing research on team process dynamics: Theoretical, methodological, and measurement considerations. Organizational Psychology Review, 5(4), 270–299.
Kozlowski, S. W., & Chao, G. T. (2018). Unpacking team process dynamics and emergent phenomena: Challenges, conceptual advances, and innovative methods. American Psychologist, 73(4), 576.
Kozlowski, S. W., Chao, G. T., Chang, C. H., & Fernandez, R. (2015). Team dynamics: Using "big data" to advance the science of team effectiveness. In S. Tonidandel, E. B. King, & J. M. Cortina (Eds.), Big data at work: The data science revolution and organizational psychology (pp. 273–309). New York, NY: Routledge Academic.
Kozlowski, S. W., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2013). Advancing multilevel research design: Capturing the dynamics of emergence. Organizational Research Methods, 16(4), 581–615.
Kozlowski, S., & Klein, K. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal and emergent processes. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions and new directions. San Francisco, CA: Jossey-Bass.
Larson, J. R., Jr. (2012). Computer simulation methods for groups. In A. B. Hollingshead & M. S. Poole (Eds.), Research methods for studying groups and teams (pp. 329–357). New York, NY: Routledge.
Law, A. M., & Kelton, W. D. (2000). Simulation modeling and analysis (3rd ed.). New York, NY: McGraw-Hill.
Lunesu, M. I., Münch, J., Marchesi, M., & Kuhrmann, M. (2018). Using simulation for understanding and reproducing distributed software development processes in the cloud. Information and Software Technology, 103, 226–238.
Martínez-Miranda, J., & Pavón, J. (2012). Modeling the influence of trust on work team performance. Simulation: Transactions of the Society for Modeling and Simulation International, 88(4), 408–436.
Mäs, M., Flache, A., Takács, K., & Jehn, K. A. (2013). In the short term we divide, in the long term we unite: Demographic crisscrossing and the effects of faultlines on subgroup polarization. Organization Science, 24(3), 716–736.
Mathieu, J. E., Gallagher, P. T., Domingo, M. A., & Klock, E. A. (2019). Embracing complexity: Reviewing the past decade of team effectiveness research. Annual Review of Organizational Psychology and Organizational Behavior, 6, 17–46.
Mathieu, J. E., Hollenbeck, J. R., van Knippenberg, D., & Ilgen, D. R. (2017). A century of work teams in the Journal of Applied Psychology. Journal of Applied Psychology, 102(3), 452–467.
McComb, C., Cagan, J., & Kotovsky, K. (2015). Lifting the veil: Drawing insights about design teams from a cognitively-inspired computational model. Design Studies, 40, 119–142.
McGrath, J. E., Arrow, H., & Berdahl, J. L. (2000). The study of groups: Past, present, and future. Personality and Social Psychology Review, 4(1), 95–105.
Millhiser, W. P., Coen, C. A., & Solow, D. (2011). Understanding the role of worker interdependence in team selection. Organization Science, 22(3), 772–787.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. doi: 10.1371/journal.pmed.1000097
Nelson, A. L., & Grant, E. (2006). Using direct competition to select for competent controllers in evolutionary robotics. Robotics and Autonomous Systems, 54, 840–857.
Nieboer, J. (2015). Group member characteristics and risk taking by consensus. Journal of Behavioral and Experimental Economics, 57, 81–88.
Palazzolo, E. T., Serb, D. A., She, Y., Su, C., & Contractor, N. (2006). Coevolution of communication and knowledge networks in transactive memory systems: Using computational models for theoretical development. Communication Theory, 16, 223–250.
Patrashkova, R. R., & McComb, S. A. (2004). Exploring why more communication is not better: Insights from a computational model of cross-functional teams. Journal of Engineering and Technology Management, 21(1–2), 83–114.
Pham, D. T., & Awadalla, M. H. (2004). Fuzzy-logic-based behaviour coordination in a multi-robot system. Proceedings of the Institution of Mechanical Engineers, 218(6), 583–598.
Pitonakova, L., Crowder, R., & Bullock, S. (2016). Information flow principles for plasticity in foraging robot swarms. Swarm Intelligence, 10, 33–63.
Ramani, R. G., Viswanath, P., & Arjun, B. (2008). Ant intelligence in robotic soccer. International Journal of Advanced Robotic Systems, 5(1), 49–58.
Ramos-Villagrasa, P. J., Marques-Quinteiro, P., Navarro, J., & Rico, R. (2018). Teams as complex adaptive systems: Reviewing 17 years of research. Small Group Research, 49(2), 135–176.
Ren, Y., Carley, K. M., & Argote, L. (2006). The contingent effects of transactive memory: When is it more beneficial to know what others know? Management Science, 52(5), 671–682.
Rodríguez, A., Grushin, A., & Reggia, J. A. (2007). Swarm intelligence systems using guided self-organization for collective problem solving. Advances in Complex Systems, 10(Supp. 1), 5–34.
Rodriguez, A., & Reggia, J. A. (2004). Extending self-organizing particle systems to problem solving. Artificial Life, 10, 379–395.
Rojas-Villafane, J. A. (2010). An agent-based model of team coordination and performance (FIU Electronic Theses and Dissertations, 250). https://digitalcommons.fiu.edu/etd/250. doi: 10.25148/etd.FI10081217
Rosenfeld, A. (2019). Two adaptive communication methods for multi-robot collision avoidance. Robotica, 37, 851–867.
Rubenstein, M., Sai, Y., Chuong, C., & Shen, W. (2009). Regenerative patterning in swarm robots: Mutual benefits of research in robotics and stem cell biology. The International Journal of Developmental Biology, 53, 869–881.
Schillinger, P., Bürger, M., & Dimarogonas, D. V. (2018). Decomposition of finite LTL specifications for efficient multi-agent planning. In Distributed autonomous robotic systems (pp. 253–267). Cham: Springer.
Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology, 70, 747–770.
Sinclair, M. A., Siemieniuch, C. E., Haslam, R. A., Henshaw, M. J. d. C., & Evans, L. (2012). The development of a tool to predict team performance. Applied Ergonomics, 43, 176–183.
Solow, D., Vairaktarakis, G., Piderit, S. K., & Tsai, M. (2002). Managerial insights into the effects of interactions on replacing members of a team. Management Science, 48(8), 1060–1073.
Son, J., & Rojas, E. M. (2011). Evolution of collaboration in temporary project teams: An agent-based modeling and simulation approach. Journal of Construction Engineering and Management, 137(8), 619–628.
Spears, W. M., Spears, D. F., Hamann, J. C., & Heil, R. (2004). Distributed, physics-based control of swarms of vehicles. Autonomous Robots, 17, 137–162.
Spears, W. M., Spears, D. F., & Heil, R. (2004). A formal analysis of potential energy in a multi-agent system. Lecture Notes in Computer Science, 3228, 131–145.
Tambe, M., Adibi, J., Al-Onaizan, Y., Erdem, A., Kaminka, G. A., Marsella, S. C., & Muslea, I. (1999). Building agent teams using an explicit teamwork model and learning. Artificial Intelligence, 110, 215–239.
Tarakci, M., Greer, L. L., & Groenen, P. J. (2016). When does power disparity help or hurt group performance? Journal of Applied Psychology, 101(3), 415–429.
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15, 602–623.
Weinhardt, J. M., & Vancouver, J. B. (2012). Computational models and organizational psychology: Opportunities abound. Organizational Psychology Review, 2, 267–292.
Winder, R., & Reggia, J. A. (2004). Using distributed partial memories to improve self-organizing collective movements. IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, 34(4), 1697–1707.
Wong, S. S., & Burton, R. M. (2000). Virtual teams: What are their characteristics, and impact on team performance? Computational & Mathematical Organization Theory, 6(4), 339–360.
Woolley, A. (1998). Effects of intervention content and timing on group task performance. Journal of Applied Behavioral Science, 34, 30–46.
Xia, N., Hu, B., & Jiang, F. (2016). An exploration for knowledge evolution affected by task assignment in a research and development team: Perspectives of learning obtained through practice and communication. Simulation, 92(7), 649–668.
Xu, Y., Liu, P., & Li, X. (2014). Reorganizing complex network to improve large-scale multiagent teamwork. Mathematical Problems in Engineering. doi: 10.1155/2014/107246
Yildirim, U., Ugurlu, O., Basar, E., & Yuksekyildiz, E. (2017). Human factor analysis of container vessel's grounding accidents. International Journal of Maritime Engineering, 159, A89–A98.
Zheng, Z., Guo, J., & Gill, E. (2019). Distributed onboard mission planning for multi-satellite systems. Aerospace Science and Technology, 89, 111–122.
Zhu, D., Huang, H., & Yang, S. X. (2013). Dynamic task assignment and path planning of multi-AUV system based on an improved self-organizing map and velocity synthesis method in three-dimensional underwater workspace. IEEE Transactions on Cybernetics, 43(2), 504–514.
PART II
Creating and Validating Computational Models
7
AGENT-BASED MODELING
Chen Tang and Yihao Liu
Compared to developing traditional verbal theories, formal theory building using the computational modeling technique is relatively new to organizational researchers but represents a powerful tool for advancing organizational theories with greater precision, logical consistency, transparency, and reproducibility (Adner et al., 2009; Csaszar, 2020; Hannah et al., 2021; Harrison et al., 2007; Wang et al., 2016). Among the various types of computational modeling techniques, agent-based modeling (ABM) has unique value in describing and explaining dynamic and emergent phenomena in organizations (Kozlowski et al., 2013). ABM is a technique that describes a complex system using a collection of heterogeneous and adaptive agents and the dynamics of the interactions between them (Wilensky & Rand, 2015). It is a powerful tool for describing, understanding, and theorizing phenomena that "are more than the aggregation of individual attributes, but at the same time, the emergent pattern cannot be understood without a bottom-up dynamical model of the microfoundations at the relational level" (Macy & Willer, 2002, p. 143).

In this chapter, we provide a tutorial on using ABM to model organizational phenomena and build organization theories. We organize this chapter into three parts. In Part I, we offer an overview of the core ideas, defining characteristics, strengths, and limitations of ABM. In Part II, we review articles published in six premier organizational and psychological journals that adopted ABM and summarize how ABM was applied (e.g., topics, procedures, and findings) in these articles. In Part III, we provide a detailed walkthrough of the process for building a simple agent-based model, using the scenario of newcomers joining a team of seasoned organizational members (i.e., veterans) as an example.

DOI: 10.4324/9781003388852-9
Part I: What Is Agent-Based Modeling
As a computer simulation technique, ABM is geared toward building formal models and is particularly suitable for understanding and theorizing complex phenomena that emerge from behaviors and interactions between heterogeneous, adaptive agents (e.g., individual employees) in a social system (e.g., work teams; Harrison et al., 2007). The central idea of ABM is that many complex phenomena are collectively formed from simple agent behaviors and interactions at the lower level. Agents are individual computational units that follow simple, predefined rules and make autonomous decisions based on the rules during the simulation process (Bonabeau, 2002). For example, when simulating a certain interaction pattern among members in work teams, the agents can be thought of as abstract representations of the individual team members, with the interaction patterns among them governed by a set of simple rules (e.g., dyadic cognitions of each other and behavioral intentions toward each other). This type of simulation can also be applied at the macro level, such as modeling the interactions (e.g., learning, acquisition) between companies (i.e., agents) in a given industry to understand their collective and emergent influences on the business processes and models of the industry as a whole. Although the agents operate only by simple decision rules, together they are capable of forming complex patterns at the higher, aggregate level. Interestingly, although these aggregate patterns often seem to be highly intelligent, such collective intelligence is usually not due to "the existence of or the direction provided by a central authority" (Macal & North, 2005, p. 5) but rather emerges via bottom-up processes based on simple agent behaviors at the lower level.

As an illustrative example, we all have experienced traffic jams. When there are many vehicles on the road, the traffic slows down and the road gets congested due to various reasons, such as some drivers accidentally stepping on the brakes or changing lanes. That is, the traffic jam is formed as an emergent state of the road by all the individual vehicles involved, and each individual vehicle follows two simple rules in creating such an emergence: (a) it slows down if there is another vehicle close ahead, and (b) it speeds up (within speed limits) if there is no vehicle close ahead. Because all vehicles are slowly moving forward when there is a traffic jam, one would assume that the jam itself is moving forward with the vehicles as well (Wilensky & Rand, 2015). But real-world evidence contradicts this assumption: vehicles eventually leave the traffic jam if they keep going forward. This implies that the traffic jam does not move forward with the vehicles, or at least does not move forward as fast as the vehicles. ABM is well suited to help us understand such a contradiction. Treating the vehicles as agents and setting the two simple rules mentioned earlier, an agent-based model ultimately demonstrated that the traffic jam moves in the opposite direction relative to the traffic (for a detailed discussion, see Resnick, 1994, pp. 68–74). Therefore, strictly speaking, the vehicles do not leave the traffic jam; rather, the traffic jam leaves the vehicles!
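To make the two rules concrete, the sketch below implements a minimal version of the vehicle model in Python. It is our own toy rendering, not Resnick's (1994) original model, and every parameter value (road length, number of cars, speed limit, following gap) is an arbitrary choice for illustration. Tracking the position of the slowest vehicle over time gives a rough indication of where the congested region sits.

```python
# A minimal sketch of the two traffic rules on a circular, one-lane road.
# Parameters are illustrative; a guard keeps cars from passing each other.
import random

ROAD_LENGTH = 500   # cells on the circular road
N_CARS      = 25    # number of agents (vehicles)
MAX_SPEED   = 5     # speed limit
SAFE_GAP    = 10    # distance that counts as "close ahead"

positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds    = [random.randint(0, MAX_SPEED) for _ in range(N_CARS)]

for tick in range(201):
    for i in range(N_CARS):
        ahead = (i + 1) % N_CARS
        gap = (positions[ahead] - positions[i]) % ROAD_LENGTH
        if gap <= SAFE_GAP:
            speeds[i] = max(0, speeds[i] - 1)          # rule (a): slow down
        else:
            speeds[i] = min(MAX_SPEED, speeds[i] + 1)  # rule (b): speed up
        speeds[i] = min(speeds[i], max(0, gap - 1))    # never rear-end the car ahead
    positions = [(p + s) % ROAD_LENGTH for p, s in zip(positions, speeds)]
    if tick % 50 == 0:
        jam_front = positions[speeds.index(min(speeds))]
        print(f"tick {tick:3d}: slowest car at {jam_front:3d}, "
              f"mean speed {sum(speeds) / N_CARS:.2f}")
```

Running such a sketch repeatedly and watching where the slow cluster sits illustrates the punchline: individual cars drift forward through the congestion while the congested region itself drifts backward.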
The traffic jam example also illustrates a key characteristic of ABM: agents are meaningful abstractions of real-world entities. For example, when specifying the agents to model the traffic jam, there is often no need to specify the make, body type, or color of a vehicle, because these features are not likely to be relevant to the phenomenon of interest. Instead, key features such as the speed of the vehicle and even the reactivity of its driver should be codified into each agent. This level of abstraction helps researchers disregard aspects irrelevant to the phenomenon of interest and focus only on its core elements. In the organizational context, this is an important benefit because individuals within organizations are highly complex. With a reasonable level of simplification and abstraction, ABM enables organizational researchers to study and theorize the way agents (e.g., individual workers) in a social system (e.g., a work team) behave and interact with each other to inform the emergence of higher-level properties such as team-level emergent states and processes (Harrison et al., 2007). Therefore, ABM has great potential in helping researchers understand a variety of dynamic organizational phenomena (Wang et al., 2016).

What Is an Agent?

The basis of ABM is the individual agents. In the literature, although different scholars have defined agents in their own ways, these definitions share a few common characteristics that are worth noting (see Macal & North, 2005, and Wooldridge & Jennings, 1995). First, as mentioned earlier, agents are abstractions of real-world entities that follow simple rules. As in the traffic jam example, agents are abstract versions of the real entities (e.g., vehicles) and can be governed by rules as simple as if-then heuristics (but can also be guided by more complex rules). Regardless of the level of complexity of the rules, they should have the capacity to drive the emergence of collective patterns in the complex system. Second, agents are flexible and adaptive. In addition to following simple rules, agents can also be configured to be adaptive in the complex system, engaging in different behaviors and interactions over time depending on their idiosyncratic attributes and cumulative and/or immediate experiences in the system. In other words, agents can be assigned additional (and usually more complex) rules that allow them to follow the simpler, basic rules with modifications according to their unique characteristics and experiences (Smith & Conrey, 2007). Third, agents are autonomous and self-directed. As mentioned earlier, the global and emergent pattern formed by the agents is not a result of coordinated actions of the agents; there is no central authority creating and managing the
global pattern. Rather, the pattern is a product of all the self-directed agents, each taking actions on their own according to the predefined rules. Fourth, agents are interdependent with each other. This characteristic concerns how the agents interact and influence each other. In the traffic jam example, the speeds of the agents (vehicles) are interdependent, because a slowdown in one vehicle causes a slowdown in the vehicle that follows. Such interdependence eventually produces the traffic jam as a collective phenomenon.
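As a hedged illustration, the sketch below codifies these four characteristics into a toy agent whose behavior (a hypothetical cooperative effort level) drifts toward the norm of its neighbors. The attribute names and the update rule are invented for this example and are not drawn from any published model.

```python
# A toy agent illustrating the four characteristics discussed above.
from dataclasses import dataclass, field
import random

@dataclass
class Agent:
    effort: float                                  # current behavior
    adaptability: float = 0.1                      # idiosyncratic attribute
    history: list = field(default_factory=list)    # cumulative experience

    def step(self, neighbors):
        # Simple rule + interdependence: observe the local neighborhood norm.
        local_norm = sum(n.effort for n in neighbors) / len(neighbors)
        # Adaptivity: the adjustment is scaled by the agent's own attribute.
        self.effort += self.adaptability * (local_norm - self.effort)
        self.history.append(self.effort)
        # Autonomy: each agent applies this rule itself; no central controller.

team = [Agent(effort=random.random()) for _ in range(5)]
for _ in range(50):
    for agent in team:
        agent.step([other for other in team if other is not agent])

# Efforts converge toward a shared value: an emergent, team-level norm.
print([round(agent.effort, 2) for agent in team])
```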
What Is Emergence?

One distinguishing characteristic of ABM is that it describes a generative process. ABM does not model the global phenomenon of interest directly. Rather, it focuses on the agents at the lower level and indirectly generates the emergence of the global pattern as a result of what happens at the lower level. Therefore, emergence is a bottom-up process "whereby dynamic interaction processes among lower-level entities (i.e., individuals, teams, units)—over time—yield phenomena that manifest at higher, collective levels" (Kozlowski et al., 2013, p. 582). Given that emergence involves dynamic, complex, and iterative interactions at the lower level, it is often difficult for the human mind to comprehend intuitively. ABM is particularly useful for understanding emergence because of its emphasis on the basic rules and simple behaviors of the agents that ultimately lead to complex emergent phenomena at the higher level (Wilensky & Rand, 2015).

In the context of ABM, emergence has four core characteristics (Kozlowski et al., 2013). First, emergence is a type of multilevel phenomenon that originates at a lower level and manifests at a higher level (e.g., from the micro level to the meso level, from the meso level to the macro level). As such, it is common to include at least two levels of analysis in ABM research. Second, emergence is a process. It involves describing how the agents interact with each other via processes and mechanisms relevant to their cognition, affect, behaviors, and other characteristics to form the emergent phenomenon. Thus, explicit rules that can capture the processes and mechanisms inherent in the agents' behaviors and interactions with each other are often deemed necessary to successfully model emergence in ABM. Third, emergent patterns are often a result of complex and nonlinear compilations of lower-level patterns. Therefore, various forms of rules can be seen in ABM. For example, when specifying a learning mechanism between agents, one can use a deterministic rule (e.g., increase one's knowledge every time they form an additional connection in the social network), a stochastic rule (e.g., increase one's chance of obtaining knowledge in each iteration if they form an additional connection in the social network), a rule that depends on other characteristics of the agents (e.g., the amount of knowledge obtained is influenced by the agent's learning capability), or a combination of these rules. However, we note that the rules, as well as other aspects of the ABM, should be appropriately informed by sound theoretical reasoning and empirical evidence in the literature. Fourth, emergence is dynamic. It takes time for the agents' behaviors and interactions to manifest at the higher level and form patterns collectively, and therefore time and dynamism are an integral part of emergence. While it is usually difficult for traditional verbal-based theory building to clearly and precisely propose theories featuring time and change (Ployhart & Vandenberg, 2010), ABM is well-equipped to address such dynamic processes by explicitly modeling how agents' behaviors and interactions change, evolve, and develop both over time and because of time in driving the emergence of collective phenomena.
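The sketch below shows, with invented magnitudes and function names, what the three rule forms just described (deterministic, stochastic, and attribute-dependent) might look like once codified for a knowledge-gain mechanism; none of the numbers comes from a published model.

```python
# Three illustrative forms of a learning rule; all values are invented.
import random

def deterministic_rule(agent, new_ties):
    # Knowledge rises by a fixed amount per additional network connection.
    agent["knowledge"] += 1.0 * new_ties

def stochastic_rule(agent, new_ties):
    # Each additional connection raises the *chance* of a gain this iteration.
    p = min(1.0, 0.2 + 0.1 * new_ties)
    if random.random() < p:
        agent["knowledge"] += 1.0

def attribute_dependent_rule(agent, new_ties):
    # The size of the gain is scaled by the agent's own learning capability.
    agent["knowledge"] += agent["capability"] * new_ties

agent = {"knowledge": 0.0, "capability": 0.6}
for rule in (deterministic_rule, stochastic_rule, attribute_dependent_rule):
    rule(agent, new_ties=2)
print(f"knowledge after applying each rule once: {agent['knowledge']:.2f}")
```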
Besides what has been discussed earlier, a few additional advantages of ABM are worth noting. First, like any computational modeling technique, ABM requires clear user inputs (Hughes et al., 2012). That is, for ABM to effectively and accurately represent a particular theory, researchers are required to clearly specify all aspects of the proposed theory with precise arguments for the decision rules of the agents. Moreover, ABM users must convert the rules into mathematical equations and then codify these rules using a simulation platform’s programming language. This forces the users to map out their theoretical arguments and logic precisely and clearly in developing the model, removing ambiguities and increasing logical consistency.

Second, similar to other computational modeling techniques, ABM as a type of simulation tool enables researchers to conduct studies that are not feasible in real-life situations, without incurring much cost or risk to the units being studied or to the researchers conducting the study (Hughes et al., 2012). For example, few organizations would allow researchers to experimentally implement a major change in organizational policy and study the resulting influence, as this may potentially interrupt their normal operations. ABM can be employed in such a scenario to virtually predict and examine whether and how employees react to such a change and the associated bottom-up consequences at the higher levels of the company, such as teams and departments. More importantly, ABM allows experimentation so that a certain property can be systematically varied to examine the corresponding emergent patterns, while holding others constant. For example, researchers can vary company size (i.e., number of agents) and see how the policy change would unfold similarly or differently in small, medium, and large companies.

Third, ABM facilitates understanding real-world phenomena that are complex and dynamic in nature, especially those that cannot be sufficiently accounted for
with linear relationships alone. When building verbal theories, researchers often rely on linear relationships to describe and understand the associations among constructs because such relationships are parsimonious and intuitive. However, this theory-building approach has limited capacity for delineating the complexities inherent in the potential associations. For example, interactive social systems (e.g., teams) are often characterized by nonlinear patterns in their agents’ behaviors and interactions that are not always appropriate for linear models (Harrison et al., 2007). ABM is capable of describing and modeling relationships with nonlinearities (Wilensky & Rand, 2015) because of its focus on the agents’ interactions themselves rather than on the constructs and their associations that derive from these interactions (Smith & Conrey, 2007). ABM can also incorporate time and dynamic change of the systems as the simulations run sequentially, allowing for various forms of nonlinearities to manifest over time.

Despite its various advantages, ABM also has some limitations (Bonabeau, 2002). First, ABM is best suited to study research questions that involve interactions between individual agents in complex systems where the global pattern emerges from such interactions. That is, it has a relatively narrower scope compared to other computational modeling techniques (e.g., system dynamics modeling) that can be applied to understanding intrapersonal processes as an agent interacts with a broad work environment that may include more than just agents. Second, it can be difficult and even overwhelming to appropriately specify the interactions between agents when they represent humans, because humans tend to be subjective, irrational, and complicated. Third, when simulating large systems, ABM can become time-consuming because it seeks to understand the system’s behavior as a function of the behaviors and interactions at the agent level. This poses a challenge to researchers because it requires them to optimize the scale of the model without sacrificing its validity and fidelity. Finally, conducting and reading ABM research require a certain amount of mathematical and computer programming skill. This requires researchers to have the appropriate training to conduct ABM research and to ensure clear communication of the modeling procedure and results to reach a wider readership.

Part II: Agent-Based Modeling in Organizational Research
After an overview of the concepts and characteristics of ABM, we now turn to some more pragmatic aspects of ABM research. We conducted a literature search among articles published in six premier journals in organizational and psychological sciences (Administrative Science Quarterly, Academy of Management Journal, Academy of Management Review, Journal of Applied Psychology, Organization Science, and Organizational Behavior and Human Decision Processes) and synthesized the relevant articles that used ABM in a narrative review. Our literature search resulted in a total of 24 articles published between
1991 and 2022. We note that the goal of this search was not to exhaustively identify all relevant articles in these journals but to select and review representative papers that used ABM to study various organizational phenomena. Specifically, through the literature review, we summarized (a) common organizational topics that were studied using ABM and (b) the typical structure and procedures that were followed by these articles.

Topics Studied Using Agent-Based Modeling
The results of the literature search showed that ABM has been used to study a wide range of topics in organization studies. First, a few articles used ABM to model the dynamics of organizational culture. Harrison and Carroll (1991) studied how culture was transmitted when an organization experienced employee turnover and/or growth in size. Using ABM, they systematically modeled the effects of hiring, socialization, and turnover on culture transmission in various types of organizations, which led to some interesting findings. For example, rapid growth and high turnover of employees did not hinder but facilitated culture transmission and helped establish cultural stability, because rapid growth was related to employees’ increased engagement in socialization and high turnover indicated that employees with less enculturation were more likely to exit the organization. Adapting Harrison and Carroll’s (1991) model, Carroll and Harrison (1998) used ABM to examine the relationship between the tenure of top management team members and organizational culture. They found general support for a positive relationship between heterogeneity in tenure and heterogeneity in culture—a common assumption in the organizational demography literature—but they also identified a few potentially important exceptions.

Second, ABM was frequently adopted to understand and theorize about organizational learning. For example, March (1991) used ABM to study how organizations could better structure knowledge exploration and exploitation to streamline organizational learning. Siggelkow and Levinthal (2003) built an agent-based model to examine the relationship between different exploration approaches and organizational performance. Miller et al. (2006) extended March’s (1991) model by adding two more parameters (i.e., direct interpersonal learning and tacit knowledge). Siggelkow and Rivkin’s (2006) model showed that in multilevel organizations, exploration at the lower level could backfire on organizations by negatively affecting the overall level of exploration, hence reducing organizational performance. Other articles that studied organizational learning with ABM include Miller and Lin (2010), Levine and Prietula (2012), Knudsen and Srikanth (2014), Grand et al. (2016), and Puranam and Swamy (2016).

Third, ABM was used to model team networks. For example, Chang and Harrington’s (2007) model explicated how a problem-solving network was formed
around the most innovative team members yet bridged and expanded by the imitators of those innovators. Turning the spotlight onto a different team network, Lazer and Friedman (2007) modeled how characteristics of the team communication network affected organizational performance. Their model showed that when solving complex problems, an efficient communication network among team members facilitated team performance only in the short run, not in the long run, because an efficient communication network restrained the exchange of diverse perspectives. Given this finding, efficient communication networks were deemed less ideal for knowledge exploration.

Fourth, ABM was adopted to study various types of team characteristics. Tarakci et al. (2016) used ABM to delineate the relationship between team power disparity and team performance and found that this relationship depended on the task competence of the power holder—when the power holder had high task competence, power disparity was more likely to facilitate team performance. In addition, Wellman et al. (2020) modeled the relationship between team hierarchical structure and team performance. They found that compared to a pyramid-shaped team hierarchy, an inverse pyramid-shaped team hierarchy improved team performance when there was a higher amount of task variety. Further, Raveendran et al. (2022) used ABM to show that self-selection-based division of labor was more advantageous than traditional division of labor (e.g., managers allocating tasks to employees) when (a) employees were skilled at a specific task, (b) the task structure was decomposable (i.e., high task independence), and (c) employee availability was unforeseeable (i.e., a small talent pool).

Finally, ABM was also used to study cooperation. For example, Coen (2013) used ABM to model the prisoner’s dilemma within and between groups (i.e., single-group and intergroup prisoner’s dilemma) and showed how the payoff matrices of both single-group and intergroup prisoner’s dilemmas facilitated cooperation in distinct ways. Further, Roos et al. (2015) explicated how group norms of cooperation and coordination helped groups adapt to external threats.

Additionally, strategic management scholars have utilized ABM to model the behaviors of firms and organizations at the macro level (e.g., Levinthal & Posen, 2007; Ganco & Agarwal, 2009; Siggelkow & Rivkin, 2009; Coen & Maritan, 2011; Etzion, 2014; Tatarynowicz et al., 2016; Haack et al., 2021). We did not review the findings of these studies here, given that they fall outside the scope of the psychology discipline.

Typical Structure and Procedures of ABM Research
Another goal of the literature review was to identify how existing ABM research published in premier organizational and psychological journals was designed and structured, so that readers who are interested in utilizing ABM in their own
research can find good references to follow. Designing and following a clear structure is crucial for ABM research because ABM typically involves a great amount of theoretical and technical detail. Therefore, an organized set of structures and procedures helps researchers clearly develop their models and theories and helps readers better understand the research. From the literature review, we identified that the structure followed by the extant articles can be largely summarized into three essential steps, with an optional fourth step: (1) develop the narrative and conceptual foundations of the phenomenon of interest; (2) specify the agent-based model based on Step 1; (3) conduct simulations and interpret findings; and (4) empirically test the formal theory (optional). We note that this is largely consistent with what Kozlowski et al. (2013) proposed as recommendations for using ABM to study organizational phenomena.

Step 1. Develop Narrative and Conceptual Foundations
Step 1 aims at developing a conceptual understanding of the phenomenon of interest. This serves as the theoretical foundation for the agent-based model. The majority of the articles we reviewed started with a comprehensive literature review that identified the existing theoretical perspectives and empirical findings relevant to their research topic. For example, to propose a team knowledge emergence model that is driven by the learning and sharing activities of the team members, Grand et al. (2016) first drew from existing theories of team learning and knowledge sharing and identified a set of mechanisms that represent team members’ learning (i.e., data selection, encoding, decoding, and integration) and knowledge sharing (i.e., member selection, retrieval, sharing, and acknowledgment). Doing so is important because it grounds the subsequent design and specification of the computational model within an organized, established framework that provides theoretical propositions and empirical findings to guide the translation of the proposed theories into computational languages.

Step 2. Specify the Agent-Based Model
In Step 2, researchers usually translate the narratives developed from Step 1 into a formal agent-based model. The process of translation typically involves two components. First, all verbal descriptions of the proposed theory from Step 1 are re-expressed as simple rules and simulation procedures, often using mathematical language, that is, mathematical equations that describe the states and the behavioral rules of the agents. ABM articles often present their model specifications in several organized modules that align with the major components of their theory. For example, when specifying their model, Harrison and Carroll (1991) organized the mathematical representation of their theory into three modules
that represent three different organizational processes (i.e., hiring, socialization, and turnover), along with some additional parameters. Some articles provided figures and/or tables to better summarize and organize all the model details. For example, Chang and Harrington (2007) provided a table that included all the model parameters, their notations, definitions, and values. We also follow this practice in Part III, where we develop the exemplary agent-based model.

Next, the aforementioned rules and procedures are written into code using the ABM software of choice. We note that there is a long list of software packages for agent-based modeling, ranging from free and open-source to commercial. For detailed reviews, we refer readers to Allan (2010) and Abar et al. (2017) for their summaries of agent-based modeling tools. Among all available tools, NetLogo seems to be the most popular choice of organizational scholars because of its user-friendly interface, intuitive programming language, and easy display and visualization of data (Fioretti, 2013). Wilensky and Rand’s (2015) book provides a detailed tutorial for using NetLogo to build agent-based models for psychological research and beyond. Later in this chapter, we also use NetLogo to demonstrate our exemplary agent-based model.

Step 3. Conduct Simulations and Interpret Findings
In this step, researchers instantiate the specified computational model in simulations to examine whether and how well the specified model generates propositions, insights, and prescriptions that sufficiently and accurately conform to both the proposed narrative theory and the existing evidence (especially empirical evidence) from the literature. Stated differently, researchers calibrate the agent-based model so that it works as expected. This is often an iterative process involving coding, debugging, and verifying. In addition, researchers often conduct sensitivity analyses by varying key parameters of the model to see whether the simulation results hold within a reasonable range of these parameters.

After the model is correctly specified and calibrated, researchers can also conduct experimentation with virtual data. That is, researchers can design the simulation study and manipulate key factors relevant to their theory to examine how they impact the functioning and results of their core computational model. This enables researchers to further establish internal validity for the proposed relationships between the agent-level factors and the aggregate-level patterns, facilitating and enriching theory building. As an example, Wellman et al. (2020) identified six hierarchical structures of teams and tested the relationships between those structures and team performance. That is, they varied the input (i.e., assigned different hierarchical structures to teams), along with other parameters, and examined how doing so shaped the output (i.e., team performance). Ultimately, the ABM process can be treated as a data generation process that generates outcome data based on the input values per the model specifications.
Then, the results of this data generation process, which contain both input and output values, can be analyzed to facilitate theory building.
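To make this step concrete, such experimentation and sensitivity analysis often reduces to a nested sweep over input parameters, rerunning the stochastic simulation many times per cell and summarizing the outputs. The following Python fragment is a schematic sketch of our own, not taken from any reviewed article; run_model is a hypothetical stand-in for a full agent-based simulation:

    import itertools
    import random
    import statistics

    def run_model(learning_rate, density, seed):
        """Hypothetical stand-in for an agent-based simulation; a real model
        would simulate the agents and return an aggregate outcome."""
        rng = random.Random(seed)
        return learning_rate * density + rng.gauss(0, 0.01)

    # Sensitivity analysis: vary key parameters, replicate each cell 100 times.
    grid = itertools.product([0.01, 0.05, 0.10], [0.2, 0.5, 0.8])
    for learning_rate, density in grid:
        outcomes = [run_model(learning_rate, density, seed) for seed in range(100)]
        print(learning_rate, density, statistics.mean(outcomes))

The resulting table of input values and averaged outputs is exactly the kind of generated dataset that can then be analyzed to build theory.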
Step 4. Empirical Testing (Optional)

Steps 1 through 3 represent the most essential procedures for developing a formal theory using ABM. If the goal of the research is to build a formal theory, then it can end at Step 3. However, if researchers hope to further test and/or compare the theoretical specifications, predictions, and insights from the simulation with real data, they can take Step 4 to test the proposed theory empirically. Empirical verification solidifies the validity and utility of the proposed theory because simulated theories, after all, are built upon a series of assumptions and simplified procedures to represent phenomena that are more complex in reality. Thus, testing such theories using real data helps researchers establish a stronger connection between abstract computational simulation and complex real-world situations, and allows them to further revise their theories based on insights from evaluating and comparing the simulated and empirical results. Several articles from our literature review used empirical data to verify their theory building with ABM. For example, Grand et al. (2016) included an empirical study of knowledge emergence in real work teams after developing their agent-based model. Tarakci et al. (2016) added both a field study and a laboratory study to test the robustness of their agent-based model of power disparity. Wellman et al. (2020) also conducted a field study using a sample of nurses to cross-validate the findings from their agent-based model of team hierarchical structures.

Part III: An Agent-Based Modeling Example
In this section, we demonstrate the process of developing, implementing, and interpreting a simple agent-based model, using newcomer socialization in work teams as an example. ABM is suitable for studying and theorizing about this phenomenon because socialization involves multiple agents (i.e., newcomers and veterans [existing and seasoned team members]), is embedded in a social system (i.e., work teams), and is a dynamic process (i.e., socialization takes time and involves constant interactions between newcomers and veterans; Vancouver et al., 2010). Following the procedure summarized in Part II, we (1) conceptually describe the phenomenon and theoretical model of newcomer socialization in teams, (2) translate the narratives into an agent-based model, and (3) simulate and interpret the model. We note that it is not our goal to develop a comprehensive model that covers all aspects of the newcomer socialization process, nor to incorporate all characteristics of ABM (e.g., emergence) into a single model. Rather, we keep the model simple and focused to illustrate how an agent-based model is typically built around a research question relevant to organizational research.
Step 1. Developing a Conceptual Model on Newcomer Socialization in Teams
When joining work teams, new employees (i.e., newcomers) must acquire knowledge, information, and skills to familiarize themselves with their new coworkers and environment. This process is usually referred to as newcomer socialization, the primary goal of which is to acquire job knowledge in a timely manner to become proficient with the new job (i.e., knowledge acquisition; Ashforth et al., 2007; Cooper-Thomas & Anderson, 2005). That is, the amount and efficiency of newcomer knowledge acquisition can be viewed as a key indicator of socialization success. In this example, we propose and build a simple agent-based model that considers two antecedents of newcomer socialization:¹ newcomer ability and veteran impression of the newcomer, representing intrapersonal and interpersonal factors that can facilitate newcomer socialization in teams, respectively.

Intrapersonal Factor: Newcomer Ability
Given that knowledge acquisition is essentially a learning process, we propose that newcomer ability (more specifically, cognitive ability) is a highly relevant predictor of newcomer socialization (Ashforth et al., 2007). Specifically, based on numerous findings in the broader learning literature (e.g., Kanfer & Ackerman, 1989; Schmidt & Hunter, 2004), individuals’ cognitive ability stands out as perhaps the strongest predictor of learning outcomes, such as the amount of job knowledge and the acquisition of task skills, because it fundamentally determines the cognitive resources individuals possess and allocate to complete resource-dependent activities such as knowledge acquisition. As newcomers acquire and accumulate more knowledge over time, they are more likely to demonstrate satisfying performance at work (Schmidt & Hunter, 2004).

Moreover, in socially complex contexts like those of teams, successful learning usually cannot occur in a social vacuum and requires knowledge-intensive interactions between knowledge seekers and holders (Grand et al., 2016). According to the interactionist perspective (Cooper-Thomas & Anderson, 2005; Reichers, 1987; Wang et al., 2015), interacting with team veterans provides such critical learning and sensemaking opportunities for newcomers. Not only can veterans’ interaction efforts create an interactive learning environment for newcomers where information-sharing and feedback-giving can be achieved efficiently (Morrison, 1993, 2002; Collins & Smith, 2006), their efforts can also allow newcomers to engage in behavioral modeling of veterans’ skills and trial-and-error learning via performance attempts (Bandura, 1977; Davis & Luthans, 1980). Taken together, we believe a thorough understanding of newcomer
socialization requires the consideration of the interplay between both intrapersonal factors (e.g., newcomers’ knowledge absorption with different capacities) and interpersonal factors (e.g., veterans’ provision of learning opportunities via interactions with newcomers).

Interpersonal Factor: Veteran Impression of the Newcomer
As elaborated earlier, newcomers’ successful socialization in a team also depends on whether they have frequent social interactions with veteran members in the team (Wang et al., 2015). For example, past research has shown that newcomers can achieve better socialization outcomes when their coworkers are supportive (Kammeyer-Mueller et al., 2013; Nifadkar et al., 2012; Nifadkar & Bauer, 2016) and are willing to share task-related information (Bauer & Green, 1998; Li et al., 2011; Sluss & Thompson, 2012). Hence, we posit that it is key to examine veterans’ interpersonal impression of newcomers, especially regarding how it affects veterans’ willingness to interact with newcomers. Among the various impressions veterans can form while socializing with newcomers, their ability-based impression of newcomers has been shown to be a potent driver of veteran-newcomer interaction. In particular, Chen and Klimoski (2003) found that veterans’ expectations of newcomer performance—representing an ability-based impression—facilitated the quality of social exchange between veterans and newcomers. Other scholars have also examined this idea with different operationalizations of this type of impression, such as supervisors’ perception of newcomer commitment to task mastery (Ellis et al., 2017) and supervisors’ perceptions of newcomers’ ability-based characteristics via self-promotion (Gross et al., 2021). To simplify our model, we therefore only consider veteran impression of the newcomer’s ability as the interpersonal driving force of newcomer socialization, via the occurrence of veteran-newcomer interactions.

One thing to note is that individuals’ interpersonal impression of others is not static but rather often goes through deliberate processes of updating (Cooper et al., 2021) and even correction (Gilbert et al., 1988). That is, as individuals accumulate observations of their targets via increased interactions, they may re-evaluate their previous judgments and refresh their impression of the targets. Accordingly, veterans may update their impression of the newcomers as a result of interacting with them and observing their behaviors through such interactions, which likely affects their intention to invest further socialization efforts in the future. This changing feature of interpersonal impression thus necessitates a dynamic perspective to understand the process of newcomer socialization with veterans (Vancouver et al., 2010). The conceptual model is summarized in Figure 7.1.
FIGURE 7.1 An illustration of the proposed model on newcomer socialization in teams.
Note. For the purpose of simplicity, this figure only illustrates the behaviors and interactions that take place between the team newcomer and one team veteran in a given iteration of the simulation. In our actual simulation, all three team veterans follow the same set of rules in interacting with the team newcomer.
Step 2. Specifying the Agent-Based Model
Based on the conceptual narrative of the proposed model, we then developed and specified the corresponding agent-based model, which we describe in this step. This included identifying the types of agents of interest and elaborating the types of interactions/behaviors between agents. In our example, we specified two types of agents: newcomer agents and veteran agents. For a simple illustration, we defined each team as composed of three veteran agents and one newcomer agent. As for the types of interactions and behaviors, we built four core modules into this model: (a) veteran interacting with the newcomer, (b) newcomer learning, (c) newcomer performing, and (d) veteran updating impression of the newcomer. All characteristics of the agents and the associated behavioral rules are defined in mathematical language and translated into computer code. Tables 7.1 and 7.2 summarize all the values and rules involved in our model.

Input Values for Agent Characteristics
During the initial setup, we first assign a parameter to the newcomer in each team, representing newcomer ability. This value is assigned following a normal distribution with a mean of 4.0 and standard deviation of 1.0 and does not change over time, because we assume ability is a stable characteristic for newcomers. Then, for each veteran in the team, we assign a parameter to represent veteran impression of the newcomer. The initial value of this parameter follows a normal distribution with a mean of 4.0 and standard deviation of 1.0, indicating each veteran’s first impression of the newcomer. Over time (i.e., after each iteration), this value can be revised for each veteran based on the newcomer’s performance, which we detail later, representing the dynamic nature of interpersonal cognition during constant and continuous social interactions such as the ones between new and existing team members.
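To illustrate this setup in code, the following sketch draws both parameters from N(4, 1) and clips them to the 1–7 bounds given in Table 7.1. The authors’ actual implementation (shown later in Figure 7.3) is in NetLogo; here and throughout Part III we offer hypothetical Python sketches of the same rules, with all names (bounded_normal, setup_team, and so on) being our own illustrative choices:

    import random

    N_VETERANS = 3           # each team: three veteran agents, one newcomer agent
    LOWER, UPPER = 1.0, 7.0  # scale bounds for ability and impression (Table 7.1)

    def bounded_normal(mean=4.0, sd=1.0):
        """Draw from N(mean, sd) and clip the value to the [1, 7] scale."""
        return min(UPPER, max(LOWER, random.gauss(mean, sd)))

    def setup_team():
        """Initialize one team: a stable newcomer ability, one first
        impression per veteran, and an empty knowledge stock."""
        return {
            "abi": bounded_normal(),                                # constant
            "imps": [bounded_normal() for _ in range(N_VETERANS)],  # revised later
            "know": 0.0,                                            # bounded by 1
        }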
TABLE 7.1 Summary of Initial Parameters

| Parameter | Definition | Initial Value | Nature | Notes |
| Newcomer ability | The extent to which newcomers are capable of acquiring knowledge and performing at work | N(4, 1) | Constant | 1 = very low, 4 = moderate, 7 = very high; bounded between 1 and 7 |
| Veteran impression of the newcomer | The extent to which newcomers are socially desirable and welcomed by others | N(4, 1) | Constant | 1 = very negative, 4 = neutral, 7 = very positive; bounded between 1 and 7 |
| Veteran-newcomer interaction | Veterans’ interactions with newcomers to transfer job knowledge | 0 | Binary | 1 = veteran interacts with the newcomer, 0 = veteran does not interact with the newcomer |
| Newcomer knowledge | Level of newcomers’ job knowledge acquired from veterans over time | 0 | Stock | Starts at 0 and is bounded by 1 |
| Newcomer performance | Newcomers’ demonstration of successful performance on a given task | 0 | Binary | 1 if the newcomer performs in an iteration, 0 if the newcomer does not perform |
Veteran Behavior: Interacting With the Newcomer
To operationalize veteran behavior, we specify that the probability of a veteran interacting with the newcomer is a function of their current impression of the newcomer, such that the more positive a veteran’s impression is toward the newcomer, the more likely they will interact with the newcomer. More specifically, in each iteration of the simulation (i.e., representing a task episode for the team), each veteran will have a chance to decide if they want to interact with the newcomer in the current nth team episode based on their existing impression (imp) formed from the (n − 1)th episode. To reflect this, we define the probability of veteran-newcomer interaction, p(int), as

p(int) = (imp − 1) / 6

Since imp is bounded between 1 and 7, p(int) will range between 0 and 1, representing a probability-based formula that determines whether each veteran decides to interact with the newcomer or not in each task episode.
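In the hypothetical Python sketch, this rule is one probability draw per veteran per task episode:

    import random

    def interacts(imp):
        """Veteran decides to interact: p(int) = (imp - 1) / 6, so a veteran
        with imp = 1 never interacts and one with imp = 7 always does."""
        return random.random() < (imp - 1.0) / 6.0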
TABLE 7.2 Summary of Mathematical Specifications

| Theoretical Mechanism | Nature | Mathematical Function | Pre-existing Condition and Notes |
| Veteran interacting with the newcomer | Binary process determined by probability | p(int) = (imp − 1) / 6 | |
| Newcomer learning | Autoregressive process | know = know + 0.05 × (1 − know) × (abi / 7) | Only runs when an interaction occurs |
| Veterans delegating | Binary process determined by probability | p(perf_opp) = (imp − 1) / 6 | imp here indicates the average impression across all veterans |
| Newcomer performing | Binary process determined by probability | p(perf) = (2/3) × know + (1/3) × ((abi − 1) / 6) | Only runs when the newcomer obtains a performance opportunity |
| Veteran updating impression of the newcomer | Autoregressive process | imp = imp + imp × (0.1 + 1 / duration) | Only runs when the newcomer performs |

Notes. imp = veteran impression of the newcomer, abi = newcomer ability, know = newcomer knowledge, p(int) = probability of veteran-newcomer interaction, p(perf_opp) = probability of veterans delegating the newcomer with an opportunity to perform, p(perf) = probability of the newcomer performing, duration = number of iterations.
Newcomer Behavior: Learning
Newcomer learning is operationalized as a newcomer’s (who is assumed to have no job knowledge at all at the start of the simulation) acquisition of job knowledge from veterans (who are assumed to have full knowledge of the job) over time. Specifically, in our model, every time an interaction occurs between the newcomer and any veteran, the newcomer will engage in learning from the interaction. Therefore, a newcomer learns more if multiple veterans interact with them during one task episode. As reviewed earlier, social interaction only serves as a channel for the transfer of knowledge from veterans to newcomers; the exact amount of knowledge that newcomers acquire from their interactions with veterans should be determined by newcomers’ own cognitive ability (Schmidt & Hunter, 2004). Thus, we use the following equation to specify the increase in newcomer knowledge per occurrence of a social interaction with any veteran on the team:

know = know + 0.05 × (1 − know) × (abi / 7),
where know represents the amount of newcomer job knowledge, and abi represents the level of newcomer ability. In other words, during each social interaction, a newcomer learns as much as 5% of the unlearned knowledge pool and as little as virtually 0%, depending on the level of existing knowledge and the level of newcomer ability. Such a specification reflects the fact that the process of learning usually follows a nonlinear curve with a decreasing learning rate over time (i.e., content is more difficult at the later stage of learning; Vancouver et al., 2010). With this specification, over time, newcomer knowledge will get infinitely close to 1.0 but never reach 1.0. Therefore, to facilitate model convergence, newcomer socialization is considered successful when newcomer knowledge reaches .99. The number of task episodes (i.e., iterations of the simulation) it takes for the newcomer to reach a knowledge level of .99 can then be used as an important dependent variable representing newcomer socialization efficiency in evaluating the formal model (the smaller the number of task episodes, the more efficient the socialization process).
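Continuing the hypothetical Python sketch, the learning rule and the .99 stopping criterion might look as follows; the short loop only illustrates the diminishing increments, not the full simulation:

    def learn(know, abi):
        """One interaction's gain: up to 5% of the unlearned pool (1 - know),
        scaled down by ability through the factor abi / 7."""
        return know + 0.05 * (1.0 - know) * (abi / 7.0)

    # Illustration: episodes needed if exactly one veteran interacted per episode.
    know, episodes = 0.0, 0
    while know < 0.99:
        know = learn(know, abi=6.0)
        episodes += 1
    print(episodes)  # increments shrink as know approaches 1.0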
Newcomer Behavior: Performing

After all veterans have made their individual decisions about interacting with the newcomer and engaging in such an interaction (or not), the newcomer is
specified to have a chance to perform at the end of each task episode. The performing module of the model has two components: whether the newcomer will be delegated an opportunity to perform, and whether the newcomer can actually perform given a performance opportunity. Including both components of newcomer performing reflects a realistic scenario where team members with more experience and greater decision-making autonomy (e.g., veterans) get to decide on the task assignment of the less-experienced members in the team (e.g., newcomers).

For the first component, we specify that veterans’ aggregated impression of the newcomer determines the probability of the newcomer obtaining a performance opportunity from veterans: the newcomer is more likely to have an opportunity to perform when veterans’ average impression of the newcomer is more positive. This reflects the idea that when veterans hold better impressions of newcomers, they tend to trust the newcomers more and thus are more likely to delegate team tasks to the newcomers and empower the newcomers with more job autonomy (e.g., Chen & Klimoski, 2003; Colquitt et al., 2007). Therefore, the following equation is specified:
p(perf_opp) = (imp − 1) / 6,

where p(perf_opp) is the likelihood of the newcomer obtaining a performance opportunity from veterans, and imp is the current average impression of the newcomer across all veterans.

For the second component, we specify that, assuming the opportunity is offered, the chance of the newcomer performing is posited to be determined by the newcomer’s current level of knowledge as well as ability. This proposition is derived from Schmidt and Hunter’s (2004) meta-analytic findings that individuals’ job knowledge and general mental ability were the two most important determinants of job performance, with job knowledge (r = .50–.60) having a stronger effect than general mental ability (r = .20–.30). Therefore, when a newcomer is given an opportunity to perform, the following equation is used to determine the likelihood of the newcomer actually performing or not, p(perf):

p(perf) = (2/3) × know + (1/3) × ((abi − 1) / 6)
In other words, this equation specifies the likelihood of the newcomer performing, which we treat as a binary variable for the purpose of simplicity in this example, as a combined probability determined by newcomer knowledge and newcomer ability, with the influence of newcomer knowledge weighted twice that of newcomer ability.
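These two components translate into two more probability draws in the hypothetical Python sketch:

    import random
    from statistics import mean

    def gets_opportunity(imps):
        """Delegation: p(perf_opp) = (mean impression - 1) / 6."""
        return random.random() < (mean(imps) - 1.0) / 6.0

    def performs(know, abi):
        """Performance: p(perf) = (2/3) * know + (1/3) * (abi - 1) / 6,
        weighting knowledge twice as heavily as ability."""
        p = (2.0 / 3.0) * know + (1.0 / 3.0) * (abi - 1.0) / 6.0
        return random.random() < p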
Veteran Behavior: Updating Impression of the Newcomer
Finally, we considered how veterans update their impression of the newcomer as a result of their interactions with the newcomer and thus observations of newcomer performance. This updating process provides feedback for revising veterans’ future intentions to interact with the newcomer and thus reflects the dynamic nature of our model (Vancouver et al., 2010). Specifically, we propose that veterans’ impression of the newcomer fluctuates over time as a function of the newcomer’s demonstration of satisfying performance (or not) in the team. At the end of each task episode, each veteran will have a chance to update their impression of newcomer ability by increasing its value if the newcomer performs in a task episode; this value remains the same if the newcomer does not perform. This specification reflects the idea that targets’ ongoing performance level signifies their efficiency in pursuing task-related goals, thus allowing observers to adjust their overall judgment of the targets (Singh & Teoh, 2000; Tausch et al., 2007). Moreover, we consider the critical role of time in this updating process. Specifically, we posit that when the newcomer performs in earlier (versus later) task episodes, veterans’ increase in their impression of the newcomer will be greater (versus smaller). This specification is consistent with organizational insiders’ common expectation of newcomer performance over time, in that insiders’ expectation that newcomers will go through a period of adjustment leads them to perceive newcomers’ success in demonstrating performance at earlier (versus later) times as more impressive (Chen, 2005; Chen & Klimoski, 2003). To reflect this time-sensitive characteristic of the impression-updating process, the following equation is specified:

imp = imp + imp × (0.1 + 1 / duration), if the newcomer performs,

where duration represents the number of iterations that have elapsed (i.e., how long the newcomer has worked in the team). This equation is only implemented when the newcomer has performed in a task episode.

The aforementioned mathematical representations were then implemented in NetLogo (version 6.2.2). Figure 7.2 shows a screenshot of the interface for our example model in NetLogo. Figure 7.3 contains the complete set of code behind this model.
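We cannot reproduce the NetLogo code of Figure 7.3 in this text, but as a rough functional equivalent, the hypothetical Python helpers sketched above (the setup constants, interacts, learn, gets_opportunity, and performs) can be tied together into a single simulation loop along the following lines; capping the updated impressions at 7 reflects the bounds stated in Table 7.1:

    def run_team(abi=None, first_imp=None, max_episodes=10_000):
        """Simulate one team until newcomer knowledge reaches .99; return the
        socialization duration (number of task episodes). Assumes the helper
        functions defined in the earlier sketches."""
        abi = bounded_normal() if abi is None else abi
        imps = [bounded_normal() if first_imp is None else first_imp
                for _ in range(N_VETERANS)]
        know = 0.0
        for episode in range(1, max_episodes + 1):
            for imp in imps:                 # (a) each veteran decides independently
                if interacts(imp):
                    know = learn(know, abi)  # (b) newcomer learns per interaction
            # (c) the newcomer may be delegated a task and actually perform it
            if gets_opportunity(imps) and performs(know, abi):
                # (d) impressions rise, more so for early successes (1 / episode)
                imps = [min(UPPER, imp + imp * (0.1 + 1.0 / episode)) for imp in imps]
            if know >= 0.99:
                return episode
        return max_episodes

A single call such as run_team(abi=6.0, first_imp=6.0) then yields one simulated socialization duration.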
FIGURE 7.2 Screenshot of the NetLogo interface.
FIGURE 7.3 NetLogo code for the example model.

Step 3. Conducting Simulations and Interpreting Results

After specifying our model in NetLogo, we conducted various simulations and evaluated whether the results from the simulations support the conceptual propositions in our theory. For the purpose of simplicity, we only considered one type of analysis: the (independent and interactive) effects of two input factors (i.e., newcomer ability and veteran [first] impression of the newcomer) on the end outcome of newcomer socialization, operationalized as the number of iterations (i.e., duration) it took for the newcomer to obtain all job knowledge. A shorter duration indicates more successful socialization, such that the newcomer absorbs job knowledge faster via interactions with team veterans. With our simulated data, more complex analyses are also possible, such as a longitudinal analysis of veteran impression of the newcomer over time and an analysis of whether and how veteran impression of the newcomer converges with the value of newcomer ability over time; however, we did not include them here due to space constraints.

Effect of Newcomer Ability on Newcomer Socialization
To evaluate the effect of newcomer ability, we manipulated newcomer ability at two different levels (i.e., 6.0 and 2.0, representing two standard deviations above and below its mean) and simulated 1,000 teams for each condition. The results showed that, on average, newcomers with high ability needed a shorter socialization duration (average duration = 39.47 iterations) to acquire all job knowledge, whereas lower-ability newcomers needed a longer duration (average duration = 118.39 iterations). This finding confirmed our proposition that newcomer ability positively impacts the efficiency of newcomers’ socialization progress.

Additionally, our simulated results allowed us to plot and examine the trajectory of newcomer knowledge acquisition, that is, how job knowledge was accumulated over time. Specifically, in Figure 7.4, we calculated and plotted the mean job knowledge level across the first 30 iterations for newcomers with high and low ability, respectively. This figure shows that, over time, newcomers with high ability (solid line) acquired job knowledge faster than those with low ability (dashed line). The curvature of the trajectories also confirmed our theoretical expectation that knowledge acquisition generally occurs at a faster rate (i.e., steeper slopes) at earlier stages than at later stages of a learning process. We only show data from the first 30 iterations in this figure (and in Figures 7.5 and 7.6) because during the simulation newcomers can obtain full job knowledge in as few as 30 iterations, after which the simulation ended for these newcomers and data on subsequent iterations became unavailable for some newcomers.

FIGURE 7.4 Newcomer job knowledge accumulation as a function of newcomer ability.
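Under the hypothetical Python sketch, this kind of condition comparison reduces to fixing the manipulated input and averaging durations over 1,000 simulated teams per condition (the exact averages will differ from those reported here, which come from the authors’ NetLogo runs):

    from statistics import mean

    for abi in (6.0, 2.0):  # two standard deviations above and below the mean
        durations = [run_team(abi=abi) for _ in range(1000)]
        print(abi, mean(durations))  # higher ability -> shorter average duration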
Effect of Veteran First Impression of the Newcomer on Newcomer Socialization

To evaluate the effect of veteran impression of the newcomer on newcomer socialization, we manipulated the starting values of veteran impression at two different levels (i.e., 6.0 and 2.0) and simulated 1,000 teams for each condition. These values represent different first impressions of the newcomer; however, they can be updated and revised as the model is simulated over time, per our model specification. The results showed that newcomers had more successful socialization when veterans had good first impressions of them (average duration = 57.31), compared to newcomers whose veterans did not have good impressions of them (average duration = 111.38).

Similarly, we plotted the trajectories of newcomer knowledge acquisition over the first 30 iterations under the good and bad first impression conditions. Figure 7.5 shows that, over time, newcomers were able to learn and socialize faster when veterans had good first impressions of them (solid line), compared to when veterans had bad impressions of them (dashed line). This finding is also consistent with our earlier theoretical propositions.

FIGURE 7.5 Newcomer job knowledge accumulation as a function of veteran first impression of the newcomer.

Interaction Effect Between Newcomer Ability and Veteran First Impression
Finally, we evaluated the interaction effect between newcomer ability and veteran first impression of the newcomer on newcomer socialization. Specifically, in this simulation, we manipulated both newcomer ability (at a high value of 6.0 and a low value of 2.0) and veteran first impression of the newcomer (also at a high value of 6.0 and a low value of 2.0). This created four conditions, each of which was then simulated a total of 1,000 times (i.e., 1,000 teams per condition). The mean duration of newcomer socialization in each condition was calculated and showed that, consistent with our expectation, the high ability-good impression condition showed the highest newcomer socialization efficiency (average duration = 36.21), the low ability-bad impression condition showed the lowest efficiency (average duration = 194.26), and the other two conditions had intermediate efficiency (average duration = 108.58 for the low ability-good impression condition and 68.57 for the high ability-bad impression condition).

The trajectories of newcomer knowledge acquisition over time under the four conditions suggested a similar pattern. As Figure 7.6 shows, high-ability newcomers with good veteran first impressions demonstrated the best learning trajectory, while low-ability newcomers with bad veteran first impressions suffered from a less successful learning curve; newcomers under the other two conditions ranked in the middle in terms of their learning efficiency and showed similar trajectories.

Alternative Data Analysis
In the previous examples, we only demonstrated the modeling results qualitatively for a more intuitive illustration of our simulation findings. We did not quantitatively evaluate the main and interaction effects of the two input factors. We would like to note that because ABM, like other computational modeling techniques, can generate a large amount of data, quantitative statistical analyses are also possible, especially when researchers hope to estimate the magnitude of a certain effect and compare the magnitudes of different effects.
FIGURE 7.6 The interaction effect between newcomer ability and veteran first impression of the newcomer on newcomer job knowledge accumulation.
To demonstrate this possibility, we conducted another simulation where both newcomer ability and veteran first impression were assigned values from the specified normal distribution function (versus being constrained at 2.0 or 6.0). After simulating a total of 5,000 teams, we formed a dataset and analyzed how newcomer ability, veteran first impression, and their interaction term influenced newcomers’ socialization duration, using a Poisson regression model, given that socialization duration was a count variable. The results are summarized in Table 7.3. Again, they showed that both newcomer ability and veteran first impression of the newcomer facilitated newcomer socialization (i.e., negative and significant effects). Interestingly, we did not find a significant interaction effect between these two factors. One possible explanation is that higher levels of newcomer ability enabled newcomers to learn faster and perform better, and higher levels of veteran first impression allowed newcomers to receive more opportunities to demonstrate performance, both of which can be equally critical for learning and performing in a social context (e.g., teams) with surrounding agents serving as critical sources for knowledge sharing, behavioral modeling, and task delegation (Wang et al., 2015).
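The chapter does not specify the software used for this regression. As one hypothetical route, a dataset generated with the Python sketch above can be fit with a Poisson model via pandas and statsmodels:

    import pandas as pd
    import statsmodels.formula.api as smf

    rows = []
    for _ in range(5000):
        abi, imp0 = bounded_normal(), bounded_normal()
        rows.append({"abi": abi, "imp0": imp0,
                     "duration": run_team(abi=abi, first_imp=imp0)})
    data = pd.DataFrame(rows)

    # duration is a count variable, hence the Poisson specification;
    # the formula abi * imp0 expands to both main effects plus their interaction.
    model = smf.poisson("duration ~ abi * imp0", data=data).fit()
    print(model.summary())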
TABLE 7.3 Poisson Regression Results (DV: Socialization Duration)

| | Model 1 b | Model 1 SE | Model 2 b | Model 2 SE |
| (Intercept) | 5.61** | 0.01 | 5.55** | 0.05 |
| Newcomer Ability | -0.29** | 0.002 | -0.27** | 0.01 |
| Veteran Impression | -0.09** | 0.003 | -0.08** | 0.01 |
| Newcomer Ability × Veteran Impression | | | -0.004 | 0.003 |
| Observations | 5,000 | | 5,000 | |
| Log-Likelihood | -16,060.30 | | -16,059.62 | |
| Akaike Information Criterion | 32,126.60 | | 32,127.25 | |

Notes. ** p < .01.

Conclusion

In this chapter, we provided an overview of ABM, reviewed organizational research articles that adopted ABM, and illustrated how to build a simple agent-based model using newcomer socialization in teams as an example. In sum, ABM can be useful and powerful in helping organizational researchers understand and theorize complex phenomena in organizational studies, especially those involving dynamic interactions between agents (e.g., people, teams, and companies) and bottom-up emergent processes. We hope that this important computational modeling tool can be further incorporated in future organizational research to advance various streams of organization theories.
Note

1. We note that many other factors can also influence the process of newcomer socialization (Bauer et al., 2007). For the purpose of demonstrating a simple agent-based model, we only focus on these two factors in this chapter.
References

* Indicates articles included in the literature review in Part II.

Abar, S., Theodoropoulos, G. K., Lemarinier, P., & O’Hare, G. M. P. (2017). Agent based modelling and simulation tools: A review of the state-of-art software. Computer Science Review, 24, 13–33. https://doi.org/10.1016/j.cosrev.2017.03.001
Adner, R., Pólos, L., Ryall, M., & Sorenson, O. (2009). The case for formal theory. Academy of Management Review, 34(2), 201–208. https://doi.org/10.5465/amr.2009.36982613
Allan, R. J. (2010). Survey of agent based modelling and simulation tools (pp. 1362–0207). New York: Science & Technology Facilities Council.
Ashforth, B. E., Sluss, D. M., & Saks, A. M. (2007). Socialization tactics, proactive behavior, and newcomer learning: Integrating socialization models. Journal of Vocational Behavior, 70(3), 447–462. https://doi.org/10.1016/j.jvb.2007.02.001
Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.
Bauer, T. N., Bodner, T., Erdogan, B., Truxillo, D. M., & Tucker, J. S. (2007). Newcomer adjustment during organizational socialization: A meta-analytic review of antecedents, outcomes, and methods. Journal of Applied Psychology, 92(3), 707–721. https://doi.org/10.1037/0021-9010.92.3.707
Bauer, T. N., & Green, S. G. (1998). Testing the combined effects of newcomer information seeking and manager behavior on socialization. Journal of Applied Psychology, 83(1), 72–83. https://doi.org/10.1037/0021-9010.83.1.72
Bonabeau, E. (2002). Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the National Academy of Sciences, 99(Suppl. 3), 7280–7287. https://doi.org/10.1073/pnas.082080899
*Carroll, G. R., & Harrison, J. R. (1998). Organizational demography and culture: Insights from a formal model and simulation. Administrative Science Quarterly, 43(3), 637–667. https://doi.org/10.2307/2393678
*Chang, M.-H., & Harrington, J. E. (2007). Innovators, imitators, and the evolving architecture of problem-solving networks. Organization Science, 18(4), 648–666. https://doi.org/10.1287/orsc.1060.0245
Chen, G. (2005). Newcomer adaptation in teams: Multilevel antecedents and outcomes. Academy of Management Journal, 48(1), 101–116. https://doi.org/10.5465/AMJ.2005.15993147
Chen, G., & Klimoski, R. J. (2003). The impact of expectations on newcomer performance in teams as mediated by work characteristics, social exchanges, and empowerment. Academy of Management Journal, 46(5), 591–607. https://doi.org/10.2307/30040651
*Coen, C. (2013). Relative performance and implicit incentives in the intergroup prisoner’s dilemma. Organizational Behavior and Human Decision Processes, 120(2), 181–190. https://doi.org/10.1016/j.obhdp.2012.12.003
*Coen, C. A., & Maritan, C. A. (2011). Investing in capabilities: The dynamics of resource allocation. Organization Science, 22(1), 99–117. https://doi.org/10.1287/orsc.1090.0524
Collins, C. J., & Smith, K. G. (2006). Knowledge exchange and combination: The role of human resource practices in the performance of high-technology firms. Academy of Management Journal, 49(3), 544–560. https://doi.org/10.5465/AMJ.2006.21794671
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. Journal of Applied Psychology, 92(4), 909–927. https://doi.org/10.1037/0021-9010.92.4.909
Cooper, D., Rockmann, K. W., Moteabbed, S., & Thatcher, S. M. (2021). Integrator or gremlin? Identity partnerships and team newcomer socialization. Academy of Management Review, 46(1), 128–146. https://doi.org/10.5465/amr.2018.0014
Cooper-Thomas, H. D., & Anderson, N. (2005). Organizational socialization: A field study into socialization success and rate. International Journal of Selection and Assessment, 13(2), 116–128. https://doi.org/10.1111/j.0965-075X.2005.00306.x
Csaszar, F. A. (2020). Certum quod factum: How formal models contribute to the theoretical and empirical robustness of organization theory. Journal of Management, 46(7), 1289–1301. https://doi.org/10.1177/0149206319889129
Davis, T. R., & Luthans, F. (1980). A social learning approach to organizational behavior. Academy of Management Review, 5, 281–290. https://doi.org/10.5465/amr.1980.4288758
Ellis, A. M., Nifadkar, S. S., Bauer, T. N., & Erdogan, B. (2017). Newcomer adjustment: Examining the role of managers’ perception of newcomer proactive behavior during organizational socialization. Journal of Applied Psychology, 102(6), 993–1001. https://doi.org/10.1037/apl0000201
*Etzion, D. (2014). Diffusion as classification. Organization Science, 25(2), 420–437. https://doi.org/10.1287/orsc.2013.0851
Fioretti, G. (2013). Agent-based simulation models in organization science. Organizational Research Methods, 16(2), 227–242. https://doi.org/10.1177/1094428112470006
*Ganco, M., & Agarwal, R. (2009). Performance differentials between diversifying entrants and entrepreneurial start-ups: A complexity approach. Academy of Management Review, 34(2), 228–252. https://doi.org/10.5465/amr.2009.36982618
Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On cognitive busyness: When person perceivers meet persons perceived. Journal of Personality and Social Psychology, 54(5), 733–740. https://doi.org/10.1037/0022-3514.54.5.733
*Grand, J. A., Braun, M. T., Kuljanin, G., Kozlowski, S. W. J., & Chao, G. T. (2016). The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams. Journal of Applied Psychology, 101(10), 1353–1385. https://doi.org/10.1037/apl0000136
Gross, C., Debus, M. E., Liu, Y., Wang, M., & Kleinmann, M. (2021). I am nice and capable! How and when newcomers’ self-presentation to their supervisors affects socialization outcomes. Journal of Applied Psychology, 106(7), 1067–1079. https://doi.org/10.1037/apl0000817
*Haack, P., Martignoni, D., & Schoeneborn, D. (2021). A bait-and-switch model of corporate social responsibility. Academy of Management Review, 46(3), 440–464. https://doi.org/10.5465/amr.2018.0139
Hannah, D. P., Tidhar, R., & Eisenhardt, K. M. (2021). Analytic models in strategy, organizations, and management research: A guide for consumers. Strategic Management Journal, 42(2), 329–360. https://doi.org/10.1002/smj.3223
*Harrison, J. R., & Carroll, G. R. (1991). Keeping the faith: A model of cultural transmission in formal organizations. Administrative Science Quarterly, 36(4), 552–582. https://doi.org/10.2307/2393274
Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32(4), 1229–1245. https://doi.org/10.5465/amr.2007.26586485
Hughes, H. P. N., Clegg, C. W., Robinson, M. A., & Crowder, R. M. (2012). Agent-based modelling and simulation: The potential contribution to organizational psychology. Journal of Occupational and Organizational Psychology, 85(3), 487–502. https://doi.org/10.1111/j.2044-8325.2012.02053.x
Kammeyer-Mueller, J. D., Wanberg, C. R., Rubenstein, A., & Song, Z. (2013). Support, undermining, and newcomer socialization: Fitting in during the first 90 days. Academy of Management Journal, 56(4), 1104–1124. https://doi.org/10.5465/amj.2010.0791
Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74(4), 657–690. https://doi.org/10.1037/0021-9010.74.4.657
*Knudsen, T., & Srikanth, K. (2014). Coordinated exploration: Organizing joint search by multiple specialists to overcome mutual confusion and joint myopia. Administrative Science Quarterly, 59(3), 409–441. https://doi.org/10.1177/0001839214538021
Kozlowski, S. W. J., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2013). Advancing multilevel research design: Capturing the dynamics of emergence. Organizational Research Methods, 16(4), 581–615. https://doi.org/10.1177/1094428113493119
*Lazer, D., & Friedman, A. (2007). The network structure of exploration and exploitation. Administrative Science Quarterly, 52(4), 667–694. https://doi.org/10.2189/asqu.52.4.667
*Levine, S. S., & Prietula, M. J. (2012). How knowledge transfer impacts performance: A multilevel model of benefits and liabilities. Organization Science, 23(6), 1748–1766. https://doi.org/10.1287/orsc.1110.0697
*Levinthal, D., & Posen, H. E. (2007). Myopia of selection: Does organizational adaptation limit the efficacy of population selection? Administrative Science Quarterly, 52(4), 586–620. https://doi.org/10.2189/asqu.52.4.586
Li, N., Harris, T. B., Boswell, W. R., & Xie, Z. (2011). The role of organizational insiders’ developmental feedback and proactive personality on newcomers’ performance: An interactionist perspective. Journal of Applied Psychology, 96(6), 1317–1327. https://doi.org/10.1037/a0024029
Macal, C. M., & North, M. J. (2005). Tutorial on agent-based modeling and simulation. Proceedings of the Winter Simulation Conference, 2005, 2–15. https://doi.org/10.1109/WSC.2005.1574234
Macy, M. W., & Willer, R. (2002). From factors to actors: Computational sociology and agent-based modeling. Annual Review of Sociology, 28(1), 143–166. https://doi.org/10.1146/annurev.soc.28.110601.141117
*March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87. https://doi.org/10.1287/orsc.2.1.71
*Miller, K. D., & Lin, S.-J. (2010). Different truths in different worlds. Organization Science, 21(1), 97–114. https://doi.org/10.1287/orsc.1080.0409
*Miller, K. D., Zhao, M., & Calantone, R. J. (2006). Adding interpersonal learning and tacit knowledge to March’s exploration-exploitation model. Academy of Management Journal, 49(4), 709–722. https://doi.org/10.5465/amj.2006.22083027
Morrison, E. W. (1993). Newcomer information seeking: Exploring types, modes, sources, and outcomes. Academy of Management Journal, 36(3), 557–589. https://doi.org/10.2307/256592
Morrison, E. W. (2002). Newcomers’ relationships: The role of social network ties during socialization. Academy of Management Journal, 45(6), 1149–1160. https://doi.org/10.2307/3069430
Nifadkar, S. S., & Bauer, T. N. (2016). Breach of belongingness: Newcomer relationship conflict, information, and task-related outcomes during organizational socialization. Journal of Applied Psychology, 101(1), 1–13. https://doi.org/10.1037/apl0000035
Nifadkar, S., Tsui, A. S., & Ashforth, B. E. (2012). The way you make me feel and behave: Supervisor-triggered newcomer affect and approach-avoidance behavior. Academy of Management Journal, 55(5), 1146–1168. https://doi.org/10.5465/amj.2010.0133
Ployhart, R. E., & Vandenberg, R. J. (2010). Longitudinal research: The theory, design, and analysis of change. Journal of Management, 36(1), 94–120. https://doi.org/10.1177/0149206309352110
*Puranam, P., & Swamy, M. (2016). How initial representations shape coupled learning processes. Organization Science, 27(2), 323–335. https://doi.org/10.1287/orsc.2015.1033
*Raveendran, M., Puranam, P., & Warglien, M. (2022). Division of labor through self-selection. Organization Science, 33(2), 810–830. https://doi.org/10.1287/orsc.2021.1449
Reichers, A. E. (1987). An interactionist perspective on newcomer socialization rates. Academy of Management Review, 12(2), 278–287. https://doi.org/10.2307/258535
Resnick, M. (1994). Turtles, termites, and traffic jams: Explorations in massively parallel microworlds. Cambridge, MA: A Bradford Book.
*Roos, P., Gelfand, M., Nau, D., & Lun, J. (2015). Societal threat and cultural variation in the strength of social norms: An evolutionary basis. Organizational Behavior and Human Decision Processes, 129, 14–23. https://doi.org/10.1016/j.obhdp.2015.01.003
Schmidt, F. L., & Hunter, J. (2004). General mental ability in the world of work: Occupational attainment and job performance. Journal of Personality and Social Psychology, 86(1), 162–173. https://doi.org/10.1037/0022-3514.86.1.162
*Siggelkow, N., & Levinthal, D. A. (2003). Temporarily divide to conquer: Centralized, decentralized, and reintegrated organizational approaches to exploration and adaptation. Organization Science, 14(6), 650–669. https://doi.org/10.1287/orsc.14.6.650.24840
*Siggelkow, N., & Rivkin, J. W. (2006). When exploration backfires: Unintended consequences of multilevel organizational search. Academy of Management Journal, 49(4), 779–795. https://doi.org/10.5465/amj.2006.22083053
*Siggelkow, N., & Rivkin, J. W. (2009). Hiding the evidence of valid theories: How coupled search processes obscure performance differences among organizations. Administrative Science Quarterly, 54(4), 602–634. https://doi.org/10.2189/asqu.2009.54.4.602
Singh, R., & Teoh, J. B. P. (2000). Impression formation from intellectual and social traits: Evidence for behavioural adaptation and cognitive processing. British Journal of Social Psychology, 39(4), 537–554. https://doi.org/10.1348/014466600164624
Sluss, D. M., & Thompson, B. S. (2012). Socializing the newcomer: The mediating role of leader-member exchange. Organizational Behavior and Human Decision Processes, 119, 114–125. https://doi.org/10.1016/j.obhdp.2012.05.005
Smith, E. R., & Conrey, F. R. (2007). Agent-based modeling: A new approach for theory building in social psychology. Personality and Social Psychology Review, 11(1), 87–104. https://doi.org/10.1177/1088868306294789
*Tarakci, M., Greer, L. L., & Groenen, P. J. F. (2016). When does power disparity help or hurt group performance? Journal of Applied Psychology, 101(3), 415–429. https://doi.org/10.1037/apl0000056
*Tatarynowicz, A., Sytch, M., & Gulati, R. (2016). Environmental demands and the emergence of social structure: Technological dynamism and interorganizational network forms. Administrative Science Quarterly, 61(1), 52–86. https://doi.org/10.1177/0001839215609083
Tausch, N., Kenworthy, J. B., & Hewstone, M. (2007). The confirmability and disconfirmability of trait concepts revisited: Does content matter? Journal of Personality and Social Psychology, 92(3), 542–556. https://doi.org/10.1037/0022-3514.92.3.542
Vancouver, J. B., Tamanini, K. B., & Yoder, R. J. (2010). Using dynamic computational models to reconnect theory and research: Socialization by the proactive newcomer as example. Journal of Management, 36(3), 764–793. https://doi.org/10.1177/0149206308321550
Wang, M., Kammeyer-Mueller, J., Liu, Y., & Li, Y. (2015). Context, socialization, and newcomer learning. Organizational Psychology Review, 5(1), 3–25. https://doi.org/10.1177/2041386614528832
Wang, M., Zhou, L., & Zhang, Z. (2016). Dynamic modeling. Annual Review of Organizational Psychology and Organizational Behavior, 3(1), 241–266. https://doi.org/10.1146/annurev-orgpsych-041015-062553
*Wellman, N., Applegate, J. M., Harlow, J., & Johnston, E. W. (2020). Beyond the pyramid: Alternative formal hierarchical structures and team performance. Academy of Management Journal, 63(4), 997–1027. https://doi.org/10.5465/amj.2017.1475
Wilensky, U., & Rand, W. (2015). An introduction to agent-based modeling: Modeling natural, social, and engineered complex systems with NetLogo. Cambridge, MA: The MIT Press.
Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115–152. https://doi.org/10.1017/S0269888900008122
8
COMPUTATIONAL MODELING WITH SYSTEM DYNAMICS

Jeffrey B. Vancouver and Xiaofei Li
The point of initial congruence between the practitioner and the theorist lies in their sharing of a common body of symptoms defined by the practitioner as revealing an undesirable situation. The practitioner is concerned with improving the situation. The scientist is concerned, from that point on, with building a model of how the situation that produced the undesirable symptoms came into being.
(Dubin, 1976)
Although written well before the advent of positive psychology, the quote above, from the theory-building chapter in the Handbook of Industrial and Organizational Psychology, nicely describes the role of theory in I-O psychology. That is, theory is meant to explicate the processes that produce effects relevant to practitioners and, thus, presumably organizations and the people they employ. Dubin (1976) subsequently notes that theoretical models are made up of units, which are things or variables, and the laws of interaction among the units that apply within boundaries of application to create various system states (e.g., a set of symptoms). Typically, the descriptions of the units, laws, boundaries, and system states are made verbally. Dubin also notes that once these are laid out, the theorist should derive propositions by applying logic to the model. These propositions, once combined with operational definitions (i.e., empirical indicators) of the constructs, become hypotheses and are often presented in path models (e.g., Locke & Latham, 2004). As hypotheses and path models, the theory can then be subject to additional empirical investigations beyond the existing empirical work that is typically the basis of the laws of interaction (Dubin, 1976; Locke & Latham, 2004).
Unfortunately, several weak links occur during this process of theory building and assessing. For example, many note (e.g., Busemeyer & Diederich, 2010; Cronin, Gonzalez, & Sterman, 2009; Farrell & Lewandowsky, 2010; Hintzman, 1990; Vancouver, Wang, & Li, 2020) that humans often make errors when trying to understand how a set of symptoms comes about (i.e., identifying units and the laws of interaction), or what a set of processes might produce in terms of behavior or symptoms (i.e., how the laws of interaction lead to the system states). Moreover, the research methods literature is replete with concerns regarding the interpretation of empirical investigations, often owing to a lack of regard for the dynamics of the processes (e.g., DeShon, 2013; Hanges & Wang, 2012; Wang, Zhou, & Zhang, 2016).

Fortunately, there is a tool that can be very useful when it comes to understanding, explicating, and evaluating theory: computational modeling. In particular, computational modeling allows one to describe the laws of interaction among units formally with either mathematical or logical statements (Adner, Pólos, Ryall, & Sorenson, 2009; Davis, Eisenhardt, & Bingham, 2007). When the units are things like individuals or groups, agent-based models may be the best platform for representing the theory computationally (see Tang & Liu, Chapter 7). However, when the units are variables or constructs, which is the case for most of our theories, then a long-standing computational modeling platform, called system dynamics modeling, may be more appropriate for rendering a theory. In either case, the formal representations allow one to see how the units and laws interact over time to produce, and thus possibly explain, how the phenomena come into being. That is, it helps one (a) think about the phenomena, (b) determine if the units and laws proposed can explain the phenomena, (c) derive associations and other propositions, (d) examine the diagnostic value of empirical protocols, and (e) assess the fit of the model or its competitors to observations.

In this chapter, we focus on the system dynamics modeling perspective and how to create computational models of the types of theories I-O psychologists use to understand how phenomena "came into being." Thus, the purpose of the current chapter is not so much to sell the idea of computational modeling. Rather, it is to introduce a modeling platform that these authors have found to be very useful and easy to use.

System Dynamics
The system dynamics approach to computational modeling was introduced by Jay Forrester in a 1958 paper in the Harvard Business Review. The approach assumes that systems behave over time according to processes that can be represented mathematically and simulated via computer programs (i.e., computational models). The systems and variables of primary interest to Forrester were at the organizational level (e.g., inventories, R&D expenditures, profits, market
shares), which was an unusual target of computational representation at the time, though not unheard of (Cooper, 1951; Simon, 1952). Indeed, system dynamics provided an approach to the more general system view that was emerging and still is central to organizational science (Katz & Kahn, 1978; Millet, 1998; Scott, 1998). The system view holds that phenomena arise from a system composed of parts interacting over time (Vancouver, 2013). Moreover, like Herbert Simon but independent of him, Forrester was interested in how dynamic processes could undermine human decision making and account for endogenous (i.e., within-system) change over time (Richardson, 1991; Sterman, 2000). For example, physiological opponent processes within an individual tend to return one to baseline emotional levels independent of external or behavioral (e.g., coping) activities (Vancouver & Weinhardt, 2012). If one attributes emotional regulation to only behavioral and external processes, errors in understanding will emerge. Also, like Powers' (1978) work on human behavior, Forrester realized the importance of identifying the variables with inertia (i.e., that accumulate or otherwise retain their values over time), which system dynamics calls stocks, like the stock in an inventory, or level variables, like the level of water in a lake. The flows into and out of level variables are rates that may be affected by the level variable in question, as well as other level variables or auxiliary variables that do not have inertia. Together, these variables and how they affect each other represent the system of interest to the modeler.

Because of the system dynamics focus on computational models, Forrester and his protégés began creating computer programs that would facilitate model building and simulation. It is the latest incarnation of these programs, Vensim©, that we use for building and simulating the models we describe here. Ventana, which publishes Vensim, offers a free, downloadable version, VensimPLE©, for personal or educational use. To gain the most from this chapter, we suggest you download VensimPLE© and follow along with it as we build and describe some simple models here. More general mathematical modeling software, like MATLAB or R, can be used as well. However, in such generic platforms the learning curve is steeper, the building and debugging process more challenging, and the presentation of the model and results more daunting, given they were not designed solely for building and testing computational models that stay conceptually close to the kinds of constructs and laws of interaction typically developed and used in I-O psychology theories. Nonetheless, we provide the MATLAB code for the models built here in the appendix of code at the end of this book. If you are already proficient in R, Farrell and Lewandowsky's (2018) book on computational modeling uses that platform to provide sample code for models of cognition and behavior as well as many model fitting and parameter estimating procedures. Likewise, other chapters in this book focus on model fitting using R (Ballard & Neal, Chapter 10), for which they provide the code. However, in this chapter, we focus more on developing
an understanding of a phenomenon that the process of building and simulating models can provide. We find the Vensim platform invaluable toward that end.

Sample Dynamic Processes
As noted, system dynamics modeling uses an approach that is similar, but not identical, to the way theories are typically represented in I-O. That is, a system dynamics model can look a lot like a path diagram used to represent a set of constructs and how they influence each other. However, there are three key differences. First, system dynamics models are dynamic. That is, they are about how stuff changes (or not, when it seems it should). In contrast, path diagrams represent associations (i.e., covariance) among values of constructs, where the values are presumed to be based, at least somewhat, on the levels of the variables preceding the focal variable in the path diagram. One might simulate such path models by changing the values of the exogenous variables (first in the path) and using them to update the endogenous variables down the line. However, once the end of the line is reached, no values change. Given that I-O path models often begin with personality variables, which are assumed essentially stable over time, one cannot see how differences in personality translate into differences in constructs down the line. They simply are different. Indeed, path diagrams are nearly always tested with cross-sectional data. When one has true longitudinal data (i.e., repeated measures of all the constructs in the model over time), the path diagrams become cumbersome repetitions of constructs as a way of capturing the effect of change in one variable on another as well as the effect of the variable on itself over time (i.e., its inertia).

A second difference is that system dynamics models nearly always have feedback loops because dynamic phenomena rarely occur as open loops. That is, rarely does a change in a variable propagate through and out of a system (i.e., into the "open") without affecting the original variable. Indeed, von Bertalanffy's (1968) definition of systems was that they include variables affected by their own previous levels. In contrast, path diagrams assume that changes in a variable do not provide feedback to change the variable or any variable that affects it. Alternatively, they include feedback loops but do not provide a clear description of how they work. For example, Vancouver et al. (2020) showed that a simple function for self-efficacy based on adding back an effect for performance "predicted" runaway levels of self-efficacy and performance. That is, feedback dynamics adds a substantial layer of complexity to theorizing about behavioral phenomena that only becomes clear when one builds a model and simulates it.

Third, system dynamics models can include components (i.e., functions) that represent some presumed or theoretical process but that do not necessarily have or need an empirical indicator. This is an important element for understanding human behavior because of the latent quality of much of the processing involved. It frees up the theorist to think about the process unconstrained by concerns for
empirical indicators. Of course, seeking at least some such indicators and showing that they change or relate in ways consistent with the model will likely be relevant when it comes to validating the model (see Ballard et al., Chapter 20; Weinhardt, Chapter 9). Meanwhile, we work through a set of simple dynamic process examples that might be used as parts of models one might build to understand I-O phenomena. That is, we build these simple models, describe their generalizability, and discuss some I-O phenomena to which they might apply.

A Growth Model

Background
Growth models, particularly population growth, have long been a staple of dynamic modeling (Taber & Timpone, 1996). Such models begin by recognizing that the central variable of interest (e.g., population) is a variable with inertia, whether the population of interest was the number of employees in a profession or organization, the number of majors in a program, or the number of publications in an academic's vita. That is, its level does not change unless processes move the level. The processes can be one of two kinds: flows in or flows out. For a population of a species in an area, the number of births is a flow in, and populations can grow because of them. On the other hand, death is inevitable and the number of them is a flow out for a population. Depending on the boundary of the system, one might also consider immigration and emigration as flows in and out of the population.¹ However, for our example, we consider an isolated population, making these last two rates irrelevant.

In the previous paragraph, we described three variables that we might include in our model: population, births, and deaths. Moreover, we implied a simple equation involving these variables: population = births − deaths. This much is true. However, they are insufficient for computationally representing the process. For one thing, they do not provide a way to understand the role of time, and therefore change. Yet, change is what system dynamics is all about explaining. Indeed, as the chapter's initial quote states, theorists are interested in understanding how some condition or situation came about (Dubin, 1976). In this case, the condition of interest is the state of the population at any one time. One simple way to add time to the "population = births − deaths" equation is to make it into a discrete, difference function: populationₜ = population₀ + births − deaths. In natural language, this equation reads "the population at time t equals the population at time 0 plus the births minus deaths in the interim." An alternative way is to use a continuous, integration function like the following:

populationₜ = population₀ + ∫₀ᵗ (births − deaths) dt    (1)
In this representation, the equation is explicit that the population level changes from an initial level to a new level depending on the difference (i.e., the area under the curve) between the number of births and deaths between Time 0 and Time t. Because of Forrester's orientation, he assumed that change was continuous, and thus models should use the integration equation rather than the discrete difference equation. As a result, system dynamics modeling platforms use the integration function to represent variables with inertia. The differences between the equations are typically unimportant for most of the problems that system dynamics models tackle.

One common element of growth is that the level of the thing that is growing can change the rate at which the thing grows. In the case of populations, the birth rate, which is the number of births per 1000 people, might stay the same. However, if the level of the population grows, this per capita birth rate will result in a faster and faster increase in the population. Indeed, the growth rate will likely be exponential. To see this, let us build the model. That is, to get the most out of this chapter, we recommend downloading VensimPLE© and building the model as described next. Simply reading about it will not provide the experience needed to appreciate what the computational modeling approach offers.
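Although we build the models in Vensim, the arithmetic behind Equation 1 is easy to preview in code. The following minimal MATLAB sketch (an illustrative approximation, separate from the code in the appendix) applies Equation 1 using Euler integration with a one-year time step and the parameter values we enter into Vensim in the next section:

    % Euler-integration sketch of the population growth model (Equation 1).
    birth_rate = 0.045;    % births per person per year
    avg_lifespan = 33;     % years; deaths = POPULATION / average lifespan
    dt = 1;                % time step (years)
    T = 1000;              % final time (years)
    population = 1000;     % initial population
    history = zeros(T + 1, 1);
    history(1) = population;
    for t = 1:T
        births = birth_rate * population;
        deaths = population / avg_lifespan;
        population = population + dt * (births - deaths);
        history(t + 1) = population;
    end
    plot(0:T, history); xlabel('Time (Year)'); ylabel('POPULATION');
    % With these values the trajectory is exponential, ending near 2.2
    % billion, the figure reported for the Vensim simulation below.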
Building the model

Figure 8.1 shows the primary interface for Vensim, some important commands and where to activate them (in Calibri font), and the structure of the model in the Vensim software platform using system dynamics conventions (in Times New Roman font). To create this structure, click on the "Auxiliary variable" icon (has "A" in unlined box) just above the build window (large blank space) and then click in the build window. A textbox should appear. Type "birth rate" and hit enter. The words "birth rate" should appear in the build window. Click in the build window again, type "average lifespan," and click enter. Finally, type "initial population," and hit enter. As noted earlier, auxiliary variables do not have inertia. In contrast, the variable POPULATION is in a box (see Figure 8.1) because it is a variable with inertia (i.e., a level variable). By clicking on the "level" icon (has box with "T" in it as well as label "level"), you can type "POPULATION" to create the variable. Vensim uses boxes to visually distinguish level variables from auxiliary variables; however, the software is flexible enough to allow one to remove or add boxes or other shapes around variable labels. One can right-click on the variable label to see these options. They are useful because conventions within psychology journals are to put boxes around any variable that is a function of other variables in the model. For this reason, Cronin and Vancouver (2019) suggest capitalizing level variables to provide a way to distinguish them from auxiliary variables or constants.
FIGURE 8.1  Vensim interface with tools labeled in italics and population model in Times New Roman.

As noted earlier, level variables are affected by flows in, out, or both. In this case, births flow in to increase the level of the population and deaths flow out to decrease it. Given the focus on dynamics, the value of a flow is a rate of change. System dynamics encourages an explicit representation of these flows and their direction via the use of "rate" graphics (see Figure 8.1). To create them, click on the "rate" icon. Then click on a point to the left of the POPULATION box followed by a click within the POPULATION box. This creates 1) a little cloud to the left where you first clicked, 2) a double-lined arrow (i.e., a pipe) pointing at POPULATION, and 3) a box ready for you to type "births." Type "births" and then hit enter to give you the flow-in representation of births on POPULATION seen in Figure 8.1. For the flow out, click inside the POPULATION box first, click to the right of the POPULATION box, and then type "deaths" in the textbox that appears. In this case, the arrow points away from the level variable and to the cloud.

Besides the direction of flow, an important concept central to the system dynamics perspective is represented in the rate graphic. Forrester (1961) sought to represent the endogenous systems where the variables relevant to a process were represented, whereas the variables irrelevant were left out. However, systems are not closed (Richardson, 1991). They take information and materials (i.e., resources) from outside the system and deposit them beyond the system. The clouds represent the assumption that, within the boundary of the system being modeled, the availability of these resources is unlimited and the consequences of the deposits unimportant to the processes being considered. That is, they emerge from and disappear into a cloud. Of course, the assumption can be incorrect and thus motivate a revised model. For example, in some populations, the accumulation of dead bodies might lead to an increase in disease and thus a decrease in average lifespan or birth rates over time. In that case, the cloud to the right should be represented as a level variable whose value increases with deaths and which feeds back to determine other variables in the model. In this example, we assume rituals that obviate such a process (and apologize for having brought up such an unpleasant notion).

Meanwhile, there are some interesting dynamics going on in this example. In particular, the level of the population affects the number of births and deaths. This is represented via the arrows from POPULATION to births and deaths. To create these arrows, click on the "arrow" icon, then click on the "cause" variable followed by the "effect" variable (e.g., birth rate affects births). Do this to reproduce all the arrows shown in Figure 8.1. We should note that the words "cause" and "effect" can get complicated when dealing with feedback loops. For our purposes, a cause is any variable that is needed in the domain of the function determining the "effect." For example, a moderator, which presumably determines the degree of a cause's effect (Baron & Kenny, 1986), will be in the domain of the effect variable function and should be pointed at the effect variable, as opposed to the arrow from the cause to the effect as it is typically represented in path diagram models. Indeed, Vensim will not let an arrow point at an arrow. This is because the functions are built from the variables or constants pointing at the variable. An exception is the arrows pointing to or from the level variable. Despite this, deaths will be available in the domain of POPULATION and Vensim will expect you to use it.

Now we get to the math. To do that, click on the "f(x) equation" icon. All the labels should turn black because none have functions or values in them. Click on POPULATION. A dialog box like that shown in Figure 8.2 will appear. Notice that in the upper left-hand corner just below the name "POPULATION" is the word "level" as the "Type" of variable. This means the integration function is automatically applied to the domain (you may see "= INTEG (" to the left of the Equations textbox, but this depends on the version of Vensim you have). The integration function is ∫(x)dt, where x is whatever is in the equation textbox. In this case, "births–deaths" is in the Equation textbox. To get "births–deaths" into the Equation textbox, either type them in, or better, click in the Equations textbox and then on births in the "Variables" textbox, type "-", and then click on deaths in the Variables textbox. This ensures that the variables pointing at the effect variable are used in the simulations of the model and spelled correctly. This can be especially important if a variable label includes an operator (e.g., self-efficacy). Vensim will put quotes around the label so that it understands the operator is part
of a label as opposed to an operator (e.g., "self-efficacy" is a variable, not the function self minus efficacy).

FIGURE 8.2  Equations window.

You might also notice a keypad to the left of the variables list with logical operators like "AND" and "OR" as well as a list of functions to the left of the keypad. Vensim includes many preprogrammed functions to facilitate equation building, much like spreadsheet software (e.g., ABS for absolute value and EXP for exponential). Some functions, like the delay functions one can see in the list in Figure 8.2, are unique to the dynamic focus of Vensim. Online documentation lists the functions and what they can do (and what versions they are in). Many are financial or related to discrete materials distributions, which is a common content of computational models built by system dynamics modelers. Others are probabilistic (e.g., RANDOM NORMAL) and can be useful for adding noise to a model. One can even make one's own functions, called lookups, which pair a value with another value as desired. To make such a function, click the down arrow in the Type field (i.e., where "Level" is in Figure 8.3), select "Lookup," and then the "As Graph" button that appears to the right of the Sub-Type field. This brings up a two-column table that allows for the pairing of values. As they are entered, the function shows up in graphic form within the window. This feature can be used to represent a specific setup of conditions in some experiment one might be seeking to model, like in an ABBA design (e.g., Vancouver, Weinhardt, & Schmidt, 2010b), though we will not use it in this chapter.

Finally, you might notice that the Equations textbox has two parts. The second part is labeled "Initial Value." All level variables change from their previous state. At the beginning of a simulation, the level variable must have a state from which to change. That value goes here. It is population₀ from Equation 1 or
initial population from our structure. Thus, click within this initial value textbox and then click on initial population from the Variables textbox. Initial population should appear in the textbox.

You will now have used all the variables in the Variables textbox except POPULATION. POPULATION is in the list because it is a level variable. Vensim allows you to use level variables to affect themselves, which is why it puts it in the list, but it does not require that you use it. Otherwise (i.e., if a variable listed in the Variables textbox is not used), you will get an error message noting that not all the variables are used. This is a feature of the software design to facilitate model building. It does not want to assume that pointing the unused variable at the focal variable was the error without asking. In other versions of Vensim, saying you do not want the variable will automatically remove the arrow. In VensimPLE© that process will need to be done by clicking on the trashcan icon (see Figure 8.1, above build window near the center) and then on the arrowhead to remove an arrow. This can be done once out of the Equation dialog box. Indeed, to make sure you have entered everything correctly, click on the "Check Syntax" button at the bottom of the dialog box (see Figure 8.2). Any
errors found will be listed in the "Errors" textbox, or it will say "Equation OK," like it does in Figure 8.2. Once that is the case, you can click the "OK" button to close this dialog box. Clicking "OK" will also check the syntax. If the syntax is correct, the arrow from "initial population" to POPULATION should vanish or turn gray. This distinction is because initial population does not affect POPULATION beyond setting the initial value for it. That is, it is not involved in the dynamics of the model.

At this point, it is necessary to talk more about the boundaries of the system and processes being modeled. That is, system dynamics models have timeframes and, for labeling purposes, levels of granularity regarding time, depending on the purpose of the model. To set up these boundaries, one needs to click on the "Model" menu item at the top of Vensim (see Figure 8.1) and then on the "Settings" option (first one in the list) to bring up the dialog box in Figure 8.3.

FIGURE 8.3  Model settings window.

Several labeled textboxes appear, including INITIAL TIME, FINAL TIME, TIME STEP, Units for Time, and Integration Type. The first three are system variables that might be used in the model, which is why they are in all caps. The default is to start the simulation at zero and end at 100 after time steps of one. The units of these numbers are what you choose using the Units for Time field. They range from "Year" to "Second." In this case, year is the typical metric for considering population growth, so that is what is selected. One might need to scroll up after clicking the dropdown arrow to find it. Also, change the final time to 1000. We will be modeling 1000 years of population growth. Note that these can be changed later and that the units of time label choice does nothing to the math or simulations. On the other hand, the time metric should be known before entering rates and other values in the model, which is why we diverted your attention to this dialog box at this time. That is, we will be expressing birth rates and lifespans in years. Finally, if one clicks on the "Sketch" tab at the top of the dialog box (Figure 8.3), one will see "Show initial causes on model diagrams" as an option. If this is checked, the arrow from initial population to POPULATION will appear gray. If unchecked, the arrow disappears if the variable is only used as the initial value for a level variable.

Now that the unit of time has been specified, we can understand the types of numbers we might use in the model. For example, in the Paleolithic era, lifespans were about 33 years. Thus, click on the "f(x) equation" icon and then average lifespan. Type "33" in the Equation text box. Because this is not a level variable, there is only one textbox (i.e., the "Initial Value" textbox is gone) and there is no function preceding it (i.e., only the "=" sign remains). Also, there are no variables in the variable box. Indeed, because of this, the "Type" listed below the name is "Constant." However, we might be interested in the effect of different values for this constant. For example, according to the CIA, Monaco currently has the longest lifespan (89.4). To examine this in simulations, we will set up minimum, maximum, and increment values. Specifically, above the Equations
text box are three small textboxes labeled "Min," "Max," and "Incr." Enter "0" for the minimum amount, "100" for the maximum amount, and "1" for the increment. Click "OK."

Next, click on initial population. To avoid the chicken-and-egg problem, we will assume some non-zero initial population value. That is, we start the simulations with some given level of population. In this case, 1000 is a useful starting place because that is the population level used to calculate birth and death rates (e.g., numbers that are born per 1000 people in one year). Thus, we type 1000 into the Equation textbox as our minimum value. Meanwhile, the current population is about 7.7 billion. Vensim understands scientific notation. Enter "7.7e+09" in the maximum textbox to allow that maximum value. Vensim will work out an incremental value for itself. Now click on "birth rate" in the upper right-hand corner of the dialog box (i.e., see the section "Edit a Different Variable" in Figure 8.2). This will take you to the Equation dialog window for birth rate. The current average global birth rate is about 18.5 per 1000, or 0.018. The highest rate, about 0.045, is in sub-Saharan Africa. Thus, similar to average lifespan, we will enter a minimum of zero and a maximum of 0.1 in the respective textboxes, and 0.001 as the increment value. Enter a value of 0.045 in the Equation textbox.

Next, click on "births" from the "Edit a Different Variable" list. The number of births is a simple, multiplicative function of the birth rate and the population level (i.e., birth rate * POPULATION). For example, given the initial population value of 1000, the number of births at the beginning of the simulation should be 45 (i.e., 0.045 * 1000). This value is the rate of growth in the population level per year if there were no deaths. Dividing the population by its lifespan provides the number of deaths per year. This is the function that should be put in deaths (i.e., POPULATION/average lifespan). This is also the last variable needing a function. Click on "Check Model" at the bottom of the Equations dialog box (Figure 8.2) to make sure all is well. If you have an error, the software is pretty good at showing you what and where it is.

Before moving on to run simulations of this model, it is useful to reflect on what we did. We began with a basic dynamic notion that populations change as individuals are added to them (i.e., births) and subtracted from them (i.e., deaths). Yet, the rates (i.e., number per year) at which births and deaths occur are a function of the population as well as some aspects of the population (i.e., birth rates and lifespans). Thus, the population level provides feedback on itself to determine the rate at which it changes. Let us now consider the implications of that process.
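Before doing so, it is worth pausing on what the completed model contains. Written out in Vensim's equation notation, the model should read roughly as follows (the Document tool described later prints the equations in a similar form; exact formatting varies by version):

    POPULATION = INTEG(births - deaths, initial population)
    births = birth rate * POPULATION
    deaths = POPULATION / average lifespan
    birth rate = 0.045
    average lifespan = 33
    initial population = 1000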
Simulating the model

To run a simulation of a model once its math is entered, one can press the "play" button labeled "Simulate" or the "SyntheSim" button next to it, as marked in Figure 8.1. We suggest pressing the "SyntheSim" button. This brings up slider bars
for the constants and trajectories for the variables in the model (see Figure 8.4). Hovering the mouse over a variable will show a larger graphic of the trajectory. Clicking on a variable and then the graphic icon (see Figure 8.4) brings up the graphic shown in the figure.

FIGURE 8.4  Simulating the population model using SyntheSim.

Most impressively, moving the sliders provides quick, visual feedback regarding the behavior of the model, where behavior of the model refers to the changes in the variables over time that the starting values and structure of the model produce. In this case, one sees an exponentially increasing function for all the variables in the model. The pattern is driven by the positive difference between births and deaths of the accumulating population. In particular, the rate of deaths (i.e., 1/33 = 0.03) is below the rate of births (0.045), and thus the population value rises, which increases the number of births per year. The result is that the population reaches almost 2.2 billion over the 1000 years simulated, given the two parameters (i.e., birth rate and average lifespan) and a population starting value of 1000. Indeed, this is a classic case of runaway behavior, given a positive feedback loop between population and the rate of change to the population.
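A quick back-of-the-envelope check makes this runaway pattern unsurprising. The net per capita growth rate is r = birth rate − 1/average lifespan = 0.045 − 1/33 ≈ 0.0147, so each simulated year multiplies the population by roughly 1.0147. Compounded from a starting value of 1000, the population after 1000 years is approximately 1000 × (1.0147)^1000 ≈ 2.2 billion, matching the simulated value. (The exact figure depends on the time step, because the simulation compounds once per step.)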
Understanding and Debugging a Model

Vensim provides several tools to facilitate understanding a model or what might be wrong with a model. Most of these tools can be found along the left-hand edge of the Vensim window. Figure 8.5 shows the results of clicking on the top three (i.e., Causes Tree, Uses Tree, & Loops) when POPULATION is the focal
variable, as well as the "Table Time Down" window when POPULATION, then deaths, and then births are each selected and the "Table Time" icon is clicked.

FIGURE 8.5  Analysis tools for model evaluation and debugging.

The causes tree shows causes to a focal variable two layers back. In this case, POPULATION feeds back on itself via births and deaths. This can also be seen in the Uses Tree as well as the Loops windows. Often, however, models are much more complex with many layers of loops, making the Loops window useful and the causes and uses trees quite different. Note that some windows disappear when other icons or variables are selected. To bring them back to the forefront, click the Output Windows icon near the top left of the Vensim window (see Figure 8.1).

Perhaps the most useful tools, however, are the data table (see Figure 8.5) and the graphs (see Figure 8.4). We talk more about the graphs in subsequent models, but here we show the data table with the time series data going down. The Table icon displays the data where time goes to the right. As the data in the table show, POPULATION starts at 1000 (i.e., the initial population value), deaths at 30.303 (i.e., the death rate per 1000), and births at 45 (i.e., the birth rate per 1000). More importantly, it shows how the variables change over time. This can be very useful for making sure the variables change as desired. Often a floating-point error will occur when building and testing a model. This table may show that some variable is wildly oscillating and/or taking on negative values when it should not. Moreover, one can use the little clipboard icon in the top left-hand
corner of the table window to copy the contents of the table. Then one can open a spreadsheet (e.g., Excel) and paste the table contents for use in additional analysis. More sophisticated versions of Vensim allow one to create Excel or text files with similar information. Indeed, other versions have other tools like "Gaming," which allow one to step through a simulation one step at a time, changing variables along the way to observe how they change the behavior of the system. In VensimPLE, one can right-click on a constant while in SyntheSim to have it change from one specific value to another at a particular time point.

Sensitivity Analysis
What is often remarkable about dynamic behavior is the insensitivity of the behavior to different parameter values (Forrester, 1971). That is, the structure often drives the behavior, not the parameter values. One can see this with the initial population parameter. Specifically, while in "SyntheSim" mode, you can click on and move the slider for initial population to its far right-hand position. Recall, we set up this function to have a maximum of 7.7 billion to represent the population of humans on the earth today. What is interesting is that moving the slider has no impact on the pattern of behavior. That is, the trajectory remains exponentially increasing. That said, the values (i.e., scale) of the y-axis change dramatically depending on this starting value. Either way, though, the numbers become astronomical relative to the initial population. Had Tony Stark (a.k.a. Iron Man) known that the level of population did not change the trajectory of population, he might have convinced Thanos that eliminating half the universe's population would have no long-term impact on population. This can be seen in Figure 8.6, where we halved the population at year 900 in the simulation.

Yet, while some parameters have little effect on the behavior of the system, others can have a dramatic effect. For example, if the birth rate slider is moved to the current average birth rate of 0.018, the population, regardless of its starting value, decays asymptotically toward extinction. Though increasing lifespan has little effect, the same decay to extinction happens if average lifespan drops such that the rate of deaths exceeds the rate of births. This shows that the direction of change can matter (Cronin & Vancouver, 2019). Only if the two rates are the same will the population remain steady, because it would mean that the value within the integration function for POPULATION was zero. Of course, affecting these parameters in real life would not be easy, as many science fiction stories centering on attempts to control either birth rates or life expectancies illustrate, usually with disastrous consequences. The model also gets more interesting once feedback processes that affect birth and death rates are included (e.g., standards of living). In a subsequent section, we discuss a major constraint on population growth called carrying capacity.
FIGURE 8.6  Thanos' mistake. [POPULATION (0–1000 T) plotted over Time (Year), 0–1000.]
Lessons Learned
The population growth model illustrated several elements of the system dynamics perspective. First, feedback processes, which must involve a level variable, can easily create runaway behavior. Growth models often have this feature. For example, wealth is considered a function of income minus expenses plus, and this is the kicker, return on investment of one's wealth. Piketty (2014) showed that in the 1800s and before, this last component created extremely wealthy families as the wealth was passed on over generations. More recently, incomes have been the major source of wealth because organizations have used stock options as a method of compensation, and these stock values can exponentially increase as revenues exponentially increase via exponentially increasing markets. Add this to wealth via inheritance, and today we again have the same inequality that created unrest in the 1800s.

Second, via the population growth model, we saw some of the key types of processes and conventions used by system dynamics modelers. These include boxes to represent level variables; double-lined arrows for rate variables, with clouds for resources or deposits beyond the scope of the model (and assumed limitless); and single-lined arrows for causal processes that are often part of feedback loops, or for initial values, constants, or parameters affecting level and auxiliary variables. System dynamics modelers encourage the explicit representation of these elements; however, as a model becomes more complex, the elements can make a graphic representation of a model very busy, and they are
often not necessary or are discouraged by journals in the field. For example, in the population model case, one can build the model with just one level variable having the following function: POPULATION = 1000 + ∫[0.018 * POPULATION − POPULATION/33]dt. The resulting behavior will be identical, a fact the reader should confirm for themselves. Of course, if the parameters are not variables, they cannot be examined for sensitivity or fit to data (e.g., Vancouver, Tamanini, & Yoder, 2010a).

Indeed, a third lesson learned is that some parameters matter, and some do not. The SyntheSim feature allows one to examine the effects of parameters easily once the model has been built. More sophisticated parameter evaluation can occur via Monte Carlo analysis of one or more parameters, though this feature requires an upgrade from VensimPLE©. Vancouver, Li, Weinhardt, Purl, and Steel (2016) provide an example of this type of analysis when they examined the growth in performance that can happen when past performance is used to parse resources for future performance. Specifically, they found that this structure can create positively skewed distributions in performance that are commonly attributed to the attributes of an elite set of individuals (cf. O'Boyle & Aguinis, 2012).
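To confirm the equivalence claim above without Vensim, a few lines of MATLAB (ours) show that the structured and collapsed versions produce the same trajectory, because they are the same arithmetic:

    % Structured flows versus the collapsed, single-expression version.
    P1 = 1000; P2 = 1000; dt = 1;
    birth_rate = 0.018; avg_lifespan = 33;
    for t = 1:1000
        births = birth_rate * P1;                % structured model
        deaths = P1 / avg_lifespan;
        P1 = P1 + dt * (births - deaths);
        P2 = P2 + dt * (0.018 * P2 - P2 / 33);   % collapsed model
    end
    max(abs(P1 - P2))   % 0: the trajectories coincide exactly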
Limited Growth

The Vancouver et al. (2016) model used another concept from population ecology called carrying capacity. Carrying capacity refers to the limits of an environment to support a population. As the population approaches the carrying capacity, growth slows. In our original population model, carrying capacity was assumed to be an unlimited resource. Yet, Thanos' presumed concern was the suffering of individuals in populations that bump up against their carrying capacity (i.e., people starving because of lack of food). To represent carrying capacity, one could simply include a parameter that represents the level of a population that an environment can sustain. Calculating the ratio of the population to the carrying capacity and subtracting that value from one produces a value that approaches zero as the population approaches the carrying capacity. If used as a weight (i.e., a multiplier), this value will slow growth to zero when the population reaches carrying capacity (i.e., POPULATION = 1000 + ∫[(1 − POPULATION/carrying capacity) * (births − deaths)]dt). To illustrate Thanos' futile legacy, we reran the model with a higher birth rate (0.05) and a snap of Thanos' fingers at 700 years into the simulation. The trajectory is shown in Figure 8.7. The trajectory follows an S-shaped curve (also called an ogive), given it only approaches but does not exceed the carrying capacity of 100 million. After the finger snap, the population drops in half, only to return to the carrying capacity level after about 150 years.

FIGURE 8.7  Thanos' futile legacy. [POPULATION (0–100 M) plotted over Time (Year), 0–1000.]

Vancouver et al. (2016) used a similar function to describe how learning curves get their S-shape (i.e., start slow but then increase exponentially until learning again slows and eventually stops). The carrying capacity in that case represents knowing all there is to know (or reaching some physical limit on skill development).
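A minimal MATLAB version of this limited-growth process, including the finger snap at year 700 (our illustrative reconstruction; the parameter values follow the text), is:

    % Limited growth with carrying capacity and a "Thanos snap" at year 700.
    birth_rate = 0.05; avg_lifespan = 33;
    carrying_capacity = 1e8;               % 100 million
    P = 1000; dt = 1; T = 1000;
    history = zeros(T + 1, 1); history(1) = P;
    for t = 1:T
        births = birth_rate * P;
        deaths = P / avg_lifespan;
        P = P + dt * (1 - P / carrying_capacity) * (births - deaths);
        if t == 700
            P = P / 2;                     % the finger snap
        end
        history(t + 1) = P;
    end
    plot(0:T, history); xlabel('Time (Year)'); ylabel('POPULATION');
    % The trajectory traces an S-shaped curve, drops by half at year 700,
    % and returns to the carrying capacity after roughly 150 years.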
Learning and Calibration

Two related dynamic processes are learning and calibration. As it turns out, the previous model shows how to represent the dynamics of a learning curve (see also Vancouver et al., 2016). Here we present a simple model of the process of calibrating or learning a belief about some aspect of one's environment. Models with this element have been published in the I-O literature, including a model of learning expectancies and the uncertainties in the environment (Vancouver, Weinhardt, & Vigo, 2014) as well as a model of calibrating a self-efficacy belief to one's ability based on past performances (Vancouver et al., 2019). These models are based on the simple delta-learning rule (Anderson, 1995), which is the basis of connectionist models of learning used in artificial intelligence applications (Thomas & McClelland, 2008). The delta-learning rule compares a "supervisory signal" to the currently held value and moves the currently held value toward the supervisory signal value by some fraction, k. The thing learned is a level variable, given that learning is obtaining some relatively permanent value (Weiss, 1990).

In the example provided, which we adopted with some simplifications from the Vancouver et al. (2014) paper, the model represents the process of learning one's ability for a task. A belief regarding one's capability is referred to as self-efficacy (Bandura, 1997). We assume a percentile-like metric. That is, values range from 0 to .99 to represent one's capacity relative to others. In the model, one's initial self-efficacy belief in one's capacity for some performance is based on personality, which we set to 0.3 in this example. Capability was set to 0.7, so if self-efficacy is to be perfectly calibrated, it needs to travel from 0.3 to 0.7. The speed of that travel, often called the learning rate, is determined by k, which we set to 0.1. In the literature on learning, this value represents a relatively fast learner (Vancouver et al., 2014). We also assumed that effort was a direct, unweighted function of self-efficacy (i.e., effort = "self-efficacy") and that the supervisory signal included an observation of performance while accounting for effort (i.e., performance given effort; performance/effort) to determine the capability. We built this into the self-efficacy calibration function. Figure 8.8 shows the structure of the model, the self-efficacy function, changes in self-efficacy, and all the variables affecting self-efficacy. As can be seen, self-efficacy indeed travels from 0.3 to 0.7.

FIGURE 8.8  A calibration model with documentation and a causes graph strip.

Note that the function shown in a window labeled "Document" comes from clicking the "Document" or "Document All" icons on the left-hand side of the figure. This feature is useful for obtaining the model's code for presenting in the appendix of a modeling paper or while working through the model during construction and debugging. Meanwhile, interesting variants of this model might add noise or some form of systematic bias to the components of the supervisory signal, to see if they would create some miscalibration in the belief.
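A minimal MATLAB rendering of this calibration process (ours; it assumes, for illustration, that performance equals capability times effort, so that performance/effort recovers capability as the supervisory signal) is:

    % Delta-learning sketch of self-efficacy calibration.
    k = 0.1;                 % learning rate (a relatively fast learner)
    capability = 0.7;
    self_efficacy = 0.3;     % initial, personality-based belief
    T = 100;
    trace = zeros(T + 1, 1); trace(1) = self_efficacy;
    for t = 1:T
        effort = self_efficacy;            % effort tracks the belief
        performance = capability * effort; % assumed performance function
        signal = performance / effort;     % supervisory signal
        self_efficacy = self_efficacy + k * (signal - self_efficacy);
        trace(t + 1) = self_efficacy;
    end
    plot(0:T, trace); xlabel('Time'); ylabel('self-efficacy');
    % The belief travels asymptotically from 0.3 to 0.7, as in Figure 8.8.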
Goal Striving

Another common dynamic process is goal striving (Austin & Vancouver, 1996). This example represents the process by which resources are allocated to a task (Vancouver, Putka, & Scherbaum, 2005). For example, self-regulatory views of motivation assume that individuals allocate resources to a task until a goal for the task is reached (Diefendorff & Chandler, 2011). A very simple representation of this process is shown in Figure 8.9. In this case, most of the interesting action is in the function that determines the discrepancy, which is also shown in Figure 8.9. Specifically, using an "IF THEN ELSE" function, the TASK STATE is compared to the goal, and if the discrepancy is positive (i.e., > 0), then action is taken (i.e., discrepancy = 1). Otherwise, no action is taken (i.e., discrepancy = 0). The effect of action is determined by effectiveness, which is set to 0.5. That is, the change in TASK STATE is discrepancy times effectiveness. If discrepancy is one, TASK STATE changes one unit of effectiveness every two timesteps. If discrepancy is zero, there is no change to TASK STATE, which occurs at 40 "minutes" into the simulation (see Figure 8.9).

FIGURE 8.9  A goal-striving model with the discrepancy function and a causes graph strip.

The goal-striving structure, though simple, is perhaps the most used structure in system dynamics models. For Forrester (1961), the discrepancy function represents what he called a policy. Industrial dynamics, Forrester argued, is replete with numerous decision-making entities applying policies to what those entities are responsible for and are thus monitoring. Of interest to Forrester was how the constellation of entities, all with their policies, created productive or counterproductive processes for the organization. What can be difficult to discern without the support of computational models is how these processes arise and what might be done to move the counterproductive ones into productive ones.

At the individual level, theories of self-regulation tell a similar story. Vancouver (2008) has argued that acting, thinking, learning, and feeling are all processes that can be modeled with the basic goal-striving structure (see also Lord & Levy, 1994). However, what Forrester (1961) calls policy, Vancouver calls the self-regulatory agent. Thus, according to this perspective, complex, individual behavior arises from the constellation of agents within the individual. This perspective is highlighted in Vancouver et al.'s (2010b) paper on multiple-goal striving, Vancouver et al.'s (2014) paper where learning was added, and Ballard, Vancouver, and Neal's (2018) paper where the effect of differing deadlines across goals was considered. All these papers show complex, nonlinear behavior arising from a network of self-regulatory agents arranged in a specific, largely static structure.
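Before turning to applications, here is a minimal MATLAB sketch of the goal-striving loop just described (ours; the goal value of 20 is an assumption chosen so that, at 0.5 units per "minute," the goal is reached at minute 40, as in Figure 8.9):

    % Goal striving: an IF THEN ELSE policy closes the goal discrepancy.
    goal = 20; effectiveness = 0.5;
    task_state = 0; dt = 1; T = 60;        % time in "minutes"
    trace = zeros(T + 1, 1);
    for t = 1:T
        if goal - task_state > 0           % the IF THEN ELSE policy
            discrepancy = 1;               % act
        else
            discrepancy = 0;               % stop acting
        end
        task_state = task_state + dt * discrepancy * effectiveness;
        trace(t + 1) = task_state;
    end
    plot(0:T, trace); xlabel('Time (Minute)'); ylabel('TASK STATE');
    % TASK STATE climbs at 0.5 per minute and stops changing at minute 40.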
Applications of System Dynamic Modeling to I-O

The system dynamics platform has a philosophical perspective regarding social phenomena that has been vindicated in models built here and elsewhere. Here, we highlight some of those insights.

Resistance to Change
Forrester (1971) observed that complex systems are often resistant to change owing to their feedback loops and homeostatic mechanisms. Indeed, it does not take too many loops to find this type of behavior. For example, Vancouver et al. (2020) showed that the addition of a feedback loop that used performance to calibrate self-efficacy was sufficient to undermine goal-setting effects on motivation and performance, though delays in the restoration process make it appear
that goal setting is effective in the kind of short-term experiments typically used to test the intervention and theory. Indeed, it might take months before one would see a return to baseline following a goal-setting intervention.

Spurious Causal Inferences
Another feature of complex systems is what Forrester (1971) called "coincident symptoms," whereby effects of processes can appear to be causes of the processes. This, of course, is the third-variable and reciprocal causation problem that plagues causal inferences when using quasi- or non-experimental methods (James, Mulaik, & Brett, 1982; Shadish, Cook, & Campbell, 2002). An example of this issue can be found in the debate between Bandura (2012) and Vancouver (2012) regarding self-efficacy and performance. As noted earlier, self-efficacy is presumed to be influenced by performance as well as to influence performance (Bandura, 1997). Yet, this theorized reciprocal relationship rarely tempered interpretations of self-efficacy's influence on performance, despite research designs not up to the task of teasing apart the reciprocal effects (Lindsley, Brass, & Thomas, 1995; Shea & Howell, 2000). Meanwhile, designs that could tease apart the effects could still not tell the entire story (Sitzmann & Yeo, 2013). Rather, a computational model by Vancouver and Purl (2017) highlighted several moderating conditions that could lead to negative, positive, null, and curvilinear effects for self-efficacy on performance.

Ambiguous Causal Influence
Indeed, several scholars use the notions of feedback and systems to argue that causal influence is a spurious concept (e.g., Powers, 1978). A simple control system example highlights the idea. Specifically, when a driver or a car's cruise control maintains the speed of a car as it goes down the road, what are the causal factors at play? That is, what causes the car to pick up speed if it slows? One answer is the fact that it slows. Another is the additional gas to the engine. A third is the desire of the driver to maintain a speed, which is operationalized in either the cruise control setting or the mind of the driver. A fourth is whatever causes the car to slow down (e.g., a hill; wind). We suspect the gas to the engine would be the answer most intuitively assigned to the question of what causes the car to speed up. However, if the car's cruise control were very good (as most are), one would not measure any variance in car speed. Thus, nothing would covary with car speed because car speed would not vary. If one did have an instrument sensitive enough to pick up variations in car speed, one would see that engine revving was negatively related to speed. That is, the engine would engage more the slower the car was going. Thus, one might see an association of engine rev with car speed, but
Thus, one might see an association of engine rev with car speed, but it would be a negative one. Models of causation that use association as a key to assigning causation will be misled by this finding.
An illustration of the issue as applied to I-O is a set of models by Vancouver et al. (2010a) on information seeking during socialization. In one of the models, they showed that information seeking is both positively and negatively related to knowledge. That is, the model represents the idea that individuals seek to reduce uncertainty by seeking information. They detect uncertainty when they lack information or have reason to believe that their current state of knowledge is inadequate. If the information seeking is successful at reducing the uncertainty (i.e., provides the knowledge), then seeking stops. The inferential interpretations regarding causation are likely to get the empiricist in trouble without a good understanding of the underlying dynamics.
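The cruise-control argument is easy to reproduce in a toy simulation. The sketch below assumes a proportional controller that adjusts the throttle in proportion to the gap between a set point and current speed; the parameter values are illustrative, and the sketch is not drawn from any published model.

import numpy as np

rng = np.random.default_rng(0)

# Toy cruise control: a proportional controller holds speed near a set point
# while random disturbances (hills, wind) push the car off target.
set_point, gain = 100.0, 0.9
speed = set_point
throttle_log, speed_log = [], []

for _ in range(1000):
    throttle = gain * (set_point - speed)          # engine engages more when slower
    speed = speed + throttle + rng.normal(0, 2.0)  # disturbance each step
    throttle_log.append(throttle)
    speed_log.append(speed)

print("SD of speed:", round(float(np.std(speed_log)), 2))  # speed stays near the set point
r = np.corrcoef(speed_log[:-1], throttle_log[1:])[0, 1]
print("corr(speed, subsequent throttle):", round(float(r), 2))  # strongly negative

The engine's output correlates negatively with speed even though the engine is precisely what keeps the car moving fast, which is exactly the pattern that misleads association-based causal inference.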
Conclusion
As intimated earlier, the simple examples we have provided have been or could be included in many models of phenomena of interest to I-O psychologists (Weinhardt & Vancouver, 2012). For example, Vancouver et al. (2016) turned the classic selection model (Cortina & Luchman, 2013) from a static heuristic into a dynamic model that could explain positively skewed distributions of performance. Likewise, Vancouver and Purl (2017) used a computational model to clarify how self-efficacy can both positively and negatively affect motivation and performance. Zhou, Wang, and Vancouver (2019) formalized core elements of leadership theories into a working, dynamic model. Gee, Neal, and Vancouver (2018) challenged existing theory regarding the process by which individuals set goal levels for themselves. Perhaps most relevant, Vancouver et al. (2020) described the process of taking an existing, static model of motivation and translating it into a dynamic computational model, learning many lessons along the way. We suspect that much can be learned from attempting to translate all of the theories I-O psychologists use into working, dynamic models of change. Such efforts will (a) increase understanding dramatically, (b) provide clear directions in terms of empirical evidence needed to confirm or clear up ambiguities regarding processes, (c) lead to the integration and thus reduction of theoretical models, and (d) provide the tools for addressing the symptoms of applied problems or taking advantage of unexplored opportunities. Let us get to it.
Note
1. One might substitute hires for births, turnover for deaths, and transfers for immigration and emigration if employees in an organization are the population of interest. Promotions in and out might be relevant for positions or levels in an organization (e.g., first-level supervisors).
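In stock-and-flow terms, the substitution in the note amounts to treating headcount as the stock and the personnel movements as its flows. A sketch of the corresponding rate equation, with illustrative symbols:

\frac{d\,\mathrm{Headcount}}{dt} = \mathrm{hires}(t) - \mathrm{turnover}(t) + \mathrm{transfers\_in}(t) - \mathrm{transfers\_out}(t)

Headcount at any time is then the accumulation (integral) of these flows from the initial staffing level, exactly parallel to a population accumulating births, deaths, and migration.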
References
Adner, R., Pólos, L., Ryall, M., & Sorenson, O. (2009). The case for formal theory. Academy of Management Review, 34, 201–208.
Anderson, J. A. (1995). An introduction to neural networks. Cambridge, MA: MIT Press.
Austin, J. T., & Vancouver, J. B. (1996). Goal constructs in psychology: Structure, process, and content. Psychological Bulletin, 120(3), 338–375.
Ballard, T., Vancouver, J. B., & Neal, A. (2018). On the pursuit of multiple goals with different deadlines. Journal of Applied Psychology, 103, 1242–1264.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Bandura, A. (2012). On the functional properties of perceived self-efficacy revisited. Journal of Management, 38(1), 9–44.
Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173.
Busemeyer, J., & Diederich, A. (2010). Cognitive modeling. Thousand Oaks, CA: SAGE.
Cooper, W. W. (1951). A proposal for extending the theory of the firm. Quarterly Journal of Economics, 65, 87–109.
Cortina, J. M., & Luchman, J. N. (2013). Personnel selection and employee performance. In N. W. Schmitt & S. Highhouse (Eds.), Handbook of psychology, industrial and organizational psychology (Vol. 12, pp. 143–183). Hoboken, NJ: John Wiley & Sons, Inc.
Cronin, M., & Vancouver, J. B. (2019). The only constant is change: Expanding theory by incorporating dynamic properties into one’s models. In S. E. Humphrey & J. M. LeBreton (Eds.), The handbook for multilevel theory, measurement, and analysis. Washington, DC: American Psychological Association.
Cronin, M. A., Gonzalez, C., & Sterman, J. D. (2009). Why don’t well-educated adults understand accumulation? A challenge to researchers, educators, and citizens. Organizational Behavior and Human Decision Processes, 108, 116–130.
Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. The Academy of Management Review, 32, 480–499.
DeShon, R. P. (2013). Inferential meta-themes in organizational science research: Causal inference, system dynamics, and computational models. In N. W. Schmitt & S. Highhouse (Eds.), Handbook of psychology: Industrial and organizational psychology (Vol. 12, pp. 14–42). Hoboken, NJ: John Wiley & Sons, Inc.
Diefendorff, J. M., & Chandler, M. M. (2011). Motivating employees. In S. Zedeck (Ed.), APA handbook of industrial and organizational psychology (Vol. 3, pp. 65–135). Washington, DC: American Psychological Association.
Dubin, R. (1976). Theory building in applied areas. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 17–39). Chicago, IL: Rand-McNally.
Farrell, S., & Lewandowsky, S. (2010). Computational models as aids to better reasoning in psychology. Current Directions in Psychological Science, 19, 329–335.
Farrell, S., & Lewandowsky, S. (2018). Computational modeling of cognition and behavior. New York: Cambridge University Press.
Forrester, J. W. (1958). Industrial dynamics: A major breakthrough for decision makers. Harvard Business Review, 36(4), 37–66.
Forrester, J. W. (1961). Industrial dynamics. Cambridge, MA: MIT Press.
Forrester, J. W. (1971). Counterintuitive behavior of social systems. Technological Forecasting and Social Change, 3, 1–22.
Gee, P., Neal, A., & Vancouver, J. B. (2018). A formal model of goal revision in approach and avoidance contexts. Organizational Behavior and Human Decision Processes, 146, 51–61.
Hanges, P., & Wang, M. (2012). Seeking the Holy Grail in organizational science: Uncovering causality through research design. In S. W. J. Kozlowski (Ed.), The Oxford handbook of organizational psychology (pp. 79–116). New York: Oxford University Press.
Hintzman, D. L. (1990). Human learning and memory: Connections and dissociations. Annual Review of Psychology, 41, 109–139.
James, L. R., Mulaik, S. A., & Brett, J. M. (1982). Causal analysis: Assumptions, models, and data. Beverly Hills, CA: SAGE.
Katz, D., & Kahn, R. L. (1978). The social psychology of organizations. New York: Wiley & Sons.
Lindsley, D. H., Brass, D. J., & Thomas, J. B. (1995). Efficacy-performance spirals: A multilevel perspective. Academy of Management Review, 20, 645–678.
Locke, E. A., & Latham, G. P. (2004). What should we do about motivation theory? Six recommendations for the twenty-first century. Academy of Management Review, 29, 388–403.
Lord, R. G., & Levy, P. E. (1994). Moving from cognition to action: A control theory perspective. Applied Psychology, 43, 335–367.
Millett, B. (1998). Understanding organizations: The dominance of systems theory. International Journal of Organisational Behaviour, 1, 1–12.
O’Boyle, E., & Aguinis, H. (2012). The best and the rest: Revisiting the norm of normality of individual performance. Personnel Psychology, 65, 79–119.
Piketty, T. (2014). Capital in the twenty-first century. Cambridge, MA: Harvard University Press.
Powers, W. T. (1978). Quantitative analysis of purposive systems: Some spadework at the foundations of scientific psychology. Psychological Review, 85, 417–435.
Richardson, G. P. (1991). Feedback thought in social science and systems theory. Philadelphia, PA: University of Pennsylvania Press.
Scott, W. R. (1998). Organizations: Rational, natural, and open systems (4th ed.). Upper Saddle River, NJ: Prentice Hall.
Shadish, W., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Shea, C. M., & Howell, J. M. (2000). Efficacy-performance spirals: An empirical test. Journal of Management, 26, 791–812.
Simon, H. A. (1952). On the application of servomechanism theory in the study of production control. Econometrica, 20, 247–268.
Sitzmann, T., & Yeo, G. (2013). A meta-analytic investigation of the within-person self-efficacy domain: Is self-efficacy a product of past performance or a driver of future performance? Personnel Psychology, 66, 531–568.
Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. New York, NY: McGraw-Hill.
Taber, C. S., & Timpone, R. J. (1996). Computational modeling. Thousand Oaks, CA: SAGE.
Thomas, M. S. C., & McClelland, J. L. (2008). Connectionist models of cognition. In R. Sun (Ed.), Cambridge handbook of computational psychology (pp. 23–58). New York: Cambridge University Press.
Vancouver, J. B. (2008). Integrating self-regulation theories of work motivation into a dynamic process theory. Human Resource Management Review, 18, 1–18.
Vancouver, J. B. (2012). Rhetorical reckoning: A response to Bandura. Journal of Management, 38, 465–474.
Vancouver, J. B. (2013). Systems theory of organizations. In E. Kessler (Ed.), Encyclopedia of management theory (Vol. 2, pp. 521–523). Thousand Oaks, CA: Sage.
Vancouver, J. B., Li, X., Weinhardt, J. M., Purl, J. D., & Steel, P. (2016). Using a computational model to understand possible sources of skews in distributions of job performance. Personnel Psychology, 69, 931–974.
Vancouver, J. B., & Purl, J. D. (2017). A computational model of self-efficacy’s various effects on performance: Moving the debate forward. Journal of Applied Psychology, 102, 599–616.
Vancouver, J. B., Putka, D. J., & Scherbaum, C. A. (2005). Testing a computational model of the goal-level effect: An example of a neglected methodology. Organizational Research Methods, 8, 100–127.
Vancouver, J. B., Tamanini, K. B., & Yoder, R. J. (2010a). Using dynamic computational models to reconnect theory and research: Socialization by the proactive newcomer example. Journal of Management, 36, 764–793.
Vancouver, J. B., Wang, M., & Li, X. (2020). Translating informal theories into formal theories: The case of the dynamic computational model of the integrated model of work motivation. Organizational Research Methods, 23(2), 238–274.
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15, 602–623.
Vancouver, J. B., Weinhardt, J. M., & Schmidt, A. M. (2010b). A formal, computational theory of multiple-goal pursuit: Integrating goal-choice and goal-striving processes. Journal of Applied Psychology, 95, 985–1008.
Vancouver, J. B., Weinhardt, J. M., & Vigo, R. (2014). Change one can believe in: Adding learning to computational models of self-regulation. Organizational Behavior and Human Decision Processes, 124, 56–74.
von Bertalanffy, L. (1968). General systems theory: Foundations, development, and application. New York: George Braziller.
Wang, M., Zhou, L., & Zhang, Z. (2016). Dynamic modeling. Annual Review of Organizational Psychology and Organizational Behavior, 3, 241–266.
Weinhardt, J. M., & Vancouver, J. B. (2012). Computational models and organizational psychology: Opportunities abound. Organizational Psychology Review, 2, 267–292.
Weiss, H. M. (1990). Learning theory and industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 1, 1st ed., pp. 171–221). Palo Alto, CA: Consulting Psychologists Press, Inc.
Zhou, L., Wang, M., & Vancouver, J. B. (2019). A formal model of leadership goal striving: Development of core process mechanisms and extensions to action team context. Journal of Applied Psychology, 104, 388–410.
Additional Resources
Vancouver and Weinhardt (2012); Vensim tutorials (Vensim.com).
9
EVALUATING COMPUTATIONAL MODELS
Justin M. Weinhardt, PhD
I think the introduction of computational models to our field presents us with a great opportunity to conduct our science differently. Heidegger (1927/1996) proposes that most of the time we are absorbed in practice without questioning ourselves and the things in our world. However, when there is some significant breakdown or disruption, we question things either temporarily or completely and start to reflect on our being and its connection with our practice. Similarly, Kuhn (1962/2012) discusses that most of the time we operate under “normal science,” where we are problem solving under the dominant paradigm. However, when some anomaly is discovered, we as scientists engage in “revolutionary science,” where we conduct research to bring the anomaly into the paradigm or adopt a new paradigm. For Kuhn, these anomalies were new findings that needed to be reconciled, often by theory. However, I think the anomaly (Kuhn) or disruption (Heidegger) we have been presented with is not a new finding but a method. Computational models are a disruption in our science and should make us confront how we practice our science. We should not let this disruption go to waste. Therefore, my criteria here for evaluating computational models might seem onerous. I chose these criteria by integrating a diverse set of recommendations from philosophy of science, psychology, and mathematical modeling with the goal of creating the most useful scientific method of evaluation. My larger hope is that the criteria for evaluating models are considered so useful that they are adopted by researchers for verbal theories. At the very least, if those of us who are modelers conduct our science differently, the disruption to normal science will be even more pronounced, and possibly permanent.
How should we evaluate computational models?
As empirical researchers, we might hope that there is some statistical test that tells us whether a model is good or bad. We examine the goodness-of-fit statistic, and if the statistic falls within some predetermined range considered a standard for good fit, we report that our model has good fit. This is in line with our current practice of science. Sometimes we might compare different models against one another and decide that the model with the better fit is the better theory. The process of model evaluation is simple, neat, and complete. Now, I am not saying that fitting computational models is easy or simple (see Ballard et al., this volume, and Myung, 2003, for quantitative guides). However, when evaluating models using goodness of fit, our evaluative decision is easy. The fit statistics do our thinking for us (Gigerenzer, 2004).
I propose that fit statistics are not sufficient for evaluating computational models. To back up this provocative proposal, I will outline a framework for evaluating computational models that goes beyond goodness-of-fit statistics and parameter estimation. Although provocative, my proposal is deeply rooted in scholarly work in philosophy, psychology, and mathematical psychology. This framework does not abandon all quantitative metrics but downplays their role.
The evaluation of computational models is not easy, and no statistic can do our thinking for us. Evaluation is a process that requires purposeful work. Evaluation of computational modeling occurs during the building of the model, after the model has been produced, during the review process, and once it is out in our scientific community. The work of evaluation requires two roles: the modeler and the evaluator. The modeler is the person(s) who developed the model, and the evaluator is the person(s) evaluating the usefulness and validity of the model. The work of the evaluator can only be successfully accomplished if the modeler did their necessary work. The work of the modeler is to specify and justify their model to the full extent of their ability, and I will specify throughout this chapter what that entails. In addition, I will be explicit about the work evaluators need to do, which I specify in italics. In the simplest form, the modeler submits their paper to a journal and the evaluator evaluates the model and the paper as a reviewer. So, as you read this chapter, think both about your role as the person developing a model and your role as a person evaluating a model for a journal. Now, of course, a modeler evaluates their own model; they adopt both roles. I believe that if modelers adopted both roles and did the work of the evaluator, they would have more specific, justified, and valid models. However, as has been well established, we are beholden to various biases, such as confirmation bias, which necessitates outside evaluation. Science is work that happens in public, and a model that does not stand up to scientific scrutiny has no scientific value (Popper, 1959/2002). This is why the role of the evaluator during the review process is so important: they provide the scrutiny that shapes the model that is ultimately presented to the world. Then, as readers, we engage in some sort of evaluation process.
Realistically, it is unlikely to be as specific as what is outlined here, but we evaluate the model and the paper nonetheless.
I believe evaluation of scientific theories (e.g., computational models) must be a quest toward finding the most well-justified and useful model. We should not be concerned with the truth of the model but rather with whether the model is justified and more useful than alternative models in the literature. This is true whether it is a computational or verbal theory. I firmly believe that what I have to say in this chapter regarding computational models will improve our verbal theory evaluation as well. The approach to scientific discovery and evaluation I take here aligns with pragmatist philosophy (James, 1907/1975; Peirce, 1868; Dewey, 1930). Using the Model Evaluation Framework (MEF), a well-justified and useful model is one that (1) is logically consistent, (2) makes crucial predictions, and (3) is generalizable. Before I get into the framework, I must first deal with the general epistemological issue of fallibility, which is integral to how we evaluate computational models.
Computational Models and Fallibility
When receiving an award for innovations in the field of computational modeling, it might seem strange to title your acceptance speech All Models Are Wrong, but that is exactly what John Sterman (2002) titled his speech. Sterman is not alone in thinking this way about computational models. Lewandowsky and Farrell (2010) propose that if a model fits the data, it is sufficient to explain the data but not necessary to explain the data. What this means is that the model is not the unique explanation of the data. It is just one possible model in competition with a finite number of known models and an infinite number of unknown models.
How can all models be wrong if computational models are more transparent, precise, and logically consistent than verbal theories, as claimed by a number of authors, including myself (Adner et al., 2009; Weinhardt & Vancouver, 2012)? One answer is that all verbal theories are wrong too. In fact, a second, more perilous answer is that we can never know anything (Descartes, 1637/1999). Descartes claimed that we must take a stance of absolute doubt regarding sense experience and beliefs, which forms the basis of the skeptical stance in science. From this stance, not only are all models wrong; there never will be a model that is right, and we are not justified in our belief in or use of any model. We must always remain skeptical.
Peirce (1868), one of the founding philosophers of pragmatism, proposed a solution to Cartesian skepticism. Peirce proposed the concept of fallibilism, which is the idea that no belief or sense experience is so well justified by evidence that it could not be proven false. Therefore, modelers and those who evaluate models need to take a stance of fallibility: a well-justified model could be proven wrong, but until it is proven wrong it can be scientific and useful.
So, it is not that all models are wrong but rather that any model has the capacity to be proven wrong. In fact, according to Popper (1963/2014), this ability to be proven wrong (i.e., falsifiability) is what makes a model scientific. A well-justified model has a level of verisimilitude, which is its partial truth value or its truth-likeness, according to Popper. A model cannot be true, but it can be closer to the truth than other models. The level of verisimilitude of a model is related to the model’s usefulness (Lewandowsky & Farrell, 2010). As Meehl (1990) notes, we can continue to make changes to a theory (i.e., our fallible belief), but the theory is useful because it makes strong predictions that other theories cannot make and that can be corroborated with evidence (i.e., verisimilitude). When we evaluate models, these concepts need to inform our evaluative stance.
Now that I have proposed the stance we should take toward model evaluation, I will lay out the first phase of the MEF. Examining the logical consistency of a model is a necessary first step because it informs and leads to all other phases of evaluation. The first phase of the MEF is lengthy and effortful work for both the modeler and the evaluator.
Model Evaluation Framework: Logical Consistency
Natural language can be ambiguous, and our disagreements about things in the world stem from our failure to be clear with our natural language (Wittgenstein, 1953/2009). In addition, research has shown that individuals do not understand dynamic relationships (Cronin et al., 2009; Weinhardt et al., 2015). These are two of the reasons why we need computational models (Kleinmuntz, 1990; Vancouver & Weinhardt, 2012; Weinhardt & Vancouver, 2012). Computational models aid us in examining the relationships between clearly defined variables and how those relationships might change over time. However, computational models are not a panacea against logical inconsistency and poor operationalization. Computational models can help us see the failures of our reasoning, but the model is built by humans and built using human language.
Definitions and Measurement
The variables and parameters in the model must be clearly defined (Davis et al., 2007). One of the goals of computational models is to reduce ambiguity regarding what we are studying and proposing. In our verbal theorizing, there might be much disagreement about what exactly motivation, job satisfaction, learning, or engagement is. However, for a model, the definition of each variable must be clear and precise; this means that one variable could not be confused with other related constructs. Therefore, these variables will likely be simpler than what is often present in the literature. In addition, the modeler must specify their measurement, the range of possible values, and the justification for those values.
This means that the modeler must specify where these values and units of measurement come from and why they were chosen over other values. For example, Vancouver and Purl (2017) used a table to clearly outline the parameters in the model, the ranges for those parameters, and the values used in the simulations. One of the parameters in their model of self-efficacy is ability. Ability can mean many things; it could mean general human capital, it could mean intelligence, or it could mean performance. In their model, ability is closer to performance, is defined as the rate of task progress, and ranges between .02 and .08. Therefore, knowing how a parameter is defined and scaled is central to interpreting the model, and as Rahmandad and Sterman (2012) outline, this type of reporting is needed for transparency and, relatedly, for the ability of others to reproduce the model. In addition, reporting the range of values and the values used in simulations is necessary because these decisions (i.e., the numbers used in the simulation) influence the behavior and results of the model (Sterman, 2000). Therefore, the modeler needs to justify why these specific values were chosen over other values, specifying their relation to the psychological representation.
The work of the evaluator is to make sure that variables and parameters are clearly defined. They must also evaluate whether the units of measurement and the range of values have been specified and justified.
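In practice, this kind of parameter documentation can live next to the model code itself. Below is a minimal sketch of one way to do it; the Parameter structure and its field names are my own invention, and only the ability definition and its .02–.08 range come from the chapter's description of Vancouver and Purl (2017).

from dataclasses import dataclass

@dataclass(frozen=True)
class Parameter:
    name: str
    definition: str      # what the construct means in this particular model
    units: str
    low: float           # lower bound of the simulated range
    high: float          # upper bound of the simulated range
    justification: str   # why this range rather than another

# Ability as described in the chapter: rate of task progress, range .02-.08.
# The units and justification fields here are illustrative paraphrases.
ABILITY = Parameter(
    name="ability",
    definition="rate of task progress",
    units="proportion of the task completed per time step",
    low=0.02,
    high=0.08,
    justification="spans slow to fast performers without trivializing the task",
)

print(f"{ABILITY.name}: [{ABILITY.low}, {ABILITY.high}] {ABILITY.units}")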
Equations and Algorithmic Rules
Now that we have evaluated the definitions of constructs and their measurement, we must evaluate the relationships between constructs. Modelers need to present the full specification of their model, which means that all equations, algorithmic rules, and assumptions need to be presented (Rahmandad & Sterman, 2012). These are the representation of the theoretical processes (Davis et al., 2007; Myung, 2000). In addition, the modeler needs to present their justification for why these equations were chosen, why these algorithmic rules have been used to model some phenomenon, and the basis of their assumptions. The modeler must present the reasons why and how constructs interact with one another. This is the model’s logic.
In these early days of computational modeling, modelers must also present their justification and mathematics in plain language. This could mean an appendix with definitions of more advanced mathematical terms. However, the logic of the model must be accessible to the reader. As Richard Feynman noted during his career, if people cannot understand your idea, people cannot use your idea, and it has no scientific value (Feynman, 2005). For example, if a modeler says some process or relationship is represented as a differential equation, they must define a differential equation and justify why they chose this representation.
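As a sketch of what such plain-language specification might look like, consider a hypothetical updating equation for self-efficacy (it is not drawn from any published model):

\frac{dSE}{dt} = \alpha \left( P(t) - SE(t) \right), \qquad 0 < \alpha \le 1

In plain language: a differential equation states how fast a quantity is changing at each instant as a function of the system's current state. Here, self-efficacy SE drifts toward current performance P, and the parameter \alpha sets how quickly the gap between them closes. The modeler would then justify why gradual calibration toward performance, rather than, say, an instantaneous jump, is the right representation of the process.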
The work of the evaluator is to ensure that all equations are specified, algorithmic choices are justified, and assumptions of the model are explicit. The evaluator must also ensure that the justifications are presented as simply as possible so that other scientists can use the ideas in the model.
Plausibility
To establish the plausibility of the model, the modeler must provide explanation and argumentation (Osborne & Patterson, 2011). When explaining the plausibility of the model, the modeler is answering how and why the constructs (and their measurement and values) in the model were chosen and how and why they causally relate to one another. Then the modeler must provide an argument about how they think their model and its components correspond to previous scientific results. Therefore, when the modeler is presenting the plausibility of their model, they are positioning the model among other scientific explanations and results. A model that is removed from other scientific explanations and results is implausible and thus not useful. This is not to say that a model or a component cannot contradict other scientific data, but explanation and argument must be provided for why there is that contradiction.
For example, Vancouver et al. (2014) expanded the computational model of multiple-goal pursuit developed by Vancouver et al. (2010) by incorporating work on learning and connectionist modeling. Specifically, in the updated model, they incorporated the delta-learning rule from connectionist models (Thomas & McClelland, 2008) to better account for the data and to account for new data. Therefore, incorporating learning into a model of multiple-goal pursuit more strongly placed the model in both motivational research and cognitive science. This integration with other theories and ideas that have faced scientific scrutiny legitimizes the model.
The work of the evaluator is to ensure that the why and how of the model and all its parts have been explained. The evaluator must determine if there are any flaws or gaps in the reasoning of the modeler. In addition, the evaluator must consider whether there are alternative explanations to the ones presented by the modeler. Thus, the evaluator must determine whether the model and its components are supported by what we know from the scientific data and whether there are findings or theories that contradict the model or its components. Then the evaluator must present the most plausible alternative explanation to the modeler for consideration in the model or as a competitor to the model. Just as the modeler must be specific, the evaluator must also be specific. The advantage of computational models is that our intuitions can be operationalized and simulated to reveal the consequences of our thinking. Therefore, a contradictory model or alternative explanation is not a rejection of a model but rather a new test for the model and modeler.
Simulation and Logic
The next step of evaluation is to ensure that (1) the model actually runs and (2) the results of the model are logical. Modelers need to be explicit about what program they are using to run their model. The simulation of a model is able to show the consequences of our thinking. Essentially, the results of a simulation are a check on our logical ability and our thinking about dynamics. There might be some impossible values in the data or results that a modeler is unable to justify. This is the beauty of computational models: we can have proof that our thinking is fallible (Kleinmuntz, 1990; Vancouver & Weinhardt, 2012). It could also mean that our coding ability is fallible, and running the model will show if we made coding errors. If the model runs and the results do not seem impossible, the modeler will have a momentary sense of relief and joy that they have a working model. Sadly for the modeler, there is more work to be done.
Modelers must perform sensitivity analysis on their model to justify its logical consistency (Davis et al., 2007; Sterman, 2000; Vancouver & Weinhardt, 2012). The modeler must present the results of the simulation under changes to assumptions, values, algorithmic choices, and relationships between constructs. The results of these tests will inform the numerical, behavioral, and policy sensitivity of the model (Sterman, 2000). The stance of fallible belief in the model is paramount during this stage. The modeler must work to question their assumptions, ranges of values, and algorithmic choices. As I stated previously, there are an infinite number of possible alternative models. Pragmatically, sensitivity analysis should not happen randomly; rather, plausible changes should be made to aspects of the model that the modeler is uncertain about and that are most influential (Sterman, 2000). These changes then become different experiments on the model (Davis et al., 2007). Now the modeler can test their initial model against other models that have different values, assumptions, relationships, or even new elements. Detailed guidance on how to perform sensitivity analysis is beyond the scope of this paper but is available elsewhere (Sterman, 2000; Vancouver & Weinhardt, 2012; Davis et al., 2007). The modeler must reconcile these sensitivity results with their initial proposed theoretical model. This could mean making changes to the model or acknowledging boundary conditions. It could also result in little change to the model, but then the modeler must do the work of justifying why the results of the sensitivity analysis do not motivate change to the model.
The work of the evaluator is to call into question the choices of the modeler regarding their sensitivity analysis. As humans, we are stricken with confirmation bias and overconfidence. Therefore, the role of an outside evaluator who did not construct the model is important because they must do the work of pushing the modeler’s thinking and willingness to test alternatives. The evaluator must determine whether the necessary experiments have been done and whether any changes, or lack thereof, have been justified.
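A minimal sketch of what a single-parameter sweep might look like, assuming a toy model in which performance approaches a goal at an uncertain learning rate; the model, parameter names, and ranges are all illustrative:

import numpy as np

def steps_to_goal(learning_rate, goal=1.0, threshold=0.95, max_steps=10_000):
    """Toy model: count time steps until performance reaches 95% of the goal."""
    performance, t = 0.0, 0
    while performance < threshold * goal and t < max_steps:
        performance += learning_rate * (goal - performance)
        t += 1
    return t

# Sweep the parameter the modeler is least certain about and watch a focal
# output. Smooth changes suggest mere numerical sensitivity; qualitative
# shifts flag assumptions that must be justified or stated as boundary
# conditions.
for lr in np.linspace(0.01, 0.50, 6):
    print(f"learning_rate={lr:.3f} -> steps to goal: {steps_to_goal(lr)}")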
A Note on Transparency
Computational models and their benefits fit very well into the Open Science movement. I propose that all papers on computational models should have an appendix that fully specifies the model and an online supplement that contains the full computational model. This is in line with calls from other modelers (Rahmandad & Sterman, 2012). To fully evaluate the model, evaluators need access to the model being proposed. Then the evaluator can perform their own sensitivity analysis and see if there are other plausible alternatives. I have reviewed computational models that sound very interesting and have possibly interesting results, but the model is not presented. In those cases, I tell the editor I cannot make a final decision on the paper because I am not able to fully evaluate or represent the model. This would be akin to publishing an empirical paper with no methods and analysis section. Just as our theorizing is limited without the aid of computational models, our evaluation is limited if we do not have access to the model.
Conclusion: Logical Consistency
Although this might seem onerous for both the modeler and the evaluator, it is necessary so the audience knows what the modeler is representing and proposing. The goal is to have as little asymmetric information as possible between the modeler and the audience (including the evaluator). To create understanding between the modeler and the audience, the modeler must map the ideas in their head into the language of the computational model and a supporting verbal theory that can be understood by someone else. This can only be done if the modeler is explicit in establishing the logical consistency of their model in their papers. If the modeler fails to do this, they have private information, and there cannot be a shared understanding between the modeler and the audience (Wittgenstein, 1953/2009). Therefore, the modeler must put an immense amount of effort into presenting their ideas with transparency, clarity, and justification. The evaluator must scrutinize the work of the modeler, continually asking: Are the constructs defined? Were the values made explicit? Are there alternative models? Is the model plausible? Is there anything missing? Will the results change? At this stage, what we want to ensure is that we know what the model is proposing, whether it is plausible, whether the model runs, and whether those propositions stand up to variation in assumptions, values, algorithmic changes, etc. Next, we move to examining crucial predictions.
Model Evaluation Framework: Crucial Predictions
The logical consistency and precision of models are benefits unto themselves, but they also lead to improved predictions (Lewandowsky & Farrell, 2010).
Integrating different lines of work, I propose that the evaluation of models should consider whether the models make crucial predictions. I propose that crucial predictions are falsifiable, risky, and competitive. It is easy to find support for a theory, as Meehl (1978, 1990) discussed in relation to our field, where most constructs are going to be related to one another. Therefore, saying that some construct is related to something else is obvious and will almost certainly be corroborated. Moreover, there is the even more pernicious problem of induction, which prevents us from generalizing inferences from our observations. This is why corroboration of a theory does not mean that the theory is true (Popper, 1959/2002). As I said at the beginning, we are not looking for truth with models; we are searching for the most justifiable and useful model. Corroboration of a model can only be informative regarding the justification and usefulness of the model if the predictions of the model are crucial. Work in the philosophy of science proposes that these elements of crucial predictions are what we should be pursuing in our practice regardless of whether it is a computational model or a verbal theory. Whether our current verbal theories have these elements is not relevant for the current paper. However, I contend that models that make crucial predictions will be more justifiable and useful, and this needs to be considered by evaluators. As with the previous section, I will be explicit about what evaluators need to do and what modelers should provide.
Falsifiable Predictions
For Popper (1959/2002), the criterion of demarcation between science and non-science is falsifiability, meaning that it is possible for a statement to be refuted by some observation. If we want our models to be in the realm of scientific knowledge, the predictions of our models must be able to be proven false. According to Popper, for a prediction to be scientific, it does not need to have been tested before, but it must be capable of being tested and proven false. Falsifiability of the universal theory might not be possible, but the predictions of lesser universality need to be falsifiable, and as universality decreases, falsifiability should increase (Popper). Therefore, for a model to be scientific, the modeler must develop a model that can be proven wrong by empirical observation. The falsifiability of the predictions is influenced by logical consistency. For a prediction to be falsified, it must be formulated as “sharply as possible” by the theoretician so that the experimenter can decisively test that prediction rather than something else (Popper). Therefore, the modeler needs to precisely state the predictions of the model and provide information about what result would be considered a falsification of each prediction. If the modeler is not clear about what constitutes falsification, the justification of the prediction is weakened because the model always has an escape clause that the prediction was misunderstood or was really proposing something else. Therefore, falsifiability begets testability.
The modeler must establish that their model is able to be tested. However, I am not saying that a modeling paper must have empirical support in the same paper; it just has to have the possibility of being tested, and information must be provided about how that can happen. What I am proposing is that if a model is not testable, it is not scientifically justifiable or useful (Popper, 1959/2002). At the very least, if the model cannot currently be tested, the modeler must specify what test could be done in the future.
The work of the evaluator is to determine whether the predictions of the model are falsifiable. Moreover, the evaluator must determine that the universal theory is made up of falsifiable predictions. The evaluator must also determine that the criteria for falsifiability of the predictions have been specified and are justifiable. Finally, they must determine that the model is testable in its entirety or, at the very least, identify which elements of the model are testable. Then they need to push the modeler to justify why untestable elements are in the model and to show how their exclusion or inclusion influences the model.
Risky Predictions
Corroboration of a model’s predictions is only informative if those predictions are risky. Integrating work from a number of authors (Mayo, 1991; Meehl, 1967; Popper, 1959/2002; Yarkoni, 2019), a prediction is considered risky if it is precise, can be proven false, is specific only to the model/theory, and is incongruent with other models/theories. Therefore, a model that is able to account for any and all data is not scientifically useful. When a model’s risky prediction is corroborated, we have a more useful and justified model. The likelihood that risky predictions will not be corroborated is what makes them so powerful: when these risky predictions are corroborated, we know the model is useful because it does something that no other model can do. This type of precision is often not possible with verbal theorizing, but with computational models, we are able to make these types of risky predictions. Therefore, the modeler must establish that the predictions from their model are able to be proven wrong, are specific to the model, and are not predicted by other models. The modeler must also specify what would be considered a failure to corroborate the predictions.
We might not yet be ready for point-estimate predictions in our field, but with computational models, we are ready to move beyond directional predictions. Specific risky predictions could be within some pre-specified range of scores; it could be predicting a certain type of behavior in response to some event that other theories do not predict; it could be predicting some behavior occurring within a certain time range; etc.
Therefore, if a researcher creates a computational model of racial bias in employee selection, a risky prediction from the model would be that X attitude/behavior/policy in certain types of organizations leads to a 20–30% decrease in hiring rates for African Americans. Then they would go on to describe how their model predicts this effect, whereas other models do not predict this effect or do not predict this size of effect.
The work of the evaluator is to determine whether the predictions from the model are indeed risky. The evaluator needs to determine whether there are any other theories that make similar predictions. The evaluator needs to push the modeler to justify how their model is unique and makes predictions that other theories/models do not make. The evaluator must also make sure that there is some specificity in the predictions of the model. Finally, the evaluator must ensure that the modeler has specified what would not be considered corroboration.
Competitive Predictions
Falsifiability and risky predictions lead to the final element of a crucial prediction, which is that it is competitive. What I mean by competitive is that the model’s predictions are in competition with some other model/theory. In our field, we have a plethora of theories and empirical findings, but what is often missing in our papers is competition between theories. The practice of science is not about finding a theory in isolation that accounts for some phenomenon (Gigerenzer & Brighton, 2009). The practice of science is about finding a theory that accounts for some phenomenon better than other current theories, or a theory that is able to account for the phenomenon when no other theory has been able to (Popper, 1959/2002).
As proponents of computational models in our field have claimed, a wonderful advantage of computational models is that they can be used to integrate and/or more easily competitively test theories (Adner et al., 2009; Weinhardt & Vancouver, 2012). For example, motivation researchers have been quick to adopt this practice with success (e.g., Vancouver & Purl, 2017; Ballard et al., 2016). This work is similar to computational modeling and competitive theory testing in cognitive and decision-making psychology, where models are tested against one another (Lewandowsky & Farrell, 2010).
To have a well-justified and useful model, the modeler needs to compete their model against other models fairly (Cooper & Richardson, 1986). This is going to be a pressing issue for our field because we do not have many of our theories operationalized as computational models. At the very least, the modeler should follow the logical consistency component of the MEF. This will ensure that the audience knows exactly what models are being competed against one another. Testing two models that make opposite or different predictions can reconcile disagreement in our science and advance future theoretical development.
Competing models that make different predictions also advance empirical research because empirical researchers can now devise more precise tests, based on the results of a simulation, of where the models differ in their simulated results. At the end of the chapter, when all the elements of the MEF have been specified, I will discuss comparing models against one another.
Competing predictions and models will likely be the most difficult element of the MEF. As our field is in its infancy regarding computational modeling, there will not be numerous ready-made computational models up for competition. As I said, I am proposing an ideal evaluation of models. The competitive aspect might be too aspirational for right now. However, it is an aspiration we should strive toward because it will advance our science by showing us which models have the most utility, not just which models account for data. Growth and progress in our science will occur by replacing less useful and justified models with models that are more useful and justified (Meehl, 1978; Popper, 1959/2002).
The work of the evaluator is to evaluate whether there are other plausible models or theories that could account for the phenomenon under investigation. The evaluator needs to push the modeler to at least attempt to incorporate the element of the competing theory/model that differs from their current model. Then the evaluator needs to ensure that the modeler tests the model with and without this element. The evaluator needs to determine that any models being competed are competed fairly and are operationalized fairly if they have yet to be developed into computational models. If a model cannot be formally competed, the evaluator needs to determine whether the current computational model advances knowledge above what is currently known from theory or empirical observation.
Extending Competitive Prediction
One reason why these motivation researchers might have embraced theory competition is that they are dealing with theories, studies, and findings that are structurally similar to those cognitive psychology researchers deal with. However, the majority of researchers in organizational studies are not dealing with problems similar to motivation researchers studying laboratory decisions. This does not mean that model testing is not possible for other sub-disciplines of organizational studies. It is possible, it is necessary, and it will have advantages.
For example, we know that transformational leadership has positive effects on organizations and followers (Barling, 2014). However, leadership research does not tell us how important transformational leadership is to follower performance in competition with other things we know are important to follower performance but are not leadership. To do that, we would need to compete a model of transformational leadership against something else that relates to employee job performance, such as work design. This is a great strength of computational models and competitive predictions.
Now a researcher could develop a model of work design’s effect on employee job performance and a model of transformational leadership’s effect on employee job performance and compete them. This would provide us with testable predictions for these theories and a better understanding of employee job performance. Our science has a unique opportunity to expand computational modeling past the boundaries often seen in other disciplines of psychology. We can use computational models to compete models of organizational justice, but we can also compete a theory of organizational justice against, say, a theory of employee engagement to predict organizational commitment. We will not only advance our science but will also be an example to other disciplines of how to use computational models differently.
Conclusion: Crucial Predictions
Examining work from the philosophy of science (Mayo, 1991; Meehl, 1967; Popper, 1959/2002; Yarkoni, 2019), I have claimed that crucial predictions are falsifiable, risky, and competitive. These three elements of crucial predictions are integral to creating a well-justified and useful model. The reason a model with crucial predictions is more useful and justifiable is that it could be proven wrong by data and predicts things that no other theory can. As with logical consistency, this is a lot of work for both the modeler and the evaluator. As is common in philosophy, my comments here are prescriptive and present an ideal standard of what a model should be. It will be up to the evaluator and the audience to determine how close any model is to this ideal. Now that I have covered logical consistency and crucial predictions, I will move to the issue of generalizability.
Model Evaluation Framework: Generalizability
As the saying goes: last but not least, we have come to generalizability. Generalizability is the ability of a model to account for all other data generated by the same processes beyond the current data (Pitt & Myung, 2002). Generalizability is not something a modeler should think about only at the end of the model creation process; it needs to stay at the forefront of the modeler’s mind throughout. In fact, generalizability needs to be considered when deciding what type of model is being developed. When creating a model, we have two goals in mind: we want to explain behavior, and we want to predict behavior. We might think that we can easily do both, but as I will discuss, this is likely not possible. The issue of generalizability reveals that there is a tenuous and fraught relationship between explanation and prediction. Often in papers, we say we predict some behavior or relationship will occur. However, when we go to run our statistical test, we examine the variance explained, which is not equivalent to predicting behavior.
In fact, the overwhelming majority of research in our field is about explaining behavior, not predicting behavior. Indeed, the model that best explains the data is not necessarily the model that best predicts future behavior (Hagerty & Srinivasan, 1991; Myung, 2000; Shmueli, 2010; Yarkoni & Westfall, 2017).
To put it simply, a more complex model, which often means one with more parameters, is likely to be worse for prediction than a simpler model. Complex models are less able to generalize to new data than simpler models. The reason is overfitting. Overfitting occurs because parameters estimated from a sample are selected to minimize the sum of squared error in the sample they are collected from, and the relationships between variables are influenced by sampling and measurement error, which are not shared with other samples (Yarkoni & Westfall, 2017). Therefore, the fitted model will capture sample-specific noise along with signal. Related to the issues of overfitting and prediction versus explanation is the bias-variance trade-off. Total error is the sum of bias and variance, and in how we conduct our research we often focus on minimizing bias. However, this means that the variance of the model is larger, which severely limits the ability of the model to predict new behavior in other samples. It is very likely that if a model is created to explain some phenomenon, it will not do well regarding prediction. This is why the issue of generalizability needs to be considered at the start of the model creation process, and why goodness-of-fit statistics can be misleading and are not, in isolation, indicative of model quality.
It should be noted that this is not an issue just for computational models. All our statistical models are beholden to these issues. Therefore, when we have a complex statistical model and say it accounts for 60% of the variance in the data, we are not predicting, and, worse, the model is unlikely to generalize successfully to a new sample.
The modeler needs to be clear about the goal of the model. If the goal is to predict some behavior, then the modeler needs to ensure they have the simplest model possible. This might mean that the model is deficient in psychological plausibility, but that is not as relevant because the goal is to predict new behavior. To do this, the modeler needs to conduct cross-validation (Browne, 2000). Cross-validation is when a model is trained on one dataset and then tested on a new, independent dataset. This means that the parameters are estimated from one dataset, and those estimates are used to predict in the new dataset. Researchers could split their data in half, train the model on one half of the data, and then use the other half for prediction. A model that does better at cross-validation is often more generalizable, which means that it is a more useful and justifiable model for predicting behavior.
If the goal is to explain some phenomenon, this does not mean that the modeler has carte blanche to use unlimited complexity. Simpler models are preferred because they are more falsifiable in addition to being more generalizable (Popper, 1959/2002).
A complex explanatory model might do well explaining the phenomenon under investigation, but because of its complexity it might also be able to explain many other phenomena that it was never intended to account for, which makes it unfalsifiable. All the aforementioned issues apply to explanatory models as well. Moreover, the end goal would eventually be to apply that explanatory model to predict the phenomena under investigation. Therefore, simpler models again are preferred. The modeler needs to develop the simplest model possible and show that it is able to generalize by conducting cross-validation.
The work of the evaluator is to examine the complexity of the model. The evaluator needs to scrutinize all parameters in the model to determine whether they are truly necessary. The evaluator must ensure that cross-validation has been performed to show that the model is able to predict new behavior. The evaluator needs to ensure that the modeler has tested their proposed model against simpler models and evaluate the justification if a more complex model is chosen.
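A minimal sketch of the split-half cross-validation just described, assuming a toy model with a single slope parameter fit by least squares; the data, model, and variable names are all illustrative:

import numpy as np

rng = np.random.default_rng(1)

# Simulated "observed" data: y depends on x plus noise.
x = rng.uniform(0, 1, 200)
y = 2.0 * x + rng.normal(0, 0.5, 200)

# Split the data in half: estimate the parameter on the training half,
# then evaluate predictive error on the untouched half.
train, test = slice(0, 100), slice(100, 200)
slope = np.sum(x[train] * y[train]) / np.sum(x[train] ** 2)  # least squares through the origin

mse = np.mean((y[test] - slope * x[test]) ** 2)
print(f"estimated slope: {slope:.2f}, held-out MSE: {mse:.3f}")

A more complex competitor that fits the training half better but posts a worse held-out error is overfitting, which is exactly the comparison the evaluator should ask the modeler to run.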
Conclusion: Generalizability
Regarding generalizability, simpler models are often going to be better. In psychology, where many factors seem relevant and relate to one another, devising a simple, generalizable model is not an easy task, but with the work of the modeler and the evaluator it is possible. However, this does not mean that the simpler model must always be chosen. As I have said throughout this chapter, model evaluation is a holistic and complex process. Generalizability is very important, but it is just one element of how we should evaluate computational models. All the other elements of the MEF need to be considered when evaluating a model.
Final Thoughts
At the beginning of this chapter, I asked, “How should we evaluate computational models?” I offered the MEF as a way to evaluate computational models that is not itself computational. This might seem strange, but model evaluation and model choice are subjective processes (Pitt & Myung, 2002). Ballard et al. (this volume) do a great job covering the topic of parameter estimation and model fit by offering a quantitative guide to how models can be compared against one another. Yes, there are goodness-of-fit statistics and measures of generalizability, but they are only additional tools to aid model evaluation, not definitive judgments. No single test or quantitative metric can tell us that a certain model is good or bad or that one model is better than another. The evaluation of models is subjective and effortful. However, the MEF provides some structure for how these evaluations can be made. Through the shared work of the modeler and the evaluator, a model will be created that is useful and justifiable.
When the time comes and there is a new competitor in town, we can use the MEF to see if the incumbent model should be replaced.
My final proposal is that the majority of what I have said in this chapter should be applied to verbal theories. Those who create verbal theories should clearly define the variables and processes in their theories, should justify how their theories fit with other research, and should offer risky predictions. If those creating verbal theories adopted the behaviors I believe are required of modelers, and those evaluating verbal theories adopted the behaviors I believe are required of evaluators, we would have more useful and justifiable verbal theorizing. The strengths of computational modeling can and should extend to other types of scholarly work.
Note
1. Mayo (1991) uses the term “severe test” rather than “risky” to mean the same type of predictions discussed here.
References

Adner, R., Pólos, L., Ryall, M., & Sorenson, O. (2009). The case for formal theory. Academy of Management Review, 34(2), 201–208.
Ballard, T., Yeo, G., Loft, S., Vancouver, J. B., & Neal, A. (2016). An integrative formal model of motivation and decision making: The MGPM*. Journal of Applied Psychology, 101(9), 1240–1265.
Barling, J. (2014). The science of leadership: Lessons from research for organizational leaders. Oxford University Press.
Browne, M. W. (2000). Cross-validation methods. Journal of Mathematical Psychology, 44(1), 108–132.
Cooper, W. H., & Richardson, A. J. (1986). Unfair comparisons. Journal of Applied Psychology, 71(2), 179.
Cronin, M. A., Gonzalez, C., & Sterman, J. D. (2009). Why don't well-educated adults understand accumulation? A challenge to researchers, educators, and citizens. Organizational Behavior and Human Decision Processes, 108(1), 116–130.
Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. (2007). Developing theory through simulation methods. Academy of Management Review, 32(2), 480–499.
Descartes, R. (1999). Discourse on method and meditations on first philosophy. Hackett Publishing. (Original work published 1637)
Dewey, J. (1930). The quest for certainty: A study of the relation of knowledge and action. The Journal of Philosophy, 27(1), 14–25.
Feynman, R. P. (2005). The pleasure of finding things out: The best short works of Richard P. Feynman. Helix Books.
Gigerenzer, G. (2004). Mindless statistics. The Journal of Socio-Economics, 33(5), 587–606.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143.
Hagerty, M. R., & Srinivasan, V. (1991). Comparing the predictive powers of alternative multiple regression models. Psychometrika, 56(1), 77–85.
Heidegger, M. (1996). Being and time (J. Stambaugh, Trans.). SCM Press. (Original work published 1927)
James, W. (1975). Pragmatism (Vol. 1). Harvard University Press. (Original work published 1907)
Kleinmuntz, B. (1990). Why we still use our heads instead of formulas: Toward an integrative approach. Psychological Bulletin, 107(3), 296–310.
Kuhn, T. S. (2012). The structure of scientific revolutions. University of Chicago Press. (Original work published 1962)
Lewandowsky, S., & Farrell, S. (2010). Computational modeling in cognition: Principles and practice. SAGE.
Mayo, D. G. (1991). Novel evidence and severe tests. Philosophy of Science, 58(4), 523–552.
Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34(2), 103–115.
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806.
Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry, 1(2), 108–141.
Myung, I. J. (2000). The importance of complexity in model selection. Journal of Mathematical Psychology, 44(1), 190–204.
Myung, I. J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47(1), 90–100.
Osborne, J. F., & Patterson, A. (2011). Scientific argument and explanation: A necessary distinction? Science Education, 95(4), 627–638.
Peirce, C. S. (1868). Some consequences of four incapacities. The Journal of Speculative Philosophy, 2(3), 140–157.
Pitt, M. A., & Myung, I. J. (2002). When a good fit can be bad. Trends in Cognitive Sciences, 6(10), 421–425.
Popper, K. (2002). The logic of scientific discovery. Routledge. (Original work published 1959)
Popper, K. (2014). Conjectures and refutations: The growth of scientific knowledge. Routledge. (Original work published 1963)
Rahmandad, H., & Sterman, J. D. (2012). Reporting guidelines for simulation-based research in social sciences. System Dynamics Review, 28(4), 396–411.
Shmueli, G. (2010). To explain or to predict? Statistical Science, 25(3), 289–310.
Sterman, J. D. (2000). Business dynamics: Systems thinking and modeling for a complex world. McGraw-Hill.
Sterman, J. D. (2002). All models are wrong: Reflections on becoming a systems scientist. System Dynamics Review, 18(4), 501–531.
Thomas, M. S. C., & McClelland, J. L. (2008). Connectionist models of cognition. In R. Sun (Ed.), The Cambridge handbook of computational psychology (pp. 23–58). Cambridge University Press.
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15(4), 602–623.
Vancouver, J. B., Weinhardt, J. M., & Schmidt, A. M. (2010). A formal, computational theory of multiple-goal pursuit: Integrating goal-choice and goal-striving processes. Journal of Applied Psychology, 95(6), 985–1008.
Vancouver, J. B., Weinhardt, J. M., & Vigo, R. (2014). Change one can believe in: Adding learning to computational models of self-regulation. Organizational Behavior and Human Decision Processes, 124(1), 56–74.
Weinhardt, J. M., Hendijani, R., Harman, J. L., Steel, P., & Gonzalez, C. (2015). How analytic reasoning style and global thinking relate to understanding stocks and flows. Journal of Operations Management, 39, 23–30.
Weinhardt, J. M., & Vancouver, J. B. (2012). Computational models and organizational psychology: Opportunities abound. Organizational Psychology Review, 2(4), 267–292.
Wittgenstein, L. (2009). Philosophical investigations. John Wiley & Sons. (Original work published 1953)
Yarkoni, T. (2022). The generalizability crisis. Behavioral and Brain Sciences, 45, e1: 1–78.
Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122.
10
FITTING COMPUTATIONAL MODELS TO DATA
A Tutorial

Timothy Ballard, Hector Palada, and Andrew Neal
One of the goals of industrial-organizational (I-O) psychology is to understand the laws or principles that govern behavioral and organizational phenomena. What makes this process challenging, however, is that these laws and principles are not directly observable (Farrell & Lewandowsky, 2018; Heathcote, Brown, & Wagenmakers, 2015; Myung, 2003). For example, the concept of skill acquisition is often used to explain why task performance improves with practice (Kanfer & Ackerman, 1989). Yet the processes underlying skill acquisition are not observable. It is impossible to directly quantify the rate at which skill is acquired or the level of skill a person has at a given point in time. These things must be inferred based on patterns of behavior or performance.

In order to develop and test explanations for how unobserved processes operate, I-O psychologists rely on models. Although conceptual (e.g., verbal or pictorial) models have traditionally dominated the literature, computational models are becoming more widely used. A computational model is a representation of a theory that is expressed mathematically or using formal logic (see Ballard, Palada, Griffin, & Neal, 2019). Computational models produce precise, quantitative predictions regarding the pattern of behavior that should be observed, given the assumptions of the theory. This ability to generate such precise predictions strengthens the link between the theory and the hypotheses implied by the theory, which ultimately allows for a more direct test of the theory (Oberauer & Lewandowsky, 2019).

To illustrate the importance of precision for theory testing, consider the following example inspired by Meehl (1978). Suppose three meteorologists with competing theories of atmospheric dynamics put their theories to the test by attempting to predict next month's rainfall in Australia. The first theory predicts
that it will, indeed, rain at some point next month in Australia, and this prediction turns out to be supported. The second theory predicts that it will rain more next month than it did this month, and this too is eventually supported. The third theory predicts that next month Australia will receive an average rainfall of between 100 and 120 mm, which also ends up being correct. Which theory do you think is the most credible?

This example illustrates that theories should not be evaluated simply on the basis of whether or not their predictions are supported. To do so would lead to the erroneous conclusion that all three of the theories discussed earlier are equally supported by the evidence. The researcher also needs to consider to what extent support for a theory's predictions constitutes evidence for the theory itself. The first two theories in the previous example make predictions that are quite broad, and there are likely many different theoretical explanations that would lead to the same predictions. Thus, support for these predictions can only ever provide relatively weak evidence for the theory. The third prediction, however, is more precise and is less likely to happen by chance or overlap with the predictions of another theory. This precision, therefore, means that we should be more persuaded of the theory's credibility when its predictions are supported.

The precise nature of computational models' predictions makes theories easier to test. This precision facilitates the comparison of competing explanations by clarifying the differences in their predictions. It also makes it easier to identify explanations that fail to account for observable phenomena. Ultimately, this precision allows the researcher to better understand the implications of the data for the theory.

The purpose of this chapter is to demonstrate how to use empirical data to test a computational model's predictions. Previous chapters in this volume have addressed the issue of how to develop computational models and generate predictions. Here, we focus on the issue of testing models against data. In the same way as in a structural equation modeling analysis, computational models need to be fit to the data in order to be tested. That is, one usually needs to estimate certain model parameters from data and quantify the alignment between the data and the model predictions. However, unlike structural equation modeling, computational models usually cannot be implemented in statistical packages such as Mplus, SAS, or Stata. Thus, fitting computational models to data often requires a few extra steps beyond coding the model itself. In the next section, we describe the model fitting process and further discuss its importance for theory testing. The rest of the chapter provides a hands-on tutorial, which demonstrates the model fitting process using a computational model of multiple-goal pursuit. The tutorial concludes with an example results section, which demonstrates how the results from the analysis we present in our tutorial may be written up for publication.
What Is Model Fitting and Why Is It Important?
Fitting a model to data refers to the process of adjusting model parameter values in such a way as to maximize the correspondence between the data and the predictions of the model. Farrell and Lewandowsky (2018) describe this process by comparing the model parameters to tuning knobs on an analog radio. As the listener turns the knob, the radio picks up different stations. In the same way, changing the values of model parameters alters the predictions the model makes. Consider a simple example of a univariate regression model, which has two parameters: an intercept and a slope. Each parameter has a unique effect on the model predictions. Increasing the value of the slope parameter increases the strength of the predicted relationship between the predictor and outcome variable. Increasing the intercept value increases the overall predicted level of the outcome variable.
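To make the tuning-knob metaphor concrete, the following minimal R sketch (our illustration with hypothetical values, not code from this chapter) shows how turning each "knob" of a univariate regression model changes its predictions:

# Hypothetical illustration: the same model makes different predictions
# as its parameter "knobs" are turned.
x <- 1:5
predict_y <- function(intercept, slope, x) intercept + slope * x

predict_y(0, 1, x)  # baseline predictions: 1 2 3 4 5
predict_y(0, 2, x)  # larger slope: stronger predicted relationship
predict_y(3, 1, x)  # larger intercept: higher overall predicted level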
To illustrate why the model fitting process is important for testing theory, consider the more familiar case of structural equation modeling. As with analysis based on computational models, structural equation modeling usually involves testing a series of alternative models in order to build support for a set of theoretical assumptions by ruling out alternative explanations. However, suppose a researcher decided to skip the model fitting step in such an analysis. That is, they simply chose parameter values that they believed to be reasonable and evaluated model-data alignment, for example, via measures such as the comparative fit index (CFI), Tucker-Lewis index (TLI), or the root mean squared error of approximation (RMSEA) based on output of the model under those parameter values. Under this approach, how confident would you be in the conclusion that Model 1 provides a better account of the data than Model 2?

The issue with this approach is that lack of model-data correspondence is ambiguous. Suppose Model 2 is found to be a poor approximation of the data (e.g., the CFI and TLI have relatively low values and the RMSEA is relatively high). This could suggest that Model 2 provides a poor explanation of the process being investigated and should be ruled out in favor of Model 1. However, the poor model-data correspondence under Model 2 could simply be the result of the researcher's choice of parameter values. It may be that there is another set of parameters under which Model 2 provides a better approximation of the data than Model 1.

Fitting the model to the data allows the researcher to identify the parameters that produce the output that most closely aligns with the data. This gives the model the best possible chance of accounting for a set of empirical observations. When this is the case, lack of model-data correspondence is less ambiguous. If the researcher knows for certain that there is no combination of parameter values under which the model more accurately characterizes the data, they can conduct a fair assessment of the model's adequacy. For example, Ballard, Vancouver, and Neal (2018) fit several versions of a computational model of multiple-goal pursuit to data from an experiment in which participants pursued two goals with different deadlines. They demonstrated that an earlier version of the model could not account for the tendency to prioritize the goal with the shorter deadline. They interpreted this finding as evidence against this version of the model. This conclusion would not have been possible had the models not been fit to the data, because failure of the model to accurately characterize the data would be ambiguous. Such a failure could be due to the inadequacy of the model itself or to the inadequacy of the parameter values used to generate the model predictions.

Model fitting also facilitates the comparison of alternative models. Modelers will typically evaluate the evidence for a hypothesized model by comparing it to a set of theoretically plausible alternative models (Heathcote et al., 2015). The evidence in favor of a model is much stronger when the researcher demonstrates that a hypothesized model not only adequately characterizes the data but also does a better job of accounting for the data than competing models. The challenge, however, is that it can be difficult to determine what is meant by "a better job of accounting for the data." Typically, the researcher will wish to establish that the model adheres to the principle of parsimony, that is, that the model provides the simplest possible explanation that accounts for the pattern of observed behavior. Ideally, this involves demonstrating that simplifying the model further by removing certain assumptions undermines the model's ability to capture essential features of the data, as well as demonstrating that an increase in the complexity of the model provides little to no increase in the alignment between model and data. Model fitting allows the researcher to quantify both the correspondence between the model and the data and the complexity of the model, which together help the researcher assess the model's parsimony. For example, the AIC (Akaike, 1973) and BIC (Schwarz, 1978) are commonly used metrics for quantifying the ability of the model to capture the data that account for both model-data correspondence and model complexity (as measured by the number of estimated parameters). Such indices help the researcher to select the model that provides the best trade-off between model-data correspondence and complexity, provided that model is theoretically plausible.

Model fitting is also important because it allows parameters to be estimated from the data. This allows the researcher to obtain useful information about the process under investigation. In the case of regression analysis, for example, the researcher would interpret the estimate of the slope parameter to make inferences about the relationship between the predictor and outcome variable. In the same way, estimates of computational model parameters can be used to make inferences about latent components of the process that the model purportedly represents. For example, Zhou, Wang, and Zhang's (2019) computational model of leadership goal striving contained a parameter that reflected the extent to
which the leader prioritizes working on their own tasks as opposed to acting to support their subordinates. After fitting their model to experimental data, they found the estimated value of this parameter to be relatively low, indicating that leaders in their study tended to prioritize their subordinates' tasks.

Parameter estimates are often interpreted by comparing them across individuals or experimental conditions. For example, Vancouver, Weinhardt, and Schmidt (2010) found individual differences in the estimates of a parameter that represents sensitivity to time and deadlines. Between-person variation in this parameter accounted for individual differences in resource allocation patterns over time. This type of comparison is analogous to examining the bivariate correlation between a pair of variables, with the main difference being that one of the variables is estimated by the model rather than directly observed. Ballard, Yeo, Loft, Vancouver, and Neal (2016) found that a similar parameter in their model was affected by goal type and concluded that people were less sensitive to deadlines when pursuing avoidance goals compared with approach goals. This type of comparison involves a comparison of means (which conventionally might be conducted using a t-test or ANOVA), though the inferences are based on estimated variables rather than observed ones.

In the next section, we begin our tutorial by describing the model that we use as an example to illustrate the steps required to fit a model to data. The model we use is the multiple-goal pursuit model (MGPM; Ballard, Vancouver et al., 2018; Ballard, Yeo, Loft et al., 2016; Vancouver, Weinhardt et al., 2010; Vancouver, Weinhardt, & Vigo, 2014), which attempts to explain how people make prioritization decisions when managing competing goals. In the section that follows, we show how to translate the model from a set of equations to computer code. In subsequent sections, we demonstrate how to use this code to fit the model to empirical data, and how the results from this analysis can be written up for publication. We conclude by discussing how this approach can be extended to develop and test more complex models.

The approach we demonstrate uses maximum likelihood methods for parameter estimation, which is a frequentist approach to model fitting. Although parameter estimation can also be done using the Bayesian framework (e.g., Ballard, Palada et al., 2019; Ballard, Vancouver et al., 2018), we decided to use maximum likelihood methods in this tutorial for two reasons. First, we wanted to make this tutorial as accessible as possible. We expect that most readers will be more familiar with frequentist methods than Bayesian ones and did not want these readers to have to learn Bayesian methods in order to make use of the content on model fitting. Second, many of the principles of maximum likelihood parameter estimation are directly relevant to Bayesian parameter estimation. This tutorial should therefore provide a useful foundation for readers who eventually wish to use Bayesian methods. We elaborate on some of the useful features of the Bayesian approach in the "Extensions" section.
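As a concrete footnote to the parsimony indices mentioned earlier, the AIC and BIC can be computed directly from a model's minimized negative log-likelihood. The following minimal R sketch is our own illustration with hypothetical values, not code from this chapter:

# Hypothetical values: minimized negative log-likelihood, number of free
# parameters, and number of observations.
neg_log_lik <- 1234.5
k <- 3        # free parameters
n <- 40925    # observations
AIC <- 2 * neg_log_lik + 2 * k       # equivalent to 2k minus twice the log-likelihood
BIC <- 2 * neg_log_lik + k * log(n)  # penalizes complexity more heavily as n grows

Lower values of either index indicate a better trade-off between model-data correspondence and complexity.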
The Multiple-Goal Pursuit Model
Multiple-goal pursuit is the process by which people manage competing demands on their time and other resources as they strive to achieve desired outcomes and to avoid undesired outcomes (Ballard, Yeo, Neal, & Farrell, 2016; Neal, Ballard, & Vancouver, 2017; Schmidt & DeShon, 2007; Schmidt & Dolis, 2009; Schmidt, Dolis, & Tolli, 2009). Multiple-goal pursuit is challenging because allocating time and effort toward one goal will often undermine our capacity to make progress on other goals. As a result, we must make decisions about which goal to attend to at a given point in time and repeatedly reevaluate these decisions as priorities change.

The multiple-goal pursuit model (MGPM; Ballard, Vancouver et al., 2018; Ballard, Yeo, Loft et al., 2016; Vancouver, Weinhardt et al., 2010; Vancouver et al., 2014) is a computational model that has resulted from over a decade of research into the process by which people make prioritization decisions. The core assumption of the MGPM is that people make prioritization decisions by weighing up the need to act on the goal, which Vancouver, Weinhardt et al. (2010) refer to as valence, and the perceived likelihood of achieving the goal, which is referred to as expectancy. Valence and expectancy combine to form a perception regarding the expected utility of acting on the goal, which represents the degree to which the individual is motivated to prioritize the goal at that point in time.

As the multiple-goal pursuit model is a computational model, it is articulated in the form of mathematical equations. These equations describe the relationships between theoretical constructs proposed by the model and ultimately the pattern of observed behavior that should emerge if the model's assumptions are correct (the model's predictions). There are two types of constructs that are operationalized in the equations. The first type are variables that are directly observable (e.g., the time available before a deadline). The values of these variables can usually be ascertained based on the environment and are akin to independent variables in a regression equation. The second type are parameters. Parameters usually reflect constructs that are not directly observable and therefore must be inferred based on patterns of observable behavior. For example, it would be difficult to directly measure the extent to which people are sensitive to differences between the time available and the time required to reach the goal (referred to as time sensitivity). However, we can make inferences about this construct by examining the prioritization decisions that people make when faced with different deadlines. Parameters are referred to as free when their values are not known beforehand but rather estimated from the data (akin to regression coefficients). Parameters are referred to as fixed when their values are defined in advance.

Here we briefly describe the equations that form the model, which we translate to computer code in the next section. At the time of writing, the version of
the MGPM introduced by Ballard, Vancouver et al. (2018) represents the latest and most general version of the model. We therefore focus on this version of the model in the tutorial. The valence of acting on goal $i$ at time $t$ (denoted $V_i(t)$) can be defined formally as follows¹:

$$V_i(t) = k_i \times \frac{TR_i(t)}{TA_i(t)}, \tag{1}$$
where $TR_i(t)$ represents the time required to reach the goal and $TA_i(t)$ represents the time available before the deadline. The time required is the product of the distance to goal $i$ at time $t$ (denoted $d_i(t)$) and one's belief regarding the time needed to reduce the distance to goal $i$ by a single unit (referred to as the expected lag). The time available and distance to the goal are generally assumed to be observable variables. $k_i$ is a gain parameter that reflects the importance of the goal. In most applications of the MGPM, $k_i$ has been treated as a fixed parameter with its value set to one. However, $k_i$ may be useful as a free parameter in settings where one wishes to examine the extent to which goals differ in importance. The expectancy of achieving goal $i$ at time $t$ (denoted $E_i(t)$) can be defined as follows:

$$E_i(t) = \frac{1}{1 + \exp\left[-\gamma \times \left(TA_i(t) - TR_i(t)\right)\right]} \tag{2}$$
According to Equation 2, expectancy is a function of the difference between the time available and the perceived time required to reach the goal. If the time available and time required are equal, expectancy will be 0.5. Expectancy will be greater than 0.5 when the time available exceeds the time required, and less than 0.5 when the time required exceeds the time available. The γ parameter represents time sensitivity and determines how strongly expectancy is affected by the difference between the time available and time required. When γ = 0, the person is completely insensitive to the difference between the time available and time required, and expectancy is always 0.5. As γ increases, expectancy becomes more sensitive to this difference. Time sensitivity is usually treated as a free parameter.

The expected utility of acting on a goal is influenced by the product of valence and expectancy. However, Ballard, Vancouver et al. (2018) showed that expected utility can also be subject to temporal discounting (e.g., Ainslie & Haslam, 1992; Steel & König, 2006). Under this assumption, people discount goals that have deadlines that are relatively far away, treating them as less important than they would if the deadline were nearer. The expected utility of prioritizing goal $i$ at time $t$ is therefore a function of valence, expectancy, and the time to the deadline, which can be defined as follows:

$$U_i(t) = \frac{V_i(t) \times E_i(t)}{1 + \Gamma \, TA_i(t)}, \tag{3}$$
where Γ is a parameter that refers to the discount rate, which represents the extent to which future deadlines are discounted. When Γ = 0, no discounting occurs. In this case, deadlines have no influence on expected utility (over and above their influence on valence and expectancy). As Γ increases, discounting becomes stronger. Like time sensitivity, discount rate is usually treated as a free parameter.

The MGPM assumes that the actions one may take in order to progress toward a goal can have uncertain consequences. The attractiveness of prioritizing a goal at a given point in time depends on the particular consequence being considered. When considering the possibility that good progress can be made, prioritizing the goal will be attractive. On the other hand, when considering the possibility that little or no progress would be made, prioritizing the goal is less attractive. The attractiveness of an action $j$ at time $t$ is referred to as its momentary attractiveness (denoted $A_j(t)$) and is defined as follows:

$$A_j(t) = \sum_i U_i(t) \times Q_{ij}(t), \tag{4}$$
where $Q_{ij}(t)$ refers to quality, which represents the impact of the consequence that is being considered at time $t$ on goal progress. Specifically, $Q_{ij}(t)$ represents the consequence for goal $i$ that would result from action $j$. Quality fluctuates over time as people consider different consequences of each action. $Q_{ij}(t)$ is positive when the person anticipates that an action will have a beneficial effect on goal progress, with higher values indicating more beneficial effects. $Q_{ij}(t)$ is negative when the person anticipates that an action will have a detrimental impact on goal progress. Quality is most often treated as an observable variable that is defined based on the environment. The momentary attractiveness of action $j$ is the sum of the product of quality and expected utility for all goals that are affected by that action.

According to the MGPM, the preference for each action evolves over time according to a sequential sampling process, as the person considers different possible consequences of the actions they can take. In our example, we consider a scenario where the person is striving for two goals and therefore has two possible actions: prioritize Goal 1 or prioritize Goal 2. In this case, the change in preference over time can be described by the following equation:

$$P(t) = P(t-1) + \left[A_1(t) - A_2(t)\right], \tag{5}$$
where $A_1(t)$ and $A_2(t)$ represent the momentary attractiveness of prioritizing Goals 1 and 2, respectively. According to this equation, preference is measured on a bipolar continuum with positive values indicating a preference for prioritizing Goal 1 and negative values indicating a preference for prioritizing Goal 2. The change in preference at time $t$ is determined by the difference in the momentary attractiveness between the two actions being considered. The model assumes that the person continues to consider different consequences until preference for one action breaches a threshold, at which point that action is selected.

Because the quality of action $j$ with respect to goal $i$ fluctuates over time as different consequences are considered, preference accumulation is stochastic. Thus, we cannot predict with certainty which action will be selected. We can only predict the probability of each action being chosen. In the scenario described earlier where there are two actions being considered, the probability of selecting Action 1 can be calculated as follows:

$$p_1(t) = L\left(2 \times \frac{E\left[A_{\mathrm{diff}}(t)\right]}{\sqrt{\mathrm{Var}\left[A_{\mathrm{diff}}(t)\right]}} \times \theta\right), \tag{6}$$
where $L$ represents the standard cumulative logistic distribution function, $f(x) = 1/\left[1 + \exp(-x)\right]$. The probability of choosing Action 2 is simply $1 - p_1(t)$. The θ parameter represents the threshold that determines the preference strength required before an action is selected (usually treated as a free parameter). High thresholds mean that a stronger preference is required before an action is selected. $E[A_{\mathrm{diff}}(t)]$ and $\mathrm{Var}[A_{\mathrm{diff}}(t)]$ refer to the mean and variance of the difference in momentary attractiveness between the two actions at any given point in time. These quantities can be calculated from the mean and variance of the momentary attractiveness of each action:

$$E\left[A_{\mathrm{diff}}(t)\right] = E\left[A_1(t)\right] - E\left[A_2(t)\right], \tag{7}$$

$$\mathrm{Var}\left[A_{\mathrm{diff}}(t)\right] = \mathrm{Var}\left[A_1(t)\right] + \mathrm{Var}\left[A_2(t)\right] \tag{8}$$
In Equations 7 and 8, $E[A_i(t)]$ and $\mathrm{Var}[A_i(t)]$ refer to the mean and variance of the momentary attractiveness of Action $i$, respectively. The mean momentary attractiveness represents the average attractiveness that would be expected across the different consequences considered. The variance in the momentary attractiveness reflects the extent to which attractiveness deviates from the mean as different consequences are considered. These quantities can be calculated
based on the quality of the relevant action with respect to each goal and the utility of the goal:

$$E\left[A_j(t)\right] = \sum_i U_i(t) \times E\left[Q_{ij}\right], \tag{9}$$

$$\mathrm{Var}\left[A_j(t)\right] = \sum_i U_i(t)^2 \times \mathrm{Var}\left[Q_{ij}\right], \tag{10}$$
where $E[Q_{ij}]$ and $\mathrm{Var}[Q_{ij}]$ represent the mean and variance of the quality of action $j$ with respect to goal $i$.

Translating the Model to Code
Once the model has been specified in equation form, the next step is to translate the equations into computer code. The researcher needs to create several functions in order to construct the code that will be used for fitting the model. In this section and the ones that follow, we explain these functions in detail. The first function that must be created is one that takes as inputs the parameter values and the data required to generate predictions and returns the model predictions. In the case of the MGPM, the predictions come in the form of a probability of prioritizing each goal. In this example, the parameter values required to generate the model's prediction are the values of the three free parameters (time sensitivity, discount rate, and threshold), and the data required are variables such as the time available for and distance to each goal. Figure 10.1 shows an illustration of how this function works. The name of the function is MGPM_predictions and the function has two input arguments: 1) the parameter values, and 2) a dataset containing the variables required to generate model predictions.

FIGURE 10.1 Structure of the MGPM_predictions function. The params argument contains the parameter values being considered. The data argument is a dataset containing the values of the variables required to generate model predictions. The MGPM_predictions function returns the predicted probability of prioritizing Goal 1 under the parameter values in params for every observation in data.

In the following subsection, we describe the structure of each of these input arguments. We then explain how the function transforms the inputs into a prediction regarding goal prioritization.

The Input Arguments
The first input argument is a vector called params, which contains the values of the model parameters that are being used to generate model predictions. When the model is fit to the data, these parameter values will be repeatedly adjusted until the combination of parameter values is found that maximizes the correspondence between the data and the predictions that are made by the model under those parameter values. This is how the parameter values are estimated from the data. In this model, there are three parameters being estimated: time sensitivity (γ), discount rate (Γ), and threshold (θ). We therefore assume the params object is a vector with three elements (one for each estimated parameter). The value of the time sensitivity parameter being considered at that point occupies the first element in the vector. The discount rate parameter occupies the second element, and the threshold occupies the third. In Figure 10.1, the values being considered are 1.1, 2.2, and 3.3, respectively. However, these values will be updated repeatedly during the parameter estimation process as different combinations of parameters are tested.

The second input argument is a data frame (an R object containing a dataset) called data, which contains the variables needed to generate model predictions. The data is structured in the long format, where each row represents an observation, and each column represents a variable. The variables in the dataset are assumed to be known quantities, as opposed to values that are estimated by the model. In this way, they are akin to predictors in a regression analysis.
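For concreteness, the params vector described earlier might look like this in R (the values are the hypothetical ones shown in Figure 10.1):

# First input argument: a numeric vector of candidate parameter values.
params <- c(1.1, 2.2, 3.3)
params[1]  # time sensitivity
params[2]  # discount rate
params[3]  # threshold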
The dataset we use in this tutorial is from Experiment 1 in Ballard, Vancouver et al. (2018). In this experiment, 48 participants played a computerized farming game, which was broken down into a series of trials. In each trial, participants had to manage two crops. Their objective was to ensure that the height of each crop exceeded the target height at the end of the trial. Each time step in the game represented one week in the growing season. Participants facilitated the growth of the crops by irrigating them. However, only one crop could be irrigated each week. Thus, in each week participants had to choose which crop to prioritize.

In this experiment, the distance to the goals and the deadlines were manipulated by varying the height of the crop at the start of the growing season and the number of weeks in the growing season for one of the two crops (this crop was referred to as the experimental crop). These properties were held constant for the other crop (the fixed crop). The growing season for the experimental crop varied across five levels (10, 20, 30, 40, and 50 weeks). The starting height of the experimental crop was varied across five levels (30, 60, 90, 120, and 150 cm). The target height for each crop was always 180 cm, so the five starting heights corresponded to distances of 150, 120, 90, 60, and 30 cm, respectively. The fixed crop always had a deadline of 30 weeks and a starting height of 90 cm (which corresponded to a distance of 90 cm from the goal). This resulted in a 5 (initial time available: 10, 20, 30, 40, or 50 weeks) x 5 (initial distance to goal: 30, 60, 90, 120, or 150 cm) within-participants design, with each participant completing each of the 25 unique combinations twice.

The following output shows a summary of the dataset. As can be seen, there are 40,925 observations in total. Each observation in the dataset represents one week in the growing season. The subject and trial_number variables identify the participant and the trial (each participant performed 50 trials). The goal variables correspond to the target height. The suffixes _1 and _2 represent the experimental and fixed crops, respectively. The init_state variables represent the height of the crop at the start of the trial, whereas the state variables represent the height of the crop before each observation was made. The init_TA variables represent the length of the growing season for each crop, whereas the TA variables represent the number of weeks remaining in the growing season before each observation was made. As can be seen, the TA variables decrease by 1 with each successive observation within a trial.

Observations: 40,925
Variables: 23
$ subject               1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 . . .
$ trial_number          1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2 . . .
$ goal_1                180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, . . .
$ goal_2                180, 180, 180, 180, 180, 180, 180, 180, 180, 180, 180, . . .
$ init_state_1          30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 90, 90, 90, 90, . . .
$ init_state_2          90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, 90, . . .
$ state_1               30.00000, 38.24187, 39.52989, 40.24762, 39.86453, 41.89 . . .
$ state_2               90.00000, 84.12108, 92.28692, 95.01393, 98.08640, 108.3 . . .
$ init_TA_1             10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, . . .
$ init_TA_2             30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, . . .
$ TA_1                  10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 10, 9, 8, 7, 6, 5, 4, 3, . . .
$ TA_2                  30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 30, 29, 28, 27, . . .
$ importance            1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 . . .
$ expected_lag          0.3333333, 0.3333333, 0.3333333, 0.3333333, 0.3333333, . . .
$ qual_mean_1_given_1   1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 . . .
$ qual_mean_2_given_1   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 . . .
$ qual_mean_1_given_2   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 . . .
$ qual_mean_2_given_2   1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 . . .
$ qual_var_1_given_1    0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0 . . .
$ qual_var_2_given_1    0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0 . . .
$ qual_var_1_given_2    0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0 . . .
$ qual_var_2_given_2    0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0 . . .
$ prioritize_1          1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1 . . .
The importance variable contains the value of κ. We assume that the importance of each goal is fixed to a known value, which is the same for the two goals. Importance is therefore a fixed parameter in this application of the MGPM. The value of the importance variable is one for every observation. The expected_lag variable also takes on the same value for every observation. We set the expected lag to one-third because, in this experiment, it would take one-third of a week on average to reduce the distance to each goal by one cm.

In this experiment, the impact of the chosen action on the progress for each goal depends on both the action and goal in question. Thus, there are four different qualities that need to be considered. The quality variables are labeled such that the first number corresponds to the goal that is being affected and the second number corresponds to the goal being prioritized. The qual_mean variables represent the mean quality for a given goal resulting from a particular prioritization decision ($E[Q_{ij}]$ in Equation 9). qual_mean_1_given_1 therefore
refers to the expected impact on the progress for Goal 1 when Goal 1 is prioritized. qual_mean_2_given_1 refers to the expected impact on the progress for Goal 2 when Goal 1 is prioritized. qual_mean_1_given_2 refers to the expected impact on the progress for Goal 1 when Goal 2 is prioritized. As can be seen by the values of the qual_mean variables, the expected impact on progress for the prioritized goal is set to one, whereas the expected impact for the non-prioritized goal is set to zero. This reflects the notion that the irrigated crop was likely to grow in that week, whereas the non-irrigated crop was equally likely to increase versus decrease in height. The qual_var variables represent the variance in quality ($\mathrm{Var}[Q_{ij}]$ in Equation 10). The variance in the quality for each goal was always 0.25, regardless of which crop was prioritized. The prioritize_1 variable is a binary variable representing whether or not the participant prioritized Goal 1 in that week of the growing season (1 if yes, 0 if no).

From Inputs to Model Predictions
The following code shows the function, called MGPM_predictions, which runs the model. By "run the model," we mean that this function converts the parameters and data into predictions regarding the prioritization of each goal. The first operation the function performs (lines 4-6) is to extract the values of the model parameters that are being tested from the params object. The params[1], params[2], and params[3] statements extract the first, second, and third elements of the params vector, respectively. The value contained in each element is then stored in its own object. The time_sensitivity, discount_rate, and threshold objects will therefore contain the value of the time sensitivity, discount rate, and threshold parameters, respectively.²

1  MGPM_predictions = function(params, data){
2    #extract parameters
3
4    time_sensitivity = params[1]
5    discount_rate = params[2]
6    threshold = params[3]
7
8    #calculate distance to each goal
9    dist_1 = data$goal_1 - data$state_1
10   dist_2 = data$goal_2 - data$state_2
11
12   #time required
13   TR_1 = pmax(dist_1 * data$expected_lag, 0)
14   TR_2 = pmax(dist_2 * data$expected_lag, 0)
15
16   #valence
17   val_1 = data$importance * TR_1/data$TA_1
18   val_2 = data$importance * TR_2/data$TA_2
19
20   #expectancy
21   exp_1 = 1/(1 + exp(-time_sensitivity * (data$TA_1 - TR_1)))
22   exp_2 = 1/(1 + exp(-time_sensitivity * (data$TA_2 - TR_2)))
23
24   #utility
25   util_1 = val_1 * exp_1/(1 + discount_rate * data$TA_1)
26   util_2 = val_2 * exp_2/(1 + discount_rate * data$TA_2)
27
28   #mean momentary attractiveness of prioritizing each goal
29   attract_1_mean = util_1 * data$qual_mean_1_given_1 + util_2 * data$qual_mean_2_given_1
30   attract_2_mean = util_1 * data$qual_mean_1_given_2 + util_2 * data$qual_mean_2_given_2
31
32   #mean difference in momentary attractiveness
33   attract_diff_mean = attract_1_mean - attract_2_mean
34
35   #variance in the momentary attractiveness of prioritizing each goal
36   attract_1_var = util_1^2 * data$qual_var_1_given_1 + util_2^2 * data$qual_var_2_given_1
37   attract_2_var = util_1^2 * data$qual_var_1_given_2 + util_2^2 * data$qual_var_2_given_2
38
39   #variance of the difference in momentary attractiveness
40   attract_diff_var = pmax(attract_1_var + attract_2_var, 0.0001)
41
42   #probability of prioritizing Goal 1
43   prob_prioritize_1 = 1/(1 + exp(-2 * (attract_diff_mean/sqrt(attract_diff_var)) * threshold))
44
45   return(prob_prioritize_1)
46 }
The function then calculates a series of new variables according to the equations described earlier. On lines 9 and 10, the distance to each goal is computed by subtracting the current state with respect to that goal from the goal itself. Because data is a data frame object, accessing the variables contained within requires the data$ prefix. For example, the variable representing Goal 1 would be accessed with data$goal_1.

On lines 13 and 14, the time required to achieve each goal is calculated based on the distance to each goal and the expected lag. Note that time required is calculated in such a way that it is constrained to have a minimum value of 0. This constraint is implemented via the pmax function. This function works by, for each observation, comparing the product of distance and expected lag to a value of 0 and returning the maximum of the two values. The result is that TR_1 and TR_2 will never take on values less than 0. This constraint is needed because time cannot take on a negative value. When the person has surpassed the goal, no more time is needed to reach it, regardless of the extent to which the goal has been exceeded.

Lines 17 and 18 compute the valence of each goal according to Equation 1. Lines 21 and 22 compute the expectancy of each goal according to Equation 2. Lines 25 and 26 compute the expected utility of each goal according to Equation 3. Lines 29 and 30 compute the mean attractiveness of prioritizing each goal according to Equation 9. Line 33 computes the difference in the mean momentary attractiveness between the two actions based on Equation 7.

Lines 36 and 37 compute the variance in the momentary attractiveness of prioritizing each goal according to Equation 10. Line 40 then computes the variance in the difference in momentary attractiveness between the two actions according to Equation 8. When the variance in the difference in momentary attractiveness is 0, the ratio of the mean difference in momentary attractiveness to the variance of the difference is undefined. We therefore set this variance to have a lower bound of a small, positive number (in this case, 0.0001). This constraint is implemented via the same pmax function that was used to compute the time required, except here the lower bound is set to 0.0001 instead of 0. This constraint prevents the variance in the momentary attractiveness difference from taking on a value of 0. This constraint is needed because a variance of 0 means that the predicted probability of prioritizing Goal 1, which is computed on line 43 according to Equation 6, would be undefined. An undefined prediction would cause an error in the code used to fit the model. Restricting the variance from taking on a value of 0 eliminates the opportunity for this error to emerge. Such constraints are often necessary when dealing with equations that may have values of 0 in their denominators. This is not problematic theoretically because there is virtually no difference in the predictions made by the model when the variance is 0.0001 compared to when the variance is even closer to 0. Changes in values at this scale have no real effect on the model's predictions. Thus, setting the lower bound of the variance to 0.0001 does not affect our ability to test the theory.
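Although the chapter does not show this step explicitly, a minimal usage sketch may help clarify what the function produces (assuming the data frame described earlier is loaded as data; the parameter values here are hypothetical):

# Predicted probability of prioritizing Goal 1 for every observation,
# under one candidate combination of parameter values.
test_params <- c(1.0, 0.1, 2.0)  # time sensitivity, discount rate, threshold
prob_prioritize_1 <- MGPM_predictions(test_params, data)
head(prob_prioritize_1)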
Defining the Cost Function
Model fitting involves generating predictions under different parameter values and examining the alignment between the data and model under each value considered. To do this, the modeler must first define a cost or discrepancy function that quantifies the model-data correspondence. The cost function is defined in such a way that higher values indicate poorer alignment between the model and data. In other words, the greater the output of the cost function, the larger the discrepancy between the model predictions and the empirical observations. The goal of model fitting, therefore, is to identify the parameter values that minimize the value of the cost function.

The choice of cost function will depend on the nature of the data, the model, and the research question being investigated. One approach to defining a cost function is the least-squares method (e.g., Vancouver, Weinhardt et al., 2010). Using this approach, a modeler working with continuous data might define the cost function as the square root of the mean squared difference between each observation and the model prediction that corresponds to that observation (commonly referred to as the root mean squared deviation or RMSD). A commonly used analog of the RMSD that is appropriate for categorical data is the χ² (or G²) index (see Farrell & Lewandowsky, 2018).

Another approach to defining the cost function is maximum likelihood (e.g., Ballard, Yeo, Loft et al., 2016). The maximum likelihood approach differs from least-squares in that the goal is to identify the parameters that are most likely given the data, as opposed to the parameters that yield the smallest mean squared difference between the model and the data. This difference might seem subtle, but it has some important implications. A major advantage of the maximum likelihood approach is that the output of the cost function can be used to conduct quantitative model comparisons. Under maximum likelihood estimation, the cost function defines the probability of the data having been observed, given the model predictions that are generated under the parameter values being considered. The goal is to find the parameters under which the data are most likely to have been observed. These are the parameter values that are most likely given the data.

We use the maximum likelihood approach to define the cost function for our example implementation of the MGPM. As the reader will see, the maximum likelihood approach makes it easy to transform the value of the cost function into an index that quantifies the ability of the model to parsimoniously account for the data, which can be used to compare the model to other theoretically plausible alternatives. The binary nature of the prioritization data makes the maximum likelihood approach straightforward to implement. Because the prioritization variable to which the model is being fit is binary (1 if Goal 1, 0 if Goal 2), the data can be modeled using a Bernoulli distribution (which is the same approach used in logistic regression). Recall that the model generates predictions regarding the probability that Goal 1 will be prioritized. This prediction
can be converted into a likelihood by simply reading out the model's predicted probability of that decision having been observed. This can be done using the following rule: If Goal 1 is prioritized, the likelihood of that observation is equal to the predicted probability of prioritizing Goal 1 under the model. If Goal 2 is prioritized, the likelihood is equal to one minus the predicted probability of prioritizing Goal 1 under the model. For example, if the model predicts that Goal 1 has a 0.75 probability of being prioritized, the likelihood of an observation in which Goal 1 is prioritized is 0.75, whereas the likelihood of an observation in which Goal 2 is prioritized is 0.25.

In order to implement the cost function, we need to construct a second function that computes the likelihood of the observed prioritization decisions under the parameter values in question. Figure 10.2 shows a diagram of how this second function works.

FIGURE 10.2 Structure of the MGPM_likelihood function. The params argument contains the parameter values being considered. The data argument is a dataset containing the values of the variables required to generate model predictions. The MGPM_likelihood function passes these arguments to the MGPM_predictions function, which returns the predicted probability of prioritizing Goal 1. This prediction is used to compute the negative log-likelihood, which is returned by the MGPM_likelihood function.

The function is called MGPM_likelihood and it has the same two inputs as the MGPM_predictions function (the parameter values and the data). The MGPM_likelihood function passes the information regarding the parameter values and the data to the MGPM_predictions function, which
returns the predicted probability of prioritizing Goal 1 for each observation in the dataset. The MGPM_likelihood function uses this information to determine the likelihood of the data.

The following code shows the MGPM_likelihood function. This function performs three operations. On line 4, the MGPM_predictions function is run to generate the model predictions, which are stored in the prob_prioritize_1 object. The likelihood of each observation is calculated on line 7. The likelihood is computed using the ifelse function, which performs a logical test on each element of the first input argument and returns the corresponding element of the second argument if the condition is met and the corresponding element of the third argument otherwise. Here, the first argument is a logical test of whether Goal 1 is prioritized, which returns TRUE for observations where Goal 1 is prioritized and FALSE where Goal 2 is prioritized. For observations where Goal 1 is prioritized, the likelihood is equal to prob_prioritize_1, which is the predicted probability of prioritizing Goal 1. For observations where Goal 2 is prioritized, the likelihood is equal to 1-prob_prioritize_1, which is the predicted probability of prioritizing Goal 2. The likelihood of each observation is stored in the likelihood object.

1  MGPM_likelihood = function(params, data){
2
3    #Run MGPM to get the predicted probability of prioritizing goal 1 for each observation
4    prob_prioritize_1 = MGPM_predictions(params, data)
5
6    #Calculate the likelihood of the observation given the model prediction
7    likelihood = ifelse(data$prioritize_1 == 1,
8                        prob_prioritize_1,
9                        1 - prob_prioritize_1)
10
11   #Take the logarithm of the likelihood (i.e., the log-likelihood),
12   #then sum the log-likelihood across observations and
13   #multiply by -1 to convert the negative number to a positive one.
14   neg_log_likelihood = -sum(log(likelihood))
15
16   return(neg_log_likelihood)
17 }
There is one more operation that needs to be performed in order to calculate the cost function. In order for the algorithm that estimates the parameters to work, the cost function must represent the discrepancy between the model and the data as a single number. This means that we need to aggregate the values stored in the likelihood variable, which reflects the likelihood of each individual observation, to create a global index reflecting the likelihood of the data as a whole. Conceptually, this is straightforward. According to probability theory, the joint probability of observing a set of independent data points is equal to the product of the probabilities of observing each individual data point. Thus, the likelihood of the data as a whole can be calculated by multiplying the predicted probabilities of each observation across the entire dataset.

In practice, however, multiplying probabilities across an entire set of observations results in extremely small numbers that are often outside the range of values that can be represented by most computers. It is therefore common practice to compute the likelihood on a logarithmic scale. As can be seen on line 14 of the code shown earlier, we transform the likelihood of each individual observation to the logarithmic scale by taking the log of the likelihood variable. We then sum the log-likelihoods across observations (which is equivalent to multiplying the raw likelihoods) to create a single index representing the logged likelihood of the data as a whole having been observed. The sum of the log-likelihoods will be a negative number, with lower values representing data that are less likely under the model (i.e., poorer model-data correspondence). We therefore reverse the sign of the sum of the log-likelihoods so that the output of the cost function is a positive number where higher values represent less likely data. Note that the log transformation is monotonic, which means that it does not change the relationship between the parameters and the output of the cost function. In other words, the combination of parameters that produces the lowest logged value of the cost function will be the same combination that produces the lowest raw value.

When modeling data that are continuous, rather than binary, a different likelihood function is needed. For normally distributed data, the likelihood can be computed by evaluating the probability density function of the normal distribution at the observed value (see Ballard, Palada et al., 2019; Neal, Gee, Ballard, Vancouver, & Yeo, 2018). Most statistical or mathematical programming languages contain functions that automate this computation (e.g., the dnorm function in R or the normpdf function in MATLAB). For continuous data that are not normally distributed, such as highly skewed data, a different functional form may be more appropriate. For example, researchers seeking to model response times, which tend to be positively skewed, may elect to compute the likelihood using a log-normal, gamma, or ex-Gaussian distribution.
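As a minimal sketch of the normally distributed case (our illustration, not code from this chapter; observations, predictions, and sigma are hypothetical inputs), a cost function analogous to MGPM_likelihood might look like this:

# Negative log-likelihood for continuous, normally distributed data.
# dnorm(..., log = TRUE) returns the log of the normal density directly,
# avoiding the underflow problem described above.
normal_neg_log_likelihood <- function(observations, predictions, sigma) {
  -sum(dnorm(observations, mean = predictions, sd = sigma, log = TRUE))
}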
Minimizing the Cost Function
Fitting the model to the data involves evaluating the cost function for different combinations of parameters with the goal of finding the combination that
minimizes the cost function. Minimizing the cost function is necessary because it allows the researcher to identify the parameter values that provide the best possible correspondence between the model predictions and the data (as measured, in this case, by the likelihood of the model and its parameters given the data). As described in the introduction to this chapter, when model-data correspondence is maximized, the researcher can attribute any discrepancy between the model and the data to the inadequacy of the model itself (as opposed to inadequacy in the parameter values used to generate model predictions). This makes theories easier to evaluate.
To minimize the cost function, the researcher must first specify the free parameters that are to be adjusted when attempting to bring the model predictions in line with the data. Revisiting the metaphor of the analog radio, free parameters are the tuning knobs that can be changed in order to pick up a better signal. This process of adjusting the values of free parameters until a combination is found that minimizes the cost function enables free parameters to be estimated from the data. As illustrated earlier, our implementation of the MGPM has three free parameters: time sensitivity, discount rate, and threshold.
The number of free parameters bears upon the parsimony of the model. In general, as the number of free parameters increases, the model becomes more flexible. This can make the model difficult to falsify because it increases the range of data patterns for which the model can account. Increasing model flexibility also increases the risk that the model provides such a close fit to one dataset that it fails to adequately capture new observations (this problem is referred to as overfitting or fitting to noise). This limits the model's generality. Thus, there is a trade-off between giving the model the best chance of accounting for the data (by maximizing the number of free parameters) and maintaining a parsimonious model (by limiting the number of free parameters). We return to this issue in the Model Comparison section.
Parameters that are held constant instead of being estimated are commonly referred to as fixed parameters. Fixing parameters helps to keep the number of free parameters to a minimum. The researcher may choose to fix parameters that have values that are known a priori or are not of theoretical interest. For example, Zhou, Wang, and Vancouver's (2018) computational model of leadership goal striving included parameters that controlled the rate at which leaders and subordinates could make progress toward the goal. These parameters were not focal to the hypotheses being tested and were therefore fixed for the purposes of parsimony. In our implementation of the MGPM, we fix the expected lag and quality parameters because these parameters have known values that are determined by the experimental task. We also fix the goal importance parameter because the goals that participants pursue in this experiment are identical in value and therefore would not be expected to differ in importance.
The process of testing different free parameter values is conceptually simple but often practically challenging. The simplest way to do this is to conduct an exhaustive search of the parameter space. Consider an example where a model has a single free parameter. In this case, the modeler might simply test a series of parameter values that span the range of values that the parameter could plausibly take on. If the modeler expects the parameter value to fall between 0 and 10, they might test all values along this interval in increments of 0.01. This would require testing 1001 possible parameter values, which would take only a few seconds on most computers.
It is often the case, however, that a model will have more than one free parameter. In this scenario, an exhaustive search requires testing every possible combination of parameter values (a technique known as "grid search"). This approach becomes less feasible as the number of free parameter combinations that need to be tested increases. For example, if a second parameter with the same range of possible values was added to the model discussed earlier, over 1 million unique combinations of parameter values would need to be tested. If a third parameter was added, the number of possible combinations would be over 1 billion! Such an analysis would be far too computationally intensive to be practical. A grid search is therefore only feasible when the number of unique combinations of parameter values is modest.
The range of possible parameter values and the increments between parameter values tested also influence the number of combinations of parameters that need to be assessed. If a grid search is used, the range of each parameter should be selected so that it covers the full space of plausible parameter values. The upper and lower bounds on the range are typically guided by theory. For example, it might not make theoretical sense for certain parameters to take on negative values, or there may be certain parameter values beyond which further increases (or decreases) in the parameter value have negligible effects on the model's predictions. In the latter case, parameter values above (or below) the identified value carry the same theoretical interpretation, so values beyond this point may not need to be considered. The increments in parameter values should be chosen so that they are as small as possible without rendering the grid search infeasible. Smaller increments are better because they allow for more fine-grained parameter estimates. In general, the increments should be small enough that transitioning between consecutive parameter values results in only a minor change in the model's predictions. If this is not the case, the researcher runs the risk of "missing" parameter values that result in far superior model-data correspondence. This possibility can make a discrepancy between the data and the model's predictions under the estimated parameter values difficult to interpret for reasons outlined earlier in this chapter.
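To make the single-parameter example concrete, the following sketch shows what such a grid search might look like using the MGPM cost function defined earlier. The values of 1 supplied for the discount rate and threshold are arbitrary placeholders used only for illustration.

#A minimal sketch of a one-parameter grid search over time sensitivity
#(the values of 1 for the discount rate and threshold are placeholders)
candidate_values = seq(0, 10, by = 0.01) #1001 candidate values
cost = sapply(candidate_values,
              function(p) MGPM_likelihood(c(p, 1, 1), data))
best_estimate = candidate_values[which.min(cost)]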
Gee, Neal, and Vancouver (2018) used a grid search to estimate two parameters from their computational model of goal revision. However, the plausible ranges of these parameters were narrow (one parameter was bounded between 0 and 1 and the other between -3 and 3). Furthermore, the increment between parameter values considered was relatively large (0.1). As a result, only 671 unique combinations of parameter values needed to be tested. In most cases where more than one parameter needs to be estimated, however, a grid search will be too computationally intensive to estimate parameters efficiently.
Fortunately, there are far more efficient ways to estimate parameters than by exhaustively searching the parameter space. Modelers will typically make use of algorithms that find the values that minimize the cost function without having to test all possible combinations of parameters. Broadly, these algorithms work by first evaluating the cost function under some fixed set of initial parameter values. The algorithms then repeatedly modify the parameter values in such a way that the value of the cost function decreases with each iteration, until some stopping criterion is met (Farrell & Lewandowsky, 2018). This stopping criterion may involve stopping after a certain number of iterations or when the change in the cost function between iterations drops below some threshold value. Most programs have default stopping criteria that are sufficient in the majority of cases. Once the algorithm ceases, the resulting parameter values constitute the best estimates.
The following code implements an algorithm that estimates the parameters by finding the combination of parameters that minimizes the cost function. The function that runs the algorithm is a built-in function in R called optim (line 5). This function has three arguments. The first argument, par, is a vector of initial values for the free parameters. This vector, called starting_values, is assigned on line 2. The vector has three elements, one for each free parameter in our model. The time sensitivity, discount rate, and threshold parameters occupy the first, second, and third elements, respectively.
1  #specify initial parameter values
2  starting_values = c(1,1,1)
3
4  #run parameter estimation algorithm
5  result = optim(par = starting_values,
6                 fn = MGPM_likelihood,
7                 data = data)
The second argument, fn, is the name of the cost function that is to be minimized. The third argument, data, is the data required to evaluate the cost function. The optim function requires the function to be minimized to have two input arguments. The first argument to the function being minimized must be the vector of parameters that are adjusted in order to minimize the function. The second argument to the function being minimized is the data required to evaluate the function. The optim function also assumes that the function being minimized
returns a single numeric value. In our case, the function to be minimized is the MGPM_likelihood function. We've structured the MGPM_likelihood function so that it meets the requirements of the optim function. The first argument to the MGPM_likelihood function contains the values of the three parameters that are updated as the model is fit to the data. The second argument to the MGPM_likelihood function contains the data needed to generate the model predictions.
Figure 10.3 shows how the optim function works. The function repeatedly evaluates the MGPM_likelihood function, each time adjusting the values for the params argument and examining the corresponding change in negative log-likelihood. This process continues until the algorithm implemented by the optim function reaches a point where no further reductions in the negative log-likelihood can be achieved. This process is referred to as function optimization (hence the name "optim").
The following code shows the output from the optim function, which is stored in the result object. As can be seen, the optim function returns a list with five elements. The par element contains the parameter values that minimize the function, or in other words, the parameter estimates. The estimates for the time sensitivity, discount rate, and threshold parameters were approximately 0.41, 1.76, and 0.51, respectively. The value element contains the minimum
FIGURE 10.3 Structure of the optim function. The input arguments are the starting parameter values, the name of the function to be minimized, and the dataset containing the variables required to generate model predictions. optim repeatedly evaluates the MGPM_likelihood function until the combination of parameters is found that minimizes the function output.
value of the function. In our implementation, this is the negative log-likelihood of the data under the estimated parameter values. The counts element contains information regarding how many times the function needed to be evaluated before the minimum value was found. The convergence element indicates whether the algorithm converged, with zero indicating successful convergence. Finally, the message element contains any other information that was returned by the optim function.
$par
[1] 0.4107650 1.7628407 0.5116319

$value
[1] 24719.97

$counts
function gradient
     154       NA

$convergence
[1] 0

$message
NULL
At this point, the reader may be wondering how much the results of this analysis depend on the starting values for the three parameters. This is an important question to consider because, in some cases, changing the starting values will result in different parameter estimates. This happens because the relationship between the parameters and the output of the cost function can be complex. As an example, Figure 10.4 shows the relationship between the time sensitivity parameter and the value of the cost function (assuming the discount rate and threshold parameters are fixed to their estimated values). As can be seen, the value of the cost function is high when time sensitivity is 0 and then rapidly decreases before reaching its minimum around 0.4. As time sensitivity increases beyond this value, the output of the cost function begins to increase. This increase is rapid at first but then levels off as time sensitivity reaches 8 or so.
Recall that the starting value for the time sensitivity parameter was 1. At this value, any small change in the parameter value will have a relatively pronounced effect on the output of the cost function. Small increases will amplify the output of the cost function, whereas small decreases will reduce the output. Because of this, it is easy for the algorithm to identify the changes in the parameter that are necessary to move toward the function's minimum. However, suppose we used a starting value of 10 for the time sensitivity parameter.
FIGURE 10.4 The relationship between time sensitivity and the negative log-likelihood of the data under the model. This figure was constructed by fixing the discount rate and threshold to their estimated values (1.76 and 0.51, respectively) and examining the result of the MGPM_likelihood function for a range of time sensitivity values.
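A profile like Figure 10.4 can be constructed in a few lines. The sketch below is our own illustration: it holds the discount rate and threshold at the estimates reported in the optim output shown earlier and evaluates the cost function over a grid of time sensitivity values.

#Sketch: profile the cost function over time sensitivity, holding the
#discount rate and threshold at their estimated values
ts_values = seq(0, 10, by = 0.05)
nll = sapply(ts_values,
             function(ts) MGPM_likelihood(c(ts, 1.7628407, 0.5116319), data))
plot(ts_values, nll, type = "l",
     xlab = "Time sensitivity", ylab = "Negative log-likelihood")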
In this region, changes in time sensitivity have little effect on the output of the cost function (as can be seen by the relatively flat nature of the function around that value). Such cases can lead the algorithm to prematurely "conclude" that the minimum of the function has been found because it cannot find any neighboring values that produce meaningful decreases in the output of the function. Indeed, this is precisely what happens if we change the starting values for our three parameters to 10. When working with more complex models, it is also possible for more than one "valley" to exist in the parameter space. In such cases, the parameter estimates are especially sensitive to the starting values that are chosen
because there are multiple points at which all neighboring parameter values produce an increase in the output of the cost function. The parameter estimates will therefore depend on which "valley" is closest to the starting values. This issue is referred to as the problem of local minima. The parameters that produce the closest correspondence between model and data are those for which the value of the cost function is lowest. The lowest point of the cost function is referred to as the global minimum. As illustrated earlier, finding the global minimum may be challenging because many algorithms can converge on parameter values that produce the lowest value of the cost function among parameter values in the immediate vicinity but that may not represent the lowest values possible (such values are referred to as local minima).
There are several steps the modeler can take to ensure that the parameter estimates returned by the algorithm reflect the global minimum of the cost function. It is generally advisable to run the analysis using different starting points and examine the output under each set. It is not uncommon for different sets of starting values to yield different parameter estimates. However, the different sets of parameter estimates will typically produce different "minimum" values for the cost function. For example, when we run the algorithm using starting values of 10 for all three parameters, the resulting parameter estimates are 16.49, 1.72, and 0.42 for time sensitivity, discount rate, and threshold, respectively. However, the value of the cost function under this combination of parameter values is 25327.92, which is higher than the value under the original parameter estimates. Thus, we know that the parameter values that were returned when we set the starting values to 10 do not represent the best estimates, because there is another combination of values under which the model and data are more closely aligned.
One useful way to systematically test different starting values is to use a grid search method. We noted earlier that an advantage of the grid search is that it can systematically test every combination of parameters but that it has the limitation of only being practical when the number of combinations is modest. However, one can combine the virtues of the grid search with those of the optimization algorithm by using a grid search to systematically vary the starting values that are used for each run of the algorithm. For example, we might construct a set of all combinations of starting values in which each of our three parameters takes on values of 0, 5, or 10 (alternatively, we could randomly sample the sets of starting values). We would then run the optimization algorithm once for each set of starting values and record the minimum value of the cost function returned each time. We would then identify the starting values that produced the lowest of these minimum values. When the algorithm is run using these starting values, the parameter estimates returned would constitute the best estimates, and the minimum value identified would represent our best guess at the global minimum of the cost function.
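A minimal sketch of this multi-start strategy, using the grid of starting values (0, 5, or 10 for each parameter) described earlier, might look as follows. It is illustrative only; the column names in starting_grid are our own labels.

#A multi-start sketch: run optim once per set of starting values and
#keep the run with the lowest minimized cost
starting_grid = expand.grid(time_sensitivity = c(0, 5, 10),
                            discount_rate = c(0, 5, 10),
                            threshold = c(0, 5, 10))
fits = lapply(seq_len(nrow(starting_grid)),
              function(i) optim(par = unlist(starting_grid[i, ]),
                                fn = MGPM_likelihood,
                                data = data))
best_fit = fits[[which.min(sapply(fits, function(f) f$value))]]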
Model Comparison
Demonstrating that the data correspond with the model's predictions is an important first step in testing a computational model. We can infer on the basis of model-data correspondence that the model is a possible explanation for the empirical phenomena under investigation. However, we cannot rule out the possibility that there are other viable mechanisms. In other words, model predictions that map accurately onto the data provide evidence that the assumptions that form the model are sufficient to account for the empirical phenomenon. However, they do not speak to the necessity of these assumptions. We therefore encourage modelers to go beyond simply showing that their model can account for the data by comparing their model against other plausible models. As with more commonly used methodologies such as structural equation modeling, computational model comparison generally involves comparing a series of models on the basis of their ability to parsimoniously account for the data in order to determine which of a set of models provides the best explanation. The comparison may be between a hypothesized model and a set of variants that represent competing explanations or between a series of models that are equally plausible theoretically.
There are several criteria on which the quality of the model's explanation is evaluated (Myung & Pitt, 1997). The first criterion is the alignment between the model predictions and the data, which in structural equation modeling might be quantified using indices such as the CFI, TLI, RMSEA, or the value of the log-likelihood function under the estimated parameter values. A second criterion is parsimony, which in structural equation modeling, as well as many applications of computational models, is often measured as the number of free parameters in the model. All else being equal, a simpler model (i.e., a model with fewer free parameters) should be preferred. This is because a simpler model can account for a narrower range of possible patterns of data. Thus, the simpler the model, the more convincing it is when the model aligns with the data. A third criterion is generality, which refers to the extent to which the model can be applied across experimental tasks or settings. A final criterion is plausibility, which reflects the reasonableness of the model's assumptions and their compatibility with established findings.
Of these four criteria, model-data alignment and parsimony are the most readily quantifiable and are therefore the two primary dimensions considered in most model comparisons. Together, these two dimensions determine the degree to which the model satisfies Occam's razor, which implies that models should be as simple as possible while still being able to account for the data. The trade-off between fit and parsimony can be quantified in a number of ways. One method is the likelihood ratio test, which examines whether the addition of a free parameter produces enough of an increase in the log-likelihood (i.e., model-data alignment) to warrant the added complexity. However, this test only
applies to comparisons between nested models. Two more generally applicable indices are the Akaike information criterion (AIC; Akaike, 1973) and the Bayesian information criterion (BIC; Schwarz, 1978), which combine assessments of fit and parsimony into a single value. These indices can be applied to nested or non-nested models (we elaborate on this issue in the "Extensions" section). They are interpreted such that lower indices indicate a better balance between fit and parsimony, and therefore a better explanation. In this tutorial, we use the BIC to compare different versions of the MGPM. The BIC is calculated according to the following equation:
BIC = -2 × ln(L) + ln(n) × k.    (11)
The first term in Equation 11 assesses the model-data alignment, as measured by the log-likelihood under the estimated parameters (ln(L)). As can be seen, the better the model-data alignment (the higher the log-likelihood), the lower the BIC. The second term in Equation 11 penalizes the complexity of the model. As can be seen, the BIC increases with the number of free parameters (k) and the log of the number of observations (n). This means that a model with more free parameters will incur a greater penalty for complexity, with the difference between penalties being more pronounced when there are more observations.
The following code calculates the BIC for the MGPM. On line 1, the number of observations is obtained by using the nrow function, which counts the number of rows in the data object. On line 2, the number of free parameters is obtained by using the length function to count the number of estimated parameters stored in the result object. On line 3, the log-likelihood under the estimated parameters is obtained by taking the negative of the minimum cost function value returned by the optim function. Recall that the output of the cost function was the negative log-likelihood of the data under the model. Here, we simply convert that value back to the log-likelihood (which, confusingly, is a negative number) by taking the negative. On line 5, the BIC is calculated according to Equation 11.
1  n = nrow(data) #number of observations
2  k = length(result$par) #number of estimated parameters
3  lnL = -result$value #log-likelihood of the model under estimated parameters
4
5  BIC = -2*lnL + log(n)*k
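For comparison, the AIC can be computed from the same quantities. The standard formula replaces the ln(n) penalty term with a constant penalty of 2 per free parameter; the line below is a sketch reusing the lnL and k objects defined above.

#The AIC (Akaike, 1973) applies a fixed penalty of 2 per free parameter
AIC = -2*lnL + 2*k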
We compare the implementation of the MGPM described earlier to two alternative models. The first alternative tests the necessity of the assumption that the expected utility of a goal is subject to temporal discounting. We do this by comparing the version of the MGPM introduced earlier to a simpler model in which
the discount rate parameter is fixed to 0. This constraint turns off the temporal discounting mechanism within the model. This model has only two free parameters: time sensitivity and threshold. The second alternative model tests the sufficiency of the assumption that the perceived time required to progress toward the goal by a single unit is equal to the actual time required in the experimental task. We do this by testing a more complex alternative in which the expected lag parameter is treated as a free parameter. This model has four free parameters: time sensitivity, discount rate, threshold, and expected lag.
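One simple way to implement the two-parameter alternative, given the functions defined earlier, is to wrap the cost function so that the discount rate (the second element of the parameter vector, per the ordering described earlier) is always set to 0. The wrapper below is our own illustrative sketch rather than part of the original tutorial code.

#Hypothetical sketch: fit the two-parameter variant by fixing the
#discount rate (second element of the parameter vector) to 0
MGPM_likelihood_2p = function(params, data){
  MGPM_likelihood(c(params[1], 0, params[2]), data)
}
result_2p = optim(par = c(1, 1), fn = MGPM_likelihood_2p, data = data)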
Figure 10.5 shows the predictions of each model superimposed against the choices observed.
FIGURE 10.5 The fit of the three models superimposed over the data observed in the experiment. The black dots represent the observed proportion of decisions prioritizing Goal 1 in each experimental condition (with error bars representing the standard error of the mean). The lines represent the average probability predicted by the 2-, 3-, and 4-parameter models. [Panels: Very Short Distance (30 cm), Short Distance (60 cm), Moderate Distance (90 cm), Long Distance (120 cm), Very Long Distance (150 cm); x-axis: Experimental Goal Deadline (weeks); y-axis: Proportion of Choices Prioritizing Experimental Goal.]
As can be seen, in the condition where the experimental goal started off a short or very short distance from the goal (30–60 cm), longer deadlines resulted in a weaker tendency to prioritize the experimental goal. In the condition where the experimental goal started off a moderate distance from the goal (90 cm), there was a non-monotonic effect of deadline, such that the tendency to prioritize the experimental goal was highest when the deadline was moderate. When the experimental goal started off farther from the goal (120–150 cm),
longer deadlines resulted in a stronger tendency to prioritize the experimental goal. The model predictions suggest the three- and four-parameter versions of the model both do a reasonable job of capturing these trends, whereas the alignment between the data and the two-parameter model is poorer.
At this point, we can rule out the two-parameter model based on its inferior ability to account for the data. The question to be answered is whether the increase in model-data alignment that is obtained from transitioning from the three-parameter model to the four-parameter model is worth the added complexity. The BICs suggest that the answer to this question is yes. The BIC for the three-parameter model is 49,472, whereas the BIC for the four-parameter model is 49,328 (the BIC for the two-parameter model is 51,727). On the basis of this result, one could argue that the four-parameter model provides a better trade-off between fit and parsimony. On the other hand, the two models do not make qualitatively different predictions, and it appears that the increase in model-data alignment achieved by the extra parameter is rather modest. This argument might be counted as a strike against the four-parameter model.
One other question that is worth considering is to what extent the estimate of expected lag in the four-parameter model deviates from the value to which this parameter was fixed in the three-parameter model. Recall that expected lag had been fixed to a value of 0.33, because this represents the average amount of time taken to reduce the distance to each goal by one unit in the task. The estimated value of this parameter in the four-parameter model is 0.35. This estimate suggests that people may slightly overestimate the time required to progress toward the goal. However, is the difference between the perceived and actual lags different enough to warrant treating expected lag as a free parameter? The point we are attempting to illustrate is that there are several considerations that need to be made when comparing models. Although quantitative model comparison metrics are informative, any decisions that are made need to be informed by the broader theoretical context.
It is important to note that the approach described earlier is not the only method for comparing models. One alternative approach is cross-validation. Cross-validation involves fitting models to a subset of the data and then evaluating the models on the basis of their ability to predict the remaining data. For example, Kennedy and McComb (2014) estimated the parameters of their model of team process shifts based on 75% of the data and then tested their model's ability to account for the remaining data by generating predictions using their parameter estimates. Cross-validation handles the issue of parsimony in a different way to the approach described earlier. Rather than imposing an explicit penalty for model complexity, as is the case with information criteria such as the BIC, cross-validation takes advantage of the tendency for overly complex or flexible models to mistake noise in the data for systematic variance (i.e., overfitting). When this happens, the model's predictions are less able to generalize
to new data even if they provide a better fit to the data to which the model was initially fit. As such, cross-validation manages the trade-off between model-data correspondence and complexity by directly assessing the generality of the model's predictions. The Bayesian framework also offers alternative approaches to model comparison, which we address in the "Extensions" section.
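A minimal sketch of this kind of cross-validation, assuming the observations can be treated as independent and that a random 75/25 split is appropriate, might look as follows.

#A cross-validation sketch: fit to 75% of observations, then evaluate
#the negative log-likelihood of the held-out 25%
train_rows = sample(nrow(data), size = floor(0.75*nrow(data)))
fit = optim(par = c(1, 1, 1), fn = MGPM_likelihood,
            data = data[train_rows, ])
holdout_nll = MGPM_likelihood(fit$par, data[-train_rows, ])
#lower holdout values indicate better out-of-sample prediction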
Parameter Recovery
The final issue we wish to discuss pertains to the reliability of the parameter estimates. As discussed in the opening section, modelers will often wish to interpret the parameter estimates to make inferences about components of the process being investigated. For example, one may wish to examine whether a particular sub-process differs across individuals or experimental conditions (e.g., Ballard, Yeo, Loft et al., 2016; Gee et al., 2018). However, the fact that the optimization algorithm has identified the "best" parameter estimates does not mean those parameter estimates are practically meaningful. It is possible that there are several different combinations of parameters that produce virtually identical predictions. In this scenario, the parameter estimates would be unreliable because there might be many different combinations of parameters that generate predictions that align with the data.
If we wish to obtain reliable parameter estimates, each combination of parameters must produce unique predictions. When this is the case, the parameters are said to be identifiable, because there will only ever be a single combination of parameters that best explains a given pattern of data. This means that parameters estimated based on the data can be interpreted meaningfully. One way to assess the reliability of a model's parameter estimates is to conduct a parameter recovery analysis. A parameter recovery analysis involves simulating data under known parameter values, fitting the model to the simulated data, and then examining how closely the parameter estimates match the values that were used to generate the data. Parameters are said to be recovered when the estimates match the data-generating values. Good parameter recovery is an indication that the parameters can be reliably estimated.
Importantly, parameter recovery depends not only on the model but also on the dataset to which the model is being fit. For example, larger sample sizes produce more reliable parameter estimates because there is more information to constrain the estimate. Thus, it is important that the parameter recovery analysis be conducted using a simulated dataset that is identical to the empirical dataset to which the model will eventually be fit. That is, the simulated data should contain the same number of observations as the empirical data, and the observed variables that are needed to generate the predictions (e.g., time available, goal level) should have the same values. Figure 10.6 shows the results of a parameter recovery analysis that was conducted on the three-parameter version of the MGPM.
FIGURE 10.6 Results of the parameter recovery analysis conducted on the three-parameter version of the MGPM. Parameters were randomly sampled independently from a uniform distribution on the interval [0.1, 5].
This analysis was conducted by randomly sampling 100 combinations of the time sensitivity, discount rate, and threshold parameters. For each combination of parameters, we ran the model to determine the probability of prioritizing Goal 1 for each observation in the dataset. We then repeated the following procedure 100 times: 1) simulate each choice based on the probabilities predicted by the model, 2) fit the model to the simulated choices, and 3) record the parameter estimates. The figure is constructed such that the data-generating parameter values are shown on the x-axis. The recovered values are shown on the y-axis, with the points representing the mean of the recovered values across the 100 repetitions and the error bars representing the 2.5 and 97.5 percentiles of the recovered values. The diagonal line represents perfect correspondence between data-generating and recovered parameter values.
A discrepancy between the mean recovered value and the data-generating value indicates bias in the parameter estimate. Points that fall above the diagonal indicate that the model systematically overestimates that parameter. Points that fall below the diagonal indicate that the parameter is underestimated by the model. The figure does not suggest that there is systematic bias in any of the parameter estimates. The width of the error bars reflects the precision of the estimate. As can be seen, the parameter estimates become less precise as the data-generating parameter value increases. This is to be expected because changes in the parameter value have less of an impact on the model predictions when the parameter value is larger. For example, the model predictions change more drastically when transitioning between time sensitivity values of 0.4 and 0.5 than when transitioning between values of 4.4 and 4.5 (this can be seen by the curvature of the log-likelihood function in Figure 10.4).
Overall, the results of this analysis suggest very good parameter recovery, which means the modeler can trust the parameter estimates obtained when the model is fit to this dataset.
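For concreteness, a single recovery repetition from the procedure described earlier might be sketched as follows. The uniform sampling bounds follow Figure 10.6, and the use of rbinom reflects the Bernoulli likelihood assumed by the model; this is an illustrative sketch, not the exact analysis code.

#A minimal sketch of one parameter-recovery repetition
true_params = runif(3, min = 0.1, max = 5) #data-generating values
p1 = MGPM_predictions(true_params, data) #model-implied probabilities
sim_data = data
sim_data$prioritize_1 = rbinom(nrow(data), size = 1, prob = p1) #simulated choices
recovered = optim(par = c(1, 1, 1), fn = MGPM_likelihood,
                  data = sim_data)$par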
Example Results Section
Model fitting was conducted in R (R Core Team, 2018) using a maximum likelihood approach. Our hypothesized model contained three freely estimated parameters: time sensitivity (γ), discount rate (Γ), and the threshold (θ). A single set of parameters was estimated for the entire sample, meaning that each parameter was assumed to take on the same value for all participants. Because the two goals in this experiment were equally important, we fixed the gain parameter (κi) for each goal to one so that this parameter would be equal for the two goals. We fixed the expected lag parameter (α) to 1/3 because this is the average amount of time needed to reduce the distance to each goal by one unit in the experiment. To be consistent with the experimental environment, the quality (Qij) of an action's consequences was defined as a normally distributed random variable. The quality distribution had a mean of one for the goal that was prioritized and zero for the goal that was not prioritized, with both distributions having a variance of 0.25. The reliability of the parameters estimated by this model was confirmed via a parameter recovery analysis (see Figure 10.6).
The predicted probability of Goal 1 being prioritized was calculated according to Equations 1–10. As the prioritization decision was a binary variable (1 = Goal 1 prioritized, 0 = Goal 2 prioritized), the likelihood of the observed prioritization decision under the model was assumed to be Bernoulli distributed. The parameters were estimated using the optimization algorithm implemented by the optim function in R, which minimized the negative summed log-likelihood of the data under the model. In order to ensure our parameter estimates were robust, we ran the optimization algorithm multiple times, using different starting values for each run, and selected the parameter estimates that produced the lowest minimized negative summed log-likelihood across all runs.
In order to assess the assumptions of our hypothesized model, we compared the hypothesized model to two alternatives. The purpose of the first alternative was to test the necessity of the assumption that the expected utility of prioritizing a goal is subject to temporal discounting. In this model, the discount rate parameter was fixed to zero, which eliminates the effect of temporal discounting. The first alternative therefore had two free parameters (time sensitivity and threshold). The purpose of the second alternative was to test the sufficiency of the assumption that people have an accurate perception of the time required to reduce the distance to each goal by one unit. In this model, the expected lag parameter was treated as a free parameter and estimated from the data. The
second alternative therefore had four free parameters. The alternative models were otherwise identical to the hypothesized model. These models were fit using the same protocol described earlier. The ability of each model to adequately characterize the data was quantified using the BIC (Schwarz, 1978), which is interpreted such that the model with the lowest value is said to provide the best trade-off between model-data correspondence and parsimony.
The predictions of the three models, given their estimated parameters, are presented in Figure 10.5. As can be seen, the hypothesized and second alternative models (the three- and four-parameter versions, respectively) are able to accurately reproduce the empirical trends in the data. The first alternative model performs noticeably worse. The BICs were 49,472, 51,727, and 49,328 for the hypothesized, first alternative, and second alternative models, respectively. The BICs suggest that the second alternative model provides a better explanation of the data than the hypothesized model. This result suggests that the assumption that participants' perception of the time required to reduce the distance to each goal by one unit (i.e., expected lag) is equal to the objective time required in the experiment may not be sufficient for adequately characterizing the empirical trends. However, the difference in BICs between the hypothesized model and the second alternative is relatively small, and the predictions of the two models shown in Figure 10.5 are very similar. This suggests that any advantage that comes from treating expected lag as a free parameter in the second alternative model is very small. The fact that the first alternative is inferior to the other two models suggests that the assumption that the expected utility of a goal is subject to temporal discounting is necessary to account for the empirical trends.
As the four-parameter model provided the best explanation of the data according to the BIC, we report the parameter estimates from this model. The values of the time sensitivity, discount rate, threshold, and expected lag parameters were 0.34, 1.47, 0.51, and 0.35, respectively. Note that the estimate of the expected lag parameter (0.35) differs only slightly from the value to which this parameter was fixed in the hypothesized model (0.33). This reinforces the conclusion that the second alternative offers only a slight improvement over the hypothesized model. The values of the time sensitivity, discount rate, and threshold parameters were comparable to their estimated values in previous research (see Ballard, Vancouver et al., 2018; Ballard, Yeo, Loft et al., 2016).3
Extensions
The method we have presented in the preceding sections is a general approach that can be used to test theories of many different organizational phenomena. Although we have demonstrated this approach using a model of multiple-goal pursuit, it can just as easily be applied to understand phenomena such as leadership (e.g., Vancouver, Wang, & Li, 2018), fatigue (e.g., Walsh, Gunzelmann, & Van Dongen,
2017), socialization (e.g., Vancouver, Tamanini, & Yoder, 2010), time pressure (e.g., Palada, Neal, Tay, & Heathcote, 2018), scheduling (e.g., Hannah & Neal, 2014), and stress and coping (e.g., Vancouver & Weinhardt, 2012), among others.
The method presented earlier can be extended in several ways to help researchers answer more complex research questions. One extension involves estimating unique parameters for different individuals or experimental conditions. In this tutorial, we assumed that every participant was characterized by the same set of parameters. However, there is often reason to estimate a unique set of parameters for each participant. For example, the researcher may wish to examine whether a certain trait variable explains variance in model parameters. In this case, the model fitting procedure described earlier needs to be conducted for each participant separately (see the sketch below).
Modelers might also wish to examine whether certain parameters vary across experimental conditions (e.g., Ballard, Sewell, Cosgrove, & Neal, 2019). To do this, the researcher would fit the model separately to the data from each condition. Note, however, that fitting to individual participants or conditions will mean that there is less data to inform each parameter estimate. This can result in less precise parameter estimates. It is therefore important that a parameter recovery analysis is conducted to examine the reliability of parameters estimated using smaller subsets of the dataset.
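A sketch of per-participant estimation is shown below. It assumes the dataset contains a participant identifier column (here called participant, a hypothetical name) and simply repeats the fitting procedure within each participant's subset of the data.

#Hypothetical sketch: estimate a separate parameter set for each
#participant (assumes a participant identifier column in the data)
participant_fits = lapply(split(data, data$participant),
                          function(d) optim(par = c(1, 1, 1),
                                            fn = MGPM_likelihood,
                                            data = d))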
Other Types of Models
In the example model comparison used earlier, the three models were identical in structure but differed in terms of which parameters were treated as free parameters and estimated from the data and which parameters were fixed. In such cases, where one can transition between models by freeing or fixing a single parameter, the models are referred to as nested. It is important to note, however, that the approach we demonstrated for comparing models also applies to non-nested models that have different functional forms. For example, we could use this same approach to test an alternative variant of the MGPM in which expected utility is proportional to the sum of valence and expectancy as opposed to their product. This approach also generalizes to models that include logical if-then statements, rather than just mathematical equations.
The models used in our tutorial contain likelihood functions that can be derived analytically. In other words, the likelihood of the data given the model's prediction can be calculated based on a set of equations. This is not the case for all models. For example, some models might have random components (i.e., noise) with effects that cannot be expressed in equation form. For example, Grand (2017) examined the effects of stereotype threat on organizational performance using a computational model in which employee turnover was governed by a random process. Models might also involve the dynamic
interaction between different agents (i.e., agent-based models) with downstream consequences that cannot be predicted analytically. For example, Tarakci, Greer, and Groenen (2016) used this type of model to understand the effects of power disparity on group performance. Mäs, Flache, Takács, and Jehn (2013) developed such a model to examine the role of team diversity in opinion polarization. These types of models need to be simulated in order to generate predictions.
In theory, these so-called "simulation" models can also be analyzed using the approach described earlier. However, doing so is often extremely computationally intensive. For example, models with random components typically need to be simulated thousands of times in order to generate reliable predictions that are not contaminated by noise. This means that the time required to fit such a model to data may be thousands of times longer than for a similar model that can be analytically derived. Models with multiple interacting agents also typically have longer run times because the number of operations required to generate predictions usually scales with the number of agents in the model. Such models can take much longer to generate predictions even if they do not contain random components. Due to these practical challenges, simulation models are often not fit to data in the way that we demonstrated earlier. However, recent advances in computing, combined with the development of more efficient parameter estimation algorithms, have meant that fitting simulation models to data is becoming easier (Evans, 2019; Holmes, 2015).
Bayesian Parameter Estimation
We have limited our focus to maximum likelihood methods, which are a frequentist approach to parameter estimation. An alternative to this approach is Bayesian parameter estimation. Bayesian parameter estimation has become widely used as a method for fitting computational models to data in the cognition literature and is starting to be used by researchers in industrial and organizational psychology (see Ballard, Farrell, & Neal, 2018; Ballard, Palada et al., 2019; Ballard, Vancouver et al., 2018; Neal, Gee, Ballard, Vancouver, & Yeo, 2018). The Bayesian approach differs from the maximum likelihood approach in some important ways (Kruschke, Aguinis, & Joo, 2012). First, although the Bayesian approach involves evaluating parameters based on their likelihood, given the data, this approach also incorporates prior beliefs regarding the plausibility of different parameter values into the analysis. The potential to incorporate prior beliefs allows the researcher to take into account results from previous studies, theoretical assumptions regarding the parameter value, or information about the plausible parameter values for other participants. The latter makes it easy to implement hierarchical models that capture variation in parameters between individuals while simultaneously estimating parameters at the group (or condition) level.
Second, whereas the maximum likelihood approach uses optimization methods to find the single best set of parameter estimates given the data, the Bayesian framework quantifies the uncertainty in a parameter estimate. This is done by estimating the posterior distribution on each parameter, which represents the range of plausible values for that parameter, given the data and the researcher's prior beliefs. By taking into account uncertainty, the posterior distribution makes it easy to determine whether differences in parameter values (e.g., between conditions or individuals) are meaningful. Posterior distributions that overlap substantially indicate that the researcher cannot confidently conclude that the parameter values differ. Posterior distributions that do not overlap suggest a reliable difference between parameter values.
Another point of difference between the maximum likelihood and Bayesian approaches to parameter estimation is how they conceptualize model complexity. As discussed earlier, maximum likelihood approaches usually operationalize complexity as the number of free parameters in the model. The Bayesian approach takes the number of parameters into account but also considers two other dimensions of complexity: 1) the functional form of the model and 2) the range of possible values that the model parameters can take on (Lee & Wagenmakers, 2013; Myung & Pitt, 1997). This more general approach to quantifying model complexity can facilitate the comparison of models that may differ in their flexibility for reasons other than a difference in the number of free parameters estimated by each model.
Conclusion
We hope that the tutorial presented in this chapter provides a useful foundation for researchers wishing to develop and test their own computational models. We believe that computational modeling encourages cumulative scientific progress because it enables a more direct mapping between theory and data, which makes theories easier to test, compare, refine, extend, and reject. Fitting models to data is a central part of that process. We are therefore hopeful that the material presented here will help researchers to develop stronger theories that ultimately move the field forward.
Further Reading
Busemeyer, J., & Diederich, A. (2010). Cognitive modeling. New York, NY: Sage.
Farrell, S., & Lewandowsky, S. (2018). Computational modeling of cognition and behavior. Cambridge: Cambridge University Press.
Kruschke, J. (2010). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. New York, NY: Academic Press.
Lee, M., & Wagenmakers, E. (2013). Bayesian cognitive modeling: A practical course. New York, NY: Cambridge University Press.
Myung, I. J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47, 90–100.
Glossary
Cost function: A mathematical function that quantifies model-data correspondence (or, perhaps more accurately, the lack of model-data correspondence) in such a way that higher values represent poorer correspondence between the data and the predictions of the model.
Fixed parameter: A parameter that is set to a known value before the analysis rather than estimated.
Free parameter: A parameter with a value that is unknown before the analysis and that is estimated from data.
Global minimum: A combination of parameter values for which the value of the cost function is the lowest value possible for the model.
Likelihood: The probability of the data having been observed under the model and the model's parameter values.
Local minimum: A combination of parameter values for which the value of the cost function is lower than for all combinations involving similar parameter values, but that does not produce the lowest value possible for the model.
Model-data correspondence: The extent to which the predictions of a model align with or match the data (also referred to as model-data alignment, model-data fit, or goodness of fit).
Model fitting: The process of adjusting model parameter values in such a way as to maximize the correspondence between the data and the predictions of the model (also referred to as fitting a model to data).
Optimization: A method of identifying the parameter values that maximize model-data correspondence by using an algorithm to iteratively adjust the parameter values based on the value of the cost function produced at each step.
Parameter: A variable that is required by the model to generate predictions. Parameters can be fixed to predefined values or estimated from data.
Notes
1. The MGPM addresses both approach and avoidance goals. For simplicity, however, we limit our discussion to approach goal pursuit. The model of avoidance goal pursuit has been presented by Ballard, Yeo, Loft et al. (2016).
2. Note that this operation is not required for model fitting, but it enhances the readability of the code. Throughout this tutorial, we have prioritized code readability over efficiency in order to facilitate understanding.
3. In cases where separate parameters are estimated for each individual, the reader is encouraged to report on the distribution of the estimated parameters (e.g., the mean and SD of each parameter). In such cases, it can also be informative to compare the models at the level of the participant, for example, by reporting the number of participants whose data were best explained by each model.
References
Ainslie, G., & Haslam, N. (1992). Hyperbolic discounting. In G. Loewenstein & J. Elster (Eds.), Choice over time (pp. 57–92). New York: Russell Sage Foundation.
Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. N. Petrov & F. Caski (Eds.), Proceedings of the second international symposium on information theory (pp. 267–281). Budapest: Akademiai Kiado. doi:10.1007/978-1-4612-1694-0_15
Ballard, T., Farrell, S., & Neal, A. (2018). Quantifying the psychological value of goal achievement. Psychonomic Bulletin and Review, 25, 1184–1192. doi:10.3758/s13423-017-1329-1
Ballard, T., Palada, H., Griffin, M., & Neal, A. (2019). An integrated approach to testing dynamic theory: Using computational models to connect theory, model, and data. Organizational Research Methods, 1–34. doi:10.1177/1094428119881209
Ballard, T., Sewell, D. K., Cosgrove, D., & Neal, A. (2019). Information processing under reward versus under punishment. Psychological Science, 30, 757–764. doi:10.1177/0956797619835462
Ballard, T., Vancouver, J., & Neal, A. (2018). On the pursuit of multiple goals with different deadlines. Journal of Applied Psychology. doi:10.1037/apl0000304
Ballard, T., Yeo, G., Loft, S., Vancouver, J. B., & Neal, A. (2016). An integrative formal model of motivation and decision making: The MGPM. Journal of Applied Psychology, 101, 1240–1265. doi:10.1037/apl0000121
Ballard, T., Yeo, G., Neal, A., & Farrell, S. (2016). Departures from optimality when pursuing multiple approach or avoidance goals. Journal of Applied Psychology, 101, 1056–1066. doi:10.1037/apl0000082
Evans, N. J. (2019). A method, framework, and tutorial for efficiently simulating models of decision-making. Behavior Research Methods, 51, 2390–2404. doi:10.3758/s13428-019-01219-z
Farrell, S., & Lewandowsky, S. (2018). Computational modeling of cognition and behavior. Cambridge: Cambridge University Press.
Gee, P., Neal, A., & Vancouver, J. B. (2018). A formal model of goal revision in approach and avoidance contexts. Organizational Behavior and Human Decision Processes, 146, 51–61. doi:10.1016/j.obhdp.2018.03.002
Grand, J. A. (2017). An examination of stereotype threat effects on knowledge acquisition in an exploratory learning paradigm. Journal of Applied Psychology, 102, 115–150.
Hannah, S. D., & Neal, A. (2014). On-the-fly scheduling as a manifestation of partial-order planning and dynamic task values. Human Factors, 56, 1093–1112. doi:10.1177/0018720814525629
Heathcote, A., Brown, S. D., & Wagenmakers, E.-J. (2015). An introduction to good practices in cognitive modeling. In B. U. Forstmann & E. J. Wagenmakers (Eds.), An introduction to model-based cognitive neuroscience (pp. 25–48). New York: Springer.
Holmes, W. R. (2015). A practical guide to the Probability Density Approximation (PDA) with improved implementation and error characterization. Journal of Mathematical Psychology, 68–69, 13–24. doi:10.1016/j.jmp.2015.08.006
Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74, 657–690. doi:10.1037/0021-9010.74.4.657
Kennedy, D. M., & McComb, S. A. (2014). When teams shift among processes: Insights from simulation and optimization. Journal of Applied Psychology, 99, 784–815. doi:10.1037/a0037339
Kruschke, J. K., Aguinis, H., & Joo, H. (2012). The time has come: Bayesian methods for data analysis in the organizational sciences. Organizational Research Methods, 15, 722–752. doi:10.1177/1094428112457829
Lee, M. D., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. New York: Cambridge University Press.
Mäs, M., Flache, A., Takács, K., & Jehn, K. A. (2013). In the short term we divide, in the long term we unite: Demographic crisscrossing and the effects of faultlines on subgroup polarization. Organization Science, 24(3), 716–736. doi:10.1287/orsc.1120.0767
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834.
Myung, I. J. (2003). Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology, 47(1), 90–100. doi:10.1016/S0022-2496(02)00028-7
Myung, I. J., & Pitt, M. A. (1997). Applying Occam's razor in modeling cognition: A Bayesian approach. Psychonomic Bulletin & Review, 4, 79–95.
Neal, A., Ballard, T., & Vancouver, J. B. (2017). Dynamic self-regulation and multiple-goal pursuit. Annual Review of Organizational Psychology and Organizational Behavior, 4, 410–423. doi:10.1146/annurev-orgpsych-032516-113156
Neal, A., Gee, P., Ballard, T., Vancouver, J. B., & Yeo, G. (2018). The dynamics of affect during approach and avoidance goal pursuit. Organizational Behavior and Human Decision Processes, 146, 51–61. doi:10.1016/j.obhdp.2018.03.002
Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 1596–1618. doi:10.3758/s13423-019-01645-2
Palada, H., Neal, A., Tay, R., & Heathcote, A. (2018). Understanding the causes of adapting, and failing to adapt, to time pressure in a complex multistimulus environment. Journal of Experimental Psychology: Applied, 24, 380–399. doi:10.1037/xap0000176
R Core Team. (2018). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
Schmidt, A. M., & DeShon, R. P. (2007). What to do? The effects of discrepancies, incentives, and time on dynamic goal prioritization. Journal of Applied Psychology, 92, 928–941. doi:10.1037/0021-9010.92.4.928
Schmidt, A. M., & Dolis, C. M. (2009). Something's got to give: The effects of dual-goal difficulty, goal progress, and expectancies on resource allocation. Journal of Applied Psychology, 94, 678–691. doi:10.1037/a0014945
Schmidt, A. M., Dolis, C. M., & Tolli, A. P. (2009). A matter of time: Individual differences, contextual dynamics, and goal progress effects on multiple-goal self-regulation. Journal of Applied Psychology, 94, 692–709. doi:10.1037/a0015012
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.
Steel, P., & König, C. J. (2006). Integrating theories of motivation. Academy of Management Review, 31, 889–913.
Tarakci, M., Greer, L. L., & Groenen, P. J. F. (2016). When does power disparity help or hurt group performance? Journal of Applied Psychology, 101, 415–429. doi:10.1037/apl0000056
Vancouver, J. B., Tamanini, K. B., & Yoder, R. J. (2010). Using dynamic computational models to reconnect theory and research: Socialization by the proactive newcomer as example. Journal of Management, 36, 764–793. doi:10.1177/0149206308321550
Vancouver, J. B., Wang, M., & Li, X. (2018). Translating informal theories into formal theories: The case of the dynamic computational model of the integrated model of work motivation. Organizational Research Methods, 1–37. doi:10.1177/1094428118780308
Vancouver, J. B., & Weinhardt, J. M. (2012). Modeling the mind and the milieu: Computational modeling for micro-level organizational researchers. Organizational Research Methods, 15, 602–623. doi:10.1177/1094428112449655
Vancouver, J. B., Weinhardt, J. M., & Schmidt, A. M. (2010). A formal, computational theory of multiple-goal pursuit: Integrating goal-choice and goal-striving processes. Journal of Applied Psychology, 95, 985–1008. doi:10.1037/a0020628
Vancouver, J. B., Weinhardt, J. M., & Vigo, R. (2014). Change one can believe in: Adding learning to computational models of self-regulation. Organizational Behavior and Human Decision Processes, 124, 56–74. doi:10.1016/j.obhdp.2013.12.002
Walsh, M. M., Gunzelmann, G., & Van Dongen, H. P. A. (2017). Computational cognitive modeling of the temporal dynamics of fatigue from sleep loss. Psychonomic Bulletin and Review, 24, 1784–1807. doi:10.3758/s13423-017-1243-6
Zhou, L., Wang, M., & Vancouver, J. B. (2018). A formal model of leadership goal striving: Development of core process mechanisms and extensions to action team context. Journal of Applied Psychology. doi:10.1037/apl0000370
Zhou, L., Wang, M., & Zhang, Z. (2019). Intensive longitudinal data analyses with dynamic structural equation modeling. Organizational Research Methods, 1–32. doi:10.1177/1094428119833164
11
HOW TO PUBLISH AND REVIEW A COMPUTATIONAL MODEL

Andrew Neal, Timothy Ballard, and Hector Palada
Computational modeling has revolutionized cognitive and behavioral science, enabling researchers to gain unique insights into the complex processes that are responsible for generating observed behavior (Farrell & Lewandowsky, 2018; Wilson & Collins, 2019). A number of commentators have noted that computational modeling has the potential to do the same within the organizational sciences (e.g., Salas, Kozlowski, & Chen, 2017). However, because the practice of computational modeling is still in its infancy in our field, authors and reviewers often lack clear guidance about the different ways in which computational modeling papers can be written and the criteria by which they should be assessed. The objective of this chapter is to address this gap.

The chapter is organized as follows. We first provide an overview of the practice of computational modeling. In this section, we identify the steps involved in carrying out a computational modeling project and describe the different ways in which the results of these projects are published in the top journals within our field. We then discuss issues that authors and reviewers need to consider when publishing and reviewing these types of papers. A key focus of our discussion is on the similarities and differences between a traditional paper and a computational modeling paper, so that authors and reviewers can think about how to adjust their strategy and criteria to take advantage of the benefits that computational modeling provides for our field. A summary of our recommendations is provided in Table 11.1.
TABLE 11.1 Summary of Recommendations

Choosing the topic
- Significance: Tackle a large, complex, and scientifically interesting problem.
- Innovation: Move the conversation forwards by the progressive development, revision, extension, and replacement of models.
- Benefit: Assess whether the model has the potential to generate future benefit.

Setting the hook
- Who cares? Convince the reader of the need to understand the underlying mechanisms.
- What are the limits of current understanding? Identify phenomena that cannot yet be explained, or problems with the explanation provided by existing models.
- What will we learn? Explain how the paper will improve current understanding.

Presenting the theory & model
- Presenting the theory: Explain the processes thought to be responsible for observable phenomena.
- Presenting the model: Explain how the model works (i.e., explain the variables and parameters, and what they mean psychologically); consider running a simulation study to derive testable predictions.

Testing the model
- Designing the study: Assess whether the model can be adequately tested using the proposed design (e.g., by running a simulation study).
- Reporting the results: Assess whether the predictions of the model are supported; if the model is fitted to the data, evaluate the extent to which the model captures the observed trends in the data.

Discussing the findings
- Summarizing the findings & exploring the significance of the work: Explain how the study has answered the original question, and how it has opened up new questions for future exploration.
The Practice of Computational Modeling

The Life Cycle of a Computational Model
FIGURE 11.1 The computational modeling cycle.

Figure 11.1 illustrates the life cycle of a computational model. Computational modeling projects typically start with either theory or data. A theory describes the phenomenon of interest and identifies the mechanisms that explain how that phenomenon emerges, or how it evolves over time.
The researcher may develop a new theory from scratch or build off an existing theory. In some cases, the researcher may start with data rather than theory. For example, the researcher may identify a set of empirical phenomena that have not yet been explained by existing theory, or a set of related phenomena that are explained by separate theories. Alternatively, the researcher may run a study in order to identify the empirical phenomena that need to be explained. Once they have identified the empirical phenomena, they then develop a theory to explain those phenomena.

Once the researcher has a theory, or set of theories, they specify that theory in the form of a computational model. A computational model consists of a set of equations that describe the process by which a system (e.g., a person, group, or organization) produces observable output (e.g., behavior or performance). These equations are implemented as computer code. In many cases, the equations are dynamic, describing how the variables within the model change over time. The model will also often include internal variables that are not directly
observable but which play an explanatory role in the theory. For example, Kanfer and Ackerman's (1989) theory of ability and motivation explains observed changes in performance using concepts such as skill and effort. These are internal cognitive states that cannot be measured independently of the outcome (behavior or performance) but are necessary in order to explain the changes that are observed. Many theories within the organizational sciences include variables of this type.
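To make the idea concrete, the following is a minimal sketch of what such a model can look like in code. It is our own toy example, not any published model: a discrepancy-reducing goal-striving process written in R, in which skill is an internal variable in the spirit of Kanfer and Ackerman (1989). All parameter values are illustrative assumptions.

```r
# A minimal illustrative model (not any published model): performance is
# built up by effort, and effort is allocated in proportion to the
# perceived goal-performance discrepancy.
simulate_goal_striving <- function(goal = 100,   # exogenous input: goal level
                                   k = 0.3,      # free parameter: gain on the discrepancy
                                   skill = 0.8,  # internal variable: efficiency of effort
                                   t_max = 20) { # number of time steps
  performance <- numeric(t_max)
  effort <- numeric(t_max)
  perf <- 0
  for (t in seq_len(t_max)) {
    discrepancy <- goal - perf        # internal state: the gap still to be closed
    effort[t] <- k * discrepancy      # effort rises with the gap
    perf <- perf + skill * effort[t]  # performance accumulates as effort pays off
    performance[t] <- perf
  }
  data.frame(time = seq_len(t_max), effort = effort, performance = performance)
}

# Running the model under different parameter settings yields testable
# predictions about the shape of the performance trajectory
head(simulate_goal_striving(k = 0.3), 3)
head(simulate_goal_striving(k = 0.1), 3)
```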
Once the researcher has a working model, they can run simulations to evaluate the behavior of the model. This is done by running the model under a variety of conditions, using a range of parameter settings. This allows the researcher to generate testable predictions by identifying the way that the model responds to variables of interest and examining the patterns that emerge under different parameter settings. If the researcher has competing models, they can identify the conditions under which the models make qualitatively different predictions.

The model or models are tested by comparing them to data. If there is an existing set of empirical phenomena within the literature, the researcher may examine whether the model or models can reproduce those phenomena. In this case, the researcher would focus on qualitative fit, assessing whether the model can reproduce the pattern of results that has previously been observed. Alternatively, the researcher may collect new data and examine the correspondence between the model and data. If the researcher has a set of models that make qualitatively different predictions, then the researcher can test those predictions to see which model is supported. In some cases, the researcher may go one step further and fit the model(s) to the data. Model fitting allows the researcher to conduct quantitative model comparisons and estimate the parameters of the model from the data (Wilson & Collins, 2019). Quantitative model comparison is used to assess which model, out of a set of alternatives, provides the best account for the data. Quantitative model comparison can be useful when models make similar qualitative predictions, or where there are large numbers of competing models that need to be assessed. Parameter estimation can be used to draw inferences about the underlying psychological processes, or the role of individual differences. For example, parameter estimation can be used to test hypotheses regarding the effects of individual and environmental variables on the parameters of the model. This approach provides a test of the construct validity of a model (Arnold, Bröder, & Bayen, 2015).

Once the researcher has tested the model(s), they close the loop by revisiting the theory and model(s). If the model fits the data, then this provides support for the theory. The level of support that is provided depends on the breadth of empirical phenomena that the model can account for, and the extent to which models derived from competing theories are able to account for the same phenomena. A model receives "credit" to the extent that it can account for findings that cannot be explained by other models or can explain the findings in a more parsimonious manner (Farrell & Lewandowsky, 2018). In many cases, the model will need to be revised to account for unexpected findings. This can be done by adding new mechanisms to the model, or revising existing mechanisms within the model, and showing that the changes are necessary to account for the findings. This is part of the natural development of a model. Over time, the explanatory power of a theory and/or model will grow as it is extended to account for a broader range of empirical phenomena and survives tests against competing theories and models.

While confirmation plays an important role in the ongoing development of a theory or model, falsification is equally important. In many cases, there will be clear evidence that a particular model cannot account for the data and should be discarded. If the researcher tries a series of models that are derived from the same theory, and none of those models can account for the data, then this casts doubt on the underlying theory. Ultimately, the fate of any theory is to be discarded, as it is superseded by a new generation of theories that are better able to account for empirical phenomena. One of the reasons why computational modeling is so useful is that it allows models and theories to be more easily vetted, which is essential if the field is to make progress (Vancouver, Wang, & Li, 2018).

Computational Modeling in the Organizational Sciences
In order to identify the different ways in which computational models are used within the organizational sciences, and the ways that they are published, we conducted a PsycINFO search for papers published in relevant journals with "computational model" as a search term.1 Relevant journals included the Journal of Applied Psychology (JAP), Organizational Behavior and Human Decision Processes (OBHDP), Academy of Management Journal (AMJ), Academy of Management Review (AMR), Personnel Psychology (PP), Journal of Management (JoM), Organization Science (OS), and Administrative Science Quarterly (ASQ). Twenty-five papers were identified (see Table 11.2). Two of these papers were published prior to 2000, five were published between 2000 and 2009, and 18 have been published since 2010. The most common outlets were JAP (11 papers) and OS (6 papers).

As can be seen in Table 11.2, computational models have been applied to a broad range of phenomena. Examples include organizational learning and adaptation (8 papers), self-regulation (7 papers), and team processes (4 papers). The one thing that all of the phenomena have in common is that they are dynamic. In each case, the authors have used computational modeling as a means of understanding dynamic processes, such as the process by which organizations adapt to changes in their environment (Levinthal & Marino, 2015), people pursue competing goals (e.g., Vancouver, Weinhardt, & Schmidt, 2010), or collective knowledge emerges in teams (Grand, Braun, Kuljanin, Kozlowski, & Chao, 2016).
TABLE 11.2 Sample of Computational Modeling Papers Published in Selected IO and OB Journals

Decision-making
- Gibson, Fichman, and Plaut (1997), OBHDP. Objective: Develop and test a model that explains how decision makers learn from outcome feedback in dynamic tasks. Approach: Simulation study to generate predictions; experimental study to test predictions.
- Gibson (2000), OBHDP. Objective: Develop and test a model that explains how decision makers learn from delayed feedback in dynamic tasks. Approach: Simulation study to generate predictions; experimental study to test predictions.

Job performance
- Vancouver, Li, Weinhardt, Steel, and Purl (2016), PP. Objective: Develop a model of job performance incorporating learning and feedback, and use the model to examine possible sources of skews in distributions of job performance. Approach: Simulation study to examine system behavior; comparison of simulated data to existing empirical effects.

Leadership
- Zhou, Wang, and Vancouver (2019), JAP. Objective: Develop and test a model that explains leader goal striving in action teams under different environmental conditions. Approach: Simulation study to generate predictions; experimental study to test predictions; model fitting and model comparisons.

Organizational learning & innovation
- Bruderer and Singh (1996), AMJ. Objective: Develop a model of organizational evolution, and use the model to examine whether learning can accelerate the discovery of new organizational forms. Approach: Simulation study to examine system behavior; comparison of simulated data to existing empirical effects.
- Ethiraj and Levinthal (2004), ASQ. Objective: Develop a model of organizational adaptation, and use the model to examine the effectiveness of design efforts in different environments. Approach: Simulation study to examine system behavior.
- Gibbons (2004), AMJ. Objective: Develop a model to simulate innovation diffusion through interregional network structures in a population of organizations. Approach: Simulation study to examine system behavior.
- Chang (2007), OS. Objective: Develop a model to simulate how ideas are diffused through social networks, and examine how network structures emerge under different environmental conditions. Approach: Simulation study to examine system behavior.
- Kane and Alavi (2007), OS. Objective: Extend an existing model of organizational learning (March, 1991) to account for the effects of information technology on exploration and exploitation. Approach: Qualitative case studies to extend model; simulation study to examine system behavior; comparison of simulated data to existing model.
- Levine and Prietula (2013), OS. Objective: Develop a model to simulate the performance of open collaboration ventures, as a function of the cooperativeness of agents, diversity of needs, and rivalry of goods. Approach: Simulation study to examine system behavior.
- Levinthal and Marino (2015), OS. Objective: Develop a model of organizational adaptation, and use the model to examine the interplay between learning and selection. Approach: Simulation study to examine system behavior.
- Chen, Croson, Elfenbein, and Posen (2018), OS. Objective: Develop a model of entrepreneurial learning under uncertainty that explains why too many entrepreneurs enter markets and why too many persist for too long. Approach: Simulation study to examine system behavior; comparison of simulated data to existing empirical effects.

Psychological assessment
- Grand (2019), JAP. Objective: Develop a model to explain the process by which people respond to situational judgment test items. Approach: Simulation study to examine system behavior; experimental study to identify empirical effects; comparison of simulated data to existing and new empirical effects.
There is considerable variability in the elements of the computational modeling cycle that individual papers focus on. Some papers focus on theory development. These papers develop a computational model of a theory or phenomenon and run a simulation study to examine the behavior of the model (e.g., Gibbons, 2004). These papers use simulation to explore the logical consequences of a theory. Sometimes, researchers will incorporate a limited form of theory testing within a theory development paper. They do this by evaluating whether the model can account for established empirical effects (e.g., Chen et al., 2018). Other papers focus on both theory development and theory testing. One approach is to use a simulation study to generate predictions and then run an empirical study to test those predictions (e.g., Grand et al., 2016). Another approach is to start with an empirical study to identify the phenomena that need to be explained, develop one or more models to account for those phenomena, and fit those models to the data in order to test competing explanations (e.g., Ballard, Vancouver, & Neal, 2018) or examine the effects of individual and environmental variables on the parameters of the model (e.g., Gee, Neal, & Vancouver, 2018). The key point to note for anyone wanting to publish or review a computational model is that there is no one "best" way to write a computational modeling paper.

When writing the paper, one of the key decisions that needs to be made concerns the scope. In many cases, it will make sense to focus on specific elements of the computational modeling cycle, rather than trying to include all elements of the cycle within a single paper. The risk with trying to include all elements within one paper is that it may overwhelm the reader, particularly if the model is new or complex. Indeed, if the model is complex, then it may make sense to focus on a specific element of the model within a single paper. One of the common mistakes that people make is to present models that are too complex for reviewers to understand (Repenning, 2003). The scope needs to be broad enough for the paper to make a substantive contribution, but no broader. The question as to what constitutes an adequate scope depends on the type of topic that is chosen, and the state of existing knowledge within the field. These issues are discussed next.

Choosing the Topic
Colquitt and George (2011) argue that one of the major factors that determines the publishability of a manuscript is the choice of topic. This is a choice that is made years before the manuscript is written. As Colquitt and George (2011) note, "[T]he seeds for many rejections are planted at the inception of a project, in the form of topics that – no matter how well executed – will not sufficiently appeal to . . . reviewers and readers" (p. 432). This is as true for a computational modeling project as it is for a traditional project. There are three major criteria that need to be considered when choosing a topic: significance, innovation, and benefit.
Significance
A topic is significant if it addresses an important problem. Colquitt and George (2011) argue that a good starting point is to consider whether it addresses a "grand challenge." Addressing a grand challenge involves tackling a large, complex, and scientifically interesting problem. A problem is likely to be scientifically interesting if the answer to that problem is not obvious at the outset of the project. Reasons why the answer may not be obvious are that the underlying mechanisms are unclear, there are competing views regarding those mechanisms, or there are findings that cannot be explained by existing theory. Computational modeling is ideally suited to the pursuit of grand challenges because it is used to develop a shared understanding of the processes that are responsible for complex dynamic phenomena. The complexity of these phenomena means that the answers are not obvious at the outset. Most of the phenomena that we are interested in within the organizational sciences fall into this category.

An example of a grand challenge that has not yet been tackled using computational modeling is work design. One of the most fundamental questions within our field concerns the way that activities, tasks, and responsibilities should be allocated to work roles (Parker, Morgeson, & Johns, 2017). The typical way in which work design is examined in the literature is to look at the relationship between work characteristics (e.g., variety, autonomy, significance) and outcomes (e.g., performance, well-being, turnover), and to identify the mediators that may be involved (e.g., motivation). The problem with this approach is that it provides limited insight into the way that roles should be designed because it is not a process model. A computational model could be developed that explains how the design of work roles influences system effectiveness. The model might include a set of tasks or functions that can be assigned to work roles in different ways. The model may then describe the dependencies among the roles in the form of a network structure (e.g., whether work flows in a linear sequence from one role incumbent to the next, or whether the flow is parallel or reciprocal). The researcher could then simulate the ways in which tasks or information flow through the network under a range of different operational scenarios, as in the toy sketch below. This could be used to derive a set of general principles regarding the situations under which different types of designs will be effective, which would be of broad interest within the field.
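As a deliberately simplified illustration of the kind of simulation this would involve (our own sketch; the flow structures and all numbers are invented for illustration), one could compare how quickly tasks clear a set of three roles when work flows in a linear sequence versus in parallel:

```r
# Toy work-design simulation (illustrative only): compare a linear flow,
# where each task passes through three roles in sequence, with a parallel
# flow, where the three roles work on a task simultaneously.
set.seed(2)
n_tasks <- 500
service_time <- function(n) rexp(n, rate = 1)  # assumed time a role takes per task

# Linear flow: total time is the sum of the three roles' service times
linear_time <- rowSums(cbind(service_time(n_tasks),
                             service_time(n_tasks),
                             service_time(n_tasks)))

# Parallel flow: the task is done when the slowest of the three roles finishes
parallel_time <- pmax(service_time(n_tasks),
                      service_time(n_tasks),
                      service_time(n_tasks))

c(linear = mean(linear_time), parallel = mean(parallel_time))
```

A full model would obviously need richer task structures and dependencies, but even a sketch of this kind makes it possible to ask precise questions about how alternative role designs trade off against one another.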
Innovation

A topic is innovative and novel "if a study addressing it would change the conversation that is already taking place in a given literature" (Colquitt & George, 2011, pp. 432–433). Broadly speaking, there are two ways to do this (Barney,
2018). One is by building on and extending existing theory in new and creative ways. This is what Kuhn (1962) referred to as "normal science." The other is by replacing existing theory with a new theory that overturns existing thinking on a topic. This is what Kuhn (1962) referred to as "revolutionary science." The cycle shown in Figure 11.1 incorporates both approaches. According to this view, the way to move the conversation forward within a field is by the progressive development, testing, and refinement of computational models (normal science) and their subsequent replacement by a new generation of computational models that provide a better explanation for empirical phenomena than existing models (revolutionary science).

Vancouver et al. (2018) provided a good example of the ways in which a theory development paper can move the conversation forwards. Vancouver et al. (2018) developed a computational model of a prominent theory of work motivation (Locke, 1997). The process of building and simulating the model enabled Vancouver et al. (2018) to identify aspects of the theory that were specified imprecisely, explore different options for how the process might work, and eliminate options that produce behavior that is inconsistent with existing data. They were also able to identify components of the model that were redundant and areas where additional mechanisms needed to be added. These problems are not confined to Locke's (1997) theory of work motivation. Many theories in the organizational sciences lack the precision that is needed to explain how phenomena evolve over time or emerge from interactions among a set of component processes. Computational modeling can be used as a tool for theory building, allowing the researcher to generate new insights that go beyond what can be achieved using intuition alone.

The self-regulation papers in Table 11.2 provide examples of the different ways in which theory testing papers can move the conversation forwards. The first paper in the series (Vancouver et al., 2010) reports the development of a computational model of multiple-goal pursuit (the MGPM). The model drew on control theory accounts of motivation and expected utility accounts of decision making. In the first paper, they demonstrated that the MGPM was able to account for a range of existing empirical phenomena, such as the tendency of people to shift resources toward easier goals as deadlines approach, and the effects of incentives on prioritization (Schmidt & DeShon, 2007). Subsequent papers extended the model to account for a broader range of empirical phenomena. They did this by integrating mechanisms from other theories. Vancouver et al. (2014) extended the MGPM by adding a learning mechanism to explain how people learn about uncertainty in the environment and the efficacy of their actions. Ballard et al. (Ballard, Yeo, Loft, Vancouver, & Neal, 2016) integrated the MGPM with decision field theory (Busemeyer & Townsend, 1993) to account for the effects of uncertainty regarding the consequences of actions, while Ballard et al. (2018) integrated the MGPM with theories of
intertemporal choice to account for the effects of deadlines on prioritization. Each paper has moved the conversation forwards by increasing the explanatory scope of the model so that it can account for a broader range of empirical phenomena.

Benefit
Colquitt and George (2011) argue that a topic is more likely to be publishable at a top-tier journal if it has practical benefit. A topic has practical benefit if a study addressing it has the potential to inform practice or policy or lead to the development of new products or services. There are many examples of potential applications of computational modeling. For example, a computational model of stress and well-being could have practical benefits by helping inform how and when to intervene in order to best enable recovery. A computational model of team processes could be used to evaluate the costs and benefits of different interventions designed to improve team effectiveness. The advantage of a computational model, over and above a verbal theory, is that it can be simulated to generate quantitative predictions regarding potentially complex phenomena. For example, Grand (2017) used a computational model to understand how the negative effects of stereotype threat unfold and accumulate over time. They demonstrated how a relatively small effect (in statistical terms) at the individual level has large emergent effects at the organizational level.

One caveat that should be noted is that, while we agree that it is useful to consider whether a topic has the potential for practical benefit, this does not mean that the project has to produce a set of clear recommendations for practice. The computational modeling papers shown in Table 11.2 are inspired by practical problems but do not set out to solve those problems; the focus is on enhancing our understanding of complex phenomena that are practically important. This is what is meant by the term "use-inspired basic research" (Stokes, 1997). Use-inspired basic research is valuable because it has the potential to generate a wide range of different applications or solutions in the future, many of which will not have been foreseen at the time the research is conceived. Thus, the question that the reviewer needs to ask is not whether the project will have immediate practical benefit (e.g., in the form of recommendations for managers) but rather whether it has the potential to generate a range of benefits in the future.

Setting the Hook
When it comes to writing the paper, the most important part is the opening section, before the reader gets to the first major heading (i.e., the first 2–3 paragraphs). The key objective of the opening section is to capture the interest of the
reader, or in other words, to "set the hook." Grant and Pollock (2011) argue that the opening section needs to address three questions:

"(1) Who cares? What is the topic or research question, and why is it interesting and important in theory and practice? (2) What do we know, what don't we know, and so what? What key theoretical perspectives and empirical findings have already informed the topic or question? What major, unaddressed puzzle, controversy, or paradox does this study address, and why does it need to be addressed? (3) What will we learn? How does your study fundamentally change, challenge, or advance scholars' understanding?" (p. 873)

Who Cares?
The first challenge in writing the opening is to present the topic in a way that enables the reader to appreciate the significance of the topic and sparks their interest in reading the paper. Essentially, this is a matter of describing the grand challenge that the paper addresses. The question, then, is how to do this. Grant and Pollock (2011) identified two archetypal hooks for opening a traditional paper. The first involves using a provocative quote to introduce the topic, while the second involves highlighting current trends. Current trends may include changes in the workplace or society or issues that are the focus of attention in the academic literature. While these approaches can help highlight the phenomenon, we think that the opening needs to do more than this. Computational modeling papers are typically more complex than traditional theoretical or empirical papers because they seek to provide a detailed and precise explanation of the underlying process that is responsible for observable phenomena. In our experience, the most difficult part of writing a computational modeling paper is to convince the reviewers of the need to understand that process in greater detail than has been achieved previously. This needs to be done in a way that is short, sharp, and compelling.

In our opinion, the key to a good opening for a computational modeling paper is to describe the phenomenon in a way that makes the complexity of the underlying mechanisms apparent, so that it sparks the reader's interest. For example, Ballard et al. (2016) opened their paper by describing the phenomenon of multiple-goal pursuit, describing the types of decisions that people make on a daily basis while striving for goals. A key feature of the problem that they pointed to was that people need to make choices among alternative courses of action, when each action has a range of potential consequences that are difficult to foresee ahead of time. Grand et al. (2016) took a similar approach,
pointing to the complexity of the process by which team knowledge develops over time. In both cases, they painted a picture of a complex, dynamic phenomenon, involving a range of different underlying mechanisms, which is of broad significance for anyone who wants to understand human behavior in an organizational setting.

What Do We Know, and What Are the Limits of Current Understanding?
The second challenge in writing the opening is to provide a concise summary of the state of existing knowledge regarding the phenomenon and identify the limits of current understanding. Grant and Pollock (2011) noted that authors often find it difficult to explain why knowledge about the topic needs to be developed further. They noted that authors often fall into the trap of either: a) making the contribution seem incremental or b) being overly critical, arguing that there is nothing to be learned from existing research because it is fundamentally flawed. We suspect that this is due, at least in part, to the style of theorizing that dominates our field, in which the focus is on statistical relationships among observed variables. It can be difficult to explain why we need a set of new variables, or look at a new set of relationships, when there is already an established set of variables and relationships in the field. Adding new variables to an existing nomological network can appear incremental, while introducing a completely new set of variables makes it look like one is ignoring existing research.

By contrast, it is often relatively easy to explain why knowledge about the topic needs to be developed further when writing a computational modeling paper, provided the author has effectively motivated the search for understanding. If the reviewer has already been convinced that we need to understand the mechanisms underlying a particular phenomenon, then it is simply a matter of describing the state of existing knowledge regarding those mechanisms and making it clear what the limitations of current understanding are. For example, in their second paragraph, Ballard et al. (2016) noted that Vancouver et al. (2010) had taken the first steps toward the development of a formal theory of multiple-goal pursuit, by developing the MGPM. Ballard et al. (2016) described the empirical phenomena that the existing model was able to account for, and identified a number of limitations of that model, pointing to the phenomena that it cannot account for, which were highlighted in the opening paragraph. They followed up with another paper (Ballard et al., 2018) that identified a new set of phenomena that the revised model could not account for. In each of these cases, the objective was not to convince the reader that previous research is fundamentally wrong but rather to make the point that existing models do not yet provide a complete understanding of the phenomena being considered.
On the other hand, if there is already a verbal explanation of the process, and a large body of empirical research, then it can be more difficult to convince reviewers that existing understanding is limited in some fundamental way. Simply pointing to gaps in empirical research is not enough, because as reviewers frequently note, not all gaps are worth filling. A good way to deal with this problem is to point to the difference between a verbal theory and a computational model, and explain why we need a computational model of the phenomenon. For example, this may involve making the point that verbal models are ambiguous, difficult to falsify, or incomplete (e.g., they may gloss over important details). Grand et al. (2016) did this by making the point that the "black box" of team cognition has not yet been unpacked. This is because existing research on team cognition has focused on examining statistical relationships among observed constructs, rather than examining the underlying process by which team knowledge emerges over time. They noted:

[T]he approaches to theory building (e.g., "box-and-arrow" models of construct-to-construct relationships) and theory testing (e.g., cross-sectional, self-report-based research) most commonly used in the organizational sciences do not permit the precision and transparency needed to specify how and why team processes shape important team outcomes. (p. 1354)

This is clearly an important gap that is worth filling.

What Will We Learn?
The third challenge in writing the opening is to explain how the paper will move the field forward, by improving our understanding of the phenomenon. Grant and Pollock (2011) noted that the typical ways that studies move the field forward are by challenging widely held assumptions (consensus shifting) or by resolving conflict (consensus creation). The same is true with a computational modeling paper. Regardless of whether one is developing a new model (e.g., Grand et al., 2016), or extending an existing model (e.g., Ballard et al., 2016), it is necessary to explain how the modeling will change current thinking. This is typically done by describing how the paper addresses the limitations noted earlier. For example, Grand et al. (2016) finished their opening section by explaining how their paper unpacks the "black box" of team cognition.

Presenting the Theory and Model
The presentation of the theory and model obviously plays a crucial role in the way that reviewers respond to a manuscript. The theory section needs to engage
with previous research on the topic and provide a coherent argument to justify the proposed model. The way that this is done depends on the type of model that is developed. Traditionally, the most common type of model within the organizational sciences has been the "box-and-arrow" model of construct-to-construct relationships. When developing a model of this type, it is necessary to explain why the constructs are linked in the proposed manner (Sparrowe & Mayer, 2011). The argument will often draw on a range of different theoretical perspectives and empirical findings. In some cases, authors may make reference to internal states and/or processes to explain why observed variables are linked in the proposed manner, but will often do so in very general terms, because reviewers may see these arguments as being speculative (Sparrowe & Mayer, 2011).

In a computational modeling paper, by contrast, the purpose of the theory section is to explain the underlying mechanisms responsible for producing observable phenomena. As a result, the focus is on processes, rather than constructs. This means that the theory needs to be described in a more precise manner than in a traditional paper, and it needs to make the assumptions regarding the underlying processes explicit. For example, Gee et al. (2018) developed a computational model that explained the process by which people adjust the difficulty of self-set goals. In the past, researchers within the field have typically used expectancy-value theory as a heuristic to identify the variables that predict goal choice but have rarely made the assumptions underlying the theory explicit. Gee et al. (2018) spelled out the assumptions upon which the theory is based and made the point that these assumptions are questionable. They then developed an alternate account, which was based on the concept of anchoring and adjustment (Tversky & Kahneman, 1981). It is rare to see this level of detail in a traditional paper.

There are a number of different ways to present a computational model within a paper. If the researcher is using an existing model, the researcher will typically describe the model verbally and explain how it is being applied to the phenomenon, without presenting the equations. For example, Tarakci, Greer, and Groenen (2016) used an agent-based simulation method called particle swarm optimization (PSO) to examine the way that power disparity affects group performance. They explained how the PSO algorithm works, how they used it to account for interactions among group members, and what the parameters mean in terms of the underlying theory. Similarly, Palada, Neal, Tay, and Heathcote (2018) used an existing model of decision making called the Linear Ballistic Accumulator (LBA) to examine the way that people adapt to changes in task demands. They explained how the LBA works and what the parameters of the model mean in terms of the underlying theory. They then described the different ways in which the experimental manipulations could affect the parameters of the model. They did not need to describe the model mathematically, because it was an existing model.
On the other hand, if the author is developing a new model, or adapting or extending an existing model, then the model will need to be presented formally, in the form of a set of equations or computer code. This may be done in the introduction, at a later point in the paper, in the appendix, or in supplementary online material. The advantage of formally presenting the model in the body of the paper itself is that it makes it easy for the reader to see how the theory has been translated into the model. It makes sense to do it this way if the development of the model is an important part of the contribution of the paper, and it can be explained in a way that is understandable to the readership of the journal. The disadvantage of formally presenting the model in the introduction is that it can be a barrier for readers not used to seeing theoretical arguments expressed mathematically. If the model is complex, then it may make sense to present some or all of the math or code in the appendix or in supplementary online material.

Regardless of where the model is presented, the way that it is done is the same: the researcher provides a detailed verbal description of the process and expresses that process as a series of equations. When doing this, it is important not to assume that the equations are self-explanatory. Some readers will find it difficult to read and understand equations. This means that the researcher needs to explain the equation in words. This not only involves explaining what the variables and parameters are but also how the parameters affect the behavior of the system, and what it means psychologically. As an anonymous reviewer of one of our papers put it: "[A]lthough the computational model is transparent and unambiguous with regard to the mathematical definitions and relationships among the variables, it can still be ambiguous what psychological construct those mathematical statements are intended to represent." It took us several revisions to get to a point where the psychological meaning was explained sufficiently clearly for the reviewer to accept. We were grateful to the reviewer for persisting because the paper was stronger as a result.

A final point to consider when presenting the theory and model is whether or not to include a simulation study in the paper. In a simulation study, the researcher runs the model under different conditions (e.g., by sampling the parameters from specified distributions). This is used to generate a series of predictions, which may then be tested in an empirical study (Ballard et al., 2016; Tarakci et al., 2016). The simulation can be included in the introduction when presenting the model, or presented as a stand-alone study, before the empirical studies. Information that needs to be provided includes the platform or environment that the model is implemented in and the values of the inputs to the model. Inputs to the model include the exogenous variables and fixed parameters. If it is a Monte-Carlo simulation, then the authors need to describe the distributions that the parameters are sampled from, and the number of times that the model is run. The output of the simulation is typically presented in the form of a series of figures showing the predicted pattern of results.
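The following sketch shows the shape of such a Monte-Carlo simulation, reusing the hypothetical simulate_goal_striving() model introduced earlier; the sampling distributions and number of runs are illustrative assumptions, not recommendations.

```r
# A hypothetical Monte-Carlo simulation study: parameters are sampled
# from assumed distributions, the model is run many times, and the
# predicted pattern is summarized across runs.
set.seed(1)                      # fixed seed so the simulation is reproducible
n_runs <- 1000                   # number of times the model is run

sims <- do.call(rbind, lapply(seq_len(n_runs), function(i) {
  out <- simulate_goal_striving(k = rbeta(1, 2, 5),       # gain sampled per run (assumed)
                                skill = runif(1, 0.5, 1)) # skill sampled per run (assumed)
  out$run <- i
  out
}))

# Predicted pattern across runs: mean trajectory with a 95% band,
# the kind of output usually reported as a figure
pred_mean  <- tapply(sims$performance, sims$time, mean)
pred_lower <- tapply(sims$performance, sims$time, quantile, probs = 0.025)
pred_upper <- tapply(sims$performance, sims$time, quantile, probs = 0.975)
```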
While it is often useful to include a simulation study, there are circumstances where a simulation study may not be worth including in the paper. For example, Palada et al. (2018) used the LBA as a measurement model to quantify the effects of information load and time pressure on different aspects of the decision process (e.g., the rate of information processing, decision threshold, bias). A simulation study would have been of limited use because there were hundreds of different ways in which the experimental variables could have influenced the parameters of the model, each of which would have different effects on the predicted pattern of results. In this case, it made sense to fit the model to the data, and to interpret the pattern of parameter estimates from the model, rather than trying to generate a priori predictions.

Testing the Model
If the manuscript incorporates an empirical study, then the design of the study, and the presentation of the method and results, will also play a major role in shaping how reviewers respond to a manuscript. If the design is flawed, or the methods and results are incomplete, unclear, or raise questions regarding the credibility of the research, then this can tip the balance, resulting in a decision to reject rather than revise (Zhang & Shaw, 2012). This advice holds, regardless of whether one is trying to publish a traditional empirical paper or a computational modeling paper.

Designing the Study
A good place to start when thinking about the design of an empirical study for a computational modeling project is to consider the criteria by which design choices are evaluated. The design of an empirical study typically involves a trade-off among three criteria: fidelity, control, and generalizability (Brinberg & McGrath, 1985). Fidelity is the extent to which the data are representative of the phenomenon the model is designed to explain. Control is the extent to which the design of the study ensures that the conclusions that can be drawn from the data are defensible. A design allows for a high degree of control if it allows the researcher to manipulate or select the variables that have an effect on the phenomenon under study, and to test the assumptions that are embedded within the model. Generalizability is the extent to which the conclusions from the study can be applied to other situations or settings.

The main design choices that need to be considered include the type of design to be used (observational or experimental), as well as the setting, sample, task, and measures. The main advantage of an observational design is that it allows the researcher to track a phenomenon as it evolves over time. If the phenomenon is observed in its natural context (i.e., in a field study), then this will enhance the
fidelity of the study. An experimental study, on the other hand, will provide greater control.

Regardless of whether the study uses an observational or experimental design, one of the key issues that needs to be considered is the number and timing of the observations, and the content of the measures. If the objective is to differentiate between competing models, the researcher will need to ensure that the models' predictions can be differentiated, given the number of observations being made, the time between those observations, and the variables that are being measured. This can be assessed by constructing a simulation study that mirrors the observational or experimental protocol and comparing the predictions from each model (a sketch of such a check is given at the end of this section).

As with a traditional empirical study, the key issue to consider when selecting the sample and task is to ensure that they are appropriate for the question being examined. If the purpose of the paper is to test a theory or model, then the researcher needs to use a sample to which the theory applies and a task that is representative of the phenomenon. As noted by Highhouse and Gillespie (2010), a theory can be tested in any sample to which it applies. This means that in many situations, a student sample will be perfectly appropriate. For example, Grand et al. (2016) tested their model of teamwork using a student sample performing a task that required participants to work collaboratively in order to make effective decisions. The theory applied to the sample, because it is a general theory of teamwork and is not specific to a particular work setting. The task was designed so that it was representative of the phenomenon being studied and provided the control needed to test the predictions of the model. This provides a nice example of a laboratory study that balances the competing demands for fidelity, control, and generalizability.

Though it is important to choose the right variables to measure, it is also important to be aware that there will be many variables that cannot be adequately measured. In many cases, the existing measures used within the field may not directly map onto the processes described by the model. For example, during the review process, a reviewer asked why Gee et al. (2018) did not measure self-efficacy, given that previous research had identified this variable as a predictor of goal choice. The problem is that the construct of self-efficacy does not directly map onto the processes described by the model. Furthermore, many of the processes that researchers are interested in are not measurable. For example, the MGPM includes a range of internal cognitive processes that people may not have introspective access to. One of the major reasons for using computational modeling is to be able to make inferences about processes that are not directly observable, or where self-report is an unreliable indicator.
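Here is a sketch of the design check mentioned above, again reusing the earlier toy model; the declining-gain alternative and all settings are invented for illustration:

```r
# Hypothetical design check: can 5 observations, one every 4 time steps,
# tell a constant-gain model apart from a declining-gain alternative?
obs_times <- seq(4, 20, by = 4)   # planned number and timing of observations

# Candidate A: constant gain (the earlier sketch)
pred_a <- simulate_goal_striving(k = 0.3)$performance[obs_times]

# Candidate B (invented alternative): the gain decays over time
simulate_declining_gain <- function(goal = 100, k0 = 0.5, decay = 0.85,
                                    skill = 0.8, t_max = 20) {
  perf <- 0
  performance <- numeric(t_max)
  k <- k0
  for (t in seq_len(t_max)) {
    perf <- perf + skill * k * (goal - perf)  # same structure, decaying gain
    performance[t] <- perf
    k <- k * decay
  }
  performance
}
pred_b <- simulate_declining_gain()[obs_times]

# If these rows are close at every planned observation point, the design
# cannot differentiate the models and needs to be revised
round(rbind(candidate_a = pred_a, candidate_b = pred_b), 1)
```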
Reporting the Results

There are a number of different ways that the results of a computational modeling project can be written up. The approach that is taken largely depends on
whether the model is fitted to the data or not. If the purpose of the analysis is simply to test the predictions derived from a simulation study, then the results section can be written up in the same way as a conventional empirical paper. For example, Tarakci et al. (2016) ran a simulation study to derive a set of predictions and then tested those predictions using conventional statistical analysis. If this is the case, then the model itself may not appear in the results section.

On the other hand, if the researcher wants to conduct quantitative model comparisons, or to interpret the parameter estimates, then the model(s) will need to be fitted to the data. If this is the case, then the authors need to describe how the fitting is done. Information that needs to be provided includes the platform or environment that the model is run in, the values of the exogenous variables and fixed parameters, the free parameters that are being estimated, the fitting algorithm, and the fit indices that are being used. Detailed guidance for how to conduct model fitting is provided by Ballard, Palada, and Neal (Chapter 10, this volume). One important point that should be noted is that model comparisons will only identify the best model from the set of models that were considered. There is an infinite number of alternative models that could be considered, which makes it important to have a clear scientific rationale for the set of alternatives (Wilson & Collins, 2019). A second point that should be noted is that, when fitting a model to data, it is essential to assess the extent to which the model captures the observed trends in the data. This is referred to as "qualitative fit." A good way to visualize qualitative fit is to superimpose a plot of the output of the model against the empirical data. In doing this, it is important to plot the trends in a way that makes it easy to see whether the model provides a good account of the data. For example, Ballard et al. (2018) did this by plotting participants' choices, which were represented as points with standard error bars, as a function of the experimental manipulations. The model's predictions were superimposed over the empirical data, with a line representing the central tendency, and a ribbon representing the 95% credible interval around the central tendency.
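As a minimal sketch of what this looks like in code (reusing the earlier toy model; the "data" here are synthetic and purely illustrative), the model can be fitted with a general-purpose optimizer and its predictions then superimposed over the observations:

```r
# Fabricated illustrative "data": the toy model plus noise (for demo only)
set.seed(3)
observed <- simulate_goal_striving(k = 0.25, skill = 0.7)
observed$performance <- observed$performance + rnorm(nrow(observed), sd = 3)

# Objective function: sum of squared errors between model and data.
# Skill is fixed at an assumed value because performance depends only on
# the product k * skill, so the two are not separately identifiable here.
sse <- function(k) {
  pred <- simulate_goal_striving(k = k, skill = 0.7)$performance
  sum((observed$performance - pred)^2)
}

fit <- optimize(sse, interval = c(0.01, 1))  # one free parameter: k
fit$minimum                                  # the parameter estimate to report

# Qualitative fit: data plotted as points, model predictions superimposed
best <- simulate_goal_striving(k = fit$minimum, skill = 0.7)
plot(observed$time, observed$performance, pch = 16,
     xlab = "Time", ylab = "Performance")
lines(best$time, best$performance, lwd = 2)
```

In a real application, the fitting algorithm, fit indices, and full set of parameter estimates would all be reported, as described above.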
Discussing the Findings

The structure of a discussion section in a modeling paper is typically the same as that of a traditional empirical paper. The authors will start by providing a recap of the motivation for the study and summarizing the key findings, and then discussing the theoretical and practical implications, together with the limitations of the study. Geletkanycz and Tepper (2012) argue that, in their experience as editors, the major stumbling block for authors is the discussion of theoretical implications. Authors often miss the opportunity to explore the significance of their work and shape the ongoing conversation within the literature. In their view, this can be done by treating the discussion as both an
ending and a beginning. As an ending, the discussion brings closure to the study by answering the original question and explaining how the findings have improved our understanding of the phenomenon. The discussion provides a new beginning by delving into the deeper meaning of the findings, providing a bridge between the findings and the broader literature, and exploring alternative explanations or unexpected findings.

The strategy of treating the discussion as an ending and a beginning works equally well for a computational modeling paper. Authors will typically summarize the empirical findings that the model is able to account for, discuss how the model accounts for those findings, and point to the differences between the current model and the other models within the field. As noted earlier, most computational modeling studies will generate unexpected findings, which may generate new insights or raise new questions that can be explored in future work. In some cases, the discussion may include additional modeling work that explores competing explanations or addresses puzzling aspects of the results. Our advice is to only do this additional modeling if it can be kept relatively simple. The risk of trying to do too much is that the paper becomes unwieldy. No paper can be expected to provide a complete or final answer to a complex problem, so it is reasonable to leave substantive questions for future work.

Conclusion
The use of computational modeling has been growing in I-O psychology and OB over the past decade. With this growth, however, come new challenges for authors and reviewers. We hope that this chapter has provided a useful foundation for those who are new to computational modeling and wishing to publish or review research that makes use of this methodology. It is difficult, of course, to put forth a template for reporting computational modeling research that is universally applicable. The approach that is taken, and the details that are required, will vary from paper to paper. However, we believe that the general principles outlined earlier provide a good starting point for thinking about how this type of research can be presented in a compelling way.

Note

1. The search was conducted for papers including "computational model" in any field. Papers that used statistical models, as opposed to computational models, were excluded.
References

Arnold, N., Bröder, A., & Bayen, U. (2015). Empirical validation of the diffusion model for recognition memory and a comparison of parameter-estimation methods. Psychological Research, 79(5), 882–898. doi:10.1007/s00426-014-0608-y
Ballard, T., Vancouver, J. B., & Neal, A. (2018). On the pursuit of multiple goals with different deadlines. Journal of Applied Psychology, 103(11), 1242–1264. doi:10.1037/apl0000304
Ballard, T., Yeo, G., Loft, S., Vancouver, J. B., & Neal, A. (2016). An integrative formal model of motivation and decision making: The MGPM*. Journal of Applied Psychology, 101(9), 1240–1265. doi:10.1037/apl0000121
Barney, J. (2018). Editor's comments: Positioning a theory paper for publication. Academy of Management Review, 43(3), 345–348. doi:10.5465/amr.2018.0112
Brinberg, D., & McGrath, J. E. (1985). Validity and the research process. Beverly Hills, CA: SAGE.
Bruderer, E., & Singh, J. V. (1996). Organizational evolution, learning, and selection: A genetic-algorithm-based model. The Academy of Management Journal, 39(5), 1322–1349. doi:10.2307/257001
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100(3), 432–459.
Chang, M.-H. (2007). Innovators, imitators, and the evolving architecture of problem-solving networks. Organization Science, 18(4), 648–666. doi:10.1287/orsc.1060.0245
Chen, J., Croson, D., Elfenbein, D., & Posen, H. (2018). The impact of learning and overconfidence on entrepreneurial entry and exit. Organization Science, 29(6), 989–1009. doi:10.1287/orsc.2018.1225
Colquitt, J. A., & George, G. (2011). From the editors: Publishing in "AMJ"—Part 1: Topic choice. The Academy of Management Journal, 54(3), 432–435. doi:10.5465/AMJ.2011.61965960
Ethiraj, S. K., & Levinthal, D. (2004). Bounded rationality and the search for organizational architecture: An evolutionary perspective on the design of organizations and their evolvability. Administrative Science Quarterly, 49(3), 404–437. doi:10.2307/4131441
Farrell, S., & Lewandowsky, S. (2018). Computational modeling of cognition and behavior. Cambridge: Cambridge University Press.
Gee, P., Neal, A., & Vancouver, J. B. (2018). A formal model of goal revision in approach and avoidance contexts. Organizational Behavior and Human Decision Processes, 146, 51–61. doi:10.1016/j.obhdp.2018.03.002
Geletkanycz, M., & Tepper, B. J. (2012). Publishing in AMJ—Part 6: Discussing the implications. Academy of Management Journal, 55(2), 256–260. doi:10.5465/amj.2012.4002
Gibbons, D. E. (2004). Network structure and innovation ambiguity effects on diffusion in dynamic organizational fields. Academy of Management Journal, 47(6), 938–951.
Gibson, F. P. (2000). Feedback delays: How can decision makers learn not to buy a new car every time the garage is empty? Organizational Behavior and Human Decision Processes, 83(1), 141–166. doi:10.1006/obhd.2000.2906
Gibson, F. P., Fichman, M., & Plaut, D. C. (1997). Learning in dynamic decision tasks: Computational model and empirical evidence. Organizational Behavior and Human Decision Processes, 71(1), 1–35. doi:10.1006/obhd.1997.2712
Grand, J. A. (2017). Brain drain? An examination of stereotype threat effects during training on knowledge acquisition and organizational effectiveness. Journal of Applied Psychology, 102(2), 115–150. doi:10.1037/apl0000171
Grand, J. A. (2019). A general response process theory for situational judgment tests. Journal of Applied Psychology. doi:10.1037/apl0000468
318
Andrew Neal, Timothy Ballard, and Hector Palada
Grand, J. A., Braun, M. T., Kuljanin, G., Kozlowski, S. W. J., & Chao, G. T. (2016). The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams. Journal of Applied Psychology, 101(10), 1353–1385. doi:10.1037/apl0000136 Grant, A. M., & Pollock, T. G. (2011). From the editors: Publishing in AMJ—Part 3: Set ting the hook. The Academy of Management Journal, 54(5), 873–879. Highhouse, S., & Gillespie, J. Z. (2010). Do samples really matter that much?. In Lance, C. E., & Vandenberg, R. J. (Eds.). (2009). Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences. Taylor & Francis. Statistical and methodological myths and urban legends (pp. 267– 286). New York: Routledge. Kane, G., & Alavi, M. (2007). Information technology and organizational learning: An investigation of exploration and exploitation processes. Organization Science, 18(5), 796–812. doi:10.1287/orsc.1070.0286 Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/ aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psy chology, 74(4), 657–690. doi:10.1037//0021-9010.74.4.657 Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press. Levine, S., & Prietula, M. (2013). Open collaboration for innovation: Principles and per formance. Organization Science, 111(52), 1414–1433. doi:10.1287/orsc.2013.0872 Levinthal, D. A., & Marino, A. (2015). Three facets of organizational adaptation: Selec tion, variety, and plasticity. Organization Science, 26(3), 743–755. doi:10.1287/ orsc.2014.0956 Locke, E. A. (1997). The motivation to work: What we know. Advances in Motivation and Achievement, 10, 375–412. March, J. G. (1991). Exploration and exploitation in organizational learning. Organiza tion Science, 2(1), 71–87. Retrieved from www.jstor.org/stable/2634940 Palada, H., Neal, A., Tay, R., & Heathcote, A. (2018). Understanding the causes of adapting, and failing to adapt, to time pressure in a complex multistimulus environ ment. Journal of Experimental Psychology: Applied, 24(3), 380–399. doi:10.1037/ xap0000176 Parker, S. K., Morgeson, F. P., & Johns, G. (2017). One hundred years of work design research: Looking back and looking forward. Journal of Applied Psychology, 102(3), 403–420. doi:10.1037/apl0000106 Repenning, N. P. (2003). Selling system dynamics to (other) social scientists. System Dynamics Review, 19(4), 303–327. doi:10.1002/sdr.278 Salas, E., Kozlowski, S. W. J., & Chen, G. (2017). A century of progress in industrial and organizational psychology: Discoveries and the next century. Journal of Applied Psychology, 102(3), 589–598. doi:10.1037/apl0000206 Schmidt, A. M., & DeShon, R. P. (2007). What to do? The effects of discrepancies, incen tives, and time on dynamic goal prioritization. Journal of Applied Psychology, 92(4), 928–941. doi:10.1037/0021-9010.92.4.928 Sparrowe, R. T., & Mayer, K. J. (2011). From the editors: Publishing in AMJ—Part 4:
Grounding hypotheses. The Academy of Management Journal, 54(6), 1098–1102.
Stokes, D. E. (1997). Pasteur’s quadrant: Basic science and technological innovation.
Washington, DC: Brookings Institution Press.
How to Publish and Review a Computational Model
319
Tarakci, M., Greer, L. L., & Groenen, P. J. F. (2016). When does power disparity help or hurt group performance? Journal of Applied Psychology, 101(3), 415–429. doi:10.1037/apl0000056 Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. doi:10.1126/science.7455683 Vancouver, J. B., Li, X., Weinhardt, J. M., Steel, P., & Purl, J. D. (2016). Using a computational model to understand possible sources of skews in distributions of job performance. Personnel Psychology, 69(4), 931–974. doi:10.1111/peps.12141 Vancouver, J. B., Wang, M., & Li, X. (2018). Translating informal theories into formal theories: The case of the dynamic computational model of the integrated model of work motivation. Organizational Research Methods. doi:10.1177/1094428118780308 Vancouver, J. B., Weinhardt, J. M., & Schmidt, A. M. (2010). A formal, computational theory of multiple-goal pursuit: Integrating goal-choice and goal-striving processes. Journal of Applied Psychology, 95(6), 985–1008. Vancouver, J. B., Weinhardt, J. M., & Vigo, R. (2014). Change one can believe in: Add ing learning to computational models of self-regulation. Organizational Behavior and Human Decision Processes, 124(1), 56–74. doi:10.1016/j.obhdp.2013.12.002 Wilson, R. C., & Collins, A. G. (2019). Ten simple rules for the computational modeling of behavioral data. eLife, 8. doi:10.7554/eLife.49547 Zhang, Y., & Shaw, J. D. (2012). Publishing in AMJ-Part 5: Crafting the methods and results. Academy of Management Journal, 55(1), 8–12. doi:10.5465/amj.2012.4001 Zhou, L., Wang, M., & Vancouver, J. B. (2019). A formal model of leadership goal striv ing: Development of core process mechanisms and extensions to action team context. Journal of Applied Psychology, 104(3), 388–410. doi:10.1037/apl0000370
INDEX
Note: Page numbers in italics indicate figures and page numbers in bold indicate tables. Abar, S. 38, 190 Ackerman, P. L. 300 active learning theory 110 ACT-R model see Adaptive Character of Thought-Rational (ACT-R) Theory adaptation 14, 52, 98, 301 Adaptive Character of Thought-Rational (ACT-R) Theory 14, 26, 47–49, 102 Adibi, J. 169 Agah, A. 166 agent-based modeling (ABM): abstract representations in 182–183; agent autonomy and 183–184; agent interdependencies and 184; agents in 76, 182–184; cooperation and 188; creating and validating 27; design and structure of 188–192; diversity and inclusion research 70, 71, 76–84; emergence and 184–185; empirical testing 191; environments in 76; formal theories and 182, 191; gender stratification and 81–82, 82, 83–84; limitations of 186; narrative and conceptual foundations 189; neural networks and 87n1; newcomer socialization
and 191–195, 196, 197–199, 200, 201–202, 202, 203, 203, 204, 204, 205, 205; organizational culture and 187; organizational research and 181, 183, 186–195, 197–199, 201–204; organizational stratification 86–87; particle swarm optimization (PSO) 311; residential racial segregation and 26; rules in 76; simulations and 139, 148–149, 149, 161, 182, 185–186, 190; spiral of silence in 77; strengths of 185–186; team characteristics and 188; team faultlines 77–78, 78, 79–80, 85–86; team leadership and 125, 126 – 127, 129, 139; team network modeling 187–188; team processes 78, 78, 79–81; theory building and 181, 185; translation process 189–190 Akaike information criterion (AIC) 258, 283 Alami, R. 166 Alavi, M. 303 Allan, R. J. 190 Al-Onaizan, Y. 169 Alvarez-Galvez, J. 77 Ambady, N. 73, 88n2
anchoring and adjustment 311 Anderson, J. R. 14, 26, 47 Anzai, Y. 102 apprentices 16–20 approach goals 261, 293n1 Argote, L. 157 Arjun, B. 167 Arrichiello, F. 166 Arthur, W., Jr. 103 artificial intelligence 228 Ashworth, M. J. 151 Austin-Breneman, J. 151 avoidance goals 293n1 Awadalla, M. H. 167, 171 Ballard, T. 9, 12, 18, 23, 28, 231, 251, 255, 257, 259, 261, 266, 293n1, 297, 306, 308–309, 315 Bandura, A. 8, 232 Barkoczi, D. 151 Basar, E. 159, 161 Bass, B. M. 120 Bayesian information criterion (BIC) 258, 283, 285–286, 289, 291 Bell, B. S. 110 Bingham, C. B. 163 Bogacz, R. 49 Bosse, T. 151 Botelho, S. S. D. C. 166 bounded rationality model 50–51 Braun, M. T. 26, 34, 154 Brenner, L. 46 Brousseau, K. R. 164 Brown, E. 49 Bruderer, E. 302 Bullock, S. 167 Bürger, M. 169 Burton, R. M. 159 Bush, B. J. 153 Cagan, J. 156 Carley, K. M. 151 – 152, 154, 157 Carroll, G. 99–100, 105 Carroll, G. R. 187, 189 carrying capacity 227, 228 Carvalho, A. N. 152 Cascade model 103 Castillo, E. A. 129 Chae, S. W. 152 Chang, M.-H. 187, 190, 303 Chao, G. T. 154 Chen, G. 193
Chen, J. 303 Chen, S. 154 Chiang, Y. 129 Chiaverini, S. 166 Christensen, L. C. 152 Christiansen, T. R. 152 Chuong, C. 168 CLARION model 103 Coelho, H. 43 Coen, C. 153 Coen, C. A. 156, 188 cognitive heuristics 54 Cohen, J. D. 49 Colella, A. J. 62 collective intelligence 131, 182 Colquitt, J. A. 304–305, 307 competence bias 104, 106 computational modeling: apprentices in 16–20; benefits of 8–13; climate change and 3–4; decisionmaking and 26, 36–40, 41 – 42, 43, 55–57; dynamic 8, 10, 96–97; evaluation of 6, 17–18, 237–238; experimentation and exploration 9–10; falsification and improved prediction 10–12, 301; formal theories and 18–19; human behavior and 4, 12, 14; hurricanes and 3–4; infectious disease spread 3–4; I-O psychology and 4–5, 13–16, 26–27; journey-level researchers 20–23; life cycle of 298–299, 299, 300–301; masters level researchers 23–25; natural sciences and 3–4; processes of 5, 17–18, 37; simulations in 4, 27; static 8, 39, 42; theory integration and 8–9; theory specification and 4–5; typology of 38–40, 41 – 42, 57n1; value of 5–6; vocabularies in 16–17; see also agent-based modeling (ABM); modeling approaches; publishing and reviewing computational models; system dynamics modeling connectionist models see neural network models Contractor, N. 157 control theory 105, 134–135, 140 Cooney, S. N. 26, 34 cooperation 188
cost function: defining 271–272, 272, 273–274; global minimum 281; likelihood variable 274; minimizing 274–281 Cronin, M. 216 Croson, D. 303 Crossan, M. M. 99 Crowder, R. M. 153, 167 culture shifts 99–100 Das, S. M. 166 David, N. 43 Davis, J. P. 10, 140, 163 Day, E. 103 decision field theory 46, 57n1, 306 decision-making: ACT-R model and 47–49; bounded rationality model 50–51; as cognitive process 36; computational models and 26, 36–40, 41 – 42, 43, 55–57; as construct 34; descriptive models and 34–36, 38, 41, 42–43; dynamic-descriptive 46–49, 56; dynamic-prescriptive 52–55; as dynamic process 125; EU theory and 34; individuallevel 52–53; Linear Ballistic Accumulator (LBA) 311, 313; model complexity and 37, 56; model level and 39–40; model orientation and 38; model temporality and 38–40, 42; normative models and 34–36, 38, 50; organizational learning and 44–45; as perspective 34; prescriptive models and 35–38, 42, 42, 43, 56; process structure 44; prospect theory and 43–45; SEU theory and 34; static 39, 42, 57n1; static-descriptive 43–46, 56; static-prescriptive 49–52; team leadership and 125, 126, 130–131, 141–142; turnover decisions 45; typology of computational models 38–40, 41, 57n1; see also dynamic decision-making delta-learning rule 228, 242 Descartes, R. 239 DeShon, R. P. 22 Dimarogonas, D. V. 169 Dionne, P. J. 126, 130–131
Dionne, S. D. 15, 125, 126 – 127, 130–133, 153 discrepancy production 7 discriminatory behavior 64, 69 diversity and inclusion research: agentbased models 70, 71, 76–84, 86–87; computational models and 26, 62–66, 84; core concepts 66, 67; discriminatory behavior 64, 69; emergent/dynamic outcomes 68, 69; gender stratification in 81–82, 82, 83–84, 86–87; intergroup contact theory 80; neural network/connectionist models 70–71, 71, 72–76, 84–85; organizational science and 62–64; prejudicial attitudes 64, 69, 72; process mechanisms 67 – 68, 69; simulations in 65; social status cues 75; stereotypes/ stigmas 64–65, 72–75, 84–85; stratification/segregation 64, 69; team processes 77–78, 78, 79–81; see also stereotypes/stigmas Dollarhide, R. L. 166 Duecker, D. A. 167, 171 Duell, R. 151 Durban, R. 211 dynamic decision-making: descriptive models 46–47; goal setting and 9, 54–55; interdependencies and 40, 48; learning process and 46–48; organizational science and 48–49; prescriptive models 52–55; static decisions and 39, 57n1; team leadership and 125, 130–131; temporality and 39–40, 46–47, 57n1; temporal motivation theory and 45–46; work-family balance 48–49; see also decision-making Dynamic Exploration-Exploitation Learning model (DEEL) 103–104, 104, 105, 106 dynamic process: conceptual framework for 17–18; goal-striving and 28, 230–231; goal theory and 8; organizational learning and 14, 96, 98–99, 101; team leadership research 122, 133; variables and 214–216; verbal theories and 185 dynamics: computational models and 8, 10, 96–97; emergence and
185; feedback loops and 37;
growth models and 215; model
complexity and 37, 101, 214;
model temporality and 39; neural
network models 74; organizational
populations and 14; theory
specification and 4; see also
system dynamics modeling
dynamic variables 16, 99
Ehret, P. J. 72
Eisenhardt, K. M. 163
Elfenbein, D. 303 emergence 184–185 empirical research: agent-based modeling and 191; computational models and 10, 256; control and 313; experimental design 313–314; fidelity and 313; generalizability and 313; model fitting and 300–301; observational design 313–314; predictions and 256; study design 313–314; team leadership and 123, 136–138; see also publishing and reviewing computational models endogenous variables 17, 123, 136, 141,
214, 217
Erdem, A. 169 Ethiraj, S. K. 302 EU theory see expected utility (EU) theory evaluating computational models: apprentices and 16–18; competitive predictions and 247–248; criteria for 237–239; crucial predictions 244–245, 249; evaluator role 238, 240–249, 251; extending competitive prediction 248–249; fallibility and 239–240, 242–243; falsifiable predictions and 245–247; fit statistics 238; generalizability and 249–251; logical consistency and 242–244; modeler role 238–251; overfitting and 250; plausibility and 242; pragmatist philosophy and 239–240; relationships between constructs 241; risky predictions 246–247; sensitivity analysis and 242; simulations and 243; transparency and 244; values
and 6; variables and parameters 240–241; verisimilitude and 240; see also Model Evaluation Framework (MEF) Evans, L. 158 exogenous variables 17, 50, 123,
130–131, 133, 136, 141, 214
expectancy 260–262
expectancy-value theory 311
expected utility (EU) theory 34, 260–262
experimentation 9–11, 185, 190
exploitation 97–99, 103–105, 187
exploration: agent-based modeling and
187; computational models and 9–10; intervention-focused models 111; organizational effectiveness and 97–99; simulations and 164; training and development 103–105 fallibility 239–240, 242–243
falsifiability 10–11, 240, 245
Fan, Z. 153 Farrell, S. 7, 213, 239, 257
feedback loops: causal processes and 226;
cause and effect in 218; complex
systems and 231; dynamics and
37, 39; negative 135, 135, 136,
139; positive 80, 83, 223; system
dynamics modeling and 88n1,
139, 214, 231
Feng, B. 153 Feynman, R. P. 241
Fichman, M. 302 fit statistics 238, 250–251
Flache, A. 77–78, 78, 79–81, 84–86,
88n4, 88n5, 153, 156, 291
Forrester, J. 13
Forrester, J. W. 212–213, 216–217,
231–232
Frantz, T. L. 154 Freeman, J. B. 72–75, 84–85, 88n2
Friedman, A. 188
Fu, N. 153 Galesic, M. 151 Garbage Can Model of Organizational
Choice 47
Gavetti, G. 154 Gee, P. 233, 276, 311, 314
Geidner, N. 77
Geist, A. R. 167, 171
Geletkanycz, M. 315 gender stratification: computational models and 113; developmental opportunities 83–84; hiring rates 83; lack of female leadership 81–82, 82, 83–84, 86–87 generalizability: empirical data and 313;
I-O phenomena and 215; Model
Evaluation Framework and 28,
249–251; model testing 22;
psychological research and 6–7
George, G. 304–305, 307
Gibbons, D. E. 303 Gibson, F. P. 302 Gigerenzer, G. 50
Gill, E. 170 Gillespie, J. Z. 314
goal pursuit model 9, 134–136 goal setting 54–55 goal-striving: computational models
and 23; dynamic process of
28, 230–231; leadership and
258–259, 275; model for 230;
multiple 231; multiple parameters
and 21
goal theory 7–8 Goldstein, D. G. 50
Grand, J. A. 26, 62, 105–107, 113, 140,
154, 163, 187, 189, 191, 290, 303,
307–310, 314
Grant, A. M. 308–310 Grant, E. 167 Greer, L. L. 159, 291, 311
Griffin, D. 46
Groenen, P. J. F. 159, 291, 311
groups: computational models and
147; defining 146; impact of
LMX differentiation on 133;
as organizational forms 146;
segregation/stratification 69;
simulations and 146–148, 148,
150, 160–165, 171–172; see also
teams
growth models 215–216, 225–227 Grushin, A. 168 Guo, J. 170
Hackman, J. 164
Hanisch, K. A. 14
Hao, C. 153
Hardy, J. H., III 26, 95, 104, 106
Harrington, J. E. 187, 190
Harrison, J. R. 99–100, 105, 163, 187, 189
Haslam, R. A. 158
Hazy, J. K. 121
Heathcote, A. 311
Hebl, M. 62
Heidegger, M. 6, 237
Heil, R. 169
Henshaw, M. J. d. C. 158
heuristics: agent-based modeling and 183; cognitive 54; decision-making and 44, 49, 51, 54; expectancy-value theory as 311; normative theory and 50; static 233
Heuristic Self-Improvement (HS) model 103
Highhouse, S. 314
Holmes, P. 49
Hsu, Y. 129
Hu, B. 159
Hu, Y. C. 166
Huang, E. 154
Huang, H. 170
Hughes, H. P. N. 153
Hulin, C. L. 14
human behavior: computational models and 4, 12, 14; decision-making and 35–36; discriminatory behavior 64, 69; EU theory and 34; prospect theory and 43; self-regulation and 231; SEU theory and 34
Ilgen, D. R. 14
Indiveri, G. 166
industrial-organizational (I-O) psychology see I-O psychology
informal learning 114
intergroup contact theory 80
internal validity 17
I-O psychology: computational models and 4–5, 13–16, 26–27; diversity and inclusion research 62; learning, training, and socialization in 95; phenomena in 4–5, 255; role of theory in 211–212
Jehn, K. A. 156, 291
Jiang, F. 159
Jiang, Z. 153
Jin, Y. 152
Juran, D. C. 154
Kahneman, D. 43 Kaminka, G. A. 169 Kane, G. 303 Kanfer, R. 300 Kang, M. 53, 155 Kaplan, M. S. 26, 34 Kelton, W. D. 165 Kennedy, D. M. 27, 146, 155, 163–164, 285 King, E. 62 Kleinman, D. L. 49 Kleinmuntz, D. N. 54–55 Klimoski, R. J. 193 Knudsen, T. 187 Koehler, D. J. 46 Kotovsky, K. 156 Kozlowski, S. W. J. 110, 123, 154, 189 Kreuzer, E. 167, 171 Kuhn, T. S. 6, 237, 306 Kuhrmann, M. 155 Kuljanin, G. 154 Kumar, A. 53 Kunz, J. 152 Lane, H. W. 99 Larsen, E. 14 Larson, J. R. Jr. 165 Lau, D. C. 77, 79 Law, A. M. 165 Lazer, D. 188 leader emergence 85 leader-member exchange (LMX) 130–134, 138 leadership: authoritarian 130–131; computational models and 26–27, 141; defining 120; functional theory 135; gender stratification in 81–82, 82, 83–84, 86–87, 113; goal-striving and 258–259, 275; individualized 130–131; participative 130–132; shared 138; stereotypes/stigmas 85; team social network and 132; training and development 140; transformational 248–249; see also team leadership learning, training, and socialization: active learning theory 110; adult learning models 108–111; common goals in 108; computational models and 26, 97–114; culture shifts 99–100; dynamic effects 96,
98–99, 101; exploitation and exploration 97–99; gender bias models 113; informal learning and 114; I-O psychology and 95–96; knowledge transfer and 96; learning interventions 26, 109, 111–112; proactive socialization 100–101; relevance and impact of 113–114; see also training and development learning interventions: computational models and 26, 111–112; financial value of 113; machine learning 96; self-regulation and 111–112; theory translation 109; training and 111–112 learning models: adult learning 109–111; calibrating and 228–229, 229, 230; computational models and 26, 102–103; deltalearning rule and 228; dynamic decision-making and 47; individual learning 102–103; system dynamics modeling and 228–229; theory translation 109–110 learning problems 8–9 learning process: active instruction in 95; ACT-R framework for 102; cognitive models 102–103; conceptual bias and 103–104, 106; dynamic decision-making and 46–48; informal learning and 114; learner control and 95; selfregulation and 95; skill acquisition and 102–105 Lee, C. S. G. 166 Lee, J. 26, 62 Lee, K. C. 152 Lemarinier, P. 38 Levine, S. 303 Levine, S. S. 187 Levinthal, D. 302 Levinthal, D. A. 187, 303 Levitt, R. E. 152 Lewandowsky, S. 7, 213, 239, 257 Li, X. 27, 109, 159, 211, 227, 302 Lin, S.-J. 187 Lin, Z. 152 Linear Ballistic Accumulator (LBA) 311, 313 Liu, P. 159
Liu, Y. 27
Locke, E. A. 306
Loft, S. 259, 293n1
Lomi, A. 14
Lu, Y. 166 Lunesu, M. I. 155 machine learning 9, 14, 96
Macy, M. W. 53
Malatji, E. M. 50
March, J. G. 14, 26, 97–100, 103,
105, 187
Marchesi, M. 155 Marino, A. 303 Marques-Quinteiro, P. 164
Marsella, S. C. 169 Martell, R. F. 14, 113
Martínez-Miranda, J. 156 Mäs, M. 77–78, 78, 79–81, 84–86, 88n4,
88n5, 153, 156, 291
Mathieu, J. E. 164
Matlab 213
maximum likelihood methods 259,
271–274, 291–292
McClelland, J. L. 43
McComb, C. 156 McComb, S. A. 27, 132, 146, 155, 157,
163–164, 285
McHugh, K. A. 126, 131
Meehl, P. E. 11, 240, 245, 255
MEF see Model Evaluation Framework (MEF) Memon, Z. A. 151 mental model development 131–133 MGPM see multiple-goal pursuit model (MGPM) Miller, K. D. 187
Millhiser, W. P. 156 model complexity: Bayesian information
criterion (BIC) 292; cross-
validation and 285–286; decision-
making and 36, 56; dynamics
and 37, 101, 214; falsifiability
and 250–251; informal learning
and 114; maximum likelihood
methods 292; MGPM and 283,
285; model-data correspondence
and 258
Model Evaluation Framework (MEF):
competitive predictions and
247–248; criteria for 28, 239;
crucial predictions 244–245, 249;
definitions and measurement in 240–241; equations and algorithmic rules 241; extending competitive prediction 248–249; falsifiable predictions and 245–247; generalizability and 249–251; logical consistency and 240, 242–244; plausibility in 242; risky predictions 246–247; simulations and 243; transparency and 244 model fitting: alternative models in
21, 258, 283–284; Bayesian
parameter estimation 291–292;
cross-validation and 285–286;
defining 257; dynamic interactions
and 290–291; extensions
289–290; fitting to noise 275;
generality and 282; maximum
likelihood methods 259,
271–274, 291–292; model-data
correspondence and 257–258,
271, 274–276, 278, 281–283;
multiple-goal pursuit and 28, 259,
264, 271–288; nested/non-nested
290; optimization algorithm
20–21; overfitting and 250,
275; parameter estimation and
20–21, 258–259, 271, 276–281,
286–289, 291–292, 293n3, 300;
parsimony and 258, 275, 282,
301; pattern matching and 23;
plausibility and 282; postdiction
and 22; prediction and 22, 257,
271; problem of local minima
281; qualitative 20–23, 315;
quantitative 20–22, 24, 28, 300;
reporting results 315; simulations
and 291; theory testing and
257–258
modeling approaches: computational
models and 66, 87n1; core
concepts 66, 67, 69; emergent/
dynamic outcomes 66, 67 – 68,
69; process mechanisms 66,
67 – 68, 69
model level 39–40 model orientation 38
model scope 11, 18, 186, 226,
304, 307
model temporality 38–40, 42, 57n1 Moehlis, J. 49
Monte Carlo simulation: bootstrapping
and 161; description of 149; for
groups and teams 148, 161, 162,
163, 165; parameter analysis and
227; publishing models of 312;
team leadership and 126 Morgeson, F. P. 140
multiple-goal pursuit model (MGPM):
approach goals and 261, 293n1;
avoidance goals and 293n1; code
readability and 293n2; competing
models for 11–12; computer
code for 264–270; decision
field theory 306; defining cost
function 271–272, 272, 273–274;
discount rate and 262, 265, 268,
275, 277–278, 284; equations
for 261–264; expectancy and
260–262; expected utility and
260–262; input arguments
264–265, 265, 266–268, 278;
likelihood variable 274, 278;
minimizing cost function
274–281; model comparison
282–284, 284, 285–286; model
fitting and 28, 259, 264, 271–288;
model predictions 268–270, 275;
momentary attractiveness and
262; observable variables and
260; optim function 277–278,
278, 279; parameter recovery
analysis 286–287, 287, 288;
parameters and 260–262,
275–279; prioritization decisions
and 260; theory building and
306–307; threshold parameter
263–265, 268, 275, 277–278;
time sensitivity and 261–262,
264–265, 268, 275, 277–280,
280; valence and 260–262
Münch, J. 155 Murnighan, J. K. 77, 79
Muslea, I. 169
narrative theories 55
Navarro, J. 164
Neal, A. 28, 231, 233, 255, 258–259, 276, 297, 311, 315
Nelson, A. L. 167
NetLogo 190, 199, 200
neural network models: activation functions 72; agent-based models and 87n1; directional/bidirectional connections 72; diversity and inclusion research 70–71, 71, 72–76; dynamics in 74; excitatory/inhibitory connections 72, 74–75; feed-forward 72; leader emergence process 85; learning and 8–9, 14; motivational systems and 29; nodes in 71–72; prejudicial attitudes 72; recurrent 72; social categorization in 73, 73, 74–75, 88n2; stereotypes/stigmas 72–75, 84–85; weighted connections in 71
newcomer socialization: agent-based modeling and 191–195, 196, 197–199, 200, 201–205, 205; computational models and 113; developmentally-minded supervision and 110; dynamic phenomenon of 95, 101; initial parameters for 194–195, 195, 196, 197–199; interpersonal factors and 192–193, 194, 195, 197; knowledge structures and 98, 192; leaders and 141; newcomer ability and 194, 197–199, 201–202, 202; organizational approaches to 112; organizational learning and 98, 192–193, 201–202, 202, 203, 203, 204, 204; proactive 100; simulations and 141; veteran impressions and 193–195, 197–199, 201–203, 203, 204, 204
Newell, A. 102
Nguyen, P. A. 155
Nieboer, J. 156
novelty recognition bias 104, 106
Oh, W. 127, 133–134
O’Hare, G. M. 38
Ohlsson, S. 103
Oliveira, F. 152
online collaborative work communities (OCWCs) 133–134
Open Science movement 244
organizational culture 99, 187
organizational learning: adaptation 98; agent-based modeling and 187; culture shifts 99–100; dynamics and 14; effects of stereotype threat 105–107, 107, 113; exploration-exploitation 97–99, 103–105; proactive socialization 100–101; social processes and 98
organizational research: agent-based modeling and 181, 183, 186–195, 197–199, 201–204; decision-making and 36, 38, 45; diversity and inclusion 62, 87; process-oriented theories in 140; simulations and 139
organizational science: computational modeling in 301, 304; diversity and inclusion research 62–64, 66, 70–71, 71, 72–76; dynamic-descriptive decision-making 48–49; dynamic-prescriptive decision-making 54–55; feedback and 55; gender stratification in 81–82, 82, 83–84, 86–87; goal setting and 54–55; narrative theories and 55; selection and 51–52; static-descriptive decision-making 44–46; static-prescriptive decision-making 51–52; system view and 213; turnover decisions 45; work-family balance 48–49
outcome validity 17, 46
Ow, P. S. 53
Palada, H. 28, 255, 297, 311, 313, 315
Palazzolo, E. T. 157
parameters: agent-based modeling and 194, 195; Bayesian parameter estimation 291; combinations of 276; discount rate 262, 264–265, 268, 275, 277–278, 284; estimating 28, 101, 141, 238; evaluating computational models 240–241; fixed 260, 275; free 24, 28, 260, 276, 288–290; goal importance 275; goal-striving and 258–259; grid search for 276–277, 281; MGPM and 260–262, 275–279; model building and 18, 24; model-data correspondence and 275–276, 281; model fitting and 20–21, 258–259, 271, 276–281, 291–292; Monte Carlo analysis of 227; overfitting and 250; posterior distribution 292; quantitative fitting and 20; self-efficacy model and 241; sensitivity analysis and 9–10, 190, 225; simulations and 148; team leadership and 123–124; threshold 263–265, 268, 275, 277–278; time sensitivity 261–262, 264–265, 268, 275, 277–280, 280
Park, S. 128
parsimony: conceptual 12–13; cross-validation and 285; fixed parameters and 275; free parameters and 275; model-data correspondence and 289; model fitting and 258, 275, 282–283, 285, 301; statistical 250; theoretical 101
particle swarm optimization (PSO) 311
path diagrams 214
Patrashkova, R. R. 157
Pattipati, K. R. 49
Pavón, J. 156
Pedone, P. 166
Peirce, C. S. 239
Pete, A. 49
Pham, D. T. 167, 171
phenomena: agent-based modeling and 27, 77, 161, 181–185, 187, 189; complex dynamic 18, 27, 96, 101, 107, 109, 114, 181, 183, 214, 305, 307; computational models and 7, 12, 16, 20, 212; diversity and inclusion research 65, 69–70, 72, 76–77; domain-relevant 19; emergence and 184; empirical 282, 299–301, 304, 306–307, 309; explanatory models and 251; I-O psychology and 4–5, 19, 28, 215; learning, training, and socialization 97, 103, 105, 108–109; model building and 8, 22, 25; organizational 16, 27, 46, 146, 181, 183, 187, 189, 204, 255, 289; simulations and 163; skill decay and 110
Piderit, S. K. 158
Piketty, T. 226
Pitonakova, L. 167
Plaut, D. C. 302
Pollock, T. G. 308–310
Popper, K. R. 10, 240, 245
population ecology 227
population growth model 225–227
Posen, H. 303 Powers, W. T. 213
prediction: competitive 247–248;
computational models and 10–11,
125, 136, 255–256; crucial 239,
244–245, 249; empirical data
and 256; extending competitive
248–249; falsifiable 245–246;
model comparison 282, 284–285,
289; model fitting and 22,
249–250, 257, 268, 270–271,
275–276; model parameters
and 264–265; normative 35;
parameter recovery analysis
286–287; qualitative 300; risky
246–247, 252n1; simulation
studies and 136, 138, 291, 304
prejudicial attitudes 64, 69, 72
Prietula, M. 303 Prietula, M. J. 53, 152, 187
PRISMA approach 147
prisoner’s dilemma 53, 188
proactive socialization 100–101 probability theory 274
process theory 19, 140
process validity 17
prospect theory 43–45 publishing and reviewing computational
models: agent-based modeling
and 181, 186; compared to
traditional papers 28–29; complex
phenomena and 308–309;
current trends in 308; discussion
of findings 315–316; diversity
and inclusion research 70, 71;
grand challenge in 305, 308;
innovation 305–307; model
testing 313; moving the field
forward 310; practical benefit
307; presenting theory and model
310–313; quantitative fit 315;
recommendations for 297, 298;
relevant journals 301, 302–303,
304; reporting results 314–315;
setting the hook 307–309;
significance 305, 308; simulation
studies and 312–313, 315; study
design 313–314; summary of
existing knowledge 309–310;
theoretical implications 315–316;
topic choice 304–308; underlying
processes and 311; use-inspired
basic research 307; see also empirical research Puranam, P. 187
Purl, J. D. 8, 12, 227, 232–233, 241, 302 Rahmandad, H. 241
Ramani, R. G. 167 Ramos-Villagrasa, P. J. 164
Rand, W. 190
Random Support Theory 46–47 Raveendran, M. 188
Read, S. J. 29
recognition-primed decision (RPD)
models 50
Reggia, J. A. 168–169 Ren, Y. 157 Rench, T. A. 22
Repenning, N. P. 112
Rico, R. 164
Rivkin, J. W. 187
Robinson, M. A. 153 Rodríguez, A. 168 Roja, E. M. 158 Rojas-Villafane, J. A. 157 Roos, P. 188
Rosenfeld, A. 168 R programming language 28
Rubenstein, M. 168 Sai, Y. 168 Samuelson, H. L. 26, 62, 77, 81–82, 82,
83–84, 86–87, 88n6
Sayama, H. 153 Scavarda, L. F. 152 Schein, E. H. 95
Schelling, T. C. 26
Scherbaum, C. A. 7, 10
Schillinger, P. 169 Schmidt, A. M. 22, 259
Schreiber, C. 152 Schruben, L. W. 154 self-efficacy: ability and 241; influenced
by performance 214, 232; learning
and calibration 228–229, 231;
motivation and 233; runaway
behavior and 214; study design
and 314
self-regulation: computational models and 10; control theory and 134–135; goal pursuit 134–135; individual behavior and 231; learning interventions and
111–112; learning process and 95;
motivation and 7, 230; negative
feedback loops 135, 135, 136;
social cognitive theory and 7;
theories of choice and 8; theory
building and 306
sensitivity analysis: assessing models 24;
computational models and 9, 17;
evaluating computational models
242; logical consistency and 242;
parameters and 9–10, 190, 225;
system dynamics modeling and
225, 226
Seo, Y. W. 152 Ser, D. A. 157 Serban, A. 129, 137
Shaw, J. C. 102
She, Y. 157 Shen, W. 168 Shoss, M. K. 12
Sichman, J. S. 43
Siemieniuch, C. E. 158 Siggelkow, N. 187
Sim, Y. 153 Simon, H. A. 102, 213
simulation of groups and teams:
approaches and modeling
techniques 149, 149,
150, 161–162, 162, 163;
boundary definition and 163;
computational models and 147;
level alignment and 163–164;
limitations of 171–172;
literature review 147–148,
148, 149–150, 151 – 159, 160;
longitudinal phenomena and
164; methodological approaches
165; research in 146–147;
robots/non-human entities 165,
166 – 170, 171; team leadership
and 131–134, 136–137, 139,
141–142; team processes and 27,
161; trends and observations 150,
160–161
simulations: agent-based modeling and
139, 148–149, 149, 161, 182,
185–186, 190; computational
models and 4, 27, 56; culture
shifts 99; decision-making and
56; diversity and inclusion
research 65, 83–84; evaluating
computational models 243; logical
consistency and 242; model fitting
and 291; modeling techniques
149, 149, 150; multi-agent 148,
149, 161; see also agent-based
modeling (ABM); Monte Carlo
simulation
Sinclair, M. A. 158 Singh, J. V. 302 skill acquisition: informal learning and
114; information-knowledge
gap 104; learning process and
102–105; stereotype threat and
105–106; task performance and
255
social cognitive theory 7–8, 136
socialization processes: agent-based
modeling and 191–195, 196,
197–199, 200, 201–204;
computational models and
99–100; dynamic effects 96,
101; information seeking in 233;
organizational learning and 98;
proactive 100–101; see also
newcomer socialization
Sohn, D. 77
Solow, D. 156, 158 Solowjob, E. 167, 171
Sommer, S. A. 155 Son, J. 158 Spears, D. F. 169 Spears, W. M. 169 Srikanth, K. 187
static decisions 39, 57n1 Steel, P. 227, 302 stereotypes/stigmas: consequences of
64; construction over time 65;
excitatory/inhibitory connections
74–75; mitigation of 64;
multiple conceptualizations and
64–65; multiple mechanisms
and 65; neural network models
and 72–75, 84–85; social
categorization and 73, 73, 74;
social processes and 64–65; team
processes 86; see also diversity
and inclusion research
stereotype threat: computational models
and 290; negative effects of
307; organizational learning
effects 105–107, 107, 113; skill
acquisition and 105–106
Sterman, J. D. 112, 239, 241
stochastic process 139
stratification/segregation 64, 69
structural equation modeling 257
Su, C. 157 subjective expected utility (SEU)
theory 34
Sun, R. 103
supervised learning 8
Swamy, M. 187
system dynamics modeling: ambiguous
causal influence and 232–233;
computational models and 27–28,
212–213; debugging a model
223–224, 224, 225; feedback
loops and 88n1, 139, 214, 231;
goal-striving and 230, 230,
231; growth models 215–216,
226–227; learning and calibration
228–229, 229, 230; macro-
organizational issues and 13–14;
model building and 216–217,
217, 218–219, 219, 220, 220,
221–222; path diagrams and 214;
processes and conventions used
by 226–227; resistance to change
and 231–232; sensitivity analysis
225, 226; simulating the model
222–223, 223; spurious causal
inferences and 232; system view
and 213; team leadership and
128, 136; theoretical process and
214–215; theory building and
212, 214
Takács, K. 156, 291
Tamanini, K. B. 100
Tambe, M. 169 Tang, C. 27
Tarakci, M. 129, 137–138, 142, 159, 163,
171, 188, 191, 291, 311, 315
Tay, R. 311
team leadership: agent-based modeling
and 125, 126 – 127, 129, 139;
collective intelligence and
131; computational models
and 27, 121–125, 126 – 129,
130, 139–142; control theory
and 134–135, 140; decision-
making process 125, 126,
130–131, 141–142; dynamic
process research 122; election
of formal leaders 138–139;
emergence of leadership
structure 128 – 129, 136–139;
empirical research in 122–123,
136–138; forms of 130–131;
group member participation
and 133–134; hybrid modeling
approach 128 – 129, 139–140;
leadership-regulation 135–136;
mental model development
127, 131–133; OCWCs and
133–134; organizational design
and 140–141; power-competence
dynamics 137–138; process-
centered research 120–121, 140;
role of 124, 124, 125; simulations
in 131–134, 136–137, 139,
141–142; structure and 26;
subordinate-regulation 135–136;
system dynamics modeling 128,
136; team-based structures 141;
team goal pursuit 128, 134–136;
team performance 121, 132, 137;
team processes and 123–124, 124;
theory integration 122; virtual
experiments and 142
team networks 187–188 team processes: agent-based models
78, 78, 79–81; computational
models and 26–27; diversity
and inclusion research 77–78,
78, 79–81; domain of expertise
132–133; extremist agents in
80–81; faultlines 77–78, 78,
79–80, 85–86; goal pursuit model
134–136; heterophobia and
78–80; homogenous subgroups
in 77, 79–80, 85, 88n3; identity-
management in 86; knowledge
emergence and 140; leadership
and 26–27, 123–124, 124, 140;
polarization dynamics 79–80;
rejection and 78–80; self-
managing teams and 141; similar
beliefs and opinions in 77, 88n3;
simulations and 27, 161; social
influence processes 77, 88n3;
stereotypes/stigmas 86; team-
based structures 141
teams: agent-based modeling (ABM)
and 187–188; characteristics and
188; computational models and
147; defining 120, 146; impact of
LMX differentiation on 133–134;
multilevel 164; outcomes and 121;
power disparities and 137–138;
shared mental models 131–133;
simulations and 146–148, 148,
150, 160–165, 171–172; social
networks and 132–134
temporal motivation theory 45–46 Tepper, B. J. 315
Theodoropoulos, G. K. 38
theory: computational models and
4–5; formal 181–182, 191;
informal 17–19, 22–23, 56;
I-O psychology and 211–212;
natural language 5, 7; predictions
256; probability 274; process-
oriented 140; proliferation 8; self-
regulation and 231; universal 245
theory building: agent-based modeling
and 181, 185; computational
models and 181, 212, 306;
human error and 212; hypotheses
and path models 211; laws of
interaction and 211; MGPM
and 306–307; publishing
and reviewing 304, 306; self-
regulation and 306; system
dynamics modeling and 212, 214;
verbal-based 5–6, 10, 185–186
theory integration 8–9, 122
theory testing: competitive predictions
and 247; computational models
and 63; model fitting and
257–258; precision and 255–256;
study design and 314; theory
development research and 304,
306, 310
time sensitivity: leadership and 136;
multiple-goal pursuit model
and 261–262, 264–265, 268,
275, 277–280, 280; newcomer
socialization and 199; parameters
and 260
training and development: cognitive models of individual learning 102–103; computational models and 101–108; control theory and 105; individual-level exploration-exploitation 103–105; leadership functions in 140; learning interventions 111–112; skill acquisition and 105; stereotype
threat effects 105–107, 107; see also learning, training, and socialization transformational leadership 248–249 Treur, J. 151 Trinh, M. P. 129 Tsai, M. 158 turnover decisions 45
Tversky, A. 43
Ugurlu, O. 159, 161
use-inspired basic research 307
Usher, M. 43
utility maximization 35
Vairaktarakis, G. 158 valence 260–262 Vancouver, J. B. 3, 7–10, 12, 15, 17,
21–23, 27, 100–101, 109–110,
211, 214, 216, 227–228, 231–233,
241–242, 258–261, 266,
275–276, 302, 306, 309
van der Wal, C. N. 151 VanLehn, K. 103
Van Rooy, D. 8
variables: contextual 49; dependent 69;
dynamic 16, 99; endogenous
17, 123, 136, 141, 214, 217;
evaluating computational models
240–241; exogenous 17, 50, 123,
130–131, 133, 136, 141, 214;
expectancy-value theory 311;
focal 147, 150; individual-level
53; with inertia 16, 213–216;
input 49; level 16; likelihood 274,
278; with memory 51; observable
260; organizational level 52;
relationships between 11, 17–18;
state 121, 124, 125, 130–131,
135; stock 16; study design and
314; system dynamics processes
214–216
Vensim: debugging a model 223–224,
224, 225; model building and
213–214, 216, 217, 218–219, 219,
220, 220, 221–222; parameters
in 227; simulating the model
222–223, 223
VensimPLE 213, 216, 227
verbal theories: computational models and
5–6, 252, 307, 310; definitions
and measurement in 240; dynamic
processes in 185; evaluation
of 237, 239, 252; falsifiability
and 10; linear relationships and
186; modeler role and 244; risky
predictions and 246; transparency
and 11
verisimilitude 240
Viswanath, P. 167 von Bertalanffy, L. 214
Vozdolska, R. R. 155 Wang, M. 3, 109, 233, 258, 275, 302 Warglien, M. 154 Weinhardt, J. M. 3, 17–18, 23, 28, 227,
237, 259–260, 302 Weiss, J. A. 164
Wellman, N. 188, 190–191
Wessel, J. L. 26, 62
White, R. E. 99
Wilensky, U. 190
Will, T. E. 126, 131
Winder, R. 169
Wong, S. S. 159 Woolley, A. 164
work-family balance 48–49
Xia, N. 159 Xia, X. 50
Xu, Y. 159 Yang, M. C. 151 Yang, S. X. 170 Yeo, G. 259, 293n1
Yildirim, U. 159, 161
Yoder, R. J. 100
Yu, B. Y. 151 Yuksekyildiz, E. 159, 161
Zhang, J. 50
Zhang, Z. 258
Zheng, Z. 170 Zhou, L. 26–27, 120, 128, 134–136, 233,
258, 275, 302 Zhu, D. 170