Cognition and Decision Making in Complex Adaptive Systems
The Human Factor in Organizational Performance
Meghan Carmody-Bubb
Meghan Carmody-Bubb Department of Leadership Studies Our Lady of the Lake University San Antonio, TX, USA
ISBN 978-3-031-31928-0    ISBN 978-3-031-31929-7 (eBook)
https://doi.org/10.1007/978-3-031-31929-7
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To Mike, Connor, Coulton, Aidan, and Mary Grace
Acknowledgments
I would like to thank the staff and editors at Springer Nature for their help in preparing this book, especially Megan McManus, Pradheepa Vijay, Brian Halm, Paige Ripperger, and Rachel Daniel. Your skills, professionalism, and responsiveness were invaluable. I would also like to thank my colleagues and students in the Department of Leadership Studies at Our Lady of the Lake University, as well as my friends through the years with whom I’ve shared many fascinating conversations. You have contributed in ways too many to enumerate to my joys and journeys as a lifelong learner. Finally, I want to acknowledge my loving father, a WWII pilot who inspired and encouraged my love of philosophy, history, and human factors research, as well as my beloved mother, the wise and quietly guiding force behind my large and supportive extended family.
About This Book
This book provides a broad overview of evidence-based tools grounded in 100 years of empirical research across the interdisciplinary fields that fall under the broad category of complexity science, which is the study of complex adaptive systems. These interdisciplinary fields include, among others, systems engineering, human factors, and cognitive science, as well as their theoretical underpinnings. These disciplines have provided research and application tools that have helped transform the inherently dangerous world of aviation and aerospace into one of the safest modes of travel in history. The focus of the book is on broadening the application of these tools to improve decision making across diverse domains, from improving safety and performance on the flight deck and in the operating room to enhancing safety and security in schools, cyberspace, and various other areas. In short, research findings and tools within these disciplines can be—and have been—effectively applied in any instance where humans are involved in decision making within complex adaptive systems.
The book is organized into seven parts:
Part I (Chaps. 1, 2, 3, 4, and 5) Common Origins: Applied Human Factors, Systems Engineering and Complexity Science
Part II (Chaps. 6, 7, and 8) Science, Uncertainty, and Complex Adaptive Systems: The Search for Underlying Patterns
Part III (Chaps. 9, 10, 11, 12, and 13) Cognitive Psychology: Understanding the Lens Through Which We Process Information in a Complex World
Part IV (Chaps. 14, 15, and 16) Decision Making in a Complex World
Part V (Chaps. 17 and 18) The Wrench in Newton’s Machine: Applying Models from Cognitive Science, Human Factors, and Systems Engineering to Decision Making Within Complex Adaptive Systems
Part VI (Chaps. 19, 20, and 21) Behavioral and Social Aspects of Team Decision Making in Complex Adaptive Systems
Part VII (Chaps. 22 and 23) The Leader’s Role in Innovation and Implementation
What You Will Learn
This book will explain the role of human behavior research, from both a historical and modern perspective, in improving objective, measurable performance outcomes, to include safety, human performance, and strategic decision making. It includes a brief history of the development of the scientific method, in general, and of the disciplines of cognitive psychology and human factors, in particular. Emphasis is given to common misconceptions of scientific research that can impact interpretation and application of research findings. The book builds upon the historical foundations of the natural, behavioral, and cognitive sciences to examine why it is important to frame understanding and applications of complex adaptive systems (CAS) in predicting human and organizational behavior in order to improve decision making and subsequent outcomes. Finally, models with track records of enhancing human performance within complex adaptive systems are surveyed, and the research to support them summarized.
Intended Audience
This book is intended for an audience that is concerned with applying evidence-based principles and tools to understanding and improving organizational behavior and decision making. It will not go deeply into the theoretical background of complex adaptive systems or mathematical modeling of the same, although for the practitioner who has some awareness of linear modeling through statistical procedures, I hope to provide a conceptual connection, including limitations, within the broad outlook of multivariate research.
Contents
Part I Common Origins: Applied Human Factors, Systems Engineering and Complexity Science
1 Introduction
   Complex Adaptive Systems
   Chapter Summary
   References
2 What Is a Complex Adaptive System?
   Defining Systems
   Criticisms and Cautions of Complexity Theory
   Why Should People Interested in Organizational Decision Making Be Aware of Complex Adaptive Systems?
   Chapter Summary
   References
3 Evolution from Linear to Systems Thinking
   Linear Versus Nonlinear
   Jurassic Park, Complex Adaptive Systems, and Covid: What Do They Have in Common?
   The Law of Unintended Consequences
   Chapter Summary
   References
4 Emergence of a New Discipline for the Twentieth Century: Human Factors and Systems Engineering
   Early Developments
   The Role of World War II
   When Things Go Wrong: The Study of Human Error in Decision Making
   Chapter Summary
   References
5 Complexity Science in Organizational Behavior
   Applying Complexity Science to Organizational Behavior Research
   Emergence in Complex Adaptive Systems
   Applications in the Social Sciences
   Chaos Versus Complexity
   Detecting Emerging Patterns in a Complex World
   Chapter Summary
   References
Part II Science, Uncertainty, and Complex Adaptive Systems: The Search for Underlying Patterns
6 The Scientific Method Applied to Complex Adaptive Systems
   The Scientific Method
   Knowledge Is Power: The Role of “What-If?” Thinking in the Scientific Method
   Conceptual and Operational Definitions
   Chapter Summary
   References
7 The Challenge of Uncertainty in Complex Adaptive Systems Research
   The Role of Uncertainty
   The Role of Probability and Statistical Reasoning in Complexity Science
   Politics and Science
   Embracing Uncertainty in Science
   Chapter Summary
   References
8 Beauty and the Beast: The Importance of Debate in Science
   1816: The Year Without a Summer
   Beware of Groupthink in Scientific Research
   The Spirit of the Times
   The Role of Debate in Complex Adaptive Systems
   The Wisdom of Crowds
   Chapter Summary
   References
Part III Cognitive Psychology: Understanding the Lens Through Which We Process Information in a Complex World
9 Cognitive Psychology: What Is In The Box?
   The Brave New World of Behavioral Research
   The Behaviorists
   The Role of Quantum Mechanics
   The Role of Human Factors and Computer Science
   The Proverbial “Black Box”
   Sister Mary Kenneth Keller
   Brief History of Cognitive Psychology
   How the Brain Retains Information
   Data Versus Information
   The Magic Number Seven
   Memory Stores
   Schemas and Scripts
   Cognitive Maps
   Mental Models
   Rule-Based
   Knowledge-Based
   Skill-Based
   Automatic Versus Controlled Processing
   Cognitive Neuroscience: The Biology of Thought and Behavior
   Chapter Summary
   References
10 Is Perception Reality?
   The Role of Perception
   Bottom-Up Versus Top-Down Processing
   The Perils of Perception
   Perceptual Illusions
   The Role of Uncertainty and Ambiguity in Perception
   Ambiguous Figures
   How The Brain Processes Information
   Filling in the Gaps
   Expectations and the Black Box
   Chapter Summary
   References
11 The Nature of Human Error in Decision Making
   To Err Is Human
   The Vigilance Decrement
   What Is Error?
   Errors Versus Violations
   Active Versus Latent Human Failures
   Potential Future Applications
   Risk Management in Complex Adaptive Systems
   Expectation and Human Error
   Chapter Summary
   References
12 Seeing What We Expect to See: Cognitive Bias and Its Role in Human Error and Decision Making
   The Role of Cognitive Bias
   Dealing with Uncertainty in Decision Making
   Normative Versus Descriptive Decision Making
   Heuristics and Biases
   The Most Common Heuristics and Biases
   Representative Heuristic
   Availability, Anchoring, and Adjustment
   Overconfidence
   Cognitive Tunneling
   Framing and Prospect Theory
   Prospect Theory Applied to the Covid-19 Pandemic
   Response Sets, Cognitive Bias, and the Deadliest Aviation Accident in History
   Confirmation Bias
   The Role of Evolution in Cognitive Bias: Fast Versus Slow Thinking
   Chapter Summary
   References
13 How Polarization Impacts Judgment: We Are More Emotional and Less Rational Than We Would Like to Believe
   Polarization Research
   Partisan Perceptions and Biased Assimilation
   What Can We Do About It?
   Relationship as the Basis for All Negotiations
   Chapter Summary
   References
Part IV Decision Making in a Complex World
14 Naturalistic Decision Making
   The Intuition of Charlemagne
   Experts and Their “Guts”
   Naturalistic Decision Making
   Problem Solving
   Training for NDM
   Attributes of Effective NDM Decision Makers
   Underlying Cognitive Mechanisms to Effective Decision Making Attributes
   Mental Simulation
   United Flight 232
   Strategy Modulation and Reasoning Skills
   Real-World Implications for NDM Training
   Training for Stress Management
   Applications of Human Factors to Training Expert Decision Making
   Metarecognition
   Goals of After-Action Reviews and Incident Investigations
   What Might the Tragedy in Uvalde Have in Common with the 3-Mile Island Nuclear Power Plant Meltdown?
   Distributed Decision Making
   Chapter Summary
   References
15 The Stages of a Decision
   What Is a Decision?
   The OODA Loop
   Chapter Summary
   References
16 Situational Awareness and Situational Assessment
   The Role of Situation Assessment
   The Role of Internal Cognitive Models in Decision Making Within Complex Adaptive Systems
   The Role of Situation Assessment in Developing Internal Cognitive Models in Dynamic Environments
   The Modern Flight Deck Remains a Real-World Laboratory for Studying Situational Awareness in Complex Adaptive Systems
   Chapter Summary
   References
Part V The Wrench in Newton’s Machine: Applying Models from Cognitive Science, Human Factors and Systems Engineering to Decision Making Within Complex Adaptive Systems
17 The Human Factor in Complex Adaptive Systems
   Learning from Disaster
   The SHEL(L) Model
   Human Factors Analysis and Classification System (HFACS)
   Unsafe Acts of Operators
   Preconditions for Unsafe Acts
   Unsafe Supervision
   Organizational Influences
   Chapter Summary
   References
18 Strategic Decision Making Through the Lens of Complex Adaptive Systems: The Cynefin Framework
   What Is Strategic Decision Making?
   The Cynefin Framework
   Ordered Domains: Simple and Complicated
   Unordered Domains: Complex and Chaotic
   Chaotic Domains
   Disorder
   Houston, We Have a Problem
   Applying Cynefin to the Covid-19 Pandemic
   Chapter Summary
   References
Part VI Behavioral and Social Aspects of Team Decision Making in Complex Adaptive Systems
19 Team Decision Making and Crew Resource Management
   Crew Resource Management
   Assessing Effectiveness of CRM
   Effectiveness of CRM Beyond Aviation
   Chapter Summary
   References
20 The Importance of Learning Cultures in Organizations
   Organizational Learning
   Information Flow
   Creating Cultures of Learning
   Single- vs. Double-Loop Learning
   Single-Loop Learning
   Double-Loop Learning
   Relationship Between Learning Cultures and Conflict Management
   Triple-Loop Learning
   Similar Models of Organizational Learning
   Cultures of Indecision
   The Culture of Yes
   The Culture of No
   The Culture of Maybe
   The Impact of Organizational Culture on Decision Making
   Chapter Summary
   References
21 Fostering Diversity of Thought in Strategic Decision Making
   Lessons from Cyrus the Great
   The Leader’s Role in the Strategic Decision Process
   How Leaders Can Guide the Decision Process Toward Implementation
   Decision Quality
   Implementation Effectiveness
   Roberto’s Managerial Levers in the Decision Making Process
   Composition
   Context
   Communication
   Control
   Chapter Summary
   References
Part VII The Leader’s Role in Innovation and Implementation
22 Innovation in Complex Adaptive Systems
   What Is Innovation?
   Diffusion of Innovation
   Tipping Points in Innovation
   The Social Benefits of Diffusion of Innovation: Wireless Leiden Project
   The Nepal Wireless Networking Project
   A Note of Caution with Respect to Case Studies
   Chapter Summary
   References
23 The Dark Side of Innovation
   Acknowledging the Risks Associated with Technological Advancement
   Chapter Summary
   References
Index
List of Figures
Fig. 1.1 U.S. Airways Flight 1549
Fig. 4.1 Stanley Roscoe, as found in The Human Factors and Ergonomics Society: Stories from the First 50 Years, Copyright 2006. (Reprinted with Permission, Human Factors and Ergonomics Society)
Fig. 6.1 Portrait of Frederick Douglass
Fig. 8.1 Image in Frankenstein, or The Modern Prometheus (Revised Edition, 1831)
Fig. 9.1 Little Albert Cries At Sight of Rabbit
Fig. 10.1 Ambiguous figure
Fig. 11.1 Swiss cheese model by James Reason
Fig. 14.1 Waldo of Reichenau and Charlemagne
Fig. 15.1 US Col John Boyd as a Captain or Major
Fig. 15.2 Boyd’s OODA loop
Fig. 17.1 Human factors analysis and classification system (HFACS)
Fig. 18.1 Battle of the Teutoburg Forest
Fig. 19.1 Florencio Avalos: first trapped miner to exit Chilean mine
Fig. 20.1 Triple-Loop learning
Fig. 22.1 Hedy Lamarr
Fig. 22.2 George Antheil at the Piano
Fig. 22.3 Friendship Bench Grandmother
About the Author
Meghan Carmody-Bubb graduated from Texas A&M University with a B.S. in Psychology in 1986 and received her Ph.D. in Experimental Psychology from Texas Tech University in 1993. Upon completing coursework, she was commissioned in the U.S. Navy, where she graduated from the training program in Aerospace Experimental Psychology at the Naval Aerospace Medical Institute. Her tenure as an active duty Aerospace Experimental Psychologist included assignments at the Naval Air Warfare Center in Warminster, Pennsylvania, and the Force Aircraft Test Squadron at the Naval Air Warfare Center in Patuxent River, Maryland, where she conducted research in adaptive automation, situational awareness, aircrew training and simulation, advanced technology displays for tactical aircraft, and eye movement behavior analysis. Dr. Carmody-Bubb left active service in 1999, remaining in the Reserves. Her experience in the active reserves included a two-year tour with the 4th Marine Air Wing Medical Group at Marine Air Group (MAG) 41, where she served as Aeromedical Safety Officer. She retired at the rank of Commander. She has authored and co-authored over 30 published journal articles and conference proceedings, in addition to several paper presentations and technical reports. She has chaired over 45 doctoral dissertations. Dr. Carmody-Bubb is currently a Professor in the Department of Leadership Studies, School of Business and Leadership at Our Lady of the Lake University in San Antonio, Texas. She is married to Michael E. Bubb, with whom she shares four children.
Part I
Common Origins: Applied Human Factors, Systems Engineering and Complexity Science
Chapter 1
Introduction
On January 15, 2009, US Airways Flight 1549 out of LaGuardia in New York struck a flock of geese and lost all engine power. Captain Chesley “Sully” Sullenberger and co-pilot Jeffrey Skiles were able to fly the Airbus A320 as a glider and safely land the plane on the Hudson River with no loss of life among the crew or 155 passengers. The incident became known as “The Miracle on the Hudson” (Fig. 1.1). But this was no miracle. This was the culmination of nearly a century of systematic exploration into the nature of human decision making, including related errors, in complex adaptive systems.
Complex Adaptive Systems
Sardone and Wong (2010) indicate that complex adaptive systems (CASs) involve interacting elements that are sensitive to small changes that “can produce disproportionately major consequences” (p. 4). While the specific term is relatively new and will be described in more detail later, it involves an approach that applies decades of research in systems engineering and human factors to the study of human behavior within any domain that involves multivariate, interacting systems and subsystems. These domains can be biological, social, environmental, and/or organizational. Because of the rapidly changing nature of such systems, when applied to organizational and strategic decision making, critical decision makers within CAS domains must be alert to emerging solutions rather than current or best practices (Sardone & Wong, 2010; Snowden & Boone, 2007).
Fig. 1.1 U.S. Airways Flight 1549. Note. Greg L. (2009). U.S. Airways Flight 1549 in the Hudson River. Retrieved from https://commons.wikimedia.org/wiki/File:US_Airways_Flight_1549_(N106US)_after_crashing_into_the_Hudson_River_(crop_4).jpg. CC BY 2.0
Improvements in aviation safety are no accident. “In 1959, there were 40 fatal accidents per one million aircraft departures in the US.” In 2017, there were 6 fatal accidents per 4.1 billion passengers worldwide. While technology has helped drive improvements in the aviation industry’s safety record, great strides in safety management systems and insights into human factors have also contributed. The safety culture of aviation has changed, especially since the early 1970s. “Improved safety is also a reflection of the aviation industry’s … increasing ability to identify problems before they become a significant issue” (Collins, 2015, pp. 23–24).
Captain Chesley Sullenberger was not only an expert pilot, but he was also an expert in aviation safety, particularly Crew Resource Management (CRM). “Crew resource management is a team-oriented concept of error management that originated in the aviation industry and has since been adopted in other demanding, high risk and high stress environments” (Carhart, n.d., para. 5). Captain Sullenberger helped develop the first CRM course at his airline and was one of the first instructors in CRM. As an experienced pilot and, in particular, trainer in CRM, Captain Sullenberger would have been very familiar with human factors research, which, since its inception, was integral to aviation research, particularly in the areas of safety, human performance, and human decision making. Over a century of research has systematically examined human behavior in complex systems in order to create a learning culture that not only recognizes and learns from past errors but also encourages the flow of arguments and ideas to facilitate innovations in organizational and technological improvements. But it is only fairly recently that we have begun to recognize the importance of viewing organizations “through a specific lens of complex adaptive systems” (Mauboussin, as cited in Sullivan, 2011, para. 1).
Chapter Summary
This book will explain some of the biological and sociological aspects of human behavior that help predict individual, organizational, and strategic decision making. It discusses models from systems engineering and human factors that have been successfully applied to strategic decision making in various complex organizations. The underlying factor is the same—the human factor in making and executing decisions. The keys are communication, coordination, and adaptability.
References
Carhart, E. (n.d.). Applying crew resource management in EMS: An interview with Capt. Sully. EMS World. Retrieved November 29, 2022, from https://www.emsworld.com/article/12268152/applying-crew-resource-management-in-ems-an-interview-with-capt-sully
Collins, S. (2015). Safer skies. Allianz Global Corporate & Specialty, 1, 22–24.
Greg, L. (2009, January 15). U.S. Airways Flight 1549 in the Hudson River [Photograph]. Retrieved November 9, 2022, from https://commons.wikimedia.org/wiki/File:US_Airways_Flight_1549_(N106US)_after_crashing_into_the_Hudson_River_(crop_4).jpg. Licensed under Creative Commons Attribution 2.0 (CC BY 2.0), https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons.
Sardone, G., & Wong, G. S. (2010, October). Making sense of safety: A complexity-based approach to safety interventions. In Proceedings of the Association of Canadian Ergonomists 41st annual conference. Association of Canadian Ergonomists.
Snowden, D. J., & Boone, M. E. (2007). A leader’s framework for decision making. Harvard Business Review, 85(11), 68. https://doi.org/10.1016/S0007-6813(99)80057-3
Sullivan. (2011, September). Embracing complexity. Harvard Business Review. https://hbr.org/2011/09/embracing-complexity
Chapter 2
What Is a Complex Adaptive System?
Defining Systems
Let’s start at the end, by defining a system. Complexity science is, essentially, the study of complex adaptive systems. It is particularly relevant to human factors and behavioral research because, as described by Chan (2001), “Complexity results from the inter-relationship, inter-action and inter-connectivity of elements within a system and between a system and its environment” (p. 1).
But what do we mean by a system? As it turns out, defining a system is not a particularly easy task. Park (2020) described a system as “an abstract or vague entity” (p. 1). Park adopts the approach of using a non-definition to explain a system: “A nonsystem can be represented by a set of isolated entities that do not interact with each other or a collection of entities whose relationships have no implications for the properties or behaviors of the entities” (p. 1). Hence, one could conclude that a system is a collection of entities that do interact with each other and whose relationships do have implications for behavior.
Rosen (1987) argued that “complex and simple systems are of a fundamentally different character” (p. 129). According to Hastings (2019), “Complex systems are systems consisting of many interconnected elements interacting with one another such that changes in some elements or their relationships generate changes in other parts of the system. Often, these interactions are non-linear in nature” (p. 4).
The origins of what would become known as complex adaptive systems can be found in what Braithwaite et al. (2018, p. 3) called the “pioneering work” of Checkland (Checkland & Scholes, 1999), which began in the 1960s, though I would argue, of course, that they began much earlier, with the work of the human factors pioneers in the 1940s. The reader may be familiar with the somewhat loose terms, “hard” and “soft” sciences. Checkland’s “approach differentiated between hard systems, represented by relatively rigid techniques, technology, artifacts, and equipment, and soft systems, which involve the learning that occurs in fuzzy, ill-defined circumstances as people navigate across time in messy ecosystems” (Braithwaite et al., p. 3).
Braithwaite et al. (2018) describe the distinctions between what they call traditional approaches and those approaches formed within the framework of complex adaptive systems:
Traditionally, people have studied parts of a system (the people, the intervention, the outcomes) as distinct variables… In complexity science, while the components of a system, namely the agents and their artefacts, are important, they are often secondary to the relationships between these components. In such systems, agents communicate and learn from each other and from their environment, and adjust their behavior accordingly. However, there are many cross-cutting interconnections and influences (p. 5).
A close analogy might be a jigsaw puzzle, although one that is still woefully inadequate. With a jigsaw puzzle, you have multiple pieces, and you try to put them together in various combinations until a pattern begins to emerge. The problem with complex adaptive systems, however, is that the puzzle pieces can change, depending on how they are arranged and in what combinations. This may seem an impossible task, and the puzzle is likely more akin to a Rubik’s cube – multi-dimensional and more dynamic, with a greater variety of combinations. Still, the actual pieces of a Rubik’s cube do not change, whereas in a complex adaptive system—or rather, in measurement of variables within complex adaptive systems—factors can change. Moreover, the role that each piece plays in solving the puzzle can change, depending on the combinations of pieces that have been assembled. Braithwaite et al. (2018) go on to describe a CAS, similar to a living organism, as having the “capability to self-organize, accommodate to behaviors and events, learn from experience, and dynamically evolve” (p. 5). While uncovering patterns can be difficult, I would challenge the conclusion that such dynamic evolution cannot be “forecast with any degree of confidence” (p. 5). We make predictions involving complex, multivariate relationships all the time in science, and uncertainty is part and parcel of the process.
Li Vigni (2021) argues that complexity science began to solidify as a discipline in the United States and Europe in the 1970s. Although his article goes on to criticize the discipline for what he argues are unsubstantiated claims about its ability to greatly improve applications across a variety of fields, Li Vigni credits the Santa Fe Institute with spreading the study of complex adaptive systems through its formulation of the term “complexity science” in the mid-1980s. On its “About” page, the Santa Fe Institute’s website describes itself as “an independent, nonprofit research and education center that leads global research in complexity science” (para. 1). The institute describes complex systems as “any system in which its collective, system-wide behaviors cannot be understood merely by studying its parts or individuals in isolation” (Santa Fe Institute, 2022, para. 3). The Santa Fe Institute was founded by research scientist George Cowan in 1984, with the help of Nobel Prize-winning scientists, including physicists Phil Anderson and Murray Gell-Mann, along with economist Ken Arrow (Santa Fe Institute, 2022).
While often associated with mathematics and computer science, complex systems are not the sole domain of these disciplines. According to Chan (2001), “Many natural systems (e.g., brains, immune systems, ecologies, societies) … are characterized by apparently complex behaviors that emerge as a result of often nonlinear spatio-temporal interactions among a large number of component systems at different levels of organization” (p. 1). Two of the core properties of complex adaptive systems are variety and interdependence. “Variety refers here to the many possible alternative states of the system and its parts, and interdependence to the intricate intertwining or interconnectivity between different actors and components within a system and between a system and its environment” (Raisio et al., 2020, p. 3).
Criticisms and Cautions of Complexity Theory
One of the criticisms of complexity theory is that it is difficult, if not impossible, to measure (e.g., Clark & Jacques, 2012). A similar caution is posed by Manson (2001): “The exact nature of complexity research is hard to discover due to the large degree to which complexity ideas are traded across disciplinary boundaries” (p. 405). Indeed, the Santa Fe Institute claims to have made progress in many areas of applied research. While it is true that the website claims to be able to solve problems from pandemics to urban planning, perhaps it is that wide, overarching approach that is a problem, rather than the principles of complexity theory. Indeed, Manson offers “three major divisions” to help bound complexity theory. He argues that, rather than examining complexity on a per-discipline basis, making these divisions allows for a “more coherent understanding of complexity theory” (p. 405). The three divisions Manson proposes are:
Algorithmic complexity, in the form of mathematical complexity theory and information theory, contends that the complexity of a system lies in the difficulty faced in describing system characteristics.
Deterministic complexity deals with chaos theory and catastrophe theory, which posit that the interaction of two or three key variables can create largely stable systems prone to sudden discontinuities.
Aggregate complexity concerns how individual elements work in concert to create systems with complex behavior.
It is the latter division that is the focus of this book. With respect to organizational behavior and decision making, I think the strength of complexity theory lies in its ability to provide a framework. It provides a tool to guide strategists and decision makers, as well as researchers in the field, based on the current understanding of how the human mind processes information, as well as how variables interact within systems. Barton (1994) notes that the implications of this interaction and interdependence of multiple variables prompted the realization “that studying each factor in isolation may not lead to useful knowledge about the behavior of the system as a whole. This concept, long a tenet of general systems theory, has now been unequivocally demonstrated in complex nonlinear systems” (Barton, 1994, p. 7).
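Barton’s point about studying factors in isolation can be made concrete with a small simulation. The sketch below is my own illustration in Python rather than an example from Barton or from this book, and its variable names and numbers are arbitrary placeholders. It constructs an outcome that depends only on how two independent factors interact: examined one at a time, each factor appears to have no effect, yet a model that respects their interdependence explains most of the variance.

```python
# Minimal simulation sketch (illustrative only): an outcome driven purely by the
# interaction of two factors looks unrelated to each factor studied in isolation.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
x1 = rng.normal(size=n)                         # hypothetical factor 1
x2 = rng.normal(size=n)                         # hypothetical factor 2, independent of x1
y = x1 * x2 + rng.normal(scale=0.5, size=n)     # outcome depends on the interaction

# Each factor "in isolation" shows essentially zero correlation with the outcome.
print("corr(y, x1):", round(np.corrcoef(y, x1)[0, 1], 3))
print("corr(y, x2):", round(np.corrcoef(y, x2)[0, 1], 3))

# A model that includes the interaction term recovers a strong, systematic relationship.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients [intercept, x1, x2, x1*x2]:", np.round(beta, 2))
print("R^2 with interaction term:", round(r_squared, 3))   # roughly 0.8 in this setup
```

The particular numbers do not matter; the pattern does. In an interdependent system, finding “no effect in isolation” does not mean a factor is irrelevant to the behavior of the whole.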
Why Should People Interested in Organizational Decision Making Be Aware of Complex Adaptive Systems?
I largely agree with this statement, though I would not state so definitively that cause and effect are not comprehensible in complex adaptive systems; just that they are much more difficult to establish, and may not involve direct one-to-one cause- effect, as will be discussed in more detail in a later chapter. To illustrate the dangers of misunderstanding complexity, Mauboussin provides a case study that occurred in the late 1880s, when Yellowstone National Park Rangers tried to improve the population of elk through hand feeding. “The elk population swelled and the elk started eating aspen trees, and aspen trees were what the beavers were using to build their dams, and the beaver dams caught the runoff in the spring, which allowed trout to spawn. More elk equaled less trout.” (Mauboussin interview, Sullivan, 2011). One seemingly simple decision can have reverberations throughout the entire system. Indeed, in viewing the current Covid pandemic through the lens of human behavior within complex adaptive systems, Sturmberg and Martin (2020) explain, In general, we are not good at seeing and comprehending the complexities in issues, and we have great difficulties in managing their underlying dynamics into the future. The human brain has not evolved to keep all components of a problem in mind and to appreciate their changing dynamics more than two or three steps ahead (p. 1361).
Chapter Summary Complex adaptive systems describe the general makeup of most relationships in the natural world, including those involving human behavior within socio-technical systems. Such relationships are multivariate and often non-linear, meaning there are
References
11
multiple, interacting variables, and changes in one variable (or even a set of variables) may not predict consistent or proportional changes in an outcome variable. Complexity science is an approach to research that focuses on behaviors within these complex adaptive systems, particularly with underlying patterns of relationships.
References
Barton, S. (1994). Chaos, self-organization, and psychology. American Psychologist, 49(1), 5–14. https://doi.org/10.1037/0003-066x.49.1.5
Braithwaite, J., Churruca, K., Long, J. C., Ellis, L. A., & Herkes, J. (2018). When complexity science meets implementation science: A theoretical and empirical analysis of systems change. BMC Medicine, 16(1). https://doi.org/10.1186/s12916-018-1057-z
Chan, S. (2001). Complex adaptive systems. Research Seminar in Engineering Systems, 31, 1–9. MIT. http://web.mit.edu/esd.83/www/notebook/NewNotebook.htm
Checkland, P., & Scholes, J. (1999). Soft systems methodology in action. Wiley.
Clark, J. B., & Jacques, D. R. (2012). Practical measurement of complexity in dynamic systems. Procedia Computer Science, 8, 14–21. https://doi.org/10.1016/j.procs.2012.01.008
Hastings, A. P. (2019). Coping with complexity: Analyzing unified land operations through the lens of complex adaptive systems theory. School of Advanced Military Studies, US Army Command and General Staff College. https://apps.dtic.mil/sti/pdfs/AD1083415.pdf
Li Vigni, F. (2021). The failed institutionalization of “complexity science”: A focus on the Santa Fe Institute’s legitimization strategy. History of Science, 59(3), 344–369. https://doi.org/10.1177/0073275320938295
Manson, S. M. (2001). Simplifying complexity: A review of complexity theory. Geoforum, 32(3), 405–414. https://doi.org/10.1016/s0016-7185(00)00035-x
Park, C. (2020). Evolutionary understanding of the conditions leading to estimation of behavioral properties through system dynamics. Complex Adaptive Systems Modeling, 8(1), 1–25. https://doi.org/10.1186/s40294-019-0066-x
Raisio, H., Puustinen, A., & Jäntti, J. (2020). “The security environment has always been complex!”: The views of Finnish military officers on complexity. Defence Studies, 20(4), 390–411. https://doi.org/10.1080/14702436.2020.1807337
Rosen, R. (1987). On complex systems. European Journal of Operational Research, 30(2), 129–134. https://doi.org/10.1016/0377-2217(87)90089-0
Santa Fe Institute. (2022). About. Retrieved November 15, 2021, from https://www.santafe.edu/about/overview
Sturmberg, J. P., & Martin, C. M. (2020). COVID-19 – How a pandemic reveals that everything is connected to everything else. Journal of Evaluation in Clinical Practice. https://doi.org/10.1111/jep.13419
Sullivan. (2011, September). Embracing complexity. Harvard Business Review. https://hbr.org/2011/09/embracing-complexity
Chapter 3
Evolution from Linear to Systems Thinking
Linear Versus Nonlinear
Traditional approaches to scientific methodology in the early twentieth century were rooted in what can be described as linear systems. Linear systems are those in which there are relatively few predictor or independent variables; they can be relatively easily measured and quantified, and changes within the predictor variables produce readily predictable and proportionate changes in the dependent variables. Hastings (2019) contends “the distinction between linear and nonlinear dynamics in the interactions between components of a system is critical and merits further explanation prior to exploration of complex adaptive systems” (p. 4). Citing Czerwinski (1998), Hastings outlines the characteristic features of linear dynamics as “proportionality, additivity, replication, and demonstrability of cause and effect” (p. 4). Proportionality implies small outputs should be expected from small inputs, and likewise, large outputs should be expected from large inputs. With additivity, the whole should equal the sum of the parts. Cause-effect relationships should be both easily determined and, with replication, “one should expect the same results or outcomes from any action that begins under the same conditions. As such, causes and their associated effects are demonstrable in such systems” (p. 5). In contrast, nonlinear dynamics are disproportionate, non-additive, and often difficult to replicate. Furthermore, “the non-linear nature of complex systems makes observation of the cause-and-effect relationships of interactions of its elements and, thus, predictions of their overall behavior exceedingly challenging” (Hastings, 2019, p. 5).
The linear systems approach served well for much scientific work that occurred in the natural and physical sciences up to this point. Systems thinking, on the other hand, began to understand the role of the multiple variables that interacted in ways that were not always consistent, more difficult to measure or quantify, and therefore, more difficult to predict.
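Hastings’ contrast between linear proportionality and nonlinear sensitivity can be illustrated numerically. The short Python sketch below is my own illustration, not an example drawn from Hastings or Czerwinski. It compares a simple linear system, in which doubling the input doubles the output, with the classic logistic map, a textbook nonlinear system in which changing the starting condition by one part in ten thousand produces a completely different state a few dozen steps later.

```python
# Illustrative sketch only: proportional response in a linear system versus
# disproportionate sensitivity to small changes in a simple nonlinear system.

def linear_system(x, gain=2.0):
    """Linear dynamics: output is always proportional to input."""
    return gain * x

def logistic_map(x0, r=3.9, steps=30):
    """Nonlinear dynamics: iterate x_{t+1} = r * x_t * (1 - x_t)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Proportionality holds for the linear system: doubling the input doubles the output.
print(linear_system(0.10), linear_system(0.20))

# In the nonlinear system, two starting points that differ by 0.0001
# end up in very different states after 30 iterations.
print(round(logistic_map(0.1000), 4))
print(round(logistic_map(0.1001), 4))
```

This is the sense in which small inputs to a nonlinear system can produce disproportionately large, hard-to-replicate outputs.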
According to Braithwaite et al. (2018), Recognition came from the knowledge and understanding of human systems that had been accumulating in sociology, ecology, and evolutionary biology ever since the 1940s, and with antecedents even earlier, which we can loosely call ‘systems thinking’…Essentially, that all systems are composed of a set of seemingly discrete but actually interdependent components, defined not just by their inter-relations but by the permeable and shifting boundaries between them (p. 3).
Braithwaite et al. (2018) credit Von Bertalanffy’s (1972) ideas for the development of “general systems theory” in 1946. Drawing from earlier scientific work in the biological and social sciences, as well as mathematics, Von Bertalanffy constructed the theory by “applying universal principles and espousing the ontological underpinnings for the interactive and dynamic nature of social organization and structuring” (p. 3).
Jurassic Park, Complex Adaptive Systems, and Covid: What Do They Have in Common?
In the summer of 1993, an adaptation of Michael Crichton's (1990) best-seller, Jurassic Park, hit the big screen, under the direction of Steven Spielberg, becoming a classic blockbuster. In a pivotal scene in the movie, the character of Dr. Ian Malcolm, played by actor Jeff Goldblum, explains the dangers of tampering with one or a few variables in a complex adaptive system. In a nutshell, most relationships, including cause-effect relationships, exist in complex, multivariate, interacting systems. In the book, and the movie adaptation, the scientists attempt to control reproduction of the dinosaurs on the island by only producing the female sex. However, they introduce into the cloning process amphibian genes and unknowingly create the opportunity for the animals to adapt to what appeared to be a closed system by changing their sex, in order to reproduce. "The kind of control you are attempting is not possible…Life will find a way," according to the character of Dr. Ian Malcolm. That is the key to complex adaptive systems. In a multivariate system in which variables interact with each other, it can be very difficult to predict relationships, particularly cause-effect. It's not impossible, but it is complicated. This is the reason medical research is often very frustrating; one study finds drinking coffee daily reduces heart disease, while another study finds it increases it. Change the set of variables in any regression equation, and you change the outcome. This is not to say it isn't possible to find consistent, predictable relationships, just that it takes a lot of time and a lot of analysis of multiple interacting variables before patterns begin to emerge to the point where we can make confident predictions. This has been quite evident in the recent Covid pandemic. While we may wish to focus on only a small set of variables that have clear and direct cause-effect relationships, the fact is, most relationships in the natural world, outside of classic Newtonian mechanics, do not involve direct, one-to-one cause-effect. This is the entire premise of a systems engineering approach; one must
examine complex adaptive systems as a Gestalt. The whole is greater than the sum of its parts. Malcolm Gladwell begins his 2000 book The Tipping Point with a discussion involving an epidemic of syphilis in Baltimore in the mid-1990s. He describes three different hypotheses, based on the assessments of three different experts in epidemiology and disease control. All presented valid, logical arguments, though they were dramatically different in terms of the proposed causes. What they shared, however, was the disproportionate, nonlinear effect relatively small changes in the environment can have on outcomes within a complex adaptive system. "It takes only the smallest of changes to shatter an epidemic's equilibrium" (Gladwell, 2000, p. 15). This quote captures much of the essence of non-linear thinking. Likewise, Gladwell recognizes not only the multivariate and interacting factors in complex adaptive systems, but the particular role of the human factor. "There is more than one way to tip an epidemic, in other words. An epidemic is a function of the people who transmit the infectious agents, the infectious agent itself, and the environment in which the infectious agent is operating" (p. 18). With that in mind, consider the complexity needed to respond to the current Covid pandemic. Indeed, Sturmberg and Martin (2020) have stated, "The emergence of a coronavirus (SARS-CoV-2) with novel characteristics that made it highly infectious and particularly dangerous for an older age group and people with multiple morbidities brought our complex adaptive system (CAS) 'society'—the economy, health systems, and individuals—to a virtual standstill" (p. 1361). In both the book and movie adaptation of Jurassic Park, the scientists have chosen to focus on a direct, clear cause-effect relationship, that between biological sex and reproduction in higher phylogenetic species, in an attempt to both enhance predictability within the system and control the population of dinosaurs. Likewise, the worldwide focus of the pandemic has been on controlling the spread of one organism, when in fact we are dealing with multiple complex adaptive systems, including both the virus and its host. Like anything else, the immune system needs exercise—that's largely the foundation of any vaccination program. Nietzsche's "What doesn't kill me makes me stronger" (Nietzsche, 1889/1997, p. 6), while not intended solely or necessarily for biological systems, nonetheless applies to immune systems. Vaccines are typically a safer form of exposure. However, SARS-CoV-2 is not the only organism out there. Microorganisms are vast in number, and they are adaptive. Organisms that seem relatively benign are often so only because we have built up immunity to them. An interesting example pertains to an epidemiological theory regarding the spread of polio in the twentieth century. Prior to that time, the devastating effects of polio with which many are now familiar were not particularly common. But much would change—for the most part in beneficial ways—with the advent of antiseptics in 1865. It was in that year that Dr. Joseph Lister, inspired by the pioneering work of Louis Pasteur on germ theory, became the first to sterilize his operating theater using antiseptic. Dr. Joseph Lawrence, co-founder of Johnson & Johnson, would later name the first widely marketed antiseptic product, Listerine, after Dr. Lister (Shaban et al., 2020).
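The point made earlier in this section, that changing the set of variables in a regression equation changes the outcome, can be sketched with a small simulation. The scenario, numbers, and variable names below are hypothetical toys of mine, echoing the coffee-and-heart-disease example above, not data from any study cited in this chapter.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
exercise = rng.normal(0.0, 1.0, n)                  # unmeasured third variable
coffee = 0.8 * exercise + rng.normal(0.0, 1.0, n)   # coffee intake tracks exercise in this toy
heart_risk = -1.0 * exercise + 0.1 * coffee + rng.normal(0.0, 1.0, n)

def coffee_coefficient(predictors):
    # Ordinary least squares with an intercept; returns the estimated coefficient on coffee.
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, heart_risk, rcond=None)
    return round(float(beta[1]), 2)

print("coffee-only model:      ", coffee_coefficient([coffee]))            # negative: coffee looks protective
print("coffee + exercise model:", coffee_coefficient([coffee, exercise]))  # about +0.1: the sign reverses

In the single-predictor model, coffee appears protective only because it is correlated with exercise; once the second variable enters the model, the estimated effect reverses sign. Different variable sets can yield different conclusions.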
The Law of Unintended Consequences
The early twentieth century marked a time when household use of antiseptics became widespread. This was also a time when polio infections among children increased dramatically, with devastating results. In what could be called the "too clean theory" of polio, some epidemiologists have suggested that the rise was at least partially due to the possibility that, prior to this time, infants and toddlers were exposed to the germ that causes polio early, in their homes, while still under the protection of the antibodies present in their mother's breastmilk. As such, they were able to build up their immunity in a safe manner. However, with the advent of household disinfectants, children began to see their first exposure to the germ much later in childhood, when they began playing outside, at which time they were no longer under the protection of their mother's immunity (Bloomfield et al., 2006). This brings us to the current pandemic. In the summer of 2021, hospitals saw a dramatic rise in infants hospitalized with RSV, or respiratory syncytial virus (Meyer et al., 2022). Some began to suggest, at that time, that this rise could be due to what is officially known as the hygiene hypothesis (Camporesi et al., 2022). Recent reports in the epidemiology literature have raised alarms regarding dramatic increases in hepatitis among toddlers and young children (Hicks, 2022; Uwishema et al., 2022). Initial theories that the rise in hepatitis cases may be linked to either Covid or the vaccines were not supported by the data. Hepatitis is typically caused by one of several viruses, labeled hepatitis A, B, and C, two of which are routinely vaccinated against. What initial reports suggest is that this rise in cases is being caused by common adenoviruses. In other words, children are ending up seriously ill, sometimes dying or needing liver transplants, from exposure to a type of virus that causes the common cold. Epidemiologists have started to invoke the "too clean," or "hygiene," hypothesis, suggesting this may, indeed, be linked to pandemic lockdowns and masking. Whatever the eventual conclusions regarding the increase in worldwide hepatitis cases, this demonstrates how scientists must be open to multiple and competing hypotheses, as well as monitor emerging patterns in the data. Policy and decision making must consider the many variables within this very complex and fluid dynamic. A systems view considers not only the virus and how to control it, but the various consequences, intended or unintended, that could possibly occur from interactions among multiple variables. A basic rule of thumb of a systems approach could be: do not just think about the obvious effects, which are typically the intended effects, of a given change or control within a system. One must consider the possible ramifications and be prepared for, and monitor, the unexpected. While this is a daunting task, an understanding of research methodologies and statistics can help, as can tools that make this process more systematic and effective, rather than reactive. One example of such a tool is the Cynefin Framework (Snowden & Boone, 2007), discussed in Chap. 18. These tools have their origins in the disciplines of human factors and systems engineering.
Chapter Summary
Classic and traditional sciences and methods presume, to a large degree, linear systems with relatively simple relationships (i.e., direct, 1:1 cause-effect relationships, or relationships among relatively few variables). Such an approach is inadequate in the face of most real-world research problems. The latter are more likely to involve multiple variables that interact with each other, and prediction models can change depending on the combinations of variables in the model. While there are various methods of addressing research within such multivariate real-world systems, an overarching theoretical framework of complex adaptive systems helps to provide the lens through which to find underlying patterns as clues to potential relationships.
References
Bloomfield, S. F., Stanwell-Smith, R., Crevel, R. W., & Pickup, J. (2006). Too clean, or not too clean: The hygiene hypothesis and home hygiene. Clinical and Experimental Allergy: Journal of the British Society for Allergy and Clinical Immunology, 36(4), 402–425. https://doi.org/10.1111/j.1365-2222.2006.02463.x
Braithwaite, J., Churruca, K., Long, J. C., Ellis, L. A., & Herkes, J. (2018). When complexity science meets implementation science: A theoretical and empirical analysis of systems change. BMC Medicine, 16(1). https://doi.org/10.1186/s12916-018-1057-z
Camporesi, A., Morello, R., Ferro, V., Pierantoni, L., Rocca, A., Lanari, M., Trobia, G. C., Sciacca, T., Bellinvia, A. G., Ferrari, A. D., Valentini, P., Roland, D., & Buonsenso, D. (2022). Epidemiology, microbiology and severity of bronchiolitis in the first post-lockdown cold season in three different geographical areas in Italy: A prospective, observational study. Children, 9(4), 491. https://doi.org/10.3390/children9040491
Crichton, M. (1990). Jurassic Park. Alfred A. Knopf, Inc.
Czerwinski, T. J. (1998). Coping with the bounds: Speculations on non-linearity in military affairs (pp. 8–9). National Defense University.
Gladwell, M. (2000). The tipping point: How little things can make a big difference. Little Brown.
Hastings, A. P. (2019). Coping with complexity: Analyzing unified land operations through the lens of complex adaptive systems theory. School of Advanced Military Studies, US Army Command and General Staff College. https://apps.dtic.mil/sti/pdfs/AD1083415.pdf
Hicks, L. (2022, May 3). Unexplained hepatitis cases in children reported in 10 US states, more than 200 worldwide. Medscape. https://www.medscape.com/viewarticle/973310
Meyer, M., Ruebsteck, E., Eifinger, F., Klein, F., Oberthuer, A., van Koningsbruggen-Rietschel, S., Huenseler, C., & Weber, L. T. (2022). Morbidity of Respiratory Syncytial Virus (RSV) infections: RSV compared with severe acute respiratory syndrome Coronavirus 2 infections in children aged 0–4 years in Cologne, Germany. The Journal of Infectious Diseases. https://doi.org/10.1093/infdis/jiac052
Nietzsche, F. (1997). Twilight of the idols (R. Polt, Trans.). Hackett Publishing Company, Inc. (Original work published in 1889).
Shaban, Y., McKenney, M., & Elkbuli, A. (2020). The significance of antiseptic techniques during the COVID-19 pandemic: Joseph Lister's historical contribution to surgery. The American Surgeon, 1–2. https://doi.org/10.1177/0003134820984876
Snowden, D. J., & Boone, M. E. (2007). A leader's framework for decision making. Harvard Business Review, 85(11), 68.
Spielberg, S. (Director), Kennedy, K., Molen, G. R. (Producers), Crichton, M., & Koepp, D. (Screenplay). (1993). Jurassic Park [Film]. Universal Pictures.
Sturmberg, J. P., & Martin, C. M. (2020). COVID-19 – How a pandemic reveals that everything is connected to everything else. Journal of Evaluation in Clinical Practice. https://doi.org/10.1111/jep.13419
Uwishema, O., Mahmoud, A., Wellington, J., Mohammed, S. M., Yadav, T., Derbieh, M., Arab, S., & Kolawole, B. (2022). A review on acute, severe hepatitis of unknown origin in children: A call for concern. Annals of Medicine and Surgery, 104457. https://doi.org/10.1016/j.amsu.2022.104457
Von Bertalanffy, L. (1972). The history and status of general systems theory. Academy of Management Journal, 15(4), 407–426. https://doi.org/10.5465/255139
Chapter 4
Emergence of a New Discipline for the Twentieth Century: Human Factors and Systems Engineering
Early Developments
In 1950, an American film directed by Walter Lang and starring Clifton Webb and Myrna Loy was released. It was called Cheaper by the Dozen, and it was a comedy about a couple raising their 12 children during the social upheaval of the 1920s and the father's ill-fated attempts to apply his knowledge as an efficiency expert to streamline the functioning of his unusually large family. But undergirding the film's lightheartedness was a fascinating couple at the forefront of a rapidly changing scientific outlook. Frank and Lillian Gilbreth were pioneers in a new field of science that combined elements of engineering and psychology, and which would come to be known as Human Factors Engineering. Frank Gilbreth was famous as an entrepreneur, efficiency expert, and pioneer of the emerging human factors discipline. But perhaps more fascinating, given the time period, was the life of Lillian Moller Gilbreth. Born in 1878 in Oakland, California, she would be the oldest of nine children. Influenced by her aunt and namesake, Lillian Delger Powell, who had studied under Sigmund Freud, the younger Lillian enrolled at the University of California, Berkeley, where one of her professors described her to her father as "one of the best minds in the Freshman class" (Vasquez, 2007, p. 49). Upon graduating with a bachelor's degree in English and Poetry, Lillian Moller pursued a graduate degree in English at Columbia University. However, due to a series of events, including a professor who refused to allow women in his classroom, she ended up taking a Psychology class under the famous forerunner of operant conditioning, Edward Thorndike, a fortuitous detour that would influence her later in life. In 1903, the shy Lillian met the outgoing and dynamic Frank Gilbreth, an engineer and businessman who had a special interest in increasing efficiency. The couple married in 1904 and immediately began to collaborate, publishing several articles together (Gibson et al., 2015; Vasquez, 2007). According to Gibson et al. (2015), "While Frank was
interested in the technical side of worker efficiency, Lillian was focused on the human element of time and motion study" (p. 291). One of their most enduring legacies (besides their 12 children!) was the practice of nurses handing surgeons the needed instruments in operating rooms, in order to increase efficiency and decrease the patient's time under anesthesia (Baumgart & Neuhauser, 2009). Shortly before the birth of their sixth child, Lillian graduated from Brown University with a PhD in Psychology. Her dissertation work would later be published as The Psychology of Management: The Function of the Mind in Determining, Teaching and Installing Methods of Least Waste (1914), securing her position as one of the pioneers of what would become Industrial-Organizational Psychology (Gibson et al., 2015; Wren, 2005). In addition to birthing and mothering 12 children, Lillian applied her knowledge of both engineering and psychology to the emerging field of domestic science. For example, she pioneered the foot control for the kitchen trash can, still the most popular design used in kitchens today (Gibson et al., 2015). When Frank died suddenly and unexpectedly in 1925, Dr. Lillian Gilbreth became the sole breadwinner for her family of 13. She circumvented resistance to women in STEM by aligning herself with the growing household efficiency movement, popularized, in part, by Ladies Home Journal magazine. Dr. Lillian Moller Gilbreth would become known as the "mother of scientific management" (Perloff & Namen, 1996). Dr. Gilbreth would also become the first female member of the Society of Industrial Engineers in 1921; the first female member of the American Society of Mechanical Engineers; the first woman to receive the degree Honorary Master of Engineering (from the University of Michigan); the first female professor of management at an engineering school (at Purdue University in 1935); and the first female professor of management at Newark College of Engineering (Wren, 2005, p. 164).
According to the JB Speed School of Engineering at the University of Louisville, Most analysts agree that the IE profession grew out of the application of science to the design of work and production systems. The pioneers who led that effort were Frederick W. Taylor …who initiated the field of work measurement, Frank Gilbreth and his wife, Lillian Gilbreth …who perfected methods improvement, and Henry Gantt who pioneered the field of project management (History of Industrial Engineering at UOFL, n.d., para. 2).
There are several interrelated fields where terms are often used interchangeably, demonstrating their highly interdisciplinary nature. Based on collaborations and discussions among scientists in various fields, “Research in the US and UK concerning real work in real environments during and after WWII formed the beginnings of the discipline that was termed ‘human factors’ (US) and ‘ergonomics’ (UK)” (Shorrock, 2018). According to Hawkins (1987), “Before the availability of dedicated Human Factors courses, those wishing to enter the field originated from different disciplinary backgrounds. Most commonly, these were engineering, psychology, and medicine” (p. 20). Human factors engineering, according to Hawkins, evolved over about the past 100 years, beginning in large part, with the time and motion studies of Gilbreth and Gilbreth. “It was in the 1880s and 1890s that Taylor and the Gilbreths started, separately, their work on time and motion studies in industry. And at about the same time academic work was being carried out by, amongst
others, Galton on intellectual differences and by Cattell on sensory and motor capacities." (p. 16). World War I provided further impetus, as widespread aptitude testing emerged. "In the USA from 1917 to 1918 two million recruits to the forces were given intelligence tests so as to assign them more effectively to military duties" (Hawkins, 1987, 1993, p. 16). In the United Kingdom, Cambridge had established, in the late 1800s, an experimental psychology laboratory, and "progress made during the war resulted, in 1921, in the foundation in the United Kingdom of the National Institute for Industrial Psychology which made available to industry and commerce the results of experimental studies" (p. 16). To some extent, this signaled an important growth period for what would become the interdisciplinary fields of industrial-organizational psychology, industrial engineering, and human factors engineering. It was during the 1920s that a series of studies was conducted on the effects of various motivating factors on human work performance and efficiency at the Hawthorne Works of Western Electric in the United States. The term "Hawthorne effect" is fairly ubiquitous in the modern scientific lexicon, and the related "placebo effect" is arguably even in the popular lexicon. Specifically, the Hawthorne studies involved a series of manipulations intended to increase worker productivity. What they discovered was that any change to the workers' environment (e.g., increasing or decreasing lighting, playing music or maintaining silence) increased productivity. In other words, the change itself was a motivator of improved performance. This was a very important discovery, with implications for experimental design. The Hawthorne studies were among the first to really concentrate specifically on this emerging science, for two primary reasons. First, the goal of the studies was then, as it is still in human factors engineering, to "study the factors and development of tools that facilitate the achievement" (Wickens et al., 2004, p. 2) of optimal performance, safety, and user satisfaction, particularly as it involves humans interacting with machines and technology. Secondly, the Hawthorne studies may be considered among the first to focus directly on the intermediating factors or variables, rather than direct stimulus-response, cause-effect. In other words, "It was determined that work effectiveness could be favourably influenced by psychological factors not directly related to the work itself" (Hawkins, 1987, 1993, p. 16). According to Hawkins, "A new concept of the importance of motivation at work was born and this represented a fundamental departure from earlier ideas which concentrated on the more direct and physical relationship between man and machine" (pp. 16–17).
The Role of World War II
The year is 1947. America is basking in the glory of her victories in a global war, a war which greatly accelerated many changes: economically, politically, and technologically. It is with this latter element that America, and indeed, her recent allies and enemies, would soon develop an intense fascination. From households to universities, they would anxiously await the products of this latest revolution; some filled with hope, some with doom. Amidst the fear and fervor, the dreams and the dread
of this postwar era, there begins to emerge, slowly at first, a set of scientific inquiries which will soon solidify the foundation of a new science. It is in this year, 1947, that a paper is published that will become a classic in a field that will not have a name until 2 years later, and no academic society until the next decade. I speak, of course, of the field of human factors/ergonomics, and of the paper of Fitts and Jones (1947). The work contained reports of systematic studies conducted on WWII aviators concerning human errors and information processing as a function of cockpit design. In a world in which the mechanization of American society would soon become a philosophical question lost in the momentum of robotics and rocketry, these individuals had introduced the scientific community to the systematic study of man's place in this brave new world. Hawkins (1987) credits Edwards (1985) with "the most appropriate definition of the applied technology of Human Factors {as} concerned to optimize the relationship between people and their activities by the systematic application of the human sciences, integrated within the framework of systems engineering" (p. 20). Hawkins also points out the term ergonomics, which has come to be synonymous with human factors, was used more frequently in the United Kingdom and other parts of Europe. The term was adopted by a professor named Murrell, and "derived in 1949 from the Greek words ergon (work) and nomos (natural law)" (p. 20). But Hawkins perhaps describes human factors best when he states, "Human factors is about people. It is about people in their working and living environments. It is about their relationship with machines and equipment, with procedures and with the environment about them. And it is also about their relationship with other people" (p. 20). Chan (2001) defines complexity succinctly, as resulting "from the inter-relationship, inter-action and inter-connectivity of elements within a system and between a system and its environment" (p. 1). This definition is useful for applications to behavioral research involving human cognition, decision making, and organizational performance. It captures the essential elements involved with respect to both research and application in this domain, with a focus on multiple variables that interact with each other, as well as with the physical and cultural environment in which humans operate. WWII brought further momentum to the emerging human factors discipline, due in large part to research into the nature of human errors in decision making.
When Things Go Wrong: The Study of Human Error in Decision Making
Roscoe, writing in Human Factors and Ergonomics Society: Stories From the First 50 Years (2006), described the frequent training accidents during WWII as an impetus for the burgeoning human factors discipline and its research (Fig. 4.1).
Fig. 4.1 Stanley Roscoe, as found in The Human Factors and Ergonomics Society: Stories from the First 50 Years, Copyright 2006. (Reprinted with Permission, Human Factors and Ergonomics Society). Note. Roscoe (2006a, b, p. 24). Photo of Stanley Roscoe (1950). Retrieved from https://www.hfes.org/Portals/0/Documents/HFES_First_50_Years.pdf (p. 24)
The term pilot error started appearing with increasing frequency in training and combat accident reports. It is a reasonably safe guess that the first time anyone intentionally or unknowingly applied a psychological principle to solve a design problem in airplanes occurred during the war (p. 12).
It was a young Army Air Force Aviation Physiologist and Psychologist, Alphonse Chapanis, who established the foundation of aviation human factors at the Aeromedical Lab in Dayton, Ohio: In 1943, Lt. Alphonse Chapanis was called on to figure out why pilots and copilots of P-47s, B-17s, and B-25s frequently retracted the wheels instead of the flaps after landing. He immediately noticed that the side-by-side wheel and flap controls – in most cases identical toggle switches or nearly identical levers – could easily be confused. Chapanis realized that the so-called pilot errors were really cockpit design errors and that the problem could be solved by coding the shapes and modes of operation of controls…These mnemonically shape-coded wheel and flap controls were standardized worldwide (Roscoe, 2006a, b, p. 12).
While we may think of standardization of controls—whether in cockpits or automobiles—as "common sense," modern versions really are the result of a series of observations, controlled studies, and systematic human factors design applications. Building on this seminal work of Chapanis, experimental psychologists Paul Fitts and Richard Jones published findings from a series of laboratory studies on pilot error in a landmark 1947 article. Working from observations of the frequency and types of errors made in training accidents, LTC Paul Fitts, PhD, led the research
in controlled laboratory experiments of cockpit mock-ups. From this, they established one of the first comprehensive classification systems of human error, but, more importantly, helped to solidify the new discipline of human factors engineering. According to Alluisi (1992), “The AAF {Army Air Force} Aviation Psychology Program was probably the taproot of engineering psychology or human factors engineering,” with Paul Fitts “generally regarded as a founder of the field” (p. 12). As reported by Shorrock (2018), Fitts and Jones noted: It has been customary to assume that prevention of accidents due to material failure or poor maintenance is the responsibility of engineering personnel and that accidents due to errors of pilots or supervisory personnel are the responsibility of those in charge of selection, training, and operations (para. 2).
Fitts and Jones took a different slant altogether. The basis for their study was the observation that many aircraft accidents could actually be directly linked to how the cockpit equipment was designed and placed. “What had been called ‘pilot error’ was actually a mismatch between characteristics of the designed world and characteristics of human beings, and between work-as-imagined and work-as-done” (Shorrock, 2018, para. 2). The common phrase that would develop in the human factors discipline was design-induced human error. But Alluisi (1992) also points out that the confluence of new technologies and the demands of war meant that the scientists at Wright Patterson were not the only ones realizing that human behavior, particularly with respect to information processing and decision making, needed to be studied within the context of the complex systems with which they were operating. It was critical to both safety and performance. “The AAF psychologists were not alone in the creation of the new discipline. Rather, they appear to have been part of a zeitgeist that led military psychologists to address issues regarding the design and operation of equipment” (p. 13). According to Hawkins (1987, 1993), At Cambridge, the Psychology Laboratory of the University was responsible for what might be seen as a second major milestone. They constructed a cockpit research simulator which has since become known as the ‘Cambridge Cockpit’. From experiments in this simulator it was concluded that skilled behavior was dependent to a considerable extent on the design, layout and interpretation of displays and controls. In other words, for optimum effectiveness, the machine had to be matched to the characteristics of man rather than the reverse, as had been the conventional approach to system design (p. 17).
Fitts and Jones and their colleagues began to recognize, and emphasize, without necessarily using the term CAS, that humans, themselves complex adaptive systems, become part of a greater complex adaptive system within a cockpit. More importantly, if researchers focused on any part of the system in isolation, rather than on the interactions between elements of the system, as well as with the external environment, results could be, quite literally, deadly. Because of this focus on human behavior within the context of a complex system, human factors engineering can be considered a subset of systems engineering, one that focuses more on the cognitive and physiological aspects of humans interacting with machines and technology.
Chapter Summary
The discipline of Human Factors Engineering and Ergonomics emerged to address real-world problems of both system performance and system safety that arose during the twentieth century. While rooted firmly in scientific methodology, the science began to focus on human behavior from a systems point of view, rather than on individual human behavior resulting from genetic and/or environmental factors. Moreover, not only was human behavior viewed from a systems, rather than individualistic, standpoint, but the human factor was viewed as an important component within complex adaptive systems, both affecting and being affected by the multiple and interacting variables within a total biological, as well as socio-technical, system.
References
Alluisi, E. A. (1992). APA Division 21: Roots and rooters. In H. L. Taylor (Ed.), Division 21 members who made distinguished contributions to engineering psychology (pp. 5–21). Presented at the annual meeting of the American Psychological Association, August 1992. https://www.apadivisions.org/division-21/about/distinguished-contributions.pdf
Baumgart, A., & Neuhauser, D. (2009). Frank and Lillian Gilbreth: Scientific management in the operating room. Quality & Safety in Health Care, 18(5), 413–415. https://doi.org/10.1136/qshc.2009.032409
Chan, S. (2001). Complex adaptive systems. Research Seminar in Engineering Systems, 31, 1–9. MIT. http://web.mit.edu/esd.83/www/notebook/NewNotebook.htm
Edwards, E. (1985). Human factors in aviation. Aerospace, 12(7), 20–22.
Fitts, P. M., & Jones, R. E. (1947). Analysis of factors contributing to 270 "pilot error" experiences in operating aircraft controls (Report TSEAA-694-12A). Aero Medical Laboratory, Air Material Command, Wright-Patterson Air Force Base: Aeromedical Lab.
Gibson, J. W., Clayton, R. S., Deem, J., Einstein, J. E., & Henry, E. L. (2015). Viewing the work of Lillian M. Gilbreth through the lens of critical biography. Journal of Management History, 21(3), 288–308. https://doi.org/10.1108/JMH-01-2014-0014
Gilbreth, L. M. (1914). The psychology of management: The function of the mind in determining, teaching and installing methods of least waste. Sturgis and Walton.
Hawkins, F. H. (1987). Human factors in flight (H. W. Orlady, Ed.) (2nd ed.). Routledge. https://doi.org/10.4324/9781351218580
History of Industrial Engineering at UOFL Speed School. University of Louisville J.B. Speed School of Engineering. (n.d.). Retrieved December 2, 2021, from https://engineering.louisville.edu/academics/departments/industrial/history-of-ie/
Lang, W. (Director), Trotti, L. (Producer), & Trotti, L. (Screenplay). (1950). Cheaper by the dozen [Film]. 20th Century Fox.
Perloff, R., & Namen, J. L. (1996). Lilian Gilbreth: Tireless advocate for a general psychology. In G. A. Kimble, C. A. Boneau, & M. Wertheimer (Eds.), Portraits of pioneers in psychology (Vol. 2, pp. 107–117). Lawrence Erlbaum Associates, Inc.
Roscoe, S. (2006a). The adolescence of engineering psychology. In J. Stuster (Ed.), Human factors and ergonomics society: Stories from the first 50 years (pp. 12–14). Retrieved November 21, 2022, from https://www.hfes.org/Portals/0/Documents/HFES_First_50_Years.pdf
Roscoe, S. (2006b). Stanley Roscoe (1950). [Photograph]. Alex Williams, investigator and inventor. In J. Stuster (Ed.), Human factors and ergonomics society: Stories from the first 50 years
(pp. 23–25). Retrieved November 9, 2022, from https://www.hfes.org/Portals/0/Documents/HFES_First_50_Years.pdf. Published with permission from HFES letter dated November 21, 2022.
Shorrock, S. (2018). Human factors and ergonomics: Looking back to look forward. In Humanistic systems: Understanding and improving human work. https://humanisticsystems.com/2018/02/25/human-factors-and-ergonomics-looking-back-to-look-forward/
Vasquez, M. J. (2007). Lillian Evelyn Moller Gilbreth: The woman who "had it all". In E. Gavin, A. Clamar, & M. A. Siderits (Eds.), Women of vision: Their psychology, circumstances, and success (pp. 45–60). Springer.
Wickens, C. D., Gordon, S. E., Liu, Y., & Becker, S. G. (2004). An introduction to human factors engineering. Pearson Prentice Hall.
Wren, D. A. (2005). The history of management thought. Wiley.
Chapter 5
Complexity Science in Organizational Behavior
Applying Complexity Science to Organizational Behavior Research
In the 1990s, researchers began to apply complexity theory more consciously to organizations. In addressing the application of complexity science to organizational strategy, Lissack (1999) states, "It would be foolhardy to write off such contributions as insignificant or minor…Both complexity science and organization science have a common problem they wish to address: uncertainty" (p. 119). According to Barton (1994), "The failure of linear equations in predicting human performance is especially noticeable when continuous changes in certain control parameters lead to sudden jumps in behavior" (p. 6). Moreover, many human physiological and performance relationships are not linear, but curvilinear. For example, the optimal arousal theory holds that, across many dimensions—workload, stress, cognitive load, etc.—human performance increases—to a point. After that point—that optimal level—overwork or overstress leads to declines in performance. When graphed, the relationship looks like an inverted U; it is more formally known as the Yerkes-Dodson curve (Yerkes & Dodson, 1908; Pietrangelo, 2020). Wolf-Branigin (2013) argues, "The study of complexity arose because a group of scientists believed that complex systems – across many natural, societal and technological domains – shared similarities. This includes being adaptive, self-correcting, and emergent" (p. 1). Columbia professor and investment researcher Michael Mauboussin uses an ant colony as an illustration of a simple, natural model of a complex adaptive system: If you examine the colony on the colony level, forgetting about the individual ants, it appears to have the characteristics of an organism. It's robust. It's adaptive. It has a life cycle. But the individual ant is working with local information and local interaction. It has no sense of the global system. And you can't understand the system by looking at the behavior of individual ants. That's the essence of a complex adaptive system—and the thing that's so vexing. Emergence disguises cause and effect (Sullivan, 2011, para. 3).
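As an illustration of the curvilinear relationship described above, the short sketch below encodes an inverted-U performance function. The functional form and parameter values are toy choices of my own for illustration, not the empirical Yerkes-Dodson data.

import math

def performance(arousal, optimum=0.5, width=0.25):
    # Toy inverted-U: performance peaks at an intermediate arousal level
    # and falls off on either side of the optimum.
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for level in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"arousal = {level:.1f} -> performance = {performance(level):.2f}")

Printed, the values rise and then fall, tracing the familiar inverted U: too little and too much arousal both yield poorer performance than the intermediate optimum.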
Emergence in Complex Adaptive Systems
According to De Domenico et al. (2019), the reason it is difficult, within the context of a complex adaptive system, to fully understand the whole or predict outcomes based solely on knowledge of constituent parts is due to emergence. "Emergence involves diverse mechanisms causing the interaction between components of a system to generate novel information and exhibit non-trivial collective structures and behaviors at larger scales" (p. 6). The idea of emergent behavior has roots in biological and physiological sciences. Perhaps one of the earliest papers to try to definitionally bound the concept of emergence was published in 1926 in The Journal of Philosophy and was written by Stephen Pepper. He defined emergence as "a cumulative change, a change in which certain characteristics supervene upon other characteristics, these characteristics being adequate to explain the occurrence on their level" (p. 241). Pepper distinguished emergence from chance occurrence, as well as "shifts," the latter encompassing "a change in which one characteristic replaces another, the sort of change traditionally described as invariable succession" (p. 241). He further outlined three propositions regarding the theory of emergence. The first, which involves different degrees of integration in nature, Pepper argues is widely accepted. The second and third, on the other hand, he contends involve controversy. The second proposition, in which "there are marks which distinguish these levels from one another over and above the degrees of integration" (p. 241), which he refers to as cumulative change, and the third, which involves the unpredictability of such change, make it "impossible to deduce the marks of a higher level from those of a lower level" (p. 241). de Haan (2006) argues that "as emergence is the core concept of complexity it is extremely relevant for…any field that is approached from a complexity perspective" (p. 293). According to de Haan (2006), "Since emergence is recognised in so many different fields as a relevant concept, it would be useful to have a general conceptual framework that allows a treatment of emergence without explicit reference to the specific underlying mechanism" (p. 293). de Haan further emphasizes the term is "found throughout the scientific literature from physics to sociology {to describe} a plethora of phenomena. The common denominator…appears to be that some property or phenomenon is observed that somehow transcends the level of the objects that nevertheless produce it" (p. 293). Goldstein (2004) described emergence as "nonlinear interactivity {that} leads to novel outcomes that are not sufficiently understood as a sum of their parts" (p. 53). He further emphasizes that "the construct of emergence is appealed to when the dynamics of a system seem better understood by focusing on across-system organization rather than on the parts of properties alone" (p. 57). Goldstein importantly adds that "emergence functions not so much as an explanation but rather as a descriptive term pointing to the patterns, structures, or properties that are exhibited on the macro-level" (p. 58). Perhaps one of the best-known and most pithy descriptions of emergence comes from the maxim made famous by Gestalt psychology (though Goldstein, 2004,
argues the construct falls short of capturing the dynamic nature of complex adaptive systems). Green (1993) provides an argument for Gestalt in biological systems: In no area of science is the saying “the whole is greater than the sum of its parts” more evident than in biology. The mere term “organism” expresses the fundamental role that interactions, self-organization and emergent behavior play in all biological systems (p. 1).
Emergence is further highlighted by medical doctor Jayasinghe (2011), who has applied complex adaptive systems to the study of epidemiology and public health. He describes a complex adaptive system (CAS) as one in which “the interactions with the environment and among sub-systems are non-linear interactions and lead to self-organisation and emergent properties” (p. 1).
Applications in the Social Sciences
Wolf-Branigin (2013) argues that "although complexity theory evolved within the natural sciences, it serves as an approach for understanding the interactions of networks of services, and the evolution of policies…[that frame] social behavior in the social environment using systems theory" (p. 4). Furthermore, research into complex adaptive systems often involves "agent-based modeling created to simulate environments through computer-intensive techniques and replicate interactions of individuals (agents). Based on these agent-level activities and interactions with others, it becomes possible to study the layers of emergent behaviors produced by agent-level interactions" (Wolf-Branigin, 2013, p. 5). In other words, what Wolf-Branigin (2013) is saying here is that mathematically based computer modeling can lead to the prediction of human behavior in groups. While in general there is truth to this postulate, it is important at this point to offer a couple of notes of caution. First, when moving from the realm of mathematical and natural science to applying complexity science in the social and behavioral disciplines, one must remember the objective of a classic linear prediction model is to try to account for the most variance in the criterion variable (the variable whose change you are trying to predict) through the best linear combination of a set of predictor variables. Per complexity theory, these predictor variables interact with each other, and the outcome for the criterion variable can change as the set of predictors changes. A second note of caution concerns the general principle of applying computer modeling to human behavioral outcomes. In order to model, parameters must be defined for each of the variables in the prediction model. However, with complex, multivariate human behavior, it can be difficult to parameterize or operationalize variations in human behavior, or even to know what variables should be entered into the model. Hence, models may not be able to capture all the variables, nor their interactions, and behaviors may emerge in less predictable ways.
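A minimal agent-based sketch of the kind Wolf-Branigin describes appears below. The rule, parameter values, and numbers are assumptions of mine chosen purely for illustration: each agent adopts a practice when a majority of a few randomly sampled peers have already adopted it, and a population-level adoption curve emerges from that purely local rule.

import random

random.seed(42)
N_AGENTS, N_PEERS, N_ROUNDS = 200, 5, 25
adopted = [random.random() < 0.15 for _ in range(N_AGENTS)]   # roughly 15% early adopters

for round_ in range(N_ROUNDS):
    snapshot = adopted[:]                        # all agents update from the same prior state
    for i in range(N_AGENTS):
        peers = random.sample(range(N_AGENTS), N_PEERS)
        if sum(snapshot[j] for j in peers) > N_PEERS // 2:
            adopted[i] = True                    # local majority rule; adoption is not reversed
    if round_ % 5 == 0:
        print(f"round {round_:2d}: {sum(adopted)} of {N_AGENTS} agents have adopted")

Even in this toy, the population-level pattern (a slow start followed by rapid, tipping-point-like spread) is not something one would read directly off the individual rule, and it also illustrates the caution above: the shape of the curve depends heavily on parameter choices, such as the seed rate and the peer-group size, that are hard to pin down for real human behavior.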
Chaos Versus Complexity
The phrase chaos theory was popularized by television and movies in the late 1990s and early 2000s, in part with the colorful metaphor of the "butterfly effect." While this book does not focus specifically on the elements of chaos theory, as it is related to complex adaptive systems, I will briefly address it here. Chaos has been defined as:
The phenomenon wherein systems composed of inter-related parts or interdependent agents – each of which follows very simple, highly regular rules of behavior – generate outcomes that reflect these interactions and feedback effects in ways that are inherently nonlinear and intractably unpredictable. Because of nonlinearities that reflect interactions and feedback effects, very tiny changes in inputs can make enormous differences in outputs – a sensitive dependence on initial conditions (SDIC) or the so-called butterfly effect in which, for example, the proverbial moth flapping its wings in Brazil can cause a tornado in Texas (Holbrook, 2003, p. 2).
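Sensitive dependence on initial conditions can be demonstrated with the logistic map, a standard textbook example that is my addition here rather than one used by Holbrook. Two trajectories whose starting values differ by only 0.000001 soon bear no resemblance to each other.

def logistic_trajectory(x0, r=4.0, steps=30):
    # Logistic map x_{t+1} = r * x_t * (1 - x_t); r = 4 is a chaotic regime.
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # starting value differs by 0.000001
for t in (0, 10, 20, 30):
    print(f"t = {t:2d}   x_a = {a[t]:.4f}   x_b = {b[t]:.4f}   gap = {abs(a[t] - b[t]):.4f}")

The rule itself is perfectly deterministic; the unpredictability comes entirely from the compounding of a minuscule difference in starting conditions, which bears on the distinction from mere randomness drawn in the discussion that follows.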
Complexity science is sometimes confused with chaos theory; an important distinction is that chaos theory implies complete unpredictability and/or disorder. Additionally, while the phrase complexity science can be described simply as the study of complex adaptive systems (CAS), I prefer to focus on the latter phrase as it is more descriptive of the features that I think both compose such systems, and upon which we can build tools to improve organizational decision making and performance. According to Barton (1994), "Nonlinear equations are not additive; therefore, they are often difficult to solve…oftentimes, the answer involves a pattern of solutions" (p. 6), as opposed to one singular solution. From an applications standpoint, the key to decision making tools is to help discern patterns. "Even if we are able to characterize all the variables in a nonlinear system completely, general patterns of future behaviors may be the best we can hope to predict" (Barton, 1994, p. 6). Phelan (2001) alludes to the potential predictability within complex adaptive systems as applied to the natural world. "Generative rules and equations can be discovered that are capable of explaining the observed complexity of the 'real' world/universe. Furthermore, these laws have the potential to predict and control the behavior of real-world systems" (p. 133). There is, in other words, predictability in the world, but the focus of complexity science, or the study of complex adaptive systems, is more upon multivariate interactions and patterns than simple cause-effect. Likewise, the focus is on predictability and probability, not certainty; perhaps on theory, rather than simple observable facts and/or laws. Phelan focuses on correlational models based on quantitative real-world data to distinguish complexity science from pseudo-science based on qualitative analogies of real-world complex adaptive systems. Hence, I would argue that, at least from the standpoint of practical application, chaos is not a good characterization of emergence or complexity theory. The latter is, most simply, and in my view, a recognition that most relationships in nature, and particularly in human behavior, while predictable, are not simple direct cause-effect. They involve interactions among multiple variables; different combinations can lead to different outcomes. It doesn't mean we cannot derive models that are fairly
good at predicting, but it is difficult, if not impossible, to predict with one hundred percent accuracy, or even near that. Indeed, new variables may be randomly introduced into the system, as Robert Sapolsky and Lisa Share described in a 2004 article on transmission of culture among baboons. The article provides a case study from the 1980s contrasting two social groups of baboons, the "forest group," which roamed the trees near a tourist lodge, and the "garbage dump troop." The latter was labeled as such because members of the troop—predominantly the more aggressive males—would forage for food in the garbage dump outside the tourist lodge. Likewise, the more aggressive males from the forest group, which was characterized by a less aggressive culture than that of the garbage dump troop, also foraged from the dump, but in the early hours, so as to show deference to the more dominant males from the garbage dump troop. These more aggressive males were selectively eliminated by an epidemic of tuberculosis, precisely because of their aggression in seeking food from a trash dump that was contaminated with infected meat. The less dominant and aggressive males, as well as the females, survived, and a culture shift ensued to a more tolerant and less aggressive group dynamic. Moreover, when researchers observed the surviving troop 10 years later, the adult male population consisted only of males who had joined the troop since the original 1983 observations, yet the culture shift had persisted. The authors concluded, "The distinctive behaviors that emerged during the mid-1980s because of the selective deaths were being carried out by the next cohort of adult males that had transferred into the troop" (p. 0535). New variables may be introduced into a CAS, and the interactions among variables may change as a result, but there are still underlying patterns to be found. Such patterns can be difficult to find, especially with multivariate systems, but that doesn't change the fact that there are underlying patterns, and predictable patterns, by definition, are not random. We have seen this with the Covid pandemic. We can predict that the population will become somewhat resistant over time, but the specifics of how long that will take and who will be affected are more difficult to determine. Likewise, we can be fairly certain the virus will continue to mutate over time, and variants will continue to emerge. The nature of their virulence will be much less certain. Throw into the mix the fact that you are dealing with two complex adaptive systems—the human organism and the virus—within biological, ecological, and social systems that are also complex and adaptive, and it's somewhat surprising we ever make any connections or learn anything! But we are hardwired to search for patterns in nature, as will be discussed in Chap. 10, and that is what is required: the search for emerging patterns.
Detecting Emerging Patterns in a Complex World
Though it has its challenges, complexity science is useful in the study of organizational behavior because, within social and behavioral sciences, it provides an important framework that focuses on emerging group behavior. Despite some, albeit
inaccurate, associations with the term “chaos,” the theory predicts “the presence of hidden order and even great beauty…as well as patterns that emerge in the apparent randomness around us” (Holbrook, 2003, p. 184).
Chapter Summary
Charles Darwin may have been one of the first, or at least most famous, scientists to begin to focus on detecting subtle patterns amidst apparent randomness with respect to the origins of species. Indeed, biology is one field among many in which interdisciplinary applications of both chaos theory and complex adaptive systems abound. "When such insights are applied to real-world systems – whether ant colonies, evolutionary biology, business organizations, or brand-positioning strategies – they shed light on dynamic processes of adaptation and survival" (Holbrook, 2003, p. 5). It is important for scientists, as well as leaders and decision makers, to understand and track emerging patterns, rather than expect specific outcomes. So how do we start to explore these underlying patterns?
References
Barton, S. (1994). Chaos, self-organization, and psychology. American Psychologist, 49(1), 5–14. https://doi.org/10.1037/0003-066x.49.1.5
De Domenico, M., Brockmann, D., Camargo, C. Q., Gershenson, C., Goldsmith, D., Jeschonnek, S., & Sayama, H. (2019). Complexity explained. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C44&q=%22the+properties+of+the+whole+often+cannot+be+understood+or+predicted+from+the+knowledge+of+its+components+because+of+a+phenomenon+known+as+%E2%80%98emergence%E2%80%99+%22&btnG=
de Haan, J. (2006). How emergence arises. Ecological Complexity, 3(4), 293–301. https://doi.org/10.1016/j.ecocom.2007.02.003
Goldstein, J. (2004). Emergence then and now: Concepts, criticisms, and rejoinders: Introduction to Pepper's 'Emergence'. Emergence: Complexity and Organization, 6(4).
Green, D. G. (1993). Emergent behavior in biological systems. In D. Green & T. Bossomaier (Eds.), Complex systems: From biology to computation (pp. 24–34). IOS Press.
Holbrook, M. B. (2003). Adventures in complexity: An essay on dynamic open complex adaptive systems, butterfly effects, self-organizing order, coevolution, the ecological perspective, fitness landscapes, market spaces, emergent beauty at the edge of chaos, and all that jazz. Academy of Marketing Science Review, 6(1), 1–184.
Jayasinghe, S. (2011). Conceptualising population health: From mechanistic thinking to complexity science. Emerging Themes in Epidemiology, 8(1), 1–7.
Lissack, M. R. (1999). Complexity: The science, its vocabulary, and its relation to organizations. Emergence, 1(1), 110–126. https://doi.org/10.1207/s15327000em0101_7
Pepper, S. C. (1926). Emergence. The Journal of Philosophy, 23(9), 241–245. https://doi.org/10.2307/2014779
Phelan, S. E. (2001). What is complexity science, really? Emergence, 3(1), 120–136. https://doi.org/10.1207/s15327000em0301_08
Pietrangelo, A. (2020, April 28). What the Yerkes-Dodson law says about stress and performance. Healthline. Retrieved December 17, 2022, from https://www.healthline.com/health/yerkes-dodson-law#factors
Sapolsky, R. M., & Share, L. J. (2004). A pacific culture among wild baboons: Its emergence and transmission. PLoS Biology, 2(4), e106. https://doi.org/10.1371/journal.pbio.0020106
Sullivan. (2011, September). Embracing complexity. Harvard Business Review. https://hbr.org/2011/09/embracing-complexity
Wolf-Branigin, M. (2013). Using complexity theory for research and program evaluation. Oxford University Press.
Yerkes, R. M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology, 18, 459–482. https://www.ida.liu.se/~769A09/Literature/Stress/Yerkes,%20Dodson_1908.pdf
Part II
Science, Uncertainty, and Complex Adaptive Systems: The Search for Underlying Patterns
Chapter 6
The Scientific Method Applied to Complex Adaptive Systems
The Scientific Method
The scientific method has long been the means of systematically exploring observed patterns. Its long-established sequence, which should really be viewed more as a cycle, classically involves four steps. The first step, observation, involves extracting data from the environment. This can take the form of direct observation, based on our senses, or indirect observation, such as reading the scientific literature. From observations, we move on to the second step, hypothesis generation. One of the most well-known social scientists of the twentieth century, Stanford University professor of psychology Philip Zimbardo, defines a hypothesis as "a tentative and testable explanation of the relationship between two (or more) events or variables; often stated as a prediction that a certain outcome will result from specific conditions" (1985, p. X). While I like this definition because it focuses on relationships between two or more variables, I would prefer the word "may" as opposed to "will" in the prediction language. This reflects, again, my contention that the nature of complex adaptive systems is such that direct one-to-one cause-effect relationships are rare. Zimbardo also describes a hypothesis as "an educated hunch about some phenomenon in terms of its causes, consequences, or events that co-occur with it" (p. 6), which I believe captures much of the practical essence of the concept. One of the first things a student of research methodology learns is that hypotheses should be falsifiable. This essentially means they should be capable of being proven false, in the most basic sense, by a single observation that counters the hypothesized relationship. The principle of falsifiability is most associated with Karl Popper, described as "a social and political philosopher of considerable stature, a self-professed critical-rationalist, a dedicated opponent of all forms of skepticism and relativism in science and in human affairs" (Thorton, 2022, para. 1). The encyclopedic entry goes on to explain that Popper made a distinction between the "logic of falsifiability" and how it actually works in practice. The latter, applied view
seems to acknowledge the foundations of complexity in most of the natural world, particularly within the social and behavioral sciences.
Methodologically, however, the situation is complex: decisions about whether to accept an apparently falsifying observation as an actual falsification can be problematic, as observational bias and measurement error, for example, can yield results which are only apparently incompatible with the theory under scrutiny. Thus…Popper explicitly allows for the fact that in practice a single conflicting or counter-instance is never sufficient methodologically for falsification (Thornton, 2022, para. 20).
Hypothesis Testing. The next step in the process is to empirically test the hypothesis, which generally involves analysis using one or more inferential statistical techniques in which data collected through a formalized process, utilizing a representative sample from the population, is examined for systematic covariances within the context of known or assumed random variances, including measurement error.

Reporting Results. The fourth and final step in the classic line-up is reporting results, which generally consists of presenting or publishing findings in peer-reviewed scientific conferences or journals. This is important not only for disseminating information and adding to the body of knowledge in a specific domain, but also for what can be considered the fifth step, replication, which is not in the original formulation but is arguably just as important.

Replication. A final "step" in the process of scientific inquiry is replication. Replication is described by Davis and Smith (2005) as the ability to verify and confirm the measurements made by other individuals by conducting additional studies "in exactly the same manner as the original research project" (p. 7). Replicability is often left out of the "classic" steps of the scientific method, but it is critical for a number of reasons. Replicability can help to uncover mistakes, as well as deliberate falsifications. Replicability is also an important part of the process of "self-correction of errors and faulty reasoning" (Davis & Smith, 2005, p. 6), which is closely tied to respect for the inherent uncertainty in science. Finally, replicability increases our confidence in results.

All sciences, including the so-called hard sciences, accept that error is inherent in scientific measurement. In the social sciences, the convention is to accept a margin of error, or probability of obtaining "faulty" results by pure chance, of about 5% or less. In other words, results are considered statistically significant if the likelihood of said results being obtained in error would be expected to happen less than 5 times out of 100. If a researcher conducts a study once, therefore, he or she is accepting around a 5% probability that the results may be in error. If that study is replicated multiple times, however, and the same results are obtained each time, this greatly reduces the likelihood that the results were obtained by chance and consequently increases the confidence in those results.

Too often, though, even in science, the principle of self-correction falls prey to bias. Rather than letting objective analysis of the data drive our reformation and refinement of hypotheses and theories, we let our initial hypotheses and theories drive our interpretation of the data. We shouldn't fool ourselves into thinking this does not happen in academia or science. We should, instead, humbly accept our need for checks and balances and various tools to enhance objectivity.
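To make the replication arithmetic concrete, here is a minimal sketch in Python. It assumes the conventional alpha of .05 and fully independent replications, which is an idealization; the point is simply that the probability of every study erring by chance shrinks multiplicatively as replications accumulate.

```python
# Minimal sketch: how independent replications shrink the chance that a
# "significant" result is purely a false positive. Assumes the conventional
# alpha = .05 and fully independent studies (an idealization).

alpha = 0.05  # conventional probability of a chance (false-positive) result

for k in range(1, 6):
    p_all_chance = alpha ** k  # probability that all k studies err by chance
    print(f"{k} independent stud{'y' if k == 1 else 'ies'} significant "
          f"by chance alone: {p_all_chance:.10f}")
```

Under these assumptions, two consistent replications already push the chance-only explanation down to about 1 in 400, and five push it below one in three million.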
While the scientific method is often discussed in terms of steps, as I’ve outlined above, it is actually a cyclical, iterative process, where changes in available information in a dynamic environment are constantly monitored and assessed, hypotheses modified, and methodologies continuously adapted. According to astrophysicist Dr. David Lindley, Modern science evolved through the application of logical reasoning to verifiable facts and data. Theories, couched in the rigorous language of mathematics, were meant to be analytical and precise. They offered a system, a structure, a thorough accounting that would replace mystery and happenstance with reason and cause (2007, p. 2).
There’s a lot to unpack in this Lindley quote, but fundamentally, a disciplined method encourages critical thinking, and critical thinking involves logic and reasoning, including our ability to cycle between inductive and deductive reasoning. This cycle is ultimately the foundation of the scientific method. Since people have been able to apply reason, they have been applying some aspect of the scientific method. At its most basic level, one could argue even non- human animals engage in rudimentary science. Anyone who has lived in an area with deer can attest to this. They observe things in their environments. They look for patterns of associations (one reason plants have evolved with particular defense mechanisms, such as odorous being associated with toxic); they test these “hypotheses.” While you can argue some such deer reactions may be the result of instinct (i.e., they seem to instinctively know that animals with eyes oriented forward are predators), the point is, the ability to recognize patterns and associations is, in fact, critical to survival. There are patterns in nature. This is why, at survival schools, they teach recognition of these patterns, such as the “berry rule,” in which the colors of certain berries are associated with the probability of their toxicity. Survival depends on the ability to learn patterns and associations. Animals obviously do learn, and it is difficult to learn without at least a rudimentary level of testing one’s environment for basic cause-effect relationships. Deer can learn to recognize an association between cars and death - the deer in my neighborhood practically have crossing guards! This is obviously not instinctual, though one could argue the rudiments of recognizing speeding objects as potential predators is a possibility. More likely, they have learned from observing the misfortunes of a herd-mate. Operant conditioning, or the idea that behavior is linked to reinforcement, was first described by the famous behavioral psychologist B.F. Skinner (1963). It depends on the ability to make at least these rudimentary connections between stimulus and reward or punishment. You would not be able to train your dog if the animal was incapable of making connections regarding predictable relationships between stimulus and response. That initial connection that goes on in the dog’s brain between, “If I do this behavior, I am likely to get a treat” is arguably an elementary form of hypothesis. The question of degree of reasoning comes most into play with respect to the degree of “thought” that goes into hypothesis formulation. What sets humans apart from other mammals is the ability to predict from these patterns, rather than just react. But let me get a bit more specific, because most mammals do, in fact, predict,
Fig. 6.1 Portrait of Frederick Douglass. (Note. MET Museum. Portrait of Frederick Douglass, United States Minister Resident to Haiti. Retrieved from https://commons.wikimedia.org/wiki/File:Frederick_Douglass_MET_DT1144_Retouched_by_N-Shea.png. CC0 1.0 Universal public domain)
as per the training example above. But their predictions involve relatively simple cause-effect "reasoning," if you will, based entirely upon either instinct or prior experience (their own, or that gained from observing other members of the herd). They do not have the ability to speculate about more complex, multivariate, interacting predictions of things they have not directly observed. In other words, they don't have the capacity for "what if" thinking. "What if" thinking is known more formally by cognitive psychologists as mental simulation, and it is a very powerful reasoning and prediction tool. Specifically, mental simulation is about predicting how combinations of variables not yet observed, or at least not yet tested, might play out (Fig. 6.1).
Knowledge Is Power: The Role of "What-If?" Thinking in the Scientific Method

To make a contented slave it is necessary to make a thoughtless one. It is necessary to darken the moral and mental vision, and, as far as possible, to annihilate the power of reason—Frederick Douglass; former slave, turned orator, writer, human rights activist, and statesman.
In 1892, Frederick Douglass wrote in his memoirs, The reader must not expect me to say much of my family. Genealogical trees did not flourish among slaves. A person of some consequence in civilized society, sometimes designated as father, was, literally unknown to slave law and to slave practice…From certain events,
however, the dates of which I have since learned, I suppose myself to have been born in February, 1817 (Douglass, 1892/2003, p. 11).
And so began, on the Eastern Shore of Maryland, the life of the man who would become the most photographed American of the nineteenth century (Stauffer et al., 2015). Frederick Douglass would become a newspaper publisher, celebrated orator, world traveler, and the first black U.S. marshal, appointed in 1877 under President Rutherford B. Hayes (Trent, 2022). When Douglass was a young boy, the woman he initially saw as a mother figure, "Miss Sophia," wife of slaveholder Hugh Auld, would open up a new world for his brilliant young mind by introducing him to reading (Douglass, 1892/2003, p. vii):
My mistress was, as I have said, a kind and tender-hearted woman; and in the simplicity of her soul she commenced, when I first went to live with her, to treat me as she supposed one human being ought to treat another (Douglass, 1849, p. 37).
Unfortunately, Lord Acton's famous admonition that power corrupts and "absolute power corrupts absolutely" (Moreell, 2010) proved true in the case of the once-gentle Mrs. Auld:
But alas! This kind heart had a short time to remain such. The fatal poison of irresponsible power was already in her hands, and soon commenced its infernal work…The first step in her downward course was in her ceasing to instruct me (Douglass, 1849, p. 34).
Before that time, though, Mrs. Auld had begun teaching the young Douglass the alphabet and spelling, but she was soon forbidden to do so by Mr. Auld. Unbeknownst to the slaveholder, however, he inadvertently provided the child with an invaluable insight into the power of reading: "It would forever unfit him to be a slave" (Douglass, 1849, p. 34). The young and fertile mind reflected on this. "These words sank deep into my heart, stirred up sentiments within that lay slumbering, and called into existence an entirely new train of thought…From that moment I understood the pathway from slavery to freedom" (Douglass, 1849, p. 35). This story demonstrates, among other things, the ability to move beyond observations of simple cause-effect to more complex "what if" thinking. There is no direct link between reading and freedom. But purely through the powers of human reasoning, Douglass made the connection between reading and the empowerment to obtain his freedom. Let's take a closer look at the discipline required to move from observation to mental simulation, starting with how we define our variables.
Conceptual and Operational Definitions “If I had an hour to solve a problem, I would spend the first 55 minutes determining the proper question to ask” —attributed to Albert Einstein
Disciplined research methodology follows from carefully formulated research questions, and each variable in a research question must be explicitly defined, both
conceptually and operationally. A conceptual definition tells what a concept means. It answers the question "What do you mean by that?" and can range from a simple dictionary definition to detailed terms or subcategories drawn from a theoretical model. It is important because many terms in research can have multiple meanings. An operational definition, on the other hand, specifies how a term is to be measured as a variable in research.
Let's take the example of the word power from the Frederick Douglass lesson. Power is an interesting word. It demonstrates how language can clarify and confuse in equal measure, and it is simultaneously useful in illustrating the importance of conceptual definitions as a starting point toward a disciplined method of scientific inquiry. I've used the word power several times in the last few paragraphs. But what does it mean? French and Raven (1959) defined five bases or sources of power, but when we use that word, we are often limiting it only to what they would have termed the position powers, and even within that, we typically limit it to the further sub-classification of legitimate power. On the other hand, the power Douglass realized was closer to information power, which isn't even in the original five sources outlined by French and Raven. Information power was introduced later, but even then, psychometrics showed the items measuring it were closer to expert power than what Douglass was tapping into, which is that whoever holds the information holds the power. Leaders, warriors, and spies, as well as dictators, have known this for thousands of years. So, what do we mean by the word power? It depends on the context and intent of the speaker, as well as the situation. This is why a disciplined process is an important starting point as an aid to both human reasoning and the scientific method. It reduces unnecessary uncertainty. Information has been defined as a reduction in uncertainty (Rogers, 1962/1983). While we cannot eliminate all sources of uncertainty in the scientific method, clarifying definitions of variables conceptually is one way in which we can reduce it, while clarifying them operationally can aid not only empirical measurement but also both the evaluation and replication of methodologies.
In addition to reducing uncertainty, a disciplined method serves to increase objectivity. Objectivity is critical to the scientific method and, indeed, to reasoning, yet it does not come naturally to us. We are hard-wired to be egocentric from the standpoint of perception. Survival depends on quickly recognizing and learning what can kill us, and what we can kill and eat. This is the primal part of our brain, which quickly connects "bottom-up" sensations to higher-order learning centers. It is tied into our feelings of fear and desire to survive. Animals lower on the phylogenetic scale do not tend to speculate beyond their own perceptions and experiences regarding their immediate environment. True reasoning, as well as scientific methodology, demands a certain level of objectivity. Objectivity has been defined as "the idea that scientific claims, methods, results— and scientists themselves—are not or should not be, influenced by particular perspectives, value judgments, community bias or personal interests" (Reiss & Sprenger, 2020, para. 1). It may occur to the reader that this definition seems
difficult to fulfill, given our previous discussion of the egocentric nature of our brains with respect to survival. Indeed, this conceptual definition of scientific objectivity has spurred much philosophical debate regarding approaches to science. I prefer the more precise definition provided by Davis and Smith (2005): objectivity as based on quantifiable, measurable observations. Now, it may also occur to the reader that difficulties arise with quantifiable, measurable observations of unobservable phenomena, such as subjective experience. This is, indeed, a challenge inherent in behavioral science, but Davis and Smith (2005) acknowledge that complete objectivity is not the goal of the scientific method. "In conducting a research project, the psychologist, just as the good detective, strives to be as objective as possible" (p. 7). In addition to systematic methods of selecting research participants to avoid bias as much as possible, "researchers frequently make their measurements with instruments in order to be as objective as possible" (p. 7). While it is difficult for humans to maintain objectivity, it is not impossible, and the utilization of tools of reason aids in the process of thinking, predicting, and measuring within the scientific method. Stewart et al. (2011) rightly highlight that "a theory is an intended explanation that is capable of being corrected and so inherently capable of being wrong" (p. 222). That is one of the reasons Davis and Smith (2005) include self-correction as an important step in scientific methodology. "Because scientific findings are open to public scrutiny and replication, errors and faulty reasoning that become apparent should lead to a change in the conclusions we reach" (p. 7). This is a good description of self-correction and captures the inherent objectivity that is the goal of science: the data should modify the hypothesis rather than the hypothesis modifying the data. Even in maintaining the iterative nature of scientific methodology, while adapting a hypothesis may mean that more or different types of data are sought, care must be taken that such data-seeking is not filtered through the bias of the hypothesis. Objectivity is what helps lead us to self-correction. What is required of logic and scientific method is a more disciplined and objective approach, because the goal of complexity science is to move beyond simple, direct cause-effect to complex, multivariate relationships that can help us predict more complex scenarios and, in turn, manipulate and modify our complex world.

Traditional Versus Complexity Science

"Traditional science has tended to focus on simple cause-effect relationships" (Phelan, 2001, p. 130). This can work well for classical, Newtonian mechanics, but few relationships outside of Newtonian mechanics are simple and direct cause-effect relationships. An example of a simple, direct, one-to-one cause-effect relationship is that gravity causes an object to fall. We can predict that relationship with 100% accuracy, every time. If I drop a ball outside a two-story building, it will fall, 100% of the time. But most relationships, even if there is a causal relationship, do not work that directly. Let's take the relationship between smoking and cancer as an example. There have been myriad studies, using both animal and human models, with different combinations of variables, and smoking keeps surfacing as a significant predictor of several types of cancer.
But we know the relationship is not direct cause-effect, because if it were,
every individual who smoked would develop cancer, and we know that is not the case. Whatever role smoking plays, and however large that role may be, it likely operates within the context of other variables with which it interacts, such as genetic and/or environmental factors.
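To illustrate the difference between a deterministic cause and a probabilistic, interacting risk factor, here is a toy simulation in Python. The numbers are purely hypothetical illustrations, not epidemiological estimates; the point is only that a genuine risk factor can raise the probability of an outcome, and interact with another factor, without producing the outcome in every exposed individual.

```python
# Illustrative sketch only: a toy probabilistic model (hypothetical numbers,
# not epidemiological estimates) showing how a real risk factor can raise the
# probability of an outcome without acting as a one-to-one cause.
import random

random.seed(1)

def disease_probability(smoker: bool, genetic_risk: bool) -> float:
    """Toy model: baseline risk plus contributions from smoking, a genetic
    factor, and their interaction."""
    p = 0.01                      # baseline risk
    if smoker:
        p += 0.10                 # main effect of smoking (hypothetical)
    if genetic_risk:
        p += 0.03                 # main effect of the genetic factor
    if smoker and genetic_risk:
        p += 0.10                 # interaction: smoking matters more here
    return p

population = 100_000
cases = {"smoker": [0, 0], "nonsmoker": [0, 0]}  # [cases, total]

for _ in range(population):
    smoker = random.random() < 0.3
    genetic_risk = random.random() < 0.2
    ill = random.random() < disease_probability(smoker, genetic_risk)
    key = "smoker" if smoker else "nonsmoker"
    cases[key][0] += int(ill)
    cases[key][1] += 1

for group, (n_ill, n_total) in cases.items():
    print(f"{group}: {n_ill / n_total:.1%} developed the disease")
```

Run under these assumptions, the exposed group shows a markedly higher rate than the unexposed group, yet nowhere near 100%, which is exactly the pattern a probabilistic, multivariate relationship produces.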
Chapter Summary “Abandon All Hope, Ye Who Enter Here”—Dante Alighieri, The Divine Comedy (Inferno)
Is all hope lost, then? Can we glean any reliable predictions from these complex, multivariate, adaptive systems? All is not lost: as centuries of applications of the scientific method have demonstrated, we can learn truths about the world and its reality through scientific methodology. Within the framework of complexity science, the key to finding underlying truths and foundational causal relationships, even if the latter are not direct one-to-one causal relationships, is to find those relationships and patterns that remain consistent despite variations in combinations of factors. Such relationships reveal some degree of predictability, whether they take the form of cause-effect relationships or guiding models. This process of uncovering patterns can take time, because most things in nature, including human behavior and physiology—perhaps especially—must be observed across multiple combinations of variables before patterns begin to emerge. Ultimately, it must be approached as a process, often a very long one, and there must be an inherent respect for uncertainty as part of this process.
References

Davis, S. F., & Smith, R. A. (2005). An introduction to statistics and research methods: Becoming a psychological detective. Pearson/Prentice Hall.
Douglass, F. (1849). Narrative of the life of Frederick Douglass, an American slave. Written by himself. http://www.loc.gov/resource/lhbcb.25385
Douglass, F. (1892/2003). The life and times of Frederick Douglass (African American). De Wolfe & Fiske Company/Dover.
French, J. R., & Raven, B. (1959). The bases of social power. In D. Cartwright (Ed.), Studies in social power (pp. 150–168). Research Center for Group Dynamics, Institute for Social Research, University of Michigan. https://isr.umich.edu/wp-content/uploads/historicpublications/studiesinsocialpower_1413_.pdf
Lindley, D. (2007). Uncertainty: Einstein, Heisenberg, Bohr, and the struggle for the soul of science. Doubleday.
Moreell, B. (2010, July 20). Power corrupts. Religion and Liberty, 2(6). https://www.acton.org/pub/religion-liberty/volume-2-number-6/power-corrupts
Phelan, S. E. (2001). What is complexity science, really? Emergence, 3(1), 120–136. https://doi.org/10.1207/s15327000em0301_08
Portrait of Frederick Douglass, United States Minister Resident to Haiti, and famed author of the autobiography, "A narrative of the life of Frederick Douglass, an American slave" [Photograph]. Retrieved from https://commons.wikimedia.org/wiki/File:Frederick_Douglass_MET_DT1144_Retouched_by_N-Shea.png. CC0 1.0 Universal Public Domain.
Reiss, J., & Sprenger, J. (2020, October 30). Scientific objectivity. Stanford encyclopedia of philosophy. Retrieved November 9, 2022, from https://plato.stanford.edu/entries/scientific-objectivity/
Rogers, E. (1962/1971/1983). Diffusion of innovations (3rd ed.). The Free Press.
Skinner, B. F. (1963). Operant behavior. American Psychologist, 18(8), 503. https://psycnet.apa.org/record/1964-03372-001
Stauffer, J., Trodd, Z., & Bernier, C.-M. (2015). Picturing Frederick Douglass: An illustrated biography of the nineteenth century's most photographed American. Liveright Publishing Corporation.
Stewart, J., Harte, V., & Sambrook, S. (2011). What is theory? Journal of European Industrial Training, 35(3), 221–229. https://doi.org/10.1108/03090591111120386
Thornton, S. (2022, September 12). Karl Popper. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/popper/
Trent, N. (2022, August 22). Frederick Douglass. Encyclopedia Britannica. https://www.britannica.com/biography/Frederick-Douglass
Zimbardo, P. G. (1985). Psychology and life (12th ed.). Scott, Foresman and Company.
Chapter 7
The Challenge of Uncertainty in Complex Adaptive Systems Research
The Role of Uncertainty

As a young twenty-something doctoral student in experimental psychology, specializing in human factors/human cognition, I came across a fascinating book called The Dancing Wu Li Masters (Zukav, 1979). It provided a broad overview of the history and concepts of quantum physics. What I found particularly interesting, as a behavioral science student and not a physicist, was a certain similarity in the measurement of both human behavior and the "behavior" of subatomic particles; namely, the uncertainty inherent in indirect observations. Both fields were predominantly concerned with the measurement of things that could not be directly observed. As such, the potential for measurement error, not to mention human error in interpretation, was more pronounced than in classical scientific observational research. Furthermore, what many laymen fail to realize in interpreting scientific findings is that such interpretation is based, fundamentally, on probability and perception.
Lindley (2007) writes how, in 1927, the young physicist Werner Heisenberg, under the tutelage of his mentor Niels Bohr, wrote down what would come to be known as Heisenberg's uncertainty principle. Referring to subatomic particles, the domain of quantum physics, the very basics of the uncertainty principle concern the fact that measurement of certain properties can interfere with measurement of others. For example, measuring the speed of a subatomic particle can interfere with accurate measurement of its position, and vice versa, so that you can have accuracy in one, but not both. Heisenberg's uncertainty principle, according to Lindley (2007), to some degree challenged the "classical vision" of science, in which the goal "was to define your science in terms of observations and phenomena that lent themselves to precise description – reducible to numbers, that is – and then to find mathematical laws that tied those numbers into an inescapable system" (p. 3).
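For readers who want the formal statement, the standard modern (Kennard) form of the relation, which is not quoted in the passage above and is added here only for reference, bounds the product of the position and momentum uncertainties:

```latex
% Standard (Kennard) form of Heisenberg's uncertainty relation: the product of
% the standard deviations of position (x) and momentum (p) can never be
% smaller than half the reduced Planck constant.
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

In words: the more tightly a particle's position is pinned down, the less precisely its momentum can be known, and vice versa.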
The Role of Probability and Statistical Reasoning in Complexity Science

Lindley (2007) provides a wonderful illustration of the workings of science in his discussion of the centuries of scientific speculation, theorizing, and debate that led to an understanding of Brownian motion and, more importantly, the uncertainty that belied the classic direct cause-effect understandings beneath much of Newtonian mechanics. Brownian motion has been defined as "the perpetual irregular motion exhibited by small particles immersed in a fluid. Such random motion of the particles is produced by statistical fluctuations in the collisions they suffer with the molecules of the surrounding fluid" (Basu & Baishya, 2003, p. 1). It is named after the man who first made note of the seemingly random motion of pollen grains. Robert Brown (1773–1858; Britannica, 2022) was a botanist whom his contemporary Charles Darwin described as "vastly knowledgeable" (Lindley, 2007, p. 10). It was this man who, according to Lindley (2007), "represented the capricious intrusion of randomness and unpredictability into the elegant mansion of Victorian science" (p. 10). As is not uncommon in science, Brown's observations resurrected a very old theory that hitherto could not be confirmed: Democritus' theory of the atom, "from the Greek atomos, indivisible" (Lindley, 2007, p. 14). Lindley describes Democritus' ideas as "a philosophical conceit more than a scientific hypothesis," as "what atoms were, what they looked like, how they behaved, how they interacted – such things could only be guessed at" (p. 14). Yet over 2000 years after Democritus lived, nineteenth-century scientists began piecing together the theories and observations that would confirm Democritus' reasoning. In fact, one could say the atom became a theory that unified the observations of botanists, chemists, physicists, and mathematicians on the subject of Brownian motion. It was in 1803 that John Dalton "proposed that rules of proportion in chemical reactions…came about because atoms of chemical substances joined together according to simple mathematical rules" (Lindley, 2007, p. 14). However, it would take nearly another century, as well as conjecture and debate among various scientists across several disciplines, before an understanding of Brownian motion emerged that involved probability and a certain degree of uncertainty; in short, a statistical understanding.
Brownian motion, evidently, was a statistical phenomenon: the unpredictable, apparently random jiggling of tiny particles reflected in some way the average or aggregate motion of unseen molecules. It might not be possible to explain in minute detail just why a Brownian particle moved as it did, in its erratic way, but the broad parameters of its movement ought to follow from some suitable statistical measure of the motion of unseen molecules (Lindley, 2007, p. 19).
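A rough computational sketch, a toy one-dimensional random walk rather than a physical model of any particular fluid, illustrates the statistical character Lindley describes: each particle's path is individually unpredictable, yet the mean squared displacement across many particles grows in a lawful, predictable way.

```python
# Sketch: a 1-D random walk as a toy stand-in for Brownian motion.
# Each particle's path is unpredictable, but the average squared displacement
# across many particles grows roughly linearly with the number of steps,
# which is the kind of statistical regularity Einstein exploited.
import random

random.seed(42)
n_particles = 5_000
n_steps = 100

positions = [0.0] * n_particles
for step in range(1, n_steps + 1):
    for i in range(n_particles):
        positions[i] += random.choice((-1.0, 1.0))  # one random "kick"
    if step % 25 == 0:
        msd = sum(x * x for x in positions) / n_particles
        # For unit steps the expected mean squared displacement equals `step`.
        print(f"step {step:3d}: mean squared displacement = {msd:.1f}")
```

No individual trajectory can be predicted, but the aggregate statistic tracks the step count closely, precisely the kind of statement statistical reasoning licenses about crowds of unobservable molecules.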
Lindley explains that, perhaps because a few early researchers into Brownian motion were unable to define it precisely in mathematical terms, and because of their adherence to the orderly cause-effect predictability of classical Newtonian mechanics, these researchers failed to pursue the conundrum "the more sophisticated advocates of kinetic theory soon found themselves obliged to contend with" (p. 19). The various theories and debates that took place through much of the nineteenth and early twentieth century over explaining Brownian motion and kinetic energy culminated in an acceptance of the role of statistical probability, even in the realm of physics. "To Einstein…the appeal of statistical reasoning was precisely that it allowed the physicist to make quantitative statements about the behavior of crowds of atoms, even while the motion of individual atoms remained beyond the observer's ken" (Lindley, 2007, p. 29).
Politics and Science

While academic hesitation when confronted with challenges to widely accepted existing theories is understandable and, indeed, necessary to critical evaluation, suppression of divergent views because of personal opinions, biases, or—worse—political persuasions is contrary to good science. History has repeatedly demonstrated that when politics mixes with science, at best it can delay advancement and, at worst, set it back (or eliminate it). Moreover, suppression or outright persecution of divergent scientific thought is often a harbinger of groupthink with respect to the current scientific zeitgeist. While quantum physics was taking root in Germany prior to WWI, in the years following Germany's military defeat, "the onerous Treaty of Versailles imposed huge reparations on an already impoverished country. Germany was made into an international pariah, excluded from the budding League of Nations. In the scientific world, Germans were ostracized, refused entry to international conferences, refused publication in many journals" (Lindley, 2007, p. 64).
Ironically, this banishment occurred on the eve of the rise to fame of one of the most, if not the most, famous theoretical physicists of all time, who hailed from Germany. The wry, witty, sometimes self-deprecating genius in the person of Albert Einstein would both challenge and establish some of the core foundations of both Newtonian and quantum physics. Lindley (2007) argues that although Einstein maintained a fundamental belief in the deterministic nature of the universe, he nonetheless recognized the limitations of the observer and introduced an inherent uncertainty that belied perfect predictability. According to Lindley (2007), the historian Henry Adams commented on the disturbance created by the acceptance of this unpredictability in science, as "it seemed to him…that kinetic theory was but a step away, philosophically, from chaos and anarchy" (p. 31). On the other hand, prominent physicists, contemporaries of Einstein, were more comfortable with the idea of relinquishing the dream of omniscience and perfect predictability. "To scientists it only seemed that their statistical theories actually
gave them a greater grasp of the universe and an increased power of prediction. They understood more now than they had before, and would understand still more in the future" (Lindley, 2007, p. 31).
Embracing Uncertainty in Science

A certain degree of uncertainty is perhaps more widely accepted in the modern world of theoretical physics than it was among Einstein's predecessors. Harvard professor Lisa Randall, renowned expert in theoretical physics and author of several books on the topic, expressed this shift in her book Knocking on Heaven's Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World:
Science is certainly not the static statement of universal laws we all hear about in elementary school. Nor is it a set of arbitrary rules. Science is an evolving body of knowledge. Many of the ideas we are currently investigating will prove to be wrong or incomplete. Scientific descriptions certainly change as we cross the boundaries that circumscribe what we know and venture into more remote territory where we can glimpse hints of the deeper truths beyond (2012, p. 4).
Einstein published four papers in 1905, including one on his revolutionary theory of special relativity. However, the least known of the papers published that year addressed Brownian motion and led to Einstein's subsequent development of the new field of statistical physics, which introduced elements of randomness and probability into the otherwise deterministic world of physical laws. According to Lindley (2007),
Until this time, a theory was a set of rules that accounted for some set of facts. Between theory and experiment there existed a direct, exact two-way correspondence. But that was no longer quite the case. Theory now contained elements that the physicists were sure existed in reality, but which they couldn't get at experimentally. For the theorist, atoms had definite existence, and had definite positions and speeds. For the experimenter, atoms existed only inferentially, and could be described only statistically. A gap had opened up between what a theory said was the full and correct picture of the physical world and what an experiment could in practice reveal of that world (p. 30).
Lissack wrote in 1999, “For 50 years organization science has focused on controlling uncertainty. For the past 10 years, complexity science has focused on how to understand it so as to better ‘go with the flow’ and perhaps channel that flow” (pp. 120–121). At the same time, Lindley (2007) makes the following important point: Uncertainty was hardly new to science in 1927. Experimental results always have a little slack in them. Theoretical predictions are only as good as the assumptions behind them. In the endless back-and-forth between experiment and theory, it’s uncertainty that tells the scientist how to proceed. Experiments probe ever finer details. Theories undergo adjustment and revision. When scientists have resolved one level of disagreement, they move down to the next. Uncertainty, discrepancy, and inconsistency are the stock-in-trade of any lively scientific discipline (p. 2).
Chapter Summary

Uncertainty is inherent not only in life, but in science. This is particularly true in the scientific study of complex adaptive systems, where systematic patterns of relationships must be differentiated from random relationships, the latter of which can be due to anything from measurement error to chance to biased interpretations. Placing undue certainty in hypotheses, theories, or methodologies, without respect for the role of randomness and uncertainty in its various forms, can be the prelude to scientific bias and fallacy. The expansion of knowledge of the physical universe that accompanied the great theoretical minds of the early twentieth-century quantum physicists opened the door, somewhat paradoxically, to an acceptance, however reluctant, of the role of uncertainty and probability in scientific exploration. Moreover, this slow revelation, which had evolved through decades of debate over the existence of the atom, fueled by kinetic theory (Lindley, 2007), somewhat mirrored what would become a similar debate between the behaviorists and cognitive scientists in the 1920s and 1930s. In short, quantum physics helped pave the way for cognitive psychology.
References

Basu, K., & Baishya, K. (2003, March). Brownian motion: Theory and experiment: A simple classroom measurement of the diffusion coefficient. Resonance, 8(3), 71–80. Retrieved December 2, 2022, from https://doi.org/10.48550/arXiv.physics/0303064
Britannica, The Editors of Encyclopaedia. (2022, December 17). Robert Brown. Encyclopedia Britannica. Retrieved December 19, 2022, from https://www.britannica.com/biography/Robert-Brown-Scottish-botanist
Lindley, D. (2007). Uncertainty: Einstein, Heisenberg, Bohr, and the struggle for the soul of science. Doubleday.
Lissack, M. R. (1999). Complexity: The science, its vocabulary, and its relation to organizations. Emergence, 1(1), 110–126. https://doi.org/10.1207/s15327000em0101_7
Randall, L. (2012). Knocking on heaven's door: How physics and scientific thinking illuminate our universe. Random House.
Zukav, G. (1979). The dancing Wu Li masters: An overview of the new physics. Morrow.
Chapter 8
Beauty and the Beast: The Importance of Debate in Science
Truth springs from argument amongst friends. – Philosopher David Hume (1779)
1816: The Year Without a Summer

It was a dark and stormy night… Well, maybe not stormy, but dark and cloudy. In 1815, a volcanic eruption of Mt. Tambora in Indonesia sent a spreading darkness over much of the world. Because the eruption came at the end of what is now known as "The Little Ice Age," a period of global cooling that started in the 1400s, its effects were exacerbated (Fig. 8.1). During this "year without a summer" in Europe, legend has it that four friends were gathered and telling ghost stories, and one challenged the others in a contest to write the best horror story. As the story goes, one of the friends dreamed that night what has become one of the most famous horror stories of all time. The four friends were John Polidori, Lord Byron, Percy Shelley, and Mary Wollstonecraft Godwin. Inspired by stories and places she had visited during her summer in Germany and Switzerland, 19-year-old Mary started writing the story she would eventually call The Modern Prometheus (Center for Science Education, 2022; Frankenstein Castle, 2022; López-Valdés, 2018; Scriber, 2018). One such story was of an eccentric alchemist and scientist by the name of Johann Konrad Dippel. While Dippel did contribute many scientific advances, such as the invention of the clothing dye Prussian blue, he is also rumored to have been interested in alchemy and questionable animal elixirs to prolong life (Aynsley & Campbell, 1962). Mary's story would be published 2 years later, after she had married Percy and become known as Mary Shelley. One of the many castles Mary had visited during her summer in Germany was the birthplace of Johann Konrad Dippel in 1673––a castle belonging to the family whose name meant, in German, "Stone of the Franks." We know it, of course, as the castle Frankenstein.
Fig. 8.1 Image in Frankenstein, or The Modern Prometheus (Revised Edition, 1831) Note. Theodore von Holst (1831). Frankenstein, or The Modern Prometheus (Revised edition, 1831). Retrieved from File:Frankenstein, or the Modern Prometheus (Revised Edition, 1831) (page 6 crop).jpg - Wikimedia Commons. Public domain
Mary Wollstonecraft Shelley's Frankenstein, or The Modern Prometheus, was published in 1818. She was just 21 years old at the time of publication. Though it is the monster that movies and Halloween costumes have made notorious, the story is much deeper, and fundamentally about the hubris of Dr. Frankenstein. Pride goeth before destruction (Proverbs 16:18, 2022). From the Bible to Greek tragedies to The Modern Prometheus, hubris has been recognized as a source of downfall. While morality fables link it to spiritual blindness, in science it can lead to a type of cognitive blindness that biases scientific approaches. Science is sometimes discussed in contemporary writings as if it were infallible, an assumption that is not only inconsistent with our understanding of human cognition but also dangerous to the advancement of knowledge. Specific problems with overconfidence in scientific findings and theories will be discussed below, but at the most fundamental level, as the classical Greek tragedies observed, all humans––even scientists––must be wary of hubris.
Beware of Groupthink in Scientific Research

I hate crowds in scientific research…I tell my students, when you go to these {scientific} meetings, see what direction everyone is headed, so you can go in the opposite direction. Don't polish the brass on the band wagon – Dr. V.S. Ramachandran (as cited in Doidge, 2007, p. 178).
After surgeon Dr. Paul Broca discovered the area of the brain responsible for speech, now known as Broca's area, the localization theory of the brain dominated the science of neurology for over 150 years. Based on the findings of Broca and later Wernicke, localization theory became well established. "Unfortunately…it went from being a series of intriguing correlations (observations that damage to specific brain areas led to the loss of specific mental function) to a general theory that declared that every brain function had only one hardwired location" (Doidge, 2007, p. 17). This is perhaps an example of even accomplished scientists falling prey to the error of drawing causation––particularly direct one-to-one causation––from correlation. More important than this error, however, was the groupthink mentality that is anathema to scientific discovery. "A dark age for plasticity began, and any exceptions to the idea of 'one function, one location' were ignored" (Doidge, 2007, p. 17).
The Spirit of the Times “Chance favors only the prepared mind” – Louis Pasteur (from 1854 lecture, as cited in Pasteur, Ector & Lancellotti, 2011, p. 1)
Zeitgeist is a German word roughly translated as "spirit of the times." In this positive light, which engenders thoughts of enthusiasm and creativity, the mind can indeed be prepared for scientific discovery. However, in the early 1960s, neurologist Dr. Paul Bach-y-Rita came to doubt localization while studying vision in Germany. His team showed that when a cat was shown an image, electrical activity was measured in the visual cortex, as expected. However, when the cat's paw was accidentally stroked, the visual area fired as well, indicating it was also processing touch. Also a biomedical engineer, Bach-y-Rita became the father of neuroplasticity when he developed a perceptual "helmet" that was used to cure a woman of the sensation of perpetual falling. The woman's vestibular cortex had been destroyed by an adverse drug reaction, and she was considered incurable based on the localization theory of brain function. After using the "helmet" for several training sessions, she eventually was able to get around without it. Bach-y-Rita's instrument successfully rewired her brain, so that her vestibular sense could be processed in another area of the brain. Because of his observations, Bach-y-Rita questioned the prevailing view, and because of his extensive reading of diverse thought, he discovered the overlooked work of "Marie-Jean-Pierre Flourens, who in the 1820s showed that the brain could reorganize itself" (Doidge, 2007, p. 18). In fact, further reading led Bach-y-Rita to conclude that "even Broca had not closed the door to plasticity as his followers had" (Doidge, 2007, p. 18). As he furthered his reading of the existing literature, Bach-y-Rita discovered additional findings that contradicted a strict adherence to brain localization. Dr. Jules Cotard found in 1868 that children with brain damage to Broca's area were able to regain the ability to speak, and in 1876, Dr. Otto Soltmann removed the
motor cortex from infant dogs and rabbits, yet found they regained the ability to move. Bach-y-Rita took these findings as evidence that, contrary to strict localization theory, "the brain demonstrates both motor and sensory plasticity" (Doidge, 2007, p. 19). However, despite both empirical evidence and a strong argument, because of the prevailing zeitgeist surrounding the acceptance of brain localization, "one of his papers was rejected for publication six times by journals, not because the evidence was disputed but because he dared to put the word 'plasticity' in the title" (Doidge, 2007, p. 19). This demonstrates potential bias from what the Greek philosophers called "appeal to authority," as well as groupthink, either of which is contrary to both objectivity and good scientific methodology. Such zeitgeist-associated bias is not new to science. From the infamous arguments over geocentric versus heliocentric models to Louis Pasteur's challenge to the prevailing theory of his day, that of spontaneous generation, history shows that while academic argument over such issues is critical, suppression of diverse theoretical perspectives is contrary to the pursuit of truth in science. "Belief in light quanta went against the enormous and continuing success of Maxwell's classical wave theory of the electromagnetic field…Insisting on the reality of light quanta, Einstein traveled for many years a lonely road" (Lindley, 2007, pp. 66–67). Indeed, Lindley's (2007) overview of the history of quantum physics, in his book Uncertainty, does an excellent job showing the long trajectory of debate in physics, as well as its critical role in uncovering and proving theories. In discussing the challenges faced by Niels Bohr upon first arriving in Copenhagen (where he would eventually be part of the monumental Copenhagen Interpretation), Lindley (2007) highlights the critical role of discussion and debate among scientists. "He had no laboratory and, burdened with teaching physics to medical students, hardly any time for research. Worse, he had no colleagues with whom he could thrash out his ideas" (italics emphasis mine; p. 57). Lindley goes on to describe how, "throughout his life Bohr's ideal working method was to involve himself in a continuous, open-ended discussion, a permanently convened informal seminar with colleagues" (p. 57).
The Role of Debate in Complex Adaptive Systems

This process of discussion and debate is particularly important in the science of complex adaptive systems, as it takes both coordination and debate among scientists before complex patterns begin to emerge. Again, we have seen this played out at an accelerated rate with the Covid pandemic. The sense of urgency necessitated compiling and sharing pooled data across studies and countries. Patterns which ordinarily would have taken decades––or longer––to discover, such as unusual symptoms associated with long Covid, became clear more quickly because of the volume of data available to the scientific community globally. Likewise, the identification of rare problems associated with certain vaccines, such as blood clots and/or the increased chance of heart anomalies among young men, helped scientists better understand not only the vaccines, but also the mechanism of the virus itself.
So, while scientific methodology demands a disciplined method, detecting the emergence of patterns in complex adaptive systems involves a certain openness to data, as well as the entertainment of various hypotheses. The COVID-19 pandemic presented a unique opportunity, due to the global attention and availability of social media, to examine a phenomenon related to what has been called the "wisdom of crowds" (Surowiecki, 2005).
The Wisdom of Crowds

The wisdom of crowds describes a social decision-making phenomenon, often applied to classic management judgments or economic forecasts, in which the averaged judgment across a group of individuals tends to outperform the judgment of even the strongest individual within the group. It has been widely researched and fairly well upheld since 2005. Indeed, researchers from MIT, Cambridge, Oxford, and the Max Planck Institute for Human Development designed a study to address their concern that "most empirical studies on the role of social networks in collective intelligence have overlooked the dynamic nature of social networks and its role in fostering adaptive collective intelligence" (Almaatouq et al., 2020, p. 11379). Their results indicated that "in the presence of plasticity and feedback, social networks can adapt to biased and changing information environments and produce collective estimates that are more accurate than their best-performing member" (p. 11379).
Discussions surrounding the role of information dissemination on social media often focus on the negative, but it is important to consider that the first step in the scientific method is observation, and social media provides a vast database of observations. While observation and judgment are not the same thing, and the reliability of these reports and observations must obviously be taken into account, the sheer amount of data available has the potential to be mined for trends in health and behavior research, just as is the case with marketing or security or any other domain where trends and patterns are assessed. The dissemination of information during the COVID-19 pandemic serves as a good illustration of this potential. Arguably, the global focus, coupled with the availability of social media discussions, accelerated the recognition of certain trends in otherwise unusual symptoms, such as the loss of taste and smell with the initial variant, long-term COVID, and the change in the primary symptoms being reported with additional variants. Likewise, with diseases that are either rare or present with a myriad of symptoms, such as Lyme disease, individual reports of seemingly unusual symptoms in a doctor's office can be overlooked as unrelated or coincidental. In frustration, many sufferers of such ailments turn to commiserating via social media, which has the potential to vastly increase the amount of data available. It should be noted that caution should be applied in cases of judgment, as social influences can increase
bias and lead to groupthink. Lorenz et al. (2011) argued that, while independent estimates may indeed adhere to the wisdom of crowds phenomenon,
It is hardly feasible to receive independent opinions in society, because people are embedded in social networks and typically influence each other to a certain extent. It is remarkable how little social influence is required to produce herding behavior and negative side effects for the mechanism underlying the wisdom of crowds. (p. 9024)
Their experimental research demonstrated that, indeed, "social influence triggers the convergence of individual estimates and substantially reduces the diversity of the group without improving its accuracy" (p. 9024). While acknowledging the cautions that must be applied with respect to judgments, it remains the case that the sheer amount of observational data available, even through social media, may enhance the ability to notice emergent patterns. Moreover, there may be opportunities to examine trends in judgments, such as diverse hypotheses, through the use of dispassionate automated aids across diverse social media groups consisting of both experts and nonexperts from different schools of thought.
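A minimal simulation, with purely illustrative numbers, captures both findings discussed above: averaging many independent, noisy estimates tends to beat a typical individual, while adding a shared error component, used here as a crude stand-in for social influence, erodes that advantage without improving accuracy. This is a sketch of the general phenomenon, not a reproduction of either study's method.

```python
# Sketch: wisdom of crowds with independent vs. socially influenced estimates.
# Numbers are illustrative; "social influence" is modeled crudely as a shared
# error term that correlates the individual estimates.
import random
import statistics

random.seed(0)
true_value = 100.0
n_people, n_trials = 50, 2_000

def crowd_error(shared_sd: float, private_sd: float) -> float:
    """Average absolute error of the crowd's mean estimate across trials."""
    errors = []
    for _ in range(n_trials):
        shared_bias = random.gauss(0, shared_sd)        # common (herding) error
        estimates = [true_value + shared_bias + random.gauss(0, private_sd)
                     for _ in range(n_people)]
        errors.append(abs(statistics.mean(estimates) - true_value))
    return statistics.mean(errors)

independent = crowd_error(shared_sd=0.0, private_sd=20.0)
influenced = crowd_error(shared_sd=15.0, private_sd=20.0)
typical_individual = statistics.mean(
    abs(random.gauss(0, 20.0)) for _ in range(n_trials))

print(f"typical individual error:            {typical_individual:5.2f}")
print(f"crowd mean error (independent):      {independent:5.2f}")
print(f"crowd mean error (social influence): {influenced:5.2f}")
```

With independent errors the crowd's average is far more accurate than a typical individual; once the shared error is introduced, the estimates converge on one another but the crowd's accuracy collapses back toward the individual level, mirroring the Lorenz et al. (2011) result.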
Chapter Summary

Scientists are not immune to cognitive bias. Complexity science, in particular, demands openness to various sources of data, as well as to various hypotheses, since systematic relationships within complex systems typically emerge only in the presence of large amounts of multivariate data. As with the search for truth in any domain, lively and active debate is critical to avoid groupthink, as well as to uncover underlying and emerging patterns in complex adaptive systems.
References

Almaatouq, A., Noriega-Campero, A., Alotaibi, A., Krafft, P. M., Moussaid, M., & Pentland, A. (2020). Adaptive social networks promote the wisdom of crowds. Proceedings of the National Academy of Sciences, 117(21), 11379–11386. https://doi.org/10.1073/pnas.1917687117
Aynsley, E. E., & Campbell, W. A. (1962). Johann Konrad Dippel, 1673–1734. Medical History, 6(3), 281–286. https://doi.org/10.1017/s0025727300027411
Center for Science Education. (2022). Mount Tambora and the year without a summer. Retrieved November 11, 2022, from https://scied.ucar.edu/learning-zone/how-climate-works/mount-tambora-and-year-without-summer
Doidge, N. (2007). The brain that changes itself. Viking Penguin.
Frankenstein Castle. (2022, October 26). Retrieved November 11, 2022, from https://en.wikipedia.org/wiki/Frankenstein_Castle
Hume, D. (1875). Dialogues concerning natural religion (No. 2). Penguin Books, Limited. (Original work published in 1779).
Lindley, D. (2007). Uncertainty: Einstein, Heisenberg, Bohr, and the struggle for the soul of science. Doubleday.
López-Valdés, J. C. (2018). Del romanticismo y la ficción a la realidad: Dippel, Galvani, Aldini y el moderno Prometeo. Breve historia del impulso nervioso [From Romanticism and fiction to reality: Dippel, Galvani, Aldini and "the Modern Prometheus." Brief history of the nervous impulse]. Gaceta Médica de México, 154(1), 105–110. Spanish. https://doi.org/10.24875/GMM.17002889. PMID: 29420523.
Lorenz, J., Rauhut, H., Schweitzer, F., & Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108(22), 9020–9025. https://doi.org/10.1073/pnas.1008636108
Pasteur, L., Ector, H., & Lancellotti, P. (2011). In the field of observation, chance favours only the prepared mind. Acta Cardiologica, 66(1). https://doi.org/10.1080/AC.66.1.2064959
Proverbs 16:18. King James Bible. (2022). https://www.kingjamesbibleonline.org (Original work published in 1611).
Scriber, B. (2018, October 15). Go inside Frankenstein's castle. National Geographic. http://nationalgeographic.com/travel/article/things-to-do/gernsheim-frankenstein-castle
Surowiecki, J. (2005). The wisdom of crowds. Anchor.
Von Holst, T. (1831). Frankenstein, or The Modern Prometheus (Revised edition, 1831). Retrieved from File:Frankenstein, or the Modern Prometheus (Revised Edition, 1831) (page 6 crop).jpg - Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 70 years or fewer. It is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1927.
Part III
Cognitive Psychology: Understanding the Lens Through Which We Process Information in a Complex World
Chapter 9
Cognitive Psychology: What Is In The Box?
The Brave New World of Behavioral Research

"Oh brave new world that has such creatures in't!" – Miranda, Shakespeare's The Tempest.
In 1932, Aldous Huxley published a novel that was in many ways both prescient of future scientific advancements and critical of the ramifications of prominent theories of his day, including the new psychological approach of behaviorism. Combining genetic engineering with operant conditioning, he created a dystopian society of haves and have-nots. As part of the story line, embryos were manipulated in the lab to have fetal alcohol syndrome and, in a twisted take on the nature-nurture debate, were then, as babies, subjected to operant conditioning to make them averse to beautiful things. In a particularly disturbing portion of the novel, they were shown colorful toys and other things that would normally attract their attention and, as they naturally approached them with excitement and anticipation, they received a painful electrical shock. These were destined to be the classes of "deltas" and "epsilons," born and bred to be a slave class, doomed to work the dark and beauty-less realm of the fictional city's underworld.
In 1920 at Johns Hopkins University, Dr. John Watson and his graduate student Rosalie Rayner conducted an experiment on a 9-month-old baby. Before the advent of Institutional Review Boards for the protection of human participants, which did not come about until the 1970s, the boy, known as "Little Albert," was exposed to various pleasant objects like bunnies and other furry things. Accompanying the presentations, however, were loud banging noises, and after a series of such exposures, Little Albert developed the expected conditioned fear response (De Angelis, 2010) (Fig. 9.1).
Fig. 9.1 Little Albert cries at sight of rabbit. Note. Kashyap (2021). Retrieved from File:Albert experiment.jpg – Wikimedia Commons. CC BY-SA 4.0
The Behaviorists

According to Zimbardo (1985), "for nearly fifty years American psychology was dominated by a behaviorist tradition expressed in 1913 by John Watson" (p. 273). One of the positions of the behaviorists was that, for psychology to be a "real" science, it had to concern itself only with the objectively measurable. "The only acceptable research method was objective, with no place for introspection, or any type of subjective data…behavior was viewed with an environmental emphasis – as elicited by environmental stimuli and, in turn, as having consequences in the environment" (Zimbardo, 1985, p. 273).

I must stress that behaviorism contributed greatly to our understanding of both human and other animal learning and behavior. The field also produced behavior modification tools that remain highly effective, evidence-based treatments, such as systematic desensitization (Rachman, 1967; Sebastian & Nelms, 2017). But several behaviorists took what many would consider an extreme view in their scientific approach. Watson's ideas heavily influenced one of the most famous psychologists of the twentieth century, B.F. Skinner, but Skinner was even more extreme in the view that the measurable environmental stimulus and the resulting measurable response were the only variables with which a serious scientific approach to psychology could be concerned. He

rejected any assumptions about 'satisfaction', S-R {stimulus-response} connections being learned, or any interpretation that resorted to inferences about an organism's intentions, purposes, or goals. What an animal wanted was not important; all that mattered, according to Skinner, was what could be observed directly from an experimental analysis of behavior in which predictable relationships between overt actions and environmental
conditions were empirically determined. Only in this way could there be a true science of behavior. (Zimbardo, 1985, p. 273).
Zimbardo argued that Skinner was an extreme empiricist, "because he refuses to make inferences about inner states or about any nonobservable bases for the behavioral relationships that he demonstrates in the laboratory" (p. 273). While objectivity is a key element of the systematic approach of the scientific method, we must acknowledge that complete objectivity may be difficult to achieve, particularly when the phenomenon is not directly observable, as in the case of human perception and processing. We can directly observe human behavior; we can even directly observe human brain activity; but we cannot directly observe human processing. The fact that we often cannot, as yet, measure many critical underlying processes objectively does not mean it is unimportant to approach objectivity to the degree possible. Much research on human cognition, perception, and related processes is based on fundamentally subjective measurements, supplemented by tools that enhance the objectivity of those measurements. Lest one be tempted to dichotomize these into the "hard" and "soft" science camps, it is important to recognize that this is one similarity shared in studying both human behavior and the behavior of subatomic particles: neither is directly observable, but each must be gleaned from evidence through tools that provide some degree of objectivity, as well as through replication. Several factors converged to create an urgency for the science of human cognition, what behaviorists often referred to as the "black box," namely, quantum mechanics; the study of human performance, particularly human error; and computer science.
The Role of Quantum Mechanics

Indeed, far from being contrary to the science of understanding human behavior, the study of the "black box" and the processes within it is critical, just as the study of unseen particles is critical to the theory of everything in physics. As Dr. Michio Kaku, theoretical physicist and author of the bestseller The God equation: The quest for a theory of everything (2021), states,

Even today most of our advanced science is done indirectly. We know the composition of the sun, the detailed structure of DNA, and the age of the universe, all due to measurements of this kind…This distinction between direct and indirect evidence will become essential when we discuss attempts to prove a unified theory. (p. 8)
One could say that quantum mechanics attempts to describe the behavior of unseen particles, whereas cognitive psychology attempts to describe the behavior of humans in terms of unseen information processing and decision making.
The Role of Human Factors and Computer Science

The near-simultaneous rise of human factors and the advent of computers, starting in the 1940s and close on the heels of the dawn of quantum physics, solidified the need to understand these unseen information processes. The seminal vigilance and pilot-error studies of WWII, which provided the cornerstone for human factors as a scientific discipline, demonstrated the very practical consequences of failing to understand how humans process information. One can measure input and output, as the behaviorists emphasized, but to make an impact, the researchers had to peer into the "black box" to try to understand how the input was being processed. Just as with quantum physics, however indirect the tools for doing so might be, without that understanding the output is much harder to predict, and in the case of pilot error the consequences were often deadly.
The Proverbial "Black Box"

Decisions are about predicting the future. I am not talking about obscure, supernatural prognostications here. Decisions are (or should be) based on data and logic, but they are still predictions. If they were not, one would not actually need to make a decision; the necessary course of action would be obvious and rote in most cases, based on training. But decision making involves more than just applying knowledge. Decision making involves thinking, specifically cognition. To understand how a computer makes decisions, you need to know the software code. Likewise, to understand how a human makes decisions, you need to peer inside the "black box."
Sister Mary Kenneth Keller

Computer science provided a model of the proverbial "black box": the software code. The first person in the United States to earn a PhD in Computer Science was Sister Mary Kenneth Keller. A member of the Roman Catholic order, the Sisters of Charity of the Blessed Virgin Mary, she was awarded the PhD in Computer Science from the prestigious University of Wisconsin-Madison in 1965. According to the website of her alma mater, Sister Mary Kenneth Keller, PhD, recognized a truism that underpinned the overlapping disciplines of human factors and cognitive psychology as well. In her words, "We're having an information explosion…and it's certainly obvious that information is of no use unless it's available" (Ryan, 2019, para. 9). As we have learned subsequently from computer science, to understand the output, you must try to understand not only how the stimuli are processed but also, in the case of human factors, how the stimuli are even selected from the environment in the first place.
In response to the improvements in human performance and safety following applications of research into how humans process information during various tasks and under different conditions, the field of cognitive psychology began to emerge in full force, starting in the late 1940s.
Brief History of Cognitive Psychology

Ulrich Neisser is considered the "father of cognitive psychology," not only because he coined the term in his 1967 book, Cognitive Psychology, but also because of his extensive and lengthy research career, which contributed tremendously to the field. "Neisser brought together research concerning perception, pattern recognition, attention, problem solving, and remembering" (Association for Psychological Science, 2012). One of his findings was inattentional blindness, our tendency to fail to notice the unexpected in a scene, particularly when focusing on a particular task. It was popularized in a viral video in the early 2000s in which a gorilla walks through a group of people playing basketball without being "seen" by a significant number of viewers (see videos at www.theinvisiblegorilla.com). The video was based on an experiment described in Chabris and Simons' (2010) book, The Invisible Gorilla: How Our Intuitions Deceive Us, in which basketball players were dressed in either black or white shirts and viewers were asked to count the passes made by team members wearing a particular color. About halfway through, a person in a gorilla suit walks through the scene, and roughly half of viewers did not notice it. While the experiment makes for a fun and enlightening demonstration, the results of inattentional blindness can be deadly; it has been the theorized cause behind some aviation and automobile accidents, for example. At the time of his death in 2012, Neisser was a Professor of Psychology Emeritus at Cornell University. "Neisser showed that memory, no matter how certain we are of its accuracy, is often only a partially accurate or sometimes inaccurate reconstruction of past events" (Lowery, 2012, para. 4).

Another prominent voice in the field of cognitive psychology was Sir Frederic Bartlett (1958), who "defined thinking as the skill of 'filling in the gaps in evidence'" (Hastie & Dawes, 2001, p. 30). Michael Posner (1973) later expanded on Bartlett's (1958) concept, defining thinking as "best conceived of as an extension of perception, an extension that allows us to fill in the gaps in the picture of the environment painted in our minds by our perceptual systems and to infer causal relationships and other important 'affordances' of those environments" (as cited in Hastie & Dawes, 2001, p. 3). Thinking is arguably about making connections, asking questions, looking for patterns, forming hypotheses, and seeking evidence from the environment to test them. In short, thinking follows the scientific method. Remembering is not the same as thinking, but they are related. Much of how the human brain processes information can be explained by how it stores and accesses memories.
How the Brain Retains Information

Human performance, including memory, is affected by our limited ability to select and attend to information. Over the years, research in cognitive psychology has produced several theories, but all involve, to some degree, a limited capacity to store and process data in our memory resources. At this point, we must make a distinction between data and information.
Data Versus Information

The terms are often used synonymously, but in actuality they can be quite different when considered from the standpoint of human cognition. There are numerous definitions of data, and they can vary greatly depending on context. For example, with respect to data management, data has been defined as "information that has been translated into a form that is efficient for movement or processing" (Vaughan, 2019, para. 1). This definition, however, is inadequate from the standpoint of human information processing, as it conflates data and information. Efficiency in the movement of data by a computer does not necessarily translate into information for a human processor. More data do not necessarily communicate more information, if one accepts that information is a reduction of uncertainty, as defined by communication theorists (Berger & Calabrese, 1975). While this definition was originally intended for communication from a relationship perspective, it still conveys the complex nature of information in human cognitive processing, particularly with respect to the role of perception and the fact that sender and receiver may not interpret the same data in the same manner. In other words, from the standpoint of human information processing, while one might objectively define data, whether or not such data become information is more subjective and dependent on other factors.

Interestingly, a quick search of how data (or datum, singular) are defined demonstrates the assumed connection with information. Merriam-Webster (2022) defines data as "factual information (such as measurements or statistics) used as a basis for reasoning, discussion, or calculation." I could accept this definition, sans the inclusion of the word "information." Human factors researchers know that data can be information, or they can be noise, contributing to greater uncertainty and greater confusion. One of the primary goals of human factors engineering is to manage the massive amount of data to which the brain is exposed, in order to maximize the information and minimize the noise. While increasing the efficiency of data transmission may suffice for computer processing, efficiency of data transmission alone cannot be equated with enhanced information processing in the human brain. The phrase "design-induced human error," which originated in the aviation world, describes situations that place human operators in an environment that increases the chance of information––or, more accurately, data––overload, thereby increasing the chance of human error.
As cockpits became more and more complex, with greater amounts of instrumentation, the issue of data overload became a real concern. Human-centered design and human-machine interfacing have since become major scientific fields applied to many domains in which humans must process large amounts of data quickly in order to make critical decisions. To do this effectively, we must first understand how humans process information. It is important to note that, from this point, the terms data and information will be used synonymously, as that usage is ubiquitous in the literature. I just ask the reader to be cognizant of the fact that more data do not necessarily equate to more clarity or reduced uncertainty.
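A small worked example may make this concrete (the numbers are mine, using the information-theoretic reading of "reduction of uncertainty" noted above, not a calculation from the cited sources). Suppose a fault could lie in any of eight equally likely subsystems; the operator's uncertainty is

\[ \log_2 8 = 3 \ \text{bits}. \]

A single well-designed annunciator that narrows the fault to two candidate subsystems leaves \( \log_2 2 = 1 \) bit of uncertainty and has therefore conveyed 2 bits of information. A screen full of additional readouts that the operator already knows, or cannot discriminate, conveys essentially nothing, no matter how many numbers it displays: more data, but no more information.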
The Magic Number Seven

Miller (1956), in his classic paper, The Magical Number Seven, Plus or Minus Two, was the first to discuss not only the limits of short-term memory but also the difference between bits and chunks of information. "One bit of information is the amount of information that we need to make a decision between two equally likely alternatives" (p. 83). Kantowitz and Casper (1988) stated that humans tend to transmit information at a rate of about 10 bits per second. When considering the amount of information that the brain can simultaneously manipulate, "one usually turns to the classic 'seven plus or minus two' capacity of working memory (Miller, 1956). Working memory, as its name implies, is that used to currently manage and process information inputs" (Carmody, 1993, p. 12). However, in the context of complex and dynamic environments, the capacity of working memory may be reduced to three "chunks" of information (Moray, 1986). To distinguish bits from chunks, Miller used the analogy of someone learning radio telegraph code.

Since the memory span is a fixed number of chunks, we can increase the number of bits of information that it contains simply by building larger and larger chunks, each chunk containing more information than before. A man just beginning to learn radio telegraphic code hears each dit and dah as a separate chunk. Soon he is able to organize these sounds into letters and then he can deal with the letters as chunks. Then the letters organize themselves as words, which are still larger chunks, and he begins to hear whole phrases…I am simply pointing to the obvious fact that the dits and dahs are organized by learning into patterns and that as these larger chunks emerge the amount of message that the operator can remember increases correspondingly. In the terms I am proposing to use, the operator learns to increase the bits per chunk. (p. 93)
Zimbardo (1985) described chunking as “the process of taking single items and recoding them by grouping them on the basis of similarity or some other organizing principle, or combining them into larger patterns based on information stored in long-term memory” (p. 308).
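A brief worked example may help tie bits and chunks together (the arithmetic is mine, using the standard measure that Miller's "bits" correspond to: choosing among \( N \) equally likely alternatives requires \( \log_2 N \) bits). A randomly chosen decimal digit carries \( \log_2 10 \approx 3.3 \) bits, so a string of seven unrelated digits carries roughly 23 bits yet still occupies about seven chunks of working memory. Recoding those digits into familiar units, such as a well-known area code or a memorable date, raises the bits per chunk while the number of chunks the span can hold stays roughly constant, which is exactly the recoding Miller describes for the telegraph operator.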
Memory Stores

Zimbardo (1985) wrote that "most cognitive psychologists define memory as a perceptually active mental system that receives, encodes, modifies, and retrieves information" (p. 301). It is typically classified into three different types or stores: sensory, short-term (a.k.a. working), and long-term memory. Sensory memory "is the first-stage component of most information-processing models...It is assumed that a register for each sense holds appropriate incoming stimulus information for a brief interval" (p. 305). For example, a sensory memory for the visual system is "called an icon and lasts about half a second" (p. 305). The sensory register is where most encoding takes place, or the "translation of incoming stimulus energy into a unique neural code that your brain can process" (p. 302).

While cognitive psychologists concern themselves with all types of memory, it is the active processing that takes place in short-term (a.k.a. working) memory, as well as the organization of information in long-term memory, that has long been the focus of cognitive psychology. This is because applications of cognitive psychology, such as human factors designs, are often concerned with managing the limited capacity of attentional resources, which is directly linked to memory. While much of the active information processing takes place in working memory (hence the name), it interacts with, and is influenced by, long-term memory. Cognitive psychologists over the years have outlined the processes by which information is stored and organized in long-term memory, which is generally understood to be "stored in associative networks…where sections of the network contain related pieces of information…A semantic network has many factors in common with the network structure that may underlie a database or file structure" (Wickens et al., 2004, p. 136). Further details on the organization of long-term memory are provided by distinguishing among schemas and scripts, cognitive maps, and mental models.
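To make the database analogy in the Wickens et al. (2004) quotation concrete, the sketch below represents a toy associative network and a simple spread of activation from one concept to its neighbors. It is purely illustrative; the concepts, links, and function names are my own and are not drawn from the cited sources.

```python
# A toy associative (semantic) network: each concept points to related concepts,
# mirroring "sections of the network [that] contain related pieces of information."
# Purely illustrative; not drawn from the cited sources.
from collections import deque

semantic_network = {
    "canary": ["bird", "yellow", "sings"],
    "bird":   ["animal", "wings", "canary"],
    "animal": ["living thing"],
    "wings":  ["flight"],
}

def related_concepts(start, max_depth=2):
    """Breadth-first 'spread of activation' from one concept to its associates."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for neighbor in semantic_network.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen - {start}

print(related_concepts("canary"))  # {'bird', 'yellow', 'sings', 'animal', 'wings'}
```

Retrieval here is nothing more than following stored links, which is one way of picturing why related pieces of information tend to become available together.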
Schemas and Scripts

The entire knowledge structure about a particular topic is often termed a schema…Schemas that describe a typical sequence of activities, like getting online in a computer system, shutting down a piece of industrial equipment, or dealing with a crisis at work, are called scripts (Schank & Abelson, 1977). (Wickens et al., 2004, pp. 136–137)
Cognitive Maps

Cognitive maps are simply the mental representations of visual-spatial information, such as physical layouts of a facility.
Mental Models

One of the earliest definitions of a mental model was that of Johnson-Laird (1980), who defined mental models as mental representations of the objects, sequences, and relationships that allow people to explain the functioning of a system and observed system states, and to make predictions about future system states. It was an extension of Minsky's (1975) concept of "frames": the organizing of knowledge into "chunks" of information representing stereotyped situations, including expected behaviors and outcomes, represented as networked connections. Wickens et al. (2004) define mental models as "the schemas of dynamic systems" (p. 137). They go on to explain:

Mental models typically include our understanding of system components, how the system works, and how to use it. In particular, mental models generate a set of expectancies about how the equipment or system will behave. Mental models may vary on their degree of completeness or correctness…Mental models may also differ in terms of whether they are personal (possessed by a single individual) or are similar across large groups of people. (p. 137)
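As a loose illustration of the point that a mental model generates expectancies, and that those expectancies can be incomplete or wrong, consider the minimal sketch below. The heater scenario and all names in it are hypothetical, invented for illustration rather than taken from Wickens et al. or Johnson-Laird.

```python
# A hypothetical operator's mental model of a room heater, expressed as a
# prediction rule, compared against the (different) actual system behavior.

def mental_model_predict(temp, heater_on):
    """Operator's belief: heater on -> room warms 1 degree; off -> cools 1 degree."""
    return temp + 1 if heater_on else temp - 1

def actual_next_temp(temp, heater_on):
    """Reality in this scenario: the heater has failed, so the room always cools."""
    return temp - 1

temp = 20
expected = mental_model_predict(temp, heater_on=True)
observed = actual_next_temp(temp, heater_on=True)

if expected != observed:
    # The expectancy violation is the cue that the mental model is incomplete
    # or incorrect and needs to be revised.
    print(f"Expected {expected} degrees, observed {observed}: revise the model.")
```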
Rasmussen (1982) proposed restricting mental models to the relational structures but also emphasized that a true mental model would allow for understanding and prediction of cause-effect that is not necessarily dependent on prior experience in a given situation. In other words, a certain degree of adaptability is inherent in Rasmussen’s concept of mental models. He also provided a taxonomy of mental representations that distinguished, in a progression of less to more complex and adaptive, rule-based, knowledge-based, and skill-based mental structures. One can argue this progression roughly follows that of the development of expertise.
Rule-Based

At the lowest level, system properties are represented by the cognitive mapping of cues "representing states of the environment and actions or activities relevant in the specific context." As such, Rasmussen argues that rule-based structures do not meet his qualifications for a true mental model, as they do "not support anticipation of responses to acts or events not previously met, and it will not support explanation or understanding except in the form of reference to prior experience" (1987, p. 23). I am not sure I agree that such a structure would not technically be considered a mental model, but I do agree it is not a dynamic model, as distinguished in my 1993 dissertation, and as required for higher levels of situational awareness, per Endsley's (1994) model (discussed in more detail in Chap. 16).
Knowledge-Based

Rasmussen characterizes representations at the knowledge-based level as the first in the taxonomy to qualify as true mental models, per his definition. At this point, the knowledge-based structure can be applied to problem solving, as "many different kinds of relationships are put to work during reasoning and inference…depending on the circumstances, whether the task is to diagnose a new situation, to evaluate different aspect of possible goals, or to plan appropriate actions" (p. 23).
Skill-Based

The highest level of mental representation is characterized as skill-based. It involves dynamic sensory environments, where the representations are activated by particular sensory-perceptual cues.

In addition to how we store and organize information in memory, there are fundamental distinctions in how we utilize information from both the environment and our memories. The primary distinction made within cognitive psychology is that between automatic and controlled processing.
Automatic Versus Controlled Processing

Delineating the differences between automatic and controlled processing has been a pursuit of cognitive researchers for several decades. The influential theory maintains that individuals process information via two qualitatively different means (Schneider & Shiffrin, 1977; Fisk & Schneider, 1981; Fisk & Scerbo, 1987). Automatic processing can be parallel in nature (i.e., one can "multitask"), is not limited by short-term memory (STM) capacity, requires little or no effort, is not under the person's direct control, and––here is the kicker––requires extensive, consistent training to develop. Controlled processing, on the other hand, is serial in nature (i.e., one must be focused on a particular task), is limited by the capacity of STM since it actively manipulates information, and requires effort, hence the phrase "working memory." Most importantly, it has the capacity to modify long-term memory. In short, active learning and training predominantly involve controlled processing, whereas the application of expertise typically involves automatic processing. But beyond this simple distinction, Fisk and Scerbo (1987) posed a hypothesis, empirically supported in a study by Fisk and Schneider (1981), that it is not only the quantity (e.g., number of practice hours) but also the quality of practice that differentiates the development of automatic versus controlled processes.

Stimuli and responses that are consistently mapped, or that require the same overt response to the same stimuli across trials, lead to the development of automatic processing…Stimuli
and responses that possess varied mapping, on the other hand, in which subject responses to the same stimuli change across time, will not lead to automatic processing. (Carmody, 1993, p. 266).
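The distinction between consistent and varied mapping can be made concrete with a small sketch of how the two kinds of training blocks might be generated. The letters, block sizes, and function names below are hypothetical illustrations, not the actual materials of Fisk and Schneider (1981).

```python
# Consistent mapping: the same items are targets on every block.
# Varied mapping: the target set is reshuffled from block to block, so an item
# that was a target may later appear as a distractor.
import random

LETTERS = list("ABCDEFGH")

def make_block(consistent, block_number, n_trials=5):
    rng = random.Random(block_number)              # reproducible per block
    if consistent:
        targets = set("ABCD")                      # fixed target set
    else:
        targets = set(rng.sample(LETTERS, 4))      # target set changes each block
    return [(item, item in targets)                # (stimulus, is_target)
            for item in (rng.choice(LETTERS) for _ in range(n_trials))]

print(make_block(consistent=True, block_number=3))
print(make_block(consistent=False, block_number=3))
```

With extended practice, only the consistent-mapping condition would be expected to support automatic detection of the target set.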
It should be noted that these early studies of automatic versus controlled processing utilized fairly simple vigilance tasks, in which an individual's task is to respond to a particular signal embedded in "noise," the latter being any distracting data, anything that confuses what is a signal and what is not. Vigilance tasks and, in particular, what is known as the vigilance decrement are among the most researched and consistent behavioral phenomena in cognitive psychology, and they are discussed in more detail in Chap. 11.

The most basic form of controlled processing is experienced when we learn anything new. Most people likely remember how much concentration was needed when they first learned to drive a car. Eventually, with much practice, the task became more automatic, allowing several tasks (e.g., drinking a cup of coffee, changing the radio station, etc.) to be performed simultaneously (at least theoretically!). This is further evidenced by the experience of most drivers who, at some point, arrive at a destination and realize they have done so without much conscious thought. Malcolm Gladwell does an excellent job explaining the development of expertise in his book Outliers (2008), and those familiar with it often quote the 10,000-hour maxim as the number of practice hours needed to develop expertise within virtually any domain involving complex skills.

But there are more advanced forms of controlled processing that may be experienced even among experts in a particular area. These typically take place when experts are confronted with unfamiliar or novel situations, even within their domain of expertise. According to Garland et al. (1999), problem solving is "the general term for the cognitive processes a person uses in an unfamiliar situation, which the person does not already have an adequate working method or reference knowledge for dealing with" (p. 148). It can encompass both learning something new and an expert applying existing knowledge to a novel situation. Think again of driving. If you have ever driven in a country where the rules of the road––such as which side to drive on––are very different from those you learned, controlled processing once again becomes dominant. But problem solving also comes into play in more high-stakes, strategic decision-making scenarios, as will be discussed in Chap. 18. In other words, controlled processing can be thought of as the process of learning – whether learning something completely new or learning to apply one's knowledge and skills in novel and/or creative ways. At least as far back as the seventeenth century, this distinction in human thinking was recognized. The philosopher John Locke (1632–1704) pointed out that while much of thinking is about recognizing simple associations between variables, and between stimuli and responses, "at the other extreme is controlled thought, in which we deliberately hypothesize a class of objects or experiences and then view our experiences in terms of these hypothetical possibilities" (Hastie & Dawes, 2001, p. 4).
There is not necessarily a clear division between automatic and controlled processing. In complex, dynamic tasks, we often switch back and forth. Experts, in particular, often size up a situation – assess it through observation and diagnosis. This may be fairly automatic in the case of expertise but, particularly when presented with a novel situation, they may then enter into a more controlled process of mental simulation, consisting of hypothesis formulation and testing that often involves mentally working through processes of trial and error. Indeed, the prominent Swiss psychologist Jean Piaget (1896–1980) defined what we would now call "what-if" thinking or, to borrow from cognitive psychology, mental simulation as more "formal" controlled thought, in which "reality is viewed as secondary to possibility" (as cited in Hastie & Dawes, 2001, p. 4). Garland et al. (1999) distinguished two other forms of advanced controlled processing, planning and multitasking, from problem solving.

Planning and multi-tasking are also types of processing that are able to deal with situations that are not the same each time. However, both take existing working methods as their starting points, and either think about them as applied to the future, or work out how to interleave the working methods used for more than one task. In problem solving, a new working method is needed. (Garland et al., 1999, p. 148)
While we have been studying the processes behind learning and memory for decades, much of the workings of the brain remains something of a mystery. Though the laws of Newtonian physics involve complex principles of nature, they are, to some extent, unchanging. The laws that govern the universe have remained the same since the universe began. Although nature itself adapts, the laws that govern it do not. So what happens when you add a complex adaptive system, such as a human being, into a complex but highly predictable set of natural laws? Well, the law of gravity does not change, and one can predict with near 100% accuracy what will happen to an airplane under a given set of applied forces. What is more difficult to predict is the behavior of the human, who sets those forces into action through control of the machine.
Cognitive Neuroscience: The Biology of Thought and Behavior

Santiago Ramón y Cajal (1852–1934) has been called the father of neuroscience, and he ranks with Pasteur and Darwin in his contributions to the biological sciences. "Cajal produced the first clear evidence that the brain is composed of individual cells, later termed neurons, that are fundamentally the same as those that make up the rest of the living world" (Ehrlich, 2022, para. 2). Neuroscience essentially grew out of the combined study of biology and psychology, with the term "neuroscience" first used by Francis O. Schmitt in 1962, when he established the Neurosciences Research Program at the Massachusetts Institute of Technology (Society for Neuroscience, n.d., p. 7).
Just as Neisser argued that human behavior was too complex to be explained solely on the basis of operant conditioning, Sapolsky (2017) characterizes human behavior as "indeed a mess…involving brain chemistry, hormones, sensory cues, prenatal environment, early experiences, genes, both biological and cultural evolution, and ecological pressures" (p. 5). In the first chapter of his best-selling book, Behave: The Biology of Humans at Our Best and Worst, Sapolsky (2017) cautions against scientists becoming "pathologically stuck in a bucket" (p. 9). This helps to explain not only why it can be so difficult to study and predict human behavior, but also why diversity of theoretical models and debate must be inherent in the pursuit of understanding. By way of example, he quotes John Watson, founder of behaviorism, who believed behavioral development could be entirely explained by environmental influences. Sapolsky references Watson's now-infamous claim: "Give me a dozen healthy infants, well formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select…regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors" (as cited in Sapolsky, 2017, p. 8). He goes on to provide a contrasting example in Egas Moniz, the neurologist who developed the frontal lobotomy and won the 1949 Nobel Prize, and who took the opposite, but equally "pathologically stuck," stance in the classic nature-nurture debate. Moniz stated, "normal psychic life depends upon the good functioning of brain synapses, and mental disorders appear as a result of synaptic derangements" (Sapolsky, 2017, p. 9). Beyond the damage such unyielding certainty in one's position does to scientific reasoning, ethical violations obviously come into play. Scientists, too, must be wary of the trap of intellectual hubris, assuming they are insulated from narrow-minded, biased thinking. As Sapolsky reminds us,

These were not obscure scientists producing fifth-rate science at Po-dunk U. These were among the most influential scientists of the twentieth century. They helped shape who and how we educate and our views on what social ills are fixable and when we shouldn't bother. They enabled the destruction of the brains of people against their will. And they helped implement final solutions for problems that didn't exist. It can be far more than a mere academic matter when a scientist thinks that human behavior can be entirely explained from only one perspective. (p. 10)
It is in speaking of the horrors of zoologist Konrad Lorenz's work with the Nazi Party's Office of Race Policy that Sapolsky highlights the complex, sometimes seemingly contradictory behavior of human beings. Lorenz is widely known for describing imprinting in some species; it has even entered popular culture, being referenced in the book and film versions of Twilight (Meyer, 2005; Hardwicke, 2008). Sapolsky describes Lorenz as a man at once "grandfatherly…being followed by his imprinted baby geese" and also "pathologically mired in an imaginary bucket related to gross misinterpretation of what genes do" (p. 10). The behavior of the human is more difficult to predict because the human is a complex adaptive system, acting and reacting within a complex adaptive system (CAS).
Chapter Summary

Understanding how the human brain stores, processes, and interprets information is critical to studying human behavior within complex systems, particularly given the adaptive nature of humans and the myriad possible responses that can result. However, even when adhering to scientific methodology, there remain both uncertainty and differences in perception, and a healthy respect for these factors is essential in the study of human information processing and resulting behavior. Indeed, the "black box" is enigmatic.
References

Association for Psychological Science. (2012, April 27). Remembering the father of cognitive psychology. Retrieved November 9, 2022, from https://www.psychologicalscience.org/observer/remembering-the-father-of-cognitive-psychology
Bartlett, F. (1958). Thinking: An experimental and social study. Basic Books.
Berger, C. R., & Calabrese, R. J. (1975). Some explorations in initial interaction and beyond: Toward a developmental theory of interpersonal communication. Human Communication Research, 1(2), 99–112. https://doi.org/10.1111/j.1468-2958.1975.tb00258.x
Carmody, M. A. (1993). Task-dependent effects of automation: The role of internal models in performance, workload, and situational awareness in a semi-automated cockpit. Texas Tech University.
Chabris, C. F., & Simons, D. J. (2010). The invisible gorilla: And other ways our intuitions deceive us. Harmony.
De Angelis, T. (2010, January). Little Albert regains his identity. APA Monitor, 41(1). Retrieved November 21, 2022, from https://www.apa.org/monitor/2010/01/little-albert
Ehrlich, B. (2022, April 1). The father of neuroscience discovered the basic unit of the nervous system. Scientific American. Retrieved November 9, 2022, from https://www.scientificamerican.com/article/the-father-of-modern-neuroscience-discovered-the-basic-unit-of-the-nervous-system
Endsley, M. R. (1994). Situation awareness in dynamic human decision making: Measurement. In Situational awareness in complex systems (pp. 79–97).
Fisk, A. D., & Scerbo, M. W. (1987). Automatic and control processing approach to interpreting vigilance performance: A review and reevaluation. Human Factors, 29(6), 653–660. https://doi.org/10.1177/001872088702900605
Fisk, A. D., & Schneider, W. (1981). Control and automatic processing during tasks requiring sustained attention: A new approach to vigilance. Human Factors, 23(6), 737–750. https://doi.org/10.1177/001872088102300610
Gladwell, M. (2008). Outliers: The story of success. Little, Brown.
Hardwicke, C. (Director), Godfrey, W., Mooradian, G., Morgan, M., Rosenfelt, K., & Meyer, S. (Producers), & Rosenberg, M. (Screenplay). (2008). Twilight [Film]. Summit Entertainment.
Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world: The psychology of judgment and decision making. Sage Publications.
Huxley, A. (1998). Brave new world. Perennial Classics. (Original work published 1932).
Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive Science, 4(1), 71–115. https://doi.org/10.1016/S0364-0213(81)80005-5
Kaku, M. (2021). The God equation: The quest for a theory of everything. Doubleday.
Kantowitz, B. H., & Casper, P. A. (1988). Human workload in aviation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 157–185). Academic Press.
Kashyap, V. C. (2021, February 2). Crying after seeing furry objects [Photograph]. Retrieved December 2, 2022, from File:Albert experiment.jpg – Wikimedia Commons. Licensed under the Creative Commons Attribution-Share Alike 4.0 International license (CC BY-SA 4.0). https://creativecommons.org/licenses/by-sa/4.0/deed.en
Lowery, G. (2012, February 28). Ulrich Neisser, a founder of cognitive psychology, dies at 83. Cornell Chronicle. http://news.cornell.edu/stories/2012/02/ulrich-neisser-professor-emeritus-psychology-dies
Merriam-Webster. (2022, December 11). Data. Retrieved December 20, 2022, from https://www.merriam-webster.com/dictionary/data
Meyer, S. (2005). Twilight. Little, Brown and Company.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97. https://doi.org/10.1037/h0043158
Minsky, M. (1975). A framework for representing knowledge. In P. Winston (Ed.), The psychology of computer vision (pp. 211–277). McGraw-Hill.
Moray, N. (1986). Monitoring behavior and supervisory control. In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and human performance, Vol. 2: Cognitive processes and performance (pp. 1–51). Wiley.
Neisser, U. (1967). Cognitive psychology. Appleton-Century-Crofts.
Posner, M. I. (1973). Cognition: An introduction. Scott, Foresman.
Rachman, S. (1967). Systematic desensitization. Psychological Bulletin, 67(2), 93–103. https://doi.org/10.1037/h0024212
Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4, 311–333.
Rasmussen, J. (1987). Mental models and the control of actions in complex environments. Risø National Laboratory. https://backend.orbit.dtu.dk/ws/portalfiles/portal/137296640/RM2656.PDF
Ryan, M. (2019, March 18). Sister Mary Kenneth Keller (Ph.D., 1965): The first Ph.D. in computer science in the US. University of Wisconsin-Madison School of Computer, Data and Information Sciences. Retrieved November 9, 2022, from https://www.cs.wisc.edu/2019/03/18/2759/
Sapolsky, R. M. (2017). Behave: The biology of humans at our best and worst. Penguin.
Schank, R. C., & Abelson, R. (1977). Scripts, plans, goals, and understanding. Erlbaum.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1–66. https://doi.org/10.1037/0033-295X.84.1.1
Sebastian, B., & Nelms, J. (2017). The effectiveness of Emotional Freedom Techniques in the treatment of posttraumatic stress disorder: A meta-analysis. Explore, 13(1), 16–25. https://doi.org/10.1016/j.explore.2016.10.001
Society for Neuroscience. (n.d.). The creation of neuroscience: The Society for Neuroscience and the quest for disciplinary unity 1969–1995. Retrieved November 21, 2022, from https://www.sfn.org
Wickens, C. D., Gordon, S. E., Liu, Y., & Becker, S. G. (2004). An introduction to human factors engineering. Pearson Prentice Hall.
Zimbardo, P. G. (1985). Psychology and life (12th ed.). Scott, Foresman and Company.
Chapter 10
Is Perception Reality?
The Role of Perception

In 1890, Oscar Wilde published a fascinating story about a young aristocrat by the name of Dorian Gray who, like many aristocrats of his time, commissions an artist to paint his portrait. In the story, Dorian is so enamored of his own youth and beauty that he makes a Faustian bargain, selling his soul in exchange for eternal youth. As he moves through the years unscathed in his physical beauty, his hubris and selfishness lead him to become more and more depraved, and he commits many terrible acts, including murder. While, indeed, Dorian does not age, he begins to notice that the portrait is aging in his stead. Moreover, it begins to take on a progressively more malevolent look with each of his transgressions, eventually becoming so hideous that he hides it in his attic. While all those around him perceive Dorian to be young and beautiful, the monstrous portrait, hidden from view, actually reflects reality – the true nature of the man.

Is perception reality? The philosopher George Berkeley (1685–1753) famously said, "esse est percipi" (to be is to be perceived). While Berkeley's philosophy is complex and beyond the scope of this book, it has broadly been interpreted (though one could argue somewhat inaccurately, or at least in an oversimplified way) as holding that nothing exists outside of our perception (or, if not human perception, then God's, or some grand perceiver, if you will) (Muehlmann, 1978). A modern approach to behavioral and social science is somewhat loosely related to this philosophy. Phenomenology has been defined as "an approach to research that seeks to describe the essence of a phenomenon by exploring it from the perspective of those who have experienced it" (Neubauer et al., 2019, p. 91). The authors go on to explain that phenomenology is "the study of an individual's lived experience of the world {and} by examining an experience as it is subjectively lived, new meanings and appreciations can be developed to inform, or even
re-orient, how we understand that experience" (p. 92). Thus far, I am in agreement with this approach – if the researcher's goal is solely to understand experience. But I greatly disagree with the extensions to scientific inquiry in general that seem to be posed by Husserl (1970), to whom Neubauer et al. (2019) assign the credit of defining the methodology. I am an expert in neither philosophy nor phenomenology, but according to Neubauer et al., "Husserl's approach to philosophy sought to equally value both objective and subjective experience," taking a pure phenomenology approach (p. 92) that "rejected positivism's absolute focus on objective observations of external reality, and instead argued that phenomena as perceived by the individual's consciousness should be the object of scientific study" (p. 92). The pursuit of scientific knowledge as conceptually defined in this book encompasses a pursuit of truth, and the previous characterization of Husserl seems to assume that perception is truth, with which I vehemently disagree. This is not to say that perception, and the study of perception, is not important to understanding truth and reality and, more importantly, how they impact an individual's behaviors. But, like the qualitative versus quantitative debate in research methods, it is not either/or, but both/and. I would argue that, while it is important to understand individual perceptions of the world, it is equally important, including for the understanding of human behavior and information processing, to try to objectively assess the world external to individual experience.

A phenomenological approach that may do a better job of encompassing both perception and reality is that based on the philosophy of Martin Heidegger in Being and Time (2010/1953). Hermeneutic phenomenology, according to Neubauer et al. (2019), "moves away from Husserl's focus" to an interest in "human beings as actors in the world and so focuses on the relationship between an individual and his/her lifeworld. Heidegger's term lifeworld referred to the idea that 'individuals' realities are invariably influenced by the world in which they live" (p. 94). This hermeneutic approach to phenomenology addresses at least part of something critical to understanding human behavior. An individual's perception of the world, whether accurate or inaccurate, is, to a great degree, indistinguishable from that person's reality and, as such, shapes the individual's interpretations and, more importantly, reactions to that assessed reality. Moreover, as we shall see, this can also extend to groups and organizations.
Bottom-Up Versus Top-Down Processing

This blending of subjective perception with external reality can perhaps best be demonstrated with the human visual system's processing of distance and depth. What we have learned through scientific research is that depth perception, or "the ability of the brain to determine relative distance from visual cues" (Reinhart, 1996, p. 138), relies on two different means of assessing depth:
Bottom-Up Versus Top-Down Processing
81
binocular cues, and various environmental cues of relativity and constancy that have been learned. Binocular cues include binocular disparity, which is the "displacement between the horizontal positions of corresponding images in your two eyes" (Zimbardo, 1985, p. 203). The binocular system also determines depth through the angle of convergence between the two eyes (i.e., the inward movement of the two eyes when an object is closer). Perceptions of depth that are based on learned relationships, on the other hand, include cues such as size constancy, "the ability to perceive the true size of an object despite variations in the size of its retinal image" (Zimbardo, 1985, p. 206), and linear perspective, "a depth cue that also depends on the size-distance relation…when parallel lines recede into the distance, they converge toward a point on the horizon in your retinal image" (Zimbardo, 1985, p. 204). An example of the former: if you are taking off in an airplane and look down at the cars below, you judge that they are very far away, rather than very small, because you have learned that when actual cars or similar objects appear smaller than you know them to be, it is because they are very far away.

These two means of perceiving depth provide a good example of what psychologists refer to as "bottom-up" versus "top-down" processing. Bottom-up processes take "sensory data into the system by receptors and {send} it 'upward' for extraction and analysis of relevant information," whereas top-down processes invoke "a perceiver's past experience, knowledge, expectations, cultural background, and language {to} feed down to influence how the object of perception is interpreted and classified" (Zimbardo, 1985, p. 188). Using binocular vision, the brain is driven by the visual sense organs to triangulate angles to perceive depth. At the same time, depth perception is also calculated in a top-down fashion (i.e., driven by learned experience), where the brain judges depth based on past experience.

When either type of visual cue is lacking in the environment, it can be very difficult to perceive depth. For example, when I was in the Navy in the 1990s, Naval aviation survival training covered what to do when parachuting over water. Because entering the water while wearing a parachute risks entanglement and consequent drowning, aviators were trained to release their parachutes as soon as possible upon contact with the water. However, they were also trained to wait until they felt their boots contact the water. The reason for this is that, over the open ocean in calm seas, there are very few visual cues to depth, whether binocular or learned. For example, convergence is useful only for relatively close objects (within about 10 feet), due to the mobility limitations of the eyes. Learned cues of relativity, such as size constancy or linear perspective, are also absent in an environment with limited visual references. In such cases, it is extremely difficult to distinguish whether one is 10 feet over the water or 200 feet, and therefore, despite fears of entering the water with a parachute still on, pilots were trained to ignore their perceptions of depth, which could be faulty, and instead to focus on the horizon, to wait until they felt their flight boots hit the water, and then to egress from the parachute immediately.
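The claim that convergence is informative only within roughly 10 feet can be illustrated with simple geometry (a back-of-the-envelope calculation of mine, not a figure from Reinhart, 1996). For eyes separated by an interpupillary distance IPD fixating a point at distance \( d \), the convergence angle is approximately

\[
\theta = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right) \approx \frac{\mathrm{IPD}}{d}\ \text{(for small angles)}.
\]

With IPD of about 6.5 cm, \( \theta \) is roughly 7.4° at half a meter, about 1.2° at 3 m (roughly 10 feet), and only about 0.12° at 30 m, so beyond a few meters the change in eye position becomes too small to serve as a reliable depth cue.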
The Perils of Perception

In 1797, the British admiral Lord Nelson lost his right arm in an attack on Santa Cruz de Tenerife. Not long after the injury, "he vividly began to experience the presence of his arm, a phantom limb that he could feel but not see" (Doidge, 2007, p. 180). Phantom limbs have been recorded throughout history and the scientific literature; they were first labeled by the physician Silas Weir Mitchell, who tended the wounded at Gettysburg. "Phantom limbs are troubling because they give rise to a chronic 'phantom pain' in 95 percent of amputees that often persists for a lifetime. But how do you remove a pain in an organ that isn't there?" (Doidge, 2007, p. 180).

In his book, The Brain That Changes Itself, Doidge (2007) describes Dr. V.S. Ramachandran as "the first physician to perform a seemingly impossible operation: the successful amputation of a phantom limb" (p. 187). According to Doidge, Ramachandran had been interested in phantom limbs since he was a student in medical school, and he began to suspect that neuroplasticity might explain them. The "neurological illusionist" (p. 187) used a mirror box that he invented to fool the brain of a patient who had lost his arm in a motorcycle accident into thinking the amputated limb was moving. "When Philip put his good arm into the mirror box, he not only began to 'see' his 'phantom' move, but he felt it moving for the first time" (p. 187). After 4 weeks of practicing with the box for 10 minutes a day, Philip's sensations of the phantom limb––and the excruciating pain signals that went with it––were gone. His brain had "unlearned" the pain and relearned how to properly process the sensory-motor information. "Pain and body image are closely related…But as phantoms show, we don't need a body part or even pain receptors to feel pain. We need only a body image, produced by our brain maps" (p. 188). Both Ramachandran and other scientists have since used the mirror box to treat patients with phantom limbs, and for those who show improvement from using the box, functional MRI scans have shown normalization of the sensory and motor brain maps. "The mirror box appears to cure pain by altering the patients' perception of their body image. This is a remarkable discovery because it sheds light both on how our minds work and on how we experience pain" (Doidge, 2007, p. 188).

Because they are so compelling, our perceptions do form our reality, as well as inform our decisions. The purpose of our senses (visual, auditory, tactile, etc.) is to convert the various physical forms of energy in our environment––be they light packets or sound waves or tactile vibrations––into the one language the brain understands, which is electrochemical in nature. This process of converting "one form of energy, such as light, to another form, such as neural impulses" is known as transduction (Zimbardo, 1985, p. 144). For example, many people are familiar with the receptor cells of the visual system. There are two main types of photoreceptors, rods and cones, which convert light energy to neural responses that result in various aspects of what we call vision (including brightness, detail, and color). In short, "sensation is the process of stimulation of a receptor that gives rise to neural
impulses which result in an ‘unelaborated’, elementary experience of feeling or awareness of conditions outside or within the body” (Zimbardo, 1985, pp. 143–144). On the other hand, what we might call the “interpreter” of these various forms of energy is the perceptual system. According to Zimbardo (1985), “the elaboration, interpretation, and meaning given to a sensory experience is the task of perception” (p. 144). This means, then, that our brains only use information based on our perception, or interpretation, of reality. But is there a reality outside of our perception? I would argue definitively yes. Years working as an aviation experimental psychologist with the US Navy provided me some of the best evidence that, while we base our decisions on our perceptions of reality, if those perceptions do not actually match reality, there is a problem. One of the most common causes of aviation accidents is spatial disorientation, which occurs when a pilot’s perception of reality does not match actual reality, and the consequences can be deadly. It is an example of a potentially fatal perceptual illusion.
Perceptual Illusions

An illusion is "a misinterpretation of the sensory stimulus {that} is shared by most people in the same perceptual situation" (Zimbardo, 1985, p. 190). Spatial disorientation "defines illusions or perceived position associated with relative motion" (Reinhart, 1996, p. 133). In short, you are not in the position in space that you perceive yourself to be. Examples that occur commonly in flight include the "leans," where the pilot may perceive the plane to be in a bank when in fact it is straight and level, or, perhaps more dangerously, the pilot perceives the plane to be in a climb when it is, in fact, level. Such illusions typically occur after a series of aeromaneuvers that disturb the vestibular system, which is located in the inner ear. "There are two distinct structures – the semicircular canals, which sense angular acceleration, and the otolith organs, which sense linear acceleration and gravity" (Reinhart, 1996, p. 124). Because vision provides the most valuable data relative to orientation, illusions related to spatial disorientation arising from stimulation of the vestibular senses are more likely to occur under reduced-visibility conditions.

A famous example of this occurred in the tragic deaths of John F. Kennedy, Jr., his wife, and his sister-in-law. The accident occurred at night, in hazy conditions over water. Under such conditions, as vision is reduced, the vestibular system, which typically works in coordination with the visual system to orient us in space, can become faulty. This occurs for many reasons, but suffice it to say, it is unreliable. What can be a very compelling sensation of where we are in space (e.g., upside down or right side up) provided by the vestibular system can be quite inaccurate under these conditions, and hence, pilots are trained to "trust their instruments" rather than their perceptions. According to the National Transportation Safety Board (NTSB), JFK, Jr. was not instrument rated, meaning he was not qualified to fly by reference to instruments alone. The accident was attributed to spatial disorientation
(NTSB, 2000; Phillips, 2000). JFK, Jr. simply was not where he perceived himself to be in space. His perception did not match reality. Unfortunately, reality wins every time, unless, through training, the pilot learns to rely not on perception but on the objective facts.

But why are such sensations so compelling? If you have never experienced a mismatch between what your senses are telling you and reality, it is difficult to explain how compelling the senses can be, how you would bet your life savings (and often, your life) that your airplane is straight and level, when in fact it is in a severe bank and diving. The main reason is that our senses do not typically fail in orienting us on the earth, even when our eyes are closed (thus eliminating the visual "back-up" to the vestibular senses). The closest experience to disorientation one may get on earth is with an inner-ear infection or vertigo, as the vestibular sense organs are located in the inner ear and, hence, anything that interferes with the normal movement of inner-ear fluids can lead to disorientation or loss of balance. The fact is that experience and perception interact: our past experience often guides our perceptions. This is known, again, as top-down information processing (i.e., from the memory stores in the brain to the sensory-perceptual system).
The Role of Uncertainty and Ambiguity in Perception Once again, I appeal to the original thinking of Malcolm Gladwell for an effective illustration. For centuries, the biblical story of David and Goliath has woven its way into our cultural lexicon, to the point where it has become synonymous with the victory of the underdog over the most imposing foe, and most of us never question this interpretation – but Gladwell offers an interesting alternative that demonstrates how differently the same set of observations can be interpreted, and how easily such interpretations can be shaped by prior assumptions and biases. Gladwell puts the story and its details into cultural and historical context, and, far from seeing David as the underdog, a reinterpretation emerges of Goliath as, essentially, the proverbial sitting duck. For example, Gladwell alerts us to the fact that, while the modern reader perceives David’s slingshot as little more than a toy when faced with the sword and prowess of the much larger and battle-hardened Goliath, it is the latter who is, quite to the contrary, bringing the proverbial knife to a gunfight. Gladwell proceeds to argue persuasively that both the type of sling and the density of the stone used within it would likely have produced ballistic power similar to that of a modern handgun. In fact, within historical and cultural context, slingers were valued in battle for both their accuracy and their lethality. Moreover, as a shepherd, David would likely have been experienced with this weapon for protecting his flock from the dangers of predators (Gladwell, 2013).
Ambiguous Figures Ambiguous figures provide a dramatic illustration of the influence our concepts, beliefs, and expectations can exert on perception. These are figures that can be interpreted in at least two different ways. Often, the interpretations are based, at least in part, on the viewer’s past experiences or expectations. Consider the picture in Fig. 10.1. Some viewers may see a young woman wearing a veil with a feather, and with her head turned slightly away. Others may see an old woman with a very prominent nose. If you are very familiar with this picture, you are likely to be able to switch back and forth rather easily. But if you had never seen this picture and were first told a story about a young woman attending a ball, you would be more likely to initially see the young woman. Likewise, if you were told a story of an old and frail woman, you would more likely see the old woman. This is known as conceptual priming, and several empirical studies conducted over decades have demonstrated how this phenomenon can bias our perception toward our expectations, particularly when data can be interpreted multiple ways (e.g., Balcetis & Dale, 2007; Esterman & Yantis, 2010; Goolkasian & Woodberry, 2010). So what exactly is going on in that magnificent organ we call the brain? How does it process so much information, and at the same time be so vulnerable to misperception and misunderstanding?
Fig. 10.1 Ambiguous figure. Note. Hill (1915). My wife and mother-in-law. Retrieved from https://commons.wikimedia.org/wiki/File:Youngoldwoman.jpg. Public domain
The senses are the brain’s interpreters, but the brain is constantly bombarded with data. Just in maintaining your body upright, your brain is processing massive amounts of data. It must first process a variety of forms of energy, from light packets to sound waves to proprioceptive data based on aspects of gravity and force. It is the task of the sensory system to convert these different forms of physical energy, via transduction, into the one language the brain can interpret, which is electrochemical in nature.
How The Brain Processes Information The brain is hard-wired to look for patterns. It is critical to survival. In the world of survival, speed is life or death, and that includes the speed of processing information from the environment. From the earliest age, the brain tries to organize and compress this massive amount of data into patterns. Take, for example, the infant visual system. It is your birthday. You emerge from a relatively dark world of muted external sounds into a cacophony of stimuli, all of which must be processed by your still developing brain. While the infant brain is initially bombarded by innumerable “bits” of data in the form of colors and angles and shadows, it quickly learns that certain colors and lights and angles tend to be regularly associated in patterns. In this manner, it converts multiple bits of data into one chunk of information, allowing it to recognize, for example, a human face. Since Miller published his “magical number seven” paper (Miller, 1956), evidence has accumulated that memory capacities are measured not in bits, but in numbers of familiar items (common words, for example, are familiar items). With experience, people start to literally “see” things differently; they process a situation much faster and more efficiently, due, in part, to “chunking” of bits of data into fewer, more meaningful units of information. Experts, in particular, “see” patterns that others do not readily perceive. In 1973, Chase and Simon conducted a series of studies, now classics within the field of cognitive psychology, demonstrating that experts were able to reconstruct more pieces on a chess board from memory, after viewing it for a short period of time, than were novices. The idea was that, through chunking, experts had to remember relatively few familiar configurations, rather than separate pieces on the board. However, the evidence is also strong that experts in a given domain store larger chunks of information than would be predicted by the capacity of short-term memory, so retrieval structures from long-term memory likely play a role, and may be accessed quickly by experts, when relevant, by recognition of cues in the task situation (Gobet & Simon, 1996). Indeed, such was alluded to by Chase and Simon as well.
In relation to their work on the development of skill in the game of chess, they suggest that the representative heuristic is more a tool of the expert, whereas the availability heuristic is applied more by novice players. They attribute this to the notion that advanced players have an established repertoire of patterns to be used in a response. With each game, the advanced chess player makes similarity judgments to these patterns or prototypes. The novice, on the other hand, relies on a more limited selection of specific instances which are most readily brought to mind. (Carmody, 1993, pp. 281–282).
According to Hayashi (2001), “Simon found that grand masters are able to recognize and recall perhaps 50,000 significant patterns (give or take a factor of two) of the astronomical number of ways in which the various pieces can be arranged on a board” (p. 8). Herbert A. Simon, a Carnegie Mellon professor of psychology and computer science, was a Nobel laureate and an expert in human decision making. Simon concluded that “experience enables people to chunk information so that they can store and retrieve it easily” (as cited in Hayashi, 2001, p. 8). The advantage of pattern recognition and chunking is that they increase both the speed and the efficiency of information processing. This is both a strength and a weakness, because overreliance on learned “short-cuts” can lead to the errors associated with heuristics and cognitive biases, discussed in more detail in Chap. 13. The problem is that the brain, in its haste, may unconsciously construct its own picture to aid interpretation in the absence of cues. Let us return for a moment to Posner’s (1973) apt description of thinking as an extension of perception “that allows us to fill in the gaps in the picture of the environment painted in our minds” (as cited in Hastie & Dawes, 2001, p. 3).
Filling in the Gaps In 1999, a movie was released under the title The Sixth Sense. It was immensely popular, largely because of its surprise ending. But what made it such a surprise? Director M. Night Shyamalan, knowingly or not, masterfully manipulated human perception as skillfully as any great magician. In particular, he demonstrated a keen insight into one of the most prominent cognitive biases, confirmation bias. At the beginning of the movie, the viewer is given the “hypothesis” that the main character, a psychologist named Malcolm Crowe and played by actor Bruce Willis, survives a violent attack. Throughout the rest of the movie, the viewer is presented with ambiguous cues – situations that can be interpreted multiple ways. One example is a pivotal scene in which Dr. Crowe and his wife are having an anniversary dinner. The audience generally presumes his wife to be angry and ignoring him because he is late, and that this has exacerbated the tension that has built between them since the trauma. It is only after the “big reveal” at the end of the movie that the audience realizes this scene, as well as others, can be interpreted quite differently. Similar to Gladwell, M. Night Shyamalan is a master at presenting different
ways of interpreting the same set of observations. Understanding how these cues are interpreted, and more importantly, used in the process of decision making, is largely within the domain of cognitive psychology.
Expectations and the Black Box A book that offers a unique and fascinating perspective on just how important it is to understand this interaction between the environment, our connection to it through our sense organs, and the subsequent workings within the “black box” of the brain is Eyes Wide Open, by Isaac Lidsky (2017). Lidsky was born with retinitis pigmentosa, a condition which caused him to gradually become blind over the course of his youth. The experience, far from limiting Lidsky, led him to acute insights about how we create the reality in which we navigate. Again, I do not make the argument, nor does Lidsky, that there is not an actual physical reality independent of our senses or perceptions. However, we are actors within an external reality, and our actions are largely governed by the internal cognitive model of reality upon which we operate. “You do not see with your eyes, you see with your brain” (Lidsky, 2017, p. 9). This seemingly simple statement is actually quite profound and has far-reaching implications for the study of human behavior, particularly with respect to decision making. What Lidsky means by the phrase “eyes wide open” involves training one’s mind to recognize, and to some degree bypass, the limitations in vision, real or metaphorical, that are imposed by our preconceived notions. This process, he argues, can open our minds to almost limitless possibilities, as well as improve our practical behaviors. According to Lidsky (2017), “We build up a vast database of experiences and design for ourselves rules and logic consistent with those experiences. We generalize, simplify, predict” (p. 2). But throughout his experience of going blind, Lidsky learned––nay, experienced in a deeply moving and unique way––just how limiting our predispositions can be. There is a yin to the yang of increased processing efficiency. But far from lamenting our potential limitations, Lidsky focuses on how understanding the way our brain processes information can help us actively seek ways to enhance our perspective of the world. His experience of going blind provided him with a very unique insight into the true nature of what leaders refer to as vision, which goes far beyond sensation, and into the understanding that “worse than being blind is having sight but no vision” (Helen Keller, as quoted in Lidsky, 2017, p. ix). I would argue that the primary source of this lack of vision, at least from the perspective of my own bias toward complex adaptive systems, is a mindset that predisposes us to look for simple, direct, one-to-one causal relationships rather than complex interactions. The mistake is in not recognizing the complexity, and the multivariate, interacting, and adaptive nature of the environment in which we make decisions. Decisions can, and often do, have very real––sometimes even deadly––consequences. As Lidsky puts it, “what you don’t know can’t hurt you, but what you think you know certainly can” (p. 1). According to the jacket that accompanies the hard
cover of Lidsky’s book, “Fear can give us tunnel vision – we fill the unknown with our worst imaginings and cling to the familiar. But when we face new challenges, it’s most important for us to recognize the complexity of the situation, to accept uncertainty, and to avoid falling prey to our emotions…Whether we’re blind or not, our vision is limited by our past experiences, biases, and emotions.” This is a statement that has been supported in numerous empirical studies in cognitive science. In fact, some of the most well-supported and repeated findings, in both laboratory and field research, support the idea that our actions are based on our decisions, our decisions are based on our perceptions of reality, and our perceptions of reality are largely shaped by expectations based on past experience. As summarized in Lidsky’s (2017) book jacket, “It isn’t external circumstances, but how we perceive and respond to them, that governs our reality.” Lidsky’s blindness paradoxically opened his eyes to uniquely experiencing phenomena that cognitive psychologists have been studying empirically for nearly a century. Lidsky begins his book with a story of his young daughter who, with her limited experience of the aquatic world and of the academic understanding of biology and physics, perceived fish as swimming backward by wagging their heads. “In her mind, in her world, this was a fact as true as any others” (p. 1). Lidsky’s point is that our tendency to define reality based on such assumptions and presumptions is not completely lost as we mature. “Our lives are full of fish swimming backwards. We make faulty leaps of logic. We make myriad assumptions. We prejudge. We harbor biases. We assume. We experience our beliefs and opinions as incontrovertible truths. We know that we are right and others are wrong” (pp. 1–2). Empirical findings have demonstrated that, indeed, this is a universal principle in human cognition. It is not limited to one political, religious, ideological, or ethnic group or another. In fact, it is not even limited to the human species. Lidsky uses, in his book, the same phrase I have used in my classes for decades: “We are hardwired to do this” (p. 2). I came to this understanding through years of formal study in the brain sciences; Lidsky came to it through revelation brought on by his protracted experience of going blind and having, as he describes it, the “blessing” of being forced to learn new ways of processing information from the environment, new ways of “seeing” the world. But why are animals hardwired to do this? As indicated in an earlier chapter, it is an essential part of learning how to navigate and, more importantly, survive in a world full of nearly limitless, but not always useful or relevant, data. The brain must figure out how to both recognize patterns, which is essentially a form of data crunching or chunking, and filter out irrelevant or redundant cues. According to Lidsky, “This is how we learn to survive, to interact with our world. We are creatures designed to find order in chaos, definition in ambiguity, certainty in a world of possibilities” (p. 2). The problem is that, while this tendency helps us process information much more quickly and efficiently, it also makes us susceptible to overreliance on previously formed connections and assumptions. In other words, it decreases our open-mindedness with respect to novel interpretations and can make us vulnerable to overfitting previously successful models to inappropriate situations.
The latter can lead to disastrous business decisions, or even catastrophic
loss of life and far-reaching political implications, as will be illustrated by examples in Chap. 21. When presented with an incomplete picture or ambiguous information, the brain will create a full picture. It will fill in the gaps. And this can lead to tragic results.
Chapter Summary Hastie and Dawes (2001) define thinking as “the creation of mental representations of what is not in the immediate environment” (p. 3). The data may be real and factual, but our interpretations are affected by our perceptions, and our perceptions are often affected by our preconceived notions. While there is a reality outside of perception, our decisions and behaviors are based largely on an internal construction of that reality, grounded in perception. This construction draws on both bottom-up processing, or information received from our senses, and top-down processing, which is based largely on past learned experience. Problems occur when there is a mismatch between perception and external reality, and conditions for such mismatches are particularly ripe when relevant environmental and informational cues are ambiguous. So, before we delve further into how perception, and more importantly, misperception, may impact decision making, we must look further into the nature of human error in organizational performance.
References Balcetis, E., & Dale, R. (2007). Conceptual set as a top—down constraint on visual object identification. Perception, 36(4), 581–595. https://doi.org/10.1068/p5678 Carmody, M. A. (1993). Task-dependent effects of automation: the role of internal models in performance, workload, and situational awareness in a semi-automated cockpit. Texas Tech University. Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In W. G. Chase (Ed.), Visual Information Processing (pp. 215–281). Academic Press. https://doi.org/10.1016/B978-0-12170150-5.50011-1 Doidge, N. (2007). The Brain That Changes Itself. Viking Penguin. Esterman, M., & Yantis, S. (2010). Perceptual expectation evokes category-selective cortical activity. Cerebral Cortex, 20(5), 1245–1253. https://doi.org/10.1093/cercor/bhp188 Gladwell, M. (2013). The unheard story of David and Goliath. TED Conferences. https://www.ted. com/talks/malcolm_gladwell_the_unheard_story_of_david_and_goliath Gobet, F., & Simon, H. A. (1996). Recall of random and distorted chess positions: Implications for the theory of expertise. Memory & Cognition, 24(4), 493–503. https://doi.org/10.3758/ BF03200937 Goolkasian, P., & Woodberry, C. (2010). Priming effects with ambiguous figures. Attention, Perception, & Psychophysics, 72(1), 168–178. https://doi.org/10.3758/APP.72.1.168 Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world: The psychology of judgment and decision making. Sage Publications.
Hayashi, A. M. (2001, February). When to trust your gut. Harvard Business Review. https://hbr. org/2001/02/when-to-trust-your-gut Heidegger, M. (2010/1953). Being and time (J. Stambaugh, Trans.). State University of New York. (Original work published in 1953 by Max Neimeyer Verlag). Hill, W. E. (1915). My wife and mother-in-law, by the cartoonist W. E. Hill, 1915 (adapted from a picture going back at least to an 1888 German postcard. Retrieved November 21, 2022, from https:/commons.wikimedia.org/wiki/File:Youngoldwoman.jpg. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1927. Husserl, E. (1970). The crisis of European sciences and transcendental phenomenology: An introduction to phenomenological philosophy. Northwestern University Press. Lidsky, I. (2017). Eyes wide open: Overcoming obstacles and recognizing opportunities in a world that can’t see clearly. Penguin Random House LLC. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97. https://doi.org/10.1037/ h0043158 Muehlmann, R. (1978). Berkeley’s Ontology and the Epistemology of Idealism. Canadian Journal of Philosophy, 8(1), 89–111. https://doi.org/10.1080/00455091.1978.10716210 National Transportation Safety Board. (2000). National Transportation Safety Board Aviation Accident Final Report. (Accident Number NYC99MA178). https://app.ntsb.gov/pdfgenerator/ reportgeneratorfile.ashx?eventid=20001212x19354&akey=1&rtype=final&itype=ma Neubauer, B. E., Witkop, C. T., & Varpio, L. (2019). How phenomenology can help us learn from the experiences of others. Perspectives on Medical Education, 8, 90–97. https://doi. org/10.1007/s40037-019-0509-2 Phillips, D. (2000, July 7). NTSB says disorientation likely caused JFK Jr. crash. The Washington Post. https://www.washingtonpost.com/archive/politics/2000/07/07/ ntsb-says-disorientation-likely-caused-jfk-jr-crash/08cd60a8-74ae-46e1-a2e8-960ab2e71116/ Posner, M. I. (1973). Cognition: An introduction. Scott, Foresman. Reinhart, R. O. (1996). Basic Flight Physiology (2nd ed.). McGraw-Hill. Shyamalan, M. K. (Director & Screenplay Writer), Marshall, F., Kennedy, K. & Mendel, B. (Producers). (1999). The Sixth Sense [Film]. Hollywood Pictures. Wilde, O. (2006). The picture of Dorian Gray. Oxford University Press. (Original work published 1890). Zimbardo, P. G. (1985). Psychology and life (12th ed.). Scott, Foresman and Company.
Chapter 11
The Nature of Human Error in Decision Making
To Err Is Human Error has long been recognized as inherent to the human condition. Hawkins (1987/1993) writes that when “the Roman orator Cicero declared that ‘it is in the nature of man to err’…he added that ‘only the fool perseveres in error’” (p. 30). But Hawkins counters that While a repeated error due to carelessness or negligence, and possibly even poor judgement, may be considered the act of a fool, these are not the only kinds of error made by man. Errors such as those which have been induced by poorly designed equipment or procedures may result from a person reacting in a perfectly natural and normal manner to the situation presented to him…Such errors are likely to be repeated and are largely predictable. (p. 30; italics mine)
The scientific study of human error really began in the middle of the twentieth century, spurred in large part by the pioneering work of researchers such as Fitts and Jones (1947). Error has been defined as “a human action that fails to meet an implicit or explicit standard…{It} occurs when a planned series of actions fails to achieve its desired outcome and when this failure cannot be attributed to the intervention of some chance occurrence” (Senders & Moray, 1991, p. 20). As necessity is the mother of invention, safety and error reduction is often the mother of human factors research. As such, a series of studies conducted during and just after WWII established one of the most enduring findings in human performance: the vigilance decrement.
The Vigilance Decrement Vigilance, also known as monitoring behavior or sustained attention, has been defined as “the ability of observers to maintain their focus of attention and to remain alert to stimuli for prolonged periods of time” (Warm, 1984, p. 2). The vigilance decrement, then, is “a decline in signal detection rate that occurs over time on a sustained-attention task” (McCarley & Yamani, 2021, p. 1675). The classic vigilance task is a monotonous one in which there is a clear signal, and the task of the observer is to make a response to that signal. However, these signals occur amid what is called noise, distractions that can increase the difficulty of detecting a signal. According to Carmody (1993), “studies in vigilance date back to World War II, when it was noticed that British radar operators missed more and more targets as their time on watch progressed” (p. 256). Both field and laboratory studies were then developed in three countries, the United States, Canada, and England, to address this concern about missed signals among military personnel, including radar and sonar operators. Mackworth (1948) was the first to systematically and empirically study the phenomenon, finding a decline in performance over time, with performance defined predominantly as the percentage of correct detections of signals. In fact, despite the very real motivation to perform in a time of war in order to prevent the death or destruction of self, loved ones, or homeland, detection of targets began to decline after approximately 30 min on duty (Parasuraman et al., 1987; Warm, 1990) (Carmody, 1993, p. 256). This regularly occurring and fairly predictable phenomenon came to be known as the vigilance decrement. Mackworth’s 1948 journal article described a study that had been implemented to examine why British radar operators were missing critical signals. For the study, he devised a clock-like instrument in which a pointer, similar to a clock’s second hand, slowly “ticks” along a blank background. The signal to which observers were instructed to respond was a “jump” of two ticks by the pointer, which occurred at random intervals. The “noise,” then, consisted of the regular, single ticks. There are four possible responses in a vigilance task: hits, in which there is a signal and it is correctly identified; misses, in which there is a signal and the observer fails to identify it; false alarms, in which an observer incorrectly identifies something as a signal; and correct rejections, in which there is no signal and the observer correctly exhibits a nonresponse. In fact, Bainbridge (1983) stated, “It is impossible for even a highly motivated human being to maintain effective visual attention towards a source of information on which very little happens” (p. 776). According to Carmody (1993), “subsequent research indicated that various manipulations could produce sharp decrements much sooner than the 30-35 minutes first indicated (Parasuraman et al., 1987)” (Carmody, 1993, p. 257). The vigilance decrement remains a concern in any situation in which humans are tasked with monotonous, passive monitoring. Modern real-world examples include airport screeners searching luggage for contraband or radiologists searching for indications of pathologies. Though impacted by multiple factors, declines in performance can generally be expected after about 20–30 minutes on task.
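Because the four outcome categories are defined jointly by the state of the world (signal present or absent) and the observer’s response, they lend themselves to simple quantitative summaries. The sketch below is a minimal illustration in Python using made-up counts for three successive blocks of a watch (not data from Mackworth’s study); it shows how hit rate and false-alarm rate are computed per block, which is how a vigilance decrement, a falling hit rate over time on task, shows up in practice.

```python
# Illustrative only: hypothetical counts for three successive blocks of a
# monotonous watch-keeping task (not Mackworth's actual data).
blocks = [
    {"hits": 46, "misses": 4,  "false_alarms": 3, "correct_rejections": 447},
    {"hits": 39, "misses": 11, "false_alarms": 2, "correct_rejections": 448},
    {"hits": 33, "misses": 17, "false_alarms": 2, "correct_rejections": 448},
]

for i, b in enumerate(blocks, start=1):
    signal_trials = b["hits"] + b["misses"]                      # signal present
    noise_trials = b["false_alarms"] + b["correct_rejections"]   # signal absent
    hit_rate = b["hits"] / signal_trials
    false_alarm_rate = b["false_alarms"] / noise_trials
    print(f"Block {i}: hit rate = {hit_rate:.2f}, "
          f"false-alarm rate = {false_alarm_rate:.3f}")
```

Run over real watch data, a steadily declining hit rate alongside a roughly constant false-alarm rate is the classic signature of the decrement described above.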
A few decades after the classic vigilance decrement studies, there was a profusion of interest within cognitive psychology in human decision making and, in particular, human error (Edwards & Slovic, 1965; Grossberg & Gutowski, 1987; Kanarick et al., 1969; Keinan, 1987; Tversky, 1972; Tversky & Kahneman, 1974; Tversky & Kahneman, 1981). Again, this was rooted largely in the aviation domain, in large part because “the task of decision making within the aviation environment is, quite obviously, one involving complex, uncertain, risky decisions. Such decisions are often made under high time and load stresses, and often with the potential for producing hazardous results” (Carmody, 1993, pp. 253–254). This link between human error and overall system performance followed the general principle of complex adaptive systems, which is that relatively small changes (i.e., small errors) can produce disproportionately large outcomes (e.g., large-scale system failures and/or disasters). As such, the field of systems engineering, particularly as focused on the human factor, merged with safety research. If there is a father (or mother) of the scientific study of human error, particularly as applied to improving safety, it is undoubtedly Dr. James Reason. In The Age of Reason (Sumwalt, 2018), the Chairman of the National Transportation Safety Board (NTSB), Robert Sumwalt, writes that Reason is to safety as Freud is to psychoanalysis, Noam Chomsky is to linguistics, and Albert Einstein is to physics. Sumwalt writes that Reason’s “contributions to safety have been influential not only in transportation and workplace safety, but also in fields as varied as healthcare, nuclear power, and fraud prevention” (para. 4). Reason’s particular focus throughout his multidecade career has been upon classifying errors based on “the psychological mechanisms implicated in generating the error” (p. 81). In discussing the psychological origins of human error, Reason makes two major distinctions. The first is between errors and violations, and the second is between active and latent failures.
What Is Error? Reason (1995) defined error as “the failure of planned actions to achieve their desired goal” (p. 81). He outlined two ways in which such failure can occur: cases in which the plan is adequate but the actions do not go according to plan, and cases in which “the plan is inadequate to achieve the intended outcome” (p. 91). In the former case, subcategories of error include slips and lapses. “Slips relate to observable actions and are associated with attentional failures. Lapses are more internal events and relate to failures of memory” (p. 81). Both forms of error are “almost invariably associated with some form of attentional capture, either distraction from the immediate surroundings or preoccupation with something in mind. They are also provoked by change, either in the current plan of action or in the immediate surroundings” (p. 81).
Whereas slips and lapses are errors of execution that deviate from an adequate plan, mistakes derive from a plan that is itself inadequate. Reason categorizes mistakes as either rule-based or knowledge-based. Rule-based mistakes occur where a person has adequate training, experience, and/or appropriate procedures or solutions available, but selects or applies them inappropriately. Knowledge-based mistakes, on the other hand, result from Novel situations where the solution to a problem has to be worked out…{It} entails the use of slow, resource-limited but computationally-powerful conscious reasoning carried out in relation to what is often an inaccurate and incomplete ‘mental model’ of the problem and its possible causes. (p. 81)
Recalling Rasmussen’s classification of the highest level of mental representation, what he termed skill based: This dynamic world model can be seen as a structure of hierarchically defined representation of objects and their behaviour in a variety of familiar scenarios, i.e., their functional properties, what they can be used for, and their potential for interaction, or what can be done to them. (Rasmussen, 1987, pp. 20–21)
Errors Versus Violations Reason (1995) distinguishes errors from violations, the latter of which he defines as “deviations from safe operating practices, procedures, standards, or rules.” He highlights the major differences between errors and violations as predominantly cognitive in the case of the former and socio-organizational in the case of the latter. Whereas attention and memory issues are often at the core of errors, motivational and leadership issues are often at the core of violations. Moreover, with respect to possible solutions, “errors can be reduced by improving the quality and the delivery of necessary information within the workplace. Violations require motivational and organizational remedies” (p. 82). According to Reason (1995), violations fall into three main categories. Routine violations involve regular cutting of corners. Optimizing violations are “actions taken to further personal rather than strictly task-related goals.” Finally, violations can be “necessary or situational” and may be a means for increasing efficiency or used in cases “where the rules or procedures are seen to be inappropriate for the present situation” (p. 82).
Active Versus Latent Human Failures Reason established his principles of safety on the study of human behavior, particularly failures of the human factor within complex systems. Indeed, any student or practitioner of human factors would be familiar with Reason’s “Swiss Cheese Model,” shown in Fig. 11.1.
Fig. 11.1 Swiss cheese model by James Reason. Note: Perneger (2000). Retrieved from https://openi.nlm.nih.gov/detailedresult?img=PMC1298298_1472-6963-5-71-1&req=4. CC BY 2.0
The key to the Swiss Cheese model rests on an understanding of latent versus active failures. This concept is critical and has undergirded every understanding of human error within complex systems ever since its inception. Reason (1995) wrote, In considering how people contribute to accidents a third and very important distinction is necessary – namely, that between active and latent failures. The difference concerns the length of time that passes before human failures are shown to have an adverse impact on safety. For active failures the negative outcome is almost immediate, but for latent failures the consequences of human actions or decisions can take a long time to be disclosed, sometimes many years. (p. 82)
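One common, if simplified, way to read the Swiss Cheese model quantitatively is as a defense-in-depth calculation: an accident requires the “holes” in every defensive layer to line up. The sketch below is only an illustration with hypothetical, independently failing layers (real defenses are rarely independent, which is precisely why latent organizational weaknesses are so dangerous); it shows how a latent condition that enlarges the holes in even one layer multiplies through the whole system.

```python
from math import prod

def accident_probability(layer_failure_probs):
    """Probability that a hazard passes every defensive layer,
    assuming (unrealistically) that layers fail independently."""
    return prod(layer_failure_probs)

# Hypothetical baseline: four defenses (design, procedures, supervision, crew),
# each stopping all but a small fraction of hazards.
baseline = [0.01, 0.02, 0.05, 0.10]

# Latent condition: the supervision layer degraded (its "holes" have grown).
degraded = [0.01, 0.02, 0.30, 0.10]

print(f"Baseline accident probability per hazard: {accident_probability(baseline):.2e}")
print(f"With one degraded layer:                  {accident_probability(degraded):.2e}")
```

The point is not the particular numbers but the structure: no single layer has to be perfect, yet a latent weakness quietly raises the overall risk, and correlated weaknesses across layers, like the “disease of sloppiness” in the investigation quoted below, are worse still.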
For illustration of active versus latent errors, Reason (1995) relays the observations of a Mr. Justice Sheen within the context of an accident investigation concerning the capsizing of a ship in 1987. The report illustrates not only latent versus active errors, but also the role of leadership and decision making within what Turner (1976) called the “Disaster Incubation Period.” At first sight the faults which led to this disaster were the…errors of omission on the part of the Master, the Chief Officer and the assistant bosun…But a full investigation into the circumstances of the disaster leads inexorably to the conclusion that the underlying or cardinal faults lay higher up in the Company. From top to bottom the body corporate was infected with the disease of sloppiness. (as cited in Reason, 1995, p. 82)
Unfortunately, such observations are not uncommon in accident investigations. It is said of the Titanic that the decisions that contributed to the tremendous loss of life began in the shipyards, when the number of lifeboats, while not necessarily out of compliance with the laws of the day, nonetheless proved far too few for those aboard. Decision errors then continued to compound throughout the ill-fated journey, due to a dangerous combination of overconfidence, poor risk analysis, and a high power distance culture among the crew that prevented communication of valid objections to those
poor decisions. Indeed, Heinrich’s (1941/1959) iceberg metaphor, which illustrates the final, direct cause of a major disaster as the proverbial tip of the iceberg, is both particularly relevant and somewhat literal in the case of the Titanic. While most aviation accidents are attributable to human error, one can easily replace the Master, Chief Officer, and assistant bosun from Reason’s case above with the pilot and flight deck crew of an airplane. One researcher who testified regarding human factors issues with the Boeing 737 MAX aircraft was Dr. Mica Endsley, a world-renowned expert in pilot situational awareness. In her testimony, Dr. Endsley summarized nearly a century of research and application of human factors as a discipline: There is a long history of blaming the pilots when aviation accidents occur. However, this does nothing towards fixing the systemic problems that underlie aviation accidents that must be addressed to enhance the safety of air travel. Often accidents are caused by design flaws that do not take the human operator’s capabilities and limitations into account. Bad design encourages accidents; good design prevents accidents. Solving these systematic design challenges is the primary calling of the field of Human Factors Engineering, which applies scientific research on human abilities, characteristics, and limitations to the design of equipment, jobs, systems, and operational environments in order to promote safe and effective human performance. Its goal is to support the ability of people to perform their jobs safely and efficiently, thereby improving the overall performance of the combined human-technology system. (The Boeing 737 MAX, 2019, p. 120)
For this reason, the human factors discipline has coined the phrase “design-induced human error” to capture the many latent factors that contribute to the final active error, which is the more obvious, but only the final, domino in a long chain (Heinrich, 1959). Indeed, most, if not all, major disaster investigations arguably find similar patterns. One could write a book, and a long one, diagnosing disasters throughout history from the standpoint of active versus latent errors. So often have latent errors been linked to major disasters that Reason refers to them as latent pathogens (e.g., Reason, 1990). According to Carmody-Bubb (2021), “More recently, systems engineering principles focusing on latent errors within complex systems have been applied in areas as seemingly diverse as biopharmaceutical manufacturing (Cintron, 2015), medical practice (Song et al., 2020; Braithwaite et al., 2018; Neuhaus et al., 2018; Xi et al., 2018), military training (Hastings, 2019), innovation (Zhang et al., 2019), and cybersecurity (Kraemer & Carayon, 2007; Kraemer et al., 2009; Mitnick & Simon, 2003; Barnier & Kale, 2020). Such applications demonstrate why the work of researchers such as Reason (1995) and Rasmussen (1982) continues to be relevant today, expanding into fields outside of which their models were originally developed” (p. 5).
Potential Future Applications Applied in various areas, these human factors tools and methods have decades of evidence supporting their efficacy in improving not only system performance but overall safety, particularly in the domain of aviation, to which they have the longest
history of application. Despite the myriad variables, from the potential mistake of an individual mechanic to the capricious vicissitudes of weather, aviation is generally considered the safest mode of travel. As I indicated in the introduction to this book, that is far from happenstance. It is due to the fact that, longer than any other socio-technical system, the aviation industry has been deliberately and systematically applying human factors models and systems engineering approaches, almost since its inception. But this long association between human factors applications and systems safety is not unique to aviation, as indicated by its successful application in other fields. At its core is the behavior of humans within complex socio-technical systems. However, despite its century-long foundation of empirical and practical support, human factors tools and methodologies remain highly underutilized. In the year I have spent writing this book, there have been three high-profile tragedies in socio-technical systems in which I believe these human factors applications are underutilized: two involving crowd surges at large public venues, and one involving a school shooting in Uvalde, Texas. As I will focus on Uvalde in Chap. 14, I will turn now to the tragedies involving human stampedes.
Risk Management in Complex Adaptive Systems Within days of this writing, over the last weekend of October 2022, Halloween revelers in Seoul, Korea, became victims of a crowd surge, or human stampede, resulting, according to a National Public Radio (NPR) report, in more than 150 deaths and more than 140 injuries. This occurred less than a year after a similar disaster at a concert halfway around the world, in Houston, Texas. The Houston concert crowd surge. A timeline outlined by reporters Meredith Deliso, Jenna Harrison, Bill Hutchinson, Alexander Mallin, and Stephanie Wash presents a long prelude of red-flag incidents (2021). Reporter Ray Sanchez wrote on November 7, 2021, that Paul Wertheimer, who founded the Crowd Management Strategies consulting firm and has campaigned for safer concert environments for decades, called what happened at the festival a crowd crush – a highly preventable tragedy…While many details surrounding the tragedy remain unclear, Wertheimer said crowd surges typically unfold over time. They don’t just happen. (para. 4, 11)
Those words echo decades of human factors research into various disasters. They do not just happen. The potential dangers of crowd behaviors during evacuation have been studied for quite some time. An article by Helbing and Johansson (2010) contends “Panic stampede is one of the most tragic collective behaviors, as it often leads to the death of people who are either crushed or trampled down by others” (p. 14). The authors cite several studies of this behavior, dating as far back as 1895 (Jacobs & Hart, 1992; Ramachandran, 1990; Turner & Killian, 1987; Miller, 1985; Smelser, 1963; Brown, 1965; Lebon, 1960[1895]; Quarantelli, 1957; Mintz, 1951).
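One quantity this literature commonly reasons with is crowd density, in persons per square meter, because commonly cited figures put the danger zone for crushes at roughly five or more people per square meter. The sketch below is a purely illustrative back-of-the-envelope check with hypothetical venue dimensions and attendance, and a threshold chosen in that commonly cited range; it is not a substitute for professional crowd modeling.

```python
def crowd_density(people, area_m2):
    """Average crowd density in persons per square meter."""
    return people / area_m2

# Hypothetical standing area in front of a stage: 50 m wide x 40 m deep.
area = 50 * 40          # 2,000 square meters
attendance = 12_000     # hypothetical number of people pressed into that area

density = crowd_density(attendance, area)
DANGER_THRESHOLD = 5.0  # persons/m^2; crush risk rises sharply beyond this

print(f"Average density: {density:.1f} persons/m^2")
if density >= DANGER_THRESHOLD:
    print("Density exceeds the commonly cited danger threshold; "
          "local densities near the stage will be higher still.")
```

Averages like this understate the problem, since density is never uniform: surges concentrate people locally, which is one reason such tragedies “do not just happen” but build over time.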
The tendency is to focus on the obvious, observable, often dramatic, final human errors, whether that be the pilot error that is the last link in a chain leading to an air disaster, the surgeon’s error that results in amputating the healthy rather than the damaged limb, or the final push of an overzealous crowd at a music concert. But what Reason captured is now a truism in human factors research: the final, active error is typically the result of a long chain of latent conditions and errors, often manifested in seemingly minor faults, which can build to disastrous consequences over time. Moreover, there are often patterns to these latent errors – patterns that can be linked to the leadership, culture, and strategic, as well as tactical, decision making practices of an organization.
Expectation and Human Error There are a couple of aspects of human error classification that have broad relevance to several concepts discussed throughout this book. The first is the classification of errors as random, systematic, or sporadic (Orlady & Orlady, 1999). There are two primary types of variation in any behavior: systematic variation, which can be predicted from one or more other independent variables, and random variation, which cannot be predicted, that is, which does not exhibit a pattern associated with known variables. The remaining classification, sporadic, relates to what statisticians may refer to as a small sample size; occurrences are too few to draw inferences regarding whether they are random or systematic. The second aspect is the role of attention and expectation, or perceptual “set”: research has shown that many systematic errors are linked to problems with human attention and expectation (Orlady & Orlady, 1999). Though we do not tend to consciously think about it often, many of us would acknowledge that “most human experience starts with a form of physical energy being received by the senses” (Orlady & Orlady, 1999, p. 179). It further stretches our understanding to state that “sensing is different from perception” (p. 179). While the two are different, the discipline of psychology has long established that both sensing and perceiving are influenced by the individual doing the sensing and perceiving. While we can never be sure that all of the stimulus energy available to an individual is perceived, stimulation never falls upon a completely passive receiver. Prior learning and current motivation are both important determinants in the quality and usefulness of the information that is perceived from a given stimulus. (Orlady & Orlady, 1999, p. 180)
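The distinction between systematic and random error can be made concrete statistically: systematic error is the portion of variation that can be predicted from a known variable (time on task, workload, expectation), while random error is the residual that cannot. The sketch below is a minimal illustration using simulated data (the linear trend and noise level are arbitrary assumptions, not empirical values); it fits errors against time on task and reports how much of the variation is predictable.

```python
import random

random.seed(1)

# Simulated misses per block: a systematic upward trend with time on task,
# plus random (unpredictable) variation. Values are illustrative only.
time_on_task = list(range(1, 13))                      # blocks 1..12
errors = [2 + 0.8 * t + random.gauss(0, 1.5) for t in time_on_task]

# Ordinary least-squares slope and intercept, computed by hand.
n = len(time_on_task)
mean_t = sum(time_on_task) / n
mean_e = sum(errors) / n
slope = sum((t - mean_t) * (e - mean_e) for t, e in zip(time_on_task, errors)) / \
        sum((t - mean_t) ** 2 for t in time_on_task)
intercept = mean_e - slope * mean_t

predicted = [intercept + slope * t for t in time_on_task]
ss_total = sum((e - mean_e) ** 2 for e in errors)
ss_model = sum((p - mean_e) ** 2 for p in predicted)

print(f"Systematic component: about {slope:.2f} extra errors per block")
print(f"Share of variation predictable from time on task (R^2): {ss_model / ss_total:.2f}")
```

Sporadic errors, by contrast, are simply too few to support this kind of decomposition.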
Chapter Summary Research indicates human error is often the result of successive flaws in a system. Moreover, much of our understanding of human error goes back to psychological studies of human sensation and perception. Performance, including error, can be strongly impacted by prior experience, expectation, and motivation, with the
vigilance decrement studies indicated earlier emphasizing the compelling role of prior experience and expectation. Research on cognitive biases continues to find support for expectation as a strong predictor of perceptual interpretation and subsequent performance on decision making tasks.
References Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779. https://doi. org/10.1016/0005-1098(83)90046-8 Barnier, B., & Kale, P. (2020). Cybersecurity: The endgame – Part one. The EDP Audit, Control and Security Newsletter, 62(3), 1–18. https://doi.org/10.1080/07366981.2020.1752466 Braithwaite, J., Churruca, K., Long, J. C., Ellis, L. A., & Herkes, J. (2018). When Complexity Science Meets Implementation Science: A theoretical and empirical analysis of systems change. BMC Medicine, 16(1). https://doi.org/10.1186/s12916-018-1057-z Brown, R. (1965). Social psychology. Free Press of Glencoe. Carmody, M. A. (1993). Task-dependent effects of automation: the role of internal models in performance, workload, and situational awareness in a semi-automated cockpit. Texas Tech University. Carmody-Bubb, M. A. (2021). A systematic approach to improve school safety. Journal of Behavioral Studies in Business, 13, 1–14. https://www.aabri.com/manuscripts/213348.pdf Cintron, R. (2015). Human factors analysis and classification system interrater reliability for biopharmaceutical manufacturing investigations. Doctoral dissertation, Walden University. Deliso, M., Harrison, J., Hutchinson, B., Mallin, A., & Wash, S. (2021, November 8th). Astroworld festival timeline: How the tragedy unfolded. ABC News. Retrieved 11-12-21 from https://wxhc. com/astroworld-festival-timeline-how-the-tragedy-unfolded/ Edwards, W., & Slovic, P. (1965). Seeking information to reduce the risk of decisions. The American Journal of Psychology, 78(2), 188–197. Fitts, P. M. & Jones, R. E. (1947). Analysis of factors contributing to 270 “pilot error” experiences in operating aircraft controls (Report TSEAA-694-12A). Aero Medical Laboratory, Air Material Command, Wright-Patterson Air Force Base, OH: Aeromedical Lab. Grossberg, S., & Gutowski, W. (1987). Neural dynamics of decision making under risk: Affective balance and cognitive-emotional interactions. Psychological Review, 94(3), 300–318. https:// doi.org/10.1037/0033-295X.94.3.300 Hastings, A.P. (2019). Coping with complexity: Analyzing unified land operations through the lens of complex adaptive systems theory. School of Advanced Military Studies US Army Command and General Staff College, Fort Leavenworth, KS. https://apps.dtic.mil/sti/pdfs/ AD1083415.pdf Hawkins, F.H. (1987/1993). Human factors in flight (H.W. Orlady, Ed., 2nd ed.). Routledge. https://doi.org/10.4324/9781351218580 Heinrich, H. W. (1941; 1959). Industrial accident prevention. A Scientific Approach. McGraw-Hill. Helbing, D., & Johansson, A. (2010). Pedestrian, crowd, and evacuation dynamics. Encyclopedia of Complexity and Systems Science, 16, 6476–6495. https://doi.org/10.48550/arXiv.1309.1609 Jacobs, B. D., & Hart, P. (1992). Disaster at Hillsborough Stadium: A comparative analysis. In D. J. Parker & J. W. Handmer (Eds.), Hazard management and emergency planning. James & James Science. Kanarick, A. F., Huntington, J. M., & Petersen, R. C. (1969). Multi-source information acquisition with optional stopping. Human Factors, 11(4), 379–386. Keinan, G. (1987). Decision making under stress: Scanning of alternatives under controllable and uncontrollable threats. Journal of Personality and Social Psychology, 52(3), 639–644.
Kraemer, S., & Carayon, P. (2007). Human errors and violations in computer and information security: The viewpoint of network administrators and security specialists. Applied Ergonomics, 38(2), 143–154. https://doi.org/10.1016/j.apergo.2006.03.010 Kraemer, S., Carayon, P., & Clem, J. (2009). Human and organizational factors in computer and information security: Pathways to vulnerabilities. Computers & Security, 28(7), 509–520. https://doi.org/10.1016/j.cose.2009.04.006 LeBon, G. (1960). The crowd: A study of the popular mind. Viking. (Original work published in 1895). Mackworth, N. H. (1948). The breakdown of vigilance during prolonged visual search. Quarterly Journal of Experimental Psychology, 1, 6–21. https://doi.org/10.1080/17470214808416738 McCarley, J. S., & Yamani, Y. (2021). Psychometric curves reveal three mechanisms of vigilance decrement. Psychological Science, 32(10), 1675–1683. https://doi.org/10.1177/09567976211007559 Miller, D. L. (1985). Introduction to collective behavior. Wadsworth. Mintz, A. (1951). Non-adaptive group behavior. The Journal of Abnormal and Social Psychology, 46(2), 150. https://doi.org/10.1037/h0063293 Mitnick, K. D., & Simon, W. L. (2003). The art of deception: Controlling the human element of security (1st ed.). Wiley. Neuhaus, C., Huck, M., Hofmann, G., St Pierre, P. M., Weigand, M. A., & Lichtenstern, C. (2018). Applying the human factors analysis and classification system to critical incident reports in anaesthesiology. Acta Anaesthesiologica Scandinavica, 62(10), 1403–1411. https://doi.org/10.1111/aas.13213 Orlady, H. W., & Orlady, L. (1999). Human factors in multi-crew flight operations. Routledge. Parasuraman, R., Warm, J. S., & Dember, W. N. (1987). Vigilance: Taxonomy and utility. In L. S. Mark, J. S. Warm, & R. L. Muston (Eds.), Ergonomics and human factors: Recent research (pp. 11–23). Springer-Verlag. https://doi.org/10.1007/978-1-4612-4756-2_2 Perneger, T. (2000). Swiss cheese model by James Reason published in 2000. (Depicted here is a more fully labelled black and white version published in 2001). Retrieved from https://openi.nlm.nih.gov/detailedresult?img=PMC1298298_1472-6963-5-71-1&req=4. Creative Commons Attribution 2.0 Generic (CC BY 2.0). Quarantelli, E. (1957). The behavior of panic participants. Sociology and Social Research, 41, 187–194. Ramachandran, G. (1990). Human behavior in fires—a review of research in the United Kingdom. Fire Technology, 26, 149–155. https://doi.org/10.1007/BF01040179 Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4, 311–333. https://doi.org/10.1016/0376-6349(82)90041-4 Rasmussen, J. (1987). Mental models and the control of actions in complex environments. Risø National Laboratory. https://backend.orbit.dtu.dk/ws/portalfiles/portal/137296640/RM2656.PDF Reason, J. (1990). Human error. Cambridge University Press. Reason, J. (1995). Understanding adverse events: Human factors. BMJ Quality & Safety, 4(2), 80–89. https://doi.org/10.1136/qshc.4.2.80 Sanchez, R. (2021, November 7). Beyond your control: The recipe for a deadly crowd crush. CNN. Retrieved November 21, 2022, from https://www.cnn.com/2021/11/06/us/what-is-a-crowd-surge/index.html Senders, J. W., & Moray, N. P. (1991). Human error: Cause, prediction, and reduction. Lawrence Erlbaum. Smelser, N. J. (1963). Theory of collective behavior. The Free Press. Song, W., Li, J., Li, H., & Ming, X. (2020).
Human factors risk assessment: An integrated method for improving safety in clinical use of medical devices. Applied Soft Computing, 86, 105918. https://doi.org/10.1016/j.asoc.2019.105918
Sumwalt, R. L. (2018, May 1). The age of reason. NTSB Safety Compass Blog. https://safetycompass.wordpress.com/2018/05/01/the-age-of-reason/ The Boeing 737 MAX: Examining the Federal Aviation Administration’s oversight of the aircraft’s certification: Hearing before the Committee on Transportation and Infrastructure, House of Representatives, 116th Congress. (2019). https://www.govinfo.gov/content/pkg/CHRG-116hhrg40697/pdf/CHRG-116hhrg40697.pdf Turner, B. A. (1976). The organizational and interorganizational development of disasters. Administrative Science Quarterly, 21(3), 378–397. Turner, R. H., & Killian, L. M. (1957/1987). Collective behavior (Vol. 3). Prentice-Hall. Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79(4), 281–299. https://doi.org/10.1037/h0032955 Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124 Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. Warm, J. S. (1984). An introduction to vigilance. In J. S. Warm (Ed.), Sustained attention in human performance (pp. 1–14). Wiley. Warm, J. S. (1990). Vigilance and target detection. In C. D. Wickens & B. Huey (Eds.), Teams in transition: Workload, stress, and human factors. National Research Council. Xi, Y., Ce, L., Xueqin, G., Furlong, L., & Ping, L. (2018). Influence of the medication environment on the unsafe medication behaviour of nurses: A path analysis. Journal of Clinical Nursing, 27(15-16), 2993–3000. https://doi.org/10.1111/jocn.14485 Zhang, Y., Yi, J., & Zeng, J. (2019). Research on the evolution of responsible innovation behavior of enterprises based on complexity science. In International Conference on Strategic Management (ICSM 2019). Francis Academic Press.
Chapter 12
Seeing What We Expect to See: Cognitive Bias and Its Role in Human Error and Decision Making
The Role of Cognitive Bias Cognitive bias has been defined as “a systematic (that is, nonrandom and, thus, predictable) deviation from rationality in judgment or decision making” (Blanco, 2017, p. 1). Blanco explains that, contrary to ideas of rational decision making, people do not tend to collect all of the information relevant to a decision and weigh potential costs and benefits. Instead, the preponderance of evidence, including experimental data, “suggests that people’s judgments are far from rational; they are affected by seemingly irrational irrelevant factors” (p. 1). He uses the analogy of visual illusions, with which many are familiar and to which many can relate, to demonstrate how such biases are “systematic; people fail consistently in the same type of problem, making the same mistake” (p. 1). As with visual illusions, cognitive biases are more likely to occur when environmental cues are reduced or ambiguous and uncertainty is high.
Dealing with Uncertainty in Decision Making Hastie and Dawes (2001) describe decision making as dealing with “the evaluation of the likelihood, or probability, of consequences of choice. All such future consequences are viewed as uncertain…an absolute essential of rational decision making is to deal constructively with this uncertainty” (p. 331). In 1974, cognitive psychologists Amos Tversky and Daniel Kahneman published an article that revolutionized how we view human decision making. Judgment Under Uncertainty challenged the classical, normative, or rationalistic view of decision making. Rationalistic decision making dates back to Von Neumann and Morgenstern’s (1947) expected utility theory, outlined in Theory of Economic
Games and Behavior (1944/1947), which “sought to develop in a systematic and rigorous manner a theory of rational human behavior” (1947, p. 559). Expected utility theory was developed to examine strategic decisions within the context of economic games, and it involves mathematical formulas in which the values of alternative consequences are weighted by their probabilities. According to Hastie and Dawes (2001), the relevance of expected utility theory “to noneconomic decisions was ensured by basing the theoretical development on general utility…rather than solely on monetary outcomes of decisions” (p. 20).
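In its simplest form, the expected utility of an alternative is the sum of the utilities of its possible consequences, each weighted by its probability, and the normative prescription is to choose the alternative with the highest expected utility. The sketch below is a minimal illustration with hypothetical alternatives and a hypothetical utility function (a mildly risk-averse square-root curve); it is not drawn from Von Neumann and Morgenstern’s text.

```python
from math import sqrt

def expected_utility(outcomes, utility):
    """Sum of probability-weighted utilities over (probability, value) pairs."""
    return sum(p * utility(v) for p, v in outcomes)

# Hypothetical choice: a sure $50 versus a 50/50 gamble on $120 or $0.
sure_thing = [(1.0, 50)]
gamble = [(0.5, 120), (0.5, 0)]

risk_averse = sqrt                 # diminishing marginal utility of money
risk_neutral = lambda v: v         # utility proportional to money

for name, u in [("risk-averse", risk_averse), ("risk-neutral", risk_neutral)]:
    eu_sure = expected_utility(sure_thing, u)
    eu_gamble = expected_utility(gamble, u)
    choice = "sure thing" if eu_sure > eu_gamble else "gamble"
    print(f"{name:12s}: EU(sure) = {eu_sure:.2f}, EU(gamble) = {eu_gamble:.2f} -> {choice}")
```

The same structure extends beyond money; grounding the theory in general utility rather than dollar amounts means the utility function in the calculation can represent whatever the decision maker values.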
Normative Versus Descriptive Decision Making Expected utility falls under the umbrella of normative decision making; in other words, it describes how people should make decisions if they are to optimize rationality by adhering to mathematical principles based on probabilities. However, Tversky and Kahneman were among the first to challenge some of the assumptions of expected utility theory, and among the first to produce empirical data contradicting it under certain circumstances. Their research was seminal in considering the role of cognitive psychology in decision making; rather than focusing on how people should make decisions (normative), they focused on how people actually make decisions (descriptive) under various conditions. Based on their field observations that “many decisions are based on beliefs concerning the likelihood of uncertain events” (1974, p. 1124), they posed the question, “how do people assess the probability of an uncertain event or the value of an uncertain quantity?” (1974, p. 1124). According to Kaempf and Klein (1997), by the late 1980s, there was, A growing realization that decision making was more than picking a course of action, that decision strategies had to work in operational contexts, that intuitive or nonanalytical processes must be important, and that situation assessment had to be taken into account. (p. 233)
Classical or rationalistic decision theories “were formulated for straightforward, well-defined tasks. They were not intended for cases where time was limited, goals were vague and shifting, and data were questionable” (Kaempf & Klein, 1997, p. 235). In other words, they were not formulated for strategic, real-world decision making within complex adaptive systems. Jensen (1982) and Klein (1993) were among several cognitive and human factors researchers to argue that classical, normative decision making may not adequately or accurately describe the process of decision making in dynamic, changing environments. On the basis of a growing body of empirical evidence, Kaempf and Klein (1997) claimed that “in operational settings people rarely compare options to select a course of action; that is, they do not decide what to do by comparing the relative benefits and liabilities of various alternative courses of action” (p. 235). Prominent aviation human factors researcher Dr. Christopher Wickens and his colleagues explained,
If the amount of information is relatively small and time is unconstrained, careful analysis of the choices and their utilities is desirable and possible. To the extent that the amount of information exceeds cognitive processing limitations, time is limited, or both, people shift to using simplifying heuristics. (Wickens et al. 2004, p. 161)
Heuristics and Biases

Tversky and Kahneman’s seminal 1974 article indicated that, particularly in uncertain and dynamic environments, “people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations” (p. 1124). Wickens et al. (2004) defined heuristics as “rules-of-thumb that are easy ways of making decisions” (p. 162). Although heuristics can make information processing more efficient and generally capitalize on expertise, Tversky and Kahneman are quick to point out that, “in general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors” (1974, p. 1124). Kahneman (2013) defines heuristics as involving “a simple procedure that helps find adequate, though often imperfect, answers to difficult questions” (p. 98). Whereas a normative approach might closely align with Bayesian statistics, in which “the essential keys…can be simply summarized {as} anchor your judgment of the probability of an outcome on a plausible base rate {and} question the diagnosticity of your evidence” (Kahneman 2013, p. 154), multiple studies have supported the prevalent usage of heuristics in human decision making.
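For readers who want to see what the normative Bayesian benchmark looks like in practice, the following minimal sketch (my own, with hypothetical numbers; it is not an example from Kahneman) shows how a judgment anchored on a plausible base rate and updated by the diagnosticity of the evidence can differ sharply from the answer intuition tends to produce:

    # Minimal Bayesian update: anchor on the base rate, then weigh diagnosticity.
    def posterior(base_rate, hit_rate, false_alarm_rate):
        """P(hypothesis | evidence), given a prior and the evidence likelihoods."""
        prior_odds = base_rate / (1 - base_rate)
        likelihood_ratio = hit_rate / false_alarm_rate  # diagnosticity of the evidence
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    # Hypothetical screening problem: a rare condition (1% base rate) and a test
    # that flags 90% of true cases but also 10% of non-cases.
    print(round(posterior(0.01, 0.90, 0.10), 3))  # about 0.083, far below the intuitive 0.9

The gap between the computed posterior (roughly 8%) and the answer many people give (closer to 90%) is precisely the kind of base-rate neglect that the heuristics described below help explain.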
The Most Common Heuristics and Biases

Representative Heuristic

Representativeness can be succinctly described as granting stereotypes greater weight than statistical probabilities. Wickens et al. (2004) describe the role of the representative heuristic in decision making as diagnosing

A situation because the pattern of cues ‘looks like’ or is representative of the prototypical example of this situation…{it} usually works well; however, the heuristic can be biasing when a perceived situation is slightly different from the prototypical example even though the pattern of cues is similar. (p. 166)
Availability, Anchoring, and Adjustment

In addition to the representative heuristic, Tversky and Kahneman (1974) discussed availability (the ease with which instances or hypotheses can be brought to mind) and anchoring and adjustment (the judgment of probabilities by adjusting from an initial starting value, which may be arbitrary or irrelevant).
Overconfidence

Wickens et al. (2004) discuss two other cognitive biases that are prevalent in human factors research on decision making, particularly in analyses of errors and accidents. The first is overconfidence, in which people overweight the hypotheses they readily bring into working memory. “As a consequence, people are less likely to seek out evidence for alternative hypotheses or to prepare for the circumstances that they may be wrong” (p. 167). Of course, this is closely related to confirmation bias, in which people “tend to seek out only confirming information and not disconfirming information, even when the disconfirming evidence can be more diagnostic” (p. 167).
Cognitive Tunneling

Closely related to confirmation bias is cognitive tunneling. While the terms are sometimes used interchangeably, confirmation bias has to do with the filtering and weighting of evidence in support of a hypothesis, which is indeed part of cognitive tunneling; the latter term is used more often in problem-solving situations, where the “phenomenon is to become fixated on a particular solution and stay with it even when it appears not to be working” (Wickens et al. 2004, p. 146).
Framing and Prospect Theory

Another prominent bias, framing, was originally discussed in another classic article, Choices, Values and Frames (Kahneman and Tversky 1984). Hameleers (2021) states that framing “can be understood as patterns of interpretation that give meaning to issues and events. Frames emphasize some aspects of reality, whereas other aspects are made less salient” (p. 481). Sussman (1999) defined framing as a presentation that “orients a reader or listener to examine a message with a certain disposition or inclination…to focus ‘attention on data and premises within the frame’” (p. 2). The simplest definition of framing bias is that choices are often influenced by how, and what, information is presented to a decision maker. One could argue it is not necessarily a bias, per se, because it refers only to how information is presented, but it is perhaps a precursor. Framing was initially presented in terms of risk and loss within prospect theory. Prospect theory was an extension of the purely mathematical conceptualization of gambling presented in the classic work by von Neumann and Morgenstern, Theory of Games and Economic Behavior (1944/1947). Whereas the expected utility theory of von Neumann and Morgenstern consisted of equations based on probabilities and monetary outcomes, prospect theory, building on ideas first described by Bernoulli in 1738
(republished in Econometrica in 1954), incorporated more subjective values into the decision making equation. Prospect theory distinguishes two phases in the choice process: a phase of framing and editing, followed by a phase of evaluation (Kahneman and Tversky 1979). The first phase consists of a preliminary analysis of the decision problem, which frames effective acts, contingencies, and outcomes. Framing is controlled by the manner in which the choice problem is presented as well as norms, habits, and expectancies of the decision maker. (Tversky & Kahneman, 1989, p. S257)
It essentially describes the role of framing in decision makers’ assessments of risk. Prospect theory is usually described as predicting that decisions framed as a gain tend to lead to risk aversion, whereas those framed as a loss induce risk taking, despite identical probabilities and utility outcomes. It may be more accurate, though, to speak in terms of uncertainty avoidance, as risk in this context tends to refer more to uncertainty than to outcome.

In their classic experiment, Tversky and Kahneman presented participants with an epidemic scenario that offered a choice between two medical treatment programs, A and B. When framed as a gain, participants were told Program A would result in saving 200 people, whereas in Program B, there was a 33% chance no one would die, but a 67% chance no one would be saved. When framed as a loss, participants were told Program A would allow 400 people to die, whereas Program B would result in a 33% chance that no one would die, but a 67% chance all 600 would die. The interesting thing is that, when put into a utility equation, all four situations result in identical outcomes of lives saved versus lost, but people make decisions based on the framing of the information. When the treatment programs were framed in terms of gains (i.e., lives saved), people were more risk averse. It may be better to think of it in terms of uncertainty averse; they preferred the certainty of Program A as definitely saving 200 lives over the uncertainty of Program B, which framed the problem in terms of probability of lives saved versus lives lost, even though the expected utility outcomes of the two are identical (Van ‘t Riet, et al., 2016). “Because the evaluation of outcomes and probabilities is generally non-linear, and because people do not spontaneously construct canonical representations of decisions…normative models of choice, which assume invariance…cannot provide an adequate descriptive account of choice behavior” (Tversky & Kahneman, 1989, p. S257).

According to Kahneman (2013), “Bernoulli suggested that people do not evaluate prospects by the expectation of their monetary outcomes, but rather by the expectation of the subjective value of these outcomes. The subjective value of a gamble is again a weighted average, but now it is the subjective value of each outcome that is weighted by its probability” (p. 434). Bernoulli’s concepts were perhaps the earliest written descriptions of “the inherently subjective nature of probability” (Kahneman 2013, p. 431), as well as one of the first observations of what Kahneman and Tversky (1984) later described as “the relation between decision values and experience values” (p. 341).
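A worked version of the epidemic arithmetic (my own sketch; the value-function form and parameters are illustrative of prospect theory’s general shape, not Tversky and Kahneman’s exact formulation) shows that the two programs are equivalent in expected lives saved and lost, so any reversal of preference must come from the frame rather than the numbers:

    from fractions import Fraction

    # Expected outcomes of the classic epidemic problem, in both frames.
    TOTAL = 600
    p_save_all = Fraction(1, 3)   # Program B: 1/3 chance everyone is saved

    # Gain frame: expected lives saved.
    program_a_saved = 200
    program_b_saved = p_save_all * TOTAL + (1 - p_save_all) * 0      # = 200

    # Loss frame: expected lives lost.
    program_a_lost = 400
    program_b_lost = (1 - p_save_all) * TOTAL + p_save_all * 0       # = 400

    assert program_a_saved == program_b_saved   # identical expected values,
    assert program_a_lost == program_b_lost     # yet preferences reverse with the frame

    # A toy prospect-theory-style value function: losses loom larger than gains.
    def value(x, alpha=0.88, loss_aversion=2.25):
        return x ** alpha if x >= 0 else -loss_aversion * ((-x) ** alpha)

    print(value(200), value(-200))   # the same 200 lives "feel" different as gain vs. loss

Because the loss branch is multiplied by a loss-aversion coefficient greater than one, the subjective sting of 200 lives lost outweighs the subjective value of 200 lives saved, which is one way to capture why the loss frame pushes people toward the gamble.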
Prospect Theory Applied to the Covid-19 Pandemic

Prospect theory continues to be a widely accepted and researched theory within cognitive psychology and decision making, and evidence for its validity continues to mount. Indeed, recent research conducted in both the United States and the Netherlands tested prospect theory within the real-world crisis of the Covid-19 pandemic. Hameleers (2021) was interested in media messages concerning the Covid pandemic, applying the aspect of prospect theory that predicts “framing issues in terms of gains motivates people to avoid risks and protect the status quo (risk-aversion). However, when people are framed with dooming losses, they have the motivation to take a risk in order to prevent the worst-case scenario” (p. 481). He replicated the conceptual scenario behind the classic Tversky and Kahneman (1981) fictional epidemic. Hameleers explains that many applications of prospect theory to real-world health scenarios frame risk as the severity of outcomes and examine preferences in terms of prevention behaviors (i.e., active behaviors that promote health, such as healthy diet and exercise) or detection behaviors, with the latter seen as more risky because they involve potentially finding out one has a disease. While his study adhered more to “the classic operationalization of risky choice framing” (p. 484), he acknowledges the complexities involved when trying to apply it to the Covid-19 pandemic. Once again, the well-defined problems that characterize much of the classic decision literature often fall short of capturing real-world applications.

Although it could be argued that stricter interventions are more risk-aversive in terms of casualties, at the time of data collection, all treatments were surrounded with high uncertainty. In addition, risk-aversive treatments for the health domain could imply risk-seeking consequences for the economy. (p. 483)
As such, Hameleers adopted a modified operational definition more specific to health aspects of the pandemic. Specifically, support for stricter interventions intended to promote social distancing was argued to correspond to lower risk. Hence, at the time of data collection, most information emphasized the risk of not taking action (and uncertainty with regard to casualties), whereas complying with interventions was seen as involving low risk. Some studies found that gain frames are more effective under conditions in which the elimination of risk is the desired outcome (e.g., Dijkstra et al. 2011) – which corresponds to the strict preventions to fight the pandemic proposed by governments throughout the globe. (p. 483)
Hameleers also acknowledged the impact of anxiety and negative emotions on assessment of risk. Acknowledging that “framing the pandemic in terms of gains (people that recover) versus losses (people that do not survive) can affect the emotional states of receivers” (p. 484), he concedes possible interaction effects, with gain frames perhaps eliciting more hopeful emotions and loss frames more negative ones, more so than would be the case in laboratory-based hypothetical scenarios. A sample of 1121 participants from the United States and the Netherlands was randomly assigned to be presented either a gain-framed or a loss-framed message adapted from the classic Tversky and Kahneman (1981) approach to fit the current Covid pandemic. As a control measure, survey response data were collected online
and within a 24-hour period, given the dynamic nature of Covid-related media messages. The results indicated support, in general, for prospect theory (i.e., gain frames tended to lead to greater risk aversion); however, the multivariate and interacting nature of the real-world scenario complicated the findings (as real-world scenarios tend to do!). Specifically, findings of support for prospect theory in general did not necessarily translate into support for particular government policies. The nature of this real-world challenge was summarized by Hameleers (2021) in terms of the practical implications of the findings:

An important practical implication of these findings is that if governments want to motivate risk-aversion, they should rely on gain frames instead of loss frames (i.e., focusing on the amount of lives that can be saved if citizens incorporate the advice to integrate preventative behaviors in their daily routines). Yet, daily media coverage may impede this goal – as most legacy and alternative media coverage about the coronavirus contains a strong negativity bias that focuses on losses. (p. 495)
When applying prospect theory to health decisions, there remain valid criticisms with respect to the generalization of the theory (e.g., Van ‘t Riet et al., 2016). Nonetheless, there are certain basic principles of framing that remain relevant in most situations. According to Ditto et al. (2019), “If a decision about otherwise identical alternatives is affected by {how} those alternatives are presented {i.e., framed; for example: number of lives lost versus number of lives saved}(Tversky and Kahneman 1981) … then some deviation from rationality (i.e., bias) is implied” (p. 275).
Response Sets, Cognitive Bias, and the Deadliest Aviation Accident in History

Cognitive biases are strongly related to the concept of response sets from perceptual research. At its most basic level of extracting information from the environment, a response set has been described as “a readiness to respond to the environment in a particular manner” (Orlady and Orlady 1999, p. 182). This can extend to applications in decision making and problem solving, with “the tendency to use a particular method or type of solution to a problem based upon previous experience or directions” (p. 182). This tendency to “perceive from and respond, in part at least, to what we are expecting” (Orlady and Orlady 1999, p. 182) can have very real and very deadly consequences. Though impossible to know for certain, data from flight deck voice recordings and a knowledge of human error have led many human factors professionals to the conclusion that the deadliest aviation accident in history was due, at least in part, to just such a perceptual error (Orlady and Orlady 1999). The disaster occurred in Tenerife in the Canary Islands of Spain in 1977, when two Boeing 747s, one flown by the Dutch Koninklijke Luchtvaart Maatschappij (KLM) and one by Pan Am, collided on the runway, resulting in the loss of 583 lives. As is most often the case, cognitive biases are particularly prevalent when cues are
ambiguous and/or there is time or emotional stress. The stage was set, and the proverbial dominoes began to fall into place, when a terrorist bomb at Gran Canaria (aka Gando) airport led to a diversion to Los Rodeos in Tenerife, a regional airport accustomed neither to such large aircraft nor to such a high volume of traffic. Other contributing factors were low visibility due to fog in Tenerife, flight crews coming up to crew rest minimums, fuel concerns, and reduced air traffic control staff at Los Rodeos. In addition to issues with lack of crew coordination, “the KLM 747 expected to hear a clearance to takeoff. Unfortunately, the takeoff clearance was partially misheard and then misinterpreted” (Orlady and Orlady 1999, pp. 182–183). Such conditions are ripe for mistakes that fall under Reason’s (1995) classification of knowledge-based mistakes. Again, knowledge-based mistakes occur in situations that demand high mental workload, involve dynamic and often changing environmental conditions, and in which a person’s situational awareness, or mental reconstruction of the environment, is lacking. “Under these circumstances the human mind is subject to several powerful biases, of which the most universal is confirmation bias” (p. 81).
Confirmation Bias

Reason (1995) credits Sir Francis Bacon with first identifying confirmation bias: “The human mind, when it has once adopted an opinion draws all things else to support and agree with it (Bacon 1960/1620)” (p. 81). Reason indicates that confirmation bias

Is particularly evident when trying to diagnose what has gone wrong with a malfunctioning system. We ‘pattern match’ a possible cause to the available signs and symptoms and then seek out only that evidence that supports this particular hunch, ignoring or rationalizing away contradictory facts. (p. 82)
I first came across the work of former Harvard Business School Professor Michael Roberto when reading an interview about his book, the title of which intrigued me: Why Great Leaders Don’t Take Yes For An Answer (2005/2013). I ended up adopting this book for a decision making class I was teaching and found he had also developed a fascinating simulation tool that I have used several times in my graduate classes, to predictable effect. According to Roberto (2022), in the context of the simulation, “students receive a barrage of information through various channels… requesting that they determine the root cause of the issue and make recommendations on how {the organization} can get ahead of this problem” (para. 2). The most fascinating aspect of the tool concerns confirmation bias. The scenario involves a company that produces blood glucose meters, and there is a crisis decision about whether a reported problem stems from device malfunction or from user error. Participants in the simulation, usually students in a decision making class, have been randomly assigned to one of two conditions. In one condition, the nature of the information provided to the group is biased toward the hypothesis of device
malfunction, whereas in the second condition, the information provided to the group is biased toward the hypothesis of user error. Participants must make an initial decision about the likelihood of device malfunction or user error. After this initial decision, a second set of information materials is presented. The key is that the information presented the second time is identical between the two conditions. This is designed, of course, to test for confirmation bias, and Roberto reports that the majority of participants do fall prey to it. Indeed, most participants in the condition that was presented with initial information suggesting device malfunction became more entrenched in that belief after being presented with the second set of information, while most of those presented with initial information suggesting user error were equally convinced of their own hypothesis after reading the second set, even though the second set contained the exact same information for both groups.
The Role of Evolution in Cognitive Bias: Fast Versus Slow Thinking

In his book, Thinking, Fast and Slow, Daniel Kahneman (2013) identifies two types of cognitive processing: System 1, or “fast” thinking, and System 2, or “slow” thinking. System 1 evolved as a survival mechanism to allow us to react quickly to our environments. However, the more complex, strategic decisions of modern life require the slower, more methodical reasoning characteristic of System 2. Since System 1 is more a part of our somewhat instinctual survival processing, activating System 2 can take effort, along with the conscious application of cognitive tools and decision aids. Kahneman begins his section on heuristics and biases by relaying an example of the law of small numbers:

A study of new diagnoses of kidney cancer in the 3,141 counties of the United States reveals a remarkable pattern. The counties in which the incidence of kidney cancer is lowest are mostly rural, sparsely populated, and located in traditionally Republican states in the Midwest, the South and the West. What do you make of this? (p. 109)
Kahneman describes how, at this point, people begin to utilize both fast and slow processing, which he labels System 1 and System 2, respectively, as they are engaged in formulating hypotheses. The person’s need to both seek a pattern and formulate a logical hypothesis is described in an almost “elimination by aspects” process. One may quickly dismiss, states Kahneman, the hypothesis that Republican politics are somehow protective against kidney cancer but may conclude that the lower incidence of cancer could be attributed to the presumably healthier rural lifestyle. However, Kahneman makes the example more intriguing by then explaining that the exact same variables describe where the incidence of kidney cancer is highest: mostly rural, sparsely populated, and so on. In the latter case, people hearing the information might conclude the higher incidence of cancer could be due to the effects of poverty associated with the rural lifestyle, such as lack of access to good medical care.
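Kahneman’s underlying point, which the next paragraphs take up, is statistical, and a quick simulation (my own sketch, with made-up county sizes and a single, uniform true incidence rate) makes it concrete: even when the underlying risk is identical everywhere, the most extreme observed rates, both the highest and the lowest, tend to come from the smallest counties, simply because small samples are noisy.

    import random

    random.seed(1)
    TRUE_RATE = 0.0005                 # identical underlying risk everywhere (hypothetical)
    small_counties = [2_000] * 300     # many small "rural" counties
    large_counties = [200_000] * 30    # a few large "urban" counties

    def observed_rates(populations):
        rates = []
        for n in populations:
            cases = sum(random.random() < TRUE_RATE for _ in range(n))
            rates.append(cases / n)
        return rates

    small_rates = observed_rates(small_counties)
    large_rates = observed_rates(large_counties)
    print(min(small_rates), max(small_rates))   # typically 0.0 up to several times TRUE_RATE
    print(min(large_rates), max(large_rates))   # large counties cluster tightly around 0.0005

No causal story about lifestyle, politics, or poverty is needed to produce the pattern; sample size alone does the work.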
Kahneman focuses on the statistical phenomenon of the law of small numbers. “The key factor is not that the counties were rural or predominantly Republican. It is that rural counties have small populations” (p. 110). While this is true, there are several other factors that help explain the propensity of people to try to form a cause-effect explanation, and they are related to the framing bias, discussed earlier. Sussman (1999) explains that “frames help decision makers reduce the complexity of issues and make sense of their environments” (pp. 2–3). Just as was the case with director M. Night Shyamalan (see Chap. 10), the scenario is told as a story, framed in such a way that the reader is given a presumed hypothesis. The reader’s mind immediately sets upon the task of explaining the connection precisely because the problem is presented as if there is an explanation contained within it.

In Kahneman’s example of the kidney cancer incidence, however, it is a multivariate, complex, and adaptive system that is presented. There are multiple potential explanatory variables presented in the scenario. Each of these variables, in turn, has multiple covariates that could combine and interact in multiple ways to impact the incidence of cancer. Because the reader has been primed to expect that there is at least one variable within the story that helps explain the incidence of kidney cancer, the reader relies upon both System 1 and System 2, as indicated by Kahneman, to try to reach a logical hypothesis. Kahneman states, “When told about the high-incidence counties, you immediately assumed that these counties are different from other counties for a reason, that there must be a cause that explains the difference” (p. 110). What he does not mention, at least at this point, is that you assume there must be an explanation because the problem has been presented to you that way. Kahneman goes on to state, “System 1 is inept when faced with ‘merely statistical’ facts, which change the probability of outcomes but do not cause them to happen” (p. 110). While this is a true statement, it is not the only explanation for why people jump to the conclusion that at least one of the variables listed, or its covariates, is a cause of the increased incidence of kidney cancer. It is not just that System 1 is inept at judging statistical probabilities with small samples; it is that we are influenced by the way in which a problem is presented to us.

The representativeness heuristic and its associated bias probably best explain the misinterpretation of the kidney cancer scenario that Kahneman highlights on the basis of small numbers. But confirmation bias best captures the general effects of both framing and storytelling, how the mind combines our subjective experience, past and present, to influence both our selection and interpretation of information. It is confirmation bias, I would argue, that undergirds them all.
Chapter Summary

The study of specific heuristics and biases can occupy any cognitive psychologist at length, but for the practitioner of decision making, the key takeaways are awareness and training. Similar to what we have learned from decades of human factors
research, there are certain limitations to human memory and information processing, but these can often be overcome with awareness of such memory and cognitive limitations and biases, as well as the conscious application of empirically proven tools. Such tools can be relatively simple, such as checklists that ensure critical steps, like confirming the landing gear are down and locked, are not overlooked by an otherwise distracted pilot. They can also be more strategic and complex, but just as readily consciously applied. The good news about biases, therefore, is that we do have tools that can reduce their occurrence. These will be discussed in more detail in Part V. The bad news about cognitive biases is that, left unchecked by systematically and deliberately applied tools, they can be one of the primary sources of human error in decision making.
References

Bacon, F. (1960). The new organon and related writings. Liberal Arts Press. (Original work published in 1620).
Bernoulli, D. (1954). Exposition of a new theory on the measurement of risk (L. Sommer, Trans.). Econometrica, 22(1), 22–36. (Original work published in 1738).
Blanco, F. (2017). Cognitive bias. In J. Vonk & T. Shackelford (Eds.), Encyclopedia of animal cognition and behavior. Springer, Cham. https://doi.org/10.1007/978-3-319-47829-6_1244-1
Dijkstra, S., Rothman, A., & Pietersma, S. (2011). The persuasive effects of framing messages on fruit and vegetable consumption according to regulatory focus theory. Psychology of Health, 26(8), 1036–1048. https://doi.org/10.1080/08870446.2010.526715
Ditto, P. H., Liu, B. S., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., Celniker, J. B., & Zinger, J. F. (2019). At least bias is bipartisan: A meta-analytic comparison of partisan bias in liberals and conservatives. Perspectives on Psychological Science, 14(2), 273–291. https://doi.org/10.1177/1745691617746796
Hameleers, M. (2021). Prospect theory in times of a pandemic: The effects of gain versus loss framing on risky choices and emotional responses during the 2020 coronavirus outbreak – Evidence from the US and The Netherlands. Mass Communication and Society, 24(4), 479–499. https://doi.org/10.1080/15205436.2020.1870144
Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world: The psychology of judgment and decision making. Sage Publications.
Jensen, R. S. (1982). Pilot judgment: Training and evaluation. Human Factors, 24(1), 61–73. https://doi.org/10.1177/001872088202400107
Kaempf, G. L., & Klein, G. (1997). Aeronautical decision making: The next generation. In N. Johnston, N. McDonald, & R. Fuller (Eds.), Aviation psychology in practice (pp. 223–254). Routledge.
Kahneman, D. (2013). Thinking, fast and slow. Farrar, Straus and Giroux.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica: Journal of the Econometric Society, 47(2), 263–291. https://doi.org/10.2307/1914185
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350. https://doi.org/10.1037/0003-066X.39.4.341
Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 138–147). Praeger.
Orlady, H. W., & Orlady, L. (1999). Human factors in multi-crew flight operations. Routledge.
Reason, J. (1995). Understanding adverse events: Human factors. BMJ Quality & Safety, 4(2), 80–89. https://doi.org/10.1136/qshc.4.2.80
Roberto, M. A. (2022). Organizational behavior simulation: Judgment in crisis. Harvard Business Publishing. Retrieved December 20, 2022, from https://hbsp.harvard.edu/product/7077-htm-eng
Sussman, L. (1999). How to frame a message: The art of persuasion and negotiation. Business Horizons, 42(4), 2–6. https://doi.org/10.1016/S0007-6813(99)80057-3
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 30, 453–458. https://doi.org/10.1007/978-1-4613-2391-4_2
Tversky, A., & Kahneman, D. (1989). Rational choice and the framing of decisions. In Multiple criteria decision making and risk analysis using microcomputers (pp. 81–126). Springer, Berlin, Heidelberg.
Van’t Riet, J., Cox, A. D., Cox, D., Zimet, G. D., De Bruijn, G. J., den Putte, V., De Vries, H., Werrij, M. Q., & Ruiter, R. A. (2016). Does perceived risk influence the effects of message framing? Revisiting the link between prospect theory and message framing. Health Psychology Review, 10(4), 447–459. https://doi.org/10.1080/17437199.2016.1176865
Von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behavior (2nd ed.). Princeton University Press.
Wickens, C. D., Gordon, S. E., Liu, Y., & Becker, S. G. (2004). An introduction to human factors engineering. Pearson Prentice Hall.
Chapter 13
How Polarization Impacts Judgment: We Are More Emotional and Less Rational Than We Would Like to Believe
Polarization Research

“Beyond the fundamental discovery that people interpret information in biased and self-serving ways…conflict scholars have discovered that the existence of an opponent, competitor, or ‘other side’ radically effects the perception of issues and facts” (Robinson, 1997, p. 1). In a nutshell, the views and motivations of one’s own side are often viewed as reasonable and moral, whereas those of the opposing side are often viewed as irrational and/or immoral. Robinson highlights two phenomena that occur as a result: reactive devaluation and naïve realism. Reactive devaluation is best explained within the context of an experiment described in Robinson’s article. During the Cold War, two proposed plans for nuclear disarmament were presented to participants, consisting of American voters, in a research study. Half the participants were told Plan A was authored by then President Ronald Reagan and Plan B by then Soviet leader Mikhail Gorbachev, while the other half were told the opposite. According to Robinson (1997),

Regardless of the content of the plans, Americans were ready to endorse a plan supported by their President, and reactively devalued the proposition by Gorbachev, even if it were the same plan voters enthusiastically endorsed when it was supposedly supported by Reagan. (pp. 1–2)
Such a phenomenon is seen repeatedly, not only in politics, but also in negotiations. Robinson (1997) adds that mounting research led to the development of the concept of “naïve realism,” in which “people generally assume they see the world objectively…underestimating the subjective forces that give rise to their perception and judgement” (p. 2). Additionally, naive realism involves the “false consensus effect,” in which “people assume that others share their judgments and perceptions” (p. 2). Finally, and perhaps most important to understanding polarization in conflicts, “when confronted with disagreement, partisans attribute their differences in judgment and their conflict to their opponent’s ideological bias and irrationality” (p. 2).
Robinson (1997) goes on to predict the following outcomes of naïve realism, which, in light of the highly polarized Western politics of 2022, seems particularly prescient: “Opposing partisans will a) exaggerate their opponent’s extremism, b) perceive their opposition to be ideologically biased, and c) overestimate the true magnitude of their conflict” (p. 2).
Partisan Perceptions and Biased Assimilation

An important and oft-cited experimental study was published by Lord, Ross, and Lepper in 1979 on the topic of biased assimilation. According to Lord et al., “people who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner. They are apt to accept ‘confirming’ information at face value while subjecting ‘disconfirming’ evidence to critical evaluation” (p. 2098). Just as Reason (1995) credited Francis Bacon with first observing the ubiquity of what is now called confirmation bias, Lord et al. (1979) credit Bacon with first describing the details of this phenomenon among human decision makers.

The human understanding when it has once adopted an opinion draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusion may remain inviolate. (Bacon, 1960/1620, as cited in Lord et al., 1979, p. 2098)
Lord et al. tested this empirically, using experimental methodology and a controversial topic on which people were likely to hold strong opinions: the death penalty. In their study, Lord et al. (1979) recruited 48 college students, 24 of whom had previously been determined to support capital punishment (specifically, believing it to have a deterrent effect), and 24 of whom had been determined to hold anticapital punishment views (specifically, doubting its deterrent effect). According to Lord et al., “most of our subjects had reported initial positions at, or very close to, the ends of the attitude and belief scales” (p. 2101). The procedure involved mixed groups of pro and con participants, with an experimenter blind to the differences. Participants were presented with summaries of fictitious studies that provided data either supporting capital punishment as a deterrent or refuting it. Participants were then asked a series of questions in which they rated the degree to which their attitude toward capital punishment changed (i.e., became more or less opposed) and whether their beliefs about capital punishment as a deterrent changed. Following the summaries of the fictitious studies, participants were presented with critical reviews of the methodologies of said studies and were then asked to judge how well or poorly the study was conducted. “The entire procedure was then repeated, with a second fictitious study reporting results opposite of the first” (p. 2101). To prevent order effects, half the groups were presented antideterrent studies first, and half the groups were presented prodeterrent studies first.
Results supported the biased assimilation hypothesis, with proponents of the death penalty rating the prodeterrent studies as significantly more convincing than the antideterrent studies, whereas the anti-death-penalty participants reported the opposite, even though all participants read identical empirical findings and critiques. Even more interesting was support for Lord et al.’s primary hypothesis, which was that “exposure to the ‘mixed’ data set comprised by the two studies would result in a further polarization of subjects’ attitudes and beliefs rather than the convergence that an impartial consideration of these inconclusive data might warrant” (p. 2102). The final conclusion of the study was that “the net effect of exposing proponents and opponents of capital punishment to identical evidence - studies ostensibly offering equivalent levels of support and disconfirmation - was to increase further the gap between their views” (p. 2105). In other words, the researchers found strong support for both the biased assimilation and polarization hypotheses.

A recent meta-analysis of 51 experimental studies, collectively involving over 18,000 research participants (Ditto et al., 2019), confirms many of the predictions outlined by Robinson (1997) and supported in the Lord et al. (1979) study. The 2019 meta-analysis “examined one form of partisan bias – the tendency to evaluate otherwise identical information more favorably when it supports one’s political beliefs or allegiances than when it challenges those beliefs or allegiances” (p. 273). This, of course, is related to confirmation bias, but in the form of values and allegiances, rather than hypotheses. Ditto et al. (2019) specifically focused on “cases in which {partisan bias} is less conscious and intentional, such that people are generally unaware that their political affinities have affected their judgement or behavior” (p. 274). In drawing on Tversky and Kahneman’s (1981) definition of cognitive bias as evaluating identical alternatives on the basis of how they are framed, Ditto et al. (2019) extend the analogy to politically partisan bias, which they define as occurring “if the identical scientific study or policy proposal is evaluated differently depending on whether it reflects positively on liberals or conservatives” (p. 275). Moreover, they indicate that the magnitude of the bias can be, and has been, empirically measured, which is the basis of their meta-analysis. Ditto et al. (2019) reported:

The clearest finding from this meta-analysis was the robustness of partisan bias. A tendency for participants to find otherwise identical information more valid and compelling when it confirmed rather than challenged their political affinities was found across a wide range of studies using different kinds of samples, different operationalizations of political orientation and political congeniality, and across multiple political topics. (p. 282)
What Can We Do About It?

Lord and Lepper were joined by Preston for a follow-up study in 1984 that focused on possible solutions to the problem of biased assimilation and polarization. Lord et al. (1984) argue that “social judgment result{s} from a failure - first noted by Francis Bacon - to consider possibilities at odds with beliefs and perceptions of the moment.” Their 1984 experimental study tested the hypothesis that “individuals
who are induced to consider the opposite, therefore, should display less bias in social judgment” (p. 1231). Results indicated that strategies to induce participants to consider opposing possibilities, either through direct instruction or by increasing the salience of cues associated with alternatives, were effective in promoting impartiality in decision making. An important implication of Lord et al.’s (1984) study, aside from the potential to reduce cognitive bias, is that tools must be purposefully and effortfully applied. This is consistent with the findings of safety researchers, where even simple tools, like checklists, have proven to be lifesaving. In the concluding chapter of his best-seller Blink (2005), Malcolm Gladwell describes how the use of simple physical screens between judges and performers helped end stereotypes against women playing certain instruments in the classical music world. According to Gladwell,

There is a powerful lesson in classical music’s revolution…We don’t know where our first impressions come from or precisely what they mean…Taking our powers of rapid cognition seriously means we have to acknowledge the subtle influences that can alter or undermine or bias the products of our unconscious. (Gladwell, 2005, p. 252)
But Gladwell adds that the second lesson from the real-life parables told throughout his book is that prejudices and other preconceived assessments, though perhaps unavoidable on a conscious level, can, with awareness, be used to solve problems.

Too often we are resigned to what happens in the blink of an eye. It doesn’t seem like we have much control over what bubbles to the surface of our unconscious. But we do, and if we can control the environment in which rapid cognition takes place, then we can control rapid cognition. We can prevent the people fighting wars or staffing emergency rooms or policing the streets from making mistakes. (Gladwell, 2005, pp. 252–253)
I am not sure I would agree we could completely prevent mistakes, but empirical evidence over decades has shown that, indeed, we can significantly reduce them by understanding how people process information in various environments, and by consciously implementing several tools to combat biases and other perils of what Gladwell calls rapid cognition. Lord et al. explain their finding in terms of perseverance. Citing Ross et al. (1975), Lord et al. (1984) define perseverance, with respect to evaluation of evidence, as “the tendency to retain existing beliefs even after the original evidence that fosters those beliefs has been shown to be invalid” (pp. 1239–1240). According to Lord et al. (1984), “Successful attempts to undo perseverance have required subjects to construct causal explanations for relations opposite to those indicated by the original evidence (Anderson, 1982; Anderson et al., 1980; Ross et al. 1975)” (p. 1240). This statement supports the idea that effort and deliberation are often required to counter confirmation biases, but that, with such effort, countering can indeed be effective. Robinson (1997) also suggests decision makers and negotiators can invoke deliberate strategies to reduce bias and enhance task performance and goal achievement:

The first class of solutions involves self-awareness on the part of the negotiator. It is not enough to merely be aware that we are prone to partisan perception, we must also realize that our opponent sees us as extreme, unreasonable, and devious. (Robinson, 1997, p. 8)
As part of the process of self-awareness, Robinson (1997) argues the responsibility falls on each individual to establish trustworthiness. “This may involve making the first gesture, being prepared to offer goodwill gestures, and inviting input from the other side on how they would like the negotiation to proceed” (p. 8).
Relationship as the Basis for All Negotiations

This brings to my mind a book I selected for a class I taught on conflict management based on an intriguing title: George Kohlrieser’s (2006) Hostage at the table: How leaders can overcome conflict, influence others, and raise performance. Dr. George Kohlrieser is a clinical psychologist turned police hostage negotiator turned professor of leadership studies. While working as a clinical psychologist in a hospital, he was taken hostage by a patient, and he later applied this experience to his hostage negotiation role with law enforcement. Moreover, the title of his book comes from his revelation that the foundation of all negotiations, whether involving hostages or opposing businesses or other organizations, involves establishing some basis for human connection that is founded in a certain degree of both emotional bond and trust. Yes, he argues, even in the context of negotiating with a potentially dangerous individual, this possibility often exists and is necessary for conflict resolution. While most of us will thankfully never deal with an actual hostage negotiation, we do frequently encounter conflict in both our personal and professional lives. According to internationally known researcher of conflict management and professor of management Dr. Afzal Rahim (2011), there are predominantly two broad types of conflict: affective and substantive. Substantive conflict refers to the “meat” of the problem: the actual issue or issues to be resolved. In order to effectively address these issues and devise some form of problem solving, the conflict itself must be confronted. It is this confrontation and, specifically, the negative affect, or emotions, that accompany it that often leads people to avoid resolution. Therefore, one of the keys to conflict management is not to avoid conflict, which can be beneficial as it can lead to growth, but to manage it effectively. This often involves management of the initial negative affect and, to Kohlrieser’s point, establishing some basis for relationship, or at least trust. Kohlrieser appeals to the physiological and neurological makeup of the brain as an explanation for the importance of managing emotion in conflict situations. Specifically, he describes a physiological reaction that he refers to as the “amygdala hijack” (p. 5), referring to an area of the “primitive,” emotional brain called the limbic system. The phrase “amygdala hijack” was popularized by emotional intelligence researcher Dr. Daniel Goleman (1995). This is related to a phrase with which the reader is likely familiar: “fight or flight syndrome.” Our sympathetic nervous system is activated in the presence of a perceived threat (or potential prey), whether that threat is an actual predator, an opponent, or even a looming deadline. The physiological response is still geared toward a physical response: attack or run. As mammals and, indeed, carnivores (evidenced by the
forward orientation of our eyes and the presence of canine teeth), it is often the former. Once this “fight or flight” response is triggered, we literally become more animal-like and less rational. Due to several physiological changes that result from the amygdala signaling the release of stress hormones, such as increased heart rate and blood pressure, we become more alert and ready to spring into action at the slightest provocation. However, this also means resources are diverted from the prefrontal cortex, that part of our brain responsible for executive functioning and higher-order reasoning. This is one of the reasons people can underperform on exams or in other stressful situations, when, despite initial confidence in knowledge of the subject matter, “nerves” get the better of them. Kohlrieser’s unique contribution to the discussion of conflict management is the recognition that we cannot separate emotion from reason completely. The sympathetic and parasympathetic (responsible for calming us down) systems exist in the same organism. Our brain is both emotional and logical, and we are social, relationship-oriented animals. Key leaders throughout history, including Cyrus the Great, founder of the Persian Empire, and even the notorious Genghis Khan, founder of the first Mongolian Empire, have understood the longevity and influence of what French and Raven (1959) termed the “personal powers” that derive from attachment and respect (referent power) and deference to knowledge and wisdom (expert power). These derive, not from position, but from social relationships and trust-building. Hence, even military conquerors like Khan and Cyrus understood that while force and coercion are highly effective in the short term, long-term devotion and the establishment of unity require the personal powers. While each leader indeed reflected the brutality of his times with respect to subduing enemies, both subsequently took measures to build true allegiance among conquered peoples, including, in both cases, the freeing of slaves, the instituting of religious and other freedoms, and the involvement of lower-level members of society in aspects of decision making (Frye, 2022; Cyrus Cylinder, n.d.; Genghis Khan, 2019; Weatherford, 2004).
Chapter Summary

While we may like to think we judge information and make decisions based on sound reasoning, research indicates we are more likely to interpret data from a perspective that is biased toward our strongly held beliefs and/or affiliations. We must consciously apply tools in order to overcome this natural tendency toward an emotional, rather than rational, approach to evaluating information. Understanding human emotion and relationships, it turns out, is critical in helping to predict not only how people are likely to process information from the environment, but also how they tend to behave in groups, as well as in response to leadership. Moreover, it is important for understanding the underlying processes of organizational decision making in real-world settings.
References

Anderson, C. A. (1982). Inoculation and counter-explanation: Debiasing techniques in the perseverance of social theories. Social Cognition, 1, 126–139.
Anderson, C. A., Lepper, M. R., & Ross, L. (1980). Perseverance of social theories: The role of explanation in the persistence of discredited information. Journal of Personality and Social Psychology, 39, 1037–1049. https://doi.org/10.1037/h0077720
Bacon, F. (1960). The new organon and related writings. Liberal Arts Press. (Original work published in 1620).
Cyrus Cylinder. (n.d.). In New World Encyclopedia. Retrieved November 17, 2022, from https://www.newworldencyclopedia.org/entry/Cyrus_cylinder
Ditto, P. H., Liu, B. S., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., Celniker, J. B., & Zinger, J. F. (2019). At least bias is bipartisan: A meta-analytic comparison of partisan bias in liberals and conservatives. Perspectives on Psychological Science, 14(2), 273–291. https://doi.org/10.1177/1745691617746796
French, J. R., & Raven, B. (1959). The bases of social power. In D. Cartwright (Ed.), Studies in social power (pp. 150–168). Research Center for Group Dynamics, Institute for Social Research, University of Michigan. https://isr.umich.edu/wp-content/uploads/historicpublications/studiesinsocialpower_1413_.pdf
Frye, R. N. (2022, October 21). Cyrus the Great. Encyclopedia Britannica. https://www.britannica.com/biography/Cyrus-the-Great
Genghis Khan. (2019, June 6). Genghis Khan. History.com (Eds.). Retrieved December 6, 2022, from https://history.com/topics/China/genghis-khan
Gladwell, M. (2005). Blink. Little Brown and Company.
Goleman, D. (1995). Emotional intelligence. Bantam Books.
Kohlrieser, G. (2006). Hostage at the table: How leaders can overcome conflict, influence others, and raise performance (Vol. 145). Wiley.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47(6), 1231–1243. https://doi.org/10.1037/0022-3514.47.6.1231
Rahim, M. A. (2011). Managing conflict in organizations (4th ed.). Routledge.
Reason, J. (1995). Understanding adverse events: Human factors. BMJ Quality & Safety, 4(2), 80–89. https://doi.org/10.1136/qshc.4.2.80
Robinson, R. J., & Harvard Business School. (1997). Errors in social judgment: Implications for negotiation and conflict resolution (Vol. Part 2, partisan perceptions). Harvard Business School.
Ross, L., Lepper, M. R., & Hubbard, M. (1975). Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm. Journal of Personality and Social Psychology, 32, 880–892. https://doi.org/10.1037/0022-3514.32.5.880
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 30, 453–458. https://doi.org/10.1007/978-1-4613-2391-4_2
Weatherford, J. (2004). Genghis Khan and the making of the modern world. Three Rivers Press.
Part IV
Decision Making in a Complex World
Chapter 14
Naturalistic Decision Making
The Intuition of Charlemagne

Around the year 747, somewhere near modern-day Germany or France, was born a man who would revolutionize the western world. Carolus Magnus, more commonly known as Charlemagne or Charles the Great, would become the first emperor of the Holy Roman Empire and form the Carolingian dynasty.

Both a warrior and an intellectual, Charlemagne possessed considerable native intelligence, intellectual curiosity, a willingness to learn from others, and religious sensibility—all attributes which allowed him to comprehend the forces that were reshaping the world around him. These facets of his persona combined to make him a figure worthy of respect, loyalty, and affection; he was a leader capable of making informed decisions, willing to act on those decisions, and skilled at persuading others to follow him. (Sullivan, 2022, para. 3)
Charlemagne was something of a Renaissance man, long before the term or the time period for which it was named. He is also considered a founder of the modern western university system because he combined the disciplined training of his palace guards with the intellectual traditions that had survived in European monasteries. Charlemagne selected Alcuin, a clergyman from England, to help form a system to bring education to his people. Alcuin, in turn, created what would become the basis of the liberal arts curriculum by focusing on seven main subjects: the Trivium, which included grammar, logic, and rhetoric; and the Quadrivium, which included arithmetic, geometry, astronomy, and music (Weidenkopf & Schreck, 2009). It is interesting to note, however, that despite his love of learning, his biographer, Einhard (775–840) wrote that Charlemagne was never able to master reading and writing, even though he practiced diligently (Crites, 2014). Einhard supposed it was due to the fact that Charlemagne attempted to learn the skill too late in life, but the modern reader has cause to wonder whether Charlemagne may in fact have been dyslexic.
Fig. 14.1 Waldo of Reichenau and Charlemagne. Note: Waldo of Reichenau presents Charlemagne with various reliquiae (eighteenth century). Retrieved from https://upload.wikimedia.org/wikipedia/commons/2/2a/Waldo_of_Reichenau_and_Charlemagne.jpg. Public domain
Though dyslexia is typically classified as a learning disability, recent case studies have begun to speculate about a possible link between dyslexia and entrepreneurial creativity (e.g., Big Think, 2011; Frost, 2021; Gobbo, 2010). Recent empirical data have lent support to this observed relationship. Cancer et al. (2016) found that those with developmental dyslexia had significantly higher scores on a creative thinking test, particularly for “unusual combination of ideas” (p. 10). Sinclair (2011) linked such creativity behind entrepreneurship to a form of thinking known as intuition. Baldacchino et al. (2015) define intuition as “a way of processing information that is largely unconscious, associative, fast and contextually dependent” (p. 212). It is often what we mean when we talk about knowing something “in our gut.” (Fig. 14.1)
Experts and Their “Guts”

In Blink, Gladwell shares,

One of the questions that I’ve been asked over and over again since Blink came out is, When should we trust our instincts, and when should we consciously think things through? Well, here is a partial answer. On straightforward choices, deliberate analysis is best. When questions of analysis and personal choice start to get complicated – when we have to juggle
many different variables – then our unconscious thought processes may be superior. Now, I realize that this is exactly contrary to conventional wisdom. We typically regard our snap judgement as best on immediate trivial questions.…{but} maybe that big computer in our brain that handles our unconscious is at its best when it has to juggle many competing variables. (2005, p. 267)
Naturalistic Decision Making

Kahneman (2013) introduces Gary Klein as an intellectual rival, whom he describes as "the intellectual leader of an association of scholars and practitioners who do not like the kind of work I do. They call themselves students of Naturalistic Decision Making, or NDM" (p. 234). Naturalistic decision making focuses on how humans actually make decisions in the real world, particularly on the differences between novices and experts. According to Kahneman (2013), proponents of NDM criticize the focus on heuristics and biases "as overly concerned with failures and driven by artificial experiments rather than by the study of real people doing things that matter. They are deeply skeptical about the value of using rigid algorithms to replace human judgement" (2013, pp. 234–235). Despite the different approaches, Kahneman describes a "productive adversarial collaboration" (p. 234) with Klein, as well as admiration for the latter's seminal studies of expertise in firefighters, the results of which were initially published in the 1980s.

Dr. Gary Klein is a research psychologist most associated with helping to found the movement in naturalistic decision making (NDM). According to his biography on the website of the Human Factors and Ergonomics Society (HFES), his philosophical approach to researching cognition and decision making is "to understand how people actually think and make decisions in ambiguous and complex situations" (Human Factors and Ergonomics Society, n.d., para. 2).

A 1986 study conducted by Gary Klein, along with Roberta Calderwood and Anne Clinton-Cirocco, examined decision making in teams during a critical incident. Specifically, the researchers examined the real-life decision points of "Fire Ground Commanders (FGCs), who are responsible for allocating personnel resources at the scene of a fire" (p. 576). The revelation was that FGCs did not use any of the classic normative or "rational" decision making strategies. In fact, "in less than 12% of the decision points were there any evidence of simultaneous comparisons and relative evaluation of two or more options" (p. 576). According to Klein et al., "FGC's most commonly relied on their expertise to directly identify the situation as typical and to identify a course of action as appropriate for that prototype" (p. 576). On the basis of this and similar findings, Klein developed the Recognition Primed Decision (RPD) model, "which emphasizes the use of recognition, rather than calculation or analysis for rapid decision making" (Klein et al., 1986, p. 576). Klein defined the RPD model as "an example of a naturalistic decision making model
{that} attempts to describe what people actually do under conditions of time pressure, ambiguous information, ill-defined goals, and changing conditions" (Klein, 1997a, b, p. 287). He then outlines four criteria, focusing on "(a) experienced agents, working in complex, uncertain conditions, who face (b) personal consequences for their actions. The model (c) tries to describe rather than prescribe, and (d) it addresses situation awareness and problem solving as a part of the decision making process" (p. 287). Klein (1997a, b) wrote:

The traditional decision research tried to identify a Rational Choice method (generate a range of options, identify evaluation criteria, evaluate each option on each criterion, calculate results, and select the option with the highest score) that could help people make better decisions…regardless of content area. (p. 49)
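To make the contrast concrete, the Rational Choice procedure Klein describes can be read almost literally as an algorithm: generate the options, define weighted evaluation criteria, score every option on every criterion, and select the highest total. The short Python sketch below is offered only as an illustration of that general procedure; the options, criteria, and weights are hypothetical and are not drawn from any study cited in this chapter.

# A minimal sketch of the "Rational Choice" procedure described above:
# score every option on every weighted criterion and pick the highest total.
# All names and numbers are hypothetical illustrations.

def rational_choice(options, weights):
    """Return the option with the highest weighted-additive score."""
    def total(scores):
        return sum(weights[criterion] * value for criterion, value in scores.items())
    return max(options, key=lambda name: total(options[name]))

# Hypothetical fireground example: three candidate actions scored (0-10)
# on three criteria a commander might weigh.
weights = {"crew_safety": 0.5, "speed_of_control": 0.3, "property_saved": 0.2}
options = {
    "interior attack": {"crew_safety": 4, "speed_of_control": 8, "property_saved": 7},
    "defensive exterior": {"crew_safety": 9, "speed_of_control": 5, "property_saved": 4},
    "ventilate then enter": {"crew_safety": 6, "speed_of_control": 6, "property_saved": 6},
}
print(rational_choice(options, weights))  # exhaustive comparison across all options

Klein's observation, developed in what follows, is that experienced decision makers in the field rarely perform this kind of exhaustive comparison.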
However, he discusses the challenge of applying classic rational choice strategies, noting, "general strategies may be weak strategies, because a one-size-fits-all strategy may not fit any specific setting very well" (p. 50). Citing research on experts from chess players to Navy officers involved in antiair operations, Klein argues "people with experience can use their experience to generate a reasonable course of action as the first one considered…Situation awareness appears to be more critical than deliberating about alternative courses of action" (p. 51).

In Blink, Gladwell, as he is apt to do, tells a fine story that vividly paints the picture of Klein's interview with a fire department commander and the revelation that came out of it. In the interview, the Cleveland firefighter recalled a seemingly routine residential kitchen fire from when he was a Lieutenant. The crew began fighting the fire as usual, but the fire was not reacting as expected. The Lieutenant got a "there's something wrong with this picture" feeling, and he ordered his crew out of the home moments before the basement floor collapsed. Gladwell records Klein's memories of the interview with the Cleveland firefighter. According to Klein, the Lieutenant "didn't know why he had ordered everyone out…He believed it was ESP" (2005, p. 123). Gladwell describes Klein as a "deeply intelligent and thoughtful man {who} wasn't about to accept that as an answer" (p. 123). From this point, Gladwell goes on to summarize something akin to a cognitive task analysis, one that Klein refined to focus more on the cues utilized (consciously or otherwise) in situation assessment. The critical decision method is a tool for eliciting such knowledge; it is based, in part, on the work of Flanagan (1954) on the critical incident technique.

For the next two hours, again and again he led the firefighter back over the events of that day in an attempt to document precisely what the lieutenant did and didn't know. "The first thing was that the fire didn't behave the way it was supposed to," Klein says. Kitchen fires should respond to water. This one didn't. "Then they moved back into the living room," Klein went on. "He told me that he always keeps his earflaps up because he wants to get a sense of how hot the fire is, and he was surprised at how hot this one was. A kitchen fire shouldn't have been that hot. I asked him, 'What else?'" Often a sign of expertise is noticing what doesn't happen, and the other thing that surprised him was that the fire wasn't noisy. It was quiet, and that didn't make sense given how much heat there was. (Gladwell, 2005, p. 123)
Perhaps the key point in that whole statement is that expertise is often about noticing what does not happen, what does not fit the pattern. What we call a "gut" feeling is often the processing of cues just below the level of consciousness. The Lieutenant's decision was not ESP. It was quite logical, based on vital pieces of information his brain had tucked away from experience, but it was not quite at the level of his consciousness. "In retrospect all those anomalies make perfect sense" (Gladwell, 2005, p. 123). Amid the confusion, or literal smoke of the battle, there were still vital clues to what was going on – the muffled sound of the fire, the heat not centered in the kitchen, and the fire not abating when the hoses were directed in the kitchen. The fire was not in the kitchen; it was in the basement.

Hayashi (2001) credits decision expert Herbert Simon with explaining the kind of intuitive reaction experienced by experts across many different domains: "Experts see patterns that elicit from memory the things they know about such situations" (p. 9). Dr. Gary Klein's focus on decision making became rooted in cognition, or how the human brain processes information, rather than in mathematical formulas. His focus was on intuition, or rapid decision making – the "gut" of experts – rather than on the specific algorithms that were the focus of many preceding studies of decision making. It is not that decision algorithms do not work; rather, different situations require different cognitive decision strategies – and those strategies are not always consciously applied by experts. Hogarth (2010) acknowledged that the data originally reported by Meehl (1954) on clinical judgment, later supported by the many studies of Tversky and Kahneman (e.g., Kahneman & Tversky, 1984; Tversky & Kahneman, 1974), indicate that the influence of heuristics and biases can lead humans to underperform objective computer algorithms in predicting probabilistic outcomes. However, Hogarth contends that "cognition is complex. Most of the actions we take involve a mixture of intuitive and non-intuitive elements or processes" (p. 339). Moreover, Hogarth cites Simon's (1955) concept of bounded rationality, in which "the key notion is that, because people are physically incapable of performing all the rational calculations required by economic theory, they resort instead to other mechanisms" (p. 340). He discusses Klein's work on recognition-primed decision making (RPD) as providing one such mechanism because "experts' use of intuitive pattern recognition" (p. 340) can reduce cognitive load.

As is often the case with kindred spirits who may disagree but share the same passion for the pursuit of knowledge, Kahneman and Klein collaborated on a project that asked when an expert's intuition can be trusted over algorithmic models; after several years of collaboration, they remained at an impasse. But this is often how scholarship works, as it should. Interestingly, Kahneman (2013) brings up Gladwell's (2005) Blink as providing a real-world scenario whose interpretation Kahneman, Klein, and indeed Gladwell all agreed upon. The book begins with a story of art experts employed to determine the authenticity of a magnificent statue. According to Kahneman (2013),
Several of the experts had strong visceral reactions: they felt in their gut that the statue was a fake but were not able to articulate what it was about it that made them uneasy…The experts agreed that they knew the sculpture was a fake without knowing how they knew – the very definition of intuition. (p. 235)
Hastie and Dawes (2001) claim that "association is the simplest form of automatic thinking" (p. 4) and make the interesting statement that "when automatic thinking occurs in less mundane areas, it is often termed intuition" (p. 5). Hogarth (2010) said of intuition that "the essence of intuition or intuitive responses is that they are reached with little apparent effort, and typically without conscious awareness. They involve little or no conscious deliberation" (p. 339). In Blink, Gladwell (2005) tells the story of a tennis coach by the name of Vic Braden, who had a seemingly uncanny ability to spot when a player – even a stranger – was about to hit a particular type of fault. As Gladwell tells it,

Something in the way the tennis players hold themselves, or the way they toss the ball, or the fluidity of their motion triggers something in his unconscious…he just knows. But here's the catch: much to Braden's frustration, he simply cannot figure out how he knows. (p. 49)
What Gladwell is describing is a particular type of automatic processing exhibited by experts. Studies of eye movements in novice versus expert pilots have found that, given an emergency, the eye movements of expert pilots follow a very similar pattern, whereas those of novices are often more erratic. Similar to the findings on expert firefighters of Klein et al. (1986), the expert pilots tend to know, on a "gut" level, the critical information to extract from the environment. Indeed, I experienced something very similar in working with expert test pilots at the Naval Air Warfare Center in Patuxent River, Maryland, in the 1990s, but with respect to differences in performance (Carmody & Maybury, 1998). Using eye-tracking technology, we found that pilots who utilized secondary, "back-up" cues were able to rapidly adapt and reassess when the primary cues they had been relying upon to acquire a target were obscured. When the primary cue was obscured between the training run and the test run, the pilots broke out into two groups. Those pilots who continued to search for the primary cue failed to acquire their target, whereas those whose eye movements quickly switched to a backup cue were more successful in their mission. Similar to the descriptions of Klein and Gladwell regarding intuition, the more successful pilots often were not able to articulate their strategy. That is not always the case, however. Gladwell describes the laboratory of famous marriage researcher Dr. John Gottman, who has spent decades meticulously coding hundreds of combinations of emotions into formulas that predict marital success with surprising accuracy. Not only can experts develop the ability to spot patterns more quickly, but Gottman's research has homed in on a few particularly vital cues. Gottman's successful algorithm is a good example of "Herbert Simon's contention that 'intuition and judgment are simply analyses frozen into habit'" (Hayashi, 2001, p. 9). Whether the situation involves expert pilots trying to find the source of an emergency or an expert at marital analysis homing in on clues to impending divorce, the
key to both finding patterns and picking out the most important cues in an otherwise cacophonous environment is expertise. Anyone familiar with reading can understand this. If you were to try to describe how you read to a nonreader, and how you see meaningful chunks of information in what to them is a series of random, meaningless symbols on a page, you would likely find that a difficult task. It is difficult to go back and see only a series of random, meaningless symbols, to “unsee” the patterns you now see so readily. The work of neurologist Dr. Antonio Damasio (1994) on patients with damage to their prefrontal cortex dramatically demonstrates the importance of being able to pick out the critical pieces of information. The ventromedial area of the prefrontal cortex is important to judgment. The key to judgment is not the ability to utilize data in a rational manner, but to zero in on the important pieces of information in any given situation. Damasio relays the effects of this inability, witnessed in a patient with damage to this area: I suggested two alternative dates, both in the coming month and just a few days apart from each other. The patient pulled out his appointment book and began consulting the calendar. The behavior that ensued, which was witnessed by several investigators, was remarkable. For the better part of a half hour, the patient enumerated reasons for and against each of the two dates; previous engagements, proximity to other engagement, possible meteorological conditions, virtually anything that one could think about concerning a simple date...walking us through a tiresome cost-benefit analysis, an endless outlining and fruitless comparison of options and possible consequences. It took enormous discipline to listen to all of this without pounding on the table and telling him to stop. (as cited in Gladwell, 2005, p. 59)
According to Hayashi (2001), Damasio explained this peculiar exchange as a result of the brain damage. "Decision making is far from a cold, analytic process…Our emotions and feelings play a crucial role by helping us filter various possibilities quickly, even though our conscious mind might not be aware of the screening." (p. 8). The opposite of this type of "paralysis by analysis" is the quick-thinking decisiveness that saved the lives of multiple firefighters in the situation that first led Klein to begin the study of naturalistic decision making. As Gladwell points out, "in good decision making, frugality matters" (p. 141). In Blink, he highlights several case studies, as well as empirical research, to support the conclusion that "even the most complicated of relationships and problems… have an identifiable underlying pattern" (p. 141). Gladwell references two interesting studies conducted by Bargh and colleagues (1996) that demonstrated how people's behavior can be impacted by a cognitive phenomenon known as priming. According to Bargh (2021), priming refers to the effects on responses of seemingly unrelated stimuli that may not even be consciously recalled (i.e., implicit memory). "The term 'priming' had its origins in the verbal priming, implicit memory research of Segal" (Bargh, 2021, p. 30). In the first study described in Blink, research participants who were given a word-sorting task that was sprinkled with words evoking images of aging, such as "grey," "old," and "wrinkle," had a measurably slower gait when walking down the corridor after the study. The second study described by Gladwell was an experiment in which one group had to unscramble sentences that included aggressive words like "bold,"
"rude," and "disturb," while the other group's task included words such as "polite," "respect," and "considerate." The participants were then instructed to talk to the person running the experiment. As part of the experimental design, however, the person the participants were supposed to talk to was busy and engaged in conversation. The study measured how long it would take the participants to interrupt that conversation. The effect was quite large: those in the "rude" group interrupted much faster, on average, than those in the "polite" group, and the great majority of participants in the latter group never interrupted at all. The cues you are picking up on, what Gladwell calls "thin slices," can sometimes sit just below the level of consciousness, resulting in a "there's something wrong with this picture" sensation: you have a strong feeling something is amiss, but you just cannot put your finger on what it might be.
Problem Solving

The "battle" between analytical and intuitive decision making is particularly relevant with respect to problem solving. "Problem solving is often a combination of subconscious and conscious processing. Exploring, intuitive thinking and deliberate, systematic thinking are strongly interwoven in the practice of problem solving" (Nelissen, 2013, p. 39).
Training for NDM

Cannon-Bowers and Bell (1997) do an excellent job of summarizing both the characteristics and training applications of NDM in their chapter entitled Training Decision Makers for Complex Environments: Implications of the Naturalistic Decision Making Perspective (in Zsambok & Klein, 1997). While acknowledging the challenges of both researching and applying principles of NDM to decision making that "is seen as intertwined with task accomplishment, context-specific, fluid, flexible, and in some respects 'procedure-free'" (p. 100), they nonetheless "contend that knowledge, skills, and processes that underlie decision making – even naturalistic decision making – can be identified and trained" (p. 100). They then proceed to outline how it can be effectively applied to improve strategic decision making. "The most fruitful way to characterize NDM-consistent training might be to view it as a mechanism to support natural decision making processes, and as a means to accelerate proficiency or the development of expertise" (p. 100). Before we can begin to consider how to train effective decision making, we must first determine the characteristics of effective decision making. This helps to establish the end goals of training. On the basis of both the theories and the empirical evidence behind NDM, Cannon-Bowers and Bell (1997) outlined the attributes of effective decision makers, the underlying cognitive mechanisms behind those attributes, and the implications for training.
Attributes of Effective NDM Decision Makers

Cannon-Bowers and Bell hypothesized several attributes of effective decision makers. Because of the nature of NDM decision contexts, effective decision makers "must be able to cope with environments that are ambiguous, rapidly changing and complex" (p. 100). Hence, flexibility and speed are the first two attributes considered. Rather than adhering to the somewhat rigid rules of classical normative decision making, "expert decision makers have a repertoire of decision making strategies that they can draw on in response to particular situational cues" (p. 100). This cue recognition both enhances the speed of decision making and incorporates another critical attribute of effective naturalistic decision makers: adaptability. "When events unfold rapidly in decision making, or decisions involve multiple goals…decision makers must engage in a continuous process of strategy assessment and modulation…must recognize when and how to apply a decision strategy and when to change" (p. 102). All training is dependent on the ability to recognize the important cues, as well as to distinguish them from noise. These important cues help distinguish significant, systematic – and therefore predictable – patterns from random noise. When I walk out of my bedroom in the morning wearing my athletic shoes (the time may vary, as I am not particularly consistent in that domain), my dog instantly runs to the room where I keep her leash. She understands the wearing of these particular shoes as an important cue that indicates we are going for a walk. In order to manage a rapidly changing environment in an adaptable manner, effective decision makers must also be resilient. Cannon-Bowers and Bell point out the stressors associated with "ambiguity, uncertainty, and high stakes" (p. 102). As such, training for performance in naturalistic decision making must include operating under conditions of stress. Along with managing stress comes risk assessment: high stakes decisions call for balancing risk-gain probabilities as well as the severity of consequences. Finally, the most obvious but most difficult to assess attribute is that effective decision makers must be accurate. Often, this can only be determined in hindsight, which can introduce its own biases!
Underlying Cognitive Mechanisms to Effective Decision Making Attributes

First and foremost, for decision makers to be effective in dynamic, complex, high stakes decision environments, they must have excellent situation assessment skills. Situation awareness results from good assessment, and "is about having the information you need about the environment and the system you operate to make effective decisions" (SA Technologies, 2022, para. 2). The process by which one uses relevant cues from the environment to build an accurate internal mental model of the situation, which one might define as good situation awareness, is situation
assessment. Cannon-Bowers and Bell (1997) conclude, based on a review by Lipshitz (1993) of nine prominent NDM-related theories, that "overall, it is believed that this superiority in situation-assessment skills accounts for much of the ability of experts to make rapid decisions, and contributes to their decision making accuracy" (p. 103). Inherent in situation assessment is the ability to recognize relevant cues and patterns for a given environment. Moreover, there is evidence these cues and patterns are organized differently in expert neural networks.

Indeed, modern artificial intelligence (AI) applies artificial neural networks, or "highly interconnected networks of computer processes inspired by biological nervous systems," to "help non-specialists to obtain expert-level information" (Park & Park, 2018, p. 594). Park and Park first conceptually define AI as "artificial general intelligence where 'general' indicates intelligence with a universal ability to cope with uncertain environments" (p. 595). They then go on to distinguish, wisely in my opinion, strong from weak AI, the latter of which is defined as "a concept that intends to build a cognitive and judgmental system inherent in computing, refusing the unreasonable reduction and reproduction attempt of the human intelligence, which is expected and intended by strong AI" (p. 595). For this reason, weak AI is often associated with the term "machine learning." Although Park and Park trace the cognitive roots of AI all the way back to Aristotle, I concur with their assessment that "the scholars who established and developed the concept of an artificial neural network and machine learning gained lessons from the case of aviation technology development in the early 20th century" (2018, p. 596). In 1951, what is acknowledged as the first artificial neural network was developed by Marvin Minsky and Dean Edmonds. It simulated the activity of 40 interconnected neurons (Wang, 2019). At the time, Minsky and Edmonds were both graduate students in mathematics at Princeton University. Minsky would later become one of the most prominent names in both cognitive and computer science, founding the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Applications to expert systems in medicine date back to about 1972 at Stanford University. Such systems are "primarily aimed at organizing and classifying the knowledge that experts in a particular field are able to use"… {and deal} "mainly with expertise that has been systematized through research" (Park & Park, 2018, p. 598). The growing field of big data analytics can be argued, in fact, to somewhat mimic what the expert brain does naturally: to "reveal hidden patterns and meaningful information in data so as to convert data into information and further transform information into knowledge" (Wang, 2019, p. 617).
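As a purely illustrative aside, the "highly interconnected networks" idea can be reduced to its smallest case: a single artificial neuron that learns, from labeled examples, which combination of cues signals an anomaly. The cue labels and training examples below are hypothetical and echo the firefighter story only loosely; this is a sketch of the general perceptron technique, not of any system described by Park and Park (2018) or Wang (2019).

# A toy artificial neural network in miniature: one perceptron learning to flag
# an anomalous cue pattern from labeled examples. Cue names and training data
# are hypothetical illustrations of the general technique.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (cue_vector, label) pairs, labels 0 (typical) or 1 (anomalous)."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for cues, label in samples:
            prediction = 1 if sum(w * x for w, x in zip(weights, cues)) + bias > 0 else 0
            error = label - prediction          # 0 when correct; +/-1 when wrong
            weights = [w + lr * error * x for w, x in zip(weights, cues)]
            bias += lr * error
    return weights, bias

# Hypothetical binary cues: [unexpected_heat, unusual_quiet, water_ineffective]
training = [([1, 1, 1], 1), ([1, 0, 1], 1), ([0, 0, 0], 0), ([0, 1, 0], 0), ([1, 0, 0], 0)]
w, b = train_perceptron(training)
new_case = [1, 1, 1]  # all three warning cues present
print("anomalous" if sum(wi * x for wi, x in zip(w, new_case)) + b > 0 else "typical")

The point of the toy example is only that the resulting "knowledge" ends up distributed across connection weights rather than stored as explicit rules, which is loosely analogous to the pattern recognition this chapter attributes to experts.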
Mental Simulation

We saw in Chap. 9 how knowledge is generally organized in memory, but how this storage occurs is particularly relevant in expert decision making. According to Cannon-Bowers and Bell (1997), "expert decision makers build well-organized
knowledge structures that can be readily accessed and applied in decision making situations" (p. 104). The main task of expert decision making is to accurately assess a situation and generate a solution from among possible "templates" stored in memory, such as schemas or mental models. A notable exception, however, is in novel situations, where "a template for solving it does not exist in memory" (Cannon-Bowers & Bell, 1997, p. 104). In such cases, expert decision makers often rely on mental simulation, in which the expert utilizes existing knowledge to hypothesize possible solutions and then thinks the solutions through to their probable outcomes. A classic case of mental simulation in a completely novel situation occurred in the skies over Sioux City, Iowa, in 1989.
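Before turning to that case, it may help to picture how mental simulation differs from the exhaustive scoring sketched earlier. Read schematically, the recognition-primed process is serial and satisficing: recognition proposes a plausible course of action, mental simulation plays it forward, and the first option whose simulated outcome seems workable is adopted. The sketch below is only a schematic reading of that idea, with hypothetical stand-in functions for recognition and simulation; it is not Klein's formal model.

# A schematic sketch of recognition-primed deciding with mental simulation:
# consider one candidate at a time, simulate it forward, and accept the first
# workable option instead of scoring every alternative. The candidate actions
# and the simulation stub are hypothetical placeholders.

def recognition_primed_choice(situation, candidates, simulate, acceptable):
    """Return the first candidate whose simulated outcome is acceptable."""
    for action in candidates:                  # candidates come from recognition,
        outcome = simulate(situation, action)  # roughly in order of typicality
        if acceptable(outcome):
            return action                      # satisfice: stop at the first workable plan
    return None                                # no stored template fits: a truly novel situation

# Hypothetical usage with stand-in functions.
candidates = ["standard kitchen attack", "pull crew back and reassess", "basement attack"]
simulate = lambda situation, action: {"expected_risk": 0.2 if "reassess" in action else 0.8}
acceptable = lambda outcome: outcome["expected_risk"] < 0.5
print(recognition_primed_choice({"fire": "unusually hot and quiet"}, candidates, simulate, acceptable))

In a genuinely novel situation, the loop falls through: no adequate template exists, and the decision maker must construct and simulate candidate solutions from more basic knowledge, which is exactly what happened in the following case.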
United Flight 232

Imagine you are a passenger on a presumably routine flight. As is typically the case, the Captain's voice comes over the loudspeaker. But this will not be a routine announcement about the weather at your destination. Ladies and Gentlemen, this is Captain Al Haynes speaking. We will be landing in approximately 8 minutes. "I'm not going to kid you. We're going to make an emergency landing in Sioux City. It's going to be hard…Harder than anything you have been through before. Please pay close attention to the flight attendants' briefing" (as cited in Kennedy, 2019, para. 6). Although 112 of the 296 people on board tragically perished, the situation was so grave that the fact that there were any survivors has been called a near miracle. The McDonnell Douglas DC-10 suffered a catastrophic engine failure due to metal fatigue from a defect that had gone undetected by maintenance crews. According to the National Transportation Safety Board (NTSB), "The separation, fragmentation, and forceful discharge of uncontained stage 1 fan rotor assembly parts from the No. 2 engine led to the loss of the three hydraulic systems that powered the airplane's flight controls." (NTSB, 1990). The engine essentially exploded, and the debris took out all three hydraulic systems, which resulted in a near total loss of all flight controls of the aircraft. This was a situation for which there was no plan. Captain Haynes had over 30,000 flight hours with United Airlines alone, not counting his previous experience as a pilot with the US Marines. Copilot/First Officer William Records had over 20,000 hours with United, and Flight Engineer Dudley Dvorak had over 15,000. Dennis Fitch, an off-duty DC-10 instructor pilot who was a passenger in the back, offered his assistance. Despite all their extensive training and experience, including how to deal with a variety of emergencies, nothing had prepared them for this completely unforeseen circumstance. So improbable was the complete loss of all hydraulic systems that, when Dvorak contacted United engineers on the ground for assistance, they were incredulous that all systems had gone down at the same time and were unable to offer any suggestions. There was no precedent, no plan. What ensued is considered a classic case of both successful crew resource management (including the coordination with the cabin and multiple ground crews) and the use of mental simulation in problem solving. Based on their collective
knowledge of aerodynamics, built through years of experience, the aircrew essentially had to figure out how to fly an aircraft without the availability of virtually any of the flight controls they had learned to use as pilots. Through creativity, communication, and trial and error, they were able to slow the aircraft and rate of descent enough to reach a runway and prevent complete loss of life, although it was still a devastating crash landing. The check airman {Dennis Fitch} said that based on experience with no flap/no slat approaches he knew that power would have to be used to control the airplane’s descent. He used the first officer’s airspeed indicator and visual cues to determine the flightpath and the need for power changes. (National Transportation Safety Board, 1990, p. 9)
Strategy Modulation and Reasoning Skills

The case of United Flight 232 illustrates two other mechanisms behind naturalistic expert decision making that are closely related to mental simulation: strategy selection/modulation and reasoning skills. In fact, one might think of mental simulation as a tool or application of modulation and reasoning skills. It is a means by which experts hypothesize, assess, and adapt strategies cognitively. The ability to continually assess the dynamics of a situation and adapt strategies is critical to expert decision making, particularly problem solving.

Orasanu (1993) maintained that when problems are ill-defined, decision makers must be able to diagnose situations, which requires them to engage in causal reasoning, hypothesis generation, and hypothesis testing. Orasanu also noted that creative problem solving (i.e., constructing novel solutions to a problem, or applying existing strategies in a new or different way) is required when existing knowledge and procedures do not meet the needs of the current decision. (Cannon-Bowers & Bell, 1997, p. 105)
This was indeed the case with United Flight 232.
Real-World Implications for NDM Training

Because both the particular expertise and the dynamics of a situation are considered critical aspects of naturalistic decision making, training applications emphasize exposing novices to "typical" situations that build context-specific domain knowledge and foster skills such as situation assessment and mental simulation. Cannon-Bowers and Bell (1997) suggest simulations "can accelerate proficiency by exposing decision makers to the kinds of situations they are likely to confront in the real world" (p. 107).
Training for Stress Management

Indeed, simulations have been used to train aviators since the early days of aviation. In fact, as mentioned in Chap. 4, the seminal findings of Fitts and Jones (1947), based in part on observing errors pilots made in simulations, were pivotal in establishing the discipline of human factors. Likewise, the military and first responder organizations regularly utilize training simulations to try to mimic not only the scenarios trainees are likely to encounter in the real world but also the stressors associated with them. Though it is not possible to completely simulate the types of stressors someone is likely to encounter in combat or in a real-life survival situation, simulations offer a relatively safe alternative that exposes trainees to both their general animal and their unique individual reactions to stress. As indicated by Cannon-Bowers and Bell (1997), stress management is a critical component of expert decision making, particularly in a crisis. Because stress is part of the animal survival mechanism, when we encounter a stressor, the sympathetic nervous system responds by going into "fight or flight" mode. As discussed in Chap. 13, energy is directed toward enhancing the senses and helping the body to either flee or defend itself. Energy is directed away from higher order cognitive functioning. As such, we do not reason as well under stress. We are ruled more by the animal brain, particularly the limbic system, including the amygdala, than by the higher-order cerebral cortex. Readers undoubtedly have experienced times when they were nervous about an exam or speaking publicly or facing some similar stressor and, suddenly, information they thought they knew by heart escaped them. This is because, even though the stressor may not be a predator or prey, the body reacts to any perceived stressor – whether an actual physical threat or excessive paperwork – in a similar manner physiologically. In order to enhance information processing and reasoning, in any case, one must first manage the stress.

Stress management is also the target of many therapies intended to decrease generalized anxiety, post-traumatic stress, or specific phobias. In all these cases, the fight or flight response, which is intended to be short-lived and specific to survival in a given situation, has become overactive, long-term, and pathological, resulting in prolonged fear responses and potential negative physiological effects, such as chronic high blood pressure. Examples of therapies range from milder, self-guided approaches, such as exercise and/or yoga and various forms of meditation (e.g., mindfulness, prayer, and breathing practices), to more extensive professional therapies, such as cognitive behavior therapy, systematic desensitization (typically used for phobias), or eye movement desensitization and reprocessing (EMDR), the last of which has shown promise for PTSD in particular among recent peer-reviewed studies (e.g., de Jongh et al., 2019; Farrell et al., 2020). Managing stress is very much in keeping with the philosophical and theoretical underpinnings of NDM, which has foundations in the understanding that human responses to the environment, including physiological and social as well as cognitive responses, can very much impact both the nature and quality of decision making. Training under the NDM model, at its most basic level, includes "recognitional
processes that activate schemas in response to internal or external cues” (Cohen et al., 1997, p. 258).
Applications of Human Factors to Training Expert Decision Making

Much of human factors research and application over the many decades has focused on system design, based on understanding how humans process information.

Designers frequently fail to realize or predict the difficulty people will experience in using their system. One reason is that they are extremely familiar with the system and have a very detailed and complete mental model…They fail to realize that the average user does not have this mental model and may never interact with the system enough to develop one. (Wickens et al., 2004, p. 137)
For this same reason, training novices in expert decision making may involve processes similar to the human factors user design principles established over nearly 100 years of empirical research. These include principles such as encouraging frequency of use (or training); active reproduction of information and/or processes (e.g., through practice or simulation); standardization of controls, equipment, and processes; memory aids; and careful design for ease of storage and retrieval of information from memory. The latter can include everything from meaningful "chunking" of bits of data to organizing information into sets.

Aviation has long been a source of systematic study of NDM. This is due to several reasons, not least of which is the fact that it is one of the few work environments where every aspect of a crew's movements, communications, and decisions is routinely recorded. But studies of NDM in various other domains have continued to grow over the years. Naturalistic decision making remains an active research area, with the 16th International Conference being held in Orlando in October of 2022. The Naturalistic Decision Making Association, according to its website, was incorporated in 2021, and it includes Dr. Gary Klein as a board member. It defines NDM as "a framework used to study, inform, and improve how people make decisions in demanding, real-world situations" (Naturalistic Decision Making Association, 2022, para. 1). The goals of the NDMA include improving decision making, safety, performance, and innovation through research and the utilization of "many different tools, theories, concepts, and methods for understanding complex cognitive functions" (Naturalistic Decision Making Association, 2022, para. 2). NDM proponents understand that, in order to improve any process, we must first understand it. How humans process information to make decisions is no different. "This position implies that we must be able to describe what expert decision makers do – delineate the knowledge, skills, and processes that may underlie effective decision making, and also describe the mechanisms of NDM that we seek to support" (Cannon-Bowers & Bell, 1997, p. 100).
Metarecognition

Cohen et al. (1997) are quick to point out, however, that while training to enhance and accelerate recognition processes is important, it is inadequate for the decision maker facing novel situations, even in a particular expert domain.

Although recognition is at the heart of proficient decision making, other processes may often be crucial for success. For example, Klein (1993) discussed mental simulation of a recognized option. We argue that many of these processes, which verify the results of recognition and improve situation understanding in novel situations, are metarecognitional in function. (p. 257)
Metarecognition better captures the dynamic nature of situation assessment, which is not a one-and-done process. There must be constant monitoring, feedback, and updating of the operating mental model. "Metarecognition is a cluster of skills that support and go beyond the recognitional processes in situation assessment" (Cohen et al., 1997, p. 258). It involves testing and evaluating the current state and modifying strategies to correct problems or gaps in understanding in a constant process of monitoring, feedback, and problem solving. The "process of correcting may instigate additional observation, additional information retrieval, reinterpretation of cues in order to produce a more satisfactory situation model and plan, or any combination thereof" (p. 258). Such metarecognition skills were on full display among the aircrew in the case of United Flight 232 in the skies over Sioux City. After-action reviews, common after various military training scenarios, are useful in improving training through monitoring, followed by critique, of various problem-solving skills and inevitable mistakes during simulated drills. According to Cohen et al. (1997), critiquing can result in the discovery of three kinds of problems with an assessment: incompleteness, unreliability, or conflict. "An assessment is incomplete if key elements of a situation model or plan based on the assessment are missing. In order to identify completeness the recognitional meanings of the cues must be embedded within a story structure" (Cohen et al., 1997, p. 259).
As described by Cohen et al. (1997), story structures seem very analogous to strategy, as the main components are concerned with goals, capabilities, opportunities, actions, and consequences. These components must be connected in causal and interacting relationships with intended outcomes.
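Read as a process, the metarecognition cycle described above (monitor the current situation model, critique it for incompleteness, unreliability, or conflict, then correct it and observe again) resembles a simple control loop. The sketch below is only a schematic paraphrase of that cycle; the observe, critique, and correct functions are hypothetical placeholders rather than anything specified by Cohen et al. (1997).

# A schematic control-loop reading of metarecognition: repeatedly test the
# current situation model, critique it for incompleteness, unreliability, or
# conflict, and correct it by gathering more information or reinterpreting cues.
# The observe/critique/correct functions are hypothetical placeholders.

def metarecognition_loop(model, observe, critique, correct, time_available):
    """Iteratively critique and correct a situation model while time remains."""
    for _ in range(time_available):
        cues = observe()                        # additional observation
        problems = critique(model, cues)        # e.g., incompleteness, unreliability, conflict
        if not problems:
            return model                        # the assessment currently holds up
        model = correct(model, problems, cues)  # reinterpret cues and update the model
    return model                                # act on the best model time allows

On this reading, even a rapid, recognition-based decision is never final; the operating model stays open to revision for as long as the situation allows.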
Goals of After-Action Reviews and Incident Investigations

The goal of any postsimulation or accident/incident investigation should be to improve metacognition and metarecognition for future training. In discussing mistakes in judgment that are made during these critical, high stakes decisions involving multiple variables and uncertainty, and often under time stress, Gladwell states
“This is the second lesson of Blink: understanding the true nature of instinctive decision making requires us to be forgiving of those people trapped in circumstances where good judgement is imperiled” (pp. 262–263). That said, it is important to try to understand the nature of errors following events, because there are often patterns that, again, a century of human factors research has shown can be both detected and improved upon.
What Might the Tragedy in Uvalde Have in Common with the 3-Mile Island Nuclear Power Plant Meltdown?

With that in mind, I want to address the tragedy that took place in Uvalde, Texas, in May of 2022. On May 24, 2022, students and teachers in Uvalde were anticipating summer vacation. Nineteen students and two teachers would not live to see it. As of the writing of this book, the deadly shooting at this elementary school is less than a year old. Emotions are still raw, and analysis is in its infancy. It is important for investigators to strike a balance between navigating strong emotions and risking memory loss (or reconstruction) for events. But as data gradually unfold and patterns begin to emerge, thoughts can turn to analyses that can help reduce such events in the future. I must stress that, at the writing of this book, analyses of the incident are undoubtedly ongoing, and I am not directly involved in any of them. We must always be cautious of initial reports, but an initial consensus does seem to be emerging, and the information currently available does seem to support it, that there was a devastating failure of situation assessment at the heart of the response and decision errors. With respect to the response, the Texas House of Representatives' Investigative Committee on the Robb Elementary Shooting, in its interim report dated July 17, 2022, found "systemic failures and egregiously poor decision making" (p. 5). This was despite the fact that, according to the report, Uvalde CISD did have a well-formed policy for the prevention and handling of such incidents.

As directed by state legislation enacted in 2019, Uvalde CISD adopted a policy for responding to an active shooter emergency. And Uvalde CISD deserves credit for having done so—they are one of the few Texas school districts recognized by the School Safety Center as having submitted a viable active shooter policy. (Burrows et al., 2022, p. 14)
The policy included important evidence-based best practices, such as locked doors, perimeter security, and threat assessment teams. But such policies are only effective if enforced, and the report indicated that in the case of the Uvalde elementary school there was a "regrettable culture of non-compliance" (p. 6). This is likely not unusual in many, if not most, school districts, where the notion of "it can't happen here" is prevalent. The report further specifies that, despite the district's own policy guidelines regarding communication and the establishment of a command post, neither was in place on May 24. Instead, there was a "void of leadership" (p. 11) stemming from the lack of such a command
post and effective communications. Notably, such a command post “could have transformed chaos into order, including the deliberate assignment of tasks and the flow of information necessary to inform critical decisions” (p. 11). The report determined the primary error in response to be the resulting failure of law enforcement officers to “adhere to their active shooter training” (p. 6). In short, the responders appeared to be using the right approach for the wrong situation – a barricaded subject versus an active shooter. Why this happened when there were clear indications that this was not the case (e.g., calls from inside the classrooms) remains to be determined, and again, I would caution careful analysis over rush to judgment. However, it is important to note that, undoubtedly, a major contributing factor was the lack of communication, coordination, and information flow – a pattern that is seen in virtually every major system breakdown from aviation to hospitals to schools. As we know from human factors research, it is during such times of ambiguous information and poor communications that cognitive biases are most likely to rear their ugly heads, and it is possible there was a cognitive bias operating – one that is similar to confirmation bias – known as cognitive tunneling. Where confirmation bias is selectively attending to and/or overweighting data that supports one’s initial hypothesis, cognitive tunneling has been described as sticking doggedly to one’s plan, despite clear indications that the plan is not working, or is making the situation worse. Wickens et al. (2004) make an important distinction between a plan and troubleshooting. “Informed problem solving and troubleshooting often involve careful planning of future tests and activities. However, troubleshooting and diagnosis generally suggest that something is ‘wrong’ and needs to be fixed” (p. 147). I would add that troubleshooting and problem solving are also much more dependent on active and continuous assessment of the situation, including constantly seeking cues from the environment and updating a dynamic internal cognitive model. An infamous example of cognitive tunneling occurred in 1979 at the Three Mile Island nuclear power plant near Harrisburg, Pennsylvania. It was part of a chain of events in which a cooling malfunction caused a partial meltdown in one of the reactors. According to the World Nuclear Association (2022), It involved a relatively minor malfunction in the secondary cooling circuit which caused the temperature in the primary coolant to rise. This in turn caused the reactor to shut down automatically. Shut down took about one second. At this point a relief valve failed to close, but instrumentation did not reveal the fact, and so much of the primary coolant drained away that the residual decay heat in the reactor core was not removed. The core suffered severe damage as a result. (para 2)
The actions of the operators were based on their training, but for the wrong situation, due to the initial faulty readings of the instrumentation. In fact, the relief valve was not closed and was leaking coolant. “Responding to the loss of cooling water, high-pressure injection pumps automatically pushed replacement water into the reactor system” (World Nuclear Association, 2022, para. 5). Because the operators believed the valve had closed, their response was to reduce the flow of replacement water. “Their training told them that the pressurizer water level was the only dependable indication of the amount of cooling water in the system. Because the pressurizer level was increasing, they thought the reactor system was too full of water”
(World Nuclear Association, 2022, para. 6). In writing of the incident, Wickens et al. (2004) explained that "operators mistakenly thought that that emergency coolant flow should be reduced and persisted to hold this hypothesis for over two hours" (p. 167). Their actions in responding to this misdiagnosis were actually making the problem worse. "Only when a supervisor arrived with a fresh perspective did the course of action get reversed" (Wickens et al., 2004, p. 167). In other words, the unbiased perspective of the supervisor led to an almost immediate recognition that the current hypothesis – and subsequent plan – was incorrect. Likewise, in the Uvalde tragedy, law enforcement officers testified to the Robb Elementary investigative committee that, upon first entering the building, environmental cues supported an initial situation assessment of a barricaded subject with no immediate evidence of students or teachers still in the rooms or visual confirmation of the threat. Indeed, transcripts from dispatcher communications indicate an attempt was made to resolve the ambiguity and determine if children or teachers remained in the threatened classrooms, and these seem to support the initial assessment. However, the investigative committee report adds that this "'barricaded subject' approach never changed over the course of the incident despite evidence that {the} perspective evolved to a later understanding that fatalities and injuries within the classrooms were a very strong probability" (Burrows et al., 2022, pp. 57–58).
Distributed Decision Making

Like the incidents of United Flight 232 and Apollo 13 reported earlier, Uvalde involved a distributed decision making situation, where "people with different roles and responsibilities often must work together to make coordinated decisions while geographically distributed" (Bearman et al., 2010, p. 173). Unlike the earlier incidents, however, there was clearly a breakdown in shared mental models and team situation assessment. The whys behind this are myriad and yet to be determined, but as Bearman et al. (2010) assert,

At a practical level, decomposing breakdowns into operational, informational, and evaluative disconnects appears to capture useful distinctions that should aid the investigation of incidents in complex sociotechnical systems in which a breakdown in coordination or collaboration has occurred. When the categories of disconnects are employed, they can be used to guide more detailed questions about the incident. (p. 185)
Their research led them to identify three different types of disconnects. Operational disconnects involve a mismatch between the actions or plans of team members, informational disconnects involve a lack of access to the same information, and evaluative disconnects involve members who possess the same information but are mismatched in their appraisal of it or of what it means. Bearman et al. (2010) state that "it is clear that operational disconnects, for example, often stem from informational and evaluative disconnects" (p. 185).
What makes Uvalde different from the previous aerospace incidents, or even from other school shootings, is that the response of law enforcement has received as much scrutiny as the shooting itself. In January of 2022, less than 6 months before the events that unfolded in Uvalde, Steen and Pollock (2022) published an article evaluating, from the standpoint of NDM, the necessary elements of effective group decision making among police responders. The researchers used cognitive task analysis, a technique for mapping the critical decision points of a task, as well as case studies and interviews, to examine the effects of stress on decision making and performance among police officers. Among their findings, one that may be particularly relevant to the Uvalde tragedy is that "The mechanisms that ensure the synchronisation of activities link to an operational communication strategy grounded on transparency and trust between the parties involved" (p. 1). Steen and Pollock (2022) also include recommendations for "facilitating and stimulating proactive learning across the organization" (p. 1). These are consistent with the findings of Bearman et al. (2010) on distributed decision making in aviation and aerospace, as well as the findings of Kohlrieser (2006) and Roberto (2013) on the importance of trust building for both communication and the facilitation of a learning culture. Steen and Pollock's recommendations are also consistent with human factors tools that have been used for nearly 100 years and that will be discussed further in Chaps. 18 and 20.
Chapter Summary

Research has demonstrated that rationalistic, normative models often fall short in predicting real-world human decision making, particularly within complex adaptive systems. Research supports the approach of naturalistic decision making, which takes into account how the brain processes information, including how information is extracted from the environment, how expertise affects recognition of underlying patterns, and how research into human decision making within naturalistic settings can enhance both individual and team decision making, including distributed decision making.
References

Baldacchino, L., Ucbasaran, D., Cabantous, L., & Lockett, A. (2015). Entrepreneurship research on intuition: A critical analysis and research agenda. International Journal of Management Reviews, 17(2), 212–231. https://doi.org/10.1111/ijmr.12056
Bargh, J. A. (2021). All aboard! 'Social' and nonsocial priming are the same thing. Psychological Inquiry, 32(1), 29–34. https://doi.org/10.1080/1047840X.2021.1889326
Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology, 71(2), 230. https://doi.org/10.1037/0022-3514.71.2.230
Bearman, C., Paletz, S. B., Orasanu, J., & Thomas, M. J. (2010). The breakdown of coordinated decision making in distributed systems. Human Factors, 52(2), 173–188. https://doi. org/10.1177/0018720810372104 Big Think. (2011, May 12). Are dyslexics better visionaries? Culture and Religion. https://bigthink.com/culture-religion/are-dyslexics-better-visionaries/ Burrows, D., Moody, J., & Guzman, E. (2022). House investigative committee on the Robb elementary shooting Texas house of representatives interim report 2022: A report to the 88th Texas Legislature. Comm. Print. https://scribd.com/document/582917999/ robb-elementary-investigative-committee-report Cancer, A., Manzoli, S., & Antonietti, A. (2016). The alleged link between creativity and dyslexia: Identifying the specific process in which dyslexic students excel. Cogent Psychology, 3(1), 1190309. https://doi.org/10.1080/23311908.2016.1190309 Cannon-Bowers, J. A., & Bell, H. H. (1997). Training decision makers for complex environments: Implications of the naturalistic decision making perspective. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 99–110). Lawrence Erlbaum Associates, Inc.. Carmody-Bubb, M. A., & Maybury, D. A. (1998). Evaluation of prototype display of enemy launch acceptability region (LAR) on the F/A-18 HUD. Proceedings for the Third Annual Symposium and Exhibition on Situational Awareness in the Tactical Air Environment (pp. 9–15). Cohen, M. S., Freeman, J. T., & Thompson, B. B. (1997). Training the naturalistic decision maker. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 257–268). Lawrence Erlbaum Associates, Inc. Crites, G. J. (2014). For the love of learning. Christian History Institute. Retrieved November 21, 2022 from https://christianhistoryinstitute.org/magazine/article/ charlemagne-for-the-love-of-learning Damasio, A. R. (1994). Descartes’ error: Emotion, reason and the human brain. Harper Collins. de Jongh, A., Bicanic, I., Matthijssen, S., Amann, B. L., Hofmann, A., Farrell, D., et al. (2019). The current status of EMDR therapy involving the treatment of complex posttraumatic stress disorder. Journal of EMDR Practice and Research, 13(4), 284–290. https://doi. org/10.1891/1933-3196.13.4.284 Farrell, D., Kiernan, M. D., de Jongh, A., Miller, P. W., Bumke, P., Ahmad, S., Knibbs, L., Mattheß, C., Keenan, P., & Mattheß, H. (2020). Treating implicit trauma: A quasi-experimental study comparing the EMDR Therapy Standard Protocol with a ‘Blind 2 Therapist’version within a trauma capacity building project in Northern Iraq. Journal of International Humanitarian Action, 5(1), 1–13. https://doi.org/10.1186/s41018-020-00070-8 Fitts, P. M. & Jones, R. E. (1947). Analysis of factors contributing to 270 “pilot error” experiences in operating aircraft controls (Report TSEAA-694-12A). Dayton, OH: Aero Medical Laboratory, Air Material Command, Wright-Patterson Air Force Base, OH: Aeromedical Lab. Frost, S. (2021, August 5). Dyslexia can future proof your business. Forbes. https://www.forbes. com/sites/sfrost/2021/08/05/dyslexia-can-future-proof-your-business Gladwell, M. (2005). Blink. Little Brown and Company. Gobbo, K. (2010). Dyslexia and creativity: The education and work of Robert Rauschenberg. Disability Studies Quarterly, 30(3/4). https://doi.org/10.18061/dsq.v30i3/4.1268 Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world: The psychology of judgment and decision making. Sage. Hayashi, A. M. (2001). When to trust your gut. Harvard Business Review, 79, 33. 
https://hbr.org/2001/02/when-to-trust-your-gut Hogarth, R. M. (2010). Intuition: A challenge for psychological research on decision making. Psychological Inquiry, 21(4), 338–353. https://doi.org/10.1080/1047840X.2010.520260 Human Factors and Ergonomics Society. (n.d.). HFES fellows profile: Gary Klein. Retrieved November 9, 2022, from https://www.hfes.org/portals/0/documents/hfes_fellow_profiles/gary_klein.pdf?ver=2020-12-23-082604-473 Kahneman, D. (2013). Thinking, fast and slow. Farrar, Straus and Giroux.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350. https://doi.org/10.1037/0003-066X.39.4.341 Kennedy, M. (2019, August 26). Al Haynes, pilot from miraculous 1989 crash landing, has died. NPR. Retrieved October 6, 2022, from https://www.npr.org/2019/08/26/754458583 Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 138–147). Ablex. Klein, G. A. (1997a). The recognition-primed decision (RPD) model: Looking back, looking forward. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 285–292). Lawrence Erlbaum Associates, Inc. Klein, G. A. (1997b). An overview of naturalistic decision making applications. In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 49–59). Lawrence Erlbaum Associates, Inc. Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision making on the fire ground. Proceedings of the Human Factors Society Annual Meeting, 30(6), 576–580. https:// doi.org/10.1177/154193128603000616 Kohlrieser, G. (2006). Hostage at the table: How leaders can overcome conflict, influence others, and raise performance (Vol. 145). Wiley. Lipshitz, R. (1993). Converging themes in the study of decision making in realistic settings. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision-making in action: Models and Methods (pp. 103–137). Ablex. Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. University of Minnesota Press. https://doi.org/10.1037/11281-000 National Transportation Safety Board. (1990). United Airlines Flight 232, McDonnell Douglas DC-10-10, Sioux Gateway Airport, Sioux City, Iowa, July 19, 1989 {NTSB/AAR-90/06}. https://www.faa.gov/about/initiatives/maintenance_hf/library/documents/media/human_factors_maintenance/united_airlines_flight_232.mcdonnell_douglas_dc-10-10.sioux_gateway_ airport.sioux_city.Iowa.july_19.1989.pdf (faa.gov) Naturalistic Decision Making Association. (2022). What is naturalistic decision making? Retrieved September 12, 2022, from https://naturalisticdecisionmaking.org/ Nelissen, J. (2013). Intuition and problem solving. Curriculum and Teaching, 28(2), 27–44. https:// doi.org/10.7459/ct/28.2.03 Orasanu, J. (1993). Decision-making in the cockpit. In E. L. Weiner, B. G. Kanki, & R. L. Helmreich (Eds.), Cockpit resource management (pp. 137–168). Academic Press. Park, W. J., & Park, J. B. (2018). History and application of artificial neural networks in dentistry. European Journal of Dentistry, 12(04), 594–601. https://doi.org/10.4103/ejd.ejd_325_18 Roberto, M. A. (2005/2013). Why great leaders don’t take yes for an answer: Managing for conflict and consensus. FT Press. SA Technologies. (2022). About SA technologies. Retrieved December 8, 2022 from https://satechnologies.com/about/ Simon, H. (1955). A behavioral model of bounded rationality. Quarterly Journal of Economics, 69(1), 99–118. https://doi.org/10.2307/1884852 Sinclair, M. (Ed.). (2011). Handbook of intuition research. Edward Elgar Publishing. Steen, R., & Pollock, K. (2022). Effect of stress on safety-critical behaviour: An examination of combined resilience engineering and naturalistic decision-making approaches. Journal of Contingencies and Crisis Management, 30, 339. https://doi.org/10.1111/1468-5973.12393 Sullivan, R. E. (2022, December 2). Charlemagne. 
Encyclopedia Britannica. Retrieved December 21, 2022, from https://www.britannica.com/biography/Charlemagne Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124 Waldo of Reichenau presents Charlemagne various reliquiae (18th century). [Artwork]. Retrieved December 21, 2022, from https://upload.wikimedia.org/wikipedia/commons/2/2a/Waldo_of_Reichenau_and_Charlemagne.jpg. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or
fewer. This work is in the public domain in the United States because it was published (or registered in the U.S. Copyright Office) before January 1, 1927. Wang, L. (2019). From intelligence science to intelligent manufacturing. Engineering, 5(4), 615–618. https://doi.org/10.1016/j.eng.2019.04.011 Weidenkopf, S., & Schreck, A. (2009). Epic: A journey through church history, study set. Ascension Press. Wickens, C. D., Gordon, S. E., Liu, Y., & Becker, S. G. (2004). An introduction to human factors engineering. Pearson Prentice Hall. World Nuclear Association. (2022, April). Three Mile Island accident. World-nuclear.org. https://world-nuclear.org/information-library/safety-and-security/safety-of-plants/three-mile-island-accident.aspx Zsambok, C. E. (1997). Naturalistic decision making: Where are we now? In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 3–16). Lawrence Erlbaum Associates, Inc.
Chapter 15
The Stages of a Decision
What Is a Decision? We’ve talked about the cognitive information processing that goes into making a decision, but what, exactly, is a decision? In writing this book, I debated whether this section should go earlier. Much of the application of the empirical research, along with the framing in terms of complex adaptive systems, is oriented toward both understanding and improving organizational decision making, so shouldn’t the question of decision making and its stages be outlined much earlier? The understanding of the processes that guide decision making is so critical, however, that it was important to first establish for the reader the cognitive environment in which decision making takes place. “The key to good decision making is not knowledge. It is understanding. We are {often} swimming in the former. We are desperately lacking in the latter” (Gladwell, 2005, p. 265). Seminal researchers in human factors and cognitive psychology, Wickens et al. (2004) summarize a decision making task as one that involves selecting “one option from a number of alternatives, {where} some information is available…the timeframe is relatively long (longer than a second)…and it is associated with uncertainty” (p. 157). That is a very general definition of a phenomenon that can be quite complex and multivariate. As I wrote in my 1993 dissertation, Human decision-making behavior is a complex, context-dependent arena, but there is a consensus of opinion in the literature that the process generally proceeds through a characteristic set of stages. Flathers, Giffin, and Rockwell (1982) have outlined four stages of decision making to include “detection,” “diagnosis,” “decision,” and “execution.” Other authors have proposed similar stages, often in the context of modeling human information processing within complex systems. Wickens and Flach (1988), for example, include within their model of information processing the sequence: “perception and attention,” “situation assessment (diagnosis),” “choice,” and “action.” (Carmody, 1993, p. 7)
Furthermore, the literature has also been consistent for decades in its depiction of underlying cognitive structures affecting each stage in the decision process. "Similar
concepts of internal models (Braune & Trollip, 1982), schemata (Rumelhart & Ortony, 1977), or frames (Minsky, 1975)…serve to establish patterns of associated events that reduce an operator’s search for information by directing attention away from irrelevant and/or redundant cues” (Carmody, 1993, pp. 7–8). In more recent models of complex organizational decisions, the social-affective elements have featured more prominently. Cristofaro (2020) expanded on Simon’s (1947) classic model of the problem space to include affect to the interactions among memories, experience, and risk in cue utilization within managerial decision tasks.
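The four stages themselves, and the way an internal model directs attention within them, can be made concrete with a small sketch. The following Python fragment is a minimal, hypothetical illustration of a detection–diagnosis–decision–execution pipeline; the cue names, diagnoses, and rules are invented for illustration and are not drawn from any of the studies cited above.

```python
# Illustrative sketch only: the four decision stages described above
# (detection, diagnosis, decision, execution), with an "internal model"
# that filters which cues are attended to. All names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class InternalModel:
    """A crude stand-in for a schema/frame: which cues matter, and what they imply."""
    relevant_cues: set = field(default_factory=lambda: {"oil_pressure", "altitude"})
    diagnoses: dict = field(default_factory=lambda: {
        "oil_pressure": "possible engine problem",
        "altitude": "unintended descent",
    })

    def detect(self, cues: dict) -> dict:
        # Detection: attention is directed toward cues the model marks as relevant.
        return {k: v for k, v in cues.items() if k in self.relevant_cues and v == "abnormal"}

    def diagnose(self, detected: dict) -> list:
        # Diagnosis / situation assessment: map detected cues onto meanings.
        return [self.diagnoses.get(cue, "unknown condition") for cue in detected]


def decide(assessed: list) -> str:
    # Decision: choose an option given the assessed situation.
    return "divert and land" if assessed else "continue as planned"


def execute(choice: str) -> None:
    # Execution: carry out the chosen action.
    print(f"Executing: {choice}")


if __name__ == "__main__":
    model = InternalModel()
    raw_cues = {"oil_pressure": "abnormal", "altitude": "normal", "cabin_lights": "abnormal"}
    detected = model.detect(raw_cues)      # stage 1: detection
    assessed = model.diagnose(detected)    # stage 2: diagnosis
    choice = decide(assessed)              # stage 3: decision
    execute(choice)                        # stage 4: execution
```

The point of the sketch is simply that the quality of the final action depends on the internal model that filters and interprets cues long before any explicit "decision" is made.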
The OODA Loop One of the oldest and most “real-world” tested models of the stages of decision making is Boyd’s OODA loop, a concept developed over many years by the late US Air Force Colonel John R. Boyd, and crystalized in his 1996 paper (Boyd, 2012/1996). Though developed as a result of his experience in military aviation, it includes a learned understanding of basic human cognition with respect to the process of making decisions, particularly those involving complex, multivariate, and dynamic situations (Fig. 15.1). The OODA loop is an acronym that is arguably familiar to virtually all military aviators, operators, and strategists. It stands for Observe—Orient—Decide—Act. According to Richards (2020), the concepts inherent in the OODA loop “have influenced military thought in profound ways, from the design of modern fighter aircraft to the tactics used by the U.S. Marine Corps in both Gulf Wars” and “practitioners in war and business can use the loop to implement a framework that …might be described as ‘creativity under fire’” (pp. 142–143). Richards (2020) further describes the OODA loop as “an ancient pattern of actions that Boyd developed as a fighter pilot and then discovered that it could be documented back to at least the time of Sun Tzu” (p. 143). The OODA loop is typically depicted as a circular process linking the four stages, but this cycle has been criticized as overly simple (Osinga, 2005; Richards, 2004, 2020), and not in keeping with the original intent of Boyd (Richards, 2020). Richards (2020) is quick to point out that this simple circular model, so familiar to so many in human factors, military operations, and organizational strategy, is a gross oversimplification of the OODA loop concept. He further argues “with objections as serious as these, it is well that Boyd never included the OODA ‘loop’ … nor did he ever describe it as a sequential process in any of his presentations on competitive strategy” (p. 145). Osinga (2005) noted “a simple, sequential loop does not well model how organizations act in a conflict” (p. 8, as cited in Richards, 2020, p. 144). Osinga relayed the thoughts of Jim Storr, a British officer, who argued, “The OODA loop is not circular…Military forces do not in practice wait to observe until they have acted. Observation, orientation, and action are continuous processes, and decisions are made occasionally in consequences of them” (Osinga, 2005, p. 8, as cited in Richards, 2020, p. 144).
Fig. 15.1 US Col John Boyd as a Captain or Major Note. US Government (2012) Col John Boyd as a Captain or Major. Retrieved from File:JohnBoyd Pilot.jpg—Wikimedia Commons. Public domain
Although Boyd used the term "loop" informally, according to Richards (2020), the only model he actually presented, in his final 1996 paper (Boyd, 1996, p. 3), was much more complex, as depicted in Fig. 15.2. Boyd (2012/1996) stated, "Without OODA loops, we can neither sense, hence observe, thereby collect a variety of information for the above processes, nor decide, as well as implement actions in accord with these processes" (p. 1). "In other words," summarizes Richards (2020), "an OODA loop illustrates a scheme for obtaining inputs for certain processes and generating actions. Essentially, the rest of Boyd's work describes processes and actions" (p. 145). Richards (2020) captures a fundamental connection between the OODA loop and complex adaptive systems:
Boyd, in a short slide he simply titled 'Revelation', insisted that the secret to winning was to create things – hardware, software, formations, tactics, leadership actions, whatever – and use them effectively 'when facing uncertainty and unpredictable change'. The OODA 'loop', then, is a schematic for creation and employment…{that} requires creativity…and most actions are triggered very quickly via the implicit guidance and control link. To exploit this potential, organizations need certain attributes so that, for example, actions can actually flow from orientation without the need for explicit decisions. (p. 150)
Fig. 15.2 Boyd’s OODA loop Note. Moran, P. (2008). Full Diagram Originally Drawn By John Boyd. Retrieved from https:// commons.wikimedia.org/wiki/File:OODA.Boyd.svg. CC By 3.0
Chapter Summary The stages of a decision have been described fairly consistently and upheld through decades of research. Despite slight variations in terms, most researchers conclude there are approximately four major stages, often reflective of the OODA loop made famous by the aviator Boyd (1996). It is obvious from Fig. 15.2 that the first two stages of Observe and Orient are multivariate and complex, particularly as compared to the latter stages. They reflect both the dynamic nature of most environments in which people must sense and observe, and the multiple top-down (and sometimes biased) influences that act upon the perceptual system in both selecting and processing observations from the environment. Indeed, Richards (2020) argues "as the model diagram {in Fig. 15.2} suggests, orientation is key. Specifically, by maintaining better awareness, one can create opportunities to act" (p. 146). Such awareness is known in human factors research as situational awareness.
References Boyd, J. (2012/1996). The essence of winning and losing. Retrieved December 7, 2022, from https://ooda.de/media/john_boyd_-_the_essence_of_winning_and_losing.pdf Braune, R. J., & Trollip, S. R. (1982). Towards an internal model in pilot training. Aviation, Space, and Environmental Medicine, 53(10), 996–999. Carmody, M. A. (1993). Task-dependent effects of automation: The role of internal models in performance, workload, and situational awareness in a semi-automated cockpit. Texas Tech University. Cristofaro, M. (2020). “I feel and think, therefore I am”: An Affect-Cognitive Theory of management decisions. European Management Journal, 38(2), 344–355. https://doi.org/10.1016/j. emj.2019.09.003
Flathers, G. W., Giffin, W. C., & Rockwell, T. H. (1982). A study of decision-making behavior of aircraft pilots deviating from a planned flight. Aviation Space and Environmental Medicine, 53(10), 958–963. Gladwell, M. (2005). Blink. Little Brown and Company. Minsky, M. (1975). A framework for representing knowledge. In P. Winston (Ed.), The psychology of computer vision (pp. 211–277). McGraw-Hill. Moran, P. (2008). Full diagram originally drawn by John Boyd for his briefings on military strategy, fighter pilot strategy, etc. File:OODA.Boyd.svg - Wikimedia Commons. Creative Commons — Attribution 3.0 Unported — CC BY 3.0 Osinga, F. P. B. (2005). Science, strategy and war: The strategic theory of John Boyd. (Doctoral dissertation). Delft (The Netherlands): Eburon Academic Publishers. Richards, C. (2004). Certain to win: The strategy of John Boyd, applied to business. Xlibris. Richards, C. (2020). Boyd’s OODA loop. https://hdl.handle.net/11250/2683228 Rumelhart, D. E., & Ortony, A. (1977). The representation of knowledge in memory. In R. C. Anderson, R. J. Spiro, & W. E. Montague (Eds.), Schooling and the acquisition of knowledge (pp. 99–136). Laurence Erlbaum. Truman Library Institute. (2017, December 6). The faith of a first lady: Eleanor Roosevelt’s spirituality. The Faith of a First Lady: Eleanor Roosevelt’s Spirituality – Truman Library Institute. U.S. Government. (2012). Col John Boyd as a Captain or Major. Retrieved December 7, 2022 from File:JohnBoyd Pilot.jpg – Wikimedia Commons. This work is in the public domain in the United States because it is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. Wickens, C. D., & Flach, J. M. (1988). Information processing. In E. L. Weiner & D. C. Nagel (Eds.), Human factors in aviation (pp. 111–156). Academic. Wickens, C. D., Gordon, S. E., Liu, Y., & Becker, S. G. (2004). An introduction to human factors engineering. Pearson Prentice Hall.
Chapter 16
Situational Awareness and Situational Assessment
The Role of Situation Assessment Zsambok wrote in 1997 that “if there is one thing that NDM {Naturalistic Decision Making} research has done, it is the spotlighting of situation assessment (SA) processes as targets for decision research and decision aiding” (p. 11). That is still very much true today. I first need to note that, while situational awareness and situational assessment are often used interchangeably, it is best to think of situation awareness as describing the state of one’s internal cognitive (mental) model or representation of the world. In my 1993 dissertation, I conceptually defined these internal cognitive models as “learned associative relationships” (Carmody, 1993, p. 20). These internal cognitive models direct the process of decision making. The definition of the internal model, in addition to the evidence, signifies its ability to change as a result of interacting with the demands of a situation. If the internal model is modifiable, one must expect that the decision-making process directed by the internal model is modifiable, and that, ultimately, the outcome of the decision-making process is modifiable. (Carmody, 1993, p. 229)
Situation assessment, on the other hand, is the process through which this internal cognitive model is built. I will generally use the phrase situation assessment in order to emphasize the dynamic nature of that process. The importance of continued monitoring, updating, and adaptation of the mental model is particularly relevant in complex adaptive systems, as captured by Rasmussen’s assertion that “the model is structured with reference to the space in which the person acts and is controlled by direct perception of the features of relevance to the person’s immediate needs and goals” (1987, p. 21). Continuous monitoring and updating is critical to maintaining situational awareness, particularly in a dynamic environment.
One of the seminal researchers in situational awareness is Dr. Mica Endsley, who has been an expert on the topic for over 30 years. Endsley has defined situational awareness as "an up to date understanding of the state of the world and systems being operated." She argues such awareness "forms a critical cornerstone for expertise in many complex domains, including driving, aviation, military operations, and medical practice" anywhere, essentially, "where there are many factors to keep track of which can change quickly and interact in complex ways" (2018, p. 714).
In the early 1990s, I had the privilege to meet Dr. Endsley, who has served as President of the Human Factors and Ergonomics Society (HFES) and as Chief Scientist of the US Air Force. It was during her time working as a human factors engineer with the Northrop Corporation that she began her pioneering research in situational awareness. I sought her expertise while she was a professor in the Psychology Department at Texas Tech University, and she served on my dissertation committee, the topic of which was situational awareness following automation failure in semi-automated cockpits. At the time, I was a Navy Aerospace Experimental Psychologist, working as a researcher at the Naval Air Development Center in Warminster, Pennsylvania.
Dr. Endsley developed many of her concepts and conducted much of her research while working with aviators at the Northrop Corporation. It was at Northrop, in 1987, that Dr. Endsley developed the Situational Awareness Global Assessment Technique, or SAGAT. SAGAT was developed specifically to collect objective data on situational awareness for human-in-the-loop tactical air simulations. The procedure involves several random stops of the simulation, at which time the screen goes blank, and the pilot is asked a series of questions regarding the information present on the screen at the time the simulation stopped. This allows for an objective comparison of the real and the perceived.
Endsley's model of situational awareness consists of three levels. Level I is the perception of events, a largely bottom-up process of extracting relevant cues from the environment. Level II involves not only perception but an understanding of the meaning of the information. Finally, Level III involves the ability to utilize that information to make predictions about future actions, as well as their consequences. Returning to Rasmussen's taxonomy of cognitive structure, Level I could be said to represent rule-based, Level II knowledge-based, and Level III skill-based processing. She has since continued to develop the model for decision making within dynamic environments. For details on the model, see Endsley (2015). Despite the levels, Endsley is quick to point out that while characterizing SA into three levels does indicate increasing quality and depth of awareness, "they do not necessarily indicate fixed, linear stages...A simple Level 1–2–3 progression is not a sufficiently efficient processing mechanism in a complex and dynamic system, which is where expertise and goal-driven processing come into play" (Endsley, 2018, p. 714).
Dr. Endsley's research in situation awareness while at Northrop focused on recovering it following automation failure. During the 1980s, as cockpit automation grew, many began to see it as a potential panacea for human error. However, several aviation accidents in that time period were attributed to the pilot's loss of situation
awareness as a result of being put into the position of passive monitor by overly automated tasks. This led to the designation among human factors researchers of automation-induced complacency (Singh et al., 1991, 1993) and subsequent human error. Indeed, on June 19, 2019, Captain Chesley “Sully” Sullenberger testified before the Subcommittee on Aviation of the United States House Committee on Transportation and Infrastructure regarding accidents involving a Boeing 737 Max. As an expert pilot, Captain Sullenberger was able to contribute insight into potential human factors involved in the crashes of Lion Air 610 and Ethiopian 302, which occurred within 5 months of each other and resulted in a loss of 346 lives. At issue was a flaw in the Boeing 737 Max aircraft, a recent design. Specifically, an automatic function intended to help stabilize the aircraft, known as Maneuvering Characteristics Augmentation System (MCAS), was designed such that, under certain conditions, it was able to move a secondary flight control by itself to push the nose down without pilot input (p. 2). Captain Sullenberger’s argument was that, in addition to violating the general aviation design principle of not having a single point of failure, the MCAS inadvertently threatened the pilot’s ability to rapidly assess and problem-solve the nature of the failure. With integrated cockpits and data being shared and used by many devices, a single fault or failure can now have rapidly cascading effects through multiple systems, causing multiple cockpit alarms, cautions and warnings, which can cause distraction and increase workload, creating a situation that can quickly become ambiguous, confusing and overwhelming, making it much harder to analyze and solve the problem. (p. 3)
Though MCAS was intended to enhance aircraft handling, it had the potential to have the opposite effect: "being able to move the stabilizer to its limit could allow the stabilizer to overpower the pilots' ability to raise the nose and stop a dive toward the ground" (p. 2). Sullenberger argued in his testimony that this could create a situation where, effectively, the pilot had to suddenly regain situation awareness under suboptimal conditions:
I'm one of the relatively small group of people who have experienced such a sudden crisis – and lived to share what we learned about it. I can tell you firsthand that the startle factor is real and it is huge – it interferes with one's ability to quickly analyze the crisis and take effective action. Within seconds, these crews would have been fighting for their lives. (p. 2)
Much of the theoretical understanding of human response to automation and automation failure is based on the seminal work on the vigilance decrement discussed in Chap. 11. Yet, most of the original vigilance studies involved relatively simple tasks. As tasks grow more complex, so do the interactions among the variables involved.
The Role of Internal Cognitive Models in Decision Making Within Complex Adaptive Systems
Hastings (2019) argues for the role of understanding complex adaptive systems (CAS) with respect to internal models in organizational behavior and strategy:
CAS consist of adaptive agents. Adaptive agents behave in accordance with an internal model, a set of stimulus-response rules for interacting with their environment…Often, adaptive agents do not fully understand and may not even know the rules and sub-routines exist which form the internal models that guide their behavior. Yet it is from these internal models that organized behavior of the agent emerges. (p. 8)
One of the case studies in Malcolm Gladwell's (2005) Blink is that of a major war game conducted around the turn of the millennium at the US Pentagon. During such war games, "opposing" teams, labeled Blue Team for allies and Red Team for foes, compete in simulated battle scenarios. Based on the input of hundreds of military strategy experts, as well as computer decision aids, Blue Team was given "an unprecedented amount of information and intelligence from every corner of the U.S. government and a methodology that was logical and systematic and rational and rigorous" (p. 105). The premise was that, with enough information and algorithms and decision aids, the uncertainty inherent in warfare could be beaten. But they lost to Red Team.
Gladwell relays how Red Team was led by a highly experienced combat military officer by the name of Paul Van Riper. "From his own experiences in Vietnam and his reading of the German military theorist Carl von Clausewitz, Van Riper became convinced that war was inherently unpredictable and messy and non-linear" (p. 106). It's not that Van Riper eschewed systematic analysis; he just recognized when it was—and was not—appropriate. "He and his staff did their analysis. But they did it first, before the battle started" (p. 143). Truly successful decision making lies in a balance between deliberate thinking, which Kahneman labeled System 2 (see Chap. 12), and instinctive thinking, which he labeled System 1. "Deliberate thinking is a wonderful tool when we have the luxury of time, the help of a computer, and a clearly defined task, and the fruits of that type of analysis can set the stage for rapid cognition" (Gladwell, 2005, p. 141). In other words, do the analysis when you have time. Determine the important patterns and cues; then, you will be more prepared to effectively apply them in a rapid cognition scenario requiring automatic processing. As Louis Pasteur realized over a century before the Pentagon's war games, "in the fields of observation, chance favors only the prepared mind" (as cited in Gibbons, 2013, para. 1).
Gladwell writes, "One of the things Van Riper taught me was that being able to act intelligently and instinctively in the moment is possible only after a long and rigorous course of education and experience" (p. 259). He goes on to say, "Van Riper beat Blue Team because of what he had learned in the jungles of Vietnam. And he also beat Blue Team because of what he had learned in that library of his" (p. 259). Van Riper was a student of military history. This demonstrates that in order to know your enemy, to paraphrase Sun Tzu, and to anticipate not only his actions but his reactions to your actions, one must have experience across multiple scenarios. That experience does not always come directly. While there is no substitute for real-world experience, the advantage of being a student of history is that such a student is exposed to multiple strategies and behaviors across different time periods, cultures, and geographic challenges—vicarious experiences that cannot be duplicated in one's own life, but from which one can nonetheless gain valuable insight.
That kind of diversity expands the realm of experience, if utilized well and internalized. This, in turn, helps us to understand human behavior across a vast array of circumstances and cultures.
The Role of Situation Assessment in Developing Internal Cognitive Models in Dynamic Environments
To examine interactions among the internal cognitive model, the environment, and the dynamic (or changing) aspect of a task, my dissertation explored several interactions among task type (i.e., requiring maintenance of a stable vs. dynamic internal model), level of engagement (i.e., degree of interaction required between operator and subtask in a multi-task environment), and judgement type (comparative vs. absolute) on several flight and task performance metrics, situational awareness (as measured by Endsley's 1987 SAGAT), and a subjective assessment of workload, the NASA Task Load Index (TLX), developed by the Human Performance Research Group at NASA Ames Research Center (Hart & Staveland, 1988). The study utilized an adaptation (Parasuraman et al., 1991) of the Multi-Attribute Task Battery (MAT; Comstock Jr & Arnegard, 1992). The MAT is a "multi-task flight simulation package that accesses three different, aviation relevant, general information processing areas: perceptual-cognitive (via a system monitoring task), cognitive-strategic (via a resource [fuel] management task), and perceptual-motor (via a compensatory tracking task) (Parasuraman et al., 1991)" (Carmody, 1993, p. 40). The complex and adaptive nature of the human operator's internal cognitive model can be summarized in the following manner:
If one were to optimize the internal model to accurately represent the situation in question, one would expect optimization of the decision-making process and its subsequent product, performance. The closer one's internal model is to accurate representation of the situation, the greater the "situational awareness" of the decision-maker…. The internal model determines for the decision-maker how much time should be taken before concluding the process, the amount and type of information to be used, as well as how it is to be manipulated in the decision. Such elements are affected by environmental and task factors such as level of arousal (Bahrick et al., 1952; Bursill, 1958; Easterbrook, 1959) and level of complexity (Baddeley, 1972; Keinan, 1987; Weltman et al., 1971; Wright, 1974). The components of the internal model and their use in the decision-making process are also affected by facets of the decision-maker, such as individual differences (Gopher, 1982; Gopher & Kahneman, 1971) and experience (Braune & Trollip, 1982; Chase & Simon, 1973; Chechile et al., 1989; Gopher, 1982). (Carmody, 1993, pp. 229–230)
From the above excerpt, the need for the human operator to maintain contact with assessments of the operating environment, in order to continually update an internal cognitive model, may be apparent to the reader, particularly when that model is dynamic, or rapidly changing. This need gave rise to a phrase often used among human factors researchers: automation-induced human error. Indeed, in his 2019 testimony, Captain Sullenberger credits MIT professor Dr. Nancy Leveson with "a quote that succinctly encapsulates much of what I have
learned over many years: ‘Human error is a symptom of a system that needs to be redesigned’” (p. 3). A systematic viewpoint must consider the human operator, and in particular, the well-established vigilance decrement. Humans are, and always have been, poor passive monitors. “In automated systems, the removal of the human operator from the ‘loop’, so to speak, reduces the opportunities that are necessary to continually update the internal model, resulting in a lack of direct situational awareness for the particular task” (Carmody, 1993, p. 20). Building on decades of field and laboratory research establishing the vigilance decrement, as well as previous cognition and automation research, I proposed in my dissertation that, Without such constant situational assessment or awareness, the human operator would be dependent upon the ‘internal model’ provided by the automated component. As humans are, once again, poor passive monitors (Chambers & Nagel, 1985), this would lead to a situation where, upon reentering the loop, the human’s internal model would not be ‘up to date’, particularly if such reentry was unexpected or sudden, as in automation failure. Such a situation would have an effect, most directly, on the detection and diagnosis stages of decision making, as one must detect the need to reenter the loop, and then diagnose the problem if one exists. (p. 20)
This is precisely what was described in the 2019 testimony of Captain Sullenberger regarding the inability of the pilots in both the crashes of Lion Air 610 and Ethiopian 302 to rapidly assess the problem related to the sudden engagement of the Maneuvering Characteristics Augmentation System (MCAS). In my dissertation research, I applied Endsley’s (1991, 1995) model of situational awareness to predict that the level of situational awareness impacted by return to manual following automation failure would be influenced by the stable versus dynamic nature of a task. A stable task was conceptually defined as one in which the status of information might change across time, but not the meaning. In other words, once the particular status of a given instrument was detected, its meaning would be immediately understood. The stable task in the experiment involved a system monitoring task utilizing four vertical scales with pointers that moved vertically and indicated the temperature and pressure of two engines. Similar to classic vigilance tasks, Under normal conditions, the pointers oscillated minutely around the normal ranges of their respective scales (within a 3-tick mark range)… At randomly scripted points in time, each scale’s pointer shifted (independently of the others) to produce a ‘signal’… defined to a subject as ‘any time the yellow pointer goes either completely above or completely below ‘normal’ range. (Carmody, 1993, p. 43)
In the case of a decision task guided by a stable internal model, it was predicted that “Level I, perception of an event, would be most affected in terms of a loss of situational awareness. It would not be expected that one would lose significant awareness of the meaning (Level II) or future projection (Level III) of an event in the case of a stable model, as these things do not change across time” (Carmody, 1993, p. 234).
On the other hand, tasks associated with a dynamic internal model were expected to be more heavily weighted on the diagnosis stage of decision making. This is because the dynamic task, which involved management of fuel resources during flight in the experiment, involved not only status but also the meaning of that status changing across time. Moreover, the meaning of the task was dependent on variables that interacted with other variables that were also changing. In such a case, it was predicted that “any losses in situational awareness would be manifest at deeper levels (II & III) than in the case of a more stable model. The meaning of the information and its future projection would change across time” (pp. 234–235). The results of my dissertation led to the conclusion that the overall pilot performance, which included multiple factors in addition to stability of the internal cognitive model related to a decision making task, did “not unequivocally support a stable versus dynamic internal model task distinction as a basis for differential effects of automation and return-to-manual operation on various flight-related tasks. However, the performance data definitely indicate general task-dependent effects of automation and return-to-manual operation” (p. 238). In fact, results did indicate “significant differences in situational awareness following automation of the two tasks, in such a manner as was predicted on the basis of a stable versus dynamic internal model” (p. 238). “The most compelling evidence in support of hypotheses regarding separating tasks on the basis of a dynamic versus stable internal model {was} provided by the SAGAT data” (p. 238). Specifically, the percentage of questions related to Endsley’s (1991) Level II (Meaning) questions about the dynamic task that were answered correctly dropped when that task was automated. With the stable task, however, pilots in the experiment were able to answer Level II questions regardless of whether or not it was automated.
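A rough sketch of SAGAT-style freeze-probe scoring may help clarify how such comparisons of the real and the perceived are made and how the results can be broken out by SA level. The queries, level assignments, and data structures below are hypothetical illustrations, not items from Endsley's instrument or from the dissertation described above.

```python
# Hypothetical sketch of SAGAT-style scoring: freeze the simulation at random
# times, blank the displays, query the operator, and score each SA level as the
# proportion of answers that match the simulation's ground truth at the freeze.

import random
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    level: int          # 1 = perception, 2 = comprehension, 3 = projection
    ground_truth: str   # what the simulation actually showed at the freeze

QUERIES = [
    Query("What is the current fuel flow to tank A?", 1, "low"),
    Query("Is the fuel state adequate to complete the mission?", 2, "no"),
    Query("Will tank A empty before the next waypoint?", 3, "yes"),
]

def freeze_and_probe(pilot_answers: dict, queries=QUERIES) -> dict:
    """Score each SA level as the proportion of answers matching ground truth."""
    correct = {1: [], 2: [], 3: []}
    for q in random.sample(queries, k=len(queries)):   # query order randomized
        answer = pilot_answers.get(q.text, "")
        correct[q.level].append(answer == q.ground_truth)
    return {lvl: (sum(hits) / len(hits) if hits else 0.0) for lvl, hits in correct.items()}

if __name__ == "__main__":
    # A pilot who perceives the raw state (Level I) but has lost its meaning and
    # projection (Levels II-III), the pattern reported above for the automated dynamic task.
    answers = {
        "What is the current fuel flow to tank A?": "low",
        "Is the fuel state adequate to complete the mission?": "yes",
        "Will tank A empty before the next waypoint?": "no",
    }
    print(freeze_and_probe(answers))   # e.g., {1: 1.0, 2: 0.0, 3: 0.0}
```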
The Modern Flight Deck Remains a Real-World Laboratory for Studying Situational Awareness in Complex Adaptive Systems
In the past 30 years, both the issues of flight deck automation and the tasks of the flight crew have become increasingly complex. Klaproth et al. (2020) both validate previous findings on the importance of maintaining good situational awareness and summarize the current state of the issue:
Automation has transformed pilots' role from hands-on flying to monitoring system displays which is ill-matched to human cognitive capabilities (Bainbridge, 1983) and facilitates more superficial processing of information (Endsley, 2017)…More complex automation can impede the detection of divergence in the situation assessment by human operator and automated system, neither of which may adequately reflect reality (p. 2).
Falling “out-of-the-loop” remains a phrase used to describe the loss of situational awareness a pilot—or any human operator—may experience, particularly when certain tasks are automated and the human becomes a passive monitor in a dynamic
environment. The study of human perception and information processing and their role in situation assessment in high-stakes, dynamic, complex environments continues to be a concern, as evidenced by the ongoing development and application of more advanced tools and methodologies. A recent example is a study by Klaproth et al. (2020), who applied a methodology using passive brain-computer interface (pBCI) and neurocognitive modeling “to trace pilots’ perception and processing of auditory alerts and messages during operations” (p. 1). The researchers examined the potential for neuroadaptive technology to be applied to improve situational awareness, particularly in complex, highly automated flight decks. Neuroadaptive technology uses various assessments of cognitive states in order to “maintain a model that is continuously updated using measures of situational parameters as well as the corresponding cognitive states of the user (e.g., Krol et al., 2020). Adaptive actions can then be initiated based on the information” (Klaproth et al., 2020, p. 3). The study used electroencephalograms (EEGs) to measure pilots’ processing of auditory alerts, and this EEG data was used to better understand the cognitive models of pilots in the hopes of developing tools to aid pilot situational awareness and prevent common but potentially deadly errors.
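The adaptation logic behind such neuroadaptive technology can be sketched in a few lines. The following Python fragment is a deliberately simplified, hypothetical illustration of a continuously updated user-state model that escalates an alert when the evidence suggests it went unprocessed; the signal names, threshold, and adaptive action are invented and do not reflect the Klaproth et al. (2020) implementation.

```python
# Hypothetical sketch: a user-state model is continuously updated from simulated
# physiological measures, and an adaptive action is triggered when the model
# suggests an auditory alert was not processed by the operator.

from collections import deque

class UserStateModel:
    def __init__(self, window: int = 5, threshold: float = 0.4):
        self.recent_responses = deque(maxlen=window)  # rolling evidence of alert processing
        self.threshold = threshold

    def update(self, eeg_response_strength: float, alert_active: bool) -> None:
        # A weak evoked response to an active alert is treated as evidence of a "miss".
        if alert_active:
            self.recent_responses.append(eeg_response_strength)

    def probability_alert_missed(self) -> float:
        if not self.recent_responses:
            return 0.0
        missed = sum(1 for r in self.recent_responses if r < self.threshold)
        return missed / len(self.recent_responses)

def adapt(model: UserStateModel) -> str:
    # Adaptive action: escalate presentation if the model suggests degraded processing.
    return "repeat alert visually and aurally" if model.probability_alert_missed() > 0.5 else "no change"

if __name__ == "__main__":
    model = UserStateModel()
    for strength in (0.9, 0.3, 0.2, 0.35, 0.8):   # simulated evoked-response strengths
        model.update(eeg_response_strength=strength, alert_active=True)
    print(adapt(model))   # -> "repeat alert visually and aurally"
```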
Chapter Summary To understand and predict human behavior in complex adaptive systems, we must understand how the human brain uses information from dynamic and potentially adaptive environments (or, more precisely, the interpretation of that information through sensation and perception) to judge relationships in the natural world, which is itself complex and adaptive. Decades of research have grounded decision making in situational awareness. Such awareness is especially critical in the context of complex adaptive systems, as continuous situation assessment is essential to ensuring that the internal mental model upon which the decision maker operates reflects the dynamic external environment.
References Baddeley, A. D. (1972). Selective attention and performance in dangerous environments. British Journal of Psychology, 63(4), 537–546. https://doi.org/10.1111/j.2044-8295.1972.tb01304.x Bahrick, H. P., Fitts, P. M., & Rankin, R. E. (1952). Effect of incentives upon reactions to peripheral stimuli. Journal of Experimental Psychology, 44(6), 400–406. https://doi.org/10.1037/ h0053593 Bainbridge, L. (1983). Ironies of automation. In G. Johanssen & J. E. Rijnsdorp (Eds.), Analysis, design and evaluation of man–machine systems (pp. 129–135). Pergamon. https://doi. org/10.1016/B978-0-08-029348-6.50026-9 Braune, R. J., & Trollip, S. R. (1982). Towards an internal model in pilot training. Aviation, Space, and Environmental Medicine, 53(10), 996–999.
Bursill, A. E. (1958). The restriction of peripheral vision during exposure to hot and humid conditions. Quarterly Journal of Experimental Psychology, 10(3), 113–129. https://doi. org/10.1080/17470215808416265 Carmody, M. A. (1993). Task-dependent effects of automation: The role of internal models in performance, workload, and situational awareness in a semi-automated cockpit. Texas Tech University. Chambers, A. B., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the ACM, 28(11), 1187–1199. https://dl.acm.org/doi/pdf/10.1145/4547.4551 Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In W. G. Chase (Ed.), Visual information processing (pp. 215–281). Academic. https://doi.org/10.1016/B978-0-12-170150-5. 50011-1 Chechile, R. A., Eggleston, R. G., Fleischman, R. N., & Sasseville, A. M. (1989). Modeling the cognitive content of displays. Human Factors, 31(1), 31–43. https://doi. org/10.1177/001872088903100 Comstock Jr, J. R., & Arnegard, R. J. (1992). The multi-attribute task battery for human operator workload and strategic behavior research (No. NAS 1.15: 104174). Easterbrook, J. A. (1959). The effect of emotion on cue utilization and the organization of behavior. Psychological Review, 60(3), 183–201. https://doi.org/10.1037/h0047707 Endsley, M. R. (1987). SAGAT: A methodology for the measurement of situation awareness (NOR DOC 87-83) (p. 18). Northrop Corporation. Endsley, M. R. (1991). Situation awareness in dynamic human decision making: Measurement. Department of Industrial Engineering, Texas Tech University, Lubbock. Unpublished document. Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64. https://doi.org/10.1518/001872095779049543 Endsley, M. R. (2015). Situation awareness misconceptions and misunderstandings. Journal of Cognitive Engineering and Decision Making, 9(1), 4–32. https://doi. org/10.1177/1555343415572631 Endsley, M. R. (2017). From here to autonomy: Lessons learned from human–automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350 Endsley, M. R. (2018). Expertise and situation awareness. In K. A. Ericsson, R. R. Hoffman, A. Kozbelt, & A. M. Williams (Eds.), The Cambridge handbook of expertise and expert performance (pp. 714–741). Cambridge University Press. https://doi. org/10.1017/9781316480748.037 Gibbons, G. H. (2013, December 24). Serendipity and the prepared mind: An NHLBI researcher’s breakthrough observations. Retrieved December 10, 2022, from https://nhlbi.nih.gov/ directors-messages/serendipity-and-the-prepared-mind Gladwell, M. (2005). Blink. Little Brown and Company. Gopher, D. (1982). A selective attention test as a predictor of success in flight training. Human Factors, 24(2), 173–183. https://doi.org/10.1177/001872088202400203 Gopher, D., & Kahneman, D. (1971). Individual differences in attention and the prediction of flight criteria. Perceptual and Motor Skills, 33(3_suppl), 1335–1342. https://doi.org/10.2466/ pms.1971.33.3f.1335 Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshtaki (Eds.), Human mental workload (pp. 139–183). North-Holland. Hastings, A. P. (2019). Coping with Complexity: Analyzing Unified Land Operations Through the Lens of Complex Adaptive Systems Theory. In School of Advanced Military Studies US Army Command and General Staff College. https://apps.dtic.mil/sti/pdfs/ AD1083415.pdf Keinan, G. (1987). 
Decision making under stress: Scanning of alternatives under controllable and uncontrollable threats. Journal of Personality and Social Psychology, 52(3), 639. https://doi.org/10.1037/0022-3514.52.3.639
Klaproth, O. W., Vernaleken, C., Krol, L. R., Halbruegge, M., Zander, T. O., & Russwinkel, N. (2020). Tracing pilots’ situation assessment by neuroadaptive cognitive modeling. Frontiers in Neuroscience, 14, 795. https://doi.org/10.3389/fnins.2020.00795 Krol, L. R., Haselager, P., & Zander, T. O. (2020). Cognitive and affective probing: A tutorial and review of active learning for neuroadaptive technology. Journal of neural engineering, 17(1), 012001. https://doi.org/10.1088/1741-2552/ab5bb5 Parasuraman, R., Bahri, T., & Molloy, R. (1991). Adaptive automation and human performance: I. Multi-task performance characteristics (Technical Report No. CSLN91-1). Cognitive Science Laboratory, The Catholic University of America. Rasmussen, J. (1987). Mental models and the control of actions in complex environments. Risø National Laboratory. https://backend.orbit.dtu.dk/ws/portalfiles/portal/137296640/ RM2656.PDF. Singh, I. L., Molloy, R., & Parasuraman, R. (1991). Automation-induced “complacency”: Development of a complacency-potential scale (Technical Report No. CSL-A-91-1). Cognitive Science Laboratory, The Catholic University of America. Singh, I. L., Molloy, R., & Parasuraman, R. (1993). Automation-induced “complacency”: Development of the complacency-potential rating scale. The International Journal of Aviation Psychology, 3(2), 111–122. https://doi.org/10.1207/s15327108ijap0302_2 Sullenberger, C. (2019, June 19). Statement of Chesley B. “Sully” Sullenberger III Subcommittee on aviation of the United States House Committee on Transportation and Infrastructure. https://transportation.house.gov/imo/media/doc/Sully%20Sullenberger%20Testimony.pdf Weltman, G., Smith, J. E., & Egstrom, G. H. (1971). Perceptual narrowing during simulated pressure-chamber exposure. Human Factors, 13(2), 99–107. https://doi. org/10.1177/001872087101300202 Wright, P. (1974). The harassed decision maker: Time pressures, distractions, and the use of evidence. Journal of Applied Psychology, 59(5), 555. https://doi.org/10.1037/h0037186 Zsambok, C. E. (1997). Naturalistic decision making: Where are we now? In C. E. Zsambok & G. Klein (Eds.), Naturalistic decision making (pp. 3–16). Lawrence Erlbaum Associates, Inc.
Part V
The Wrench in Newton’s Machine: Applying Models from Cognitive Science, Human Factors and Systems Engineering to Decision Making Within Complex Adaptive Systems
Chapter 17
The Human Factor in Complex Adaptive Systems
Learning from Disaster In Captain Sullenberger’s 2019 testimony regarding the crashes of Lion Air 610 and Ethiopian 302 (discussed in Chap. 16), he stated, Accidents are the end result of a causal chain of events, but in this case, the chain began by decisions that had been made years before to update a half century old design….not communicated to pilots….From my 52 years of flying experience and my many decades of safety work, I know that we must consider all the human factors of these accidents and how system design determines how many and what kinds of errors will be made and how consequential they will be. (Subcommittee on Aviation of the United States House, 2019, pp. 2–3)
The complex and multivariate nature of human performance and its optimization is reflected in the expansion of human factors into an increasing number of domains since the formation, in 1957, of the Human Factors and Ergonomics Society, “the world’s largest scientific association for human factors/ergonomics professionals” (Human Factors and Ergonomics Society, n.d., para. 1). According to Hawkins (1987), “since the pioneering days of flying, optimising the role of man and integrating him in this complex working environment has come to involve more than simply physiology” (p. 21). In short, because of its highly complex, highly interactive, highly multivariate, and highly multidisciplinary nature, the discipline of human factors engineering is at least one that is well-positioned to study the nature of complex adaptive systems, particularly human behavior within them.
The SHEL(L) Model One of the earliest conceptual models of Human Factors was that of Edwards (1985), known as the SHEL model. SHEL is an acronym for the important components of the system, which include software, hardware, environment, and, central to
and interacting with them all, liveware, the human factor. The model was refined and published by Hawkins in 1987, with the additional emphasis of the liveware-to-liveware interaction, or SHEL(L). It emphasizes both the interacting nature of the components and their lack of clear defining boundaries, as the interactions between the four components are often depicted graphically with wavy or ill-defined borders. No one area can be considered in isolation; change one, and you impact the entire system, sometimes in less than easily predictable ways. This is a key concept in complex adaptive systems. It is the interactions among the components of SHEL(L) that are of particular interest in any systems engineering endeavor, particularly human factors engineering. According to the early and prominent human factors researcher Edwards (1988), "any change within a SHEL system may have far-reaching repercussions. Unless all such potential effects of change are properly pursued, it is possible for a comparatively small modification in a system component to lead to highly undesirable and dangerous consequences [and] continuous review of a dynamic system is necessary to adjust for changes beyond the control of the system designers and managers" (p. 17). This concept of relatively small changes in a system producing large effects is a key component in the definition of complex adaptive systems.
Notably, the center component of the model is the Liveware, and it must be reiterated that the boundaries between the modules within the model are deliberately "fuzzy" in order to indicate the complex and adaptive nature of the system as a whole. The Liveware—essentially, the human factor—involves all components of the human system, including physical characteristics (emphasized by the discipline of ergonomic design), fuel requirements (e.g., air, water, and food), output characteristics (again, variables such as the amount of force needed for a control are often covered in ergonomic design), environmental tolerances (e.g., heat, cold, etc.), and finally, what is emphasized in this book, information processing capabilities and limitations.
The first interaction under discussion in the SHEL model is that of Liveware with Hardware. According to Hawkins (1987), this "is the one most commonly considered when speaking of man-machine systems" (p. 24). Hardware involves all aspects of the physical environment (e.g., controls, equipment, seating, lighting, etc.). A second interface is that of Liveware and Software. Traditionally, software did not necessarily concern computer software, but rather things such as procedures and documentation; it has, of course, expanded to include computer software systems as well. A third interface is between the human and the environment. The environment can be anything from the extremes of space to workplace exposure limits as outlined by the Occupational Safety and Health Administration (OSHA) to problems associated with circadian dysrhythmia given a particular work schedule. A fourth interface is, of course, liveware-to-liveware, which focuses predominantly on teamwork and crew behavior. This is particularly relevant to the study of leadership, as well as organizational and team performance.
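A schematic rendering of the model's components and liveware-centered interfaces, sketched below in Python purely for illustration, shows why a change to any single component touches the human factor: every interface in the model runs through the central liveware. The example interface descriptions are paraphrases invented for this sketch, not quotations from Edwards or Hawkins.

```python
# Illustrative sketch of the SHEL(L) components and the liveware-centered interfaces
# described above. The interface descriptions are hypothetical paraphrases.

from enum import Enum

class Component(Enum):
    SOFTWARE = "software"        # procedures, documentation, computer software
    HARDWARE = "hardware"        # controls, equipment, seating, lighting
    ENVIRONMENT = "environment"  # heat, cold, exposure limits, work schedules
    LIVEWARE = "liveware"        # the human factor

# Every interface in the model includes the central liveware component.
INTERFACES = {
    (Component.LIVEWARE, Component.HARDWARE): "control/display design, ergonomics",
    (Component.LIVEWARE, Component.SOFTWARE): "procedures, checklists, software logic",
    (Component.LIVEWARE, Component.ENVIRONMENT): "exposure limits, circadian dysrhythmia",
    (Component.LIVEWARE, Component.LIVEWARE): "teamwork, crew coordination, leadership",
}

def affected_interfaces(changed: Component) -> dict:
    """A change to any one component touches every interface it participates in;
    because all interfaces include liveware, the human factor is always implicated."""
    return {pair: desc for pair, desc in INTERFACES.items() if changed in pair}

if __name__ == "__main__":
    for pair, desc in affected_interfaces(Component.SOFTWARE).items():
        print([c.value for c in pair], "->", desc)
```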
Hawkins (1987) rightly highlights the fact that, An important characteristic of this central component {liveware/human}…is that people are different…While it is possible to design and produce hardware to a precise specification and expect consistency in its performance, this is not the case with the human component in the system, where some variability around the normal, standard product must be anticipated. Some of the effects of these differences, once they have been properly identified, can be controlled in practice through selection, training and the application of standardized procedures. Others may be beyond practical control and our overall system must then be designed to accommodate them safely. (p. 23)
It bears repeating that the dynamic, complex, and adaptive nature of the entire system is illustrated in the uneven connecting lines of the components of the SHEL(L) model, thus representing the ever-changing relationships. This systems approach represented a fundamental change in scientific thinking, methodologies, and applications in the study of human performance.
Human Factors Analysis and Classification System (HFACS) Wiegmann and Shappell (2017/2003) built upon the classic work of Rasmussen (1982) and Reason (1995/1990) on classification of errors, particularly in applying the concept of addressing latent errors before they become active. The Human Factors Analysis and Classification System (HFACS) is a hierarchical framework for classifying errors within complex organizations. According to its developers, The original framework (called the Taxonomy of Unsafe Operations) was developed using over 300 Naval aviation accidents obtained from the U.S. Naval Safety Center (Shappell & Wiegmann, 2009/1997). The original taxonomy has since been refined using input and data from other military (U.S. Army Safety Center and the U.S. Air Force Safety Center) and civilian organizations (National Transportation Safety Board and the Federal Aviation Administration). The result was the development of the Human Factors Analysis and Classification System (HFACS). (Shappell & Wiegmann, 2000, p. 3)
While developed for analyzing human error within aviation systems, HFACS and other systems engineering principles focusing on latent errors in complex adaptive systems have been successfully applied in multiple and diverse domains. These include school safety (Carmody-Bubb, 2021), biopharmaceutical manufacturing (Cintron, 2015), medical practice (Diller et al., 2014; Braithwaite et al., 2018; Neuhaus et al., 2018), military training (Hastings, 2019), innovation (Zhang et al., 2019), and cybersecurity (Kraemer & Carayon, 2007; Kraemer et al., 2009; Mitnick & Simon, 2003; Lier, 2013, 2015, 2018; Barnier & Kale, 2020). HFACS provides "a systematic tool for analyzing latent and active errors to facilitate awareness training, policy development, and prevention of future disasters. The methodology seeks to discover the origins of active errors by methodologically focusing the origins of latent errors within four organizational levels" (Carmody-Bubb, 2021, p. 5). Figure 17.1 displays the HFACS framework.
Fig. 17.1 Human factors analysis and classification system (HFACS). Note: Shappell and Wiegmann (2000). HFACS framework. Retrieved from https://www.skybrary.aero/articles/human-factors-analysis-and-classification-system-hfacs (CC BY-SA 4.0)
Unsafe Acts of Operators

The lowest level addresses the latent errors that are most directly linked to the final, active error. Often, this is the error that draws the most attention, and unfortunately, further analysis often does not happen. However, applying Reason's (1990) analogy, Shappell and Wiegmann (2000) argue,

What makes the "Swiss cheese" model particularly useful in accident investigation, is that it forces investigators to address latent failures within the causal sequence of events as well. As their name suggests, latent failures, unlike their active counterparts, may lie dormant or undetected for hours, days, weeks, or even longer, until one day they adversely affect the unsuspecting aircrew. (p. 2)
Unsafe Acts are divided into two categories: Errors and Violations. Errors are unintentional and are further divided into errors based on faulty decision making, lack or misapplication of skill, or misperceptions of available information. Misperceptions occur when perception does not match reality. It is common in
aviation, as it typically occurs when perceptual input is degraded, such as in visual illusions or spatial disorientation. Shappell and Wiegmann point out that such perceptual distortions are not, themselves, errors but the by-product of certain flight maneuvers and conditions. The error is in the response, either inappropriate or lacking. Pilots are trained to recognize, and correctly respond to, perceptual illusions, predominantly by trusting their instruments. While not as common outside of aviation, errors in perception can still occur; for example, a doctor or nurse may misread an instrument. It is important to note at this point that there can be, and often is, overlap among the classifications, as once again, cause-effect in complex adaptive systems is not a direct one-to-one relationship. A high-profile example of an aviation accident that combined elements of both perceptual and skill-based error was the crash that killed John F. Kennedy, Jr. and his two passengers in the summer of 1999. The combination of lack of training (skill) on instrument flight rules (how to trust your instruments under reduced visual conditions) and the right atmospheric conditions to induce perceptual illusions led to spatial disorientation as the most likely cause of the accident.

Misapplication of Skill is common within a wide range of domains. It can be due, simply, to lack of experience or training, but it can also occur among experts who do not apply their skills properly or with good technique. In the latter case, it is often due to complacency, or inattention derived from among the many preconditions for unsafe acts, discussed in more detail below.

Crash of Eastern Flight 401 into Florida Everglades

As an example of a skill-based error among an experienced flight crew, Shappell and Wiegmann (2000) refer to the crash of Eastern Airlines Flight 401 into the Florida Everglades in 1972. The crew became preoccupied with determining whether an unsafe landing gear light was accurate (the bulb was, in fact, burned out) and failed to notice that the autopilot had become disengaged and the aircraft was descending. What followed is known in aviation as controlled flight into terrain (CFIT), in which a perfectly functioning aircraft is allowed (typically in error) to crash into the ground. For an example with which most people could relate, they add,

Consider the hapless soul who locks himself out of the car or misses his exit because he was either distracted, in a hurry, or daydreaming. These are both examples of attention failures that commonly occur during highly automatized behavior. Unfortunately, while at home or driving around town these attention/memory failures may be frustrating, in the air they can become catastrophic. (Shappell & Wiegmann, 2000, p. 4)
Likewise, they can be catastrophic in the operating room, in preventing physical or electronic security breaches, or in a myriad of other real-world incidents. Finally, within the classification of errors, there is the sub-classification of decision errors.

Perhaps the most heavily investigated of all error forms, decision errors can be grouped into three general categories: procedural errors, poor choices, and problem-solving errors. Procedural decision errors (Orasanu, 1993), or rule-based mistakes, as described by Rasmussen (1982), occur during highly structured tasks of the sorts, if X, then do Y…There
are very explicit procedures to be performed at virtually all phases of {the task}. Still, errors can, and often do, occur when a situation is either not recognized or misdiagnosed, and the wrong procedure is applied. (Shappell & Wiegmann, 2000, p. 5)
Decision errors, in aviation or other domains, such as emergency medicine, are more likely to occur when time is critical and the available data are somewhat ambiguous. The second sub-classification of Unsafe Acts of Operators includes violations. These involve "willful disregard for rules and procedures" and "are categorized as routine (meaning they regularly occur within the organizational system) and exceptional (out of the norm)" (Carmody-Bubb, 2021, p. 6). These can be particularly enlightening harbingers of problems in the organizational culture because they are often indicative of a failure to prioritize safety and/or error prevention. Moreover, they typically reflect an organizational culture not geared toward learning and improvement.

The period between mounting precursors and a larger disaster is often known as the Turner Disaster Incubation Period (Turner, 1976). It has precedent in associations illustrated by previous researchers, including in the metaphors of the domino effect and accident pyramid (Heinrich, 1941/1959), which illustrate how incidents build one upon the other, and how the final grand system failure that is disastrous is only the "tip of the iceberg", masking larger problems and an overall systemic failure beneath the surface. (Carmody-Bubb, 2021, p. 4)
Preconditions for Unsafe Acts

It is under Level II of the HFACS framework that the systematic examination of underlying, latent errors really begins. The preconditions are further "divided into substandard conditions of operators (including the categories of adverse mental states, adverse physiological states and physical or mental limitations) and substandard practices of operators (including the categories of Crew Resource Management and personal readiness)" (Carmody-Bubb, 2021, p. 4). Within the context of aviation, this is where policies such as closed-loop communication and crew rest are applicable, since lack of coordination and poor communication are often antecedents of unsafe acts. Closed-loop communication is illustrated when a copilot or student pilot takes over control of an airplane: the exchange between the aircrew should follow as, "You have the controls – I have the controls – You have the controls," to ensure mutual understanding of who, in fact, has control of the airplane. Likewise, factors such as fatigue or emotional distractions can predispose an operator or operators to error. As an aviation psychologist in the Navy, I would often brief crews to "preflight yourself like you preflight your aircraft." Aircrew routinely go through pre-flight inspections to ensure their aircraft is in good working order. The same applies to decision makers. You want to be operating to the full extent of your faculties. In military aviation, this is known as personal readiness.
Unsafe Supervision

Level III of the HFACS classification system begins to delve into the leadership and underlying culture of an organization. According to Shappell and Wiegmann (2000), "Reason (1990) traced the causal chain of events back up the supervisory chain of command" (p. 9). HFACS classifies four types of unsafe supervision: inadequate supervision, planned inappropriate operations, failure to correct a known problem, and supervisory violations. "The supervisor, no matter at what level of operation, must provide guidance, training opportunities, leadership, and motivation, as well as the proper role model to be emulated" (Shappell & Wiegmann, 2000, p. 9). Hence, it is the failure to do this that constitutes inadequate supervision. Planned inappropriate operations can perhaps best be described as failures of risk assessment. The last two sub-categories within the level of Unsafe Supervision, failure to correct a known problem and supervisory violations, may be due to anything from a passive leadership style to ethical violations.
Organizational Influences

At the highest level of HFACS, Organizational Influences include resource management, organizational climate, and organizational processes. Shappell and Wiegmann (2000) indicate that, "generally speaking, the most elusive of latent failures revolve around issues related to resource management, organizational climate, and operational processes" (p. 11). Of particular interest to leadership and organizational decision making is the sub-category of organizational climate, which encompasses "chain-of-command, delegation of authority and responsibility, communication channels, and formal accountability for action," as well as the policies and culture of an organization. Whereas policies typically involve formalized regulations regarding everything from sick leave to pay, culture "refers to the unofficial or unspoken rules, values, attitudes, beliefs, and customs of an organization. Culture is 'the way things really get done around here'" (p. 13).
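As a rough illustration only (HFACS itself is the taxonomy described above and depicted in Fig. 17.1, not a piece of software), the four levels and their sub-categories can be sketched as a simple nested mapping in Python, which is one way an organization might keep incident tagging consistent. The category names follow Shappell and Wiegmann (2000) as summarized above; the function and its name are hypothetical.

# Illustrative sketch only: the HFACS levels and sub-categories as data.
HFACS_LEVELS = {
    "Organizational Influences": [
        "Resource Management", "Organizational Climate", "Organizational Processes",
    ],
    "Unsafe Supervision": [
        "Inadequate Supervision", "Planned Inappropriate Operations",
        "Failure to Correct a Known Problem", "Supervisory Violations",
    ],
    "Preconditions for Unsafe Acts": [
        "Adverse Mental States", "Adverse Physiological States",
        "Physical/Mental Limitations", "Crew Resource Management", "Personal Readiness",
    ],
    "Unsafe Acts of Operators": [
        "Decision Errors", "Skill-Based Errors", "Perceptual Errors",
        "Routine Violations", "Exceptional Violations",
    ],
}

def is_valid_tag(level: str, category: str) -> bool:
    """Check that a proposed incident tag names a recognized level/category pair."""
    return category in HFACS_LEVELS.get(level, [])

# Example: tagging the Eastern 401 attention failure described above.
print(is_valid_tag("Unsafe Acts of Operators", "Skill-Based Errors"))  # True
print(is_valid_tag("Unsafe Supervision", "Adverse Mental States"))     # False

Laying the taxonomy out this way also makes the analytic discipline visible: every active error tagged at the lowest level invites a further question at each of the three levels above it.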
Chapter Summary

The application of systematic tools like the SHEL(L) model and HFACS depends on an organizational culture that supports a cyclical, learning-based approach, rather than one that simply reacts to overt or active errors. While a focus on latent errors is important for analysis, it should lead to deeper analysis of the interacting elements at all levels of the organization. Hayashi (2001) contends this ability to understand these interconnections and "detect patterns that other people either overlook or
mistake for random noise” is the same process that is behind the “instinctive genius that enables a CEO to craft the perfect strategy” (p. 8).
References

Barnier, B., & Kale, P. (2020). Cybersecurity: The endgame – Part one. The EDP Audit, Control and Security Newsletter, 62(3), 1–18. https://doi.org/10.1080/07366981.2020.1752466
Braithwaite, J., Churruca, K., Long, J. C., Ellis, L. A., & Herkes, J. (2018). When complexity science meets implementation science: A theoretical and empirical analysis of systems change. BMC Medicine, 16(1). https://doi.org/10.1186/s12916-018-1057-z
Carmody-Bubb, M. A. (2021). A systematic approach to improve school safety. Journal of Behavioral Studies in Business, 13, 1–14. https://www.aabri.com/manuscripts/213348.pdf
Cintron, R. (2015). Human factors analysis and classification system interrater reliability for biopharmaceutical manufacturing investigations. Doctoral dissertation, Walden University.
Diller, T., Helmrich, G., Dunning, S., Cox, S., Buchanan, A., & Shappell, S. (2014). The human factors analysis classification system (HFACS) applied to health care. American Journal of Medical Quality, 29(3), 181–190. https://doi.org/10.1177/1062860613491623
Edwards, E. (1985). Human factors in aviation. Aerospace, 12(7), 20–22.
Edwards, E. (1988). Introductory overview. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 157–185). Academic.
Hastings, A. P. (2019). Coping with complexity: Analyzing unified land operations through the lens of complex adaptive systems theory. School of Advanced Military Studies, US Army Command and General Staff College. https://apps.dtic.mil/sti/pdfs/AD1083415.pdf
Hawkins, F. H. (1987). Human factors in flight (2nd ed.; H. W. Orlady, Ed.). Routledge. https://doi.org/10.4324/9781351218580
Hayashi, A. M. (2001, February). When to trust your gut. Harvard Business Review. https://hbr.org/2001/02/when-to-trust-your-gut
Heinrich, H. W. (1941/1959). Industrial accident prevention. McGraw-Hill.
Human Factors and Ergonomics Society. (n.d.). About HFES. Retrieved November 9, 2022, from https://www.hfes.org/about-hfes
Kraemer, S., & Carayon, P. (2007). Human errors and violations in computer and information security: The viewpoint of network administrators and security specialists. Applied Ergonomics, 38(2), 143–154. https://doi.org/10.1016/j.apergo.2006.03.010
Kraemer, S., Carayon, P., & Clem, J. (2009). Human and organizational factors in computer and information security: Pathways to vulnerabilities. Computers & Security, 28(7), 509–520. https://doi.org/10.1016/j.cose.2009.04.006
Lier, B. V. (2013). Luhmann meets Weick: Information interoperability and situational awareness. Emergence: Complexity & Organization, 15(1), 71–95. https://journal.emergentpublications.com/
Lier, B. V. (2015). The enigma of context within network-centric environments. Cyber-Physical Systems, 1(1), 46–64. https://doi.org/10.1080/23335777.2015.1036776
Lier, B. V. (2018). Cyber-physical systems of systems and complexity science: The whole is more than the sum of individual and autonomous cyber-physical systems. Cybernetics and Systems, 49(7–8), 538–565. https://doi.org/10.1080/01969722.2018.1552907
Mitnick, K. D., & Simon, W. L. (2003). The art of deception: Controlling the human element of security (1st ed.). Wiley.
Neuhaus, C., Huck, M., Hofmann, G., St Pierre, P. M., Weigand, M. A., & Lichtenstern, C. (2018). Applying the human factors analysis and classification system to critical incident reports in anaesthesiology. Acta Anaesthesiologica Scandinavica, 62(10), 1403–1411. https://doi.org/10.1111/aas.13213
Orasanu, J. (1993). Decision-making in the cockpit. In E. L. Weiner, B. G. Kanki, & R. L. Helmreich (Eds.), Cockpit resource management (pp. 137–168). Academic.
Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4, 311–333. https://doi.org/10.1016/0376-6349(82)90041-4
Reason, J. (1990). Human error. Cambridge University Press.
Shappell, S. A., & Wiegmann, D. A. (2009/1997). A human error approach to accident investigation: The taxonomy of unsafe operations. The International Journal of Aviation Psychology, 7(4), 67–81. https://doi.org/10.1207/s15327108ijap0704_2
Shappell, S. A., & Wiegmann, D. A. (2000). HFACS framework [Diagram]. Retrieved from https://www.skybrary.aero/articles/human-factors-analysis-and-classification-system-hfacs (CC BY-SA 4.0).
Shappell, S. A., & Wiegmann, D. A. (2000). The human factors analysis and classification system–HFACS (DOT/FAA/AM-00/7). Embry-Riddle Aeronautical University. https://commons.erau.edu/publication/737
Subcommittee on Aviation of the United States House Committee on Transportation and Infrastructure. (2019, June 19). Statement of Chesley B. "Sully" Sullenberger III. Transportation.house.gov. https://transportation.house.gov/imo/media/doc/Sully%20Sullenberger%20Testimony.pdf
Sullenberger, C. (2019, June 19). Statement of Chesley B. "Sully" Sullenberger III, Subcommittee on Aviation of the United States House Committee on Transportation and Infrastructure. https://transportation.house.gov/imo/media/doc/Sully%20Sullenberger%20Testimony.pdf
Turner, B. A. (1976). The organizational and interorganizational development of disasters. Administrative Science Quarterly, 21(3), 378–397.
Wiegmann, D. A., & Shappell, S. A. (2017/2003). A human error approach to aviation accident analysis. Routledge. https://doi.org/10.4324/9781315263878
Zhang, Y., Yi, J., & Zeng, J. (2019). Research on the evolution of responsible innovation behavior enterprises based on complexity science. International Conference on Strategic Management (ICSM 2019). Francis Academic Press, UK.
Chapter 18
Strategic Decision Making Through the Lens of Complex Adaptive Systems: The Cynefin Framework
It is September of the year 9 A.D. In a cold and muddy ravine deep within an ancient forest, events are about to transpire that will change the course of world history. Three legions of Roman soldiers, consisting of around 20,000 men, are advancing. It is a sight that has struck terror in the hearts of the many whom they have previously conquered across various lands. They are among the most highly trained and disciplined soldiers in the world—both feared and admired for their prowess in battle. Within days, they will all—save for a few very unlucky souls—be dead (Fig. 18.1).

This was the Battle of the Teutoburg Forest, which took place in what is today Germany. Publius Quinctilius Varus led his seasoned legionnaires to establish bases along the Rhine river, not suspecting the ambush that awaited them. Varus had put his trust in a young Roman officer, one who was familiar with the land of Germania and its peoples, because he was once one of them. In fact, the young Arminius was the son of a chief of the Cherusci tribe—originally a "barbarian" to the Romans, who had borrowed that word from the Greek bárbaros. Bárbaros was originally used to describe the languages which, to the Greeks, sounded like babbling. The Romans eventually adopted it to refer generally to non-Romans. But this particular barbarian had served 5 years in the Roman army, earning his citizenship and the high rank of equite, or cavalryman, a status similar to that of an English knight. He understood how the Romans fought. He knew their strengths—and their weaknesses.

The Romans were very good in battle, including in their use of the phalanx. Also learned from the Greeks, the phalanx was a technique in which lines of soldiers stood shoulder to shoulder and eventually closed ranks in order to use their shields to essentially form a human tank. It was highly effective—under certain circumstances and conditions. The technique worked well on the classic battlefield of open terrain and fields. It was useless in a scenario where the soldiers would be unable to maintain their formations, and Arminius knew this. Within the constraints of the cold, the wind, and the muddy hills of the imposing forest in Germania, three legions of nearly 20,000 men were funneled into a single-file column that is estimated to have been 7 miles long. Arminius' men were waiting. Lined up along the hills above the
Fig. 18.1 Battle of the Teutoburg Forest Note. Wilhelm, F. (March 20, 1813). Battle of Hermann in the Teutoburg Forest. [Artwork]. Retrieved from https://commons.wikimedia.org/wiki/File:Hermannsschlacht_(1813).jpg. Public domain
Roman soldiers, they descended upon them in a guerilla-style ambush (Hudson, 2019; Wells, 2003). All their training, all their experience, all their might would amount to nothing. It was the wrong tactic in a misguided strategy. They were the proverbial sitting ducks.
What Is Strategic Decision Making?

Strategic decisions differ from routine ones in that the problem is often ill-defined, the stakes are often high, and yet there is often a substantial degree of ambiguity and uncertainty. In addition, they typically involve "a substantial commitment of financial, physical, and/or human resources" (Roberto, 2022, p. 4). These days, most strategic decisions take place in the boardroom rather than on the battlefield. However, the same mistakes are often made as those of Varus over two thousand years ago: misreading the subtle cues of the situation and applying old routines in more complex, and often crisis, situations. Many readers have likely experienced a situation where a new leader comes into an organization and begins applying strategies that may have worked in other situations in the past, but may not necessarily be a fit in the current organization—or a particular situation—for a variety of reasons. Some leaders take the time to assess the dynamics of a decision before plunging headlong into application, but others fail to recognize that the situation, and therefore, the important cues, may have changed
significantly. As it turns out, it's really important to know whether your proverbial habitat is an open field or a narrow ravine, and even more important to adapt your strategy. Hastings (2019) frames the strategic application of complexity science well. The reader familiar with business strategy tools will likely recognize elements common to all organizational strategy. It is the interaction among these elements that qualifies many organizations for tools that assess or try to model complex adaptive systems. "Despite the inherent unpredictability of complex systems, the structure, properties, and mechanisms of the behavior of complex adaptive systems provide a framework with which to develop a greater understanding of these non-linear dynamics" (p. 8). A framework that was specifically developed for strategic decision making within complex adaptive systems is the Cynefin framework (Snowden & Boone, 2007).
The Cynefin Framework

Cynefin is a Welsh word meaning habitat, and this is a good indicator of the conceptual framework of the tool. It is not one that provides algorithms or decision strategies for particular problems; it provides a framework—or perhaps more accurately—a reframing. The Cynefin framework was specifically designed to address situational decision making within complex adaptive systems. It is a method of systematically classifying decision contexts so that leaders can recognize their job in guiding the decision process, the danger signals indicating a poor process that may lead to cognitive bias and decision errors, and how to respond to those dangers. Tools derived from complexity science, such as the Cynefin framework, "provide a lens through which we can better understand multi-causal dynamics within our contexts, issues, organizations, and communities" (Van Beurden et al., 2011, pp. 75–76).

The Cynefin framework was first outlined in its relation to complexity science in the 2007 article, A Leader's Framework for Decision Making, by Snowden and Boone. According to the authors, the Cynefin framework "allows executives to see things from new viewpoints, assimilate complex concepts, and address real-world problems and opportunities" (p. 70). It is particularly useful as a decision making tool for leaders "who understand that the world is often irrational and unpredictable" (p. 70). Snowden and Boone argue that

All too often, managers rely on common leadership approaches that work well in one set of circumstances but fall short in others. Why do these approaches fail even when logic dictates they should prevail? The answer lies in the fundamental assumption of organizational theory and practice: that a certain level of predictability and order exists in the world. This assumption…encourages simplifications that are useful in ordered circumstances. Circumstances change, however, and as they become more complex, the simplifications can fail. Good leadership is not a one-size fits all proposition. (pp. 69–70)
Fig. 18.2 Cynefin framework Note: Snowden (February 2, 2011). Cynefin framework with all five domains. [Drawing]. Retrieved from File:Cynefin framework Feb 2011.jpeg – Wikimedia Commons. CC-BY-3.0
The framework has been successfully applied to various domains, “from helping a pharmaceutical company develop a new product strategy to…the U.S. Defense Advanced Research Agency {applying it as} the framework to counter-terrorism” (p. 70). Figure 18.2 displays a graphic of the Cynefin framework. Each domain is described in terms of the characteristics of the decision making environment, the key challenges or tasks for the leader, and the pitfalls or biases to which the leader must take care not to fall prey.
Ordered Domains: Simple and Complicated

The first two domains are characterized as ordered, meaning there are very few unknowns and a higher degree of stability. The simple domain is characterized by stability and clear cause-effect relationships. The task of the leader is to assess the facts (sense), categorize them appropriately, and respond based on best practices. Hence, Snowden and Boone refer to it as the domain of best practice. Potential errors or pitfalls are related to incorrect categorizations, entrained thinking, or complacency. With the complicated domain, there are multiple correct solutions, and cause-effect relationships are present but not readily apparent to everyone. Snowden and Boone refer to this as the domain of experts. The tasks of the leader include assessing the facts (sensing), analyzing them, and responding to them. The difference from the simple domain lies in the latter two steps—analyzing and responding. Whereas cause-effect relationships and best practices are more obvious in the simple domain,
with the complicated domain, analysis involves more multivariate relationships, and there may be no one best practice. There may be multiple good practices, and creativity can modify these to the needs of a given situation. This may entail more time in decision making. Similar to the simple domain, pitfalls of the complicated domain can include entrained thinking and biases, but in addition, leaders and decision making groups may fall prey to "paralysis by analysis." To avoid these pitfalls, it becomes particularly important to include a diverse range of experiences, including both experts from various domains and non-experts, in the decision making body.
Unordered Domains: Complex and Chaotic

The first two domains, simple and complicated, are classified by Snowden and Boone (2007) as fact-based management, and they can still be effectively managed using classic, normative decision strategies. It is in the latter two domains, complex and chaotic, that naturalistic decision strategies have the potential to come to fruition, provided they are well managed. They are classified as pattern-based leadership, shifting the focus not only toward the extraction of information from the environment (from sensing measurable facts to probing for more subtle patterns of relationships) but also toward the role of the primary decision maker. Rather than managing facts, the goal becomes leading a team motivated toward maximizing situation assessment and subsequent strategic decision making. There is a need for problem solving, which Rasmussen (1987) defined as taking place "when the reaction of the environment to possible human actions is not known from prior experience, but must be deduced by means of a mental representation of the 'relational structure' of the environment" (p. 16).

In the complex domain, at least one solution exists, but it is not easily found. There is a need for creative and innovative approaches. As with any complex adaptive system, minor fluctuations in the system can induce major changes that, in turn, introduce unpredictability. The tasks of the leader are now to probe, then sense, and then to respond. So what is meant by probe? The leader must actively—and openly to the extent possible—take specific precautions to avoid confirmation bias. This includes seeking out relevant information and remaining alert to patterns that may begin to emerge. One might call this the domain of emergence, and indeed, best practices may not exist, or even good practices among which to choose. The leader must be willing to coordinate with team members to enhance situation assessment and must be patient in waiting for the emergence of patterns. Indeed, one of the pitfalls of this domain is reluctance to wait for the emergence of patterns, as well as the temptation for the leader to take control too soon, thereby decreasing the input of subordinates, which is so necessary to good situation assessment within the complex domain.
Chaotic Domains

Chaotic domains, thankfully, are relatively rare and typically short-lived. They involve constantly shifting cause-effect relationships, which makes discovering the right answer extremely difficult. There is often a great deal of time pressure. The primary distinguishing element of this domain is that the leader must first act just to get the situation under some degree of control, then sense the environment, and then respond. There is often a lot of flow back and forth between the chaotic and complex domains in strategic decision making, but most strategic decisions involve ebb and flow between the complex and complicated domains. The primary goal of the leader in the chaotic domain is to establish order and, as quickly as possible, move the situation into the complex domain, where the decision making team can begin to look for patterns and to seek possible resolutions of the problem, often through trial and error and/or mental simulation. Teamwork is critical.

Balancing Chaotic and Complex Domains: Flight 232

The case of United Flight 232, discussed in Chap. 14, was an example of a chaotic decision making domain. It was characterized by many unknown unknowns and no readily available solutions. According to Snowden and Boone (2007), "the most frequent collapses into chaos occur because success has bred complacency. This shift can bring about catastrophic failure" (p. 71). Such is often the case, or at least one "domino" in the chain of events leading to disaster of any kind. In the case of Flight 232, the failure was not on the part of the aircrew but more on the part of maintenance. However, as we saw with the work of Reason (1990) and its application to the HFACS model in Chap. 17, here, too, there is very rarely one component of failure, rarely one domino in the chain. With respect to the decision making situation that resulted from the catastrophic engine failure, however, teamwork was, indeed, critical, and the leader did not fall prey to the potential pitfalls of the chaotic domain, which can include inflated self-image (i.e., the leader attempts to handle the problem alone) and missed opportunities for innovation. This case study provided a good example of teamwork in the form of Crew Resource Management.
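Before turning to the final domain, the four primary Cynefin domains described above can be summarized in a short, illustrative Python sketch that pairs each domain with its leader action sequence and a representative pitfall. The disorder domain, discussed next, is omitted here because it reflects a breakdown of leadership rather than a distinct decision context. The mapping mirrors Snowden and Boone's (2007) prose as presented above; the code itself, including the function name, is only a sketch.

# Illustrative sketch only: Cynefin domains, leader action sequences, and pitfalls.
CYNEFIN_DOMAINS = {
    "simple":      {"sequence": ("sense", "categorize", "respond"),
                    "pitfall": "complacency and entrained thinking"},
    "complicated": {"sequence": ("sense", "analyze", "respond"),
                    "pitfall": "paralysis by analysis"},
    "complex":     {"sequence": ("probe", "sense", "respond"),
                    "pitfall": "acting before patterns have emerged"},
    "chaotic":     {"sequence": ("act", "sense", "respond"),
                    "pitfall": "the leader attempting to handle the problem alone"},
}

def leader_guidance(domain: str) -> str:
    """Summarize the action sequence and the main pitfall for a given domain."""
    info = CYNEFIN_DOMAINS[domain]
    return f"{domain}: {' -> '.join(info['sequence'])} (watch for {info['pitfall']})"

for name in CYNEFIN_DOMAINS:
    print(leader_guidance(name))

The point of such a table is not automation; it is simply that the first, and hardest, step is diagnosing which domain one is actually in.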
Disorder

The final domain is that of disorder. Interestingly, this domain is characterized more by the lack of leadership and teamwork than by the nature of the decision making situation. "Multiple perspectives jostle for prominence, factional leaders argue with one another, and cacophony rules" (Snowden & Boone, 2007, p. 72). Theoretically, one could argue that any decision domain, but particularly complex and chaotic, would be subject to disorder. Had there not been effective communication and coordination among the aircrew, cabin crew, ground crew, air traffic control, first responders,
and national guard, the situation of Flight 232 could have easily descended into disorder.

Effective Distributed Decision Making in the Case of United Flight 232

United Flight 232, as is the case with many crises and/or strategic situations, was an example of distributed decision making. As indicated in Chap. 14, distributed decision making occurs when "people with different roles and responsibilities often must work together to make coordinated decisions while geographically distributed" (Bearman et al., 2010, p. 173). As with any team decision making, such situations require high levels of communication in order to make effective decisions. The difference from, say, an emergency room team is that the key roles and players are not co-located, which complicates the development of a shared mental model, making communication even more important. Bearman et al. (2010) reiterate the availability of empirical evidence supporting the importance of mental models in complex decision making, but their focus is on the role of shared mental models in distributed decision making. They argue, and cite research to support, that both the degree to which mental models are shared (i.e., extent of overlap) and the degree of accuracy (i.e., conformance to reality) impact the quality of distributed decision making. Moreover, they once again emphasize the necessity of constant updates of mental models—shared or otherwise. "In dynamic situations, it is important to regularly update the shared mental model held by the team so that developments in the situation facing the team do not lead to inconsistencies in the shared team mental model" (p. 175). While there will be aspects of mental models that are specific to the perspective and training of a given role (the respiratory therapist versus the registered nurse responding to an emergency, for example), "shared mental models {of the situation as a whole} enable team members to anticipate other team member actions and information requirements (Converse et al., 1991)" (p. 175). Shared mental models are also necessary for the related concept of team situational awareness, in which team members—distributed or otherwise—have "a shared understanding of overarching goals and awareness of the current and future state of the system" (Bearman et al., 2010, p. 175).

Bearman et al. (2010) examined communications between pilots and air traffic controllers in order to determine the causes of both disconnects (individual disagreements in information or communications) and breakdowns (system-level failures of distributed decision making that lead to at least a temporary loss of the ability to function as a team). Not surprisingly, they identified many more instances of disconnects than breakdowns, consistent with accident chains. While there were varied causes of the disconnects, a major theme uncovered was that one disconnect tended to lead to more, often exponentially. This was exacerbated by the "usual suspects" of ambiguous data in a rapidly changing environment. Bearman et al. outlined several different types of resolution, but all involved some degree of effective communication in order to resolve ambiguities in the shared model.
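One illustrative way to make the two properties Bearman et al. (2010) emphasize concrete is to treat each member's mental model as a set of believed facts and compare the sets, as sketched below in Python. This is not Bearman et al.'s method; the metrics, the example beliefs, and the function names are assumptions for illustration only.

def overlap(model_a: set, model_b: set) -> float:
    """Jaccard-style overlap between two mental models (0 = disjoint, 1 = identical)."""
    union = model_a | model_b
    return len(model_a & model_b) / len(union) if union else 1.0

def accuracy(model: set, reality: set) -> float:
    """Fraction of one member's beliefs that conform to the actual situation."""
    return len(model & reality) / len(model) if model else 0.0

# Hypothetical beliefs held by two distributed team members during an emergency.
pilot = {"engine two out", "hydraulics degraded", "diverting to Sioux City"}
controller = {"engine two out", "diverting to Sioux City", "fuel dump in progress"}
reality = {"engine two out", "hydraulics degraded", "diverting to Sioux City"}

print(round(overlap(pilot, controller), 2))     # degree of shared model: 0.5
print(round(accuracy(controller, reality), 2))  # conformance to reality: 0.67

Even in this toy form, the sketch shows why constant updating matters: every new development changes the reality set, and an un-refreshed model loses both overlap and accuracy.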
Houston, We Have a Problem

On June 30, 1995, actor Tom Hanks delivered a line that would become synonymous with calm in the face of disaster: "Houston, we have a problem" (Howard et al., 1995). His dramatic paraphrase of astronaut Jim Lovell introduced the world to the behind-the-scenes drama of one of the most successful instances of teamwork amidst challenging odds. Additionally, distributed decision making was even more of an issue than in the case of United Flight 232, as there had to be coordination among engineers and astronauts—both in space and on Earth—as well as management of news media and families over a period of several days. The chief flight director for the NASA 1970 Apollo-13 mission was Eugene Francis "Gene" Kranz, who led a team in a chaotic situation that could have descended into disorder.

Bearman et al. (2010) analyzed the publicly available archived transcripts of the Apollo-13 mission for instances of disconnects and breakdowns. They determined, relative to other studies of similar transcripts they had conducted, that the distributed team behind the Apollo-13 mission demonstrated effective resolution of disconnects, with only 4 of the 204 identified disconnects remaining unresolved.

In addition to the benefits of distributed decision making, a complex adaptive approach holds that, particularly for complex and chaotic systems, it is important to remain open to "searching for non-obvious and indirect means to achieving goals" (Levy, 1994, as cited in Lissack, 1999, pp. 111–112). The Cynefin framework likely evolved from thoughts along the lines of Levy (1994, as cited in Lissack, 1999): "By understanding industries as complex systems, managers can improve decision making and search for innovative solutions" (p. 112). Such innovation is required not only in industry but when faced with novel crises of any nature.
Applying Cynefin to the Covid-19 Pandemic

Regarding the current pandemic, Foss (2020) has argued,

The COVID-19 disruption illustrates that strategy in general, and behavioral strategy more specifically, does not have strong frameworks for dealing with uncertainty that goes beyond standard treatments of risky decision making in various ways (fat-tailed distributions, ill-defined outcome space, diffuse priors, etc.). Existing thinking on ill-structured problems (Mintzberg et al., 1976) and sensemaking (Prahalad & Bettis, 1986; Weick, 1995) assume such conditions but do not offer much analytical detail when it comes to describing them. (p. 1325)
The Cynefin framework can be particularly useful in providing such analytical detail. Sturmberg and Martin (2020) have applied the Cynefin framework to the Covid-19 pandemic. They argue it is the most effective means to maximize decision making, as “the uncertainties created by COVID-19 require analysis that provides the deep understanding needed to formulate and implement necessary
interventions" (p. 1364). They specifically highlight the usefulness of Cynefin in characterizing the relationships between the known, where cause and effect (C&E) are perceivable and predictable; the knowable, where C&E are separated over time and space; the complex, where C&E are only perceivable in retrospect and do not repeat; and the chaotic, where no C&E relationships are perceivable. Covid-19 has tipped the health and political systems into the chaotic domain. Not only do Sturmberg and Martin classify Covid-19 in the chaotic domain of the Cynefin framework, but also as a "typical wicked problem" (p. 1361), using the language of Rittel and Webber (1974) to indicate problems for which the typical scientific approach is ill-equipped. A wicked problem is full of unknown unknowns. Sturmberg and Martin both accurately describe and frame the Covid-19 pandemic:

We did not see it coming, we experience its effects, and it challenges our entrained ways of thinking and acting. In our view, it is a classic example that demonstrates how suddenly changing dynamics can destabilize a system and tip it into an unstable state. COVID-19—rather than something else—turned out to be what we colloquially call the last straw that broke the camel's back or, put in system dynamics terms, what pushed our societal systems over a tipping point. When a system suddenly tips over, the linkages between most of its agents break, and a chaotic situation ensues. Chaotic states entail a high degree of uncertainty, a state in which previously proven interventions no longer maintain the status quo. (p. 1361)
The Covid-19 pandemic truly is an excellent real-world example of the dynamics of complex systems interacting with other complex systems in a rapidly changing environment. Moreover, it emphasizes the need for decision- and policy makers to shift from simple cause-effect thinking to a greater understanding of complexity and how any one decision or policy cannot be viewed in a vacuum. Foss (2020) applies Weick’s (1995) concept of sensemaking to the decision demands surrounding the Covid-19 pandemic. “Sensemaking emerged as an attempt to argue that decisions take place against a backdrop of shared emergent meaning” (p. 1322). In particular, Foss argues, sensemaking applies to situations that cause widespread disruption, as the pandemic undoubtedly did on a number of fronts. “Research suggests that decision making under these conditions follows a groping, iterative approach as decision makers seem to literally make sense out of the situation, which certainly describes decision making throughout spring of 2020” (Foss, 2020, p. 1322). Indeed, the chaotic nature of the pandemic fostered an environment ripe for “biased decision making resulting from more or less automatic application of heuristics” (Foss, 2020, p. 1322). From the standpoint of the Cynefin framework, Foss seems to be describing a situation where much of the world reacted to the pandemic as a complicated decision domain, when it was in fact complex, and at times chaotic. Rather than relying on past best practices, as would be characteristic of the complicated domain, the situation really demanded a greater focus on probing for emerging patterns. The point here is not whether the strategies that were adopted were the right ones or not but rather that conditions were created under which certain strategies were quickly identified as “right” and other perspectives that would allow for the identification of other alternatives
did not enter high-level decision making and public discourse until long into the COVID-19 crisis. As a perhaps telling example, only Norway appears to have created an expert group with a representation of expertise other than medical and epidemiological expertise. (Foss, 2020, p. 1323)
Sturmberg and Martin (2020) warn policy makers that failures to recognize the complexity of the situation can lead not only to unintended consequences but to a breakdown of the total system; that system, in this case, being the healthcare system. As all parts of a CAS at every level of organization are connected to everything else, a change in any part of the system has reverberations across the system as a whole. Two processes control the dynamics of a CAS—top-down causation transfers information from higher system levels to lower ones, which constrains the work that lower system levels can do, that is, it limits the system's bottom-up emergent possibilities. If top-down constraints are too tight, they can bring the system to a standstill. System stability also depends on the law of requisite variety, meaning that a system must have a sufficient repertoire of responses (capacity for self-organization) that is at least as nuanced (and numerous) as the problems arising within its environment. If the possible ways of responding are fewer than what is demanded from the system, it will fail in its entirety. (p. 1362)
This quote emphasizes three key points in both analyzing the pandemic as a complex adaptive system and in applying the Cynefin framework to decision and policy making. First, the interaction between top-down and bottom-up processing must be considered. In a chaotic system, particularly one with many unknown unknowns, which is the case with this somewhat atypical virus, situation awareness is critical, as it will alert us to emergent—and adapting—patterns. It was bottom-up processing that allowed scientists to quickly identify the symptom of loss of smell as a distinctive marker for the early Covid strains. The coordination of scientific agencies around the globe, as well as social media and the internet, allowed observation-based, largely anecdotal data, gathered on a large scale, to reveal emerging patterns. This is bottom-up processing. There was no initial hypothesis driving loss of smell as a distinctive identifier of Covid. Based on individual observations, hypotheses could be formed and tested. This actually adheres to the classic scientific method, but it also emphasizes the cyclical nature of that method and the need to remain open-minded and to adopt a learning mindset. The same is true for the link between certain vaccines and increased risk for cardiomyopathy among otherwise healthy and predominantly young men.

Surowiecki's (2005) concept of the "wisdom of crowds" (discussed in Chap. 8) relates to the second point. Monitoring for observations and emergent patterns is critical. Leaders must be very careful not to allow top-down biases to influence either the process of gathering data or the process of strategic decision making and policy making.

Finally, the multivariate and interacting nature of the problem demonstrates the need for policy and decision makers to understand there is no "one-size-fits-all" solution. As with any complex and particularly any chaotic situation, the key is constant situation assessment and monitoring for emergent properties, and a willingness to learn and adapt.
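The "law of requisite variety" invoked in the Sturmberg and Martin quotation above can be reduced to a simple check: does the system have at least as many distinct ways of responding as there are distinct problems its environment can present? The Python sketch below is only a toy illustration of that condition; the response and disturbance categories are hypothetical.

def has_requisite_variety(responses: set, disturbances: set) -> bool:
    """True if the response repertoire is at least as numerous as the disturbances it must meet."""
    return len(responses) >= len(disturbances)

# Hypothetical categories for a health system facing a pandemic-like shock.
disturbances = {"new variant", "supply shortage", "staff absences", "surge in demand"}
responses = {"surge staffing plan", "triage protocol", "mutual-aid agreement"}

print(has_requisite_variety(responses, disturbances))  # False: repertoire too small

In the quoted terms, a False result is the warning that the system's capacity for self-organization is narrower than what its environment demands.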
Having to solve problems with high levels of unknowns often results in interventions that are ad hoc focused on what appears to be the most obvious without considering the wider consequences. Dörner's studies in the 1980s demonstrated how people of all walks of life handle unexpected contextual problems. Most of us succumb to the logic of failure; we over-respond and, when realizing the consequences, promptly react with an over-response in the opposite direction and so forth. Few among us use the approach of first closely analysing the problem, second responding by introducing small interventions, and finally taking time to observe what happens. In dynamic systems, the true effects of an action are only evident after a time delay. One has to observe and evaluate a CAS' feedback to guide responses; invariably infrequent small tweaks—rather than rapid and dramatic actions—achieve a stabilization of the situation and ultimately provide the necessary space for a (re)solution to emerge. (Sturmberg & Martin, 2020, p. 1361)
With respect to leadership actions and strategic decision making, Sturmberg and Martin (2020) remind us of the circumstances and suggest the needed leadership characteristics with which to navigate such a crisis: We did not have ready-to-go pandemic plans—who is in charge, what are proven population control measures, what advice to provide for protection to people and health professionals and public health services, and what are potentially effective treatments. Such problems are known as VUCA problems—they entail Volatility, Uncertainty, Complexity, and Ambiguity. Resolving such problems requires VUCA leadership—Vision, Understanding, Clarity, and Agility. (pp. 1361–1362)
Chapter Summary

Whether it is the eye movements of expert pilots during emergencies, chess players recognizing key patterns of movements, or firefighters reading cues to diagnose a particular type of fire, experts can often rapidly, intuitively, and sometimes without conscious awareness diagnose a problem. The key is in assessing the situation, including the decision making situation. Is it a "board room" decision or a "battlefield" decision? While time—how much time in which a decision can be made—may be one indicator, it is not the only factor. Raisio et al. (2020) argue that "a complexity-aware leader ought to be able to separate routine management issues from the more complex variants" (p. 5). Since many decision makers are biased toward thinking that normative models with weighted predictors are the best way to make decisions in every situation, the Cynefin framework is a potentially valuable tool to help decision makers objectively assess the true nature of a decision situation. Moreover, while Cynefin focuses on the leader's role in framing and guiding the decision process, Foss (2020) emphasizes that, "in general, strategy making does not take place in a vacuum but is a deeply social process" (p. 1325). He argues that behavioral strategies, which he defines as "merging cognitive and social psychology with strategic management theory and practice," are particularly useful in situations that are unprecedented and unanticipated and have widespread effects, such as we have seen with the Covid-19 pandemic. Such behavioral strategies "bring realistic assumptions about human cognition, emotions, and social behavior to the strategic
management of organizations" (Foss, 2020, p. 1322). Lissack (1999) poignantly adds that "leaders can be effective in guiding the decision process, not by changing behavior, but by giving others a sense of understanding about what they are doing. In this sense lie the potential strengths of complexity as a management tool" (pp. 122–123). Finally, Braithwaite et al. (2018) argue that in any organizational system, "for improvement to be realized the context must be re-etched or re-inscribed such that its culture, politics, or characteristics are altered" (p. 7).
References

Bearman, C., Paletz, S. B., Orasanu, J., & Thomas, M. J. (2010). The breakdown of coordinated decision making in distributed systems. Human Factors, 52(2), 173–188. https://doi.org/10.1177/0018720810372104
Braithwaite, J., Churruca, K., Long, J. C., Ellis, L. A., & Herkes, J. (2018). When complexity science meets implementation science: A theoretical and empirical analysis of systems change. BMC Medicine, 16(1). https://doi.org/10.1186/s12916-018-1057-z
Converse, S. A., Cannon-Bowers, J. A., & Salas, E. (1991). Team member shared mental models: A theory and some methodological issues. In Proceedings of the Human Factors Society 35th annual meeting (pp. 1417–1421). Human Factors and Ergonomics Society. https://doi.org/10.1177/154193129103501917
Foss, N. J. (2020). Behavioral strategy and the COVID-19 disruption. Journal of Management, 46(8), 1322–1329. https://doi.org/10.1177/0149206320945015
Hastings, A. P. (2019). Coping with complexity: Analyzing unified land operations through the lens of complex adaptive systems theory. School of Advanced Military Studies, US Army Command and General Staff College, Fort Leavenworth, KS. https://apps.dtic.mil/sti/pdfs/AD1083415.pdf
Howard, R. (Director), Grazer, B. (Producer), Broyles, W., Jr., & Reinert, A. (Screenplay). (1995). Apollo-13 [Film]. Universal Pictures.
Hudson, M. (2019, August 22). Battle of the Teutoburg Forest. Encyclopedia Britannica. https://www.britannica.com/event/Battle-of-the-Teutoburg-Forest
Levy, D. (1994). Chaos theory and strategy: Theory, application, and managerial implications. Strategic Management Journal, 15(S2), 167–178. https://doi.org/10.1002/smj.4250151011
Lissack, M. R. (1999). Complexity: The science, its vocabulary, and its relation to organizations. Emergence, 1(1), 110–112. https://doi.org/10.1207/s15327000em0101_7
Mintzberg, H., Raisinghani, D., & Theoret, A. (1976). The structure of "unstructured" decision processes. Administrative Science Quarterly, 21, 246–275.
Prahalad, C. K., & Bettis, R. A. (1986). The dominant logic: A new linkage between diversity and performance. Strategic Management Journal, 7(6), 485–501. https://doi.org/10.1002/smj.4250070602
Raisio, H., Puustinen, A., & Jäntti, J. (2020). "The security environment has always been complex!": The views of Finnish military officers on complexity. Defence Studies, 20(4), 390–411. https://doi.org/10.1080/14702436.2020.1807337
Rasmussen, J. (1987). Mental models and the control of actions in complex environments. Risø National Laboratory. https://backend.orbit.dtu.dk/ws/portalfiles/portal/137296640/RM2656.PDF
Reason, J. (1990). Human error. Cambridge University Press.
Rittel, H. W., & Webber, M. M. (1974). Wicked problems. Man-made Futures, 26(1), 272–280. https://doi.org/10.1080/14649350802041654
Roberto, M. A. (2022). Organizational behavior simulation: Judgment in crisis. Harvard Business Publishing. Retrieved December 20, 2022, from https://hbsp.harvard.edu/product/7077-htm-eng
Snowden, D. (2011, February 2). Cynefin framework with all five domains [Drawing]. Retrieved from File:Cynefin framework Feb 2011.jpeg, Wikimedia Commons (CC BY 3.0).
Snowden, D. J., & Boone, M. E. (2007). A leader's framework for decision making. Harvard Business Review, 85(11), 68. https://doi.org/10.1016/S0007-6813(99)80057-3
Sturmberg, J. P., & Martin, C. M. (2020). COVID-19–how a pandemic reveals that everything is connected to everything else. Journal of Evaluation in Clinical Practice, 26, 1361. https://doi.org/10.1111/jep.13419
Sun Tzu. (1988). The art of war (S. Griffith, Trans.). Oxford University Press. (Date of original work unknown; generally thought to be c. 400 B.C.)
Surowiecki, J. (2005). The wisdom of crowds. Anchor.
Van Beurden, E. K., Kia, A. M., Zask, A., Dietrich, U., & Rose, L. (2011). Making sense in a complex system: How the Cynefin framework from complex adaptive systems theory can inform health promotion practice. Health Promotion International, 28(1). https://doi.org/10.1093/heapro/dar089
Weick, K. E. (1995). Sensemaking in organizations (Vol. 3). Sage.
Wells, P. S. (2003). The battle that stopped Rome: Emperor Augustus, Arminius, and the slaughter of the legions in the Teutoburg Forest. WW Norton & Company.
Wickens, C. D., Gordon, S. E., Liu, Y., & Becker, S. G. (2004). An introduction to human factors engineering. Pearson Prentice Hall.
Wilhelm, F. (1813, March 20). Battle of Hermann in the Teutoburg Forest [Artwork]. Retrieved from https://commons.wikimedia.org/wiki/File:Hermannsschlacht_(1813).jpg. Public domain.
Part VI
Behavioral and Social Aspects of Team Decision Making in Complex Adaptive Systems
Chapter 19
Team Decision Making and Crew Resource Management
In the summer of 2010, the world was transfixed by the fate of 33 miners trapped 2300 feet deep within the bowels of Chile's Atacama desert, buried alive beneath 700 metric tons of some of the hardest rocks on the planet. Rescue experts estimated their chances of survival at less than 1% (Fig. 19.1). According to Rashid et al. (2013), the "rescue operation was an extraordinary effort, entailing leadership under enormous time pressure and involving teamwork by hundreds of people from different organizations, areas of expertise, and countries" (p. 113). So, despite the odds, the 33 trapped miners emerged—one by one. In accomplishing this mission, the team of rescuers was faced with a unique problem, one that required both innovation and effective implementation—and this meant coordinated teamwork would be needed. Useem et al. (2011) attribute much of the success of the operation to the leadership of Chile's Minister of Mining, Laurence Golborne. He had the confidence, based on past leadership experience, to assume responsibility for the rescue mission. Additionally, he assembled a good decision making team, and at the same time managed the media and family members. Finally, he focused on actions to bring the crisis to resolution. While Golborne's ability to lead during a crisis was undoubtedly instrumental, it is important to remember that multiple factors—and team members—are involved in such successes, just as they are in failures. Useem et al. (2011) also credit the crew foreman Luis Urzúa, who was among the 33 trapped miners. He "helped form them into a microsociety to ration food, preserve morale and prepare for rescue" (p. 50), motivating the men to stay alive and maintaining hope during their 69-day ordeal. Where Laurence Golborne was reminiscent of Apollo-13's Gene Kranz, the incident commander on the ground, Urzúa was akin to astronaut Jim Lovell—himself trapped and needing to lead, calm, and motivate his crew. Hope should never be underestimated in a crisis situation. "The management expert Jim Collins refers to the dual need for hope and pragmatism as the Stockdale Paradox, after the coping mechanism that U.S. Navy pilot James Stockdale used to lead his fellow captives in a North Vietnamese prisoner of war camp" (Rashid et al., 2013, p. 115), and indeed, both Golborne and Urzúa were beacons in an otherwise bleak cavity.
Fig. 19.1 Florencio Avalos: first trapped miner to exit Chilean mine. Note: Secretaria de Comunicaciones (Secretary of Communications) (2010). Retrieved from File:Florencio Avalos (5076856843).jpg, Wikimedia Commons (CC BY 2.0)
But another individual would soon join this team, one who provided both hope and practical innovation. André Sougarret, a mining engineer with over 20 years of experience, was "known for his composure under pressure" (Rashid et al., 2013, p. 115). This would be an important characteristic, because "at the accident site, Sougarret found chaos…He and his team cut through the confusion to establish situational awareness…assuming little and asking myriad questions…Maintaining situational awareness became a never-ending task" (pp. 115–116). By consulting with miners, geologists, and drilling experts, Sougarret was able to gain insight into the very complex problem. Rashid et al. (2013) stress that,

During times of uncertainty, leaders must enlist a diverse group of highly skilled people but ask them to leave behind preconceived notions and prepackaged solutions. Those specialists need to understand that no matter how experienced they might be, they have never before faced the challenge at hand. (p. 116)
To meet this challenge, the leadership team needed to master “three key tasks: envision, enroll, and engage” (Rashid et al., 2013, p. 114). With envision, the leader directs by rapidly and realistically assessing the situation and its possible developments. At the same time, the leader enables the followers through hope and motivation to continue to focus on the vision. With enroll, the leader directs by selecting the team and grounding the problem, while enabling the team through collaboration among diverse experts. Finally, with engage, the leader directs by leading with “disciplined, coordinated execution” and enables the team by “inviting innovation through experimentation and learning” (p. 117). This culture of learning and
willingness to be open to new ideas inspired Igor Proestakis, a field engineer with Drillers Supply, S.A., to present his novel idea to members of Sougarret’s team. His idea was to use a drill from an American company that he believed would be able to cut through the hard rock much faster. Proestakis said of Sougarret, “Despite my experience and age, he listened to me, asked questions, and gave me a chance” (p. 117). Proestakis’ drilling team was, in fact, the first to reach the trapped miners.
Crew Resource Management

In the opening story in the prologue of this book, Captain Sullenberger summed up the application of decades of research into human cognition and decision-making behavior best in the following quote:

What we've essentially done in aviation is observe attitudes and behaviors, the way people interact. We had to first create a team of experts and then create an expert team; those are different skills, but they're critical skills so that we're not acting as individuals, we're acting in concert. We're coordinating individual efforts toward common goals. (Capt. Chesley Sullenberger, from an interview by Elliott Carhart on applying CRM to EMS; Carhart, n.d., para. 9)
Crew Resource Management (CRM), originally known as Cockpit Resource Management, developed, in part, from relationships that social psychologists noticed between incidents and certain patterns of communication while reviewing "black box" recordings following aviation accidents. What is colloquially known as the "black box," though typically orange for visibility, actually consists of two boxes, the flight data recorder and the cockpit voice recorder. The latter, in particular, proved to be an especially useful source of data with respect to crew communication and coordination, or lack thereof. Over the decades, CRM programs expanded into multiple areas where teams must coordinate in high-stakes, dynamic decision making. The first area into which it expanded was the medical field (e.g., Sundar et al., 2007; McConaughey, 2008; Carbo et al., 2011), and it has since moved into fields ranging from automotive production (e.g., Marquardt et al., 2010) to maritime industries (e.g., Wahl & Kongsvik, 2018). Wahl and Kongsvik (2018) assert that while core CRM principles are applicable in many domains outside aviation, it is critical to tailor such programs to the given industries and tasks. Crew Resource Management (CRM) has been defined by Buljac-Samardzic et al. (2020) as a "principle-based" training program that has "a management concept at its core that aims to maximize all available resources (i.e., equipment, time, procedures, and people)" (p. 31). It focuses on developing "skills such as situational awareness, decision making, teamwork, leadership, coping with stress, and managing fatigue" (p. 31), with the aim of preventing and catching errors, as well as mitigating their consequences if they do occur. Wahl and Kongsvik (2018) outline five main skills that CRM intends to improve: "assertiveness, decision making, communication, situation awareness, and team coordination" (p. 377). While the latter four may have obvious links to team decision making, power distance and lack of
assertiveness of subordinate team members were among the primary motivators for studying ways to improve cockpit communication because, based in part on observations from cockpit voice recordings, they emerged as contributing factors in both accident investigations and simulations. In fact, in the Tenerife disaster, the deadliest air disaster in history (discussed in Chap. 12), power distance and poor crew coordination were identified as contributing factors alongside perceptual errors. The situation was exacerbated when fears of airport closings due to fog and re-routed aircraft created an atmosphere of "get-home-itis," in which the motivation to reach one's destination can impact risk management decisions. In their 2010 article on the human factors involved in the infamous disaster, Ericson and Mårtensson wrote:

In the KLM aircraft, the captain started the engines, eager to take off. The first officer said: "Wait – we have not yet got the clearance from ATC." The captain: "I know, go ahead and get it." Without waiting for the clearance, he accelerated the engines for take-off, but the Pan Am aircraft was still on the runway and the two aircraft collided…The Accident Investigation Board found that the accident was caused by the fact that the Dutch captain, who was said to be very authoritarian, did not listen to his first officer but started the engines on his own responsibility. (p. 245)
Captain Daniel Maurino wrote in his 1998 foreword to Helmreich and Merritt's (2017) Culture at Work in Aviation and Medicine that Aer Lingus Captain Neil Johnston's seminal contribution was in introducing Geert Hofstede's (1980) model to aviation human factors and presenting a compelling case for the "influence of culture on CRM in particular and on system performance in general" (p. xii). Maurino further describes Johnston's contributions regarding recognition of the confluence of three major practical tools that could be applied to "turn abstract thinking into tangible practice" (p. xii). "Johnston's insight closed a loop by providing the badly needed missing leg to the three-legged stool upon which broad, systematic-oriented aviation safety and efficiency endeavors rest" (pp. xvii–xviii). Maurino describes the first leg as Hawkins' (1987) SHEL(L) model, which was among the first to highlight the human factor in the total system. The second was Reason's (1990) accident causation model, which modeled total system performance from the standpoint of how things go wrong, and the third was Hofstede's (1980) cultural model. This last leg brought an important emphasis on social decision making to the equation, particularly with respect to the culture within teams. The researcher most associated with the study of Crew Resource Management is Robert L. Helmreich (1937–2012), who was a professor of psychology at the University of Texas at Austin. He was the principal investigator for the University of Texas Human Factors Research Project, which "studies individual and team performance, human error, and the influence of culture on behavior in aviation and medicine" (Plous, 2009, para. 1). According to Helmreich et al. (1999), the origins of crew resource management in the United States are typically traced to a 1979 workshop sponsored by the National Aeronautics and Space Administration (NASA), with the first comprehensive training program in the United States initiated in 1981 by United Airlines. "The United program was modeled closely on a form of managerial training called the
‘Managerial Grid’, developed by psychologists Robert Blake and Jane Mouton (1964)" (p. 20). The earliest training programs focused heavily on psychological and general leadership principles. However, by the time of another NASA workshop in 1986, there were several programs in the United States, and these programs began to focus more on group dynamics, including "such concepts as team building, briefing strategies, situation awareness, and stress management…{as well as}…decision making strategies and breaking the chain of errors that can result in catastrophe" (p. 21).
Assessing Effectiveness of CRM

In assessing the effectiveness of CRM training, Helmreich et al. (1999) indicate several challenges. The ultimate goal of CRM is to reduce aviation accidents, but many factors contribute to aviation system safety and performance, so general declines in accident rates alone cannot necessarily be attributed to the advent and spread of CRM training. They suggest "the two most accessible and logical criteria are behavior on the flight deck and attitudes showing acceptance or rejection of CRM concepts" (p. 23), but also add that such assessments made during formal training have limitations. Instead, they suggest the most useful data can be obtained under conditions in which the crews are not being rated in a way that might jeopardize their positions. Indeed, such data have indicated that recurrent CRM training produces the intended behavioral changes, a finding consistent with the evaluations of crew members completing the training, who generally rate it as effective. In fact, an early study conducted by Helmreich et al. (1990) outlined three measurement categories for assessing CRM. The first set of measures included objective and subjective outcome measures defined as

Group or organizational performance including incidents showing positive or negative crew performance, attitude measures reflecting crew acceptance of crew coordination concepts before and after training, subjective evaluations of the efficiency of training by active crew members, and indirect measures such as measures of organizational efficiency. (p. 2)
A second set of process measures examined incidents of crew behaviors during simulation of both normal and abnormal or emergency operations. Finally, a third measurement category examined the role of moderator factors. Moderator factors included additional variables that could directly or indirectly influence the first two categories of outcome and process measures. Such moderators consisted of multiple organizational factors, such as policies and resources; individual crew factors, such as experience and personality; situational factors, including stress; and training factors “such as the attitudes, personalities, and behavior of trainers and evaluators and course content and pedagogical method” (Helmreich et al., 1990, p. 2).
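To make the structure of such an evaluation framework easier to see, the following minimal Python sketch organizes the three measurement categories described above. It is purely illustrative: the class and field names are hypothetical assumptions, not an instrument from Helmreich et al. (1990).

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OutcomeMeasures:
    # Objective and subjective outcomes: incident reports, pre/post attitude
    # scores, trainee evaluations, and indirect organizational-efficiency measures.
    incident_reports: List[str] = field(default_factory=list)
    attitude_scores: Dict[str, float] = field(default_factory=dict)
    trainee_evaluations: List[str] = field(default_factory=list)
    organizational_efficiency: Dict[str, float] = field(default_factory=dict)

@dataclass
class ProcessMeasures:
    # Observed crew behaviors during simulated normal and abnormal/emergency operations.
    normal_ops_observations: List[str] = field(default_factory=list)
    emergency_ops_observations: List[str] = field(default_factory=list)

@dataclass
class ModeratorFactors:
    # Variables that can directly or indirectly influence outcome and process measures.
    organizational: Dict[str, str] = field(default_factory=dict)  # e.g., policies, resources
    crew: Dict[str, str] = field(default_factory=dict)            # e.g., experience, personality
    situational: Dict[str, str] = field(default_factory=dict)     # e.g., stress
    training: Dict[str, str] = field(default_factory=dict)        # e.g., trainer attitudes, course content

@dataclass
class CRMEvaluation:
    outcomes: OutcomeMeasures
    process: ProcessMeasures
    moderators: ModeratorFactors

Grouping the data this way simply mirrors the three categories; any real assessment instrument would, of course, be far more detailed and validated.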
Effectiveness of CRM Beyond Aviation

A 2014 meta-analysis of data collected across 20 published studies of the effectiveness of CRM in acute care settings assessed improvements in teamwork and coordination among medical teams (O'Dea et al., 2014). Results indicated large effects of CRM-type training on both the knowledge and behaviors of medical participants, and small effects on attitudes toward the training. While the researchers concluded that there was insufficient evidence that CRM-type training directly impacted clinical care outcomes, "the findings support the conclusions of previous systematic reviews that report that team training can improve the effectiveness of multidisciplinary teams in acute hospital care" (pp. 704–705). Because CRM initially developed based, in part, on findings that power distance in the cockpit contributed to accidents (Ginnett, 1993), it is worth noting that a 2011 study by Carbo et al. of internal medicine residents found a significant increase following training, from 65% to 94%, in the willingness of residents to express safety concerns to senior clinicians. In a more recent systematic review of empirical, outcome-based studies, including 38 specific to CRM programs applied in various medical teams, Buljac-Samardzic et al. (2020) concluded that principle-based team training programs such as CRM, as well as simulations, showed the greatest promise for improving team performance. However, similar to Wahl and Kongsvik (2018), they emphasize the need to tailor both training and outcome metrics to the specific task domain.
Chapter Summary

Whether in a military unit, an aircrew, or an operating room, team members must function not as isolated individuals but as integral parts of a well-oiled machine. Indeed, this observation is echoed by Wahl and Kongsvik (2018), who emphasized the social learning necessary to advance team decision making, safety, and performance.
References

Blake, R. R., Mouton, J. S., Barnes, L. B., & Greiner, L. E. (1964). Breakthrough in organization development. Graduate School of Business Administration, Harvard University.
Buljac-Samardzic, M., Doekhie, K. D., & van Wijngaarden, J. D. H. (2020). Interventions to improve team effectiveness within health care: A systematic review of the past decade. Human Resources for Health, 18(2), 1–42. https://doi.org/10.1186/s12960-019-0411-3
Carbo, A. R., Tess, A. V., Roy, C., & Weingart, S. N. (2011). Developing a high-performance team training framework for internal medicine residents. Journal of Patient Safety, 7(2), 72–76. https://www.jstor.org/stable/26632806
Carhart, E. (n.d.). Applying crew resource management in EMS: An interview with Capt. Sully. EMS World. Retrieved November 29, 2022, from https://www.emsworld.com/article/12268152/applying-crew-resource-management-in-ems-an-interview-with-capt-sully
Ericson, M., & Mårtensson, L. (2010). The human factor? In G. Grimvall, Å. Holmgren, P. Jacobsson, & T. Thedéen (Eds.), Risks in technological systems (pp. 245–254). Springer. https://doi.org/10.1007/978-1-84882-641-0_15
Ginnett, R. C. (1993). Crews as groups: Their formation and their leadership. In E. L. Wiener, B. G. Kanki, & R. L. Helmreich (Eds.), Cockpit resource management (pp. 71–98). Academic Press. https://doi.org/10.4324/9781315092898
Hawkins, F. H. (1987). In H. W. Orlady (Ed.), Human factors in flight (2nd ed.). Routledge. https://doi.org/10.4324/9781351218580
Helmreich, R. L., & Merritt, A. C. (2017). Culture at work in aviation and medicine: National, organizational and professional influences. Routledge. https://doi.org/10.4324/9781315258690
Helmreich, R. L., Chidester, T. R., Foushee, H. C., Gregorich, S., & Wilhelm, J. A. (1990). How effective is cockpit resource management training? Flight Safety Digest, 9(5), 1–17.
Helmreich, R. L., Merritt, A. C., & Wilhelm, J. A. (1999). The evolution of crew resource management training in commercial aviation. The International Journal of Aviation Psychology, 9(1), 19–32. https://doi.org/10.1207/s15327108ijap0901_2
Hofstede, G. (1980). Culture and organizations. International Studies of Management & Organization, 10(4), 15–41. https://doi.org/10.1080/00208825.1980.11656300
Marquardt, N., Robelski, S., & Hoeger, R. (2010). Crew resource management training within the automotive industry: Does it work? Human Factors, 52(2), 308–315. https://doi.org/10.1177/0018720810366
McConaughey, E. (2008). Crew resource management in healthcare: The evolution of teamwork training and MedTeams®. The Journal of Perinatal & Neonatal Nursing, 22(2), 96–104. https://doi.org/10.1097/01.JPN.0000319095.59673.6c
O'Dea, A., O'Connor, P., & Keogh, I. (2014). A meta-analysis of the effectiveness of crew resource management training in acute care domains. Postgraduate Medical Journal, 90(1070), 699–708. https://doi.org/10.1136/postgradmedj-2014-132800
Plous, S. (2009, December 21). Bob Helmreich. Social Psychology Network. Retrieved December 12, 2022, from http://helmreich.socialpsychology.org/
Rashid, F., Edmondson, A. C., & Leonard, H. B. (2013). Leadership lessons from the Chilean mine rescue. Harvard Business Review, 91(7–8), 113–119.
Reason, J. (1990). Human error. Cambridge University Press.
Secretaria de Comunicaciones (Secretary of Communications). (2010, October 12). Florencio Avalos [Photograph]. Wikimedia Commons (File: Florencio Avalos (5076856843).jpg). CC BY 2.0.
Sundar, E., Sundar, S., Pawlowski, J., Blum, R., Feinstein, D., & Pratt, S. (2007). Crew resource management and team training. Anesthesiology Clinics, 25(2), 283–300. https://doi.org/10.1016/j.anclin.2007.03.011
Useem, M., Jordán, R., & Koljatic, M. (2011). How to lead during a crisis: Lessons from the rescue of the Chilean miners. MIT Sloan Management Review, 53(1), 49. https://ezproxy.ollusa.edu/login?url=https://www.proquest.com/scholarly-journals/how-lead-during-crisis-lessons-rescue-chilean/docview/896570511/se-2
Wahl, A. M., & Kongsvik, T. (2018). Crew resource management training in the maritime industry: A literature review. WMU Journal of Maritime Affairs, 17(3), 377–396. https://doi.org/10.1007/s13437-018-0150-7
Chapter 20
The Importance of Learning Cultures in Organizations
I think it’s important to note that nothing happens in isolation, that culture is important in every organization, and there must exist a culture, from the very top of the organization permeating throughout … in a way that is congruent. Captain “Sully” Sullenberger (NPR, 2009, para. 61)
Organizational Learning

Disaster researchers concluded long ago that disasters could not be conceptualized except in terms of social, technological, or environmental agents (Dynes, 1970; Quarantelli & Dynes, 1977) (Short & Rosa, 1998, p. 94). Application of the SHEL(L) model (Edwards, 1985; Hawkins, 1987), HFACS (Shappell & Wiegmann, 2009/1997), Cynefin (Snowden & Boone, 2007), or any other framework or tool for decision making depends on an organizational culture that supports a cyclical, learning-based approach rather than one that simply reacts to overt or active errors. A focus on latent errors is important for analysis, and it should lead to a deeper examination of the organizational culture at all levels of the organization.
Information Flow

Westrum and Adamski (1999) boldly stated, "the critical feature of organizational culture … is information flow" (p. 83). Westrum's earlier (1993) work described three organizational climates of information flow. A positive climate is known as generative and is one in which information flows freely and promotes novel ideas. There are two progressively more negative climates, however: bureaucratic and pathological. Because bureaucratic organizations do not foster learning, they can contribute to latent failures, but latent failures reach their peak in a pathological organizational climate, in which anomalies are handled "by using
suppression or encapsulation. The person who spots a problem is silenced or driven into a corner. This does not make the problem go away, just the message about it. Such organizations constantly generate ‘latent pathogens’.” Moreover, “the pathological organization is further characterized by information being hidden, responsibilities shirked, failures covered up, and new ideas crushed” (Westrum & Adamski, 1999, pp. 83–84). Indeed, Richards (2020) noted that “achieving harmony, while still encouraging creativity and initiative, is not as easy as it seems. Rigidly enforced organizational dogma, for example, can produce a type of harmony, but it rarely encourages initiative” (p. 151).
Creating Cultures of Learning

Learning cultures, or what Westrum termed generative organizational climates, are critical to situation assessment, as well as to the recognition of emerging patterns in complex adaptive systems. Dr. Michael Roberto, former Harvard Business School professor, and Dr. Afzal Rahim, professor and conflict management expert, have both argued that the culture of an organization shapes the flow of information and can either facilitate or inhibit problem solving, conflict management, and strategic decision making.
Single- vs. Double-Loop Learning

Argyris and Schon (1974) were among the first to define the levels of learning culture in an organization. Argyris (1976) defined learning, in general, in terms of error recognition and correction. In turn, he defined error as "any feature of knowledge or of knowing that makes action ineffective … The detection and correction of error produces learning and the lack of either or both inhibits learning" (p. 365). He further postulates a relationship between complexity, ambiguity, the probability of errors, and the necessity of learning, stating "the more complex and ill-structured a problem, the higher the probability of ambiguity and so the higher the probability of errors" (p. 365). That ambiguity, and the subsequent increased probability of errors, will then increase the likelihood of a mismatch between actions and plans. In this line of reasoning, Argyris illustrates why human error has been a source of study for both human factors and cognitive psychology through the decades. It is an effective means of illuminating how information is processed and, specifically, where breakdowns in processing occur. Likewise, with respect to learning, feedback is critical. The process of sampling the environment for critical information and receiving feedback to either confirm or refute one's assessment of both the environment and the changes to it that result from one's actions is foundational to the concept of learning, as well as to human factors design.
Argyris (1976) then introduces the role of organizational factors, including conflict management and bureaucratic influences, in potentially inhibiting effective learning.

At least two important sets of variables can be altered to increase the effectiveness of learning, no matter at what point the learning is to occur. One is the degree to which interpersonal, group, intergroup, and bureaucratic factors produce valid information for the decision makers to use to monitor the effectiveness of their decisions. The other is the receptivity to corrective feedback of the decision-making unit; that is, individual, group, or organization. (p. 365)
From this conceptual understanding of the role of information flow, particularly feedback, in learning, Argyris and Schon (1974) developed the basic dichotomous model of single- versus double-loop learning. According to Argyris (1976), "the literature suggests that the factors that inhibit valid feedback tend to become increasingly more operative as the decisions become more important and as they become more threatening to participants in the decision-making process" (pp. 366–367). Threatening here means challenging to the status quo goals and decision-making processes of an organization. Argyris defines this type of learning, upon which artificial constraints are placed regarding the seeking out of valid information and feedback, as single-loop learning. "One might say that participants in organizations are encouraged to learn to perform as long as the learning does not question the fundamental design, goals, and activities of their organizations" (p. 367).
Single-Loop Learning

Argyris (1976) defined single-loop learning in terms of the human behavioral tendency to search for "the most satisfactory solution people can find consistent with their governing values or variables, such as achieving a purpose as others define it, winning, suppressing negative feelings, and emphasizing rationality" (p. 367). Argyris goes on to explain that the "primary strategies are control over the relevant environment unilaterally" (p. 368), which is often accomplished by controlling meaning: "giving the meaning of a concept to others and defining its validity for them is one of the most powerful ways to control others" (p. 368). At this point, the reader may be reminded of the concept of framing (see Chap. 12) and its ability to influence how and what information is processed in individuals and groups. Importantly, Argyris indicates how this can impede information flow and learning.

Groups composed of individuals using such strategies will tend to create defensive group dynamics, reduce the production of valid information, and reduce free choice… There would be relatively little public testing of ideas, especially important or threatening ones. As a result leaders would tend to receive little genuine feedback and others would tend not to violate their governing values and so disturb the accepted fundamental framework. (p. 368)
Along with other experts in organizational decision making, Argyris argued that most organizations limit learning to single-loop with rigid policies, practices, and strategies. Time is then spent on detecting deviations from the rules and norms and subsequent correction. With double-loop learning, on the other hand, the actual rules and norms behind both information gathering and decision strategies may be questioned. Moreover, the mental models guiding strategic decisions are repeatedly tested, and their underlying assumptions are questioned.
Double-Loop Learning

Argyris (1976) argued that double-loop learning provides the feedback necessary for more effective problem solving and decision making. The fruitfulness of applying learning to problem solving, he contended, depends on the ability of decision makers to "learn from their actions and adapt their decision making and behavior accordingly; also upon the availability of valid information from the environment within realistic time constraints to make corrections possible" (1976, p. 365). As such, the governing principles underlying double-loop learning focus not on power but on maximizing the dissemination of valid and relevant information, as well as on continuously adapting to the changing environment that, consequently and continuously, redefines what is valid and relevant information. From a human behavior standpoint, Argyris (1976) argues the importance of "an invitation to confront one another's views and to alter them, in order to produce the position that is based on the most complete valid information possible and to which participants can become internally committed" (p. 369).
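The single- versus double-loop distinction can also be sketched as a simple feedback loop. The following toy Python illustration is offered only as a hypothetical sketch (the function names and numeric stand-ins are assumptions for illustration, not Argyris and Schon's own formulation): single-loop learning corrects the action against fixed governing variables, whereas double-loop learning also re-examines those variables.

def single_loop_step(action, goal, observe, adjust):
    # Single-loop learning: detect the gap between outcome and a fixed goal,
    # then correct the action. The goal (the governing variables) is never questioned.
    outcome = observe(action)
    error = goal - outcome
    return adjust(action, error), goal              # only the action changes

def double_loop_step(action, goal, observe, adjust, reexamine):
    # Double-loop learning: besides correcting the action, re-examine the goal
    # and the assumptions (mental model) that produced it.
    outcome = observe(action)
    revised_goal = reexamine(goal, outcome)         # governing variables are open to revision
    error = revised_goal - outcome
    return adjust(action, error), revised_goal

# Trivial, hypothetical stand-ins: the environment halves an action's effect.
observe = lambda a: 0.5 * a
adjust = lambda a, err: a + 0.5 * err
reexamine = lambda g, out: min(g, out + 2.0)        # e.g., relax an unrealistic target
print(single_loop_step(10.0, 8.0, observe, adjust))             # (11.5, 8.0): goal untouched
print(double_loop_step(10.0, 8.0, observe, adjust, reexamine))  # (11.0, 7.0): goal itself revised

In the single-loop version only the action is revised; in the double-loop version the target itself can change, which is the sense in which the rules and norms behind decision strategies are themselves open to question. Triple-loop learning, discussed later in this chapter, adds yet another layer that asks why those rules exist at all.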
Relationship Between Learning Cultures and Conflict Management

It is important, at this point, to highlight two critical aspects of this statement. The first relates to conflict management. One of the reasons conflict is often viewed in negative terms is that it necessarily begins with some form of confrontation, and the negative emotion that tends to accompany confrontation produces discomfort. Confrontation, however, is a necessary step in order to reap the substantial problem-solving benefits that can be derived from the conflict of ideas. Rahim's (2011) model of conflict management would identify this as the integrating style, which is the style most conducive to problem solving. The integrating conflict management style is importantly distinct from compromising, though the latter term is probably more familiar to most. With compromising, there is a give-and-take. With integrating, on the other hand, the absence of artificial constraints can allow a novel solution to be reached by consensus, so that no particular aspects of the different positions need to be compromised.
Fig. 20.1 Triple-loop learning. Note: Rose (2021). Triple-Loop Learning. Retrieved from https://commons.wikimedia.org/wiki/File-Single-,_Double-,_and_Triple-Loop_Learning.jpg. CC BY-SA 4.0
Second, the aspect of "internal commitment" is critical. While diverse viewpoints and diversity of thought are essential to the decision process itself, once a decision is made by a team, implementation of that decision requires the commitment, if not the total agreement, of all stakeholders. In other words, stakeholders must feel they have been able to present their cases, even if the end result is not in their favor. This creates a sense of "buy-in," which in turn helps to foster commitment to the decision. Figure 20.1 graphically represents the concepts of single- and double-loop learning, along with an extension, triple-loop learning.
Triple-Loop Learning

According to Tosey et al. (2011), while no one scholar is associated with the term, the concept of triple-loop learning developed as an extension of Argyris and Schon's (1974) concept of single-loop versus double-loop learning. Similar to the theoretical underpinnings of human factors, cognitive psychology, and naturalistic decision making, the concept of triple-loop learning focuses on the underlying "whys" behind the feedback mechanism that is utilized in human learning and which contributes to decision making. Whereas single-loop learning focuses on error correction and incremental improvements within perceived constraints, both double- and triple-loop learning allow for reflection on "business as usual" and on whether there could be more novel approaches to solving particular problems. The distinction between double- and triple-loop learning is really more a matter of degree of creativity, with the fundamental goal of triple-loop learning being a deep examination of why things are done the way they are, as well as a continuous
re-evaluation and active search for better ways to enhance situation assessment and improve both strategic decision making and innovative performance. Senge and Sterman (1992) put it into perspective using mental models:

The problem lies, in part, with failing to recognize the importance of prevailing mental models. New strategies are the outgrowth of new world views. The more profound the change in strategy, the deeper must be the change in thinking. Indeed, many argue that improving the mental model of managers is the fundamental task of strategic management. (p. 1007)
More recently, as organizations and events have become increasingly nonlinear, the focus has shifted toward systems of greater and greater complexity, and a new term, resilience engineering, has been introduced to address the increasingly adaptive nature required of these systems. "Accidents, according to Resilience Engineering, do not represent a breakdown or malfunctioning of normal system functions, but rather represent the breakdowns in the adaptations necessary to cope with the real world complexity" (Woods et al., 2010/2017, p. 3). In their article Dancing with Ambiguity: Causality Behavior, Design Thinking, and Triple Loop Learning, Leifer and Steinert (2011) describe the integration of human factors and system design principles into both innovation and software development. They argue that, traditionally, design and product development "aims to generate alternative solutions to satisfy performance requirements and software specifications" (p. 151). However, they argue this approach is not adept at capturing knowledge, particularly within the context of dynamic requirements. Instead, what they describe as a predominantly linear traditional model has given way to a process involving more rapid conceptual prototyping and triple-loop learning. This process, they argue, has led to

Notable success in a wide variety of industries through the use of adaptive design thinking and semi-formal use of a "coaching" model that has some members of each development team explicitly focused on the team's behavior pattern with an eye to focusing activity on the critical tasks from a system integration point of view. (p. 152)
Leifer and Steinert (2011) address many of the same principles as Roberto and Rahim, particularly with respect to both managing affective conflict and encouraging a conflict of ideas, fostering what Rahim would label an integrating conflict management style, which is appropriate for both problem solving and creativity. To Leifer and Steinert, fostering an environment of triple-loop learning places the "focus on the informal and creative transmission of explicit and implicit knowledge" (p. 165). Similar to the approach of naturalistic decision making (see Chap. 14), the goal of designers who utilize the principles of triple-loop learning toward agile development is "to understand how designers and developers work {and} how we can improve the team composition and interaction" (p. 165).
Similar Models of Organizational Learning

Korth (2000) compares and contrasts the seminal Argyris and Schon (1974) model of organizational learning with that of Kirton (1989). One area of similarity is that both single-loop learning and what Kirton calls adaptation involve "working within
existing paradigms … where individuals operate within the boundaries of the presented problem, make incremental improvements and improve efficiency" (Korth, 2000, p. 89). Both models offer an alternative approach, called double-loop learning by Argyris and Schon and innovation by Kirton, in which responses to a problem are not limited to the perceived constraints of an existing paradigm. These constraints are often linked to organizational norms and embedded strategies that may not be challenged. Whereas both single-loop learning and adaptation can lead to improvements in organizational effectiveness, they limit creativity in problem solving when compared to either double-loop learning or innovation.

According to Kirton … adaptors are capable of very creative solutions to organizational problems, but they will solve the problems within the perceived constraints. In single-loop terms, individuals will find solutions and modify strategies or the assumptions underlying the strategies in ways that leave the existing organizational values and norms unchanged. Adaptation and single-loop learning achieve effectiveness in short-term results. (Korth, 2000, p. 89)
The limitations of single-loop or adaptive learning were evident in the scenarios involving both United Flight 232 and Apollo-13, discussed in Chap. 14. In both of these cases, it was evident the situation itself violated the organizational norms and strategies that would typically be applied in problem solving. The problems in both cases were so unique that effective decisions were virtually impossible without more creative problem solving, as defined in either double-loop or innovative terms. What most organizations face more often is a self-limitation of creative problem-solving strategies, rooted in organizational culture. This in turn creates an environment where problems of a more subtle but insidious nature, such as decreased diversity of information, lead to a lack of creativity and innovation. Both models, Argyris and Schon (1974) and Kirton (1989), emphasize the potential for overly bureaucratic organizations to reduce double-loop learning, or what Kirton would term innovation. Once again, this can be linked to Westrum's concepts of bureaucratic and pathological organizations and their increasing potential to reduce information flow, which in turn reduces feedback and learning, thereby increasing error.
Cultures of Indecision

Dr. Michael Roberto has effectively described some of the factors that can lead to organizational cultures that are overly bureaucratic or even pathological. I first came across the work of Roberto in the published transcripts of an interview about his book, the title of which I found very intriguing. In Why Great Leaders Don't Take Yes For An Answer (2005/2013), Roberto argues that organizational culture has a profound effect on the dissemination of information and, consequently, on strategic decision making. He outlines in his book three types of negative cultures, which he calls cultures of indecision, that inhibit the flow of information. These cultures are strongly linked with the management of conflict, particularly substantive conflict—or the
conflict of ideas. The negative cultures are labeled by Roberto as cultures of Yes, No, and Maybe. In all of the organizational cultures of indecision described in Roberto’s book, “certain patterns of behavior gradually become embedded in the way that work gets done on a daily basis and sometimes those patterns lead to a chronic inability to reach closure on critical decisions” (p. 144).
The Culture of Yes

Perhaps the most damaging of the negative cultures is the culture of yes. This culture most closely reflects Westrum's concepts of bureaucratic and pathological cultures, as opinions that dissent from the leadership or the majority are either implicitly or explicitly suppressed. This leads to a lack of openness that blocks the optimal flow of information. Again, optimal decision making is grounded in good situation assessment, and good situation assessment depends on maximizing information from numerous perspectives. In the culture of yes, people are either uncomfortable expressing dissent or are actively dissuaded or even punished for it. The result is often a silencing of dissent and what Roberto calls a "false consensus," where people may agree in meetings but are not committed to implementation of a decision or plan. This lack of transparency is particularly damaging because problems may not be readily apparent, at least to the leadership.
The Culture of No

This culture is similar to the culture of yes in its ability to impede the flow of information, but opposite in its means. Whereas the culture of yes dissuades dissension, most typically among groups lower in the hierarchy or holding a minority opinion, the culture of no allows even one lone dissenter—typically in a management or leadership position—to veto ideas and proposals. According to Roberto et al. (2005),

A culture of no goes far beyond the notion of cultivating dissent or encouraging people to play the role of devil's advocate…it undermines many of the principles of open dialogue and constructive debate…The organization does not employ dissenting voices as a means of encouraging divergent thinking, but rather it enables those who disagree with a proposal to stifle dialogue and close off interesting avenues of inquiry. Such a culture does not force dissenters to defend their views with data and logic...{it} enables those with the most power or the loudest voice to impose their will. (pp. 145–146)
The Culture of Maybe

Perhaps best described as "paralysis by analysis," the culture of maybe is one that, at first glance, would not appear to be negative. These are cultures that are highly analytical, focused on gathering a great deal of quantitative and objective data before reaching a decision. While this process can counter cognitive bias and lead to enhanced performance, Roberto is quick to point out that such may be the case only in relatively stable environments. In fact, a curvilinear relationship exists between the amount of information gathered and its benefit: beyond a certain point, additional information gathering offers no benefit and may actually degrade decision making and performance (a toy numerical illustration of this inverted-U pattern follows at the end of this section). In such cases, decision makers may strive to resolve all uncertainty, which, of course, is not possible in complex, dynamic situations. Roberto elaborates that while "many factors explain why organizations become embroiled in a 'culture of maybe' when faced with ambiguity and environmental dynamism," decision makers "often arrive at a false sense of precision in these circumstances, and these tactics exacerbate delays in the decision-making process, without truly resolving many outstanding questions" (2005, pp. 153–154). While managers may recognize that decision making has been impeded by this "paralysis by analysis," their response is often to fall back on previous strategies or rules of thumb, setting the stage for cognitive bias, single-loop learning, and stagnation. A better approach is a conscientious application of tools and processes that enhance the flow of information and improve both the quality of decisions and their implementation. These will be discussed in more detail in Chap. 22. Cultures of indecision are negative precisely because they impede the flow of information and team decision making. "Tackling a culture of indecision requires leaders to focus not simply on the cognitive processes of judgment and problem-solving, but also the interpersonal, emotional, and organizational aspects of decision making" (Roberto, 2005, p. 160). Team decision making depends upon effective conflict management, balancing affective, or emotionally laden, conflict with substantive conflict—the conflict of ideas. The latter is particularly important to foster in team decision making and is best implemented via problem-solving conflict management styles such as the integrating style, which can only be fostered in an existing climate of trust and openness.
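As flagged above, here is a toy numerical illustration of that curvilinear, inverted-U relationship. It is purely hypothetical: the logarithmic value function, the linear delay cost, and the constants are assumptions chosen for illustration, not a model from Roberto.

import math

def net_benefit(info_units, value_scale=10.0, delay_cost=1.0):
    # Value of information grows with diminishing returns (log term), while the
    # cost of gathering it (time, delay, analysis overhead) grows roughly linearly.
    return value_scale * math.log(1 + info_units) - delay_cost * info_units

# Net benefit rises, peaks, then declines as analysis drags on ("paralysis by analysis").
for units in range(0, 31, 5):
    print(units, round(net_benefit(units), 2))
# Prints: 0 0.0, 5 12.92, 10 13.98, 15 12.73, 20 10.45, 25 7.58, 30 4.34

Under these assumptions the benefit peaks at a moderate amount of analysis; beyond that point, each additional unit of information costs more in delay than it adds in decision value.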
The Impact of Organizational Culture on Decision Making

The impact of organizational culture on decision making was dramatically illustrated in the case of the Columbia space shuttle disaster, the handling of which stands in sharp contrast to that of Apollo-13. On February 1, 2003, the space shuttle Columbia disintegrated upon re-entry to Earth, killing all seven crew members. The left wing had been damaged by a large piece of foam insulation that had broken free during the launch 16 days earlier (Roberto
et al., 2005). While it is not known whether a rescue from the damaged spacecraft would have succeeded, it was later determined that, as was the case with Apollo-13, there was a remote but distinct possibility. In the case of the Columbia disaster, however, there was a clear disconnect in the flow of information between lower-level engineers, who were concerned about potential damage from the foam strike, and leadership higher in the team. One of the explanations put forth by the Columbia Accident Investigation Board for this lack of information flow is that the underlying culture of NASA at the time was not conducive to coordinated communications and distributed decision making, including being hierarchical to the degree that lower-level engineers were dissuaded from asserting their concerns. This was a culture that had developed over time, but it was captured in a quote from astronaut Sally Ride. Upon hearing of the Columbia disaster, and harking back to the Challenger disaster, Ride is said to have uttered, "I think I'm hearing an echo here" (Roberto, 2005, p. 66). The nature of the two disasters was quite different: the Challenger exploded on take-off due to the failure of rubber O-rings sealing a joint between sections of a rocket booster, whereas the Columbia burned up on re-entry due to damage from the foam strike. But Ride was referring not to the final, active cause, but to the underlying culture that contributed to the same latent errors in information flow. This could be compared to the early studies of cockpit culture prior to Crew Resource Management training, in which analyses of communications leading up to disasters often indicated a dangerous combination of under-assertive lower-ranking crew members and overly assertive higher-ranking pilots-in-command. Indeed, Roberto is consistent with the approaches of both Crew Resource Management and the Human Factors Analysis and Classification System in holding that failures can only be fully understood within the context of the total socio-technical system. Rather than placing blame on individual decision makers in the Columbia or similar disasters, Roberto et al. (2005) ask, "Would things have transpired differently if other people occupied the positions?" They answer, "Perhaps, but a close look at the situation suggests that this may not necessarily be the case" (p. 65). This has long been the concern of human factors researchers. It is easy to place blame solely on the individual or individuals most directly linked to a system failure, but decades of accident research demonstrate it is rarely that simple, and if a systemic perspective is not undertaken, problems will continue despite the removal of what may be viewed as the final "cause." Roberto is quick to emphasize that adhering "to the systemic view does not necessarily mean that one must absolve individuals of all personal responsibility for discouraging open dialogue and debate" (p. 67). In fact, one could argue that this is one of the primary functions of leaders of decision-making teams: to foster an environment that encourages open discussion and substantive conflict in order to maximize problem solving and adaptive learning. However, this must be placed within the context of a complex adaptive systems view, where direct cause–effect is rare, and interactions must be considered in light of the dynamic system.
To that end, Roberto, again similar to the approaches of human factors and crew resource management, emphasizes the need to focus on “organizational and historical factors that shaped individual behavior {and} typical patterns of
communication and decision making over long periods of time" (p. 62). Additionally, consideration must be given to "hierarchical structures, status differences, rules of protocol, cultural norms, {and} cognitive beliefs/mental models" (p. 62). Such consistent findings across decades and domains point to an underlying truth: the way people process information, as well as how they behave, is influenced by their environment, and it is difficult to understand either without attention to the complex adaptive systems in which they operate. Roberto et al. (2005) summed up the impacts of NASA's organizational culture on the Columbia space shuttle accident as follows:

Status differences had stifled dialogue for years. Deeply held assumptions about the lack of danger associated with foam strikes had minimized thoughtful debate and critical technical analysis of the issue for a long period of time. Because the shuttle kept returning safely despite debris hits, managers and engineers developed confidence that foam strikes did not represent a safety of flight risk. Ham's behavior did not simply represent an isolated case of bad judgment; it reflected a deeply held mental model that had developed gradually over two decades as well as a set of behavioral norms that had long governed how managers and engineers interacted at NASA. (pp. 65–66)
What Roberto is describing is a perfect storm of single-loop learning, coupled with ineffective conflict management and inadequate coordination, leading to a lack of information, which in turn led to incomplete situation assessment and resulted in suboptimal distributed team decision making. Indeed, this is consistent with Argyris's earlier observations on single-loop learning, in which "Control as a behavioral strategy influences the leader, others, and the environment in that it tends to produce defensiveness and closedness, because unilateral control does not tend to produce valid feedback" (1976, p. 368). This, in turn, affects both the social dynamic and the flow of information through the whole socio-technical system, reducing the capacity for problem solving.

Groups composed of individuals using such strategies will tend to create defensive group dynamics, reduce the production of valid information, and reduce free choice … Many of the hypotheses or hunches that the leaders generate would then tend to become limited and accepted with little opposition. Moreover, whatever a leader learned would tend to be within the confines of what was acceptable. Under these conditions, problem solving about technical or interpersonal issues would be rather ineffective. Effective problem solving occurs to the extent individuals are aware of the major variables relevant to their problem and solve the problem in such a way that it remains solved. (p. 368)
An important distinction should be made between the Apollo-13 and Columbia cases: with Apollo-13, the situation left no doubt that a critical, life-threatening problem had presented itself and that novel solutions were required. One of the major problems behind the Columbia shuttle disaster was the misunderstanding of the critical nature of the damage from the foam strike. There was no "Houston, we have a problem" moment, because the information needed to diagnose it as such was ambiguous. More importantly, the organizational culture had become so arguably bureaucratic, and perhaps even pathological, that the information that could have resolved the ambiguity and led to more creative problem solving was not effectively disseminated.
Chapter Summary

Maximizing the flow of information is critical to decision making. Good situation assessment is vital to optimizing the quality of a decision, and good situation assessment results only from peak information flow. The culture of an organization can dramatically affect this flow of information and, as a result, the quality of decision making, particularly within complex adaptive systems, in which the situation is dynamic and information is constantly in flux.
References

Argyris, C. (1976). Single-loop and double-loop models in research on decision making. Administrative Science Quarterly, 21, 363–375. https://doi.org/10.2307/2391848
Argyris, C., & Schon, D. A. (1974). Theory in practice: Increasing professional effectiveness. Jossey-Bass.
Dynes, R. R. (1970). Organizational involvement and changes in community structure in disaster. American Behavioral Scientist, 13(3), 430–439. https://doi.org/10.1177/0002764270013003
Edwards, E. (1985). Human factors in aviation. Aerospace, 12(7), 20–22.
Hawkins, F. H. (1987). Human factors in flight (H. W. Orlady, Ed.) (2nd ed.). Routledge. https://doi.org/10.4324/9781351218580
Hudson Landing an Engineering Miracle, Pilot Says. (2009, November 12). NPR. Retrieved December 12, 2022, from https://www.npr.org/templates/story/story.php?storyId=120355655
Kirton, M. J. (Ed.). (1989). Adaptors and innovators: Styles of creativity and problem-solving. Routledge.
Korth, S. J. (2000). Single and double-loop learning: Exploring potential influence of cognitive style. Organization Development Journal, 18(3), 87–98. https://ezproxy.ollusa.edu/login?url=https://www.proquest.com/scholarly-journals/single-double-loop-learning-exploring-potential/docview/197986254/se-2
Leifer, L. J., & Steinert, M. (2011). Dancing with ambiguity: Causality behavior, design thinking, and triple-loop-learning. Information Knowledge Systems Management, 10(1–4), 151–173. https://doi.org/10.3233/IKS-2012-0191
Quarantelli, E. L., & Dynes, R. R. (1977). Response to social crisis and disaster. Annual Review of Sociology, 3, 23–49. https://www.jstor.org/stable/2945929
Rahim, M. A. (2011). Managing conflict in organizations (4th ed.). Routledge.
Richards, C. (2020). Boyd's OODA loop. https://hdl.handle.net/11250/2683228
Roberto, M. A. (2005/2013). Why great leaders don't take yes for an answer: Managing for conflict and consensus. FT Press.
Roberto, M., Edmondson, A. C., Bohmer, R. M., Feldman, L., & Ferlins, E. (2005). Columbia's final mission (pp. 305–332). Harvard Business School Multimedia/Video Case.
Rose 1472. (2021, April 23). Single-, double-, and triple-loop learning and how organizational thoughts change at each level. Wikimedia Commons. Retrieved December 12, 2022, from https://commons.wikimedia.org/wiki/File-Single-,_Double-,_and_Triple-Loop_Learning.jpg. Licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/deed.en)
Senge, P. M., & Sterman, J. D. (1992). Systems thinking and organizational learning: Acting locally and thinking globally in the organization of the future. Paper presented at The Conference on Transforming Organizations, Sloan School of Management, MIT, 29–31 May 1990. https://proceedings.systemdynamics.org/1990/proceed/pdfs/senge1007.pdf
Shappell, S. A., & Wiegmann, D. A. (2009/1997). A human error approach to accident investigation: The taxonomy of unsafe operations. The International Journal of Aviation Psychology, 7(4). https://doi.org/10.1207/s15327108ijap0704_2
Short, J. F., & Rosa, E. A. (1998). Organizations, disasters, risk analysis and risk: Historical and contemporary contexts. Journal of Contingencies and Crisis Management, 6(2), 93–96. https://doi.org/10.1111/1468-5973.00077
Snowden, D. J., & Boone, M. E. (2007). A leader's framework for decision making. Harvard Business Review, 85(11), 68. https://doi.org/10.1016/S0007-6813(99)80057-3
Tosey, P., Visser, M., & Saunders, M. N. K. (2011). The origins and conceptualizations of 'triple-loop' learning: A critical review. Management Learning, 43(3), 291–307. https://doi.org/10.1177/1350507611426239
Westrum, R. (1993). Cultures with requisite imagination. In Verification and validation of complex systems: Human factors issues (pp. 401–416). Springer. https://doi.org/10.1007/978-3-662-02933-6_25
Westrum, R., & Adamski, A. J. (1999). Organizational factors associated with safety and mission success in aviation environments. In D. J. Garland, J. A. Wise, & V. D. Hopkin (Eds.), Handbook of aviation human factors (pp. 67–104). Lawrence Erlbaum Associates Publishers.
Woods, D., Dekker, S., Cook, R., Johannesen, L., & Sarter, N. (2010). Behind human error. Taylor & Francis Group. https://doi.org/10.1201/9781315568935
Chapter 21
Fostering Diversity of Thought in Strategic Decision Making
Diversity in Counsel, Unity in Command. Attributed to Cyrus the Great
Lessons from Cyrus the Great

In 1879, archeologist Hormuzd Rassam uncovered a clay cylinder roughly the size of an American football while excavating at the site of ancient Babylon, in what is modern-day Iraq. Amazingly intact, the cylinder, which has since been on display at world-class museums and the United Nations, contained a declaration by Cyrus the Great, ruler of the Persian empire during the sixth century B.C. The declaration, written in cuneiform, is considered one of the first recorded declarations of human rights and religious freedom (The Cyrus Cylinder, 2013; Cyrus Cylinder, n.d.). Cyrus the Great was born around 580 B.C. in what is now Iran. As the founder of the Persian empire, he is remembered as both an exceptional warrior and a benevolent ruler, beloved by his people and esteemed by many other cultures of his time. In a vastly different approach from his contemporaries, Cyrus reportedly ruled with a relative tolerance and respect for diversity of thought. His freeing of the Jewish slaves from Babylon is recorded in the Old Testament, and his military and political leadership were the focus of Xenophon's semi-biographical Cyropaedia, or The Education of Cyrus (c. 370 B.C.). This discourse on the ideal leader was popular among European and American political and philosophical thinkers of the eighteenth century; indeed, Thomas Jefferson, known to possess a vast library of well-read sources from many different cultures and time periods, had in his collection a copy of Cyropaedia. Many have speculated that the contrast between the ideal ruler presented in Cyropaedia and other leadership treatises, such as Machiavelli's (1513) The Prince, coupled with brewing dissatisfaction with contemporary monarchs, may have contributed to inspirations behind the founding of the United States and its new form of government (Frye, 2022; Cyrus Cylinder, n.d.). In Sullivan's interview of Mauboussin in the 2011 Harvard Business Review article, Embracing Emergence, mentioned in Chap. 2, Mauboussin provides clear recommendations for avoiding the kind of groupthink that can derail strategic
decision-making, just as it can derail scientific advancement. "The key is to make sure that as a leader, you're not just tapping, you're actually almost extracting this unshared information from everybody and putting it on the table to be evaluated. And that's where a lot of organizations fail" (Sullivan, 2011, para. 21). Mauboussin posits that organizations can learn much from nature if processes are approached as complex adaptive systems. Despite the fact that some systems are seen as leaderless, he does argue that organizational strategists should ask themselves what conditions need to be in place to solve complex, multivariate problems; and that is perhaps one of the greatest functions of the leader (or distributed leadership)—to set the appropriate conditions in place to maximize decision making in a complex adaptive system. Mauboussin goes on to recommend specific tools for creating the environment to maximize decision making, which include tools similar to those outlined by Roberto (2005/2013): diversity of thought and experience in the decision-making team, and a leader who is not only open to dissent but encourages it through the conscious application of constructive processes that, fundamentally, create a culture of relationships built on trust. Both of these are vital to ensuring communication, which is vital to ensuring information, which is vital to ensuring situational awareness, which is vital to the decision-making process.
The Leader’s Role in the Strategic Decision Process In his book, Why Great Leaders Don’t Take Yes for an Answer, Roberto (2005/2013) does an excellent job of comparing and contrasting two crucial historical scenarios: The Bay of Pigs, a failed military operation in Cuba early in the presidency of John F. Kennedy, and the Cuban Missile Crisis. What makes this a particularly relevant comparison/contrast is that the same leader was ultimately responsible for each decision: President John F. Kennedy. This is important because it dramatically illustrates how much a person’s decision-making quality and outcomes can differ across different socio-technical contexts. Without evading his ultimate responsibility for the Bay of Pigs decision, JFK reflected on the contextual factors that reduced information flow, increased the opportunity for cognitive bias, and, consequently, resulted in poor situation assessment and a disastrous decision that cost the lives of many Cuban rebels, demoralized their cause, and bred mistrust of the United States. But Roberto builds a strong case that JFK very consciously applied specific tools to reduce opportunities for similar biases, groupthink, and mistakes when faced with the Cuban Missile Crisis less than two years later. Roberto conceptually defines strategic decisions as those where “the stakes are high, ambiguity and novelty characterize the situation, and the decision represents a substantial commitment of financial, physical, and/or human resources.” Moreover, “they differ from routine or tactical choices that managers make each and every day, in which the problem is well-defined, the alternatives are clear, and the impact on
the overall organization is rather minimal” (2005, p. 4). The task of a leader, he argues, is not only to select the appropriate course of action, but also to “mobilize and motivate” (2005, p. 5).
How Leaders Can Guide the Decision Process Toward Implementation In Roberto’s view, the leader must shape the decision process and summon the forces necessary to master what he calls the “fundamental dilemma” for leaders: how to “foster conflict and dissent to enhance decision quality while simultaneously building the consensus required to implement decisions effectively” (2005, p. 7). Implementation occurs when an individual (or other decision-making unit) puts an innovation into use. Until the implementation stage, the innovation-decision process has been a strictly mental exercise. But implementation involves overt behavior change, as the new idea is actually put into practice. (Rogers, 1962/1983, p. 174)
Rogers (1962/1983) defined communication as “a process in which participants create and share information with one another in order to reach a mutual understanding” (p. 5). Furthermore, he distinguished between the processes of Convergence (or divergence) as two or more individuals exchange information in order to move toward each other (or apart) in the meanings that they ascribe to certain events. We think of communication as a two-way process of convergence, rather than as a one-way, linear act in which one individual seeks to transfer a message to another. (p. 5)
Roberto (2005/2013) cautions, however, that premature convergence can reduce innovative solutions, while divergent thinking, which can be helpful in generating more critical thinking and creativity, can become polarizing and potentially lead to divisiveness and impede implementation of decisions and/or ideas. He explains that decision consultants often suggest the decision be brought to closure through a linear process that encourages “divergent thinking in the early stages of the decision process, and then shift to convergent thinking to pare down the options and select a course of action” (p. 199). However, Roberto reports that his research indicates it is more effective for leaders to “direct an iterative process of divergence and convergence,” in which they “seek common ground from time to time during the decision-making process” (2005, p. 199). Harking back to the importance of creating a learning culture (discussed in more detail in Chap. 20), he emphasizes that such a nonlinear process may initially appear to be dysfunctional, “but without a doubt, effective decision making involves a healthy dose of reflection, revision, and learning over time” (2005, p. 13). Therefore, while it is important for dissension and debate to be part of the process from its inception, decision makers “must reach intermediate agreements on particular elements of the decision at various points in the deliberations, lest they find themselves trying to bridge huge chasms between opposing
camps late in a decision process” (p. 200). This is not dissimilar to the process that occurred during the Chilean mine rescue operation described in Chap. 19. Roberto asserts that the greatest challenges to leaders in the decision-making process are socio-political, summing the inherent challenges as follows: Leaders need to understand how decisions actually unfold so that they can shape and influence the process to their advantage. To cultivate conflict and build consensus effectively, they must recognize that the decision process unfolds across multiple levels of the organization, not simply in the executive suite. They need to welcome divergent views, manage interpersonal disagreements, and build commitment across those levels. Leaders also need to recognize that they cannot remove politics completely from the decision process, somehow magically transforming it into the purely intellectual exercise that they wish it would become. (2005, p. 15)
Roberto argues that very careful consideration of this process must take place for the leader to foster both decision quality and implementation.
Decision Quality The quality of a decision is largely determined by the degree to which the leader is able to manage affective conflict while simultaneously encouraging cognitive conflict. Roberto refers to this management as constructive conflict. Cognitive conflict is imperative both to reduce cognitive bias and to maximize the diversity and flow of information needed to enhance situation assessment. It contributes to decision quality through the testing of assumptions and critical thinking. Affective conflict, on the other hand, is a bit more complicated. It is largely based in our emotional responses and triggers and can, if not managed, reduce cognitive conflict. However, affect is an innate part of human dialogue and, properly managed, particularly within the context of an iterative process focused on communicating ideas, it can lead to the commitment and shared understanding necessary for implementation effectiveness.
Implementation Effectiveness No matter how rational or “good” a decision may be, if those needed to implement it do not feel they have “buy-in,” implementation is not likely to be effective. “Buy-in” is a familiar, somewhat colloquial phrase that captures the idea of comprehensive consensus, which Roberto defines as high in both shared understanding and commitment. Comprehensive consensus “does not mean that teams, rather than leaders, make decisions. Consensus does mean that people have agreed to cooperate in the implementation of a decision” (2005, p. 6), even if they don’t necessarily agree with it. For this to occur, people need to feel their arguments and ideas have been not just “heard” but considered and critically evaluated. Moreover, an environment that
stimulates trust and open communication must already be in place. Leaders can play a large role in either fostering or suppressing such an environment.
Roberto’s Managerial Levers in the Decision Making Process Roberto derived his decision-making process model, in part, from his analysis of the contrasts between the decision-making processes of the Bay of Pigs and those of the Cuban Missile Crisis. His model focuses specifically on what he calls the “managerial levers,” though I would think leadership levers would be a more apt phrase, given their importance in establishing the culture of the leadership decision-making team. These levers can be used to create an environment that increases the diversity of thought and the candid communication necessary for the optimal flow of information. There are four, and they might be thought of as the four “C’s” of leadership levers.
Composition With composition, the leader must reflect seriously on the members who should compose the decision-making team. This involves a delicate balance of diversity, situational needs, expertise, and creativity. Rather than falling back automatically on job titles and hierarchical positions, Roberto argues “status and power within the firm should not be the primary determinants of participation in a complex high- stakes decision making process” (2005, p. 35). In fact, under certain circumstances, strong status differences can lead to information becoming “massaged and filtered on its way up the hierarchy” (p. 36). Indeed, in the case of the Bay of Pigs, Roberto contends “veteran officials from the Central Intelligence Agency (CIA) advocated forcefully for the invasion and they filtered the information and analysis presented to Kennedy” (2005, p. 30). Furthermore, lower-level officials from the State Department who had deep understanding of both the Cuban government and the society were excluded from cabinet-level discussions. Such information would have helped greatly in predicting behaviors and reactions, as opposed to relying too heavily on assumptions and maladaptive strategies. Repeating the mistake of the Romans in the Teutoburg Forest nearly two millennia earlier, Kennedy’s cabinet advisors, though intelligent and experienced, left “many critical assumptions … unchallenged.” In fact, the lack of “vigorous dissent and debate” undoubtedly contributed to the failure. Roberto writes, “Arthur Schlesinger, a historian serving as an advisor to the president at the time, later wrote that the discussions about the CIA’s plan seemed to take place amidst ‘a curious atmosphere of assumed consensus’” (2005, p. 30).
Context In order to foster communication up and down all levels of the organization, as well as various types and degrees of expertise, leaders must consider both the structural and psychological contexts of a decision situation. Structural context refers to everything that governs formal organizational relationships, including who reports to whom, how communications and performance are monitored and controlled, and systems for correction and/or recognition/reward. Psychological context, on the other hand, is more implicit and, though often subtle, can have a great impact on the flow of information. It “consists of the behavioral norms and situational pressures that underlie the decision-making process” (Roberto, 2005, p. 40). Psychological Safety Amy Edmondson (1999) coined the term psychological safety to indicate a climate of trust that encourages open communication and risk- taking with respect to presenting diverse ideas and arguments. Roberto contends that “it can be difficult to enhance psychological safety, particularly in hierarchical organizations characterized by substantial status differences among individuals” (2005, p. 42). While this is true, one should realize that status differences do not necessarily impede psychological safety, nor does a lack of status difference guarantee it. The key is in the ability of leadership and the collective group to establish trust. This can be done in many ways, including the leader setting the example by taking risks and admitting to mistakes when appropriate. Additionally, the leader must follow through with claims to give serious consideration to arguments and viewpoints that may contradict the leader and/or in-group. Even more importantly, contrary ideas should not be silenced or punished, even if not accepted in the final analysis. Psychological safety can be particularly difficult for a leader to establish because it inherently relies on the ability to establish trust. Leaders must resist falling back on platitudes such as “I have an open-door policy” if they are not backed up with predictable action that conforms to a set of agreed-upon “rules of engagement” that are impartial and consistent. Roberto (2005) delineated how President Kennedy was able to learn from his failures in the Bay of Pigs and consciously alter all aspects of the decision process in order to improve outcomes. After the fact, President Kennedy “recognized that the Bay of Pigs deliberations lacked sufficient debate and dissent, and that he had incorrectly presumed that a great deal of consensus existed, when in fact, latent discontent festered within the group” (p. 33). Because of this realization, he took great care in establishing both the composition and contextual elements of his decision-making body during the historically unprecedented Cuban Missile Crisis, starting with “directing the group to abandon the usual rules of protocol and deference to rank during meetings {as} he did not want status differences or rigid procedures to stifle candid discussion” (p. 31). Lower-level officials and diverse experts were asked to participate in deliberations to encourage novel viewpoints and analyses, while at the same time all were instructed to consider the problem as “skeptical generalists” in order to foster a more systemic approach.
Communication In addition to establishing structural and psychological contexts conducive to constructive conflict, leaders must reflect on the means of communication surrounding the decision process. Ask any critical response team: communication is vital to team situation assessment, decision making, and performance. It is the means by which “ideas and information are exchanged, as well as how alternatives are discussed and evaluated” (Roberto, 2005, p. 43). There are many approaches leaders might take in guiding communication within the process of strategic decision making, and many methods and combinations thereof from which to choose, but the one constant is that such control must be conscious and deliberate. Ensuring the optimal flow of information is too important to leave to happenstance. Various tools and techniques have been developed over the years to increase the flow of diverse but relevant information and argumentation within the context of decision making. Brainstorming is a somewhat unstructured tactic that can generate creativity and may even foster group cohesiveness, but Roberto adds that it can lead to suppression of dissent as a majority emerges, and/or to a lack of appropriate critique. This, in turn, could lead to premature closure on one idea or alternative. More structured forms have been employed successfully for centuries. Dialectical Inquiry had its earliest formal recognition in the writings of the philosopher Plato as a form of testing hypotheses and arguments. The more formalized application of modern dialectical inquiry within the context of decision making has the goal of fostering divergent thinking and consequently avoiding groupthink. It typically involves dividing a team of decision makers into two groups, which debate in thesis–antithesis format. It is intended as a method to test both facts and underlying assumptions, as well as to force critical thinking and avoid cognitive bias through the consideration of at least two opposing arguments. It is similar to the approaches of both Darwin, who purportedly drew up two lists, one of observations that supported his theory and one of observations that refuted it, and Benjamin Franklin, who emphasized using a pros and cons list when considering alternatives in decision making. Devil’s Advocacy works similarly to dialectical inquiry in that it is intended to generate critiques in order to vet an argument or plan. It can be a role taken on by a single individual, or it can be presented in terms of divided subgroups, similar to the application of dialectical inquiry. Instead of opposing positions, however, one group develops and presents a comprehensive plan to a second group, the latter of which is tasked with finding the flaws in the first group’s argument or approach. Roberto describes the process as follows: The first subgroup then returns to the drawing board, modifying their plan, or, perhaps inventing a new option, as a means of addressing the criticisms and feedback that they have received. An iterative process of revision and critique then takes place until the two subgroups feel comfortable agreeing, at a minimum, on a common set of facts and assumptions. After reaching agreement on these issues, the subgroups work together to craft a plan of action that each side can accept. (2005, p. 46)
The term Devil’s Advocate dates back to the sixteenth century, in the Roman Catholic Church’s process of beatifying individuals for sainthood. On the one side, an advocate for sainthood would present the “case” for a particular person to be beatified. It was the task of the devil’s advocate, then, to critically examine “the life and miracles attributed to this individual {and to} present all facts unfavorable to the candidate” (Nemeth et al., 2001, p. 708). Not surprisingly, it can be a delicate and unpopular position when the role of devil’s advocate is assigned to a single person. There is a risk the devil’s advocate may be dismissed as a nay-sayer, or rejected altogether. This ties back to Roberto’s points about the leader needing to have a firm understanding of personal and political relationships within a group dynamic. Although it is highly detrimental to surround oneself with proverbial “yes men,” as implied by the title of Roberto’s 2005 book, “leaders can benefit by drawing on people with whom they have a strong personal bond, characterized by mutual trust and respect, to help them think through complex issues” (p. 37). Indeed, it is precisely these trusted advisors who are least likely to tell the leader what he or she wants to hear, and most likely to openly poke holes in the leader’s position. Roberto stresses, however, that such “confidantes play a particularly important role … in turbulent and ambiguous environments, because most leaders face a few critical moments of indecision and doubt prior to making high-stakes choices” (p. 38). As such, the confidante, even if serving as devil’s advocate to the leader, can simultaneously prevent the leader from falling into paralysis by analysis by offering “solid advice and a fresh point of view,” as well as helping the leader “overcome last-minute misgivings” (p. 38). This delicate balance of choosing the right person or persons to serve as devil’s advocate, confidante, or both implies that leaders must understand the strengths and weaknesses of the decision team members, as well as the strengths and weaknesses of the leader’s relationship with them. President John F. Kennedy was able to master this and effectively employ the devil’s advocate technique in the Cuban Missile Crisis in the form of his brother and confidante Attorney General Robert Kennedy, as well as his trusted advisor, speechwriter Ted Sorensen. “Kennedy wanted them to surface and challenge every important assumption as well as to identify the weaknesses and risks associated with each proposal” (Roberto, 2005, p. 31). Obviously, one would have to carefully consider the potential negative reactions to an individual or individuals within such a position. At first glance, it may seem odd that the attorney general and a speechwriter were involved at all in such a momentous foreign policy decision. Their positions in the bureaucracy certainly did not dictate their involvement in the process. However, the president trusted these two men a great deal and valued their judgment. Kennedy knew that others would not be quick to dismiss critiques offered by these two individuals because of the well-known personal bond between them and the president. In addition, Kennedy recognized that these two men would be more comfortable than most other advisers when it came to challenging his own views and opinions. (Roberto, 2005, pp. 38–39)
There have been some challenges to the idea that the devil’s advocate is one of the more useful tools for increasing divergent thought and decreasing groupthink. Nemeth et al. (2001), for instance, present empirical evidence that, while
individuals appointed to roles as devil’s advocates generated more diversity of thought and constructive conflict than controls, they did not generate as much as “authentic dissenters,” or those presenting alternative, minority viewpoints based on genuine positions rather than assigned roles. The subtle difference in acquiring authentic dissent is that the leader must actively search for true dissenters, as would have benefitted the problem solving in the Columbia space shuttle case study discussed in Chap. 20. The accident board contended that, despite insisting she had an “open door policy,” the chair of the Mission Management Team, Linda Ham, failed to actively seek out dissenting opinions that might have surfaced lower-level engineers’ concerns that the damage to the heat shields was, in fact, serious. The point is, whatever the tool or technique applied, a conscious effort must be put into generating constructive conflict and active debate.
Control The final lever for the leader to master in setting the stage for a good decision process is control. Roberto (2005) outlines four dimensions by which the leader can determine the extent to which he or she will control the process and content in decision making. The first dimension is how the leader will introduce his or her own views into the process. The second dimension concerns how the leader will, or will not, intervene in guiding debate and discussion. The third dimension concerns potential roles the leader may or may not play in the overall process, such as devil’s advocate. The fourth and final dimension concerns plans for bringing closure to the decision-making process. All of these must be carefully and deliberately considered by the leader prior to initiating the decision process, but it is perhaps the first and second dimensions that demand the most consideration. If leaders introduce their views too soon, there is a risk of bias in how the problem is framed, or of dissenting views being discouraged from the outset. President John F. Kennedy was particularly cognizant of this in the Cuban Missile Crisis. He “deliberately chose not to attend many of the preliminary meetings that took place, so as to encourage people to air their views openly and honestly” (Roberto, 2005, p. 31). Furthermore, he focused on a plan to bring closure to the decision-making process that he felt would offer the most opportunity for diversity of thought and forced the consideration of alternative plans. In a demonstration of Cyrus’ precept of “diversity in counsel, unity in command,” Kennedy “asked that his advisers present him with arguments for alternative strategies, and then he assumed the responsibility for selecting the appropriate course of action” (pp. 31–32).
Chapter Summary One of the greatest impacts a leader can have on the decision-making process is to create an organizational culture that engenders open communication and fosters diversity of thought, which in turn creates an environment conducive to the kind of problem solving and innovative approaches needed in dynamic and complex adaptive systems.
References
BBC. (2013, March 11). Cyrus Cylinder: How a Persian monarch inspired Jefferson. BBC News. https://www.bbc.com/news/world-us-canada-21747567
Cyrus Cylinder. (n.d.). New world encyclopedia. Retrieved November 17, 2022, from https://www.newworldencyclopedia.org/entry/Cyrus_cylinder
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
Frye, R. N. (2022, October 21). Cyrus the Great. Encyclopedia Britannica. https://www.britannica.com/biography/Cyrus-the-Great
Machiavelli, N. (1993). The prince (1513). Wordsworth Editions.
Nemeth, C., Brown, K., & Rogers, J. (2001). Devil’s advocate versus authentic dissent: Stimulating quantity and quality. European Journal of Social Psychology, 31(6), 707–720.
Roberto, M. A. (2005/2013). Why great leaders don’t take yes for an answer: Managing for conflict and consensus. Wharton School Publishing/FT Press/Pearson Education, Inc.
Rogers, E. (1962/1983). Diffusion of innovations (3rd ed.). The Free Press.
Sullivan. (2011, September). Embracing complexity. Harvard Business Review. https://hbr.org/2011/09/embracing-complexity
Tuplin, C. J. (n.d.). Cyropaedia. In Britannica. Retrieved November 21, 2022, from https://www.britannica.com/biography/xenophon/legacy
Xenophon. (2001). The education of Cyrus (Trans. and annotated by Wayne Ambler). Cornell University Press.
Part VII
The Leader’s Role in Innovation and Implementation
Chapter 22
Innovation in Complex Adaptive Systems
“Any girl can be glamorous; all you have to do is stand still and look stupid.” – Hedy Lamarr (Blackburn, 2017, para. 1).
It’s 1940 in the Hollywood Hills. A party is underway, complete with all the glitz and glamour that would be expected from this era in Hollywood. Amid the clinks of cocktail glasses, two people are deep in conversation. One is a stunningly beautiful starlet, best known thus far for a controversial film produced in the 1930s; the other is a young American composer. In the starlet’s mind, the setting is perhaps light-years away from her former identity as the young wife of a controlling husband (Fig. 22.1). She was born Hedwig Eva Maria Kiesler in Vienna in 1913 or 1914. In 1933, at the age of 19, she married Friedrich Mandl, but he was reportedly very controlling. Legend has it she escaped to Paris, under the cover of darkness, dressed as a maid (Chodos, 2011). She moved to the United States in 1937 to pursue a career in Hollywood, under a contract with Metro-Goldwyn-Mayer. Her marriage did afford her one important thing, though—information. Mandl, more than a decade her senior, was an Austrian arms dealer, and Hedy, who was Jewish, “presided over her husband’s lavish parties, attended by Hitler and Mussolini among others, and was often present at his business meetings. As a result, despite her lack of formal education, Lamarr acquired a great deal of knowledge about military technology” (Chodos, 2011, para. 3). Included in these discussions were issues related to jamming and interference of radio-guided torpedoes (Fig. 22.2). It was at that fateful party in the Hollywood Hills in 1940 that composer George Antheil and actress Hedy Lamarr engaged in a pivotal conversation. Considered one of the most beautiful women in Hollywood, Hedy was “more than just a pretty face: she had a natural mathematical ability and lifelong love of tinkering with ideas for inventions” (Chodos, 2011, para. 5). In turn, Antheil was a gifted pianist and composer of complicated musical scores. Both Lamarr and Antheil had roots in Europe, so it is not surprising that amid this time of war and the potential for world domination from tyrannical forces, their conversation turned to how such forces might be defeated. They began “chatting about weapons, particularly radio-controlled torpedoes and how to protect them from jamming or interference. She realized that
Fig. 22.1 Hedy Lamarr Note. Bull (1940). Publicity photo of Hedy Lamarr for film Comrade X. Retrieved from https://commons.wikimedia.org/wiki/File:Hedy_lamarr_-_1940.jpg. Public domain
Fig. 22.2 George Antheil at the Piano Note. Boston Globe Archives (c. mid 1920s). George Antheil at the Piano. Retrieved from https://commons.wikimedia.org/wiki/File:Antheil at piano, mid 1920s.jpg. Public domain
“we’re talking and changing frequencies” all the time, and that a constantly changing frequency is much harder to jam” (Chodos, 2011, para. 7). According to Lemstra et al. (2011c), “their combined insights into technology and music generated the idea of changing the carrier frequency on a regular basis, akin to the changing frequency when striking another key on the piano” (p. 24). “Lamarr and Antheil presented their idea to the National Inventors Council, which had been established specifically to solicit ideas from the general public in support of US efforts in the Second World War” (Lemstra et al., 2011c, p. 24). On August 11, 1942, US Patent No. 2,292,387, entitled Secret Communications System, was issued to Hedy Lamarr and George Antheil. Though the technology remained
classified until 1981, we know it today as frequency hopping spread spectrum technology and the forerunner of applications such as Bluetooth (Lemstra et al., 2011c). “They subsequently donated their patent to the US military as a contribution to the war effort” (p. 24). In their book, Lemstra et al. (2011c) argue that the history of the use of spread- spectrum technology from military operations to ubiquitous civilian product applications is a perfect case study for innovation. “The major challenge is the application of the technology in products that are affordable in a commercial rather than a military setting” (p. 42). They contend the trajectory of wireless technology is initially “a clear case of product innovation, by firms applying a new technology to provide more flexible solutions to their clients, through the application of wireless data communication and wireless data capturing” (p. 43), but signal relatively early in their book that they will be introducing the complexities of human behavior that are relevant to the current topic, with “an increasingly important role emerging for the end user in the innovation process” (p. 43). Innovation is a “human phenomenon,” according to Lemstra et al. (2011b), that in more recent times “has been recognized as the essential driver for continued economic growth and social development … For this reason, researchers have been interested in identifying why innovation occurs and under what conditions” (p. 9).
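The mechanism Lamarr and Antheil patented, hopping the carrier across many channels according to a schedule shared only by the transmitter and receiver, can be illustrated with a short simulation. The sketch below is a minimal illustration only, not a reproduction of the patent or of anything in Lemstra et al.; the slot count, shared seed, and single fixed-frequency jammer are assumptions chosen for clarity, while the 88-channel hop set echoes the piano-key count commonly cited for the 1942 patent. It shows why a jammer parked on one frequency corrupts only a small fraction of a hopping transmission.

```python
import random

# Illustrative parameters (assumptions, apart from the 88 channels noted above).
NUM_CHANNELS = 88     # size of the hop set
NUM_SLOTS = 1_000     # time slots in the simulated transmission
SHARED_SEED = 42      # stands in for the synchronized hop schedule both ends share

def hop_sequence(seed, slots, channels):
    """Pseudo-random hop schedule derived from a shared secret (the seed)."""
    rng = random.Random(seed)
    return [rng.randrange(channels) for _ in range(slots)]

# Transmitter and receiver derive identical schedules from the shared secret,
# so the receiver always knows which channel to listen on next.
tx_hops = hop_sequence(SHARED_SEED, NUM_SLOTS, NUM_CHANNELS)
rx_hops = hop_sequence(SHARED_SEED, NUM_SLOTS, NUM_CHANNELS)
assert tx_hops == rx_hops  # synchronized hopping: the message gets through

# A jammer that does not know the schedule can only camp on a single channel.
jammer_channel = 17
jammed = sum(1 for ch in tx_hops if ch == jammer_channel)
print(f"Slots disrupted by a fixed-frequency jammer: {jammed}/{NUM_SLOTS} "
      f"(~{jammed / NUM_SLOTS:.1%})")
# Roughly 1/NUM_CHANNELS of the slots are hit, versus every slot if the signal
# sat on one fixed frequency for the entire transmission.
```

Because the receiver shares the schedule, it reassembles the message intact, which is essentially the property that later spread-spectrum applications such as Bluetooth exploit.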
What Is Innovation? Tolba and Mourad (2011) define innovation as “the new technical products, scientific knowledge, application methods, and tools that facilitate problem solving for potential adoption” (p. 2). I appreciate this focus on the problem-solving aspect, which relates to both situational assessment and decision making. They further highlight the role of perception and cognition when, citing several innovation researchers, they argue that “different adopters perceive and assess innovation in a variety of ways,” including “cognitive assessment of cost/benefits associated with innovations” (pp. 2–3). Tolba and Mourad credit Rogers (1962/1983/2003), who popularized the innovation process theory, with first suggesting “analysis of innovations should be made in the context of the potential adopter’s own perspective and situation; in other words, to emphasize the subjective nature of innovations” (Tolba & Mourad, 2011, p. 2). Lemstra et al. (2011b) cite Rogers’ (1962/1983/2003) definition of the development process of innovation, which includes “all the decisions, activities, and their impacts that occur from recognition of a need or problem, through research, development, and commercialization of an innovation by users to its consequences” (p. 9). They opine that this definition “suggests an underlying sequential process” (p. 9), a point with which I don’t necessarily agree, as sequential implies linearity. However, I do agree with their clarification of the role of multiple interacting actors, including users, manufacturers, suppliers, and others. “In our case, we consider innovation, the innovation process and the resulting technological developments as
the cumulative result of intentional behavior on the part of the actors” (p. 10). Here, we begin to see an exposition on the complex, multivariate nature of innovation. “According to Lundvall, an innovation system is constituted by ‘elements and relationships which interact in the production, diffusion and use of new, and economically useful, knowledge’” (Lemstra et al., 2011c, p. 43). They go further into the process of innovation as part of a complex, sociotechnical system in highlighting their agreement with Boudon (1981), “Although we take it as a central theme that actors’ behaviour is intentional, we submit that actors do not act in isolation but are socially embedded, and are part of ‘systems of interaction’ that influence their behaviour (Boudon, 1981)” (Lemstra et al., 2011c, p. 13). Whether intentional or not, this is a good description of the role of human knowledge and behavior within a complex adaptive system, focusing particularly on the social and organizational cultural aspects. Lemstra et al. (2011c) highlight the idea that innovation “is considered a social system, the central activity being constituted by learning as a social activity involving interactions between people. The innovation system is also dynamic, characterized by positive feedback and reproduction (Lundvall, 1992)” (p. 43). Moreover, they address the importance of a learning culture in innovation as An interactive learning loop is established and a process of cumulative innovation will be the result … The ‘habits of thought’ and ‘habits of action’ define the accepted ways that economic agents undertake search processes and exploit the learning trajectories for further development (van der Steen, 2001, p. 14). (Lemstra et al., 2011b, p. 13)
In particular, Lemstra et al. (2011b) point out the “importance of teamwork,” which they argue “shows clearly how knowledge that is accumulated and embodied in individuals migrates, combines and recombines as a result of people moving from one job or function to another and cooperating as part of a network of professionals” (p. 6). They further indicate “the exploration and analysis of the dynamic development process of Wi-Fi … includes a variety of explanatory variables, such as the technology, laws and regulations, values and norms, the strategies of the various forms, and the like” (p. 6). To further elucidate a sociocultural systematic framework with respect to innovation, Lemstra et al. write, “The behavior of actors concerning an innovation process is influenced largely by the institutional structures in their environment, such as laws and regulations” (p. 7). This would be akin to only part of the structure of an organizational culture (e.g., as demonstrated in the HFACS taxonomy; see Chap. 17). However, Lemstra et al. do recognize “actors interact not only with the structures in their environment but also with each other. In doing so they share ideas and they learn, but they also compete and try to control the behaviour of others” (p. 7). That one sentence incorporates the elements of both a learning culture and the foundations of game theory. Finally, they incorporate a principle introduced toward the beginning of this book: the importance of understanding the role of behavior in complex adaptive systems. “To explore and analyse the dynamics of Wi-Fi, we need to know about the behavior of the different actors involved” (p. 7).
Diffusion of Innovation In 2018, Dr. Dixon Chibanda delivered a TED talk entitled Why I Train Grandmothers to Treat Depression. One of only 12 psychiatrists in Zimbabwe, a country of approximately 15 million, he was deeply shaken by the suicide of one of his young patients, who had been unable to access mental health care during a crisis due to her rural location. This tragic event spurred him to consider new ways to meet the mental health demands in his country, where 70% of the population live below the poverty line (Wallén et al., 2021). In 2006, Chibanda had the idea for a win-win. He would train grandmothers. Grandmothers are trusted members of the community in Zimbabwe, and many of them have already donated their time as community health workers (Lewis, 2019). The pilot program, which has come to be known worldwide as the “friendship bench” (Fig. 22.3), trains lay health workers to use cognitive behavior therapy (CBT) and problem-solving therapy (PST) (Chibanda et al., 2015). Along with growing quantitative and qualitative evidence that the program both reduces depression symptoms in those seeking intervention and increases a sense of meaningfulness in those delivering treatment, Dr. Chibanda’s moving TED talk has inspired similar programs to spread globally. There are already friendship benches in Canada, the United States, and Qatar, and programs are slated to begin in the United Kingdom.
Fig. 22.3 Friendship Bench Grandmother Note. Juta (2022). Friendship Bench Grandmother and Client. Retrieved from https://commons.wikimedia.org/wiki/file:Friendship-Bench-Zimbabwe.jpg. CC BY-SA 4.0
American sociologist Everett Rogers introduced the concept of diffusion of innovation in 1962, which he defined as “the process by which innovation is communicated through certain channels over time among the members of a social system” (Rogers, 1962/1983/2003, p. 5). The process involves a passing of not only the knowledge of the innovation but the attitudes toward it. “This process consists of a
series of actions and choices over time through which an individual or an organization evaluates a new idea and decides whether or not to incorporate the new idea into ongoing practice” (Rogers, 1962/1983/2003, p. 163). Registered nurse, Dr. June Kaminski, in her article published in the Canadian Journal of Nursing Informatics, succinctly outlines the evolution of innovation theory. Beginning with French sociologist Gabriel Tarde’s now famous S-curve, developed in 1903, Kaminski defines diffusion of innovation as “the process that occurs as people adopt a new idea, product, practice, philosophy, and so on” (p. 1). According to Kaminski (2011), in 1943, Ryan and Gross introduced the categories that describe the process. These categories are familiar to many today studying technology and innovation because they were popularized by the American sociologist Everett Rogers. They include innovators, early adopters (aka visionaries), early majority (aka pragmatists), late majority, and laggards (aka skeptics). The process essentially identifies a curve in which a very small percentage (about 2.5%), known as innovators, are characterized by an appreciation for technology, as well as risk-taking. Next, the early adopters, at about 13.5%, are also known as visionaries. They introduce us to the social aspect of the process of innovation adoption. They serve as “opinion leaders,” a term first introduced by Katz in 1957. These visionaries tend to be “trend setters … attracted by high risk/high reward projects” (Kaminski, 2011, p. 3). The S-shaped curve illustrates how the process starts off slowly. At a certain point, there is an acceleration of the process. It is at about this point that we begin to see the “early majority,” also called the “pragmatists,” at about 34%. The pragmatists are more risk-averse and budget-conscious than their visionary counterparts but are still considered opinion leaders. The “late majority,” at about 34%, are known as the “conservatives” and are much more “skeptical” and “cautious.” They are influenced by the growing trend, as well as the need to remain competitive. Finally, there are the “laggards,” at about 16%, also known as the “skeptics.” As the label of “skeptic” implies, they tend to be suspicious of technology in general and more comfortable with the status quo. However, as the label “laggard” implies, they do tend to adopt at some point. For this reason, there is sometimes a sixth category added: the “non-adopters” (Kaminski, 2011, p. 1). The social nature of the process is further highlighted by Kaminski (2011), who points out, The concept of peer networks is important in the Diffusion of Innovation theory. It is the critical mass achieved through the influence of innovators and early adopters who serve as opinion leaders that sparks the initial ‘take off’ point in the innovation adoption process. (p. 4)
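Rogers’ adopter categories are conventionally defined by slicing a roughly bell-shaped distribution of adoption times at one and two standard deviations from the mean, which is where the rounded figures of about 2.5%, 13.5%, 34%, 34%, and 16% come from; summing adoptions over time is what traces Tarde’s S-curve. The sketch below is a minimal illustration of that relationship only, not a reproduction of Rogers’ or Kaminski’s data; the mean adoption time of 5 years and standard deviation of 1 year are arbitrary assumptions.

```python
from statistics import NormalDist

# Assumption for illustration: adoption times are roughly normally distributed
# around a mean of 5 years with a standard deviation of 1 year.
adoption_time = NormalDist(mu=5.0, sigma=1.0)

# Cutting the bell curve at 1 and 2 standard deviations on either side of the
# mean reproduces the familiar adopter-category shares.
categories = {
    "innovators (earlier than -2 SD)": adoption_time.cdf(3.0),
    "early adopters (-2 SD to -1 SD)": adoption_time.cdf(4.0) - adoption_time.cdf(3.0),
    "early majority (-1 SD to mean)":  adoption_time.cdf(5.0) - adoption_time.cdf(4.0),
    "late majority (mean to +1 SD)":   adoption_time.cdf(6.0) - adoption_time.cdf(5.0),
    "laggards (later than +1 SD)":     1.0 - adoption_time.cdf(6.0),
}
for name, share in categories.items():
    print(f"{name:34s} {share:5.1%}")  # ~2.3%, 13.6%, 34.1%, 34.1%, 15.9%

# Cumulative adoption over time traces the S-curve: a slow start, rapid
# acceleration once the early majority joins, then a long tail of laggards.
for year in range(11):
    print(f"year {year:2d}: cumulative adoption ~ {adoption_time.cdf(year):6.1%}")
```

Kaminski’s “take off” point corresponds to the steep middle of that cumulative curve, where the early majority begins to adopt.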
Tipping Points in Innovation At the turn of the twenty-first century, Malcolm Gladwell, who is very adept at capturing seminal ideas, summarized three key features of this “critical mass,” or sudden spread of innovative social ideas, in what he called the “tipping point.” Gladwell
(2000) acutely described three different prominent forces that accounted for “tipping” a social epidemic: the law of the few, the stickiness factor, and the power of context. All of these have been elucidated in the context of the COVID pandemic, but they also help explain the spread of ideas through social media. With respect to the law of the few, Gladwell described three types of people who could “tip” an idea and make it, in the words of current social media parlance, go “viral.” The mavens are those who present a lot of data from a standpoint of expertise, the connectors are those who, as the word implies, have many connections (or “followers” in the social media world), and the salesmen are closely aligned with what the social media world would call “influencers.” Along with the influential few, Gladwell describes the “stickiness factor,” which is essentially the marketability or “catchiness” of an idea, and finally, the power of context describes the environment in which an idea might spread. The impact of all three on human behavior during the COVID pandemic, as well as their interaction with social media, is both evident and exacerbated. The fact that Gladwell was writing before the widespread impact of social media, however, demonstrates the pervasiveness of these principles. Rogers argued that uncertainty is inherent to the process of innovation, and it is precisely this uncertainty that distinguishes it. Recognizing that comfort with uncertainty is culturally related, he highlights the ideas of Bordenave (1976), who pointed out that most innovation models at that time were western-centric and failed to accurately represent the role of the sociocultural environment, beyond even access to technology or effective communication. “The social structure of developing nations has been found to be a powerful determinant of individuals’ access to technological innovation; often, structural rigidities must be overcome before the communication of innovations can have much effect” (Rogers, 1962, p. 125). “Drawing on examples from agriculture and industry, as well as health, Rogers conceptualized the spread, or diffusion, of innovations as a social process with multiple determinants well beyond the evidence supporting the innovation itself” (Bauer & Kirchner, 2020, p. 3). It is precisely those “multiple determinants” that place implementation squarely in the domain of complex adaptive systems. Lemstra et al. (2011b) present a theoretical framework for a sociotechnical system that is in many ways reminiscent of the Human Factors Analysis and Classification System (HFACS; Shappell & Wiegmann, 2000; see Chap. 17), demonstrating that underlying truths about human behavior within complex adaptive systems will be rediscovered and show some consistency across various domains, from aviation to medicine to advanced technology. They credit the basis of the model to Williamson (1998) and Koppenjan and Groenewegen (2005). The model includes five layers. At the lowest level (Layer 0) is technology and the innovations of the individual actors. Layer 1 introduces an element of game theory, as it includes “actors and games in sociotechnical systems … and their interactions aimed at creating and influencing (infrastructural) provisions, services and outcomes” (p. 7). Layer 2 focuses on more interpersonal elements, with “formal and informal institutional arrangements.” Layers 3 and 4 can be seen to focus on the formal and informal rules, laws, and norms, respectively.
There are notable similarities with the
HFACS model in terms of the interaction of human actors within the overall sociotechnical system, but the model presented by Lemstra et al. (2011b) lacks the focus on the leader’s role in impacting the sociocultural system and the “trickling down” effect of such on behavior, safety, and performance that is an important focus of the HFACS model. Nonetheless, both models serve to show how such tools can be used to analyze organizational performance in complex systems. Lemstra et al. point out that “the innovation trigger at the basis of the development of Wi-Fi can be construed as a ‘parameter shift’ occurring at layer 3, triggering changes in institutional arrangements, and creation of new arrangements between the actors involved (2011b, p. 8). As indicated, innovation systems build upon prior knowledge and social interactions in an open, demand-driven, learning culture. While we may think of innovation in terms of a technology-centered conceptual definition, it is a process, and a process that involves human behavior and social organization. Tolba and Mourad (2011) highlight the important socio-communication aspects, beyond the technical, particularly with respect to communicating information about new products. This, in fact, indirectly emphasizes both the role of leaders in the decision-making process and the power of information. While French and Raven’s (1959) model later included a sixth base of power as information, it is arguably defined in a manner too close, conceptually and operationally, to their model’s expert base of power. However, information power, if conceptualized as access and dissemination of information, is undoubtedly a critical source of power, particularly in the domains of science and technology. The science and technology gap that has been particularly illuminated with the advent of wireless technologies, as well as medical advancements, serves to emphasize this point. Tolba and Mourad effectively connect the two dimensions of technical availability and social aspects of innovation, proposing that “Relative advantage and complexity represent the functional dimension of innovation; while compatibility, trial-ability, and observability represent the ‘social dimension’” (p. 3). They also highlight the role of leadership, indicating, according to Tolba and Mourad (2011), “lead users are defined as being in advance of the market in terms of their needs, motivations, and qualifications” (p. 4). They cite empirical research to support their argument that “investigating the influence of lead users on accelerating diffusion rate offers far greater benefits in comparison to the traditional innovation diffusion model” (p. 5). Braithwaite et al. (2018) argue “complexity-informed approaches to implementation” may be critical, as complexity science “highlights the dynamic properties of every CAS and the local nature of each system’s culture” (p. 8). They further emphasize that innovation and implementation do “not unfold in a static and controlled environment awaiting the attention of top-down change agents; it takes place in settings comprised of diverse actors, with varying levels of interest, capacity and time, interacting in ways that are culturally deeply sedimented” (p. 7). The key is understanding the process as both iterative and interactive.
The Social Benefits of Diffusion of Innovation: Wireless Leiden Project Lemstra et al. (2011d) present many case studies in their book, The Innovation Journey of Wi-Fi: The Road to Global Success. Many of these cases highlight not only the great achievements of innovation but the implementation of these ideas among social networks. One such case is that of a neighborhood in the Netherlands that worked together to bring an inclusive Wi-Fi network to a relatively remote area. The Wireless Leiden project began “as the result of an initiative by Jasper Koolhaas, a technical director at one of the early ISPs in the Netherlands, together with Marten Vijn, who met at a ‘Linux install party’ organized by HCC-Leiden, the local branch of the Hobby Computer Club, in October 2001” (p. 176). Over the course of a few weeks, more volunteers were recruited, “all with very diverse backgrounds” (p. 176). Lemstra et al. relay the words of Huub Schuurmans, the scientist who founded the Netherlands Office for Science and Technology, who explained, “Through the organization of volunteers, the Wireless Leiden network is strongly embedded in the economic and social structure of the city of Leiden” (p. 177). As of September 2020, the Wireless Leiden website, One World Connected, describes itself as a Nonprofit provider of open-source, wireless networking since 2002 in the Netherlands…run by volunteers and financed by local sponsors. Wireless Leiden presents a case that it is possible to build a community wireless network with the use of relatively low-cost technologies and open source software despite the technical problems, loosely-connected volunteers, or limited government support. (Wireless Leiden, 2020, para. 1)
The website goes on to state that “according to an Internet inclusivity survey conducted by The Economist, the Netherlands score high overall with regards to Internet inclusivity and availability,” which they credit, in part, to their community-based Internet project. Melody (2011) reports, The traditional models used in attempting to implement universal access policies have involved subsidies intended to cover the higher costs of serving rural areas … Historically … the incumbent national operator is expected to extend its network to provide universal access to basic services, and to cover the costs by charging uniform prices to all users regardless of location. In most countries, attempts to implement this model have failed dramatically. (p. 201, italic emphasis mine)
Could it be because such an approach tries to apply a relatively simple, univariate, linear model to a complex sociotechnical adaptive system? On the other hand, as reported by the Wireless Leiden’s nonprofit website, Wireless Leiden has no government affiliation or subsidy, but instead partners with patrons in order to provide connectivity at no cost to most of its users. It relies entirely on volunteer labor and expertise, and makes extensive use of community resources, corporate sponsorships, and institutional partnerships. (Wireless Leiden, 2020, para. 6)
While the project coordinators acknowledge the obvious technical, logistical, and security challenges, they also highlight what will continue to be keys to successful future innovation. These include not only sociability and community involvement,
but also diversity of thought, experience, and expression. All are critical to both science and innovation.
The Nepal Wireless Networking Project There are similar themes in the case study of the Nepal Wireless Networking Project (NWNP). Sæbø et al. (2014), in Nepal Wireless Networking Project: Building Infrastructure In the Mountains From the Ground Up, tell the story of how Mahabir Pun, a science teacher in the remote village of Nangi, “despite lack of access to proper equipment, lack of technical competence and the difficult terrain in the Himalayan mountains … succeeded in bringing internet access to these villages, contributing to improvements in education, health services, and income generating activities” (p. 241). It all started when I just wanted to check my email. (Mahabir, as cited in Sæbø et al., 2014, p. 243)
Mahabir had attended the University of Nebraska Kearney on a scholarship and received his Bachelor’s in Science Education in 1992. He returned to Nepal and began teaching at the Himanchal Higher Secondary School in Nangi, a village “so remote that it took a 5-hour hike to the nearest road to catch a bus for the 4-hour ride to the nearest big town of Pokhara” (Sæbø et al., 2014, p. 244). In 2001, Mahabir “stunned by the yawning gap between the information and communication technology he was used to at Nebraska and the stone age level he now had … had an inspiration and on an impulse, sent a message to the British Broadcasting Corporation (BBC)” (pp. 244–245). The BBC broadcast the message, which led to a chain reaction of brainstorming, donations, and volunteerism. “What started as a trickle” (p. 245) rapidly grew, but by 2001, Mahabir knew “he needed several other individuals and organizations” (p. 245). With the help of “like-minded actors” (p. 245), “Mahabir established a non-governmental organization (NGO) called E-Networks Research and Development (ENRD)” (Sæbø et al., 2014, p. 245). In 2002, along with the people in the community, the NGO, international volunteers and a technical team from an internet service provider (WORLDLINK), conducted a pilot test … They used antennae and dishes, donated by international volunteers … all the equipment was carried and installed by the villagers themselves. Through trial and error, they succeeded in setting up a wireless connection. (p. 245)
By 2009, 14 villages were connected through wireless technology, and by 2011, this had extended to approximately 150 villages, with involvement locally, as well as internationally. These innovative efforts and the benefits they brought resulted in many prestigious awards for Mahabir Pun, “including the Overall Social Innovations Award (2004) and, in 2007, an honorary degree as Doctor of Humane Letters (2007) from the University of Nebraska, Kearney, and the Magsaysay Award (the Asian equivalent of the Nobel Prize)” (p. 248). As of 2022, Mahabir Pun is the Chairman of the National Innovation Centre, whose vision and mission, according to their
website, is to “make Nepal an economically prosperous nation through research, innovation, and technology {by seeking} creative minds from all over the nation and provide them a platform to take their methods, ideas, or products out to the marketplace/life for nation development” (National Innovation Center, 2022, para. 1). The website lists multiple innovation projects for advancing Nepal, from healthcare to education to agriculture and, of course, wireless technology. According to an article reported in Nepal’s Republica in August of 2020, when the government of Nepal was not meeting the PPE demands of Covid-19, Mahabir Pun’s nonprofit National Innovation Centre supplied over 90 protective suits to medical personnel. According to the article, the NIC had “been collecting funds from the private sector to produce the PPEs. After helping hands suggested to produce PPEs for medics – working in the front line – the NIC started producing the protective suits to be distributed for free to health workers”. The report added that “one of the staffers at a private hospital in Kathmandu who received a few PPE sets from NIC told Republica that the quality of the protective suits produced by NIC is much better than the PPE sets available on the market” (Republica, 2020). The story of the Nepal Wireless Networking Project echoes many of the elements that led to successful community innovation in the Netherlands, despite the vast differences in geography and culture: brainstorming, innovation, trust, community, volunteerism, and corporate sponsorship. In addition, both cases focused upon various strategic decision-making tools that can enhance organizational performance, as well as innovation.
A Note of Caution with Respect to Case Studies Case studies are widely used in business and leadership classes, and indeed, they can be highly effective instruments for enhancing classroom learning. However, as with any source of information, one must exercise caution with respect to potential cognitive biases. With case studies, the availability bias is one that readily comes to mind. Once again, tools should be consciously applied to help ensure we are questioning and evaluating through multiple lenses and “what-if” situations. The Devil’s Advocate approach can be useful in ensuring we consider possible obstacles to applying these case studies to different sociocultural environments. Likewise, brainstorming could be useful in outlining the strategic strengths, weaknesses, opportunities, and threats. One of the biggest mistakes previously successful leaders make is applying a model that worked in the past without considering how it would potentially adapt, or fail to adapt, to a new situation. For instance, in trying to apply the community mental health model devised by Dr. Dixon Chibanda, one would have to ask whether the unique culture and community of Zimbabwe, particularly with respect to perceptions of elders, would translate to a Western and urban culture in the United States. Indeed, modifications have been made to the program in Canada, where it is more likely to be a peer than an elder with whom one sits on the bench.
Lemstra et al. (2011a), in one of the final chapters of their book, draw a cautionary conclusion that reflects a perhaps too often dismissed maxim of complex adaptive systems, and of the study of them, a maxim indicated succinctly in Lissack's (1999) discussion of uncertainty. As they put it, "Provided our expectations about forecasts are realistic, and we do not look to them for firm predictions but simply to provide indications, they can be reasonably useful" (p. 335). Such an awareness was not lost on Mahabir Pun regarding the Nepal wireless project. According to Sæbø et al. (2014), an interviewer presented a situation that could have invited such over-generalization of a successful model: "You have done well, Mahabir. NWNP is spreading and the villages are now part of the greater world. Is there anything you are worried about?" (p. 252). In response, Mahabir stated,
We have now reached 150 villages and we want to reach more. Can we do it? We have done well in many villages, but there have been others where things have not gone well. You know these villages may look the same, but they are not so. Not all are from one community, some seem to want the things we offer, others not that much. Some of the groups in some villages – you know the mother's society, the youth society – are more enthusiastic than in other villages. (p. 253)
Perhaps, indeed, discretion is the better part of valor, and timing is everything. These adages themselves should be questioned and do not apply in every situation. In general, however, while it is important for leaders to be decisive and to "strike while the iron is hot," so to speak, it is prudent to exercise caution and ensure good situation assessment prior to making any decision. There are systematic tools, as outlined in this book (though by no means exhaustively), that can aid in limiting cognitive bias.
Chapter Summary
As Rossel and Finger (2011) conclude, "We are in a knowledge dynamic … in which a series of research programmes can appear successively, and concurrent paradigms can unfold in parallel" (p. 337). They postulate that technology innovation "involves taking account not only of growth figures but also of behaviors and choices made by different sorts of users … that are constantly helping the technology to change" (p. 340). Channeling Gladwell's (2000) Tipping Point, Rossel and Finger (2011) conclude,
The variety of reasons for which a linear innovation process may fail to succeed is much more important than just suggesting that there is a severe dropout along the technology-push line of sight – i.e., from idea to product. User-driven shaping influences, risks identified by citizens or their representative, the raising of acceptance issues at any moment … among other factors, may all interfere with the purely "Rogerian" destiny of a given technology. (p. 342)
Of course, this statement, though true, does not even begin to touch on all the possible interacting environmental influences that were introduced in 2019 by a microscopic virus we all came to recognize as a spiky sphere, harmless in appearance but devastating in its ripple-like ramifications. The nature of our attempts to deal with it scientifically, politically, and socially serves as a poignant, real-world example of Lemstra et al.'s (2011a) position that,
The linear model of innovation is exceptionally rare in practice, and the initial sources of innovation, power struggles in the Porter sense or market and industry battlefield provide us more with indications of complexity than with clues as to the future of a particular technological concern. (p. 342)
References
Bauer, M. S., & Kirchner, J. (2020). Implementation science: What is it and why should I care? Psychiatry Research, 283, 112376. https://doi.org/10.1016/j.psychres.2019.04.025
Blackburn, R. (2017, December 22). The secret life of Hedy Lamarr. Science. https://www.science.org/doi/abs/10.1126/science.aar4304
Bordenave, J. D. (1976). Communication of agricultural innovations in Latin America: The need for new models. Communication Research, 3(2), 135–154. https://doi.org/10.1177/009365027600300203
Boston Globe Archives (c. mid-1920s). George Antheil at the Piano. Retrieved from https://commons.wikimedia.org/wiki/File:Antheil at piano, mid 1920s.jpg. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1927.
Boudon, R. (1981). The logic of social action: An introduction to sociological analysis. Routledge & Kegan Paul.
Braithwaite, J., Churruca, K., Long, J. C., Ellis, L. A., & Herkes, J. (2018). When complexity science meets implementation science: A theoretical and empirical analysis of systems change. BMC Medicine, 16(1). https://doi.org/10.1186/s12916-018-1057-z
Bull, C. (1940). Publicity photo of Hedy Lamarr for film Comrade X. Retrieved from https://commons.wikimedia.org/wiki/File:Hedy_lamarr_-_1940.jpg. This work is in the public domain in the United States because it was published in the United States between 1927 and 1977, inclusive, without a copyright notice.
Chibanda, D., Bowers, T., Verhey, R., Rusakaniko, S., Abas, M., Weiss, H. A., & Araya, R. (2015). The Friendship Bench programme: A cluster randomised controlled trial of a brief psychological intervention for common mental disorders delivered by lay health workers in Zimbabwe. International Journal of Mental Health Systems, 9(1), 1–7. https://doi.org/10.1186/s13033-015-0013-y
Chodos, A. (2011, June). June 1941: Hedy Lamarr and George Antheil submit patent for radio frequency hopping. APS News, 20(6). This Month in Physics History (aps.org)
French, J. R., & Raven, B. (1959). The bases of social power. In D. Cartwright (Ed.), Studies in social power (pp. 150–168). Research Center for Group Dynamics, Institute for Social Research, University of Michigan. https://isr.umich.edu/wp-content/uploads/historicpublications/studiesinsocialpower_1413_.pdf
Gladwell, M. (2000). The tipping point: How little things can make a big difference. Little, Brown.
Juta. (2022). Friendship bench grandmother and client [Photograph]. Retrieved from https://commons.wikimedia.org/wiki/file:Friendship-Bench-Zimbabwe.jpg. CC BY-SA 4.0. https://creativecommons.org/licenses/by-sa/4.0/deed.en
Kaminski, J. (2011). Diffusion of innovation theory. Canadian Journal of Nursing Informatics, 6(2), 1–6. https://cjni.net/journal/?p=1444
Lemstra, W., Hayes, V., & Groenewegen, J. (2011a). The innovation journey of Wi-Fi: The road toward global success. Cambridge University Press.
Lemstra, W., Groenewegen, J., & Hayes, V. (2011b). The case and the theoretical framework. In W. Lemstra, V. Hayes, & J. Groenewegen (Eds.), The innovation journey of Wi-Fi: The road to global success (pp. 3–20). Cambridge University Press.
Lemstra, W., Johnson, D., Tuch, B., & Marcus, M. (2011c). NCR: Taking the cue provided by the FCC. In W. Lemstra, V. Hayes, & J. Groenewegen (Eds.), The innovation journey of Wi-Fi: The road to global success (pp. 21–52). Cambridge University Press.
Lemstra, W., Van Audenhove, L., Schuurmans, H., Vign, M., & Mourits, G. (2011d). Wi-Fi based community networks. In W. Lemstra, V. Hayes, & J. Groenewegen (Eds.), The innovation journey of Wi-Fi: The road to global success (pp. 175–194). Cambridge University Press.
Lewis, A. (2019, September 3). How Zimbabwe's friendship bench is going global. Retrieved from https://mosaicscienc.com/story/How Zimbabwe's Friendship Bench is going global.
Lissack, M. R. (1999). Complexity: The science, its vocabulary, and its relation to organizations. Emergence, 1(1), 110–112. https://doi.org/10.1207/s15327000em0101_7
Lundvall, B. A. (1992). National systems of innovation: Towards a theory of innovation and interactive learning. The Learning Economy and the Economics of Hope (oapen.org).
Melody, W. (2011). Wi-Fi in developing countries: Catalyst for network extension and telecom reform. In W. Lemstra, V. Hayes, & J. Groenewegen (Eds.), The innovation journey of Wi-Fi: The road to global success (pp. 197–229). Cambridge University Press.
National Innovation Center. (2022). Retrieved November 19, 2022, from https://nicnepal.org/about/
Republica. (2020, August 17). Mahabir Pun's NIC scrambles to meet high demand for PPEs. Republica. Retrieved November 19, 2022, from https://myrepublica.nagariknetwork.com/news/mahabir-pun-s-nic-scrambles-to-meet-high-demand-for-ppes/
Rogers, E. (1962/1983/2003). Diffusion of innovation (1st, 3rd, 5th ed.). The Free Press.
Rossel, P., & Finger, M. (2011). Exploring the future of Wi-Fi. In W. Lemstra, V. Hayes, & J. Groenewegen (Eds.), The innovation journey of Wi-Fi: The road to global success (pp. 331–366). Cambridge University Press.
Ryan, B., & Gross, N. C. (1943). The diffusion of hybrid seed corn in two Iowa communities. Rural Sociology, 8(1), 15.
Sæbø, Ø., Sein, M. K., & Thapa, D. (2014). Nepal Wireless Networking Project: Building infrastructure in the mountains from ground up. Communications of the Association for Information Systems, 34(1), 241–246. https://doi.org/10.17705/1CAIS.03411
Shappell, S. A., & Wiegmann, D. A. (2000). The human factors analysis and classification system–HFACS (DOT/FAA/AM-00/7). Embry-Riddle Aeronautical University. https://commons.erau.edu/publication/737
Tolba, A. H., & Mourad, M. (2011). Individual and cultural factors affecting diffusion of innovation. Journal of International Business and Cultural Studies, 5, 1.
Van der Steen, M. (2001). Understanding the evolution of national systems of innovation: A theoretical analysis of institutional change. Paper presented at the European Association for Evolutionary Political Economy conference 'The information society: understanding its institutions interdisciplinary', Maastricht, 10 November.
Wallén, A., Eberhard, S., & Landgren, K. (2021). The experiences of counsellors offering problem-solving therapy for common mental health issues at the youth friendship bench in Zimbabwe. Issues in Mental Health Nursing, 42(9), 808–817. https://doi.org/10.1080/01612840.2021.1879977
Williamson, O. E. (1998). Transaction cost economics: How it works; where it is headed. De Economist, 146(1), 23–58. https://doi.org/10.1023/A:1003263908567
Wireless Leiden. (2020, September 1). Retrieved November 16, 2022, from https://1worldconnected.org/project/europe_communitynetwork_wirelessleidennetherlands/
Chapter 23
The Dark Side of Innovation
Acknowledging the Risks Associated with Technological Advancement
I'm probably violating some unwritten—or perhaps written—rule about ending a book on a negative note, but I feel it is a necessary consideration at this point. While we have hitherto focused on the benefits of innovation (references to Aldous Huxley's (1932) Brave New World aside), innovation, like any human endeavor, carries the potential for great good or great ill. Many of us can relate to the everyday struggle with our smartphones, for example. There are obviously numerous potential advantages to these small devices. They contain a library, literally at our fingertips. They can immensely increase the speed and distance with which we can communicate with others—a feat that can, in fact, be life-saving. On the other hand, there have been a myriad of unintended and, to a large extent, unanticipated consequences, such as the mental health toll these devices have arguably taken on youth in particular (e.g., Keles et al., 2020; Marino et al., 2018). There is also the undeniable impact this technology has had on privacy, or the lack thereof. Further applications in the field of information security demonstrate why the seminal work of researchers such as Rasmussen and Reason continues to be relevant today, expanding into fields beyond the aviation domain for which it was originally developed.
The common link in all of these cases is, of course, that they involve human behavior and human error within complex adaptive systems. Just as with human error in other areas, security breaches and other vulnerabilities should be expected to follow similar accident paradigms. This is true even when security breaches are deliberate—someone has made a mistake and/or the system has broken down somewhere along the way, or the breach would not have occurred. It remains important to understand that "accidents, according to Resilience Engineering, do not represent a breakdown or malfunctioning of normal system functions, but rather represent the breakdowns in the adaptations necessary to cope with the real-world complexity" (Woods et al., 2010, p. 83).
Kraemer and Carayon (2007) refer to and draw upon the seminal studies of Reason (1990, 1995), as well as the foundational work of Rasmussen's (1982) human error taxonomy, in discussing the development of their "macroergonomic conceptual framework to identify and describe the work system elements contributing to human errors that may cause CIS vulnerabilities" (p. 144). According to Kraemer and Carayon, the Computer Science and Telecommunications Board of the National Research Council (2002) described "causes of deliberate attacks {in network security systems} as resulting from poor management or operational practices" (p. 144). Likewise, they indicate that a study by Whitman (2003) provided findings that "support recognition of human error and failures as a significant area for consideration in the field of CIS {Computer Information Systems}" (p. 144). The authors go on to describe that "the interplay of these elements may create conditions that contribute to human error and violations {which} may result in security vulnerabilities and sometimes result in security breaches if these vulnerabilities are exploited" (p. 144).
Many security experts, whether in physical or cyber security, have observed that an understanding of the human element in the system is as critical as, if not more critical than, an understanding of the technical aspects. The weakest links in security systems are humans, whether those humans are the designers or the end users (Eldar, 2010; Harris, 2002). This is particularly true in the case of insider threats, common to problems in information security, physical security, and terrorism. "It is recognised that insiders pose security risks due to their legitimate access to facilities and information" (Colwill, 2009, p. 186). Likewise, improvements in security must take into account existing knowledge of human factors, as well as the tools that have come with the trade.
Chapter Summary
Mitnick, a former hacker turned security consultant, argued in his book, The Art of Deception: Controlling the Human Element of Security (Mitnick & Simon, 2003), that "security is not a technology problem—it's a people and management problem" (p. 4). Mitnick and Simon further indicate that "left unaddressed the most significant vulnerability {is} the human factor" (p. 8). What Edwards (1985) first described regarding the central role of the human operator in his SHEL model nearly half a century ago is echoed in these words. Leaders making strategic decisions would be well advised to consider this maxim, lest they fall prey to the uncertainties of seeing things "through a glass, darkly" (1 Corinthians 13:12, 2022). The center of any complex adaptive system is, indeed, the human factor.
References
1 Corinthians 13:12. (2022). King James Bible. https://www.kingjamesbibleonline.org (Original work published in 1611).
Colwill, C. (2009). Human factors in information security: The insider threat–Who can you trust these days? Information Security Technical Report, 14(4), 186–196. https://doi.org/10.1016/j.istr.2010.04.004
Edwards, E. (1985). Human factors in aviation. Aerospace, 12(7), 20–22.
Eldar, Z. (2010). The human factor in aviation security. Journal of Airport Management, 5(1), 34–39.
Harris, D. H. (2002). How to really improve airport security. Ergonomics in Design, 10(1), 17–22. https://journals.sagepub.com/doi/pdf/10.1177/106480460201000104
Huxley, A. (1998/1932). Brave new world. Perennial Classics.
Keles, M., McCrae, N., & Grealish, A. (2020). A systematic review: The influence of social media on depression, anxiety and psychological distress in adolescents. International Journal of Adolescence and Youth, 25(1), 79–93. https://doi.org/10.1080/02673843.2019.1590851
Kraemer, S., & Carayon, P. (2007). Human errors and violations in computer and information security: The viewpoint of network administrators and security specialists. Applied Ergonomics, 38(2), 143–154. https://doi.org/10.1016/j.apergo.2006.03.010
Marino, C., Gini, G., Vieno, A., & Spada, M. M. (2018). The associations between problematic Facebook use, psychological distress and well-being among adolescents and young adults: A systematic review and meta-analysis. Journal of Affective Disorders, 226, 274–281. https://doi.org/10.1016/j.jad.2017.10.007
Mitnick, K. D., & Simon, W. L. (2003). The art of deception: Controlling the human element of security (1st ed.). Wiley.
Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4, 311–333. https://doi.org/10.1016/0376-6349(82)90041-4
Reason, J. (1990). Human error. Cambridge University Press.
Reason, J. (1995). Understanding adverse events: Human factors. BMJ Quality & Safety, 4(2), 80–89. https://doi.org/10.1136/qshc.4.2.80
Whitman, M. E. (2003). Enemy at the gate: Threats to information security. Communications of the ACM, 46(8), 91–95. https://doi.org/10.1145/859670.859675
Woods, D., Dekker, S., Cook, R., Johannesen, L., & Sarter, N. (2010). Behind human error (2nd ed.). Ashgate Publishing. https://doi.org/10.1201/9781315568935
Index
A
Automatic vs. controlled processing, 72–74
B
Behaviorism, 63, 64, 75
Biased assimilation, 118–119
Book introduction, 3–5
Bottom-up processing, 90, 186
C
Chilean mine rescue, 218
Chunking, 69, 86, 87, 89, 140
Cognitive bias, 58, 105–115, 119, 120, 143, 179, 209, 216, 218, 221, 238
Cognitive models, 88, 143, 155, 157–162
Complex adaptive systems (CAS), 3–4, 7–11, 13–15, 17, 24, 25, 27–32, 37–44, 47–51, 56–58, 74, 75, 88, 95, 99–100, 106, 145, 149, 151, 155, 157–159, 161–162, 167–169, 171, 179, 181, 186, 187, 202, 210–212, 216, 224, 227–239, 242, 243
Complexity science, 7, 8, 11, 27–32, 43, 44, 48–50, 58, 179, 234
Confirmation bias, 87, 108, 112–114, 118, 119, 143, 181
Conflict, 117, 118, 121, 122, 141, 150, 202–211, 217, 218, 221, 223
COVID, 57, 185, 233
Crew resource management (CRM), 4, 137, 172, 182, 195–198, 210
Cuban Missile Crisis, 216, 219, 220, 222, 223
Cynefin, 16, 179, 180, 184–187, 201
D
Data vs. information, 68–69
Decision making, 3, 4, 9, 10, 16, 22–24, 30, 57, 65, 66, 87, 88, 90, 93–101, 105–115, 120, 122, 129–131, 133–142, 145, 149, 150, 155–159, 161, 162, 170, 179–187, 193, 195–197, 201, 203–205, 208–212, 216–224, 229, 234
Distributed decision making, 144, 145, 183, 184, 210, 211
Double-loop learning, 202–205, 207
Douglass, F., 40–42
E
Edward's SHEL-L model, 168
Emergence, 15, 19–25, 27–30, 57, 181, 215
Ergonomics, 20, 22, 23, 25, 129, 167, 168
Expert, 4, 15, 19, 42, 50, 58, 73, 74, 80, 86, 87, 98, 122, 129–132, 135–141, 156–158, 171, 180, 181, 186, 187, 193–195, 202, 204, 220, 234, 242
F
Frankenstein, 53, 54
Friendship bench, 231
G
Gilbreth, F., 19, 20
Gilbreth, L.M., 19, 20
Groupthink, 49, 55, 56, 58, 215, 216, 221, 222
H
Heuristics, 87, 107–109, 113, 114, 129, 131, 185
History, 20, 47, 49, 56, 82, 98, 99, 111, 122, 158, 177, 196, 229
History "cognitive psychology", 67
Human error, 22–24, 47, 65, 68, 90, 93–100, 105–115, 156, 157, 159, 160, 169, 196, 202, 241, 242
Human factors, 3–5, 7, 15, 16, 19–25, 47, 66, 68, 70, 93, 95, 96, 98–100, 106, 108, 111, 114, 129, 139, 140, 142, 143, 145, 149, 150, 152, 156, 157, 159, 167, 168, 196, 202, 205, 206, 210, 242, 243
Human factors analysis and classification system (HFACS), 169–173, 182, 201, 210, 230, 233, 234
Hypothesis formulation, 39, 74
I
Information flow, 143, 201–203, 207, 210, 212, 216
Information security, 241, 242
Innovation, 4, 98, 140, 169, 182, 184, 193, 194, 206, 207, 217, 227–239, 241–243
J
Judgement, 93, 117, 119, 129, 142, 159
L
Lamarr, H., 227, 228
Learning cultures, 4, 145, 202, 204–206, 217, 230, 234
M
Mary Kenneth Keller, 66
Multivariate, 3, 8, 10, 14, 15, 17, 29–31, 40, 43, 44, 58, 88, 111, 114, 149, 150, 152, 167, 181, 186, 216, 230
N
Naturalistic decision making (NDM), 129, 133–135, 138–140, 145, 155, 205, 206
Nepal wireless project, 238
Nonlinear systems, 9, 30
Normative decision making, 106, 135
O
OODA loop, 150–152
Organizational behavior, 9, 27–32, 157
Organizational decision making, 10, 30, 122, 149, 173, 204
Organizational learning, 201, 206–207
P
Partisan perceptions, 118–120
Perception, 42, 47, 65, 67, 68, 76, 79–90, 100, 117, 119, 149, 155, 156, 160, 162, 170, 171, 229, 237
Phantom limb, 82
Polarization, 117–122
Probability, 30, 38, 39, 47–51, 105–109, 114, 135, 144, 202
Prospect theory, 108–111
R
Recognition-primed decision making (RPD), 129, 131
Risk management, 99–100, 196
S
Scientific method, 37–44, 57, 65, 67, 186
Situational assessment, 155, 160, 229
Situational awareness, 71, 98, 112, 152, 155, 156, 159–162, 183, 194, 195, 216
Stages of decision, 160
Strategic decision, 106, 113, 178, 182, 204, 216–217, 243
Strategic decision making, 3, 5, 73, 134, 179, 181, 182, 186, 187, 202, 206, 207, 215–224, 237
Swiss cheese model, 96, 97
System 1, 113, 114, 158
System 2, 113, 114, 158
Systems engineering, 3, 5, 14, 16, 19–25, 95, 98, 99, 168, 169
Systems thinking, 13–17
T
Team decision making, 145, 183, 195, 198, 209
Top-down, 80–81, 84, 90, 152, 186, 234
Triple-loop learning, 205–206
U
Uncertainty, 8, 27, 38, 42, 44, 47–51, 56, 68, 69, 76, 84, 89, 105, 109, 110, 135, 141, 149, 151, 158, 178, 184, 185, 187, 194, 209, 233, 238, 243
Uvalde, 99, 142, 144, 145
V
Vigilance decrement, 73, 93–95, 101, 157, 160
W
Wisdom of crowds, 57, 58, 186
Y
Year without a summer, 53
Z
Zeitgeist, 24, 49, 55, 56