Accidents and Disasters: Lessons from Air Crashes and Pandemics (ISBN 981199983X, 9789811999833)

This book deals with the contemporary subject of perception of risk and its influence on accidents and disasters.


English Pages 161 [162] Year 2023


Table of contents :
Preface
Contents
About the Author
1 Introduction
References
2 Incidents, Accidents, and Unmitigated Disasters
2.1 Tenerife, Concorde, Comet, and the Boeing 737 Max
2.2 The Space Shuttle Disasters
2.3 Fukushima and Chernobyl
2.4 COVID-19, OxyContin, and Thalidomide
2.5 Other Accidents and Disasters
References
3 Learning from Failures—Evolution of Risk and Safety Regulation
3.1 Bird Strikes and Air Crashes
3.2 Elon Musk and Space Travel
3.3 Nuclear Energy and IAEA
3.4 Stagecoaches, Cars, and Seatbelts
3.5 Oil Rigs, Ferries, and Maritime Regulation
3.6 Masks, Vaccines, and WHO
3.7 Catering for a Black Swan: Foresight in Regulation
3.8 The Real Regulatory Role?
References
4 Keep It Simple but not Stupid—Complex Technology and Complex Organisations
4.1 Complexity
4.2 Complex Technology
4.3 Engineers and Complexity
4.4 Complex Organisations
4.5 Black Swans
4.6 High-Reliability Organisation (HRO) Theory
References
5 Are Failures Stepping Stones to More Failures? The Sociology of Danger and Risk
5.1 Mary Douglas and the Perception of Danger and Risk
5.2 Engineering and Risk
5.3 Perrow and the Normal Accident Theory
5.4 Turner and Man-Made Disasters
5.5 Downer and the Epistemic Accident
5.6 Vaughan and the Normalisation of Deviance
5.7 The Organisational Structure and Risk
5.8 Safety Culture
References
6 To Err is Human—What Exactly is Human Error?
6.1 The Lexicon
6.2 The Law
6.3 What Psychologists Say
6.4 Engineering and Human Error
6.5 The Honest Mistake in Healthcare
6.6 Regulation and Errors
References
7 What I Do not Know Will Hurt Me—Mental Models and Risk Perception
7.1 A Mental Model
7.2 Evolution of Mental Models
7.3 Mental Models of Technology and Automation
7.4 Faulty Mental Models and Accidents
7.5 Training and Refinement of Mental Models
7.6 Risk Perception
References
8 Is Greed Truly that Good? Avarice and Gain Versus Risk and Blame
8.1 Risky Behaviour and Gain
8.2 Human Behaviour and Probability
8.3 Nudge and Risk
8.4 Nudge and Deviance
8.5 Blame
8.6 Wilful Ignorance: The Deadly Combination
References
9 And Then There is Dr. Kato: How Does It Look and Where Do We Go from Here?
9.1 Human Behaviour
9.2 Inevitability
9.3 Scientists and Engineers: The Ostrich with Its Head in the Sand
9.4 Tragedy of Commons, Ostrom, and Risk Regulation
9.5 What Should Be the Nudge in Risk Regulation?
9.6 Why is HRO so Important?
9.7 Integration of These Concepts into Risk Regulation
References

Satish Chandra

Accidents and Disasters: Lessons from Air Crashes and Pandemics

Satish Chandra
Healthseq Precision Medicine, Bengaluru, India
Formerly Program Director, National Aerospace Laboratories, Bengaluru, India

ISBN 978-981-19-9983-3    ISBN 978-981-19-9984-0 (eBook)
https://doi.org/10.1007/978-981-19-9984-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

There are plenty of books on accidents and disasters, and a number of theories about them as well. Interestingly, most are from sociologists who have had a ring-side view of accidents caused by human action. Often, when sociologists write about accidents, the implication is that technology is to blame. Technology is about engineering things, and it is more than science; it is about the use of those things in the real world, every day, and not just in a laboratory. So, when accidents occur, the question will always be how well we anticipated real-world failure scenarios. At the same time, many technologies are complex, and sociologists see that complexity as an ingredient for accidents. Engineers introduce automation to take care of complexity, but in some cases automation itself becomes a cause of accidents. The other interesting point is the worldview of engineers: learning from failures leads to refinement of models and processes, and the belief is that the same failures will not be repeated. Sociologists, however, see technology failures as inevitable, even calling them 'normal'. Underlying all of this is risk, and more importantly the perception of risk. Even among scientists and engineers, especially engineers who use scientific, probability-based risk methods, decision making still rests on the risk perception of individuals and organisations, which passes through the social structure of those organisations. The question is whether risk regulation for accidents has a structure for risk perception, rather than just catering to the various components of a risk calculation based on science and engineering methods.

Two recent events struck me as worthy of further exploration in terms of risk perception and risk regulation: the COVID pandemic and the Boeing 737 Max air crashes. Both were extremely interesting as they unfolded, and they provide the backdrop in this book for developing the conceptual arguments. The Boeing 737 Max accidents in 2018 and early 2019 revealed a number of factors that led to the disaster. The COVID pandemic was an absolute disaster, a tragedy far greater than the two Boeing 737 Max crashes; the losses in human life, economic hardship, and mental suffering were very high. It exposed many human frailties and the poor perception of risk in the early stages of the pandemic, and it is a classic case for studying risk perception. Eventually, having identified new conceptual positions, it was interesting to ask where one goes from here. The chasm between what sociologists say about technology and how scientists and engineers think about risk deserves to be bridged, and there is a need to embed in regulation a risk perception framework that encompasses a sense of common good and deals with all the aspects of human behaviour that play a role in accidents.

I have spent the larger part of my professional life in engineering, especially aerospace engineering. Fascinating and rigorous as it has been, one noticed varying degrees of risk perception over a long career, even in an industry where safety was a near obsession. I also spent a number of years looking at ways to enhance safety: building redundancies, new safety devices, testing, and a continued interest in the evolution of safety regulation. Even within the industry, I noticed that while engineers as a breed were committed to safety, they were equally, if not more, interested in achievement and accomplishment. There was joy in getting complex things to work. Our social fabric is typically built on an achievement-driven hierarchy, where the need to achieve is viewed as being at a much higher level than avoiding a disaster. Bluntly speaking, this can be described as "risk for gain". Coupled with pressure to deliver at the lowest cost and to withstand intense competition, engineers have had challenges across the board, apart from the need to achieve. When accidents occur, apart from technology failures, they involve aspects of human behaviour and action in which risk taking for gain could play a part. So I have felt that one needed to explore risk perception just as much as scientific risk, which has its roots in the probability concept and is used widely by scientists and engineers.

As the book unfolded, driven in the beginning by the Boeing 737 Max and earlier air accidents, the pandemic hit us. Events associated with the pandemic and the Boeing 737 Max provided a continuous stream of stories in the media on risk taking, risk aversion, risk management, and policy making.

At this point in time, both these disasters have reached closure, normal life has resumed, and both have provided unique insights into accidents and disasters. The book has been written using contemporary information: the incidents associated with both disasters, the twists and turns in decision making as the disasters unfolded, and the human behaviour that was encountered. The idea is that this experience should be useful in reconfiguring regulation, which should account for the common good. The book is written in a simple style, uses real-life incidents, reviews literature in sociology and engineering, and picks up newspaper and media reports to weave the story. The book, in a strict sense, is a worldview. Over my career, only a handful of engineers shared this need to study accidents and disasters differently, while the majority focused on the triumph of engineering achievement. As a result, it has always been left to sociologists to critique technology. This too has been examined in the book, though my view is that the sociologists have been extremely fatalistic.

Some three decades ago, I had an opportunity to look at mental models and risk, when I did a Ph.D. with Prof. David Blockley at the University of Bristol. I thank him for my time there. My time at the National Aerospace Laboratories, India, provided a unique ring-side position and experience from which to build a worldview on risk perception, mental models, and regulation. I thank my colleagues and many friends there, and others in the industry as well. Ravi and Radhika, I recall, thought it was worthwhile for me to attempt this. I owe immense gratitude to my wife Suma and my children Vikram and Aditi for their encouragement and steadfast support all along, especially during the many frustrating times in the pandemic. Lockdowns were, however, a strange but real inspiration for this book.

Bengaluru, India
November 2022

Satish Chandra

Contents

1 Introduction .... 1
  References .... 5

2 Incidents, Accidents, and Unmitigated Disasters .... 9
  2.1 Tenerife, Concorde, Comet, and the Boeing 737 Max .... 11
  2.2 The Space Shuttle Disasters .... 16
  2.3 Fukushima and Chernobyl .... 19
  2.4 COVID-19, OxyContin, and Thalidomide .... 20
  2.5 Other Accidents and Disasters .... 22
  References .... 24

3 Learning from Failures—Evolution of Risk and Safety Regulation .... 29
  3.1 Bird Strikes and Air Crashes .... 31
  3.2 Elon Musk and Space Travel .... 35
  3.3 Nuclear Energy and IAEA .... 37
  3.4 Stagecoaches, Cars, and Seatbelts .... 39
  3.5 Oil Rigs, Ferries, and Maritime Regulation .... 41
  3.6 Masks, Vaccines, and WHO .... 42
  3.7 Catering for a Black Swan: Foresight in Regulation .... 43
  3.8 The Real Regulatory Role? .... 44
  References .... 46

4 Keep It Simple but not Stupid—Complex Technology and Complex Organisations .... 51
  4.1 Complexity .... 53
  4.2 Complex Technology .... 56
  4.3 Engineers and Complexity .... 58
  4.4 Complex Organisations .... 63
  4.5 Black Swans .... 64
  4.6 High-Reliability Organisation (HRO) Theory .... 65
  References .... 67

5 Are Failures Stepping Stones to More Failures? The Sociology of Danger and Risk .... 71
  5.1 Mary Douglas and the Perception of Danger and Risk .... 73
  5.2 Engineering and Risk .... 76
  5.3 Perrow and the Normal Accident Theory .... 78
  5.4 Turner and Man-Made Disasters .... 80
  5.5 Downer and the Epistemic Accident .... 82
  5.6 Vaughan and the Normalisation of Deviance .... 84
  5.7 The Organisational Structure and Risk .... 86
  5.8 Safety Culture .... 87
  References .... 89

6 To Err is Human—What Exactly is Human Error? .... 93
  6.1 The Lexicon .... 94
  6.2 The Law .... 96
  6.3 What Psychologists Say .... 97
  6.4 Engineering and Human Error .... 100
  6.5 The Honest Mistake in Healthcare .... 103
  6.6 Regulation and Errors .... 104
  References .... 105

7 What I Do not Know Will Hurt Me—Mental Models and Risk Perception .... 109
  7.1 A Mental Model .... 111
  7.2 Evolution of Mental Models .... 113
  7.3 Mental Models of Technology and Automation .... 115
  7.4 Faulty Mental Models and Accidents .... 116
  7.5 Training and Refinement of Mental Models .... 118
  7.6 Risk Perception .... 120
  References .... 123

8 Is Greed Truly that Good? Avarice and Gain Versus Risk and Blame .... 125
  8.1 Risky Behaviour and Gain .... 127
  8.2 Human Behaviour and Probability .... 131
  8.3 Nudge and Risk .... 134
  8.4 Nudge and Deviance .... 135
  8.5 Blame .... 138
  8.6 Wilful Ignorance: The Deadly Combination .... 139
  References .... 140

9 And Then There is Dr. Kato: How Does It Look and Where Do We Go from Here? .... 143
  9.1 Human Behaviour .... 145
  9.2 Inevitability .... 146
  9.3 Scientists and Engineers: The Ostrich with Its Head in the Sand .... 147
  9.4 Tragedy of Commons, Ostrom, and Risk Regulation .... 149
  9.5 What Should Be the Nudge in Risk Regulation? .... 150
  9.6 Why is HRO so Important? .... 151
  9.7 Integration of These Concepts into Risk Regulation .... 153
  References .... 155

About the Author

Dr. Satish Chandra is a former Programme Director and Chairman of the Structures and Materials Cluster at the National Aerospace Laboratories and presently Director at Healthseq, a precision medicine and health systems company. He has spent an entire career looking at safety and risk in aviation. He established a crashworthiness laboratory dealing with aircraft and automotive safety and has experience with social and organisational behaviour and its impact on risk perception. Overall, Dr. Chandra's expertise has combined research and technology with the broader aspects of regulatory compliance, technology development, operating economics, and public policy. He has a Ph.D. from the University of Bristol, from a systems engineering laboratory with recognised work on human errors and organisational behaviour as contributory causes of accidents. The book draws on the Boeing 737 Max accidents and the pandemic and is shaped by the author's earlier experience. Dr. Chandra has been particularly interested in sociological theories of accidents and in developing a broader perspective on risk perception by embedding issues of human behaviour and sociology into regulation, especially in healthcare and aviation. Dr. Chandra has published work in aircraft design for safety and related areas. He has served as a reviewer and has been on the editorial boards of technology journals.


1 Introduction

The crash on 29th October 2018 of an Indonesian Lion Air Boeing 737 Max made headlines. There was some surprise, as it was a brand-new aircraft. Many discussions started on the airline, the training of pilots, and maintenance [1–3]. As always, an accident investigation also began. Five months later, the Ethiopian Airlines Boeing 737 Max crashed [4]. This generated an alert across the board. An initial investigation led to all Boeing 737 Max aircraft being grounded [5–7]. Over late 2018 and early 2019, the Boeing 737 Max story evolved as the investigators and the media dug deeper. Something had gone terribly wrong in an industry that had pioneered complex technology and had an unrelenting focus and commitment to safety, strong oversight, and regulation [8]. Facts, as they emerged, started to point to a major flaw in design [9], and there seemed to be a failure of regulatory oversight as well [8]. It was no longer just a couple of accidents but an aviation disaster, as all Boeing 737 Max aircraft were grounded. The unfolding story was about one of the world's most reputed companies having prioritised profits over safety [8, 10]. It was also about the role of the regulator, the Federal Aviation Administration (FAA), which appeared to be unaware of the implications of the issues involved [8, 11]. What emerged from the investigations pointed to faulty processes, deep organisational issues sometimes tinged with greed, and the rather controversial role of the regulator, all of which took over talk shows for a while. The accidents raised serious questions about risk taking, safety mechanisms, design and testing, project oversight, and organisational issues at Boeing [12–14].

As happens in the media, this story started slipping from the headlines, with an occasional piece here or there. No Boeing 737 Max aircraft had flown; all were grounded, filling up aircraft parking lots like never before. However, this news story was chicken feed compared to the pandemic that hit us by January 2020, an extraordinary and unparalleled saga that gripped the world. In the early days of the pandemic, when risk perception was still building, it was often said that auto accidents killed more people each year than the COVID-19 pandemic would [15]. However, the COVID-19 pandemic was on, and decision making was confusing at every stage. Even with a century of effort in risk mitigation, the push for regulation, and the creation of institutions such as the World Health Organisation (WHO) with the specific goal of being prepared for something like this [16], it has been argued that there has been a record of failure at all levels with this pandemic [17, 18].

The textbook definition of an accident implies that the event is unintentional and the cause unknown, but it is not normal or routine [19]. Its cause could be an honest mistake, ignorance, or wilful ignorance, if not malafide intent. Regulation is aimed at the prevention of accidents and disasters and is generally performed by governments using the power of legislation and oversight. In regulating sectors or industries that are international in their effect, there is supposed to be international oversight as well, say under the aegis of the UN. This has occurred to an extent in some sectors, such as nuclear energy under the International Atomic Energy Agency (IAEA), as seen in its mission statement and its work internationally [20]. Nonetheless, it must be pointed out that viable and globally acceptable international regulation is a challenge, even though there is broad consensus on many global concerns: epidemics, nuclear disasters, climate change, etc. This is due to the multipolar power structures in the world order and is unlikely to change in the short term.

Accidents and disasters are not new, but the COVID pandemic and the Boeing 737 Max disasters provide fresh evidence that despite the diversity of failures, there are deep similarities in how disparate complex systems fail. Preventing disasters is truly about risk perception and regulation. On the other hand, theories about risk and failure are not new either. The term risk is frequently used in conversation, colloquially, and in a qualitative manner. It is widely used scientifically, mathematically, and quantitatively as well. Science and engineering provide a quantitative and probabilistic approach to evaluating risk, called risk and reliability, using probabilistic risk analysis (PRA) [21]; a toy sketch of this kind of calculation follows below. Envisioning catastrophic failure scenarios and the use of PRA in regulation to stem faults and failures in the nuclear, space, and aeronautics sectors have prevented many accidents.
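
For readers unfamiliar with PRA, a minimal sketch in Python may help show the kind of arithmetic involved. The architecture (one controller plus two redundant sensors), the per-flight failure probabilities, and the common-cause beta factor below are invented purely for illustration; they are not taken from [21].

    # A minimal, illustrative PRA-style calculation with invented numbers.
    def top_event_probability(p_controller: float, p_both_sensors: float) -> float:
        """Top event: system fails if the controller fails OR both sensors fail."""
        return 1.0 - (1.0 - p_controller) * (1.0 - p_both_sensors)

    p_controller = 1e-6   # assumed probability the controller fails on a flight
    p_sensor = 1e-3       # assumed probability a single sensor fails on a flight

    # With fully independent sensor failures, redundancy looks very effective.
    p_independent = top_event_probability(p_controller, p_sensor ** 2)
    print(f"Independent sensor failures: {p_independent:.2e}")

    # If a common cause (icing, a shared software fault) can take out both
    # sensors at once, the independence assumption breaks down and the
    # redundancy buys far less than the arithmetic above suggests.
    beta = 0.05  # assumed fraction of sensor failures that are common-cause
    p_both_cc = beta * p_sensor + (1.0 - beta) * p_sensor ** 2
    print(f"With common-cause coupling: {top_event_probability(p_controller, p_both_cc):.2e}")

The point of the sketch is simply that the quantitative answer depends heavily on assumptions such as independence, which is exactly where unanticipated, black-swan-like failure modes enter.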

While there is recognition that there will be black swan events, rare events that are not anticipated [22], those that should have been anticipated but were not can be regarded as foresight failures. Historically, the risk regulation framework and its processes have been heavily tilted towards science and engineering approaches rather than issues of human behaviour. However, risk is also about perception, and sociologists see things very differently from scientists and engineers. Sociologists think that risk is about the perception of danger, human behaviour, social structure and interactions, economics, and prevailing culture. Sociologists also believe that accidents occur due to human factors, such as human errors, deviance, and the inability to handle complexity, and that, consequently, accidents are inevitable because of the human element. Work by Douglas [23], Turner [24], Perrow [25], Vaughan [26], Downer [27], and others concentrates on the possibilities of organisational and human failures and on the inevitability of accidents arising from the interaction between complex technology and human behaviour.

The sociologist is alarmed when the term risk is used confidently by engineers [23]. The sociologists' view is that risk is as much about the perception of danger, human behaviour, and social interaction, and that engineers and scientists ignore this human element when they use quantitative methods for evaluating risk. The engineering view, on the other hand, has its foundation in the scientific approach and hence is about constant improvement as more understanding arrives, much like what the philosopher Karl Popper refers to in his work on conjectures and refutations [28]. The Boeing 737 Max disaster, which saw complexity of technology, engineering failure, and possibly human avarice, illustrates what the sociologists say. However, it is not that unique or special; it has happened before. The two Space Shuttle failures are stark reminders of organisational and oversight failures [26]. Even with the best of intentions, the engineering approach to design for safety does not fully account for human actions of negligence, malafide intent, or plain avarice.

During the coronavirus pandemic, the science of the virus was still evolving. Many governments, well intentioned, faltered initially and seem to have got more things wrong than right, even after being advised in many cases by science. Science produced contradictory information as well. Epidemiology models have been wrong many times [29], many drugs did not work [30], surface versus aerosol transmission was a big debate [31], and vaccine trials have had their fair share of controversy [32]. The pandemic provides insight into human behaviour, sociology, and the perception of risk, which in turn drives decision making against the backdrop of weak national and international regulation.

News reports suggest instinctive reactions by the leadership in Wuhan to suppress information [33], the Chinese Government denying that the problem had international ramifications, and the wider world weighing the implications of economic crises if shutdowns and travel embargoes were implemented. The interactions between politicians and scientists, and the process of decision making and framing policy during the COVID-19 pandemic, are a story about human behaviour and sociology rather than just about science or technology [34, 35]. It can be argued that science is not innocent; science and politics are strange but necessary bedfellows [23, 35].

Written as a popular book, using the pandemic and the Boeing 737 Max as a backdrop, it examines other prominent accidents in aviation, space, nuclear energy, automotive, and healthcare. The Boeing 737 accidents revealed a complex set of factors that were in play: faulty mental models at the engineering level, deviance in the organisation, and a regulator whose oversight structure had crumbled. In a sense, it was a perfect storm in the making for an aviation accident. However, the pandemic was even more intense: the science was unknown, the processes to deal with it were never fully developed, there was considerable organisational dysfunction at various levels, and the regulators were unprepared for a storm of this magnitude. In the book, it is argued that there is more to the causes of disasters than is commonly thought: not just human error or technical failure as engineers see it, or complexity, deviance, or culture as sociologists view it. It is shown that many of these accidents illustrate poor mental models, avarice, and risk taking for gain. It is proposed that when risk taking occurs purely for gain, it could serve as a nudge and precursor for deviance, something seen in the pandemic as well as in the Boeing 737 accidents, but not written about explicitly.

This book is about developing a unified view of risk for refining regulation, exploring rational choice theory and the nudge of behavioural economics, but importantly also the concepts from Ostrom's view of cooperation in regulation (see Chap. 9). The popular high-reliability organisation (HRO) theory has been around for a while [36] and is essentially about enhancing safety culture. What is proposed here is that it is possible to embed engineering processes, models, and theories, as well as models of human behaviour from the point of view of risk for gain, into the HRO framework and use this in regulation. Nonetheless, regulation is also about foresight, and as Taleb's theories of rare events go [22], there is only so much one can anticipate, a view that sociologists lay claim to. Even so, the refined and embedded HRO could provide a framework for enhancing robustness and reducing fragility in risk regulation.

Both the pandemic and the Boeing 737 Max disasters are contemporary events. These disasters fit into some of the theories proposed by sociologists.

However, it is more than that: there is an urgency to review the risk regulation framework developed by scientists and engineers, incorporating perceived risk as advocated by sociologists. Based on this book, scientists and engineers would have an opportunity to view risk perception from a whole new perspective. The audience for this book is truly across the board: policy makers; scientists and engineers in aviation, healthcare, nuclear energy, and space, and indeed anybody working on complex technology; sociologists; management professionals who study organisational behaviour; risk analysts; and, most importantly, the public.

Chapter 2 describes a diverse set of well-known accidents and explores the background of, and the similarities between, these events. Chapters 3–8 address the core issues: the state of regulation, the worldview of the sociologists, complexity, where humans err, mental models of complex systems, avarice, and blame. The chasm between scientists and engineers on the one hand and the sociologists and the public on the other needs to be bridged. In the final chapter of the book, ideas to bridge that divide are discussed using lessons from the pandemic and the Boeing 737 Max accidents. The future may lie in the refinement and enhanced use of the HRO theory [36] framework, incorporating Ostrom's work on the management of shared resources to refine risk perception and reduce fragility in organisations and technology.

References

1. Specia, M. (2018). What we know about the Lion Air Flight 610 crash. New York Times, November 9, 2018.
2. Bertorelli, P. (2019). Lion Air: Faulty design, weak pilot training and maintenance lapses caused 737 MAX crash. AvWeb, October 25, 2019. https://www.avweb.com/aviation-news/lion-air-faulty-design-weak-pilot-training-and-poor-maintenance-caused-737-max-crash/. Accessed August 20, 2022.
3. Bogaisky, J. (2018). Lion Air crash report raises questions about maintenance and pilots' actions. Forbes, November 28, 2019.
4. Picheta, R. (2019). Ethiopian Airlines crash is second disaster involving Boeing 737 MAX 8 in months. CNN, March 11, 2019.
5. Amiri, F., & Kesslen, B. (2019). Boeing 737 Max jets grounded by FAA emergency order. NBC News, March 13, 2019.
6. Isidore, C. (2019). The Boeing 737 Max grounding: No end in sight. CNN, June 11, 2019.
7. Nakahara, M. (2020). The continued grounding of the Boeing 737 MAX. The Regulatory Review, Analysis | Infrastructure, July 16, 2020. https://www.theregreview.org/2020/07/16/nakahara-continued-grounding-boeing-737-max/. Accessed August 15, 2022.
8. The House Committee on Transportation and Infrastructure. (2020). Final committee report: The design, development and certification of the Boeing 737 Max, September 2020.
9. Nadeau, B. L. (2020). Boeing hid 'Catastrophic' 737 MAX design flaws that killed hundreds. The Daily Beast, September 16, 2020.
10. Gelles, D. (2020). Boeing's 737 Max is a saga of capitalism gone awry. New York Times, November 24, 2020.
11. Cusumano, M. A. (2021). Boeing's 737 MAX: A failure of management, not just technology. Communications of the ACM, 64(1), 22–25.
12. Hawkins, A. J. (2019). Everything you need to know about the Boeing 737 Max airplane crashes. The Verge, March 22, 2019.
13. Jennings, T. (2021). How 'Boeing's Fatal Flaw' grounded the 737 Max and exposed failed oversight. The New York Times, September 13, 2021.
14. Steib, M. (2019). Report: Self-regulation of Boeing 737 MAX may have led to major flaws in flight control system. Intelligencer, nymag.com, March 17, 2019. https://nymag.com/intelligencer/2019/03/report-the-regulatory-failures-of-the-boeing-737-max.html. Accessed August 20, 2022.
15. Boboltz, S., & Hobbes, M. (2020). Here's why Trump is wrong to compare the coronavirus to the flu and auto accidents. HuffPost Health, March 25, 2020.
16. WHO. (2022). Pandemic preparedness. Available via https://www.who.int/europe/news-room/fact-sheets/item/pandemic-preparedness. Accessed August 15, 2022.
17. Gebrekidan, S., & Apuzzo, M. (2021). Covid response was a global series of failures, W.H.O.-established panel says. New York Times.
18. BBC. (2021). Covid: Serious failures in WHO and global response, report finds.
19. Merriam-Webster. (n.d.). Accident. From https://www.merriam-webster.com/dictionary/accident
20. IAEA. (n.d.). The IAEA mission statement. IAEA. https://www.iaea.org/about/mission. Accessed August 5, 2022.
21. Stamatelatos, M. (2000). Probabilistic risk assessment: What is it and why is it worth performing it? NASA Office of Safety and Mission Assurance.
22. Taleb, N. N. (2008). The Black Swan: The impact of the highly improbable. Penguin.
23. Douglas, M. (2002). Risk and blame: Essays in cultural theory. Routledge.
24. Turner, B. (1997). Man-made disasters. Butterworth-Heinemann.
25. Perrow, C. (2000). Normal accidents: Living with high-risk technologies. Princeton University Press.
26. Vaughan, D. (2016). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press.
27. Downer, J. (2010). Anatomy of a disaster: Why some accidents are unavoidable. Discussion Paper No. 61, March 2010, London School of Economics.
28. Popper, K. (2002). Conjectures and refutations. Routledge.
29. Ioannidis, J. P. A., Cripps, S., & Tanner, M. A. (2022). Forecasting for COVID-19 has failed. International Journal of Forecasting, 38(2), 423–438.
30. Zimmer, C. (2021). Drugs that did not work: How the search for Covid-19 treatments faltered while vaccines sped ahead. New York Times, January 30, 2021.
31. Smith, D. G. (2020). The most likely way you'll get infected with the coronavirus. Elemental, September 16, 2020. https://elemental.medium.com/the-most-likely-way-youll-get-infected-with-covid-19-30430384e5a5. Accessed August 19, 2022.
32. McKie, R. (2020). Oxford controversy is the first shot in international battle over vaccine efficacy. The Guardian, November 28, 2020.
33. Mitchell, T., Yu, S., Liu, X., & Peel, M. (2020). China and Covid-19: What went wrong in Wuhan? Financial Times, October 17, 2020.
34. Ball, P. (2021). Discussion: What the COVID-19 pandemic reveals about science, policy and society. Interface Focus, Royal Society Publishing, October 12, 2021.
35. Reynolds, S. (2020). Coronavirus has put scientists in the frame alongside politicians—And poses questions about leadership. TheConversation.com, November 19, 2020. https://theconversation.com/coronavirus-has-put-scientists-in-the-frame-alongside-politicians-and-poses-questions-about-leadership-148498. Accessed August 18, 2022.
36. Roberts, K. H., & Rousseau, D. M. (1989). Research in nearly failure-free, high-reliability organizations: Having the bubble. IEEE Transactions on Engineering Management, 36(2), 132–139.

2 Incidents, Accidents, and Unmitigated Disasters

Whether an event is called just an incident, an accident, or a disaster is a matter of semantics, industry classification, or what the media says. The Boeing 737 Max accidents have been termed a disaster by the media [1]. The first of the accidents was regarded as a plain accident, although it killed 189 people. When it occurred, the initial reaction was to regard it as yet another accident in a region that had a poor reputation for airline safety [2]. It was called an accident using the accepted terminology of the aviation industry [3, 4]. That was until the next accident occurred and all aircraft were grounded, by which time it was regarded as a clear disaster by the media [5].

Even as the Boeing 737 Max story was receding from the headlines and work on resolving the crisis proceeded, we started to see what looked, at that time in January 2020, like a localised incident, this time in healthcare, as a viral disease emerged in Wuhan, Hubei Province, China [6]. Early reports provided no clue that it would envelop the world a few months later. It appeared to be a problem localised to China, which had seen this kind of disease before. However, as we all know, it has been an unmitigated disaster, a far bigger one than any we have seen thus far. Worse still, it was not a natural disaster but a man-made one: not because there is evidence that this was a designed virus intended to cause havoc of such proportions, but because many did not anticipate the effect we see today, and a number of missteps were taken. The initial reaction of people in governance across the world was the now familiar ostrich-with-its-head-in-the-sand approach [7–9].

Some incidents remain incidents, some grow into accidents, and some become unmitigated disasters. The definitions in the lexicon for an accident are (1) an event or circumstance occurring by chance or arising from unknown or remote causes; lack of intention or necessity; an unforeseen or unplanned event or condition; and (2) a sudden event or change occurring without intent or volition through carelessness, unawareness, ignorance, or a combination of causes and producing an unfortunate result [10]. Colloquially, what are termed accidents are events that are not normal or routine. However, there are events that are abnormal; some may involve the mere loss of property or equipment, but some lead to human injuries or even fatalities. In aviation, for example, the definitions are made clear: an incident is an unintended event that disturbs normal operations, but with fatalities it is defined as an accident. The world's aviation body, ICAO, states that if injuries to a person lead to eventual fatality, or there is major structural damage to an aircraft, the event is classified as an accident. In comparison, an incident is an event that affects or could affect safe operation but has not led to an accident [4]. (A toy sketch of this classification appears at the end of this discussion.)

In reality, what the media terms a disaster varies all the time. Perceptions vary: Chernobyl, which resulted in an immediate set of fatalities among workers at the site (31 people according to official figures and 50 deaths according to the UN [11]), was called a disaster because of the implications of radioactive exposure for future casualties. On the other hand, over 30,000 people died in automobile accidents every year in the USA alone between 1990 and 2019 [12]. Should these be termed annual disasters? Yet they are not. The pandemic killed more people in the USA in a few months than automobiles did in 5 years [13], clearly and unambiguously making the pandemic a real disaster.

Accidents and disasters are more than what scientists and technologists perceive. Looking at what happened in 2019 and 2020, the Boeing 737 Max and the coronavirus pandemic provide enough opportunities to explore human behaviour, sociology, and the regulation of risk. Reactions to incidents, accidents, and disasters follow a pattern: surprise initially, quite often instinctive denial with the view that this cannot happen in my backyard, and then panic, poor decision making, reflection, and correction. In addition to aeronautics and healthcare, the world has seen incidents, accidents, and disasters in the space, nuclear, automotive, and maritime industries. These sectors are regulated for safety and risk. All these sectors activate a conscious sense of danger (you could colloquially call it risk as well) whenever an event occurs. However, the heightened sense of danger and the perception of risk are different for each of them.
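
To make the incident/accident distinction above concrete, here is a toy classifier in Python. It follows the simplified summary given in the text, not the full ICAO Annex 13 definitions, and the field names are invented for illustration.

    # A toy sketch of the incident/accident distinction summarised above.
    from dataclasses import dataclass

    @dataclass
    class Occurrence:
        fatal_or_serious_injury: bool    # injuries leading to eventual fatality
        major_structural_damage: bool    # major damage to the aircraft
        affected_safe_operation: bool    # safety of operation affected or threatened

    def classify(event: Occurrence) -> str:
        """Classify an occurrence using the simplified criteria in the text."""
        if event.fatal_or_serious_injury or event.major_structural_damage:
            return "accident"
        if event.affected_safe_operation:
            return "incident"
        return "routine occurrence"

    # A disturbance to normal operations with no injuries and no major damage
    print(classify(Occurrence(False, False, True)))   # -> incident
    # Any occurrence with fatalities is an accident
    print(classify(Occurrence(True, False, False)))   # -> accident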

The following sections describe several accidents and disasters, providing opportunities to explore human behaviour, sociology, and the regulation of risk.

2.1 Tenerife, Concorde, Comet, and the Boeing 737 Max

Aviation accidents always have high visibility. Many remain imprinted in the mind, some due to the sheer magnitude of the accident and some due to its peculiar nature. However, the total number of people who died in commercial and corporate aircraft accidents between 1942 and 2021 was approximately 84,000 [14], a figure far lower than that for automobile accidents, which accounted for over 30,000 deaths per year in the USA alone [12], or for the pandemic in 2020 in the USA [13]. Some notable examples in this sector are the Tenerife disaster, the Concorde crash, and the Boeing 737 Max accidents.

The Tenerife disaster [15] occurred on the ground but is still an aviation disaster, as it involved two planes and killed 583 people. In aviation, it is regarded as one of the worst. On March 27, 1977, two Boeing 747s preparing to take off, KLM 4805 and Pan Am 1736, both from very reputed airlines, collided on the runway at Los Rodeos Airport in Tenerife. A number of things coincided at that time: a terrorist incident had grounded the planes earlier, the taxiway was unavailable, forcing the planes to use the runway to taxi, and there was bad weather, with thick fog and poor visibility. The KLM aircraft started to take off even though the Pan Am aircraft was on the runway and turning into the taxiway. The collision between the two aircraft caused a huge fire, killing all on KLM 4805; only 61 on Pan Am 1736 survived. All accidents tend to generate blame, and this is passed between the actors in the accident. The Spanish control tower blamed the KLM flight crew, while the Dutch investigators and KLM noted that the KLM pilot was given clearance from the control tower. As with many accidents, poor communication between the control tower and the flight crew, and within the crew, was identified. Did this disaster lead to learning and to better procedures? It certainly influenced the improvement of crew resource management (CRM), better simulator training, and better and more precise communication. However, sociologists will ask: how far did the learning go, and would it happen again?

Nearly twenty years later, on 12 November 1996, a Saudi Arabian Airlines Boeing 747 (SVA763) took off from Delhi en route to Jeddah with 312 people on board. Around then, a Kazakhstan Airlines Ilyushin IL-76TD (KZA1907) was approaching Delhi to land, carrying 37 people.

It was cleared to descend to 15,000 feet when it was 137 km from Delhi, while SVA763, flying in the opposite direction, was cleared to climb to 14,000 feet. The accident investigation reports that KZA1907 descended below 15,000 feet, and Air Traffic Control (ATC) warned KZA1907 while alerting it that SVA763 was in the vicinity. The two aircraft collided, and SVA763 disintegrated in the air. The IL-76 went into an uncontrolled descent and crashed in a field. Altogether, 349 people were killed. The investigation suggests that here too the issue was communication, with the Kazakh pilot not following ATC instructions, possibly due to a poor understanding of English, although the Kazakh officials did blame turbulence. There was also confusion over the use of metres versus feet and nautical miles. The Delhi airport was not equipped with a secondary surveillance radar (SSR), and Delhi, at that time, had only a single corridor for both departures and arrivals. The recommendations of the investigation were many, including the installation of SSRs, the separation of inbound and outbound aircraft, and mandatory collision avoidance equipment (TCAS: Traffic Collision Avoidance System) on commercial aircraft operating in Indian airspace [16, 17].

Sociologists have been sceptical about whether accidents of this kind can be avoided, as unexpected failures are built into society's complex and tightly coupled systems, and they argue that even after the Tenerife disaster, such collisions have continued to occur. In these accidents, apart from improper communication, there were other issues as well, which fit a theory from James Reason called the Swiss cheese model, to be discussed in Chap. 4, in which a number of smaller events or failures align themselves into a larger catastrophic failure.

As if this accident was not an adequate reminder of human error, we next see a chilling accident that subsequently led to a murder. "The accused is guilty of multiple cases of manslaughter", said the judge regarding the conviction of air traffic controllers for the mid-air collision of a Russian passenger jet and a DHL cargo plane [18]. On 1st July 2002, Bashkirian Airlines Flight 2937, a Tu-154 jet carrying school children on holiday, and DHL Flight 611, a Boeing 757 cargo aircraft, collided in mid-air over Überlingen, close to the German–Swiss border near Lake Constance, killing the 69 on the Russian plane and the two crew on the DHL flight. The air traffic controller, Peter Nielsen, asked the Tu-154 to dive at the same time as the DHL flight's TCAS was activated, which told it to dive as well, leading the two aircraft to collide at 35,000 feet. To understand why sociologists are so wary of the combination of human behaviour and technology, one needs to compare this accident with the Delhi mid-air collision.

After the Delhi mid-air collision, which had made it clear that a TCAS was supposed to help in a situation such as this, a recommendation was made to make TCAS mandatory on all commercial aircraft. DHL Flight 611, a Boeing 757, did have TCAS installed. On that fateful day, however, the telephones were out of order, the radar software was in a restricted mode, and Nielsen's backup colleague had gone for a coffee break [18, 19]. The investigation showed the many shortcomings of Swiss air traffic control and ambiguities in the use of the TCAS systems [19]. On 24 February 2004, Peter Nielsen was murdered by Vitaly Kaloyev, who had lost his wife and two children in the July 2002 accident [20]. The view of the Skyguide chief executive was that the interaction between people, technology, and procedures was primarily responsible [18].

These three collisions, and the telling statement of the Skyguide chief executive on people, technology, and procedures, show how the interplay between technology, human behaviour, and sociology is involved. However, there is more coming in later chapters: not just technology and sociology, but greed, risk for gain, and how organisational compulsions for gain lead to accidents, as the next aviation accident shows.

On 2 April 2011, there was a terrible accident in the testing of the most advanced business jet, the Gulfstream G650. The accident took place at the Roswell airport in New Mexico, USA; two pilots and two flight engineers died [21, 22]. Test flying is known to be risky, but such are the advances in aeronautics over the years that it is claimed much of the risk can be mitigated. In tests such as these, one engine is intentionally shut off to prove that, in an emergency, the aircraft can operate on a single engine. The National Transportation Safety Board (NTSB) investigation of this accident found that the aircraft had in fact stalled as it was taking off. Similar behaviour had occurred to a lesser degree in previous tests, but Gulfstream did not carry out the in-depth investigation that should have pointed to what is known in aeronautics as a stall in ground effect, which had previously occurred [22]. There appeared to be considerable management pressure to complete the test to prove the aircraft's 6000-foot take-off performance commitment. While there was ample opportunity to carry out simulations with physics-based models, the company had chosen to rely on modification of the piloting technique [22]. Gulfstream was very keen to ensure no slippage of the schedule for Federal Aviation Administration (FAA) type certification. The timeline pressure, coupled with inadequate organisational processes, especially technical oversight and safety management, led to superficial reviews and a reluctance to highlight anomalies.

The NTSB [22] pointed out deficiencies in the organisational structure, roles, and responsibilities of team members at one end, and deficiencies in the identification of potential hazards and appropriate risk controls at the other: a deadly combination of errors. In fact, no post-crash egress was possible for the crew, as there were no escape hatches. At the end of it, Gulfstream made changes based on the recommendations and subsequently received FAA certification. As we explore further in Chap. 5, this accident was due to what Diane Vaughan, a sociologist who studied the culture at NASA and the Shuttle disasters, calls the normalisation of deviance [23].

There are clearly other kinds of accidents as well: unintentional ones, involving human error in its truest sense. On 8 January 1989, what has become known as the Kegworth air disaster occurred; the media categorised it as a disaster, although 47 people died. British Midland Flight 92, a Boeing 737, crashed on the embankment of a motorway near Kegworth, UK, while attempting to make an emergency landing because of an engine failure. The pilots mistakenly believed the failure to be in the right engine: the cabin had filled with smoke, and in earlier versions of the aircraft cabin air circulation came generally from the right engine, but this version, unlike its predecessors, used a different system. The pilots were seemingly unaware of this and thus had a faulty mental model of the aircraft. Believing they were doing the correct thing, they shut off the right engine and applied full thrust on the malfunctioning engine, which led to a fire. Of the 126 people on board, 47 died and 74 were seriously injured. The Kegworth disaster also led to several recommendations on making sure decision making is right, based on better training and better manuals [24]. What the Kegworth disaster shows us is that faulty mental models at any stage, whether during the operation of the aircraft, during maintenance, or even during design, can cause disasters, which leads us to the next set of famous accidents, those of the Concorde and the Comet.

The Concorde supersonic aircraft was regarded as one of the finest technological achievements of the twentieth century and had seen trouble-free operation since 1973, but it was an expensive aircraft to operate. On 25th July 2000, Air France Flight 4590 took off on a chartered flight from Paris to New York. What was a routine take-off turned into a disaster when the landing gear wheels encountered debris on the runway and a tyre blew; the tyre fragments flew into the wing and the landing gear bay. This ruptured the fuel tank, which then ignited. The gear could not be retracted, the flight control systems were damaged, and the aircraft crashed into a nearby hotel just two minutes after take-off, killing all 109 people on board and four people on the ground. Clearly, there was no collision and no management pressure, but a design flaw that had not considered this scenario.

In fact, after the accident, modifications were made to the fuel tanks, and more care was taken to keep the runway clean. However, this accident and the cost of operating the Concorde ultimately led to its retirement some years later [25]. This scenario had never been a part of the design considerations, nor had the regulatory authorities insisted on compliance for a failure case such as this.

The story of the Comet aircraft is equally fascinating. Developed after WWII, the de Havilland DH 106 Comet was the first commercial jet airliner, and there was much going for it. Using modern technologies, it was a jet aircraft with a pressurised cabin that could fly higher than the weather, making it very comfortable. It had large square windows. However, within a few years of entering service in 1952, three aircraft crashed, disintegrating in the air. Until the Comet accidents, metal fatigue was not that well understood, although there is some evidence that ships were known to have broken up due to metal fatigue. The Comet was grounded, and detailed research began. Several issues in the design were found to have caused the metal fatigue, including stress concentration around the square windows, incorrect riveting, and manufacturing flaws. In fact, after the Comet crashes, there was no return to this type of square window in a pressurised aircraft; all windows are oval. The reputation of the Comet never recovered; while some remained in service, the aircraft failed in the marketplace. Competitors such as Boeing and McDonnell Douglas, however, learned from this experience [26]. The Comet disaster showed that unanticipated and surprising physics could emerge.

The Boeing 737 Max accidents are now being termed a disaster by the media. The first accident was regarded as a plain accident, although it killed 189 people. At first, it appeared to be another accident in a region that had a poor reputation for airline safety. The aircraft industry and the regulator would have called it an accident, but not a disaster. However, it was still a Boeing plane, a new plane, the Boeing 737 Max, which promised to perform much better than other aircraft of its class. It used all the proven technologies from its previous versions and had a new engine and great performance. Apart from this, the story was that it had minimal changes from its predecessors. Airlines thought this was a vehicle that could be a cash cow, and Boeing had huge orders. After the accident, discussions flew around about poor piloting skills and the possibility of bad maintenance. Even as investigations were progressing, another crash occurred, this time of an Ethiopian Airlines Boeing 737 Max, repeating the pre-crash behaviour of the Lion Air Boeing 737 Max aircraft. This was within five months of the first crash. For a while, there was again the same set of theories: poor airmanship or training.

However, Ethiopian was known to be a good airline, although it too was from the developing world. Regulators across the world decided to investigate these accidents further, and what followed was an unmitigated disaster for aviation in general and Boeing in particular: all Boeing 737 Max aircraft were grounded. A longish article by Langewiesche, a well-reputed aviation journalist, appeared in the New York Times, implying that the accidents were due to the poor training of the pilots at Lion Air and Ethiopian [27]. However, what emerged later was something different. It began to unfold that the modifications to the aircraft had caused a change in its flying characteristics. The fault was recognised, and automation was adopted to control the undesirable modes. However, a hallowed set of aviation processes of checks, validation, and training, such as redundancies, software verification, and simulator training, was ignored [28]. The regulator had missed this entirely, had believed the automation would overcome the fault, and had let Boeing through. The story is as much about organisational behaviour as it is about complex technology, all driven by profits and competition. More importantly, the company, which had a reputation as a technology and engineering company, used extensive cost-cutting strategies to focus on shareholder value [29, 30], which ultimately led to a series of decisions resulting in the accidents.

While the evolution of safety in aviation has been scientific and systematic, complexity has also increased and has consequently led to automation. Redundancies and fault-tolerant systems have been built, and process quality has been documented. All the failure scenarios that engineers can visualise are accounted for. Yet accidents such as the Boeing 737 Max have occurred. Human behaviour factors such as bad mental models, defective oversight by the regulator, and organisational deviance have also been major causes of aviation accidents, as discussed in subsequent chapters. There have also been crashes based on what is speculated to be deliberate intent. In Chap. 6 on human error, where the classification of error is addressed, a number of air crashes, EgyptAir, SilkAir, Germanwings, and MH370, are noted in which there have been interpretations that the crash was caused deliberately and intentionally, in some cases where the pilot was depressed or had suicidal tendencies.

2.2 The Space Shuttle Disasters

In 2003, the Space Shuttle Columbia was returning home after a mission that had lasted two weeks when it disintegrated on re-entry.

Seventeen years earlier, another terrible accident had occurred, with the Space Shuttle Challenger disintegrating on launch. In both cases, all the astronauts on board were killed. There have been other accidents during the Apollo missions and the Russian space programmes. Space flight is generally acknowledged to be risky. It is a journey into the 'unknown', and there are severe constraints on building multiple redundancies: each set of redundancies adds to the weight of the vehicle, which could severely constrain the mission. However, risks are constantly analysed, and there is constant training to avoid them. In fact, many risk engineering and reliability methods and hazard analyses have originated from work on space vehicles and aircraft.

On January 28th, 1986, the Space Shuttle Challenger, approximately a minute into the flight, disintegrated at approximately 15 km altitude. An O-ring seal had failed, letting hot gases escape from a solid rocket booster (SRB), which then impinged on the external propellant tank. The SRB became unhinged and collided with the external propellant tank; Challenger lost control and disintegrated, with the loss of all seven crew members. This led to the formation of a commission to investigate the accident. While the Commission found issues with NASA's organisational culture and decision-making processes, there were other issues as well. In fact, a deep understanding of engineering physics seemed to be lacking among the decision makers, a pattern of ignoring whistle blowers contributed, and the agency also ignored its own safety procedures [31]. The culprit that started it all, in a series of uncontrollable and cascading events, was an O-ring. It is argued that NASA knew that Morton Thiokol's O-ring had flaws. However, this became catastrophic when a launch in extremely low temperature conditions exposed the flaw. Whistle blowers at Morton Thiokol and NASA were ignored, and the problems were not adequately escalated to senior managers.

As in the mid-air collisions described earlier, one hoped that learning, especially organisational learning, would have taken place, but it happened again in 2003: Columbia, as it was returning from a mission, disintegrated as it entered the atmosphere. A foam block that broke away during launch (such foam pieces breaking off had been seen earlier as well) hit the leading edge of the Shuttle's wing and damaged it, creating a hole through which hot gases damaged the Shuttle's thermal protection system (TPS). This led to structural failure of the Shuttle's left wing, and the spacecraft ultimately broke apart during re-entry at an altitude of under 65 km. Investigations revealed damage to the reinforced carbon–carbon leading-edge wing panel, which resulted from the impact of a piece of foam insulation that broke away from the external tank during the launch [32]. Here as well, there was a repetition of management decision-making failures.

18

S. Chandra

foam had impacted the shuttle wing leading edge, there was no understanding that it was such a catastrophic failure mode. After the accident and investigation, NASA made improvements in design, monitoring, and organisation, including inspection in orbit. Redundancies in terms of a standby mission, etc., were also created. In the accident investigation, issues of organisational culture (which is called the normalisation of deviance by Diane Vaughan [23] and will be described in detail in Chap. 5) were spotted along with poor models of the (physics of ) impact of the foam. Some engineers expressed surprise that foam could cause such penetration and damage. There have been other space accidents as well. In the Apollo 12 mission, on 14 November 1969, lightning strikes damaged the fuel cells, guidance systems, and sensors, but the astronauts ensured that every essential system was functional and proceeded to the Moon [33]. The USSR/Russian space programme have also had their accidents. The crew of Soyuz 11 were killed after undocking from space station Salyut 1 after a three-week stay. A cabin vent valve was accidentally opened after service module separation. The recovery team found the crew dead [34, 35]. These are, as of 2019, regarded as the only human fatalities in space. On 18 March 1965, Voskhod had a problem with the world’s first spacewalk by Alexei Leonov. Leonov’s spacesuit inflated in the vacuum, and he could not re-enter the airlock. He allowed some pressure to bleed off and just managed to get back inside the capsule. Everything went awry after that, the spacecraft landed 386 km off course spending two nights in temporary shelters, the cosmonauts skiing to a clearing, from where a helicopter flew them back to base [36]. As of this date, 19 astronauts have died in four separate incidents. The current statistical fatality rate is 3.2% [37]. Space travel is inherently risky and regarded as an adventure. However, accidents are investigated thoroughly; in fact, in the USA, Presidential Commissions are formed to study failures. The Shuttle disasters specifically highlight several issues about human behaviour, whistle blowing, deviance, and the need to be alert to surprisingly different physics encountered in contrast to the original mental models. The Challenger accident has frequently been used as a case study for safety, engineering ethics, whistle blowing, and decision making.
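
A rough arithmetic check helps interpret the 3.2% figure. If the statistic is taken to mean in-flight fatalities divided by the total number of people who have flown to space, the implied denominator can be recovered from the two numbers quoted above. The short sketch below does exactly that; the interpretation and the comparison figure for total space travellers are assumptions, not figures from the sources cited here.

```python
# Minimal sketch: what a 3.2% in-flight fatality rate implies, assuming the
# rate is defined as (in-flight fatalities) / (people who have flown to space).
# The 19 fatalities and the 3.2% rate are quoted in the text above; the
# interpretation and the comparison figure are assumptions.

fatalities = 19
quoted_rate = 0.032  # 3.2%

implied_travellers = fatalities / quoted_rate
print(f"Implied number of space travellers: {implied_travellers:.0f}")
# Prints roughly 594, broadly consistent with the ~550-600 people generally
# reported to have flown to space by the late 2010s.
```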

2.3 Fukushima and Chernobyl

By all accounts, the most feared accidents are nuclear accidents. Both Chernobyl and Fukushima are termed disasters; however, the number of fatalities that can be attributed directly to them is small. The fear of a nuclear disaster is imprinted in our minds. Among the three most important accidents, Three Mile Island, at a nuclear plant in the USA, was the first. A partial meltdown of a reactor and a subsequent radiation leak occurred on March 28, 1979. It was regarded as a very significant accident in the U.S. On the seven-point International Nuclear Event Scale, the incident was rated a five, an “accident with wider consequences”. A failure in the secondary systems led a valve in the main system to fail, which in turn allowed the coolant to escape [38, 39]. The human error lay in operators not spotting these failures and not understanding that they would lead to a loss of coolant as well. This occurred due to imperfect alert systems, lack of adequate scenario building, improper training, and imperfect operator mental models of the processes, which led to mistakes such as overriding the emergency cooling systems. While there was no loss of life directly due to the accident, there was widespread concern about long-term effects such as cancer. Studies later showed an increase in the rate of cancers, but it was insignificant, and no relationship could be established. However, the accident prompted new regulatory practice, a wave of anti-nuclear protests, and a move away from nuclear energy for a while.

The Chernobyl accident was categorised as a disaster. It occurred at the Chernobyl Nuclear Power station in the erstwhile USSR (in present-day northern Ukraine) in 1986. Rated at 7, the maximum on the nuclear event scale, the accident occurred during a safety test that was meant to maintain cooling of the reactor. A shift change meant that unprepared operating personnel were on duty. The test proceeded, and operators were able to restore the specified test power only partially, which put the reactor in a potentially unstable condition. Using a flawed set of operating instructions, the operators shut the reactor down, but an uncontrolled nuclear chain reaction was triggered. The reactor core ruptured, leading to an explosion. An open-air reactor core fire released radioactive contamination for more than a week into Europe and parts of the erstwhile USSR before being contained [40]. Evacuations were initiated, and exclusion zones were identified. Two staff were killed in the explosion, over 100 plant and emergency staff were hospitalised for radiation exposure, and approximately 28 of them later died. There have been conflicting reports of the number of deaths, but a UN report could ultimately attribute approximately 100 deaths directly to the disaster [41, 42]. This accident highlighted a lack of understanding of cascading scenarios, design faults, and the absence of what is well known in the nuclear industry as “defence in depth” strategies. Many modifications to Russian reactor designs were recommended, and the IAEA used the learning from the accident to extensively revamp design and regulatory processes.

The Fukushima nuclear disaster occurred on 11 March 2011, nearly 25 years after the Chernobyl disaster, and was also classified as Level 7. Unlike the other cases, it was set off by an earthquake and a tsunami [43, 44]. Detecting the seismic disturbance, the reactors shut down automatically. When external power failed, the emergency diesel generators started with the aim of running the pumps circulating the coolant, but the tsunami waves flooded the buildings and the emergency generators. This led to loss-of-coolant accidents and then to meltdowns, hydrogen explosions, and the release of radioactivity, which forced large-scale evacuation. A massive quantity of radioactive isotopes was released into the Pacific Ocean. The plant’s operator has since built new walls along the coast and created a 1.5-km-long “ice wall” of frozen earth to stop the flow of contaminated water. Accident investigations revealed that the accident scenario was foreseeable and that the operator of the plant had failed in risk assessment, evacuation planning, assessment of collateral damage, etc. The IAEA also noted that there was poor oversight by METI, the concerned Japanese ministry. Clearly, an earthquake and a tsunami were not anticipated to occur near simultaneously [45].

The handling of safety and risk in nuclear power installations offers interesting lessons. Awareness of the physics of nuclear power and, in some ways, the paranoia associated with it have probably been responsible for international safety protocols. One argument from the nuclear power lobby is that since the 1950s, while the above-listed accidents have been widely publicised, there actually have been only a small number of casualties. The central risk associated with the nuclear power industry is a meltdown that cannot be contained, affecting not just the immediate vicinity but also areas reached by dispersed radioactivity. There have been considerable efforts to ensure that the physics and the technology are understood and that containment is a strong possibility.

2.4 COVID-19, OxyContin, and Thalidomide

Accidents in the healthcare sector would generally be in the form of individual errors, unless one looks at pharma, where drugs used in treatment could have severe side effects or cause other illnesses. However, it is truly diseases such as COVID-19 or other pandemics that can be categorised as disasters. In healthcare, the most feared disaster is a pandemic. A review in the Lancet of the book October Birds, based on Jessica Smartt Gullion’s experience with H1N1, eerily anticipated a pandemic as one of the things to come [46]. The Lancet review noted that “I hope it will also be read simply for pleasure, and instil the question: ‘What if?’ What if a devastating pandemic does emerge? How will we respond?” [47]. The possibility of a pandemic has also featured in novels and in movies such as Contagion.

As the pandemic unfolded, there was much one could say about all aspects of human behaviour and decision making. Except for the 1918 pandemic [48], there is no comparison to it in terms of the scale of fatalities. Earlier, H1N1, SARS, and MERS together led to a much smaller number of fatalities. Any examination of the spread, the decision making, the science involved, the epidemic modelling, and the forecasting shows that there were gaps on all fronts. The pandemic caught all governments on the wrong foot. Some of the most accepted redundancies, checks, and balances associated with risk mitigation principles went unused. There were decision-making issues with masks; early in the pandemic, the WHO argued that cloth masks were “cultural”, in the sense that they were being worn in many countries as part of the culture [49], compared to the protective N95 masks recommended for healthcare professionals. Sometime later, it changed the recommendation, saying it is beneficial if people who are infected wear masks to reduce transmission [50]. This was even after masks had been made mandatory in many countries. There was considerable confusion regarding travel recommendations. Although air travel is now so widespread, the possibility of the virus travelling from Wuhan to Italy and then to New York was never considered a plausible scenario. The WHO did not recommend travel bans explicitly, and individual countries made decisions on their own. No universally accepted checklist exists. On vaccine development, some nations viewed it as strategically important from a position of geopolitical power [51], while others attempted to persuade companies to exclusively commit the vaccine to their populations or to control data [52, 53]. Overall, the pandemic showed cases of real human error, poor mental models of virus transmission, and avarice among individuals and some governments.

There have been disasters in the pharma sector as well, with drugs doing harm to patients in ways that were probably not seen in clinical trials. Two drugs are used here as examples to illustrate pharma disasters. OxyContin is an opioid painkiller that began to be used to treat chronic pain. There has been litigation about Purdue Pharma’s dispensation of the drug [54, 55]. Over the years, the drug has been known to cause addiction, and the company is accused of making $30 billion from sales of OxyContin [55]. The Centers for Disease Control and Prevention (CDC) has noted that approximately 500,000 people have died in the USA from opioid-related overdoses [55, 56], both prescribed and illicit, some of which can be attributed to the pressures of marketing the drug. McKinsey, the management consulting firm that advised Purdue Pharma on increasing sales of the drug, has since apologised for its role in the disaster [57]. The plan appeared to be for Purdue to give retail pharmacy companies rebates while their customers overdosed on OxyContin. This disaster illustrates deviance and gain as part of the cause, apart from a failure to spot possible issues with the drug.

The thalidomide disaster is well documented. First sold in 1957 as effective against several skin conditions, it was subsequently widely used as a sedative. Under the brand name Contergan, it was also publicised as an “absolute nontoxic” and “safe” drug for use by pregnant women against nausea and morning sickness [58, 59]. In many countries it was sold as an over-the-counter (OTC) product and did not need a prescription. CBC News estimates that over the period until it was withdrawn, over 24,000 babies were born with thalidomide-induced malformations, apart from 123,000 stillbirths and miscarriages due to thalidomide [60].

2.5 Other Accidents and Disasters

There have been maritime and railway accidents and disasters as well. Many maritime accidents, some before WWII, were truly large disasters, e.g. the Titanic. Ferry ships and cruise ships have also sunk since then in what are categorised as disasters. On 28 September 1994, MS Estonia sank in the Baltic Sea while plying between Tallinn and Stockholm. The ship had 989 people on board: 803 passengers and 186 crew, most of the passengers being Swedish. Some 852 people died in the disaster [61]. It is regarded as one of the worst maritime disasters of the twentieth century (there are others, especially involving ferries that ply between islands in Southeast Asia). The ship was fully loaded and appeared to be listing to starboard because of poor load distribution. The accident investigation showed that the bow door locks had given way and the door had come loose, pulling the ramp open as well. However, the warning system on the bridge did not provide an alert for such an event. No cameras could pick up the locking mechanism failures, and the design of roll-on/roll-off (RORO) ferries was not adequate for flooding of the car deck. The crew had also not operated correctly: speeds were not reduced, the crew were unaware of the ferry listing, and there was a lack of leadership from the senior staff. This accident specifically led to many safety changes.

There have been several ferry accidents in Southeast Asia, in Indonesia and the Philippines, both island nations that depend on ferries for transport between the islands. Documentation about these accidents and accident reports are difficult to obtain. In 1981, a fire and explosion led to 580 people being lost on the Indonesian ship Tampomas II in the Java Sea [62]. On June 29th, 2000, the Cahaya Bahari, carrying refugees, sank, and 481 were killed [63]. On December 20th, 1987, the Doña Paz, sailing from Tacloban to Manila, collided with the oil tanker Vector off the Tablas Strait near Mindoro and sank soon afterwards. The vessel was carrying 1004 passengers but was only cleared to carry 864 persons, including its crew [64]. On 17 December 1991, the Salem Express, on a voyage from Jeddah, Saudi Arabia to Safaga, Egypt, with more than 1600 passengers, struck a reef and sank within 10 min. While the official toll was said to be 470, local people said that many more died, as the ship was overcrowded with unlisted passengers returning from pilgrimage to Mecca [65, 66]. There have been many other accidents, and many of the disasters have been a combination of bad weather, poor training, and bad decision making. These are disasters alright, as much as any other. Regulation and oversight in the maritime sector, while they exist, are not as strong as in aviation, space, and nuclear energy.

In railways, over the years, there has been a spate of accidents. Railway accidents are classified in terms of head-on collisions, rear-end collisions, derailments, etc. The largest number of fatalities has been due to derailments. Most railway accidents are found to come from equipment failure (cracked rails, faulty brakes, malfunctioning signals) or driver error. The WHO estimates that approximately 1.35 million people die in road accidents across the world each year, which could be regarded by some as an annual disaster if one looks at the statistics. According to the WHO, ninety-three per cent of the world’s road fatalities occur in low- and middle-income countries, even though these countries have approximately 60% of the world’s vehicles [67, 68]. Approximately 50 million more people suffer nonfatal injuries, with many incurring a disability as a result of their injury [67]. Road traffic injury death rates are highest in the African region. Even within high-income countries, people from lower socioeconomic backgrounds are more likely to be involved in road traffic crashes. Additionally, road traffic injuries are the leading cause of death for children and young adults aged 5–29 years. From a young age, males are more likely to be involved in road traffic crashes than females. Approximately three quarters (73%) of all road traffic deaths occur among young males under the age of 25 years, who are almost three times as likely to be killed in a road traffic crash as young females [67, 68]. Many of the factors that cause road accidents are specific to the driver: driving under the influence of alcohol, speeding, distractions, etc. The risk is truly not so much due to technology or poor regulation; it is about driver discipline and the state of the roads. However, there have been cases where technology or product malfunctions and faults have played a role. For example, fuel tank ruptures in the Ford Pinto led to fatalities [69]. In the case of the Takata airbag, which malfunctioned, a safety device became responsible for the deaths of car passengers [70, 71].

In many of the accidents described in this chapter, there truly is no single root cause. Some have been termed disasters, while others are just accidents. While engineers and scientists attempt to learn from previous accidents, the sociologist’s view of inevitability has also been demonstrated. In later chapters, this book examines complexity, poor training, organisational behaviour, and, more importantly, deviant human behaviour and the nudge to take a risk for gain. This is different from just finding a “root cause” for an accident or just looking for a chain of small errors that incubate and progressively become responsible for a disaster. The social and organisational aspects sometimes appear eerily similar across many accidents, in terms of ignoring early signals of problems, pressures from cost and schedule, belief systems, and mental models of failure and consequences conditioned by previous experiences. All of these are addressed in subsequent chapters. Much of the regulation to prevent a disaster is based on theories, processes, and practices developed by scientists and engineers who deal with complexity. Human behaviour, social interaction, and the sense of a nudge to seek risk for gain have not truly been part of accident regulation. We explore each of these aspects further in the book.

References 1. Picheta, R. (2019). Ethiopian Airlines crash is second disaster involving Boeing 737 MAX 8 in months. CNN , March 11, 2019. 2. Suhartono, M., & Beech, H. (2018). Indonesia plane crash adds to country’s troubling safety record. New York Times, October 28, 2018.

3. Mark, R. (2019). NTSC issues final report on last fall’s Lion air accident. Flying. https://www.flyingmag.com/ntsc-lion-air-final-report/. Accessed August 18, 2022. 4. ICAO. (2010). Annex 13 to the convention on international civil aviation aircraft accident and incident investigation. 5. Jolly, J. (2022). Boeing 737 Max disaster casts long shadow as planemaker tries to rebuild fortunes, June 25, 2022. 6. Allam, Z. (2020). The first 50 days of COVID-19: A detailed chronological timeline and extensive review of literature documenting the pandemic. Elsevier Public Health Emergency Collection (pp. 1–7), July 24, 2020. 7. Christensen, S. R., Pilling, E. B., Eyring, J. B., Dickerson, G., Sloan, C. D., & Magnusson, B. M. (2020). Political and personal reactions to COVID-19 during initial weeks of social distancing in the United States. Plos One, 15 (9). 8. Weible, M. C., Nohrstedt, D., Cairney, P., Carter, D. P., Crow, D. A., Durnová, A. P., Heikkila, T., Ingold, K., McConnell, A., & Stone, D. (2020). COVID19 and the policy sciences: Initial reactions and perspectives. Policy Sciences, 53, 225–241.https://doi.org/10.1007/s11077-020-09381-4 9. Engel, C., Rodden, J., & Tabellini, M. (2022). Policies to influence perceptions about COVID-19 risk: The case of maps. Science Advances, 8(11). 10. What is an accident?—Definition from Safeopedia. https://www.safeopedia. com/definition/204/accident. Accessed August 7, 2022. 11. BBC News. (2019). The true toll of the Chernobyl disaster, July 26, 2019. 12. Statista. (n.d). Number of road traffic-related injuries and fatalities in the U.S. from 1990 to 2020. https://www.statista.com/statistics/191900/road-traffic-rel ated-injuries-and-fatalities-in-the-us-since-1988/. Accessed August 6, 2022. 13. Walker, T. (2020). First thing: Six months into the Covid-19 pandemic, 150,000 Americans are dead. The Guardian, July 30, 2020. 14. Aviation Safety Network. (2022). Statistics by period, (aviation-safety.net) https://aviation-safety.net/statistics/period/stats.php. Accessed August 6, 2022. 15. Ziomek, J. (2018). Collision on Tenerife: The how and why of the world’s worst aviation disaster. Post Hill Press. 16. Burns, J. F. (1996). Two airliners collide in Midair, killing all 351 aboard in India. The New York Times, November 13, 1996. 17. Lahoti, R. C. Report of the court of enquiry on mid-air collision between Saudi Arabian Boeing 747 and Kazakhstan IL-76 on 12th November, 1996 near Delhi-India (Chakri-Dadri, Haryana). Ministry of Civil Aviation, Government of India. https://skybrary.aero/sites/default/files/bookshelf/32383.pdf. Accessed August 20, 2022. 18. Buergler, E. (2007). Swiss convict 4 air controllers over 2002 jet crash. Reuters, September 4. https://www.reuters.com/article/idUSL04818580. Accessed August 25, 2022. 19. German Federal Bureau of Aircraft Accident Investigations. Investigation report AX001-1-2/02, May 2004.

20. Bott, M., & Paterson, T. (2005). Father of air-crash victims guilty of revenge killing. The Independent, October 26, 2005. 21. Goyer, R. (2011). Gulfstream temporarily suspends G650 flying–FAA issues report. Flyingmag.com, April 6, 2011. 22. NTSB. (2011). Accident report NTSB/AAR-12/02 PB2012-910402, crash during experimental test flight gulfstream aerospace corporation GVI (G650), N652GD Roswell. New Mexico, April 2, 2011. 23. Vaughan, D. (2016). The challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press. 24. Sommerlad, J. (2019). Kegworth air disaster: What happened and how did the plane crash change airline safety? The Independent, January 8, 2019. 25. BCAS. (2002). Accident on 25 July 2000 at La Patte d’Oie in Gonesse (95) to the Concorde registered F-BTSC operated by Air France, Bureau of Enquiry and Analysis for Civil Aviation Safety, January 16, 2002. 26. Withey, P. (2019). The real story of the comet disaster: De Havilland comet-structural fatigue. Lecture organised by RAeS Hamburg Branch Hamburg Aerospace Lecture Series. HAW Hamburg (Hamburg University of Applied Sciences). https://www.fzt.haw-hamburg.de/pers/Scholz/dglr/hh/text_2 019_01_24_Comet.pdf. Accessed May 2022. 27. Langewiesche, W. (2019). What really brought down the Boeing 737 Max? New York Times, September 18, 2019. 28. Jennings, T. (2021). How ‘Boeing’s Fatal Flaw’ grounded the 737 Max and exposed failed oversight. The New York Times, September 13, 2021. 29. Gelles, D. (2022). How Jack Welch’s reign at G.E. gave us Elon Musk’s Twitter feed. New York Times, May 21, 2022. 30. Gelles, D. (2020). Boeing’s 737 Max is a Saga of capitalism gone awry. New York Times, November 24, 2020. 31. Rogers, W. P., Armstrong, N. A., Acheson, D. C., Covert, E. E., Feynman, R. P., Hotz, R. B., Kutyna, D. J., Ride, S. K., Rummel, R. W., Sutter, J. F., Walker, A. B. C., Wheelon, A. D., & Yeager, C. E. (1986). Report of the Presidential Commission on the space shuttle challenger accident. NASA, June 6, 1986. 32. Gehman, H. B., John, D. D., Hallock, J., Hess, K. H., Logsdon, J., Ride, S., Tetrault, R., Turcotte, S., Wallace, S., & Widnall, S. (2003). Report of Columbia Accident Investigation Board . NASA, August 6, 2003. 33. NASA. (1970). Apollo 12 mission report (publication number MSC-01855). NASA. 34. Time. (1971). Triumph and tragedy of Soyuz 11. Time Magazine, July 12, 1971. https://content.time.com/time/subscriber/article/0,33009,903 011,00.html. Accessed August 16, 2022. 35. Uri, J. (2021). 50 years ago: Remembering the crew of Soyuz 11. NASA Johnson Space Center, June 30, 2021. https://www.nasa.gov/feature/50-years-ago-rem embering-the-crew-of-soyuz-11. Accessed September 4, 2022. 36. Burgess, C. H., & Hall, R. (2009). The first soviet cosmonaut team: Their lives and legacies. Springer Science & Business Media.

37. Harwood, W. (2005). Astronaut fatalities. spaceflightnow.com. https://www.spa ceflightnow.com/shuttle/sts114/fdf/fatalities.html 38. Rogovin, M. (1980). Three mile Island: A report to the commissioners and to the public. Volume I. Washington, D.C.: U.S. Nuclear Regulatory Commission. 39. Walker, J. S. (2004). Three mile Island: A nuclear crisis in historical perspective (pp. 209–210). University of California Press. 40. Medvedev, G. (1989). The truth about Chernobyl . Basic Books. 41. Gray, R. (2019). The true toll of the Chernobyl disaster. BBC , July 26, 2019. https://www.bbc.com/future/article/20190725-will-we-ever-knowchernobyls-true-death-toll. Accessed August 12, 2022. 42. IAEA. The 1986 Chornobyl nuclear power plant accident. https://www.iaea. org/topics/chornobyl. Accessed August 12, 2022. 43. Fackler, M. (2011). Report finds Japan underestimated Tsunami danger. The New York Times, June 1, 2011. 44. Fackler, M. (2012). Nuclear disaster in Japan was avoidable, critics contend. The New York Times, March 9, 2012. 45. International Atomic Energy Agency (IAEA). (August 2015). The Fukushima Daiichi accident: Technical Volume 1/5—Description and Context of the Accident. 46. Gullion, J. S. (2014). October birds: A novel about pandemic influenza, infection control, and first responders, social fictions. Sense. 47. Hayward, A. (2014). Sociology of a pandemic by Jessica Gullion, October. Lancet Respiratory Medicine. 48. (2022). We can learn from how the 1918 pandemic ended. New York Times, January 31, 2022. https://www.nytimes.com/2022/01/31/opinion/covidpandemic-end.html 49. WHO. (2020). Advice on the use of masks in the community, during home care and in health care settings in the context of the novel coronavirus (2019nCoV) outbreak, interim guidance 29 January 2020. WHO reference number: WHO/nCov/IPC_Masks/2020.1. 50. WHO. (2020). Advice on the use of masks in the context of COVID-19. Interim Guidance, April 6, 2020. World Health Organization. 51. Huffpost. (2020). The vaccine race has become a high-stakes geopolitical gamble. 52. The Plos Medicine Editors. (2022). Vaccine equity: A fundamental imperative in the fight against COVID-19. PLOS Medicine, February 22, 2022. 53. Oleinik, A. (2020). The politics behind how governments control coronavirus data. The Conversation, June 4, 2020. 54. Van Zee, A. (2009). The promotion and marketing of Oxycontin: Commercial triumph, public health tragedy. American Journal of Public Health, 99 (2). 55. Editorial. (2020). OxyContin settlement doesn’t begin to make up for the harm Purdue Pharma has caused. LA Times. https://www.latimes.com/opinion/story/ 2020-10-22/purdue-pharma-settlement-justice-department. Accessed August 25, 2022.

56. CDC. Understanding the opioid overdose epidemic. https://www.cdc.gov/opi oids/basics/epidemic.html. Accessed August 25, 2022. 57. Walt Bogdanich and Michael Forsythe. (2020). McKinsey issues a rare apology for its role in Oxycontin sales. New York Times, December 8, 2020. 58. James, E. R. (2013). The thalidomide disaster, lessons from the past. Methods in Molecular Biology, 947 , 575–586. 59. Knightley, P., & Evans, H. (1979). Suffer the children: The story of thalidomide. The Viking Press. 60. Schwartz, D. (2017). Thalidomide disaster: Background to the compensation debate. CBC News, December 6. https://www.cbc.ca/news/health/thalidomideexplainer-1.4434746. Accessed August 12 2022. 61. The Joint Accident Investigation Commission of Estonia, Finland and Sweden. (1997). Final report on the capsizing on 28 September 1994 in the Baltic Sea of the Ro-Ro passenger vessel MN Estonia. 62. Ingwerson, M. (1981). Sinking of Indonesian ocean liner puts new focus on ship safety reforms. The Christian Science Monitor, January 28, 1981. https:// www.csmonitor.com/1981/0128/012847.html 63. New York Times News Service, 500 Indonesian refugees feared lost I ferry sinking, Chicago Tribune, June 30, 2000. https://www.chicagotribune.com/ news/ct-xpm-2000-06-30-0006300210-story.html. Accessed August 15, 2022. 64. The Editorial Team. (2022). Sinking of Doña Paz: The world’s deadliest shipping accident, in accidents, maritime knowledge. Safety4Sea. https://safety4sea. com/cm-sinking-of-dona-paz-the-worlds-deadliest-shipping-accident/. Accessed August 15, 2022. 65. Murphy, K. (1991). Up to 470 missing as Egyptian ferry hits Red Sea Reef, Sinks. LA Times, December 16, 1991. https://www.latimes.com/archives/laxpm-1991-12-16-mn-491-story.html 66. The Associated Press. (1991). Divers recover bodies of captain and others from Egyptian Ferry By The Associated Press. The New York Times, December 18, 1991. https://www.nytimes.com/1991/12/18/world/divers-recover-bodiesof-captain-and-others-from-egyptian-ferry.html 67. WHO. (2018). Global status report on road safety 2018, June 17, 2018. https:// www.who.int/publications/i/item/9789241565684. Accessed August 15, 2022. 68. VOA. (2021). UN aims to cut millions of road traffic deaths, injuries by half. VOA, October 31, 2021. https://www.voanews.com/a/un-aims-to-cut-mil lions-of-road-traffic-deaths-and-injuries-by-half-in-next-decade/6292737.html. Accessed August 24, 2022. 69. Wojdyla, B. (2011). The top automotive engineering failures: The Ford Pinto fuel tanks. Popular Mechanics, May 20, 2011. https://www.popularmechanics. com/cars/a6700/top-automotive-engineering-failures-ford-pinto-fuel-tanks/. Accessed August 15, 2022. 70. Gearheads, P. (2020). Killer Takata airbags: The deadly secret automakers don’t want you to know. Autowise, August 26, 2020. 71. Eisenstein, P. A. (2020). First came a worldwide recall for air bags. Now, millions of Takata seat belts may also be faulty. NBCnews, October 16, 2020.

3 Learning from Failures—Evolution of Risk and Safety Regulation

Ralph Nader, later to become a Presidential candidate in the USA, is known as one of the greatest activists for consumer protection regulation in modern times, as evident from a documentary film made about him called “An Unreasonable Man” [1]. Born in 1934, a child of Lebanese immigrants, Nader went on to attend Princeton University. Ralph Nader saw things differently, and he was tenacious as well. One day, at Princeton, he saw dead birds on the sidewalk. He noted that pesticide had been sprayed the day before; he took one bird to the student newspaper, where the editor thought that DDT was harmless [2]. Everybody now knows that DDT was anything but harmless. It is part of the folklore about him that while he was at law school at Harvard, hitchhiking one day, he came upon a car accident in which a little child had been decapitated because the glove compartment door had opened up like a guillotine. Later, his law school paper on automotive safety became a bestselling book titled “Unsafe at Any Speed” [3, 4].

What drove Nader was the state of the automobile in the 1950s and 1960s. While America was unconditionally in love with the car in that period, it is well known that even with many technologies already available (e.g. the seat belt was invented in the nineteenth century and first used in 1946 [5]), carmakers were hesitant to make cars safer because of the additional costs, and, more importantly, the public was also ambivalent about the danger. Thus, as demonstrated repeatedly even today, manufacturers went by customer views, and in fact, many customers asked for the seat belts to be removed at that time, until seat belts became compulsory in cars. In other words, the industry catered to the risk perception of the day among the public. However, based on Nader’s work, the US Congress unanimously enacted the National Traffic and Motor Vehicle Safety Act [6]. Safety features such as seatbelts have been mandatory since.

The story of safety regulation, especially of vehicles, goes back to the evolution of the stagecoach in the 1800s, where anecdotal and historical reports appear to have led to improvements [7, 8]. The author would argue that modern-day engineers do the same on the back of science, by testing and monitoring. However, as the sociologists have pointed out, accidents continue to happen because human behaviour is involved. The sense of safety and the perception of risk go hand in hand. While sometimes not explicit or direct, there has been continuous debate and pushback between sociologists and engineers. Interestingly, it is the scientists and engineers who participate in framing the regulations, and many times it is the public who are activists for regulations based on their risk perceptions. The reason regulation is required to be made by the government lies in evaluating the risk and accepting that risk for the larger good. The evolution of regulation is based on learning from accidents, leading to improvements, better processes, and oversight. Over the years, the practice of introducing regulations for risk and safety has been through legislation, with each piece of regulation scrutinised, drafts circulated for open debate, and the result voted into law. However, there has always been a tussle between activists who push to the hilt for safety and the businesses that make and sell products, since businesses find that each additional safety feature increases the price of the product and affects viability and profit.

In nuclear power, aviation, and healthcare, safety regulation has been rigorous in most countries. The international agencies that operate under the aegis of the UN, the International Atomic Energy Agency (IAEA) in nuclear power, the International Civil Aviation Organisation (ICAO) in aviation, and the World Health Organisation (WHO) in health, do not have regulatory power over individual countries unless the nations accept the recognitions and certifications of these agencies, and many times these agencies play only an advisory role. It is the IAEA that seems to have had some success in terms of nuclear plant inspections and influencing policy. One suspects the fear of a nuclear accident and its implications have given the IAEA more teeth. The creation of the IAEA occurred in a bipolar world where the US and USSR were the major players, in comparison with the present-day situation where the number of stakeholders has increased. The paradox is that, overall, accidents involving nuclear energy have directly been responsible for a much smaller number of deaths than the pandemic, automobiles, or even aviation. In this chapter, the state of regulation is examined further in aviation, space, nuclear energy, healthcare, automotive, and marine environments.

3.1 Bird Strikes and Air Crashes

On January 15th, 2009, an Airbus 320 (Flight 1549) ditched in the Hudson River in New York City just after take-off from LaGuardia airport. The aircraft lost both engines after a flock of Canada geese flew into them and shut them down. Captain Sullenberger, in command of the aircraft, decided to ditch in the Hudson River, as the aircraft would not be able to return to LaGuardia or reach Teterboro (airports in the vicinity). All 155 people on board survived and were rescued by boats that were nearby [9]. Sullenberger became a national hero in the USA, and a movie called Sully was later made about the incident. However, many questions can be asked. Was the possibility of a bird strike shutting down both engines not accounted for? Was the procedure that Capt Sullenberger and the co-pilot adopted correct? Were the ditching-related regulations correct? Sociologists such as Charles Perrow, whose work is examined in detail in Chap. 5, argue that such events are inevitable, given the complexity involved.

Even though the Federal Aviation Administration (FAA) has made many changes to bird strike regulations since 1962, bird strike accidents have occurred, although the loss of life has been curtailed. United Airlines Flight 297 was a scheduled flight from Newark International Airport to Atlanta that crashed 10 miles southwest of Baltimore on November 23, 1962, killing all 17 people on board. An investigation concluded that the aircraft, a Vickers Viscount 745D turboprop airliner, had struck at least two whistling swans, which caused severe damage to the plane, resulting in a loss of control [10, 11]. The accident resulted in a greater understanding of the amount of damage that can be caused by bird strikes during flight. As a result, the FAA issued new safety regulations that required newly certified aircraft to better withstand in-flight impacts with birds without affecting the aircraft’s ability to fly or land safely. On May 8, 1970, section 25.631 “Bird strike damage” of the Code of Federal Regulations of the USA took effect. This regulation added a requirement that the empennage (the tail) structure of an aircraft must be designed to assure the capability of continued safe flight and landing after an impact with an eight-pound bird during flight at likely operational speeds [12].

There have been contrasting regulations as well. In the late 1960s and early 1970s, the Joint Aviation Authorities were formed to produce the Joint Aviation Requirements (JAR) for the certification of large aircraft in Europe. The Joint Aviation Requirements were largely based upon section 25 of the U.S. Code of Federal Regulations. The JAR (section 25.631) specified that the entire aircraft, not just the empennage, had to be designed to withstand a bird strike, but instead of an eight-pound bird as specified in the FAR, it specified only a four-pound bird. Therefore, there has been a difference of opinion between Europe and the USA on the regulation, with the FAA (USA) recommending an 8-pound bird and EASA (Europe) recommending a 4-pound bird [13].

In the Hudson River accident [9], the aircraft was hit by a flock of birds, both engines shut down, and the aircraft ditched in water. There were also several pieces of new knowledge from this accident, principal among them being the fact that all pilots are trained to deal with in-flight shutdowns of engines, but with the aircraft at a higher altitude. In such cases, there is time to attempt to restart at least one of the engines. This was not the case in the Hudson River accident: the altitude was too low, and the pilots could not complete the checklist for engine restart before ditching into the Hudson River. Investigations and reviews showed that the pilots had followed the recommended procedure but just did not have the time to restart the engines, given the low altitude.

In March 2019, aviation authorities around the world grounded the Boeing 737 MAX aircraft after the Lion Air and Ethiopian Airlines crashes that killed 346 people (Lion Air Flight 610 on October 29, 2018, and Ethiopian Airlines Flight 302 on March 10, 2019). The aircraft had been certified based on previous configurations of the 737 aircraft and additional modifications. However, investigators found that a new automated flight control system, the Maneuvering Characteristics Augmentation System (MCAS), malfunctioned on both flights, putting the planes into unrecoverable dives. The regulator was alerted just after the first crash, and Boeing and the regulator (FAA) communicated new procedures to all airlines for disabling the MCAS. By then, Boeing had already been criticised for not having incorporated the system into the crew and training manuals. Airlines believed that extensive simulator training was not needed, as, because of the MCAS software, pilots would not see a difference in flying it compared with previous configurations. The regulator, the FAA, had accepted this [14].

The US regulator FAA’s certification of the MAX was investigated by the U.S. Government, Congress, the FBI, and the Department of Transportation (DOT). New committees and panels were created to examine the FAA’s system of delegating certification authority to the manufacturers and third-party designated engineering representatives (DERs). The issue was whether there was a conflict of interest, whether that was appropriate, and whether it had weakened the certification structure. During this process, more defects and faulty processes were also found regarding the Boeing 737 Max [14]. Accident investigation findings have been published. The Indonesian authorities found issues with design, certification, maintenance, and the flight crew as well [15]. The Ethiopian authorities said that the flight crew had attempted the recovery procedure and blamed the aircraft software [16], while the U.S. NTSB also noted that multiple alerts issued by the aircraft warning systems confused the flight crew. The U.S. Congress (House of Representatives) was critical of Boeing’s culture and processes, especially the concealment of information from both the certification authorities and the airlines. The fact that no additional simulator training for the 737 Max was proposed was noted as being inappropriate [17]. The accidents and the grounding of the 737 Max cost Boeing dearly [18], some would argue much more than actually designing and building a new aircraft, a lesson not to be forgotten. Some changes have occurred since. Boeing now recommends flight simulator training for all MAX pilots, and extensive testing has meant that the aircraft is flying again [19]. Regulatory oversight is now in for an overhaul.

Nearly every country has an aviation regulator and usually an accident investigation organisation such as the National Transportation Safety Board (NTSB) of the USA. When an accident takes place, the country in which the accident occurs assumes responsibility for the investigation. The manufacturer and the accident investigator of the country where the aircraft was manufactured join the investigation. However, there have been disagreements, and even the readings and interpretations of the cockpit voice recorder or the flight data recorder are hotly contested, as interpretations can determine whether there was a problem with the aircraft or whether there was pilot error.

Much aviation regulation is about ensuring that failures are not catastrophic and that aircraft have the ability to “take you back home”. This is achieved through damage tolerance, fault tolerance, resilient systems, and safe-life redundancies, which are discussed in Chap. 4. In an aircraft or space system, layers of redundancies can only be built up to a certain point. Otherwise, the vehicle is too heavy to fly. The job of the regulator is to ensure that these redundancies work. Irrespective of the many redundancies, a disaster can happen if the regulatory organisation loses its ability for oversight, the job it was specifically supposed to do.

The Boeing 737 Max disaster stands out in terms of failure of oversight. An article in the New York Times describes how regulators never independently assessed the risks of the dangerous software known as MCAS when they approved the plane in 2017 [20]. Many employees at both the FAA and Boeing described a broken regulatory process that had made the FAA powerless. A system intended to take the load off the regulator, by handing what were considered well-proven and routine processes over to the manufacturers themselves to monitor, had broken down. Regulatory oversight is also a matter of trust between the regulator and the manufacturer; it clearly had not worked this time, even if it had before. Boeing had performed its own assessments of the MCAS system, which were not overseen or, as the New York Times notes, “stress-tested” by the regulator, which had manpower constraints as well. The FAA had moved the responsibility for approval of the MCAS to the manufacturer and therefore did not obtain further information on the system’s complexity. Boeing had further modified the system, relying on a single sensor. Many of the responsibilities were compartmentalised, and there seemed to be no sharing of information or alerts.

This accident has alerted law agencies to look at whether the regulatory system is flawed. This is important, as the entire regulatory system for aviation is not broken: analysis of accidents from the 1920s till now shows that the number of commercial aircraft accidents has steadily reduced. There is always a comparison with road accidents, with the frequent comment that it is safer to fly than to cross a road. Therefore, the engineer’s philosophy of constant learning to minimise risk is not to be scoffed at. The Boeing 737 Max accident was essentially an aberration and, as we examine in subsequent chapters of the book, a case of the known unknown. A challenge in aviation is automation, as examined in Chap. 4 on complexity. Automation, which is used to reduce accidents by minimising pilot workload, can in turn cause accidents if all failure scenarios are not envisaged during design. Every aspect of the processes in aviation, i.e. design, manufacture, operation, maintenance, and disposal, is supposed to have oversight, and it generally does. What happened with the Boeing 737 is that oversight of certain equipment was handed back to the manufacturer on the basis that Boeing had an internal oversight system that could be trusted by the FAA. The plan of less government clearly did not work here. Vaughan [21], in her seminal work on the Shuttle disasters, calls the failure of oversight mechanisms the “normalisation of deviance”, a term that has stuck in the sociological analysis of man-made disasters. The story of the Boeing 737 regulation falls into this category. More discussion of this will be provided in Chap. 5.

3.2 Elon Musk and Space Travel

In space travel, the perception of risk is all about a sense of adventure, about astronauts fully conscious that they are doing things never done before; as the saying goes: “Go where no man has gone before”. It is not as if space exploration and its commercial use are completely unregulated, but the author argues that right now regulation has the appearance of how stagecoach regulation must have looked in the Wild West. Much of the commercial and private-company use of space is still uncharted territory, although there have been bilateral and multilateral agreements between countries, which too are now under challenge, given that some countries intend to militarise space. For example, on 29th November 2020, reports showed that Indian and Russian satellites had come within metres of each other in space; the Indian space agency said these things happen and are not so uncommon, as there are thousands of satellites in orbit [22]. The general practice is that the two agencies discuss and carry out a manoeuvre. Therefore, in comparison with mid-air collisions in aviation, where regulatory agencies are involved, in space the issue was sorted out bilaterally between the two owners of the satellites. Even now, the orbital space around the earth is crowded, with debris and possibilities of collisions. There is also the possibility of space debris falling back to earth. But what if outer space is opened up to private companies and human space travel is allowed for all?

Private companies are now active in launching, running cargo flights to the space station, and exploring planets. SpaceX has revolutionised space vehicle design, launch economics, and access to space. As NASA moves from being the launch operator to a regulator, it grapples with managing private companies launching space vehicles. Recently, organisational safety assessments (OSA) were triggered after Elon Musk, SpaceX’s founder and chief executive, was seen smoking marijuana and sipping whisky during an interview [23]. NASA officials were more concerned about SpaceX, founded by Musk in 2002, than they were about Boeing, which had worked alongside NASA for decades. “Boeing didn’t do anything to trigger a deeper dive”, said one official in 2018; NASA later reversed that position, and both companies were assessed [24]. Boeing’s crew capsule did not dock correctly, and the entire mission was abandoned. However, it was Musk and his marijuana that triggered a safety assessment of SpaceX. Something similar occurred again on 29 January 2021: the Verge [25] reported that Elon Musk’s SpaceX violated its launch licence in an explosive Starship test, triggering an FAA probe.

There is now growing recognition that the use of commercial space vehicles requires regulation. Lack of regulation in the early stages of the aviation sector also led to a number of accidents. However, Caryn Schenerwerk of SpaceX [26] noted that the Wright brothers were not flying over major populated cities, that present space launches were not doing so either, and that commercial aviation was not comparable with space launches. Schenerwerk argued that in terms of sheer volume there is a massive difference, with approximately 40–50 launches set for 2020 compared to millions of flights. Therefore, regulation modelled on aviation appears to be a concern for new commercial space explorers, and an early pushback is being seen. Overall, Schenerwerk suggested that anticipating some potential future issues, based on the opinion that the aviation and space industries are similar, could stifle progress towards the goals of space exploration. This has been the story of aviation as well, and regulators are aware of it, as product developers see regulation as a barrier from an economics viewpoint. It has been seen in the automotive sector too, as discussed earlier, in terms of the safety devices that regulators asked to be incorporated into cars.

In the space exploration agencies, which are essentially under government control and funding (NASA of the USA, JAXA of Japan, ISRO of India, CNSA of China, and Roscosmos of Russia), regulation is by the agency itself, which takes on the role internally. NASA was both the operator and the oversight regulator. Therefore, in terms of science, engineering, etc., all of the “rocket science” was under one roof. That position had its problems: the Shuttle disasters, Challenger and Columbia, showed up serious organisational and technical issues. There is no international watchdog like the IAEA, although there is space law under the UN. As a result, much needs to be done before the true commercial use of outer space begins. Today’s NASA, however, is becoming a regulator, as it contracts out launch vehicles and exploration to private companies. SpaceX, Boeing, and many others are now working to launch space vehicles for NASA as well as for private companies: to the International Space Station (ISS), satellites, observatories, etc. Elon Musk launched his own car as part of the test for the Falcon Heavy launch vehicle. Now, the obvious question is whether there was any regulation for this. The need for a strong international regulator for space is now real. Space travel and exploration appear to be set for independent regulation at the national and international levels, with wide consultation between governments and private sector launch companies. The role of any country’s space agency, or even of an international regulator, would have to be multifaceted, dealing with launch vehicle safety, re-entry from orbit, flying over populated cities, space debris and collisions, and much else.

3.3 Nuclear Energy and IAEA

What is not that well known is that in nuclear energy, there is an active international regulator on hand: The International Atomic Energy Agency (IAEA) based in Vienna. When the Fukushima nuclear disaster occurred in 2011, graded at level 7, the IAEA was involved straight away. Promptly, the IAEA activated its emergency response teams, its Incident and Emergency System, coordinated with Japan and other countries and started discussions with member countries [27]. The IAEA statute took it to Iraq during the crisis to know if it had nuclear weapons and was actively involved in Iran as well, monitoring its nuclear programme as it has been mandated to inspect proliferation issues [28, 29]. Formed in 1957 under the aegis of the UN, it has established oversight on nuclear safety worldwide. Its role is that of an auditor of world-nuclear safety and has played an enlarged role since the Chernobyl disaster. As in aviation, in some ways more international and global, the IAEA prescribes safety procedures and reports even minor incidents, and most countries work with the IAEA. The design of nuclear power plants is approved by national regulators based on design practice that is codified and which is many times based on international collaboration. While the plant operator is responsible for safety, the national regulator is responsible for approvals, inspections, and continued licencing. Scenarios of failure and accidents are examined, and standard operating procedures are designed for it, including setting up redundancies in several layers. There have been considerable discussions about nuclear power reactors exploding like a bomb, and after considerable research, such an event has been dismissed as impossible, as the level of fuel enrichment is very low for that to occur.

38

S. Chandra

While sociologists hold the view that even the best run nuclear power plants could have an accident, the reality is that safety processes, standard operating procedures, and scientific risk analyses are in place, and a very low probability of occurrence has been estimated. The Nuclear Regulatory Commission (NRC) of the USA has specified that reactor designs must have a very low probability of failure (1 in 10,000 years) [30]. European safety regulators specify a 1-in-a-million frequency of occurrence [31]. This is the kind of metric used in civil aviation as well, such as a flight control failure rate of 1 in 10 million flights [32]. Nuclear power plants are equipped with layers of safety mechanisms, known as the 'defence in depth' concept [33]: careful design and construction, monitoring equipment that checks for and alerts operators to errors, redundancy in control systems to prevent damage to the fuel, and containment of any release. In a sense, what these safety rings indicate is that the physics and chemistry are well known, the possible scenarios have been studied to ensure the engineering is appropriate, and monitoring equipment exists to alert operators to any malfunction and to activate emergency controls if coolant is inadequate and the fuel may be damaged. There is a series of physical barriers between the plant, its fuel containment, and the external environment. To engineers, the safety picture actually looks quite clear, yet "unforeseen" scenarios have occurred, especially in a cascading manner at the Fukushima plant.

For a long time, nuclear reactor accidents have been regarded as low-probability but high-consequence risks. Because of the high consequences, the public and policy makers have been reluctant to accept this risk. However, the understanding of the physics and chemistry of a reactor core, and of the engineering involved, means that the consequences of an accident are likely to be much less severe than those from other industrial and energy sources [33]. Experience, including Fukushima, bears this out, as was discussed in a previous chapter. Even in nuclear accidents, it is argued that many of the events are driven by human error [31]. Clearly, focusing efforts on reducing human error will reduce the likelihood of events. Following the Fukushima accident, the focus has also been on the organisational weaknesses that increase the likelihood of human error. In Fukushima, three reactors were abandoned after the fuel was damaged because cooling was inadequate; a hydrogen explosion led to the fourth reactor being abandoned. The lessons from the regulatory role of the IAEA, the scientific approach, and the building of redundancies are many. However, it is important to ask if a major disaster, as we have seen in a pandemic, is a possibility with nuclear power. Because there are things one cannot envisage or foresee, the real unknowns, that possibility still exists. Few other sectors have had as much opportunity or invested as much scientific effort as nuclear energy has to resolve foreseeable issues. As will be discussed in subsequent chapters, the sociologist's view is different and is driven by the human inability to anticipate and foresee some events.
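
To make the reactor-year frequencies quoted above concrete, a short back-of-the-envelope calculation can help. The sketch below is illustrative only: the 1-in-10,000-reactor-years frequency is taken from the figure cited in the text, while the fleet size and operating horizon are round numbers assumed for the example, not regulatory data.

    # Illustrative only: converts an assumed per-reactor-year failure frequency
    # into the chance of at least one event somewhere in a fleet over time.
    import math

    freq_per_reactor_year = 1.0 / 10_000   # frequency cited in the text
    reactors = 400                         # assumed worldwide fleet size
    years = 40                             # assumed operating horizon

    exposure = reactors * years            # total reactor-years
    # Poisson approximation: P(at least one event) = 1 - exp(-rate * exposure)
    p_at_least_one = 1.0 - math.exp(-freq_per_reactor_year * exposure)
    print(f"Exposure: {exposure} reactor-years")
    print(f"P(at least one event) ~ {p_at_least_one:.2f}")   # ~0.80 here

The point of the sketch is that a figure of "1 in 10,000 years" per reactor still implies a non-trivial chance of at least one event somewhere in a large fleet over decades, which is one reason per-plant and fleet-wide risk are perceived so differently in public debate.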

3.4 Stagecoaches, Cars, and Seatbelts

By the time Ralph Nader's book "Unsafe at Any Speed" was published in 1965, approximately 1,690,000 people had died in motor vehicle accidents in the USA between 1903 (when the "horseless carriage" was introduced) and 1965, according to a book published by the National Academy of Sciences (USA) [34]. In 1965 alone, the death toll from motor vehicle accidents was 49,000; deaths from traffic injuries increased annually, and 10,000 more deaths occurred in 1965 than in 1955 [34]. The history of the stagecoach is replete with improvements made towards safety by the stagecoach builders, albeit based on anecdotal inputs [7]. Again, many ideas were abandoned through the eighteenth and nineteenth centuries because they did not make economic sense in the marketplace. Likewise, it is not as if there were no safety-driven research or additions made to automobiles in the early twentieth century. By the 1940s [35, 36], padded dashboards, padded seats, and seatbelts had been researched but never implemented, so why the GM Corvair had an unsafe glove box in the 1960s is a moot question, even though much was known earlier about safety. By the 1940s, GM had conducted crash tests [36]. In 1956, Ford introduced a safety package called Lifeguard, which included a steering wheel designed to collapse on impact, better safety latches on doors to prevent ejection of occupants, lap seat belts, and more [37]. Robert McNamara, who became Ford's president, pushed towards safety, but as an option for the customer; safety customisation based on consumer preference failed. From seatbelts to crash avoidance technology, car safety features have come a long way in the past century. In the early 1900s, some of the major contributions came from unlikely people who were not involved in the industry, such as cattle rancher Mary Anderson [38] and Hollywood starlet Florence Lawrence, who invented the windshield wiper and the turn signal, respectively [39]. Some of the greatest car safety innovations took a while to be accepted. Airbags were first introduced by GM in 1973 but did not become standard equipment on all passenger cars until nearly a quarter-century later [35].


Of course, not all car safety advances came in the form of technology. Laws also had major impacts on driving safety, especially those dealing with alcohol and driving. On September 9, 1966, the National Traffic and Motor Vehicle Safety Act became law in the U.S., introducing the first mandatory federal safety standards for motor vehicles [40]. By 1966, US passenger cars were required to be equipped with padded instrument panels, front and rear outboard lap belts, and white reverse (backup) lamps. Additionally, the National Transportation Safety Board (NTSB), with transportation safety as its focus, was created as an independent organisation in 1967 under the United States Department of Transportation (DOT) and became a well-respected organisation in its later years [41]. In 1979, the National Highway Traffic Safety Administration (NHTSA) started crash-testing cars and publishing the results in conformance with FMVSS 208, a set of regulatory rules [42]. Over the subsequent years, this NHTSA programme was gradually expanded in scope. In 1984, New York State passed the first U.S. law requiring seat belt use in passenger cars [43]. Seat belt laws have since been adopted by 49 states in the USA [44]. Insurance companies and activist awareness also played a part in the history of car safety. Many times, regulations have been triggered by personal experiences and efforts. Take the case of Janette Fennell: after she and her family were kidnapped and locked in the trunk of her car, she ran a dedicated campaign to ensure that NHTSA made trunk releases mandatory for new cars [45, 46]. The Insurance Institute for Highway Safety, founded in 1959, opened its vehicle research centre in 1992 [47], creating consumer demand for car safety. It is now a leading provider of auto safety advice and information. Insurance companies offered customers discounts for safety measures, such as taking driver safety courses and wearing seatbelts. Today, Volvo offers crash avoidance technology as a standard on some models [48]. The evolution of car safety has been a fascinating combination of technology, information, laws, and more. In 1958, a World Forum for Harmonisation of Vehicle Regulations [49] was established by the UN. Many countries have had regulatory systems for automobiles, including international systems such as ECE for Europe, FMVSS in the USA, CMVSS for Canada, Guobiao in China, ADR in Australia, JTRIAS in Japan, and AIS in India. Automotive regulation does not have process oversight as rigorous as aviation's, and the industry has had major product recalls. It is interesting to note that while cars have become safer, there have been cases where the safety devices themselves have been defective. The case of Takata seatbelts [50] and airbags [51, 52] clearly shows that safety devices themselves have killed people. There have been high-volume recalls because of defects as well; for example, Toyota's defective gas pedals led to a recall of 9 million vehicles [53]. While automation and its effects are increasing, the risks from automation have led to stringent regulation in the aerospace sector, and self-driving cars have now created a different but comparable scenario in the automotive sector. With accidents involving self-driving cars already occurring, even while the vehicles were still at the prototype stage, there are now calls for regulation of such cars to be enacted.

3.5 Oil Rigs, Ferries, and Maritime Regulation

Maritime accidents and disasters need not involve only passenger ferries and cruise ships; they can also involve tankers and oil rigs, and oil spills constitute environmental disasters. Among many such disasters, the Exxon Valdez oil spill stands out. It occurred on March 24, 1989, in Prince William Sound, Alaska, when the tanker, owned by Exxon Shipping Company and en route to Long Beach, California, struck a reef and spilled 10.8 million US gallons of crude oil [54]. The location was remote, reachable only by boat or aircraft. Over 2000 km of coastline was affected, and the clean-up went on for years, costing billions of dollars [55]. This disaster again galvanised regulatory oversight, with the International Maritime Organisation (IMO) strengthening comprehensive marine pollution prevention rules (MARPOL) through various conventions ratified by member countries [56]. The IMO is another specialised United Nations agency (like the IAEA, ICAO, and WHO), with responsibility for the safety and security of shipping and the prevention of marine and atmospheric pollution by ships. As a specialised agency of the United Nations, the IMO is the global standard-setting authority for the safety, security, and environmental performance of international shipping. Its main role is to create a regulatory framework for the shipping industry that is fair and effective, universally adopted, and universally implemented.

Marine accidents are complex and difficult to handle. There is a basic international instrument regarding the safety of life at sea—the International Convention for the Safety of Life at Sea (SOLAS) [57]. For a long time, it was widely believed that issues related to the construction and equipment of ships were sufficiently regulated by flag state legislation. The Titanic, for example, met all of Great Britain's requirements at that time on rules of design, construction, and equipment, including lifesaving equipment. In the IMO, a ship design and construction committee looks at all technical and operating rules regarding design, construction, and maintenance. It can be argued that, in reality, maritime regulation still needs to evolve; it is not yet as stringent as aviation regulation, and accidents, when they occur, have the potential to become disasters. This is especially true of cruise ships, which carry over 1000 people at any given time.

3.6 Masks, Vaccines, and WHO

On 28 January 2020, President Xi of China stated that a "demon" virus was spreading in China and needed to be contained [58]. The virus, with possible origins in Wuhan, China, had started to spread. Clearly, however, the implications were global, far more than with Ebola, as travel to and from China was very high. Now the WHO, the world's health organisation, got involved; by January 30th, 2020, it had started communicating [59]. There was no standard procedure that the WHO could recommend and make all countries follow. Every country's healthcare regulators and managers, e.g. the CDC in the USA, and the Department of Health, the NHS, and other organisations in Britain, Italy, France, and Spain, came up with independent messaging, sometimes confusing and contradictory. While the WHO sounded dire warnings, there was continued confusion about stopping flights, monitoring and quarantining people, classification of symptoms, theories about which drugs worked and which did not, what to do with ventilators in intensive care, and, importantly, epidemiological forecasting [60–63].

COVID-19 has been a true healthcare disaster. However, it has also been a gross regulatory disaster, especially since the healthcare sector is regarded as highly regulated: it is regulated for the use of drugs, hospital protocols, disease control, etc. Disease control protocols prior to the pandemic did not expose the chinks in the armour with respect to how global travel would lead to the rapid spread of COVID-19. In other words, there truly was no effective "defence in depth" strategy in pandemic control. For something as basic as masks, there was no clear consensus on the type of mask to be used and in what setting. When cloth masks were to be worn and when not kept changing, even though there had been epidemics before, such as the Spanish Flu epidemic of 1918, when masks were used [64–66]. Many vaccines were authorised under the Emergency Use Authorisation (EUA) protocol for a while. There is no global regulator for vaccines. Vaccine approval is similar to the approval of civil aircraft for use: it rests on bilateral understandings between countries [67]. Many COVID drugs did not work [68]. Many of the COVID drugs were also repurposed; in that sense, there were records and data from toxicity and other studies. However, apart from WHO advisories, there were no global protocols recommending which drugs to use at which phase of the COVID-19 illness. This applies equally to medical devices, chemicals used to disinfect COVID-19-affected surfaces, etc. The COVID-19 pandemic exposed the fragility of the regulatory structure. Healthcare involves regulatory approval of complex equipment, including devices for measurement, drug delivery, robotic surgery tools, etc. Since these include automation and software, their regulation poses a new challenge, especially as they will be used in different environments across the globe. Overall, healthcare regulation within national systems, bilaterally and internationally, has had structural issues.

3.7 Catering for a Black Swan: Foresight in Regulation

A black swan event, as proposed by Nassim Taleb [69], is a rare event that comes as a surprise, produces a major effect, and is often inappropriately rationalised after the event with the benefit of hindsight. A black swan event is therefore difficult to handle from a regulatory viewpoint, because regulation of risk is designed for anticipated events. As Taleb acknowledges, black swan events are high profile, hard to predict, and rare; they are beyond normal expectations, cannot easily be computed or simulated, and in any case there is a psychological bias that blinds people to rare events. The Fukushima event was classified as a black swan by some. However, others quickly noted that the earthquake and the tsunami occurring at nearly the same time could have been visualised, especially since the nuclear plant was in a tsunami- and earthquake-prone area. This applies to the COVID-19 pandemic as well: many believed a global pandemic was going to occur, as smaller outbreaks had given enough indications. However, foresight in regulation is difficult in many ways, leading one to look at it in terms of what Donald Rumsfeld once mused about known knowns and known unknowns [70]. The Rumsfeld grouping offers regulation an opportunity in terms of how risk can be viewed and improvements made. This is addressed in Chap. 9.


Regulation has historically been science based, using risk scenarios that must be shown to have been mitigated. In engineering, using probability and hazard analysis and showing by design, test, and practice that the risk has been minimised has been the hallmark of regulatory compliance. While scenarios are examined, a number of them are sometimes discarded based on the low probability of their happening. What if a particular scenario had not been imagined, and what if human error had not been accounted for? These are the arguments sociologists use to explain why, even with the best of regulation and oversight, accidents and disasters can occur. What regulation has not consciously dealt with is organisational behaviour and human behaviour contributing to accidents. Many compliance authorities, such as those in aviation, insist on an organisational structure for the manufacturer or even the airline, but there is no evaluation process that looks at a certain organisational structure and human behaviour and shows that it can contribute to an accident. When an accident occurs, the investigation reveals reasons, including human behaviour and organisational issues, and the regulation is updated. Presently, regulation of risk is not about foresight to predict and derisk rare events; it implicitly acknowledges that events with a low probability of occurrence are passed over. However, as we will see in Chap. 5, sociologists view this with concern. As also discussed in Chap. 4 on complexity, events that are not foreseen due to complexity will continue to be an issue for the regulatory process, and the spectre of foresight failure will always haunt regulatory frameworks.
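
The screening of scenarios by probability and severity described above can be illustrated with a small sketch. Everything in it is hypothetical: the likelihood bands, acceptance thresholds, and example scenarios are invented for illustration and loosely echo the style of engineering hazard analyses, not any specific regulation.

    # Hypothetical sketch of scenario screening in a hazard analysis.
    # Bands, thresholds, and scenarios are assumptions for illustration only.

    def likelihood_band(prob_per_hour: float) -> str:
        """Map a per-operating-hour probability to a qualitative band."""
        if prob_per_hour < 1e-9:
            return "extremely improbable"
        if prob_per_hour < 1e-7:
            return "extremely remote"
        if prob_per_hour < 1e-5:
            return "remote"
        return "probable"

    def acceptable(severity: str, prob_per_hour: float) -> bool:
        """Crude screening rule: the worse the outcome, the rarer it must be."""
        required = {"minor": 1.0, "major": 1e-5,
                    "hazardous": 1e-7, "catastrophic": 1e-9}
        return prob_per_hour <= required[severity]

    scenarios = [
        ("single display failure", "minor", 1e-4),
        ("loss of one hydraulic channel", "major", 1e-6),
        ("total loss of flight control", "catastrophic", 1e-8),
    ]

    for name, severity, p in scenarios:
        verdict = "accepted" if acceptable(severity, p) else "needs mitigation"
        print(f"{name}: {severity}, {likelihood_band(p)} -> {verdict}")

The sketch also shows where the argument in the text bites: a scenario that was never written into the list in the first place is never screened at all, however severe it might be.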

3.8 The Real Regulatory Role?

Any analysis of the various regulations suggests that regulation is generally driven by an engineering vision of minimising risk using science, while tacitly acknowledging that even with the best-predicted hazard scenarios, some risk remains. Where scenarios were never envisioned, as in the Shuttle disasters, the Gulfstream 650 case, and the Hudson ditching of the A320, the regulator, along with the accident investigator, appears to acknowledge that errors occurred and that steps need to be taken so they do not repeat. Safety regulation tends to balance commercial viability and technology readiness against the risks involved, and to set up processes and procedures such that risk is minimised while considering all aspects. In a sense, go only where it is perceived to be safe to go, in terms of what the vehicle or equipment is designed for. The interplay and negotiation between the regulator and the product developer is a constant process. The industry would like to save costs, while the regulator would like to maximise safety. This tension is visible in all industries. What changes the balance is an accident: once one occurs, the regulator holds the cards. This was seen in the case of the Boeing 737 and earlier accidents. However, sociologists feel differently; they believe that learning from accidents has limited use, as many accidents have causes rooted in human and organisational behaviour rather than just technology failures. This is addressed in Chap. 5. In many of the accidents seen in Chap. 2 where serious organisational oversight was needed, it seems to have been given a miss by regulators. Therefore, when accidents occur and human error is involved, the issue is whether it was due to organisational failure, equipment failure, or something that was anticipated and trained for but still occurred. In accident investigations, these questions are answered in an engineering sense, but not in a sociological sense, nor in terms of human behaviour at the personal or organisational level.

No formal regulatory position exists for whistle blowers or for anonymous reporting of errors and incidents across all sectors, although a basic mechanism exists in commercial aviation. More importantly, except to an extent the IAEA, there is no unified regulator for many sectors. Advisory bodies under the UN have no real strength in terms of country-specific legislation; the evidence for that is seen in the coronavirus pandemic and even the Boeing 737 Max case, where the European Union regulators certify the return of the 737 Max in Europe, the Chinese regulators certify it for China, and so on. In the pandemic, vaccines are certified and approved by national regulators. Much regulation remains bilateral between countries. What the pandemic has revealed is the state of international regulation in some sectors; healthcare is the most important of them, but the same is seen in aviation and in the marine sector as well. The WHO has had difficulties on various fronts. Investigation requires permissions and approvals from the countries where a pandemic originates, and its protocols for pandemic management were largely ignored initially. Therefore, if the events of the pandemic are not to be repeated, an international regulator with more power to enforce is essential. Even in aviation, the air bubbles created between countries during the pandemic, the sanitation of airports, and airline practices are not clearly part of any international regulation. Accident regulation needs to move beyond the science and engineering aspects of design, manufacturing, and operation: it needs to accommodate risk perception, social structures, and cultural aspects that drive organisational behaviour in terms of risk for gain, apart from addressing complexity in terms of national systems, interaction between countries, and global frameworks.


References

1. Skorvan, S., & Mantel, H. (2007). An unreasonable man. https://itvs.org/films/unreasonable-man. Accessed August 8, 2022.
2. (1996). Dante and turrets and DDT: Princeton, as they've lived it. New York Times, March 3. https://www.nytimes.com/1996/03/03/nyregion/dante-and-turrets-and-ddt-princeton-as-they-ve-lived-it.html
3. Jensen, C. (2015). 'Unsafe at Any Speed' shook the auto world. New York Times, November 26.
4. Nader, R. (1966). Unsafe at any speed: The designed-in dangers of the American automobile. Grossman.
5. Defensive Driving. (2016). A history of seat belts, in Defensive driving online. Driving and Safety, September 14, 2016. https://www.defensivedriving.com/blog/a-history-of-seat-belts/. Accessed August 8, 2022.
6. National_Traffic_and_Motor_Vehicle_Safety_Act. (1966). USCODE. https://uscode.house.gov/statutes/pl/89/563.pdf. Accessed August 8, 2022.
7. Holmes, O. W., & Rohrbach, P. T. (1983). Stagecoach East: Stagecoach days in the East from the colonial period to the civil war (p. 220). Smithsonian Institution Press.
8. History of the American Stagecoach. Worldhistory.us. https://worldhistory.us/american-history/history-of-the-american-stagecoach.php. Accessed August 8, 2022.
9. USDOT. (2009). Testimony documents. US Airways Flight 1549, February 24. https://www.transportation.gov/testimony/us-airways-flight-1549. Accessed August 8, 2022.
10. United States Department of Transportation. (1963). Civil Aeronautics Board aircraft accident report. United Air Lines, Inc., Vickers-Armstrongs Viscount, N 7430, near Ellicott City, Maryland, November 23, 1962. Department of Transportation Library, 22 March.
11. New York Times. (1962). Maryland crash kills 17 on plane. The New York Times, November 24, 1962.
12. Federal Register, Vol. 58, No. 48, Monday, March 15, 1993. Notices. https://www.faa.gov/regulations_policies/rulemaking/committees/documents/media/TAEgshT1&2-3151993.pdf. Accessed August 8, 2022.
13. EASA. (2012). Certification memorandum. Subject: Compliance with CS-25 bird strike requirements. EASA CM No.: EASA CM-S-001, Issue: 01, Issue Date: 11th of April 2012.
14. The House Committee on Transportation and Infrastructure. (2020). Final committee report: The design, development and certification of the Boeing 737 Max, September.
15. Gates, D., & Kamb, L. (2019). Indonesia's devastating final report blames Boeing 737 MAX design, certification in Lion Air crash. The Seattle Times, October 27.
16. Marks, S., & Dahir, A. L. (2020). Ethiopian report on 737 Max crash blames Boeing. The New York Times, March 9.
17. Thorbecke, C. (2019). NTSB report on Boeing Max 737 crashes urges certifiers to test for 'real-world' chaos. ABC News, September 26, 2019.
18. Isadore, C. (2020). The cost of the Boeing 737 Max crisis: $18.7 billion and counting. CNN, March 10.
19. Jolly, J. (2020). Boeing 737 Max given approval to fly again by US regulators. The Guardian, November 18, 2020.
20. Kitroeff, N., Gelles, D., & Nicas, J. (2019). The roots of Boeing's 737 Max crisis: A regulator relaxes its oversight. New York Times, July 27, 2019.
21. Vaughan, D. (2016). The challenger launch decision: Risky technology, culture, and deviance at NASA (Enlarged Edition). University of Chicago Press.
22. Business Standard. (2020). India's cartosat-2F, Russia's Kanopus-V satellites barely miss collision. Business Standard, November 28, 2020.
23. CBS News. (2018). NASA reportedly to probe SpaceX safety and drug-free policy after Elon Musk appears to smoke marijuana, November 21. https://www.cbsnews.com/news/elon-musk-nasa-spacex-boeing-marijuanainterview-review-safety-drug-policy/. Accessed August 21, 2022.
24. Davenport, C. (2020). Boeing's Starliner space capsule suffered a second software glitch during December test flight, February 6, 2020. https://www.washingtonpost.com/technology/2020/02/06/boeings-starliner-space-capsule-suffered-second-software-glitch-during-december-test-flight/. Accessed August 2022.
25. Whattles, J. (2021). FAA to oversee investigation of SpaceX Mars rocket prototype's explosive landing. CNN, February 3, 2021.
26. Etherington, D. (2020). SpaceX cautions on launch regulation that outpaces innovation. Yahoo.com, January 30, 2020.
27. IAEA. (2015). The Fukushima Daiichi accident. IAEA, Vienna. https://www.iaea.org/publications/10962/the-fukushima-daiichi-accident
28. ElBaradei. (2003). The status of nuclear inspections in Iraq, Press Release. IAEA. https://www.iaea.org/newscenter/statements/status-nuclear-inspectionsiraq. Accessed August 9, 2022.
29. IAEA. (2022). IAEA and Iran: Chronology of key events. https://www.iaea.org/newscenter/focus/iran/chronology-of-key-events. Accessed August 9, 2022.
30. World Nuclear Association. (2022). Safety of nuclear power reactors, March. https://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/safety-of-nuclear-power-reactors.aspx. Accessed August 9, 2022.
31. Nuclear Energy Agency. (2019). NEA No. 6861, comparing nuclear accident risks with those from other energy sources (oecd-nea.org). OECD. https://www.oecd-nea.org/upload/docs/application/pdf/2019-12/nea6861-comparingrisks.pdf
32. Pawlowski, A. (2010). Aviation safety rate: One accident for every 1.4 million flights. CNN, February 22.
33. IAEA. (2005). Assessment of defence in depth for nuclear power plants. Safety Reports Series No. 46. IAEA, Vienna. https://www.iaea.org/publications/7099/assessment-of-defence-in-depth-for-nuclear-power-plants
34. National Academy of Sciences, Committee on Trauma and Committee on Shock, Division of Medical Sciences. (1966). Accidental death and disability: The neglected disease of modern society. Washington, D.C., September 1966.
35. Blog. (2014). A history of car safety: A century of automotive innovations, vehicle—safety. Nationwide.com, June 25.
36. AA. (n.d.). Evolution of car safety features, AA (theaa.com), from windscreen wipers to crash tests and pedestrian protection. https://www.theaa.com/breakdown-cover/advice/evolution-of-car-safety-features. Accessed August 15, 2022.
37. Flory, J. (2008). American cars 1946–1959: Every model year by year. McFarland. https://mcfarlandbooks.com/product/american-cars-1946-1959/
38. Slater, D. (2014). Who made that windshield wiper? New York Times Magazine, September 14.
39. Margeit, R. (2021). Florence Lawrence: The Hollywood star who invented two of the most common things found on cars today. Drive, February 8, 2021. https://www.drive.com.au/caradvice/florence-lawrence-the-hollywood-star-whoinvented-two-of-the-most-common-things-found-on-cars-today/. Accessed August 15, 2022.
40. Hendrickson, K. A. (2003). National traffic and motor vehicle safety act. In S. Kutler (Ed.), Dictionary of American history (3rd ed., Vol. 5, pp. 561–562). Charles Scribner's Sons.
41. National Transportation Safety Board. https://www.ntsb.gov/. Accessed August 9, 2022.
42. NHTSA. (2011). Quick reference guide (2010 version) to Federal motor vehicle safety standards and regulations, DOT HS 811 439. U.S. Department of Transportation, National Highway Traffic Safety Administration, February 2011.
43. Governor's Traffic Safety Committee. New York State Department of Motor Vehicles: Seat belts save lives. https://trafficsafety.ny.gov/occupant-protection. Accessed August 9, 2022.
44. Recording Law. Car seat laws in the United States. https://recordinglaw.com/uslaws/united-states-car-seat-laws/. Accessed August 9, 2022.
45. NPR Staff. (2015). In a tight spot, abducted family struggled for freedom—and hope. NPR, October 23, 2015.
46. United States Senate, Senate Committee on the Judiciary, Subcommittee on Oversight, Federal Rights, and Agency Action. (2003). Justice delayed: The human cost of regulatory paralysis. Testimony of J. E. Fennell, August 1, 2013.
47. Insurance Institute for Highway Safety (IIHS). https://www.iihs.org/about-us. Accessed August 9, 2022.
48. Yvkoff, L. (2011). CNET, IIHS: Volvo's collision avoidance system most effective. https://www.cnet.com/roadshow/news/iihs-volvos-collision-avoidancesystem-most-effective/. Accessed August 9, 2022.
49. World Forum for Harmonization of Vehicle Regulations (WP.29). https://unece.org/transport/vehicle-regulations. Accessed August 9, 2022.
50. Eisenstein, P. A. (2020). First came a worldwide recall for air bags. Now, millions of Takata seat belts may also be faulty. NBCnews, October 16, 2020.
51. NHTSA. (2021). More Takata air bags recalled. https://www.nhtsa.gov/moretakata-air-bags-recalled. Accessed August 15, 2022.
52. Tabuchi, H. (2016). As Takata costs soar in airbag recall, files show early worries on financial toll. New York Times, April 13, 2016.
53. Evans, S., & MacKenzie, A. (2010). The Toyota recall crisis. Motortrend, January 27. https://www.motortrend.com/news/toyota-recall-crisis/. Accessed August 15, 2022.
54. NTSB. (1990). Practices that relate to the Exxon Valdez (pp. 1–6). Washington, DC: National Transportation Safety Board, September 18. https://www.ntsb.gov/safety/safety-recs/recletters/M90_32_43.pdf. Accessed August 15, 2022.
55. Alaska Department of Environmental Conservation. (1993). The Exxon Valdez oil spill: Final report, State of Alaska response, Exxon Valdez oil spill trustee council (pp. 61–87). Anchorage, AK: Alaska Department of Environmental Conservation, June 1993.
56. International Convention for the Prevention of Pollution from Ships (MARPOL). https://www.imo.org/en/About/Conventions/Pages/InternationalConvention-for-the-Prevention-of-Pollution-from-Ships-(MARPOL).aspx. Accessed August 9, 2022.
57. International Convention for the Safety of Life at Sea (SOLAS). (1974). https://www.imo.org/en/About/Conventions/Pages/International-Convention-for-the-Safety-of-Life-at-Sea-(SOLAS),-1974.aspx. Accessed August 9, 2022.
58. Ramirez, L. (2020). Xi says China fighting 'demon' virus as nations prepare airlifts. Yahoo News, January 28, 2020.
59. WHO. (n.d.). Timeline: WHO's COVID-19 response. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/interactive-timeline. Accessed August 15, 2022.
60. Gebrekidan, S., & Apuzzo, M. (2021). Covid response was a global series of failures, W.H.O.-established panel says. New York Times, January 18, 2021.
61. BBC. (2021). Covid: Serious failures in WHO and global response, report finds, May 12, 2021.
62. Maxmen, A. (2021). How the world failed to curb COVID. Nature, May 12, 2021.
63. Ioannidis, J. P. A., Cripps, S., & Tanner, M. A. (2022). Forecasting for COVID-19 has failed. International Journal of Forecasting, 38(2), 423–438.
64. Barry, J. M. (2022). We can learn from how the 1918 pandemic ended. New York Times, January 31, 2022.
65. WHO. (2020). Advice on the use of masks in the community, during home care and in health care settings in the context of the novel coronavirus (2019-nCoV) outbreak. Interim Guidance, January 29, 2020. WHO reference number: WHO/nCov/IPC_Masks/2020.1.
66. WHO. (2020). Advice on the use of masks in the context of COVID-19. Interim Guidance, April 6, 2020, World Health Organization.
67. NDTV News. Stop making bilateral Covid vaccine deals: WHO urges countries, January 8, 2021. https://www.ndtv.com/world-news/stop-makingbilateral-covid-vaccine-deals-who-urges-countries-2349832. Accessed August 9, 2022.
68. Zimmer, C. (2021). Drugs that did not work: How the search for Covid-19 treatments faltered while vaccines sped ahead. New York Times, January 30, 2021.
69. Taleb, N. N. (2007). Black Swan: The impact of the highly improbable. Penguin.
70. Murphy, P. A. (2020). Rumsfeld's logic of known knowns, known unknowns and unknown unknowns. Medium.com, December 12, 2020.

4 Keep It Simple but not Stupid—Complex Technology and Complex Organisations

“Airplanes are becoming far too complex to fly. Pilots are no longer needed but rather computer scientists from MIT. I see it all the time in many products. Always seeking to go one unnecessary step further, when often old and simpler, is far better. Split second decisions are needed, and the complexity creates danger. All of this is for great cost yet very little gain. I don’t know about you, but I don’t want Albert Einstein to be my pilot. I want great flying professionals who are allowed to easily and quickly take control of a plane!”—Donald J. Trump (@realDonaldTrump) March 12, 2019 [1].

Does the statement of President Trump that complexity creates danger appear profound after the second Boeing 737 Max crash? This has in fact been said before about automation. While danger is a primordial instinct, the aim of automation was actually to reduce the sense of danger by reducing the risk involved. However, the statement of Trump after the second Boeing 737 Max crash does fit into Charles Perrow’s view that many accidents are caused by complexity [2], and this view will be explored further in the chapter on the sociology of accidents and disasters. The Boeing 737 Max accident, as we later came to know, was not just due to the complexity of the technology but due to organisational issues and failure of oversight, as discussed in the book.

Historically, Boeing was slower than Airbus in introducing automated flight control systems. The A320 of Airbus was the first civil aircraft to have automation for flight control. In the initial phase, two accidents occurred with the A320 (at Bangalore, India, in 1990, and on a demonstration flight in France in 1988) [3–5]. The aircraft were grounded for a while in India [6], but changes were made, and the A320 remains a very popular aircraft and a strong competitor to the Boeing 737. What happened at Boeing was driven by the need to compete with Airbus; both aircraft were to have a new generation of engines that promised better fuel efficiency and hence cost benefits to operators. Boeing accelerated the design of a new version of the 737 and called it the Boeing 737 Max. It introduced a software update that did not have redundancy and that created a scenario some pilots could not handle. In other words, the pilots of Lion Air and Ethiopian Airlines faced a situation they had never seen before, even on a simulator. This issue had passed through Boeing's design teams, its management, and its regulator, the FAA. Therefore, complexity in its truest form, both technological and organisational, caused a disaster. Is there a root cause? Well, in this case, yes: the design of the flight control system; but other redundancies and safety checks failed to stop it as it passed through the organisation. This seems to make sociologists wary of technological and organisational complexity.

In the Fukushima nuclear disaster, as discussed in Chap. 2, there have been discussions about how technological complexity affected operator behaviour. Automation, with machines taking over complex tasks and providing information for quick reactions, was supposed to reduce operator load, not confuse the operator or decrease safety. However, in some cases, the reverse occurred. For example, the traffic collision avoidance system (TCAS) installed on aircraft provides a quick-reaction tool for human action in the case of a possible collision. However, as discussed in Chap. 2, there has been a case of TCAS and human decision making contradicting each other (see Chap. 2 and the Swiss air traffic controller accident). There have been many cases where the complexity of technology has been a factor in accidents: an Uber self-driving car struck and killed a pedestrian, and the NTSB investigated while Uber halted its self-driving car tests [7, 8]. Complexity was seen in its full-blown form in the pandemic. The virus is complex, its transmission is complex, and the human body's reactions, especially the immune response, are complex. Together, this very complex story was aided and abetted by human behaviour, social interactions, and governments responding to the complexity in ways that seemed to make the problem effectively intractable.

This chapter addresses complexity, the complexity of technology and organisations, the possibility of unanticipated failures, and the effect of rare events and black swan events in complex systems. The development of redundancies, defence in depth, and other concepts is discussed. A framework of a high-reliability organisation (HRO) [9] to reduce accidents in complex systems is introduced.


4.1 Complexity

Colloquially, something that is complex is something that is not easy to understand, not easy to figure out, and possibly not intuitive. Things are complex when it is not easy to get the hang of them. One would think there is a need to formalise this notion of complexity. Like the word risk, complexity is used liberally, and it can mean different things to different people. A mathematician's view of complexity is different from a systems engineer's, and again from a biologist's, who sees complex networks in biochemical processes. There has been work on the measurement of complexity, but it is domain specific; a universal, agreed definition is probably difficult. In science and engineering, particularly the mathematical and computer sciences, complexity is defined based on the number of parts or elements in a system, the equations required to formulate a behaviour, and so on. If there are many parts, elements, or nodes, and these have a large number of interrelationships with other elements, the system could be regarded as complex. In The Ghost in the Machine, Arthur Koestler [10] argues that the behaviour of a system as a whole is more than the sum of its parts. Complexity can also arise with a small number of elements whose relationships are complex, perhaps combinatorially so, with the type of relationship between the parts leading to system complexity. This is a broad definition, and the meaning of what is complex has evolved over time. For example, in computing, the computing power required to solve a mathematical problem, simulate a physical behaviour, or execute an algorithm measures complexity. There are axiomatic approaches to complexity, such as Kolmogorov complexity. There is state complexity in information processing, where complexity is measured by the number of attributes or properties of an agent or object and whether these are used to interact with other objects. If physical systems are modelled here, the probability of the state vector also determines the measure of complexity. Over the years, much of what happens in the world, including supply chains, air traffic, money transfers, food supply, and many others, has come to be abstracted as networks. Here, too, complexity shows up as a measure of the richness of the connections, related also to coupling, either loose or tight. This is true in software engineering as well. There have been attempts to portray the behaviour of simple systems becoming "nearly" unpredictable, through chaos and bifurcations; this has been discussed in terms of the unpredictability of the weather. Therefore, we could have simple systems behaving in an extremely complex manner for certain inputs. Therefore, complexity could be about the richness of connections and about each subsystem having complex behaviour, apart from the fact that a system has a very large number of elements. In some ways, the effect of this is what sociologists fear most, in contrast with engineers, who argue that by testing subsystems they gain an understanding of the system's behaviour.

Even if it is not obvious, attempts have been made to address complexity by structuring it, known as structured complexity [11]. Systems engineering attempts to differentiate between things that can be structured neatly into objects and their interactions and those that cannot be structured, which fall into the realms of ill-defined complexity. Herbert Simon, in an early paper, addresses the structure of ill-defined problems [12]. Typically, engineering systems with "well-defined" behaviour are structured, regardless of the level of complexity. However, human behaviour and social interactions fall to a major extent into the realms of ill-defined complexity, or so says conventional wisdom. Yet a well-structured engineered system can also fail because a failure scenario has been missed, while human behaviour and social interactions can be precursors to failure precisely because they cannot be easily structured, whether complex or not. If a system is structured, the belief is that it is predictable, as the aggregated behaviour of its subsystems can be foreseen. Even here, large-scale complexity at the subsystem level can lead to prediction problems. Chaos is an example, where the subsystem itself produces seemingly complex and sometimes unpredictable behaviour [13]. We can contrast this with complexity that simply cannot be structured, also known as ill-defined complexity and known to be messy. As a result, it is difficult to predict the overall behaviour of an ill-defined system. Often, in attempting to structure real-world issues involving human and social behaviour, one encounters messy problems: inadequate data, uncertainty in structuring the problem, and inappropriate assumptions. One could of course argue that these are the very systems that deserve to be structured to improve predictability, especially systems involving social behaviour or human interaction, and that the need to define the system or framework is crucial. Technological complexity can be classified and structured in terms of the number of techno units, e.g. the number of units in a tool kit [14]. On the other hand, at a behavioural level, complexity is expressed as failure in communication (transmission), inaccuracy, inexact or multiple connections, the presence of interrelated but conflicting subtasks, the density of interactions between a system's parts, a function of the number of parts, the degree of differentiation or specialisation of these parts, and their integration. Some measures are in terms of failures: cascading or progressive failures caused by strong coupling between components in complex systems, leading to catastrophic consequences [15]. Systems may be inherently unstable, and a small trigger could lead to disastrous consequences, particularly in dissipative systems that are far from an energy equilibrium. Complex systems can be embedded or nested in an overall complex system. However, what truly shows up as emergent behaviour is when complex systems interact and the emergent behaviour appears at a different level, with the whole more than the sum of its parts. To add to this, we can see nonlinear behaviour and feedback loops. While not discussed in detail here, this is typical of human–human and human–machine interactions. Complex systems can be, and often are, dynamic, with interactions that change continuously. The complexity of such systems is driven by the number of possible interacting elements, the strength of the dependencies, and the level of heterogeneity in the system. Given that these are time-dependent systems, behaviour happens at multiple time scales. In a complex environment, even small decisions can have surprising effects, as seen in chaotic systems, where small perturbations in initial conditions lead to surprising outcomes. The other issue is when interactions between elements in a system occur even though they were not foreseen or believed possible.

To enable structuring of complexity, systems engineering groups adopt a systems of systems (SOS) approach [16]. This approach acknowledges that, in structuring the complexity of systems, individual systems, large as they may be, sit under an umbrella system with specified protocols. An example is a national aviation system that consists of aircraft, air traffic control, airports, and other systems, each of which is a complex system in its own right but operates under an umbrella in a system of systems structure. Interestingly, many accidents could be categorised as belonging to an SOS; for example, the pandemic, while contained in China, was part of a national system, but after it spread, it became an interaction between many national systems, acquiring even higher levels of complexity as it enveloped the globe. Human behaviour and social processes have also been modelled and viewed as soft systems, in contrast to technology networks, which are modelled explicitly. For example, in systems engineering, Peter Checkland's work on soft systems can be used to understand how difficult it is to model human behaviour in comparison with agents and objects that represent parts of a machine [17]. This is discussed further in the section on organisational complexity.

In failure studies, there was a view that accidents have a root cause. However, Barry Turner [18] and others showed that an accident can often be the culmination of a series of actions or events that occurred in sequence. The Swiss Cheese theory of James Reason [19, 20] proposes that, apart from the order of events, which are in any case part of regular operations, the errors need to align to cause a major effect; by themselves, none is sufficient to cause an accident (e.g. Bogner [21]). In ill-structured environments, the odds are sometimes against such alignment, and hence accidents do not occur even though there are multiple individual errors. In many developing countries, a lack of road discipline should lead to more accidents, but it is not so: even with all the actors that can cause an accident present, the alignment often does not occur. Complexity need not be the sole criterion for an accident. It is also possible that the variety of scenarios, events, subevents, and local failures that would lead to an accident is not "envisioned" well enough, which truly is a failure of foresight; as a result, one cannot generalise that an accident will happen because of complexity. It is thus possible to see the sociologist's worldview: human limitations cause a failure of foresight, as described in Chap. 5. Some things will remain unanticipated. Donald Rumsfeld [22], as discussed in Chap. 3, noted that there will be known knowns, known unknowns, and unknown unknowns. In attempting to structure complexity, the hope is to have predictive power or foresight, which clearly applies to known scenarios for which solutions and redundancies can be activated. With known unknowns, that luxury does not exist. An unknown unknown is a very ill-defined problem and cannot be structured or foreseen. It must, however, be noted that an accident can happen if complexity is not structured and remains ill-defined, when all actors or events align as in the Swiss Cheese theory. On the other hand, an accident can happen even in well-structured systems if proper operating protocols are not followed.
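
The alignment argument can be made concrete with a small simulation. The sketch below is purely illustrative: the number of defensive layers, the per-layer "hole" probabilities, and the independence assumption are all invented for the example, not drawn from any accident data.

    # Illustrative Monte Carlo sketch of the Swiss Cheese idea: latent errors
    # ("holes") appear in each defensive layer fairly often, but a mishap needs
    # the holes in every layer to line up on the same day. Layer count and
    # probabilities are assumed values chosen for illustration only.
    import random

    random.seed(1)

    LAYERS = [0.10, 0.05, 0.08, 0.02]   # assumed daily chance of a hole per layer
    DAYS = 1_000_000

    days_with_some_hole = 0
    days_with_full_alignment = 0
    for _ in range(DAYS):
        holes = [random.random() < p for p in LAYERS]
        if any(holes):
            days_with_some_hole += 1
        if all(holes):
            days_with_full_alignment += 1

    print(f"Days with at least one latent error: {days_with_some_hole / DAYS:.2%}")
    print(f"Days with all layers breached:       {days_with_full_alignment / DAYS:.6%}")
    # Analytically, full alignment = 0.10 * 0.05 * 0.08 * 0.02 = 8e-6 per day.

Under these assumed numbers, some latent error is present on roughly a fifth of all days, yet full alignment occurs only a handful of times in a million days. Latent errors are common; what is rare is their simultaneous alignment, which is why independent layers pay off so strongly, and why dependence between layers, for example a single organisational weakness that affects all of them, is so dangerous.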

4.2 Complex Technology

What is complex technology? This question needs to be asked in context. What is complex, obviously, is something that is not simple. The phrase "It is not rocket science" has been heard before, but rocket technology is complex technology and will remain so, only that rockets of the seventeenth century are not as complex as those of the twenty-first century; it was complex enough for its era. SpaceX's reusable rockets land on a drone ship and can be retrieved, making them far more complex than the rockets of the seventeenth century. However, rocket science, rocket engineering, or even rocket technology came about in the 1700s in China, India, and elsewhere [23]. Nazi Germany attempted to perfect it to bomb Great Britain in World War II (WWII). After WWII, much of rocket science was classified technology held by the USA and the USSR [23]. Today, this acknowledged complex technology is part of commercial space companies such as SpaceX and has been harnessed by North Korea for its missiles.

The stagecoach was a complex technology in the 1700s, as it used manufacturing processes that were new and sophisticated at that time. Steam engines, turbines, and steamers were complex technologies in the 1800s and early 1900s. To those uninitiated in their engineering, they could still remain a wonder and be complex. Even in 2019, a mobile phone is clearly quite complex, yet it has had widespread use; such complex technology has been made accessible to everybody. Mobile phones, wireless communication, video, and image transmission rest on a complex scientific and engineering understanding, the fundamentals of which are not something everybody needs to grasp. In fact, much of the science in a mobile phone, if it needed to be understood in depth, would require a very high level of physics knowledge. From that viewpoint alone, a generalisation that complex technology is unmanageable and leads to inevitable accidents may not be appropriate. On the other hand, the sense of danger and risk tends to lead to a 'belief' that complex technology can never be made accident proof. While complex new technology is acknowledged as useful and a 'game changer', the belief is that 'too complex' can be accident prone. As sociologists argue, the human beings who devise a technology may never capture all possible scenarios of failure and may later be surprised by one. Therefore, it is not just complex technology that inevitably leads to accidents; it is the humans who do not fully visualise the scenarios of operation and failure. One view of technological complexity is that it is near impossible to glean why accidents have occurred, as all scenarios would not have been visualised in design, something accident investigators constantly confront. After all, when an aircraft crashes, one must be able to know what went wrong based on analysis of the data that were continuously acquired before the accident. More importantly, an aircraft is made by humans, man-made if you will, and should fall within the realms of structured complexity. However, even there, there can be interactions between subsystems that were not anticipated, and failures can occur. One then refrains from a single root cause theory, as a multiple set of errors aligning together would have been the reason for the failure, as will be examined in the Swiss Cheese theory and incubation theory later on. Accident investigators examine the epistemic nature of accidents using forensics. The explosion aboard a TWA B747 on 17th July 1996 [24, 25] was first thought to be due to terrorism; only upon reassembling the wreckage did the view emerge that a fuel tank explosion could have caused it. The greater the technological complexity that is not thoroughly understood, the greater the possibility of a long-drawn-out accident investigation.

What exactly counts as complex technology can also be interpreted in terms of the science it uses. LEDs are based on complex physics, and electronic chips are based on complex physics, but today we take LED lights and computer chips for granted. However, nuclear energy generates enormous fear, even though it was discovered and existed before the physics of chips and LEDs was worked out. The fear of a disaster is high for nuclear energy but not for an LED bulb. There have been foresight failures in what were perceived to be simpler technologies; nobody anticipated asbestos poisoning, and it was never thought of as causing widespread illness until it did. This applies to a number of drugs, uses of medical devices, etc. The chicken-and-egg conundrum of automation and technological complexity arises from the many possibilities of automation. Managing a complex system is generally regarded as possible only through automation. However, more automation also leads to additional complexity. The way engineers handle this is to simulate, test, and then accept the technology. What automating technological complexity does is pass the buck from the operator to the designer, and to the designer's ability to account for all types of scenarios. Thus, foresight failures are about the inability to predict the type of failure and to have means to address it. All of this adds to the wariness of the sociologists, an issue addressed in Chap. 5.

4.3 Engineers and Complexity

Engineers at heart are drawn to the scientific process of theory and experiment, although they accept that real-world problems need to be catered for and that they must find means to do so, irrespective of whether a theory works perfectly or not. To go where no man has gone before is more than science; it is engineering. Engineering is about structuring complexity: understanding subsystem function and aggregating behaviour. Engineers are typically reductionist in their attitudes; they would like to first break up the problem into smaller parts that can be handled and understood at their own level. A lexicon of assembly, aggregation, subsystem, resolution, and granularity reflects typical engineering concepts. Much of how engineers see complexity is in terms of the various elements (or parts) and how to manage each element, such that a subsystem (element) failure will not cause a catastrophic failure of the entire system. In terms of engineering methods, a fault tree is built, and various scenarios are analysed. The devil is in the detail; if a certain subsystem failure scenario and its malfunction can cause a complete system failure, the end result could be catastrophic. If a series of system failures occurs simultaneously and progressively, then the level of redundancy decreases. In an aircraft or spacecraft, if a part fails before the end of its predicted life, redundancy scenarios are envisaged to be able to bring the vehicle back home, even if it cannot complete its mission, and the crew is explicitly trained for the purpose. Engineers also use probability of failure analysis to provide a measure of the number of hours that a system might perform before failure. This provides regulatory authorities with information about the robustness of the system. Apart from fault tree analysis, hazard analysis and failure mode, effects, and criticality analysis are also performed [26–28]. Fault tree analysis (FTA) [29] is quite popular for seeing how failures at subsystem level contribute to a final catastrophic failure. The approach examines how the final failure was influenced by a propagation of subsystem failures. This allows engineers to minimise risk by attempting to account for subsystem failures at various levels and so avoid a system failure. In a sense, they are able to logically take note of expected subsystem failures but cannot cater to unexpected, unanticipated scenarios or events. They are also unable to account for operator failures classified as human errors, although mechanical and electronic system failures are noted. In a system-level failure, at the top of the fault tree, conditions are classified based on severity. The failure conditions are developed as part of a hazard analysis. Hazards are identified to define the risk; these may be a single hazard or a combination. A set of such events in sequence leading to final failure is regarded as a scenario, which is given a probability of occurrence, and these scenarios are ranked in terms of severity. To add to this, a failure mode, effects, and criticality analysis (FMECA) is also conducted, which uses acquired data and creates relationships between possible failures, their effects, and their causes to determine the risk. When engineers use failure and fault analysis and risk and reliability theories, they also use a factor of safety approach, accepting that failures can occur because of defective material, part manufacturing, or even design.
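
To make the fault tree idea concrete, a minimal sketch follows. The gate structure and the basic-event probabilities are invented for illustration; real fault trees in certification are far larger and use qualified failure-rate data.

    # Minimal fault tree sketch (illustrative only). Basic-event probabilities
    # and the tree structure are assumed values, not data from any real system.
    # OR gate: the intermediate event occurs if any input occurs.
    # AND gate: the top event needs all inputs to occur (redundancy).

    def p_or(*probs):
        """P(at least one of several independent events)."""
        p_none = 1.0
        for p in probs:
            p_none *= (1.0 - p)
        return 1.0 - p_none

    def p_and(*probs):
        """P(all of several independent events)."""
        result = 1.0
        for p in probs:
            result *= p
        return result

    # Assumed basic-event probabilities per flight hour (illustrative numbers)
    sensor_fails = 1e-4
    channel_power_fails = 1e-5

    # Each channel fails if its own sensor OR its own power supply fails
    channel = p_or(sensor_fails, channel_power_fails)

    # Top event: loss of function needs both (assumed independent) channels to fail
    top_event = p_and(channel, channel)

    print(f"Per-channel failure probability: {channel:.2e}")    # ~1.1e-4
    print(f"Top event (both channels fail):  {top_event:.2e}")  # ~1.2e-8
    # Caveat: a shared (common cause) failure, e.g. one power bus feeding both
    # channels, would break the independence assumption and dominate the result.

The caveat in the last comment is precisely where the problem of unanticipated interactions discussed in this chapter enters: a single shared dependency can quietly invalidate the independence assumed in the tree.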


In practice, safety factors are combined with such probabilistic failure methodologies. However, these methods have no explicit measure of foresight failures or of unanticipated events due to "unknown" interactions. Engineers build redundancies in many forms, using fault tree analysis to arrive at a defence in depth strategy [30], as used in nuclear power plants to contain radiation through containment structures, automated shut-off systems, coolant systems, etc. These have been tested in many accidents. Defence in depth has also long been a well-known military strategy. In nuclear power plants, the defence in depth provisions are a series of physical barriers between the radioactive reactor core and the environment, multiple safety systems, and backups, all structured to minimise accidents arising from the vagaries of human interaction with the plant systems; errors and risk are systematically minimised by the layering of the defence mechanisms. There are other redundancies as well, such as bypass systems, alternate backup systems capable of taking the load from subsystems that have failed, and of course alerting systems. These systems can add to the cost of the total project. Apart from systems, there is training, organisational awareness, and repeated inspection. All of these factors attempt to make the safety systems as robust as possible. There are other concepts too: damage tolerance [31] and the safe life concept [31], where a system is predicted to perform even with a flaw between certain inspection periods. A fail-safe system is a form of redundancy in which alternate safety paths are activated. A fault-tolerant system can in fact have backup systems that come into operation if a primary system fails. While these approaches are limited to building redundancies, one cannot keep extending defence in depth through very many layers, as there is also the issue of cost. The Swiss Cheese theory attributed to James Reason [32, 33], a psychologist who made important contributions to the study of human error (and who will be discussed in Chap. 6), is widely recognised and used in engineering, healthcare, emergency management, and security. It imagines defences to be positioned and stacked like slices of Swiss cheese, and unless all the holes, akin to subsystem failures, align, there cannot be a catastrophic failure. This is somewhat akin to Turner's incubation-to-final-failure view (see Chap. 5), where subsystem failures are incubated, represented as holes in each slice, and all need to align to let the subsystem errors propagate. A defence is "layered" in such a way that the holes are not aligned through and through: a defence in depth strategy. Therefore, errors, faults, lapses, and weaknesses in one subsystem do not lead to a full-blown failure, because there is defence in depth.
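
As a software-level illustration of the fail-safe and fault-tolerant ideas mentioned above, the sketch below shows a primary/backup pattern with a degraded fallback. The sensor names, the ordering of channels, and the fallback value are all hypothetical; real safety-critical systems implement such logic under strict standards rather than in this simplified form.

    # Hypothetical sketch of a fault-tolerant read with a fail-safe fallback.
    # Channel names and the degraded default are invented for illustration.
    from typing import Callable, Sequence

    class SensorError(RuntimeError):
        pass

    def read_with_fallback(channels: Sequence[Callable[[], float]],
                           degraded_default: float) -> tuple[float, str]:
        """Try each redundant channel in order; fall back to a safe default."""
        for i, read in enumerate(channels):
            try:
                return read(), f"channel-{i}"      # fault tolerance: backup channels
            except SensorError:
                continue                           # record the fault, try the next channel
        return degraded_default, "degraded-mode"   # fail-safe: conservative value

    # Example usage with one failing and one healthy (simulated) channel
    def primary() -> float:
        raise SensorError("primary sensor offline")

    def backup() -> float:
        return 42.0

    value, source = read_with_fallback([primary, backup], degraded_default=0.0)
    print(value, source)   # 42.0 channel-1

The design choice mirrors the concepts in the text: the backup path provides fault tolerance, while the degraded default is the fail-safe: the system keeps operating at curtailed performance rather than failing outright.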

Cook [34] argues that complex systems are intrinsically hazardous systems. Catastrophic failures require multiple failures, including failures in subsystems that are by themselves regarded as small, sometimes run of the mill, but that can create the ground for a catastrophic failure. The view is that there will always be flaws and latent imperfections; complete removal of all flaws and latent failures is near impossible, as there is an economic cost to it. In fact, that concept is embedded in the damage tolerance philosophy as well. The strategy is that when small failures occur, the complex system runs in a degraded mode, effectively operating at curtailed performance until the small failures are rectified. Cook argues strongly that post-accident attribution to a "root cause" is inappropriate, especially because overt failure requires multiple faults, and there is no isolated "cause" of an accident. Change introduces new forms of failure [34]. There can be complacency when the rate of accidents is low in systems that are reliable, and modifications and changes are made on the basis of that confidence. The Boeing 737 Max illustrates this to a point: the 737 was a reliable aircraft, and changes were made with the confidence that the new aircraft would perform like the old one, even assuming that no new simulator training would be needed, as discussed in Chaps. 2 and 3. The introduction of new technologies, for example to reduce maintenance, which has high-frequency but low-consequence failures, could create new failure scenarios in complex systems. Cook notes that when new technologies are used to eliminate well-understood system failures or to gain high-precision performance, they often introduce new pathways to large-scale, catastrophic failures [34].

The phrase "failures are stepping stones to success" is really about learning from failures in complex systems. Training to recognise the envelope of operations, especially the edges of the envelope that should not be crossed, comes with experience of operating the system, in effect with mental models of the system, which will be discussed in a later chapter. Many of the problems arise when operations are at the edge of the envelope, apart of course from the unforeseen scenarios. At that part of the envelope, performance and predictability deteriorate, and resilience and robustness become issues. Thus, in complex systems, the rules lie in implementing defence in depth mechanisms, intensive training, and a recognition and belief that all foreseeable scenarios have been catered for and that these have a low probability of catastrophic failure. For example, Virgin Galactic aborted a test flight of its space plane on December 13th, 2020. A follow-up statement from Virgin Galactic's CEO, Michael Colglazier, was that VSS Unity's "onboard computer that monitors the rocket motor lost connection". This triggered a "fail-safe scenario that intentionally halted ignition of the rocket motor" [35]. However, building layers of redundancies costs money and could ultimately make a product unviable. Therefore, the challenge has always been to introduce technology and safety concepts that are feasible.

Complex systems are sometimes designed to ensure that certain elements are decoupled, such that overall systemic consequences can be minimised and important parts are shielded so that they remain functioning and available to respond. In systems engineering, especially for complex systems, there is a measure of how robust a system is, effectively meaning whether certain external effects will cause failures or not. It is argued that nature produces highly robust systems and that there is important learning from them [36, 37]. Robustness measures, when introduced, can account for resistance to external action or for anticipation and avoidance. Smart active systems, such as complex and agile guided weapons, can avoid attacks and change paths; other systems, if not agile, can be robust based on a defence in depth type of strategy. It can be argued that vulnerability is the inverse of robustness: vulnerability refers to the inability, rather than the capability, of a system to survive external effects. The term can be used very effectively in modelling and defining all kinds of complex systems, whether social, environmental, or financial, or even in terms of cognitive, psychological, or emotional states, and it is also a measure of harm to a system, a social community, a nation, and so on [38–40]. Then there is the concept of antifragility [41], which is fundamentally different from the concepts of resilience (the ability to recover from failure) and robustness (the ability to resist failure). Resilience is a term used to describe complex systems that survive external effects or attacks; many complex systems and computer networks can typically be resilient under hacking attacks, faults, and natural disasters, and they may also be tolerant of poor design or operation. Equally, the antithesis of this is fragility or vulnerability, or even brittleness: complex systems that appear robust might just break apart under certain conditions that were not anticipated. Taleb, in his book Antifragile [41], discusses the property of systems that survive and succeed even with external shocks, stresses, mistakes, and volatility; these could be human-engineered systems or financial systems. Taleb makes it a point to differentiate antifragility from robustness and resilience, with antifragility going beyond both: "The resilient resists shocks and stays the same; the antifragile gets better" [41].
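As a purely illustrative toy, with update rules and numbers invented for this sketch rather than taken from Taleb, the contrast can be reduced to a few lines of arithmetic: a fragile system accumulates damage from each shock, a robust one ends up where it started, and an antifragile one ends up slightly better.

    shocks = [0.3, 0.5, 0.2]   # assumed shock magnitudes

    def after_shocks(kind, shocks):
        # Track a performance index, starting at 1.0, through a run of shocks.
        perf, path = 1.0, []
        for s in shocks:
            if kind == "fragile":
                perf -= s            # damage accumulates and is never repaired
            elif kind == "robust":
                pass                 # the shock is resisted; nothing changes
            elif kind == "antifragile":
                perf += 0.1 * s      # the system gains a little from each stressor
            path.append(round(perf, 2))
        return path

    for kind in ("fragile", "robust", "antifragile"):
        print(f"{kind:12s} {after_shocks(kind, shocks)}")

Resilience, in this picture, would be a trajectory that dips during each shock but returns to the starting level afterwards; the antifragile trajectory is the only one that trends upwards.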

4.4 Complex Organisations

In many cases, accidents have occurred because of a failure of technology, but the weakness of an organisation has also led to cascading effects, accidents and disasters, and eventually to the failure of the organisation as well. While the complexity of technology is well acknowledged, organisations can be incredibly complex too, and the whole being more than the sum of its parts applies to organisations as well. An aircraft manufacturer depends on units, departments, and subcontractors, all of whom need to design, develop, and manufacture the millions of parts that go into an aircraft, which then need to be tested and maintained. That requires an organisational structure that looks at both safety and business sustainability. Something can go wrong in an aircraft because of the organisation; equally, the organisation may crumble because of a flaw in the aircraft, having bet the company on a product that turns out to be unsafe and fails, leading to the ultimate failure of the company. Therefore, as seen in Chap. 2, there have been cases where a complex organisational structure, poorly channelled communication, a chaotic interaction protocol, and a hierarchy that stifles opinions and alerts of danger can lead, and has led, to disasters. There have been instances where organisations dealing with complex technology have made it a point to retain a flat hierarchy and enable better communication. In some cases, a certain type of hierarchy has been found necessary because of the need to localise responsibility for the technology, such that the understanding of the technology and regulatory compliance remain with experts. Read Bain argued as early as 1929 [42] that sociology itself is complex, as much sociological data can be intangible and can in fact be more complex than natural phenomena. Perrow [43] specifically points out that complex organisations become difficult to understand in terms of their full behaviour. This is something that engineers and scientists would not have caught in their design and development, nor would it show up in risk regulation processes. Work by Turner and Vaughan [44, 45], which will be discussed in detail in Chap. 5, shows that it is possible to examine issues of deviance and the incubation of errors, and how they build up into accidents, through how organisations behave.

Perrow first published a book titled Complex Organizations: A Critical Essay in 1972 [46]. Interestingly, while Perrow as a sociologist deals with disasters more in terms of human behaviour, he looks at organisations structurally, in terms of processes and their relationship with external effects. Perrow argues that organisations become vehicles of power for individuals within them. It is also argued that much of organisational behaviour, planning, and decision making is limited by bounded rationality attributed to individual behaviour, a concept proposed by Herbert Simon [47]. In the garbage can theory, organisations are regarded as organised anarchies, leading to a chaotic reality of organisational decision making [48]. What is more important is that decision makers are opportunistic, and their behaviours are driven by gain and power, something that is explored in Chap. 8. Some of this is illustrated by the organisational issues at Boeing during the 737 Max accidents. Even in organisations, it is not as if complexity is always bad. The question is whether it is structured in a way that keeps communication clear in the hierarchy and follows a protocol in which deviant behaviour can be identified. While we discuss complexity and believe it is important to keep things simple, as in technology, the idea is that any necessary organisational complexity should add value. The problem with a complex organisation with a chaotic structure is that decisions are not made at the right level, leading to the accumulation of inefficiencies. Previously, units in large organisations were not heavily interconnected, but with information technology they often are, many of them globally, and they have by definition become more complex. As with complex technology, interactions may produce surprising new and unanticipated scenarios, and these behaviours, even if regarded as outliers, may, if large in number, create unforeseen actions.

4.5 Black Swans

Taleb’s Black swan theory is about rare events [49], the probability of occurrence of which is supposed to be extremely low but with devastating consequences. Now is there a link between Perrow’s view of the inevitability of accidents (which will be discussed in Chap. 5) and rare events with low probabilities when using complex technologies. Throughout the chapter, the conjecture that complexity could deliver unanticipated behaviour/responses has been acknowledged. A black swan event is possible if, as Turner also puts it (see Chap. 5), it could also be a

4 Keep It Simple but not Stupid—Complex Technology …

65

foresight failure. Even rare and low probability events will need to be anticipated. Is that possible? As in the pandemic or the B737 case, these events were talked about. Therefore, they were seen as possibilities, a Black swan is about an “unknown” event and a black swan can be managed only by creating a system that is robust and antifragile.

4.6 High-Reliability Organisation (HRO) Theory

In Chap. 5, a sociologist's fatalist view on the inevitability of accidents, based on human limitations in foreseeing everything, will be discussed. While that view does have evidence going for it, the role of scientists and engineers is to evolve towards a risk-free society. The HRO theory [9] was conceptualised after looking at a number of complex organisations where an accident would be disastrous. These include aircraft carriers, nuclear power plants, and air traffic control systems. Of these, the complexity of aircraft carriers is important to understand. On its own, the carrier is a floating city; it has air traffic control systems, carries armaments, is nuclear powered, and routinely passes through bad weather. Landing on and taking off from an aircraft carrier is a high-risk business. However, over the years, accidents have been low. This is not to say there have not been any accidents, but there is no case where an aircraft carrier was lost due to an accident. Therefore, the argument was to see what made some complex technologies and organisations successful, with adverse outcomes that were rather infrequent. A number of research projects were undertaken, among which the one at the University of California, Berkeley [9] stands out, and they found common traits:

• Preoccupation with failure
• Reluctance to simplify
• Sensitivity to operations
• Commitment to resilience
• Deference to expertise.

These traits are sociological and human behaviour characteristics and form part of the basic tenets of HROs. In HROs, process failures are addressed immediately; no failure is ignored, there is constant thought about scenarios of failure, and a transparent reporting culture is encouraged. There is no attempt to convert complex issues into simple ones; there is constant root cause analysis, which always challenges conventional wisdom and sets benchmarks. HROs have an ear to the ground and listen to what people on the front line are saying; no assumptions are made, and the alerting structure is open. The structure is such that reporting does not turn into whistle blowing in a damaging form and in fact provides feedback to those who have reported the issue, without the possibility of reprimand. There is a sense of agility in these organisations in terms of what needs to be done once a problem has been spotted, and multidisciplinary teams are trained for problem solving and for managing unforeseen scenarios. The most important attribute, however, is that there is deference to expertise, not authority. The organisation knows who has the expertise and the domain knowledge, and it invests in creating and nurturing that expertise. In summary, none of these traits is engineering or science driven; they correspond to organisational behaviour, human behaviour, and sociology.

Interestingly, in the early months of the pandemic, especially in some countries, there was less deference to expertise, not much interest in listening to doctors and nurses on the ground, and reluctance to accept the possibility of a disaster. There were also attempts to simplify a complex problem, and in some cases deference to expertise was notably lacking in the political leadership. In healthcare, even a minor error can have catastrophic consequences. On the other hand, some healthcare organisations have moved to adopt the HRO mindset, and systems have been developed to reduce errors and improve reliability [50]. However, HRO remains a set of human behaviour and sociological recommendations. It does not sit in the turf of engineering systems, which attempt to improve reliability using data and probability, nor does it offer a measure of reliability as is done in engineering. On the other hand, Leveson [51] argues that a systems approach to safety allows safety to be built into sociotechnical systems more effectively and provides higher confidence than is currently possible for complex, high-risk systems. One of the arguments of Leveson and her group [51] is that it is not just the failure of subsystems that causes accidents; accidents also arise from aberrant interactions among what are termed perfectly functioning subcomponents. These interaction behaviours truly point to organisational communication and decision making: when these deviate or become dysfunctional, accidents tend to occur. Additionally, as the complexity of these interactions grows, the possibility of dysfunctional interactions enveloping the full system is high. The question is whether what applies to a system or a piece of equipment can be used to measure human reliability in operations. While not explicitly stated, that is what HROs aim to address in a qualitative way. Overall, it is acknowledged that technology and organisational complexity can be a cause of failures, accidents, and disasters.
Scientists and engineers have historically dealt with complexity using a systems approach, with more complex systems being defined through a system-of-systems framework. In the accidents described in Chap. 2, the complexity of technology and organisations contributed to the accidents; many times, human errors occurred because of the automation of systems, and in those cases understanding and mental models were deficient. These encompass human behaviour, social structures, belief systems, and the general perception of risk, issues that are addressed in Chap. 5. Complex organisational structures that deviate from accepted processes for higher profit or to meet schedules were also noted in some of the accidents; these issues are addressed in Chap. 8.

References

1. Smith, A. (2019). Trump tweets airplanes becoming 'far too complex' following Ethiopian Airlines crash. NBC News, March 12, 2019.
2. Perrow, C. (2000). Normal accidents: Living with high risk technologies. Princeton University Press.
3. Feder, B. J. (1988). Business technology; The A320's fly-by-wire system. New York Times, June 29, 1988.
4. Ministry of Civil Aviation, Government of India. Report on the accident to Indian Airlines Airbus A320 aircraft VT-EPN on 14th February, 1990 at Bangalore, by the Court of Inquiry, Hon'ble Mr. Justice K. Shivashankar Bhat, Judge, High Court of Karnataka. http://lessonslearned.faa.gov/IndianAir605/Indian%20Airlines%20Flt%20605%20%20Accident%20Report.pdf. Accessed August 10, 2022.
5. Aviation Accidents. https://www.aviation-accidents.net/air-france-airbus-a320111-f-gfkc-flight-af296/. Accessed August 10, 2022.
6. The Economic Times. (2019). When DGCA grounded planes for safety reasons, March 14, 2019.
7. BBC. (2020). Uber's self-driving operator charged over fatal crash, September 16, 2020. https://www.bbc.com/news/technology-54175359. Accessed August 10, 2022.
8. CBS News. (2022). What's the status of self-driving cars? There has been progress, but safety questions remain, February 19, 2020. https://www.cbsnews.com/news/self-driving-cars-status-progress-technology-safety/. Accessed August 10, 2022.
9. Roberts, K. H. (1989). New challenges in organizational research: High reliability organizations. Organization & Environment, 3(2), 111–125.
10. Koestler, A. (1967). The ghost in the machine. Hutchinson.
11. Leitch, R. R., Wiegand, M. E., & Quek, H. C. (1990). Coping with complexity in physical system modelling complexity. AI Communications, 3(2).
12. Simon, H. A. (1973). Structure of ill structured problems. Artificial Intelligence, 4(3–4), 181–201.
13. Dean, R. D., Hawe, P., & Shiell, A. (2007). A simple guide to chaos and complexity. Journal of Epidemiology and Community Health, 61(11), 933–937.
14. Singh, K. (1995). The impact of technological complexity and interfirm cooperation on business survival. Academy of Management Proceedings (1).
15. Valdez, L. D., Shekhtman, L., La Rocca, C. E., Zhang, X., Buldyrev, S. V., Trunfio, P. A., Braunstein, L. A., & Havlin, S. (2020). Cascading failures in complex networks. 8(3).
16. The MITRE Corporation. Systems of systems. https://www.mitre.org/publications/systems-engineering-guide/enterprise-engineering/systems-of-systems. Accessed August 10, 2022.
17. Checkland, P. B. (1981). Systems thinking, systems practice. Wiley.
18. Turner, B. A. (1978). Man-made disasters. Wykeham Publications.
19. Reason, J. (1995). A system approach to organizational error. Ergonomics, 38(8), 1708–1721.
20. Reason, J. (1997). Managing the risks of organizational accidents. Ashgate.
21. Bogner, M. S. (2002). Stretching the search for the why of error: The systems approach. Journal of Clinical Engineering, 27, 110–115.
22. Graham, D. A. (2014). Rumsfeld's knowns and unknowns: The intellectual history of a quip. The Atlantic, March 18, 2014.
23. Van Riper, A. B. (2007). Rockets and missiles: The life story of a technology. JHU Press.
24. NTSB. (2000). In-flight breakup over the Atlantic Ocean, Trans World Airlines Flight 800, Boeing 747-131, N93119, near East Moriches, New York, July 17, 1996. Aircraft Accident Report NTSB/AAR-00/03, August 23, 2000.
25. Hadad, C. (1996). What happened to Flight 800? CNN, July 19.
26. DOD. (1980). Procedures for performing a failure mode, effects and criticality analysis. U.S. Department of Defense, MIL-HDBK-1629A.
27. NASA. Failure modes, effects, and criticality analysis (FMECA). National Aeronautics and Space Administration, JPL, PD-AD-1307.
28. Center for Chemical Process Safety. (2008). Guidelines for hazard evaluation procedures (3rd ed.). Wiley.
29. Ericson, C. (1999). Fault tree analysis – A history. In Proceedings of the 17th International Systems Safety Conference.
30. International Nuclear Safety Advisory Group, IAEA. (1996). Defence in depth in nuclear safety (INSAG-10).
31. Reddick, H. K., Jr. (1983). Safe-life and damage-tolerant design approaches for helicopter structures. NASA Conference Paper 19830025690. https://ntrs.nasa.gov/api/citations/19830025690/downloads/19830025690.pdf. Accessed August 21, 2022.
32. Reason, J. (1990). Human error. Cambridge University Press.
33. Reason, J. (2000). Human error: Models and management. BMJ, 320, 768–770.
34. Cook, R. I. (1998). How complex systems fail: Being a short treatise on the nature of failure; how failure is evaluated; how failure is attributed to proximate cause; and the resulting new understanding of patient safety. https://how.complexsystems.fail/
35. Whattles, J. (2020). Virgin Galactic unexpectedly aborts test flight of space plane. CNN Business, December 13.
36. Roberts, A., & Tregonning, K. (1980). The robustness of natural systems. Nature, 288.
37. Kitano, H. (2004). Biological robustness. Nature Reviews Genetics, 5.
38. Zio, E. (2016). Challenges in the vulnerability and risk analysis of critical infrastructures. Reliability Engineering & System Safety, 152.
39. Lind, N. C. (1995). A measure of vulnerability and damage tolerance. Reliability Engineering and System Safety, 48(1).
40. Blockley, D. I., Lu, Z., Yu, Y., & Woodman, N. J. (1999). A theory of structural vulnerability. The Structural Engineer, 77(18).
41. Taleb, N. N. (2012). Antifragile: Things that gain from disorder. Random House.
42. Bain, R. (1929). The concept of complexity in sociology. Social Forces (1929–30), 222–231. https://brocku.ca/MeadProject/Bain/Bain_1930_html
43. Perrow, C. (1999). Normal accident theory: Living with high risk technologies. Princeton University Press.
44. Turner, B., & Pidgeon, N. (1997). Man-made disasters. Butterworth-Heinemann.
45. Vaughan, D. (2016). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press.
46. Perrow, C. (1986). Complex organizations: A critical essay. McGraw-Hill.
47. Simon, H. (1957). A behavioral model of rational choice. In Models of man, social and rational: Mathematical essays on rational human behavior in a social setting. Wiley.
48. Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A garbage can model of organizational choice. Administrative Science Quarterly, 17(1), 1–5.
49. Taleb, N. N. (2008). Black swans. Penguin.
50. Banfield, P., Cadwaladr, B., Hancock, C., Heywood, T., & Semple, M. (2012). Achieving high reliability in NHS Wales. http://www.1000livesplus.wales.nhs.uk/sitesplus/documents/1011/Achieving%20High%20Reliability%20in%20NHS%20Wales%20%28FINAL%29.pdf
51. Leveson, N. (2015). A systems approach to risk management through leading safety indicators. Reliability Engineering & System Safety, 136.

5 Are Failures Stepping Stones to More Failures? The Sociology of Danger and Risk

After President Trump left office, Dr. Anthony Fauci, the Director of the National Institute of Allergy and Infectious Diseases, who was part of President Trump's task force on COVID, described his discussions with the President on the pandemic. Interviews with the COVID task force are in the media, and there is a sharp contrast between the views of President Trump and Dr. Fauci in those discussions. During the rapid rise in cases in the New York area, President Trump is quoted as asking whether things were truly that bad, to which Fauci would counter that things indeed looked truly bad. While Fauci would look at research to make decisions about a drug, President Trump, according to Fauci, would rely on anecdotal inputs from people he knew, who called him saying, for example, that the drug hydroxychloroquine (HCQ) was great or that convalescent plasma worked [1]. This episode represents the classic divide between scientists, who are specialists, and others, who are often guided by popular opinion and the social framework with respect to the perception of risk. At the end of January 2020, as the pandemic moved from China to many countries, very few believed that a disaster of this scale would occur. When some governments started to act, there were worries that such drastic action would lead to a battered economy. Some politicians, for example, expected it to disappear quickly [2, 3]. In January 2020, nobody visualised how New York City would look in April 2020, with hospitals full, makeshift morgues, and a deserted Manhattan [4, 5]. However, such events had been discussed before; novels were written, and movies were made [6, 7]. Much of social epidemiology is based on how epidemics spread in social settings.
The pandemic of 2020 was definitely in a social setting: politicians who were up for election, businesses that desperately needed to remain open to survive, doctors at the very end of their wits, and frontline workers vulnerable on all fronts, namely work, family, and survival if they were laid off or caught the virus. However, there were several people who truly did not take the pandemic seriously, believed it was exaggerated, and felt that there was no need to follow the guidelines. Masks were suspect, and it was believed that they violated personal freedom [8, 9]. Even those who were fearful of the danger could not resist interaction with friends and loved ones. This was all about human behaviour and sociology, set against the twenty-first-century backdrop of science and technology. Why do some people disbelieve science, why do some take the risk, and why do so many not care about infecting others? It is not the pandemic alone that illustrates the sociology of risk; most accidents and disasters illustrate it. The Boeing 737 Max air crashes led many people to show deep reluctance to fly in the plane again [10], although it has since been fully cleared to fly. The episode illustrates just about all the frontline theories of the sociologists, especially organisational culture, deviance, and imperfect mental models. The Boeing 737 Max story, at least in its initial stages, was as much about sociology as it was about engineering. The public was consistently given extremely optimistic predictions of the aircraft's return [11]. Only after the ouster of its CEO, many months after the crashes, did Boeing take accountability, and it looked as if there were organisational culture issues [12, 13].

Sociology is about prevailing culture. While psychology is about human behaviour, sociology is the study of social behaviour in groups, organisations, and societies, and of how people interact within these contexts. Sociology, anthropology, and psychology all have a role in articulating how human beings behave when faced with risk, or more appropriately when they perceive danger. Risk perception in society seems divided between scientists and engineers on the one hand and lay people on the other; the perception of risk by lay people is what has been studied by sociologists and psychologists. There is also a sense of fatalism among some sociologists about accidents and disasters, based on the belief that the combination of complexity and human behaviour provides the scope for accidents to continue to happen. Moreover, it is organisational behaviour that is emphasised, about which scientists and engineers have had little to say. Later in this chapter, the work of Perrow [14], Turner [15], Vaughan [16], and Downer [17] is studied. Charles Perrow argues that, when using complex technology, accidents can be inevitable.
Turner, in his book on man-made disasters, describes the crucial links to incubation and organisational structure. Vaughan, in her work on the Space Shuttle disasters, describes organisational structures and management decision-making failures. Mary Douglas, in her seminal work, described how the reaction to risk and the interpretation of failure occur in communities and are largely cultural. It is well known that in nonindustrial communities there appear to be different perceptions of risk and interpretations of disasters and failures, sometimes driven by myths, religion, and inaccurate mental models of complex physics. These theories are discussed in this chapter.

5.1 Mary Douglas and the Perception of Danger and Risk

The sense of danger is a primitive and primordial instinct. In that sense, the concept of danger is mostly individual, with social and collective constraints. Interestingly, a road can be "dangerous", while driving on it could be "risky", so these words have been used interchangeably. One thing is clear: as in any modern culture, the red traffic light says nothing about risk; it says danger. In any society, primitive, industrial, or otherwise, it is danger that appears to be easy to absorb. The author believes that we tend to use the word danger more frequently than risk, although over the years "it is dangerous" has sometimes been replaced by "it is risky". As an afterthought, it can be argued that people respond by acting as if they know the difference when asked, but in reality they are talking about danger. A dictionary definition of risk is full of the possibility of danger, failure, or loss, but the word is colloquially used as a synonym for danger: risk is defined as a situation involving exposure to danger, while danger is defined as the possibility of suffering harm or injury [18]. In some ways, the widespread use of the word risk in everyday life has disturbed the very concept of danger, and sociologists are critical that risk often makes a very spurious claim to be scientific. As an anthropologist and sociologist, Douglas [19] argues that it is all very well for individuals in primitive tribes to think and act in unison; the simplicity and uniformity of their experience make that understandable. Douglas and Wildavsky [20] argue that risk selection is a social process. While scientists and engineers strive to explain risk selection through probability, nonspecialists, when confronted with social decisions, cannot assimilate the risks involved in complex technological issues.

This is illustrated in the present pandemic, as much of the response has been social and tribal, driven in part by social media. The tribe or community appears to make choices with respect to dangers, and some things that appear dangerous to a particular community are not perceived as dangerous by other communities. This manifests primarily in social behaviour. Social anthropologists such as Mary Douglas note that there have been cases in tribes and strong communities where natural disasters are taken as a sign of reprehensible behaviour, so much so that a tenuous link is established between dangers and morals. In some other communities, technology is viewed as a source of danger. Why is vocabulary so important? The lexicon reflects the contemporary thought processes in societies, with risk replacing danger as if it were more scientifically correct. There is a certain feeling of being static with danger, whereas risk is more about the future and provides a sense of prediction. It is interesting that the sense of danger, based on individual or shared experience, generates an intuitive risk assessment. This is especially true when the risk taking is for gain; we explore this specifically in a later chapter. However, as sociologists such as Mary Douglas argue, the low probability of a dangerous event makes little difference in making a choice, because the public is aware that many things may have been left out of the risk calculations. The author believes, especially based on the work of Douglas, that sociologists are not convinced that the scientists' and engineers' view of risk is objective or well rounded, as it is devoid of the social and political issues that drive risky behaviour. In other words, they argue that many of the dangers come from human action, interwoven with politics and social behaviour, which is not explicitly accounted for by engineers. Human beings sense danger, but the sense of risk that they have developed is complicated by social structure, as noted by Douglas, and what they internalise in terms of risk is a qualitative trade-off about whether the event will occur. Sociologists such as Douglas argue that the sense of danger in a modern industrial society guides us in day-to-day risk taking and that this too is a socially constructed process. In other words, one's sense of risk is based on the society in which he or she lives. Even in the COVID pandemic, Douglas's view of how social groups perceive danger can be seen, for example in the number of people who refused to wear masks [21]. In the cultural theory of Mary Douglas and Wildavsky, groups are identified in the form of a worldview: hierarchical, individualistic, egalitarian, or even fatalist. The reaction to risk is supposed to be based on the worldview of the group, and any information about a certain risk is interpreted based on that view, generating polarised positions [20].

The issue, as in the pandemic, is how decision makers, politicians, and civil servants use risk analysis to make decisions such as lockdowns. There is an intuitive understanding of probabilities in human beings, but there has always been a need for guidance. A decision maker, especially a politician, does need to see the "risk statement" to make a choice based on the probabilities, that is, a real quantitative risk assessment, and relies on scientists for the purpose. Here we see the interplay between danger, risk, and uncertainty. While danger exists, there is also a sense of uncertainty about the occurrence of the event, leading to risk assessment; thus, the basis of risk could be regarded as uncertainty. However, Mary Douglas argues that risk has unfortunately become a "decorative flourish" over danger, noting that the Japanese can comfortably talk about probability, uncertainty, safety measures, and danger without using the word risk. Her main contention is that when probabilities are used to inform the public, for example that there is a ten per cent probability of something happening, it is a poor guide to decision making. Her view is that showing low probabilities of a dangerous event does not make a difference in making a choice, because the public already know that many other things that should go into a risk calculation have been left out [19]. It is not that engineers or scientists do not provide the numbers, but the numbers do not include what sociologists care about, and sociologists argue that a more balanced perception of risk is needed. Therefore, unlike the risk associated with natural disasters such as earthquakes or floods, the aggregation of choices is complex, and risk assessments fail to take note of human interactions, beliefs, compulsions, and the like, all of which are grounded in sociology rather than in science and engineering. Unlike for scientists or engineers, for sociologists risk is all about thoughts, beliefs, and constructs. There have been discussions about the cultural theory of Douglas [22] and its explanatory power. In his analysis, Sjoberg argues that other aspects of social behaviour, such as trust, play a vital role in risk perception [23]. Trust in an organisation, business entity, or corporation is often given huge importance [24], and the author argues that in the Boeing 737 Max case, where trust in both Boeing and the regulator FAA was lost after the accidents, this for a while led to considerable concern about the risks of flying on the aircraft. This happened previously with NASA as well, again reminding us of the need to retain trust in an organisation and, more importantly, in the organisational structure, for its capability and experience. Trust is clearly a social construct and influences risk perception. The study of risk from the cultural theory perspective of Douglas and Wildavsky [20] asks whether society and culture influence the perception and understanding of risk. In fact, the argument is that individual traits, needs, and preferences have less influence than cultural adherence. This is important because the social fabric dictates the perception of risk as much as or more than individual traits, a case seen in the pandemic even in rich and affluent countries. However, this theory has received support as well as criticism, in that it may not have captured the complexity of cultural symbols and meanings. The message to scientists and engineers is that risk perception is a complex phenomenon shaped by many influences: the sense of individual liberty, social structures, religion, peer groups, information, and a sense of trust in institutions, governments, and organisations. It will eventually play a role in how accidents are prevented.

5.2 Engineering and Risk

The Royal Society Risk monograph (1992) states that risk is truly a probability that an adverse event will occur at a given point in time or as a result of a particular process. The emphasis is that, because probability is part of statistical theory, risk also satisfies all the formal laws of combined probabilities [25]. The engineering view of risk is based on scientific methods. For example, the life of an aircraft component is calculated based on scientific theories and tested accordingly, and scenarios of possible failures are simulated and assessed, through which probabilities are computed. As a tribe, scientists and engineers remain convinced that they will eventually be able to minimise risk through probability methods. This is interesting from the point of view of the Boeing 737 Max disaster. A statement that the probability of an event occurring is low (e.g. 1 in a million) can lead to various interpretations and to diametrically opposite conclusions and decisions, such as "it is insufficient to warrant a risk" by some and "it is an acceptable situation" by others. On the other hand, as a situation that is completely risk free is near impossible, these risk assessments do play a role in decision making at the individual and policy level, sometimes articulated, sometimes not. However, sociologists see this in many ways as incomplete, as it does not account for human fallacies, social interactions, and so on. In Risk and Blame [19], Douglas makes the point very clearly: "when I tried to engage established risk analysts in conversation, I soon gathered that to emphasise these dubious uses of risk is perverse, a dirty way of talking about a clean scientific subject. Although they recognise that the grime and heat of politics are involved in the subject of risk, they sedulously bracket them off. Their professional objective is to get at the real essence of risk perception before it is polluted by interests and ideology. Risk analysts have a good reason for seeking objectivity. Like all professionals, rightly and properly, they do not wish to be politically biased: this is important for their clientele. To avoid the charge of bias, they exclude the whole subject of politics and morals", reflecting their notion that right and wrong are not what "matters" [19].

Douglas and Wildavsky [20] emphasise that in a social setting, specialist opinions (those of experts in the subject, such as doctors and scientists in the pandemic) should be treated equally, without extra weight being given to them, as risk is more than what the specialists see: it is really social and reflects shared beliefs and values in organisations and communities. The author believes that this truly is the view sociologists take of scientists and engineers. This is also relevant in the context of the Boeing 737 Max accidents, where the risk or hazard analysis did not take into account human behaviour, politics, social interactions, or the role that the regulator, in a social network of manufacturer and regulator, may play in ignoring the risks of noncompliance. Sociologists argue that contemporary risk analysis is unable to convey the probability of occurrence in a way that is politically or socially meaningful. The argument is that, however the probabilities are bandied about, choices may never actually be made by using them. The view is that the public finds this risk quantification difficult because it does not include social and human behaviour issues in the measurement, and the public intuitively knows that these are important. The vagaries of social interaction and human behaviour drive the risk either up or down. This cannot be better illustrated than with the pandemic, where we see human behaviour that disbelieves in mask wearing but insists on the social interactions that increase the risk. Ulrich Beck, a well-known sociologist who has studied risk, notes in his book Risk Society that people become overly dependent on experts to define what is dangerous [26]. Like Mary Douglas, he states that the pretended innocence of scientists and engineers, with their supposedly politically free view of risk, should be rejected, as it effectively amounts to not wanting to be explicit about the political stakes and the individual's view of them. The author notes that sociologists such as Douglas, Perrow, and Beck believe that avoiding one type of danger could set up another type of danger, that these decisions are political, and that there is little benefit in sanitising and disguising them in probability terms. Because risk perception does not occur in a social vacuum, one cannot account for how people perceive and understand risks without also considering the social context.

The interesting bit is how engineering vocabulary, essentially invented by engineers, is seen by others. For example, "innovativeness", "complexity", and "coupling" are used widely in engineering and are often quantitatively measured, weighted, or indexed; outside of engineering, they are qualitative variables. This applies to risk as well, which sociologists view as a highly ambiguous variable even though it is widely used in public policy. Interestingly, among engineers, risk science is also often used to support a case. This applies to the probability of failure, to hazard analysis, and even to the regulatory requirement that says "a certain aircraft system should be shown to have a failure occurrence of say 1 in 10 million flights or so". The positioning and the chasm between engineers and scientists on the one hand and sociologists on the other sometimes appear irreconcilable. There have been some acknowledgements from the engineering side about human behaviour and social effects on accidents; the concept of soft systems in fact came about based on this recognition [27]. Indeed, David Blockley, before Perrow in 1984 but after Turner in 1978, spoke about the role of human behaviour in the generation of disasters [28]. Blockley's language was more suited to engineers than to sociologists, but what was important was the recognition that an engineering approach devoid of human behaviour effects was inappropriate and required modification. A later book on Engineering Safety edited by Blockley and Turner has a chapter on the sociology of safety [29]. The author believes that what has since happened, however, is that engineers have gone their own way, not incorporating human behaviour effects but refining their algorithms using probability methods, while sociologists have continued to maintain that risk perception is the basis of disasters and that, as a result, disasters will inevitably happen again.
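As a worked illustration of what a requirement like "1 in 10 million flights" implies in aggregate (the figures below are illustrative assumptions, not the certification target for any particular system), a per-flight probability that sounds negligible still accumulates across a fleet's exposure:

    p_per_flight = 1e-7   # assumed "1 in 10 million flights" failure probability

    # Probability of at least one occurrence somewhere in n flights,
    # assuming flights are independent.
    for n in (1_000_000, 10_000_000, 100_000_000):
        p_at_least_one = 1.0 - (1.0 - p_per_flight) ** n
        print(f"{n:>11,} flights: P(at least one occurrence) = {p_at_least_one:.1%}")

The same number that seems insignificant for a single flight all but guarantees an occurrence over a large fleet and a long service life, and how that residual risk is framed and accepted is precisely where, as the sociologists argue, the social and political questions enter.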

5.3 Perrow and the Normal Accident Theory

Charles Perrow, a professor at Yale, has argued that many of the accidents that have taken place, or will occur in the future, are due to the complexity of systems and the number of interacting elements, where the failure of each of them results in a sequence of progressive failures. The book Normal Accidents [14], as the title suggests, implies that such accidents are normal (as defined: conforming to a standard; usual, typical, or expected) and inevitable when system complexity is involved. The work of Perrow [14] has come to be known as normal accident theory (NAT), which also makes a case for organisational and management factors as causes of failures. In other words, NAT is about the inability of the organisation to foresee the accident, apart from the implication that if the technology were simpler, the management could have fared better. Perrow looked at the issues of risk and technology after working on a Presidential Commission's paper on the Three Mile Island nuclear accident in 1979 [14]. The accident as documented was apparently caused by a small leak of water into equipment, which triggered a chain of subsequent failures, malfunctions, and operator errors. NAT quite effectively uses the Three Mile Island accident to show what a blocked filter could do, resulting in cascading failures. It is emphasised that in the Three Mile Island accident, possibilities and scenarios that appeared minor or even inconsequential, and were not anticipated by the designers, led to a cascading chain of failures that broke the multiple safety systems in play. The engineering solution is a defence in depth policy, which engineers are familiar with, as they build redundancies to avoid cascading failures; but to Perrow, this is insufficient, given his view of inevitability based on complexity and coupling.

Perrow argues [14] that while risk takers feel that a certain "moral fibre" is needed to take risks with technologies, they do not show the same appetite for risk in social experiments with respect to poverty, dependency, and crime. Furthermore, the risks from risky technologies are not shared equally by the different social classes. The issues raised here draw a clear distinction between imposed risk and voluntary risk taking, as in skiing, scuba diving, or sky diving. While imposed risk can come from large corporations and their behaviour (the Boeing 737 Max is a case in point), the acceptance of that risk truly needs to be by the public. The argument is that risk assessment is undertaken by a specific group, generally either engineers or scientists, where success is measured not in eliminating the risk of the activity but in minimising the risk of regulation stopping or inhibiting the activity. Any business template has a need for the project to be evaluated for regulatory risk, effectively saying that regulation could pose a risk to the project. Perrow [14] goes on to say that people in power have historically commissioned risk assessments from priests, astrologers, and lawyers, and now from risk experts who use quantitative techniques. The author notes that sociologists such as Perrow see technology risk very broadly, at the societal level. Perrow [14] observes that there is a prevailing assumption that new risks should not be any higher than existing ones, implying that when other industries become riskier, nuclear power could have its safety levels lowered. Based on risk assessment, policy recommendations are broadly tagged as "tolerating, restricting, or abandoning" activities. For example, the author notes that Europe's import of oil from Russia was regarded as a lower risk than nuclear power before the Ukraine war, but already there are campaigners for nuclear power to return in Europe [30].
NAT contrasts such accidents with what are sometimes called "component failure accidents", defined as accidents where one failure causes other failures through predictable and linear relationships between the elements of a system, for example when an electrical surge affects the wiring in a civil aircraft. However, other sociologists, such as Downer [17], note that scientists and engineers envisage "failure-free systems", in normal parlance systems that can err without failing, now even with an ability to repair and correct themselves. The idea is that defence in depth redundancies and fail-safe elements can accommodate the uncertainties within the bounds in which the system operates. However, Perrow says that in many cases the risk assessment tools developed in science and engineering, while they find their way into interactions that involve social and political spheres, do not have the human and social element, and the concept of risk being considered is actually unaccepted danger. This is the underlying theme of Mary Douglas's argument as well.

5.4 Turner and Man-Made Disasters

Barry Turner, and later Barry Turner and Nick Pidgeon, wrote a series of papers and the book Man-Made Disasters, first published in 1978 [15, 31]. While Turner was a sociologist, his work and his views were different from those of Perrow, Mary Douglas, and others. While he acknowledged that the complexity of any system did pose risks, he emphasised issues in organisational management, such as the lack of planning for interactions among members of the organisation in the event of an accident, and channels of communication that, even where they existed, were blocked by unrelated organisational issues. The focus was on the incubation of errors, risks, and dangers: nothing happens suddenly. There is usually a history of smaller failures, and there is a social and cultural context. In contrast to Perrow, Turner did not say in an explicitly fatalistic way that accidents will inevitably happen. Turner's work investigated a series of man-made disasters and noted that danger signals were lurking and that the disasters were preceded by a "legacy of unheeded warnings" which, if acted on, would have averted the misadventure. Turner argued in 1978 that a single cause may not produce a disaster, but a collection of "undesirable" events over an incubation period would. These could be uncertain or ill-defined safety problems, with added technical malfunctions and operating errors, all of which could run counter to accepted beliefs or norms.

Eventually, according to Turner, one of these otherwise unanticipated events becomes a "trigger event". The incubation of failures can be aggravated by the complexity of technology, systems, and organisational structure; the "root cause" in that sense, as used frequently by failure investigators, is truly several errors. In the work of Dekker and coworkers [32], as an extension of the Turner paradigm, accidents are preceded by periods of gradually increasing, although unrecognised or unspotted, risk in the incubation period. Turner argues that the causal features could be one or more of the following: rigidities in institutional beliefs, distracting decoy phenomena, neglect of outside complaints, failure to comply with regulations, and multiple information handling difficulties. In man-made disasters, Turner points to the exacerbation of hazards by strangers and a tendency to minimise emergent danger. These are part of the incubation of a disaster, much of which can go unnoticed until a precipitating event occurs. Turner classified the events into various stages: (1) a starting point (driven by culture, regulation, social mores, etc.); (2) incubation (the accumulation of unnoticed errors, which do not obviously appear to be at odds with accepted beliefs about hazards); (3) a precipitating event that transforms the events of stage 2; (4) onset, where the collapse of cultural precautions becomes apparent; and (5) rescue and salvage, leading to a final stage where full cultural adjustment occurs. Although not explicitly stated, what is important to note is that organisations are often observed to have deviated and become ill-structured preceding the collapse; the organisational culture, however, is seen to readjust to a well-structured form after the event. This has been seen clearly with the Boeing 737 Max and the pandemic. In the Boeing 737 Max disaster, with stage one being the deterioration of culture at Boeing and the FAA, things occurred against the accepted hazard culture. What emerged was an incubation period leading to the accidents and the grounding of the aeroplanes, followed by the next set of stages in which, with legislative oversight, a full cultural readjustment is underway. Turner suggested that more attention be paid to the design of a safe organisation. Three decades later, in the light of disasters such as those of the Shuttle, the Boeing 737 Max, and the pandemic, this is just as relevant now. As later chapters will also show, as much as complexity, it is human behaviour that seems to be the link in the chain. Pidgeon [33] points out that, given the close interdependence between technology and behaviour, behavioural preconditions to failure lead to disasters, which as a result are invariably human-made, their causes being multiple, diverse, and compounded in complex interactive ways over time.

Downer notes that Turner's basic tenet, that accidents are "foresight failures", is valid in many cases, in that they are preceded by events that organisations could in principle (even if not in practice) recognise in time to prevent catastrophic accidents [17]. The fact that these are as much "social" as "engineering" problems was pointed out. This is important because a disaster with warning signs is, in principle, a disaster that can be prevented. In practice, however, even if warning signs were present, as Turner himself argued, there are often intractable problems associated with recognising and acting on them.

5.5 Downer and the Epistemic Accident

Downer in “Anatomy of a Disaster, Why Some Accidents Are Unavoidable”, quotes Friedrich Dürrenmatt, the Swiss playwright, “the more human beings proceed by plan, the more effectively they may be hit by accident” [17]. Through this book, the argument is that among the sociologists, there is scepticism about science and engineering and what it will lead to, as all of science, especially engineering is about planning. Downer provides a different view of disasters (of Turner type of man-made disasters), calling it an epistemic (knowledge related) accident, which places it on familiar ground as far as sociologists are concerned. It challenges the implicit rational philosophical model of engineering and scientific knowledge, which assumes that engineering facts are, in principle, objectively “knowable” and that “failures” are theoretically foreseeable. Downer addresses the fundamental constructs in Western philosophy and its influence on science and engineering, that the scientific method will inevitably lead to the “truth”, and erroneous knowledge claims will be recognisable and can in fact be eliminated. However, any measurement is via an experiment, and Downer points to the inadequacy of experiments to simulate reality [17]. This is true in many cases, as one finds it nearly impossible to simulate an accident completely. Engineering and science are coupled, engineering follows science faithfully, in the form of what Karl Popper calls conjectures and refutations [34], that a worldview is always evolving based on the available facts. However, the basis of science, which is in evolving better theories, does not sit well with the inevitability view of Perrow or Downer. However, among all the sociologists, it is Downer who speaks some of the language of engineers and argues that no complex system can function without small/trivial errors but still can remain robust. On the other hand, Perrow argues that even such tiny errors have disaster potential. Downing’s


Downer's view, in combination with Turner's concept of how small errors incubate to finally lead to a disaster, is certainly plausible. Downer's illustration of the Aloha accident therefore suggests an epistemic accident: the aircraft fuselage tore open in flight after 35,000 flights because the science of a specific kind of fatigue failure, multisite damage, was unknown. Paradoxically, many years earlier the Comet crashes, also due to fatigue failure, had highlighted the problem, after which fatigue was considered a known phenomenon; yet the Aloha accident showed how surprising new physics can remain unknown until it occurs. This applies to the Boeing 737 Max and the pandemic as well. In both cases, there were some precedents of a similar kind, and tests were carried out, but the reality was that those precedents did not lead to preparedness, and some unknown behaviour also emerged. Along the lines of Perrow, Downer notes that it is impossible to completely and objectively "know" complex machines (or their behaviours) and that there is no inherent pathology to failure and no perfect method of separating "flawed" from "functional" technologies. The view is that accidents and failures must be understood in the same way as things that work normally. Epistemic accidents are caused by scientific knowledge or assumptions that turn out to be erroneous because the relevant facts were not known earlier. Perrow's normal accidents, however, assume that something like this is bound to happen; it is in fact inevitable, as human beings are simply not capable of "foreseeing it". In a sense, so does Downer, who argues that, like normal accidents, epistemic accidents are unpredictable and unavoidable, are more likely in highly innovative systems, are likely to recur, and challenge design paradigms. Do both Perrow and Downer believe that scientists and engineers will ultimately get it wrong? Either complexity will get them, or their theories will prove fallible in one case or another, challenging the design paradigms of the previous generation. In some ways, all of this is a foresight failure, but not in the way Turner would describe it; it is about a lack of complete foresight of technology behaviour. Downer, in his paper, notes a statement from an FAA director to the effect that understanding aircraft does not happen quickly but can take decades; hence, successful complex technologies have legacies of failure, and modern aviation owes much to its tombstones. The author argues that this is a telling statement, in which engineers and scientists acknowledge the inevitability of the unforeseen but believe that, by evolution and learning, better systems can be designed.


Scientists and engineers working in the real world make judgements with the best information they have. To say that we should not fly airplanes unless all disagreements have been resolved and all facts are known is, in essence, to say that we should never build them. While the public may be fearful of the dangers associated with planes, that has not stopped very large numbers of people from flying in them. What seems to come out of both Perrow's and Downer's work is a cautionary tale about being unable to predict everything, although science and technology do seem to go on just the same, albeit by learning from failures.

5.6 Vaughan and the Normalisation of Deviance

However, many accidents occur due to another major factor: deviance. Turner argued that it was human behaviour that enabled the incubation of man-made disasters, but it was Vaughan [16] who, on investigating the Shuttle disasters, showed how an organisation could systematically accept deviance until a disastrous event occurred. This is described in her telling piece of work on the normalisation of deviance. Diane Vaughan, a sociology professor at Columbia, investigated the Shuttle disaster. It was not the sense of danger, nor the inevitability of NAT, nor what Downer called an epistemic accident, although one could see a sprinkling of each in the accident. Rather, it was a systemic failure at the organisational level at NASA, which she termed the normalisation of deviance. She argued that normalisation of deviance is a process that leads to change in the social fabric of an organisation. In the now famous Challenger disaster, damage to the crucial O-rings in the booster rockets had been observed in earlier Shuttle launches, and Roger Boisjoly (an engineer with the contractor Thiokol, which designed and supplied the O-rings) had raised the concern that further damage could occur if a launch took place in very cold conditions. However, the technical deviation from performance predictions was found acceptable by the organisation, NASA. What Vaughan found was that this kind of acceptance of risk slowly percolated into the social fabric of NASA. More importantly, the organisation and its management team effectively provided the scope to conceal the danger and not evaluate the risk involved. The view that space exploration is a high-risk game ensured that what was an unacceptable level of risk and concern was ignored. Worse still, the social fabric was intimidating for those who were "whistle blowing".


At NASA, this happened again with the Columbia disaster, which could also be considered an epistemic accident. The physics of a simple foam block causing catastrophic damage was never seen as a possible cause of an accident and falls under the realm of poor mental models, described in a later chapter. However, it was the decision making after the event that showed organisational deviance. The attitude towards risk, that it was better for the crew not to know about the danger before the Shuttle's re-entry, was itself deviant, as reported in the media. Vaughan argues that accident studies would benefit if the early signals of danger due to deviance were identified and treated as warning signs of a disaster. What was found was that over the years, the work culture at NASA, under deadline pressures, began to "normalise" or ignore the possibilities of failure due to process and technical deviations. Risk assessment and management became merely a necessary procedure. As discussed earlier in the book (Chap. 2), some risk was accepted in space exploration, but the gradual normalisation of deviance over the years was not noticed. Small deviations gave way to large ones over time. There was an incremental acceptance of deviation, driven by peer pressures in the organisation for performance and delivery to deadlines. These issues led to the Challenger disaster. Much of Vaughan's work can be corroborated with other cases, such as the Gulfstream accident, the Boeing 737 Max disaster described in Chap. 2, or even the pandemic, especially its handling by some governments. The idea is that in any organisation, or in government itself, the social fabric undergoes changes, even in the most reputed ones, and deviance becomes normal and acceptable. It is not individual (though a collection of like-minded individuals may influence and accelerate it); rather, a broader structural change gradually occurs, changing the cultural conditions and leading to the normalisation of deviance. In healthcare, deviance and a culture of insensitivity creep in insidiously, sometimes over years, because disaster does not happen until other critical factors line up, rather like the Swiss cheese theory. It is said that, in comparison with other sectors, in clinical practice failing to do time-outs before procedures, shutting off alarms, and breaches of infection control deviate from evidence-based practice [35]. While this deviance may not be regarded as wilful, it is deviance just the same. And while the normalisation of deviance is a borrowed concept in healthcare, the drivers for deviance, such as time, cost, and peer pressure, exist there too.


5.7 The Organisational Structure and Risk

While complexity is generally blamed for causing accidents, it is the nature of organisational complexity and culture that is critical in preventing an accident or disaster. The organisation, more than the technology involved, is fundamental in being responsible for, or even incubating, an accident. An organisation is truly people interacting. Sociologists argue that the proper way to study risk is not by the scientific and engineering process alone but also by studying institutional design and looking at the social aspect of risk. In fact, Douglas and Wildavsky [20] end up with the view that risk culture is truly about the organisational structure of groups. Seen this way, it reflects on the organisation and its reaction to risk, as in Boeing and NASA, in terms of organisational structure, risk perception, and the occurrence of accidents. Mary Douglas, in her critique of how risk, truly danger, is evaluated, says that such evaluation is weak in its treatment of something called "the human factor". Everyone agrees that the human factor is central, and this is especially so in organisations. She notes that for the psychologist, the human factor is an individual person, while for an anthropologist it would mean the general structure of authority in the institution or organisation. It is not difficult to assess what this is like; there are symptoms, clues, lines of communication, incentives, and sanctions, all of which can be investigated quite systematically to understand their bearing on the perception of risk. Institutions could be graded quite objectively as safety-ensuring systems. Charles Perrow's analysis of "normal accidents" is a step in this direction. The sociology of risk regulation is generally not seen as important and is never emphasised in regulation. This is as true in aviation regulation, in which the scientific approach and process of "predict, evaluate, and test" are sanitised to the point that human behaviour and social compulsions are left out. Since organisational behaviour is social behaviour, and individual human behaviour also contributes to the causes of accidents, there appears to be a need for formal responsibility for risky activities within the regulatory process and system. Hence, the sociology of risk within organisations, and an organisation's view of risk, need to be included in safety standards, especially for complex technology. Accident investigation is truly hindsight; it yields important information, enables action on the part of those responsible for the system, and the organisation will change. Accident investigations are about enabling the culture of readjustment after any disaster. However, for the prevention of accidents, social and human behaviours and the inherent perception of risk need to be part of regulation.


It is important to note that regulation can specify safety processes and the organisational framework for safety monitoring, but ultimately organisations function to deliver missions and goals. There will be limits to how much safety regulation personnel will be allowed to monitor the social processes that could contribute to accidents. Repeated auditing of processes, including how crises were managed, is a hallmark of organisations that are conscious of safety issues and have a safety culture, as will be discussed subsequently. Scott Sagan, in his book The Limits of Safety [36], asks: are normal accidents inevitable, or can the combination of interactive complexity and tight coupling in organisations be safely managed? This is where high-reliability organisation (HRO) [37] concepts come in. While there is scepticism among the sociologists, the high-reliability organisation advocates point to cases such as flight operations on aircraft carriers and air traffic control systems, where the conditions for normal accidents exist but the systems operate safely every day. Karlene Roberts and her group identified [37] cultural factors, such as collective decision making and organisational learning, as key reasons why an otherwise toxic combination of complexity and risk can be managed. In contrast, critics such as Sagan pointed out that even these systems had serious near misses from time to time and that normal accidents could happen.

5.8 Safety Culture

Over the years, as part of a sociological approach to understanding safety, the concept of a safety culture has been widely discussed, particularly in the 1980s, and was used by the IAEA after Chernobyl. What Silbey [38] points out shows up in the Boeing 737 Max accident and again in the pandemic: normative heterogeneity and conflict, inequalities in power and authority, and competing sets of legitimate interests within organisations. These are normal issues in any organisational culture. The need to make a profit and gain market share is, to that extent, legitimate as well. Equally, pursuing a career in an organisation is legitimate. However, there appear to be limits when these compete with safety culture and with safety requirements. Even during the pandemic, power and authority changed the perception of how the pandemic should be approached, especially when a legitimate economic interest conflicted with introducing safety norms for COVID-19. In his work on the psychology of risk, Slovic [39] contrasts the scientific approach with the way ordinary people perceive and respond to risk. The point here is that psychologists and sociologists recognise this as a complex behavioural issue, as risk is a social process that is socially developed and constructed.


As a result, even with the best of intentions, risk is inherently subjective, and as Slovic puts it, it "represents a blending of science and judgement with important psychological, social, cultural, and political factors". In an article in the journal Science [40], Paul Slovic argues for a framework for understanding and anticipating public responses to hazards and for improving the communication of risk information among lay people, technical experts, and decision makers. Slovic [40] argues that those who regulate risk must understand perceptions of and responses to risk; otherwise, even policies will not be useful. Gorman [41], in a review of Slovic's work, says that risk assessment needs to consider emotions and cognition in public conceptions of danger rather than simply disseminating information. Gorman adds that Slovic's work argues for a two-way street between the public and experts, with experts respecting the various cultural and emotional factors that shape the public's perception of risk. These statements are consistent with the Douglas view described earlier as well. As sociologists argue, the public is influenced as much by feelings and emotions, ideologies, values, and worldviews. This may be why the safety culture concept has not received the traction it should have. Guldenmund [42] notes that while safety culture and climate are generally acknowledged to be important concepts, little consensus has been reached on the content and consequences of safety culture in the past 20 years. Moreover, there is an overall lack of models specifying either the relationship with safety and risk management or with safety performance. Cox [43] noted that the concept of a safety culture means the identification of a set of core variables, such as management commitment, clarity of safety principles, a set of benchmarks, communication to employees in an organisation, and closer integration between management practice and safety management, as in "good safety is good business". Pidgeon [44], a close collaborator of Turner, also argues that the term safety culture represents a new approach to the conceptualisation of processes for handling risk in organisations. While it uses a systematic approach driven by knowledge representation schemas familiar to engineers, it is essentially a sociological and psychological view of behaviour. It has meanings familiar to sociologists in the form of norms, beliefs, roles, and practices. A good safety culture is supposed to have norms for risk, a safety attitude, and reflexivity. It could also be set as a precondition for the operation of complex processes. In this chapter, the author believes that sociologists are effectively in consensus and are sceptical about whether accidents and disasters can be eliminated. There are many theories discussed in the book, each of which has been seen in a specific set of accidents.


However, the sociology of danger and risk seems to have a common thread, mostly dealing with human aspects, organisations, and the inability to deal with complexity. This complexity is what human beings create as they engineer things or, as in the case of the pandemic, as they try to unravel the complexity of nature. While Mary Douglas's cultural theory holds in some cases, Perrow's NAT has been verified by the mounting evidence that, irrespective of the level of detail, some scenarios are simply not anticipated. There is also evidence that organisations and individuals can be deviant. Downer, while he has studied the scientific approach, still believes that there will be epistemic accidents, somewhat aligning with Perrow in that there will be unanticipated scenarios. Vaughan and Turner focus on the organisation: deviance in process, the propensity for incubation of errors, and the alignment of small errors or deviances leading to a catastrophe. What can be agreed upon is the fact that scientists and engineers, in their risk assessment processes, do not explicitly account for the possibilities of human behaviour, social interactions, and deviant action. These are of paramount importance to the sociologist. What sociologists are pointing out is that how human beings think, not just how experts or specialists think, is what matters. They are also saying that risk is a social construct, and scientists and engineers need to deal with it and cannot ignore it.

References

1. McNeil, D. G., Jr. (2021). Fauci on what working for Trump was really like. New York Times, January 24, 2021.
2. Walker, T. (2020). Trump thinks Covid-19 is going to sort of just disappear. The Guardian, July 2, 2020. https://www.theguardian.com/us-news/2020/jul/02/first-thing-trump-thinks-covid-19-is-going-to-sort-of-just-disappear. Accessed August 22, 2022.
3. Terkel, A. (2020). Trump said coronavirus would 'Miraculously' be gone by April. Well, it's April. HuffPost, April 1, 2020. https://www.huffpost.com/entry/trump-coronavirus-gone-april_n_5e7b6886c5b6b7d8095959c2. Accessed August 15, 2022.
4. Alsharif, M., & Sanchez, R. (2021). Bodies of Covid-19 victims are still stored in refrigerated trucks in NYC. CNN, May 7, 2021. https://edition.cnn.com/2021/05/07/us/new-york-coronavirus-victims-refrigerated-trucks/index.html. Accessed August 16, 2022.
5. Olito, F. (2020). I explored Manhattan during rush hour amid the coronavirus lockdown and found a completely empty and changed city. Insider, April 1, 2020. https://www.insider.com/photos-empty-manhattan-governmentcoronavirus-lockdown
6. Contagion. (2011). Movie. https://www.imdb.com/title/tt1598778/


7. Gullion, J. S. (2014). October birds: A novel about pandemic influenza, infection control, and first responders. Brill.
8. Friedman, T. L. (2020). If our masks could speak. New York Times, July 28, 2020.
9. Hauser, C. (2020). In fights over face masks, echoes of the American seatbelt wars. New York Times, October 15, 2020.
10. Isidore, C. (2020). Will passengers be willing to fly on the Boeing 737 Max? CNN, February 3, 2020.
11. Isidore, C. (2020). Don't expect to fly on a Boeing 737 Max anytime soon. CNN, July 8, 2020.
12. Whattles, J. (2021). Boeing CEO Dennis Muilenburg ousted after a disastrous year. CNN, December 23, 2021.
13. Topham, G. (2021). Boeing admits full responsibility for 737 Max plane crash in Ethiopia. The Guardian, November 1, 2021. https://www.theguardian.com/business/2021/nov/11/boeing-full-responsibility-737-max-plane-crash-ethiopiacompensation
14. Perrow, C. (2000). Normal accidents: Living with high-risk technologies. Princeton University Press.
15. Turner, B. A. (1978). Man-made disasters. Wykeham Publications.
16. Vaughan, D. (2016). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press.
17. Downer, J. (2010). Anatomy of a disaster: Why some accidents are unavoidable. Discussion paper no. 61, March, London School of Economics.
18. https://www.merriam-webster.com/dictionary/risk; https://dictionary.cambridge.org/dictionary/english/risk. Accessed August 14, 2022.
19. Douglas, M. (1992). Risk and blame: Essays in cultural theory. Routledge.
20. Douglas, M., & Wildavsky, A. (1982). Risk and culture: An essay on the selection of technological and environmental dangers. University of California Press.
21. Ford, R. T. (2020). Why Americans are so resistant to masks. Slate. https://slate.com/news-and-politics/2020/04/why-americans-resist-coronavirus-masks.html
22. Oltedal, S., Moen, B.-E., Klempe, H., & Rundmo, T. (2004). An evaluation of cultural theory: Can cultural adherence and social learning explain how people perceive and understand risk. Rotunde, Norwegian University of Science and Technology, Department of Psychology, 7491 Trondheim, Norway. https://documents.pub/document/cultural-theory-b-oltedal-bjrg-elin-moen-hroar-klempetorbjrn-rundmo-explaining.html?page=21. Accessed August 14, 2022.
23. Sjöberg, L. (2002). Limits of knowledge and the limited importance of trust. Wiley.
24. Siegrist, M. (2019). Trust and risk perception: A critical review of the literature. Risk Analysis, 41(3), 480–490.
25. Royal Society. (1992). Risk analysis, perception and management. https://royalsociety.org/topics-policy/publications/1992/risk/. Accessed August 14, 2022.
26. Beck, U. (1992). Risk society: Towards a new modernity (M. Ritter, Trans.). Sage Publications.
27. Checkland, P., & Scholes, J. (1990). Soft systems methodology in action. Wiley.


28. Blockley, D. (1980). The nature of structural design and safety. Ellis Horwood.
29. Blockley, D. (Ed.). (1992). Engineering safety. McGraw Hill.
30. Keating, D. (2022). Power. Energy Monitor. https://www.energymonitor.ai/sectors/power/will-the-ukraine-war-change-europes-thinking-on-nuclear. Accessed August 16, 2022.
31. Turner, B. A., & Pidgeon, N. F. (1997). Man-made disasters (2nd ed.). Butterworth-Heinemann.
32. Dekker, S., & Pruchnicki, S. (2014). Drifting into failure: Theorising the dynamics of disaster incubation. Theoretical Issues in Ergonomics Science, 15, 534–544.
33. Pidgeon, N. (2010). Systems thinking, culture of reliability and safety. Civil Engineering and Environmental Systems, 27(3), 211–217.
34. Popper, K. (2002). Conjectures and refutations. Routledge.
35. Price, M. R., & Williams, T. C. (2018). When doing wrong feels so right: Normalization of deviance. Journal of Patient Safety, 14(1), 1–2.
36. Sagan, S. D. (1993). The limits of safety: Organizations, accidents, and nuclear weapons. Princeton University Press.
37. Roberts, K. H. (1989). New challenges in organizational research: High reliability organizations. Organization & Environment, 3(2), 111–125.
38. Silbey, S. S. (2009). Taming Prometheus: Talk about safety and culture. Annual Review of Sociology, 35, 341–369.
39. Slovic, P. (2000). The perception of risk. Routledge.
40. Slovic, P. (1987). Perception of risk. Science, 236(4799), 280–285.
41. Gorman, S. (2013). How do we perceive risk? Paul Slovic's landmark analysis. The Pump Handle. http://www.thepumphandle.org/2013/01/16/howdo-we-perceive-risk-paul-slovics-landmark-analysis-2/#.YnSs84dBxaR. Accessed August 6, 2022.
42. Guldenmund, F. W. (2000). The nature of safety culture: A review of theory and research. Safety Science, 34.
43. Cox, S., & Flin, R. (1998). Safety culture: Philosopher's stone or man of straw? Work and Stress, 12(3), 189–201.
44. Pidgeon, N. (1991). Safety culture and risk management in organizations. Journal of Cross-Cultural Psychology, 22(1), 129–140.

6 To Err is Human—What Exactly is Human Error?

A 23-year-old woman in Italy was given six doses of the Pfizer-BioNTech COVID-19 vaccine due to a mistake by the health worker administering the vaccine. The woman, an intern in the psychology department of the Noa Hospital in Tuscany where she was administered the shot, was kept under observation for 24 hours and discharged. The hospital spokesperson said that the hospital had started an internal investigation into the issue, adding that it was "maybe just human error, definitively not on purpose" [1]. The interpretation of what constitutes human error, a mistake, a violation, or malafide intent is fascinating, with consequences for accidents and disasters. Both aviation and healthcare deal with human error quite extensively, and the pandemic and the Boeing 737 Max accidents illustrate this quite clearly. When the first Boeing 737 Max aircraft (Lion Air) crashed, an instinctive view was that it could be pilot error; after all, the 737 Max was one of the most sophisticated new aircraft inducted. William Langewiesche, in his article in the New York Times [2], implies that poor training of the Lion Air and Ethiopian pilots could be the cause, stopping just short of suggesting that, being from third world countries, they could have had poorer cognitive models of the aircraft. Langewiesche argues that this might not have happened in the USA. One argument was that many pilots in US airlines came from the military and would have had the "skills" to recover from the situation that both the Lion Air and Ethiopian Airlines pilots found themselves in. However, accidents are caused not just by pilot error, as the subsequent Boeing 737 Max investigations showed: Boeing's organisational problems, design flaws, and failures of regulatory oversight have instead come up.


In aviation, when there is not much evidence to point to obvious errors in design, maintenance, or acts of nature, pilot error is often inferred. Errors in aviation can occur at all stages: design, manufacturing, maintenance, or operation. Something that presents a cascading catastrophe of events due to human errors is the COVID pandemic. From instinctive denial in the beginning, to doomsday predictions, to believing that shutdowns and lockdowns were even more harmful, and then locking down immediately, there has been a cascade of erroneous decision making. From recommendations that masks were only for healthcare workers, to orders to always wear one; from believing viruses were transmitted by touching surfaces to now saying that much of the transmission was by aerosol; from describing some drugs as miracle cures to later finding them to be actually harmful, the basket of human errors is now overflowing. In many healthcare settings, errors are unintentional, truly what you would call honest mistakes; however, many errors arise because they are epistemic in nature, as Downer says (see Chap. 5). In the pandemic, the epistemic errors have been very high. In many of the accidents discussed in Chap. 2, investigations have identified errors, but these errors have been found to have deeper sources: a number of causal factors, not just a root cause. The phrase "one too many" emphasises the fact that it is a series of errors that leads to an accident. Some of these form incubation chains or align themselves, as in the Swiss cheese theory discussed in Chap. 5. In this chapter, the issues of intentional error (malice or avarice), unintentional error, ignorance, and negligence are examined. Errors due to fatigue, poor working environment, health conditions, inadequate training, bad design, and manufacturing are also issues that confront those who study human error.
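To make the "one too many" idea concrete, the short sketch below (in Python) treats each defence layer as an independent barrier with a small probability of failure, in the spirit of the Swiss cheese picture. The layer names and probabilities are hypothetical, chosen only to illustrate the arithmetic, and are not taken from any of the accidents discussed in this book.

```python
# A minimal, illustrative sketch of the Swiss cheese idea: an accident
# requires the "holes" in every defence layer to line up at once.
# All numbers below are hypothetical and assume independent layers.
from math import prod

layers = {
    "design review":       0.01,   # probability the layer fails on a given operation
    "operator training":   0.05,
    "checklist/procedure": 0.02,
    "automated warning":   0.001,
}

# With independent layers, the accident probability is the product of the holes
p_accident = prod(layers.values())
print(f"P(all layers fail together) = {p_accident:.1e}")    # 1.0e-08

# Normalisation of deviance erodes the barriers: if routine violations double
# each layer's failure probability, the combined risk rises 2**4 = 16-fold.
p_degraded = prod(2 * p for p in layers.values())
print(f"P(after each layer degrades) = {p_degraded:.1e}")   # 1.6e-07
```

The independence assumption is, of course, optimistic: common-cause failures, of exactly the kind Turner's incubation describes, make the real probability higher than such a product suggests.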

6.1 The Lexicon

When James Reason, a psychologist from Manchester, wrote a book titled Human Error (1990) [3], it appeared in the subject list of psychology and cognition. Therefore, the inference is that it is about human behaviour. However, human error is a phrase that means different things to different people, such as an air accident investigator, lawyer, or layman. In fact, the statement “To Err is Human” is a well-worn phrase and hidden in it is everything that can be said about human frailty.


The phrase "human error" is important in the context of accidents. Used quite commonly, it implies that the error was caused by somebody carrying out a task. Pilot error is regarded as human error and is used frequently to indicate that while the aircraft as a piece of equipment was functioning normally, the pilot had made an error. In a nuclear power plant, an accident could be regarded as being caused by 'operator error'. However, there is more to this. Machines can be faulty or malfunctioning, but human beings have designed them. Therefore, human error can occur across the "design to operation" spectrum. Before human error can be reduced or eliminated, it must be identified. The cause, mechanism, and effect of human errors can differ in each case. Human error has been defined as "a failure on the part of the human to perform a prescribed act (or the performance of a prohibited act) within specified limits of accuracy, sequence, or time, which could result in damage to equipment and property or disruption of scheduled operations" [4]. It is also regarded as an out-of-tolerance action or deviation from the norm, where the limits of acceptable performance are defined by the system. Synonyms for error, such as mistake, leading to phrases such as honest mistake and accidental error, are used widely, interchangeably, and colloquially, just like the terms danger and risk discussed earlier in the book. However, in accident investigations, especially by engineers, the definitions are actually about errors made by operators of equipment, such as pilots and nuclear power station operators. It is here that the ergonomics of operating equipment is investigated, workload, fatigue, etc., are pointed out, or the epistemic nature of the error is examined. Even with AI and robots, errors are truly human errors and can finally be attributed to humans. Therefore, the Frankenstein dilemma of AI is truly human error if things go wrong, as human beings would have invented and made the uncontrolled robot. Colloquially, the instinct is to equate error with a mistake. Heard of a genuine mistake? Well, it was just an error. Heard about an honest mistake? Well, it was just another error with no malafide intent. It is not easy to categorise the Shuttle disaster as an honest mistake, a genuine mistake, or an accidental error. On the other hand, was it wilful ignorance? As has been discussed, there was a level of awareness of the possibilities of things not going as planned, but was there wilful ignorance? Sociologically and colloquially, these are statements of everyday life, yet even in routine events there is the possibility that an honest mistake can cost lives. This is so in healthcare, where errors by doctors, nurses, pharmacists, and others can mean death for individual patients. Again, the repeated admission here is about human beings making unintentional errors. What happens if the mistake or error arose from dishonesty?


The law has positions on this, as we will see next. Even though the lexicon broadly implies that errors are mistakes, errors appear to be more than mistakes: they have causes, some arising from human failings, negligence, fatigue, poor training, and so on. The law of tort examines this from the legal viewpoint.

6.2 The Law

In law, the effort is to distinguish between an act of God, an honest mistake, and dishonesty with intent; by definition, a tort is a wrong that gives rise to a right to a remedy and is mostly part of civil law. There is an interesting article, as early as 1902, in the Harvard Law Review [5] on the term mistake in the law of torts. It notes that the terms accident and mistake have been confused at times. The article points out that, in describing tortious conduct, the use of the terms "intention", "negligence", and "accident" implies the type of tort; this is to ensure that not all torts are regarded as intentional. In an intentional tort, a wrongdoer foresees and intends an effect that causes injury to the other party. In a negligent tort, a prudent person should have foreseen that the effect was probable, foregone the action, and guarded against the consequences. In that sense, the legal concept of an accident involves neither intent nor negligence: the effect was neither intended nor so probable a result that failing to guard against it would amount to negligence. In a mistake, on the other hand, the effect is intended, but the belief is that the effect is not tortious. Thus, the legal view of an accident is different from what is colloquially expressed or sometimes defined in regulatory documents, as noted in Chap. 2 (see the ICAO definitions). Tort law can be classified into negligent torts, intentional torts, and strict liability torts. Negligent torts are about negligence, causing harm to others by a failure to be careful, while intentional torts are about damage or harm caused by intent and wilful misconduct. Strict liability torts, on the other hand, are not about culpability but about the act itself, such as producing defective products and the damages that result from them, irrespective of the care exercised or the intent. For these types of torts, there is indemnity for acts, which is a type of protection against financial liability. In general, the law attempts to clearly label intent and error, acknowledging that error, unintentional as it is, is a human failing and is addressed as such.


6.3 What Psychologists Say

Psychologists see human error in a humane way; they deal with human failings, very unlike the way engineers or scientists approach human error. James Reason, a psychologist, in his book on human error [3], believes mistakes can be subdivided into failures of expertise (a preestablished plan or solution is applied inappropriately) and failures through lack of expertise (an individual without an appropriate "off-the-shelf" routine is forced to work out a solution from first principles). Jens Rasmussen in 1982 [6] called human errors human malfunction and classified errors into slips, lapses, and mistakes, errors that can happen even to well-trained and experienced persons. More importantly, errors are defined as actions that fail to obtain the expected results or outcomes, irrespective of the phase, whether planning or action. Here, execution errors are classified as slips and lapses, while planning failures are mistakes. In HSG48, part of the UK's health and safety guidance [7], failures to achieve a desired outcome are classified into human errors, mistakes, and violations. The definition is that a human error is an action or decision that was not intended. A violation is a deliberate deviation from a rule or procedure. Mistakes are errors of judgement or decision making where the "intended actions are wrong", i.e. where we do the wrong thing believing it to be right, a definition attributed to James Reason. Errors can be slips or lapses, often moments of forgetting, even in the highly trained, or mistakes, mostly errors of judgement without intent; all of these are to be rectified by training. However, violations are different: clear-cut rules are not followed, deliberately, though most often the deviations are well intentioned and not malicious. These are driven by poor training, design, or maintenance. Violations can also stem from ignorance, and if rules are impossible to follow, there can be a nudge towards violation (as will be discussed in detail in Chap. 8). Psychologists and people who study human errors broadly agree that human failure is normal and predictable; the goal is that it be identified and managed. Failures are classified as active failures, which have an immediate consequence and are usually made by frontline people such as drivers, control room staff, or machine operators. In a situation where there is no room for error, these active failures have an immediate impact on health and safety [8]. Latent failures are made by people whose tasks are removed in time and space from operational activities, for example, designers, decision makers, and managers. Latent failures are typically failures in health and safety management systems (design, implementation, or monitoring), and they pose as great a potential danger to health and safety as active failures, if not greater.
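As a rough illustration of how such a classification might be applied in practice, the sketch below (in Python) encodes the categories described above as a small data structure that an incident-reporting tool could use for tagging events. The field names and the example record are hypothetical and are not drawn from HSG48 itself.

```python
# A sketch of the error taxonomy described above, written as a small data
# structure that an incident-reporting tool might use for tagging events.
# The categories follow the text; the example record is hypothetical.
from enum import Enum
from dataclasses import dataclass

class UnsafeAct(Enum):
    SLIP = "slip"            # execution failure: action not as planned
    LAPSE = "lapse"          # memory failure: a step is omitted or forgotten
    MISTAKE = "mistake"      # planning failure: the intended action was wrong
    VIOLATION = "violation"  # deliberate deviation from a rule or procedure

@dataclass
class IncidentRecord:
    description: str
    act: UnsafeAct
    latent: bool  # True if removed in time/space from operations (design, management)

record = IncidentRecord(
    description="Calibration step omitted during routine maintenance",
    act=UnsafeAct.LAPSE,
    latent=False,  # an active failure by frontline staff
)
print(f"{record.act.value} (latent={record.latent}): {record.description}")
```

Nothing in the taxonomy itself prevents errors; the value is in consistent tagging, so that patterns, for example a preponderance of skill-based slips, can point to the kind of training or redesign needed.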


Some tasks are called "skill-based" and are very vulnerable to errors if attention is diverted, even momentarily. Driving a car is a typical skill-based task for many people. Slips and lapses are errors that are made by even the most experienced, well-trained, and highly motivated people. They often result in omitted steps in repair, maintenance, calibration, or testing tasks. Slips are failures in carrying out the actions of a task; they are described as "actions-not-as-planned". Mistakes, by contrast, involve the mental processes that control how we plan, assess information, form intentions, and judge consequences. Two types of mistakes exist, rule-based and knowledge-based. Rule-based mistakes occur when our behaviour is based on remembered rules or familiar procedures [9], while misdiagnoses and miscalculations can result when we have to fall back on knowledge-based reasoning in unfamiliar situations. Relying on "experience" is actually relying on the knowledge-based reasoning of the 'expert'. Violations are classified into routine, situational, and exceptional [9]. A routine violation is a breach of a rule or procedure that has become the normal way of working; a situational violation is driven by the circumstances of a given situation; and an exceptional violation occurs in unforeseen and unanticipated circumstances. Looking at the body of work across sectors, it was found that the majority of errors were skill-based, about a quarter were rule-based, and a small number were knowledge-based, which clearly points to the level and type of training that is needed [9]. Reason argued that accidents can be traced to failure domains: organisational influences, supervision, preconditions, and specific acts. While specific acts are truly human error, the others are errors as well; they would have been latent and caused by faulty design of the organisational structure or by management. Like the sociologists, Reason acknowledges that human error is both universal and inevitable. More importantly, the argument is that it is not a moral issue and that human failings can be reduced by training but cannot be eliminated. There is a strong belief that both success and failure have the same roots and that failures are indeed stepping stones to success. The view is that there are human limitations, and avoiding those error traps depends on changing the conditions in which one works. Consistently, it is argued that all human beings are capable of making mistakes, that no one is immune, and that many are in positions or occupations where such errors are possible. The most important view is that errors are consequences, not causes; in other words, there are causes for an error to occur. Many errors are recurrent, rooted in the human condition, and they need to be targeted. Psychologists think errors can occur all along the hierarchy of an organisation, and that managing errors involves not just psychology but process, organisational restructuring, and technical measures.


Just as sociologists believe that accidents are inevitable, psychologists argue that human beings are fallible and hence will make errors as well as mistakes, while engineers attempt to error-proof systems with complexity and automation. As has been seen in Chaps. 2 and 4, this error proofing by automation can introduce a new type of error: where a scenario is unforeseen by the automated system, the error is transferred from the operator to the system designer, and if the system is unable to anticipate all scenarios, it can be classified under foresight failures. What all of them do not explicitly address is faulty mental models, or cases where risk is taken for gain in full knowledge of the danger involved. When lawsuits have been generated, claims made, and errors identified with blame clearly pointed out, the law looks for clear malafide intent where organisational behaviour is involved (e.g. the Shuttle disaster, the Boeing 737 Max case, or other aviation accidents). Irrespective of any of this, when human error occurs in a particular psychological state, the logical question is whether this is an intentional error, or whether a "not so normal psychological state" qualifies as intentional error. There are a few aviation accidents in which pilot behaviour and "psychological state" have been at issue. EgyptAir Flight 990 was a flight from Los Angeles to Cairo with a stop at JFK Airport, New York. On October 31, 1999, the Boeing 767 crashed into the Atlantic Ocean, killing all 217 passengers and crew on board. It was investigated by the Egyptian Civil Aviation Authority and the National Transportation Safety Board (NTSB) under International Civil Aviation Organization (ICAO) rules. After a few weeks, the NTSB wanted the investigation to be overseen by the FBI, as the data suggested that a criminal act had taken place and that the crash was intentional rather than accidental. The NTSB said that the accident was due to the first officer's deliberate flight control actions; the Egyptians argued that it was a mechanical failure of the aircraft's elevator control system [10–14]. A SilkAir aircraft, Flight 185 between Indonesia and Singapore, crashed in southern Sumatra on 19 December 1997, killing all 97 passengers and seven crew on board. Here, the cause of the crash was independently investigated by the United States National Transportation Safety Board (NTSB) and the Indonesian National Transportation Safety Committee (NTSC). The NTSB investigation concluded that the crash was the result of deliberate flight control inputs, possibly by the captain; the NTSC Chairman said the committee was unable to determine the cause of the crash. Additionally, complicating this were earlier issues with the B737 rudder, which had deployed in an uncommanded fashion on some occasions [15, 16].


A routine Germanwings flight (9525) from Barcelona to Düsseldorf Airport ended in disaster on 24 March 2015. The aircraft (an Airbus A320) crashed into the French Alps, killing 144 passengers and six crew members. It was later determined that the crash was deliberate, not an accident in the strictest sense of the word, and was caused by the co-pilot, Andreas Lubitz. Lubitz intentionally put the aircraft into a descent when the captain had left the cockpit to use the toilet. It was found that Lubitz had been suffering from depression and had been treated for suicidal tendencies [17, 18]. The most well-known and unsolved mystery is the disappearance of MH370 between Kuala Lumpur and Beijing on 8 March 2014. The aircraft was lost from ATC radar screens minutes after its last contact with controllers but was tracked by military radar for another hour, deviating westwards from its planned flight path and crossing the Malay Peninsula and the Andaman Sea. All 227 passengers and 12 crew aboard were presumed dead. There have been allegations that the captain of the aircraft was not psychologically in good shape, but these allegations have never been proven [19–21]. In all four crashes, there were insinuations regarding the pilots and their mental states. In some cases nothing has been proven beyond doubt, but the pilots' mental states have been part of the investigations.

6.4 Engineering and Human Error

The term human error is well accepted and used widely in engineering, especially in failure and accident investigation, and it literally encompasses all aspects of any technology: design, development, manufacture, and operation. Hidden in this rather simplistic term, however, are many reasons for the error: negligence at various levels, ignorance, poor processes, inability to envision a failure scenario, and so on. Yet these are still human errors, caused by humans. One important point is that in engineering the term honest mistake is generally avoided. The underlying premise is that human limitations must be taken into account, and hence automation should be introduced, or a defence in depth strategy will take care of errors and mistakes to the extent possible, building robustness and resilience. It is argued that a majority of accidents can be designated as related to human error, much of it during operation, with the rest attributable to equipment failures. However, this is not that simple: even that majority can be reclassified into design flaws, maintenance errors, and organisational issues that could not be spotted early on (as in the Boeing 737 Max), while others can be directly attributed to operators using equipment.


Therefore, at each stage of any development, operation, or maintenance, errors can occur, and identification, classification, and training can reduce human error. The Fukushima accident, for example, exposed organisational weaknesses, which increased the chances of errors. More importantly, even errors that are classified as operational will have a basis in design, which clearly has not anticipated the possibilities of operational error. In testing, for example, errors tend to be classified as systematic and random. Some of the systematic errors are in the realm of operational, individual, or personal errors: reading and interpreting a test, or actually carrying it out, outside of the protocol. Interestingly, there is also a classification in terms of random errors, where the error is called accidental, random, or indeterminate. In reality, nothing is random, and on investigation, if science is able to point it out, reasons can be found. The way a human being performs a task varies all the time, and these variations in performance can cause an error if they exceed permissible levels. In aviation, nuclear energy, and space, three of the high-complexity, high-technology areas, there is awareness that human error is always possible and that everything must be done to eliminate it. The obvious action is to increase training, and simulators are extensively used; scenarios of equipment failures, natural hazards, etc., are simulated. There is also automation, the idea being that the highly complex chores that a pilot, an astronaut, or even an operator of a nuclear power plant has to perform should be automated, especially the routine ones. This is specifically done to reduce human error. The real issue that regulation addresses is to ensure that honest errors driven by lack of training for a particular scenario, fatigue, physical inability at a particular time, etc., are minimised. Where there has been negligence during design, manufacturing, etc., it cannot truly be treated simply as an honest error. A 2001 US Department of Energy (DOE) Human Performance Handbook notes: "The aviation industry, medical industry, commercial nuclear power industry, U.S. Navy, DOE and its contractors, and other high-risk, technologically complex organisations have adopted human performance principles, concepts, and practices to consciously reduce human error and bolster controls to reduce accidents and events" [22]. This is because there is recognition in high-technology industries that a majority of operational errors are human errors.


Engineers attempt to quantify risk based on failures at various levels in the system. A fault tree analysis (FTA) in reliability engineering provides an indication of how subsystem failures grow into a catastrophic failure, as discussed in Chap. 5. Engineering depends on science and statistics to reduce failures. Physics provides the reasoning for building the models, the technology, and the machine, while monitoring the use of the technology in the real world provides the data, which is often used to predict failures. The failure of such a system to operate as predicted is in a sense human error, but one rooted in design or manufacturing error, which is human error too, though not usually seen as such. From the work of John Downer, discussed in detail in Chap. 5, it can be interpreted that if a bridge collapses or an aircraft breaks apart because it experiences loads and forces greater than what was anticipated, then there is an error in design. These errors could lie in the assumptions made, the models created, or the tests conducted. The engineering worldview is that the work is governed by process, rules, and objectivity, which produce reproducible facts that are centred on measurements rather than opinions and have a scientific basis. However, Petroski [23], who has written on engineering failures, says that failures have persisted because the design process remains fundamentally a task carried out by a human mind in a human context, a reinforcement of the view of the sociologists. Robert Whittingham [24] discusses failures and their role as designs evolve, and it is clear that failures have led to learning and hence to newer and better designs. Clearly, one important approach in science and engineering is to use failures for learning. In Nick Pidgeon and Barry Turner's work [25] (see Chap. 4), there is analysis of how human errors have led to failures. If the design is based on sound physics and engineering, then human error during design should be discounted. However, is it that way? Have all scenarios of failure been envisioned? In the Fukushima disaster, there was a combination of human error at the design and operator levels, coupled with inadequate training. The engineering worldview of mistakes, errors, and failures is quantitative and is mapped to the engineering understanding of risk. It refrains from issues related to negligence, avarice, or malafide intent. It also refrains from issues of behaviour, fatigue, etc., which are generally associated with psychology and sociology. In fact, all regulatory positions incorporate the engineering views on risk and failure. But human error can be the cause of many accidents, some driven by the mental states of those operating the equipment and some deliberate.
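As a minimal illustration of how a fault tree combines subsystem failures into a top-event probability, the sketch below (in Python) assumes independent basic events and a simple two-gate tree. The component names and probabilities are hypothetical, not drawn from any real analysis.

```python
# A minimal fault-tree sketch, assuming independent basic events.
# The hypothetical tree: loss of cooling occurs if both redundant pumps
# fail together (AND gate) or the controller fails (OR gate).
from math import prod

def and_gate(*p):
    # All inputs must fail together
    return prod(p)

def or_gate(*p):
    # At least one input fails: 1 - P(no input fails)
    return 1 - prod(1 - x for x in p)

# Basic-event failure probabilities (illustrative numbers only)
p_pump_a  = 1e-3
p_pump_b  = 1e-3
p_control = 1e-5

p_top = or_gate(and_gate(p_pump_a, p_pump_b), p_control)
print(f"Top event probability = {p_top:.2e}")   # about 1.1e-05
```

Larger trees are built by composing the same gates; the arithmetic also shows why adding redundancy (AND gates) drives the top-event probability down, while each additional single point of failure feeding an OR gate drives it up.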


6.5 The Honest Mistake in Healthcare

RaDonda Vaught, a Tennessee nurse, faced sentencing for an error she made in 2017 [26]. She had given a medication that paralysed the patient instead of a simple sedative, and the patient had died. In her defence, it was said that Ms. Vaught had made an honest mistake, and faults were found in the dispensing process. The prosecution, however, argued that many obvious signs were overlooked when she took the drug from the dispensing system and that she did not monitor the patient after the injection. As the COVID-19 pandemic began, the real issues plaguing the healthcare sector started to become apparent. While costs and unequal access to healthcare were always issues, it was decision making during a disaster that came to the fore. In 2000, there was a report titled To Err is Human from the US National Academy, effectively acknowledging that human error is always a possibility and that there is only so much one can do to eliminate it [27]. In a research article, "Dealing honestly with an honest mistake", published in the Journal of Vascular Surgery, the authors cite the American Medical Association (AMA) [28] in acknowledging that medical errors are part of healthcare. As in aviation, with its communication between pilots in the cockpit and with air traffic control, the emphasis is on training and communication. An addition here is disclosure, especially to the patient, on matters such as the risks of a surgery, and the acknowledgement that there could be an "honest error", which also requires disclosure. Under anaesthesia, over 75% of intraoperative cardiac arrests appear to have been caused by preventable anaesthetic errors [29], while inadequate patient observation was estimated to contribute to one-third of deaths in another study. Much of what happens in the aviation industry has been looked to as an example to follow. Atul Gawande, in his book The Checklist Manifesto [30], looks at the aviation industry as a basis for healthcare to follow. In HRO for healthcare, the reduction of errors is advocated through protocols and systems such as those in aviation. Engineering and psychology view intent and negligence much as healthcare does, and in both there is a clear belief that improvement is possible. Accident investigation is about identifying the errors, their nature, their basis, and their incubation, leading to possible corrections, but it makes no statement about honest mistakes or intentional actions. In Reason's work as well, there is a belief that errors are inevitable but can be contained by training and cultural change. In healthcare, cultural and organisational issues have a bearing on human error. Strauch [31] examines cultural factors in complex systems and discusses the relationship of cultural factors to operator performance.


Cultural antecedents to error were found to be the acceptance of authority and identification with the group. From his viewpoint, cultural factors can affect safety and become precursors to error, or reduce its possibility as well. Importantly, cultures that are risk averse find it difficult to deal with ambiguous situations. He argues that operators should be sufficiently trained and objective to recognise potential conflicts between organisational goals and safety, and should also be sufficiently independent to act in the best interests of safety.

6.6 Regulation and Errors

How do regulators deal with human error? From the author's viewpoint, it appears to be much like the way Mary Douglas has described scientists and engineers treating risk. Everything is sanitised of human behaviour and the sociology of interactions; regulators concentrate on technical aspects and on the use of technology that can prevent human error, apart from training. What can also be observed is how accident investigators react to errors, which consequently forms the basis of the further evolution of regulation. Recommendations are purely technical in nature, including ways to minimise the errors and hence the risk. As the FAA would see it, the job of a regulator is to minimise risk. Layers of technology are added to prevent an error. As we have seen in the previous chapters on sociology, this has received criticism from sociologists such as Perrow [32]: the defence in depth strategy has been questioned for adding complexity and hence being a reason for accidents. There is some evidence that complexity does lead to errors if all scenarios are not foreseen, and unanticipated ones can cause accidents. This also applies to automation, which in itself, if not designed appropriately, could be a cause of accidents. As previous chapters show, errors have a past. Regulators see antecedents for errors, which are then corrected after accident investigation. This is even more so with complex systems, technical and sometimes organisational. As discussed, the antecedents could lie in design, manufacture, or operation, as well as in errors in regulation itself. This was seen in the Boeing 737 Max accident, where there were errors by the aircraft maker as well as by the regulator. What regulators enforce, and what they do not enforce, can have a bearing on accidents. The oversight of a regulator is limited to the expertise it has and the expert advice it receives. Technology changes quickly, and regulators can make errors if they are unable to evaluate it correctly, again illustrated by the Boeing 737 Max accident.


This was seen even more vividly in the pandemic in the responses of the WHO and many national healthcare regulators, as the science itself was still evolving. While regulators do not make any distinction between an honest error, wilful ignorance, and malafide intent, which is left to the law, they do report technical errors quite explicitly. Since many errors are due to human behaviour, social compulsions, and interactions, a framework to handle these errors needs to be embedded in regulation. While human limitations in operating complex systems, e.g. reaction time, are well known, errors due to poor understanding, avarice, or actions for gain are not generally addressed. These are discussed later in the book. The HRO concept, introduced in Chap. 4, has the explicit goal of reducing errors and consequently accidents. Many of the recommendations for the HRO are clearly sociological and based on human behaviour. In healthcare, for instance, HRO has been applied through training, yet there is no explicit embedding of HRO concepts into the regulatory mechanism, nor of how these principles ensure the avoidance or minimisation of errors. These issues are addressed in the final chapter.

References

1. Guy, J., Borghese, L., & Isaac, L. (2021). Italian woman accidentally given six shots of Covid-19 vaccine. CNN, May 11, 2021.
2. Langewiesche, W. (2019). What really brought down the Boeing 737 Max? New York Times, September 18, 2019.
3. Reason, J. (1990). Human error. Cambridge University Press.
4. https://www.merriam-webster.com/dictionary/human.error. Accessed August 16, 2022.
5. Whittier, C. B. (1902). Mistake in the law of torts. Harvard Law Review, 15(5), 335–352.
6. Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4(2–4), 311–333.
7. HSG48. (1999). Reducing error and influencing behaviour. https://www.hse.gov.uk/pubns/books/hsg48.htm. Accessed August 10, 2022.
8. hseblog.com. (2022). Types of human errors and how to avoid them. https://www.hseblog.com/types-of-human-errors-in-health-and-safety/. Accessed August 10, 2022.
9. Leadership and worker involvement toolkit: Understanding human failure (hse.gov.uk). https://www.hse.gov.uk/construction/lwit/assets/downloads/human-failure.pdf. Accessed August 10, 2022.


10. Langewiesche, W. (2001). The crash of EgyptAir 990. The Atlantic, November 1, 2001.
11. NTSB. (2000). Operational factors group Chairman's factual report. National Transportation Safety Board, January 18, 2000.
12. Egyptian Civil Aviation Authority. (2001). Report of investigation of accident: EgyptAir 990. Egyptian Civil Aviation Authority, June 22.
13. Lathem, N. (1999). FBI profilers dig into co-pilot's past. The New York Post, November 18.
14. Sinha, S. (2015). A history of crashes caused by pilots' intentional acts. The New York Times, March 26.
15. National Transportation Safety Committee. (2000). Aviation accidents, aircraft accident report Silkair Flight MI 185 Boeing B737-300 9V-TRF, Musi River, Palembang, Indonesia, 19th December 1997. Department of Communications, Republic of Indonesia.
16. Keys, L. (1998). Suicide is possible cause of jet crash, officials say pilot had history of troublesome behaviour. Associated Press, March 11, 1998.
17. Bureau of Enquiry and Analysis for Civil Aviation Safety. (2016). Final investigation report: accident to the Airbus A320-211, registered D-AIPX and operated by Germanwings, flight GWI18G, on 03/24/15 at Prads-Haute-Bléone, 13 March.
18. New York Times. (2015). What happened on the Germanwings flight. The New York Times, March 25, 2015.
19. Ryall, J., & Staff Writers. (2014). Malaysian police investigation names MH370 pilot 'prime suspect'. News.com.au, News Corp Australia, June 23, 2014.
20. Sheridan, M. (2014). MH370 pilot 'chief suspect'. The Sunday Times, June 22, 2014.
21. Jamieson, A. (2018). MH370 safety report 'unable to determine the real cause' for disappearance. ABC News, July 30, 2018.
22. DOE. (2001). Human factors/Ergonomics handbook for the design for ease of maintenance. DOE-HDBK-1140-2001, February.
23. Petroski, H. (1992). To engineer is human: The role of failure in successful design. Vintage.
24. Whittingham, R. B. (2004). Design errors, the blame machine: Why human error causes accidents. Elsevier Butterworth-Heinemann.
25. Turner, B., & Pidgeon. (1997). Man-made disasters. Butterworth-Heinemann.
26. Lamas, D. L. (2022). The cruel lesson of a single medical mistake. New York Times, April 15, 2022.
27. Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (2000). To err is human: Building a safer health system. Institute of Medicine (US) Committee on Quality of Health Care in America. National Academies Press (US).
28. Liang, N. L., Herring, M. E., & Bush, R. L. (2010). Dealing honestly with an honest mistake. Journal of Vascular Surgery, 51(2), 494–495.
29. Green, R. (1999). The psychology of human error. European Journal of Anaesthesiology, 16(3), 148–155.


30. Gawande, A. (2014). The Checklist Manifesto: How to get things right. Penguin Books.
31. Strauch, B. (2017). Investigating human error: Incidents, accidents, and complex systems. CRC Press.
32. Perrow, C. (2000). Normal accident theory: Living with high risk technologies. Princeton University Press.

7 What I Do not Know Will Hurt Me—Mental Models and Risk Perception

Robert Meyer, a professor at Wharton, and his team performed a survey of people who were expected to be hit by Hurricane Isaac and Hurricane Sandy in the United States. It was found that most people failed to adequately understand the threats from disasters and that their sense of risk and poor mental models led to poor planning. For many hazards, mental models of how things will unfold can be inadequate [1]. Benedict Carey states that it is complacency, not panic, that is the real danger; only a small minority of people panic under threat, while far more do not take the threat seriously enough [2]. When the COVID pandemic started, there was much speculation about the virus, whether it was human designed or whether it came from bats or some other animals. The COVID-19 pandemic has produced a plethora of mental models, each individual creating or believing in their favourite models. Some of these can pass off as scientific, some others are intuitive, while some come across as downright speculative. All, however, are mental models. On the one hand, these generated hope in the public, while on the other hand, they invited disapproval from scientists. The New York Times (April 26th, 2020) discusses President Trump's mental models about the COVID-19 virus [3–5]. During the pandemic, there have been a number of mental models, many of them personal views. Some of these were found to have far-reaching effects, as reported by the media, and have gone on to cause a number of deaths [6, 7].


Trump's statement is reflective of a mental model of a cure. There has been much satire about it, but the truth is that until other cures are proven by accepted scientific methods, they remain personal mental models. Like many others, they could cause harm. It must be remembered, however, that the last word on the use of many such drugs for COVID-19 has yet to be written. On the other hand, in September 2019, at the height of the Boeing 737 Max controversy, William Langewiesche wrote an article in the New York Times [8] stating that airmanship meant a mental (visceral, in Langewiesche's words) understanding of navigation, weather information, mental maps of traffic, radio communications, and truly the flight physics of an aeroplane. To him, the best pilots, especially fighter pilots, were given intense training and flew to the limits of the envelope, effectively carrying a mental map of the aircraft's capabilities. This is not something expected from airline pilots, who never fly alone and cater to passengers. Again, the emphasis is on the lack of adequate mental models. Langewiesche sees airmanship as being about mental maps, a colloquial equivalent to the term mental models as used in this book. For the Boeing 737 accident, Langewiesche argues that rookie or even commercial pilots who do not undergo the training that military pilots receive will not have a mental map that enables them to deal with a crisis of the kind that the pilots of the Lion Air aircraft encountered. Langewiesche's implication that only a certain set of pilots, trained to do more than routine flying, can handle abnormal events came in for considerable criticism from Captain Sullenberger [9], who had ditched an aircraft that lost both engines into the Hudson River. Captain Sullenberger felt that there was no way the pilots of the Boeing 737 Max could have responded, given the flaws the aircraft had. In other words, to expect civil aircraft pilots to counter such an unforeseen scenario was wrong. There were deep problems with the Boeing 737 Max because the designers believed that the modifications did not justify new types of training for exigencies, and the regulator accepted this. It is possible that a new generation of designers did not have the models or the mental maps that would have alerted them to the changes and modifications and did not follow the process to build new ones. Therefore, how much one should know is important, as people such as firefighters and other rescue personnel develop an acute sense of danger based on experience [10, 11]. The issue in many cases is how much training should be given by an organisation. In the Boeing 737 Max case, airline pilots were supposed to do routine flying with the belief that automation would handle the rest, especially in crisis situations. Based on the evidence thus far, in the


Boeing 737 Max accident and the reports discussed in Chaps. 2 and 4 on complexity, the use of automation in the form of the MCAS system caused the crisis. Many accidents can also be attributed to faulty mental models of how complex technology works. Mental models are formed in the minds of all human beings; it is just that those who are trained will have refined their models, while laypeople could continue to hold faulty ones. Specifically, where complex physics and science play a role, the cognitive models of experts reflect a higher sense of reality compared to the inaccurate models of the layperson.

7.1 A Mental Model

As human beings, we build mental models of anything and everything. These models are crucial in our perception of danger and risk. They are built out of individual experiences and social interactions and are truly concepts drawn from them. This applies to pandemics, air crashes, or even managing natural disasters. Various definitions exist for mental models, sometimes called mental maps (not to be confused with a mental map of a geographical location). Broadly, a mental model is a thought process about how things work and about the behaviour of people and society at large. Craik [12] argues that mental models are constructs and represent a smaller version of reality. These models have also been referred to as abstractions. Mental models can be called mental representations or conceptual models. Forrester [13] defined a mental model as a selection of concepts and the relationships between them that can be used to represent the real system. There are, of course, cases where intuitive decisions are made by human beings, and these are hard to articulate. It may not in fact be possible to decompose these and explain the model. Mental models are also categorised in terms of representations and simulations, learning, and skill development. In the mental model, concepts are established, associations created, and a causal relationship structure developed. In training or simulation, it is repetition that matters, providing reinforcement of the learning. However, if there are flaws in these processes, then one obtains an imperfect mental model. In computer science, AI, and knowledge engineering, such a model can be referred to as a conceptual model [14], which can be an integration of a number of concepts. It is an articulated description of a process of thought,


which has connections to other objects, relationships between parts, and an intuitive perception about the person's actions. In general, it can be observed that experts and the experienced have a detailed knowledge structure, open to refinement, can perceive things at an abstract level, and spot problems in patterns, relationships, and connections so that they can 'foresee' scenarios. As a result, experts are much faster than novices in extrapolating from the information they have. Many mental models are constructed out of individual experiences, conversations, public events, reading, and learning. They could also be based on a belief system, or on facts as in scientific thinking. They could also be an explosive combination of both, and often this is the case. Many mental models of risk exist in this domain, including a mixture of belief, hearsay, and real data. Mental models are sometimes classified as shallow knowledge and deep knowledge from a knowledge representation viewpoint. There can be causal knowledge, but without deeper understanding. Shallow knowledge is the "hand-me-down" kind, often qualitative with no real measurement. Much of intuition is in this fold. Qualitative knowledge is based on qualitative statements, for example, how fast the car is going, how risky things are, etc., and involves no real measurement. Qualitative knowledge can also be a belief—belief is inherited knowledge, word of mouth, and something colloquial. Interestingly, qualitative physics, a research area in AI, is intuitive physics: a language-driven, descriptive explanation of physical behaviour [15]. There is also the concept of naïve physics or folk physics [16], which is about how people view phenomena and behaviour. Set in colloquial language and vocabulary, it is about how untrained people understand behaviour. Introducing counterintuitive behaviour is tough in this framework. Counterintuitive behaviour knowledge is a product of science and deep knowledge. One could argue that mental models are primitive constructs and, when decomposed, can provide clarity regarding the reasons why such models were constructed. In fact, this is relevant when one looks at risk and truly at how the sense of danger and risk evolved. This, however, is difficult, especially when the model is constructed out of anecdotal information or folk physics or information at the same level. Folk physics could be a set of heavily simplified statements or even misinterpretations. They may be incapable of generalisation and reliable prediction and could be contradicted by more rigorous knowledge. However, they can serve as causal statements of behaviour, providing an initial understanding, especially in cases where they can be verified by facts.


Folk physics represents the intuitive understanding of physical behaviour. Work by psychologists shows how humans and primates have notions regarding aspects such as weight, centre of gravity, and stability. One could argue that all mental models are evolving, which applies even to science-driven models, as seen in pandemic science. From Karl Popper's book Conjectures and Refutations, it can be inferred that science is an iterative process of analysis and falsification [17]. Therefore, in many ways, mental models can be termed conjectures. Sometimes, these are refuted and cannot be proven. Sometimes, with the available evidence, they can be considered valid until they are disproven by a new theory or when a new observation comes to light. Some mental models are factual, and some are imaginary. Some can be articulated, some cannot, some are intuitive, and some are counterintuitive. In summary, mental models are constructed by human beings for all things, be it understanding virus transmission or flying a new plane. They can be picture perfect or downright faulty, or somewhere in between.
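
The idea, earlier in this section, of a conceptual model as concepts, associations, and causal relationships can be illustrated with a minimal sketch; the concepts and links below are hypothetical and are not a formalism proposed in this book. A layperson's model might contain only such qualitative causal links, whereas an expert's model would attach measurements and probabilities to them.

# A mental/conceptual model sketched in Python as concepts (nodes) and causal links (edges),
# with a naive traversal that "foresees" downstream consequences of an event.
causal_links = {
    "speeding": ["late braking"],
    "late braking": ["collision"],
    "collision": ["injury"],
}

def foresee(event, links):
    # Collect every consequence reachable from the event by following causal links.
    seen, frontier = set(), [event]
    while frontier:
        current = frontier.pop()
        for consequence in links.get(current, []):
            if consequence not in seen:
                seen.add(consequence)
                frontier.append(consequence)
    return seen

print(foresee("speeding", causal_links))  # {'late braking', 'collision', 'injury'}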

7.2 Evolution of Mental Models

Mental models are truly evolutionary. Clearly, the mental models we have as children get refined with experience. That appears to be the case with training as well, even when it happens through repeated exposure to simulation and testing with experiments. Many mental models are descriptive constructs that are qualitative [18]. This is the difference between the perception of scientists and engineers on one hand and of the public on the other. As models evolve, the qualitative descriptors are mapped to quantitative measurements for scientists and engineers, while in the minds of the lay public, they could remain qualitative. Here too there are exceptions, for example, our assimilation of speed limits and diabetes measurements. These are quantitative and are embedded in the layman's mental models. They serve as examples of mental models that have evolved enough to be widely accepted and have a great deal of effect on risk perception. Mental models are not permanent. There is a dynamic change in input information leading to evolution, learning, and model refinement or modification. Model memory is not static either. These models also span a whole range, from those that engineers have about what they design to those that sociologists have about the inevitability of failure due to complexity. Much of the evolution and building of cognitive models occurs via complex experience and interaction with residual mental models.


The real evolution of mental models is seen in the case of the gradual but clear transition to counterintuitive models. For example, in the case of car crashworthiness, the intuitive model is a strong and heavy car, but as the physics tells you, one needs to have zones in the car that crumple and absorb energy such that the accelerations are not transferred to occupants. This is now known to protect the occupants, but the strong heavy car model had remained the intuitive model, until simulations were performed, crash tests were shown, and measurements were displayed [19]. Then, the mental models changed, the physics of energy absorption was accepted and the mental model became more refined over time. The development of mental models has a cultural side to it as well, and the evolution of the models is heavily dependent on the culture we live in. As cultures become more technologically advanced, many of the technology models appear to be intuitive and wired in. These generational mental models of any system are linked to the perception of risk. Constant observation of certain occurrences leads to a build-up of a certain type of mental model, leading to changes in risk perception. Much of the layperson’s models are driven by observation, anecdotal information, and hand me down stuff over generations, some of which is folklore and become rule based in the mind. Much is not causal, probably driven by belief systems and, in many ways, tribal. This can happen even in industrialised societies, as evident from the power of social media during the current pandemic. A mental model is similar to a language and is not static, even in isolated societies. The kind of hand me down models, which are regarded as established practice, can happen in technology, design, as well as in other professions that use tacit knowledge as a basis. Often, the design of an engineering part can be passed on over generations of engineers, without truly being questioned. Therefore, what may have been useful and optimal many years ago for requirements of that period could still be continued for long periods without being questioned until change occurs. Organisations are known for a certain risk culture that evolves with the organisation; for example, some organisations remain successful because of a set of governance mental models that has internalised risk perception. In technology companies, employees and management are taught to pursue a certain culture of openness, discussion, knowledge capture, and work ethics. These models define the success of the organisation and are expected to be embedded into the thought processes of the employees. What exactly are these models? These models are truly distributed models in the human resources of the organisation. In that sense, it represents domain knowledge, process knowledge, and work culture. It also represents a certain set of core


values as the organisation goes about its business. While it can be viewed more as culture or work practice, it is embedded into the employee’s individual mental models and is reflected in practice. In some organisations, knowledge capture and access are such that there is ready recall of past experiences. In many others, tacit knowledge is systematically passed on. These are truly part of the organisation’s models, especially with respect to safety and risk.
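
The crashworthiness example earlier in this section can be made concrete with a back-of-the-envelope calculation; the numbers below are illustrative, not crash-test data. For a car stopping from speed v over a crush distance s, the average deceleration is v*v/(2*s), so a longer crumple zone spreads the same kinetic energy over a larger distance and the occupants experience a far lower g-load, which is the counterintuitive point that refined the intuitive 'strong and heavy car' model.

# A minimal sketch in Python with hypothetical speeds and crush distances.
G = 9.81  # gravitational acceleration in m/s^2

def average_deceleration_g(speed_kmh, crush_distance_m):
    # Average deceleration (in g) when stopping from speed_kmh over crush_distance_m.
    v = speed_kmh / 3.6
    return (v ** 2) / (2 * crush_distance_m) / G

print(round(average_deceleration_g(50, 0.05), 1))  # near-rigid structure: ~196.6 g
print(round(average_deceleration_g(50, 0.60), 1))  # with a crumple zone:  ~16.4 g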

7.3 Mental Models of Technology and Automation

Technology is based on science; in fact, much of it is based on scientific discoveries. As technology becomes more complex, e.g. driverless cars, robots, AI, and automated systems for control, the basic question is: what is this for? Principally, it is done based on the perception that if the system has an overly complex behaviour, human beings cannot operate it safely. The other consideration in favour of automation pertains to operating costs—human beings trained to operate the technology are more expensive than the automation. There is efficiency as well: automated systems could be less prone to errors and work faster. The real advantage of automation is that computers take over decision making from humans because it is felt that human beings may not be able to react as fast or handle too many tasks at the same time. Herein lies the catch: while automation may reduce operating errors, other errors creep in. During design and testing, it is entirely possible that scenarios that could lead to accidents go unanticipated and hence untested. There are examples of such accidents with aircraft, such as the Boeing 737 Max, an A320 crash in India in 1989, and the interaction between TCAS and humans in the Swiss mid-air collision discussed in Chaps. 2 and 4. Complexity is a worldview, a matter of perception, as discussed in Chap. 4. Nonscientists hold a firm "belief" that one needs to keep things simple. The engineer's view is that in complex systems, one needs to ensure that a particular subsystem is understood properly by the ones who design, build, and operate it. Evidence over the years suggests that complexity needs to be structured to be able to probe and comprehend system behaviour and enhance system reliability. These examples illustrate how subsystem failures, or cascading failures that have not been envisioned, could lead to accidents. Thus, an organisational worldview and an integrated model are important, especially in organisations that build and operate complex technology. The issue is how sociologists view it. Perrow and others see this as a potential cause of accidents [20]. Engineers expect that exhaustive testing


and accounting for various failure scenarios will "minimise" the risk. Downer, in his work [21] on epistemic accidents (see Chap. 4), without explicitly terming epistemic models as mental models, sees faulty, imperfect mental models as responsible for accidents. Apart from errors, a number of scenarios that could occur would not have been foreseen by the designers of any system, and sometimes accident investigation reveals the science to be completely counterintuitive. How much of a complex technology should an individual know? A hierarchical model applies, where expertise exists in a distributed structure: in an aircraft, for example, specialists of various systems know deeply about their disciplines but may not have more than a rudimentary knowledge of other systems. However, an aircraft is designed through collaboration and overseen through layers of management. In such cases of complex technology and its development, management and oversight are what enable complex organisations to function efficiently, especially where the knowledge is held in a distributed structure.

7.4 Faulty Mental Models and Accidents

In Turner's [22] work on man-made disasters, much has been said about how human beings (and of course their mental models) have failed to anticipate accidents. The accident burden due to this is just as high as that due to negligence. In Chap. 2, a typical set of accidents is described. In both Space Shuttle accidents, the mental models for the design were flawed, especially among programme managers. In the Challenger disaster, the O-rings failed, as the models of how they would behave in cold weather were flawed. Even when an expert suggested caution, other experts disagreed, and the programme managers went ahead with the launch. The interplay between the models of experts, the risk assessments, and the flawed but gung-ho approach of the programme managers led to this accident, apart from the pressure to meet deadlines. In the case of Columbia, a foam block punctured a wing, blowing a hole, and when the Shuttle re-entered the Earth's atmosphere, it burnt out and disintegrated. In hindsight, after ballistic tests were conducted [23] that simulated the foam hitting the wing, a scenario emerged of the impact causing cracks that could allow hot gases in during re-entry. The author argues that this incident reinforces two points—what sociologists (see Chap. 5) see as the inevitability of accidents due to the inability to foresee all scenarios, as well as the continuous refinement of mental models after each event.


In the case of the Boeing 737 Max accident, as in the case of the Shuttle disasters, while experts had clearly seen and raised alerts, the managers had a different mental model of the dangers. From the analysis of all reports on the Boeing 737 Max, it seems that complacency had set in because an accepted process existed and the modifications to the aircraft were 'thought' to be minimal. This was the message emphasised to the regulator as well. Thus, the mental model of all concerned managers, designers, and regulators led them to "believe" that the minor modification did not alter the behaviour of the aircraft. It is possible that this was also a case of a "forgotten process", in terms of checking whether safety processes were complied with, as part of the checks on the redundancies for the number of sensors needed. This would have been standard practice but was forgotten. From the information available, the automated control system depended on a single sensor and yet overrode the pilot inputs. Why this was not spotted by the management or the regulator, even during the design process for automated flight control that was formalised by Boeing and accepted by the regulator, remains a question. This appears to be a case in which the knowledge and experience acquired earlier were either forgotten or ignored due to normalisation of deviance. To an extent, Langewiesche's view of mental maps [8] is correct and holds in terms of pilots needing a number of flying hours and a variety of aircraft types to gain better experience in handling unseen situations, and whether such pilots would have been able to recover the aircraft compared to the Lion Air and Ethiopian Airlines pilots. However, the real issue in the Boeing 737 Max case was that neither Boeing nor the regulator believed such experience was necessary, as technology automation was available to act as a constraint for any unseen situation. In the Fukushima nuclear disaster, there were faulty mental models among the designers, who never considered the occurrence of an earthquake and a tsunami in quick succession. On the other hand, it was found that the operator training was also inadequate, and hence their mental models were imperfect [24]. In automotive accidents, the way mental models work is rather different. Drivers are trained minimally in comparison with pilots or astronauts. However, automobiles are less complex; they are made in such a way that the expertise and training required to drive are lower in comparison with flying an aircraft. In fact, much of the decision making in drivers is intuitive. Statistics in the WHO report show that young people account for most accidents (see Chap. 2). The author argues that the mental models of young adults have risk perception based on competing social pressures and


a high-risk appetite. Because of lack of experience, the mental models young people have regarding any vehicle are likely to be imperfect, leading to more risk-taking behaviour compared to older adults. In healthcare, the classic example is the pandemic; there have been wide differences of opinion among healthcare professionals and scientists and a poor belief in science among some of the public. That is probably understandable, given that science has also been groping in the dark regarding the pandemic. Therefore, mental models about the virus have been imperfect and evolving. This applies to the transmission process and the use of masks, drugs, and vaccines as well. A scientist is convinced about these, and the mental models get better and better with new knowledge, whereas a doubter with a very different mental model would continue to view the process of knowledge evolution with suspicion. One thing stands out—science, however much it has stumbled in this pandemic, has advocated caution and defence in depth, something we saw in the use of nuclear power as well. In the previous chapters, we looked at the issue of mistakes and errors. Many of these are because of poor mental models. Interestingly, while regulation and accident investigation are different, some accidents due to errors are attributed to poor training and flawed design. Both attract blame, but these are not exactly wilful ignorance, probably just ignorance.
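
The single-sensor point raised in the Boeing 737 Max discussion above can be illustrated with a minimal redundancy sketch. It is not a description of MCAS or of any certified flight-control logic; the readings are hypothetical. With a single sensor, one faulty reading drives the command, whereas with three sensors a simple median vote rejects a single wildly wrong value.

# A minimal sketch in Python of median voting across redundant sensors.
def single_sensor_command(reading):
    # With no redundancy, the (possibly faulty) reading is used as-is.
    return reading

def median_voted_command(readings):
    # With three sensors, the median discards one wildly wrong value.
    return sorted(readings)[1]

faulty, good_a, good_b = 74.5, 4.9, 5.1  # hypothetical angle-of-attack readings in degrees
print(single_sensor_command(faulty))                   # 74.5 -> an erroneous command
print(median_voted_command([faulty, good_a, good_b]))  # 5.1  -> a plausible value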

7.5 Training and Refinement of Mental Models

Training has always been recognised as a fundamental requirement for improving mental models. Research shows that cognitive models are formed in the minds of all human beings; it is just that those who are trained develop and refine their models, while laypeople could continue to hold faulty models [25]. Specifically, where complex physics and science play a role, the mental models of experts reflect reality versus the inaccurate models of the layperson, e.g. aeroplanes and how they fly. The more accurate the mental models of behaviour, the better the understanding of the implications of risk. Sometimes, however, managers tend to rely on their own mental models, ignoring the more refined models that experts have. This was illustrated vividly during the pandemic. The most researched mental models have been those of aviators and pilots. As Langewiesche notes, their airmanship is what matters. There is a distinction in how experienced pilots react to situations compared to the rest, as it is about quick, clear decision making, evident from the A320 ditching into the Hudson River by Capt. Sullenberger on January 15th, 2009.


Much has been written about that, but what appears to be "cool" flying was in fact several things working out together—the pilot's intuitive reactions, analysis, and training. As also noted in FAA studies [25] on pilot training and reactions, this process can be decomposed to show that the expert can identify causal relationships, even in an intuitive process. According to Dreyfus, intuition is the product of deep situational awareness and involvement that is quite distinct from the conscious application of abstract rules [26]. Research into how pilots function shows that competency build-up is a result of the interplay between knowledge and the ability to process that knowledge. This also involves speed of processing, which in turn implies competence and training. Experienced pilots show very quick and confident decision making and, in a sense, move to the best decision rapidly, as if having an almost direct perception of the proper course of action. These decisions occur so swiftly, literally in split seconds, that it is as if a structured mental model is present, based on insight and intuition. This happens with repeated use of simulator training. While the simulator does not reflect absolute reality (though it is getting quite close currently because of technology), it provides, with repeated use, the training to handle failure scenarios that moves a rookie pilot towards one with experience of accident scenarios. Why is training so important to avoid accidents? Knowing "how" appears more important than knowing "why". Knowing how to respond to a scenario is important, and just as important is making the know-how effortless. That is why training to respond is vital to pilots, drivers, nuclear power operators, and others in such occupations. Therefore, while the phrase know-how is loosely bandied about, it is know-how that works, and it is needed at the appropriate level; training provides some of both know-how and know-why. What happens if the behaviour or scenario for action is counterintuitive? Even here, new constructs generally replace the old. Interestingly, after a while many things become embedded, become intuitive, and are taken for granted. Training plays a role in the deletion of old models and their replacement with a new counterintuitive model. Intuitions vary; the phrase 'a trained mind' has not come about for nothing. Training can be formal, self-taught, or cultural. A combination of naïve physics and cultural training, driven by the social fabric and myths, can be a powerful force that inhibits any attempt to move towards formal science and training in counterintuitive phenomena.


Mental models are important to people who make decisions where accidents and disasters are a real possibility. They need to update their knowledge and understanding all the time. A fair question would be 'should I know everything'? Not truly, but one needs to know enough for a certain task; e.g. an operator at a nuclear power station need not be an expert in physics, but some fundamental knowledge of the subject, linked to expertise in the role, does make the difference. Clearly, experts should be operating a rational, structured scientific process in their minds. Experts have a mental model that is updated by keeping abreast of new developments. That is why they are experts. Laypeople in turn have a more causal but shallow knowledge model, without the depth that experts have. This applies to COVID as well, hence the need for deference to expertise and for regulation to ensure that processes and rules formulated by experts are followed. In the Boeing 737 Max case, the designer and the organisation had an imperfect mental model, which the regulator believed, and the pilots were led to believe that there was no new complex behaviour they needed to train for. However, complex behaviour of the flight control system appeared in the Lion Air and Ethiopian Airlines crashes. This behaviour, as it turned out, was counterintuitive for them to handle.

7.6 Risk Perception

In a book on how people make decisions, Klein [10] talks about how firefighters develop that intuition when they approach a burning building, a raw sixth sense, an "I don't feel right about this" kind of feeling. This power of intuition about risk is both individual and social, and experience is the vital thing. In every case where there is danger, mental models of risk perception play a role. Many of the mental models that laypeople hold are based on a primordial sense of danger. As has been seen in the pandemic, the way the virus spreads has been seen differently by different people. Many have died attending COVID parties, hoping to disprove the virulence of COVID [27]. Thus, belief systems play a vital role in the perception of danger. Even here, danger is felt either intuitively or when people have been subjected to experience. Risk is also about belief. Belief systems and how they change are equally fascinating. Beliefs influence mental models. Individual mental models can change based on the environment. Belief systems are updated, and mental models can change. Especially in industrialised societies, where scientific information is disseminated rapidly, mental models will change in the population. A minority of diehards will remain, as in the case of groups that resist vaccines.


From a risk perception viewpoint, what differentiates sociologists from scientists and engineers is that they do not explicitly say that mental models evolve and are refined and that risk can hence be minimised. Engineers, on the other hand, seem to work constantly towards refining their models, physics-based, data-based, any model in fact, to minimise the risk. While sociologists criticise the methodology for the quantitative assessment of risk, the engineering community internalises the risk numbers and uses them for decision making. The risk models are in that sense embedded in the mental models as well, and there is a risk assessment spectrum. In societies that do not attach importance to a quantitative measurement of risk, the possibility of a binary judgement exists, e.g. no mask or mask. Equally important is the concept of calculated risk, a colloquial term that arises from the argument that there certainly are mental models that influence risk taking. Tricky as it may sound, there have been times when a calculation showing that the odds are against the plan has led to the abandonment of the plan. Risk taking is influenced by mental models of the technology, say when one wants to test a new aircraft. Sometimes, people who have less detailed knowledge can be more adventurous, taking on more risk. This is human behaviour, especially if there is a perceived belief that there is a large amount of gain. Ignorance combined with avarice can be a deadly mixture for seeding a disaster. This leads to the discussion in the next chapter on avarice and gain and on the perception of risk when coupled with a need for gain. Knowing how was discussed earlier. However, knowing why is different and still important, as it makes a difference to risk perception. Experts tend to have a different risk perception; in fact, they are more cautious, as seen in aircraft accidents or the pandemic. Who needs to know why? Designers need to know why. Let us examine the Boeing 737 Max accident. It can be argued that the designers, who needed to know why, failed to be watchful and skipped the time-honoured process of building redundancy and training. Test pilots operate with both know-how and know-why as much as possible. That is why we sometimes hear statements such as "this plane can only be flown by a test pilot". Having knowledge structures in the mind matters in situations requiring rapid decisions, for which quick identification is required (e.g. the experienced pilot anticipating a thunderstorm by recognising a threatening cloud formation, anticipating wind shear on landing, or anticipating in-flight icing conditions). Expertise is highly domain specific. Within a domain, experts develop the ability to perceive large meaningful patterns. In Klein's work, the power to see the invisible is a powerful statement about how some


people can perceive and assess risk better than others. This is a crucial element that combines native intelligence, training, and simulation to provide a superior understanding of risk. One thing that has never been explicitly stated is how much a regulator should know. In all highly regulated sectors, such as aviation and healthcare, the regulator is not a know-everything sort of person. The regulator follows a process of evidence-based certification. The regulator may not be expected to know deeply about counterintuitive physics but accepts a process, with the caveat that all scenarios have been examined. In the Boeing 737 Max case, though, even though the information was not adequate, the modification to the aircraft was approved. The regulator as an organisation is structured to take on board the filing of, say, a drug or an aircraft that needs to be approved. The regulator depends on the detailed submissions from a drug developer or an aircraft maker. These submissions go through a process of verification against tests. A set of experts is consulted. In fact, a vaccine is approved by a set of experts in the field. However, the issue with the Boeing 737 Max, as it now appears, is that the regulator had permitted Boeing to self-certify and Boeing missed out on crucial design and testing requirements. In healthcare, whether it is disease behaviour, diagnosis, or therapy processes, all these are mental models. Accepted practice is driven by research, testing, and what the FDA puts out as regulation. What is not laid out in the regulator's work is what the sociologists find important. The unified mental model of the regulators has no place for verification of how an organisation or its employees behave. While the regulator expects an organisational structure and roles and responsibilities for individuals, it does not have models that can spot or check deviant behaviour or malafide intent. Irrespective of this, risk is regulated. The regulator is trained to deploy a structured model that is supposed to ensure, using scientific principles and processes of verification and validation, that any new product can be certified as safe and that all scenarios of failure are examined.


References

1. Meyer, R. (2014). The faulty 'Mental Models' that lead to poor disaster preparation. Knowledge at Wharton, a business journal from the Wharton School of the University of Pennsylvania, July 7, 2014. https://knowledge.wharton.upenn.edu/article/wind-rain-worse/. Accessed August 23, 2022.
2. Carey, B. (2020). Complacency, not panic, is the real danger. New York Times, March 19, 2020.
3. Flegenheimer, M. (2020). Trump's disinfectant remark raises a question about the 'very stable genius'. New York Times, April 26, 2020.
4. Diaz, D. (2018). Trump: I'm a 'very stable genius'. CNN, January 6, 2018.
5. Grady, D., Thomas, K., Lyons, P. J., & Vigdor, N. (2020). What to know about the malaria drug Trump says he is using, June 15, 2020. https://www.nytimes.com/article/hydroxychloroquine-coronavirus.html. Accessed August 11, 2022.
6. Clark, D. (2020). Trump suggests 'injection' of disinfectant to beat coronavirus and 'clean' the lungs. NBC News, April 24, 2020. https://www.nbcnews.com/politics/donald-trump/trump-suggests-injection-disinfectant-beat-coronavirus-clean-lungs-n1191216. Accessed August 23, 2022.
7. Editorial. (2021). The Guardian view on Bolsonaro's Covid strategy: murderous folly. The Guardian, October 27, 2021. https://www.theguardian.com/commentisfree/2021/oct/27/the-guardian-view-on-bolsonaros-covid-strategy-murderous-folly. Accessed August 23, 2022.
8. Langewiesche, W. (2019). What really brought down the Boeing 737 Max? New York Times, September 18, 2019.
9. Sullenberger, S. (2019). My letter to the editor of New York Times Magazine, October 13. https://www.sullysullenberger.com/my-letter-to-the-editor-of-new-york-times-magazine/. Accessed August 11, 2022.
10. Klein, G. (2004). The power of intuition: How to use your gut feelings to make better decisions at work. Currency.
11. Klein, G. (1999). Sources of power: How people make decisions. MIT Press.
12. Craik, K. J. W. (1943). The nature of explanation. Cambridge University Press.
13. Forrester, J. W. (1971). Counterintuitive behavior of social systems. Technology Review, 73(3), 52–68.
14. Sowa, J. F. (1984). Conceptual structures—Information processing in mind and machine. Addison-Wesley.
15. Forbus, D. (1988). Chapter 7—Qualitative physics: Past, present, and future. In Exploring artificial intelligence: Survey talks from the National Conferences on Artificial Intelligence (pp. 239–296).
16. Smith, B., & Casati, R. (1994). Naive physics: An essay in ontology. Philosophical Psychology, 7(2), 225–244.
17. Popper, K. (2002). Conjectures and refutations. Routledge.
18. Chandra, S., & Blockley, D. (1995). Cognitive and computer models of physical systems. International Journal of Human-Computer Studies, 43, 539–559.


19. Chandra, S. (2014). Evolution of design intuition and synthesis using simulation enriched qualitative cognitive models. In ICoRD'15—Research into Design Across Boundaries (Vol. 2, pp. 3–14). Springer.
20. Perrow, C. (2000). Normal accident theory: Living with high risk technologies. Princeton University Press.
21. Downer, J. (2010). Anatomy of a disaster: Why some accidents are unavoidable. Discussion Paper No. 61, March 2010, London School of Economics.
22. Turner, B., & Pidgeon. (1997). Man-made disasters. Butterworth-Heinemann.
23. Harwood, W. (2003). CAIB accepts, agrees with NASA failure scenario. Spaceflightnow. https://www.spaceflightnow.com/shuttle/sts107/030506scenario/. Accessed August 11, 2022.
24. IAEA. (2021). Learning from Fukushima Daiichi: Factors leading to the accident (Vol. 62–1). IAEA Bulletin.
25. Adams, R. J., & Ericsson, J. A. (1992). Introduction to cognitive processes of expert pilots. DOT/FAA/RD-92/12.
26. Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over machine. The Free Press.
27. Pietsch, B. (2020). Texas hospital says man, 30, died after attending a 'Covid Party'. New York Times, July 12, 2020.

8 Is Greed Truly that Good? Avarice and Gain Versus Risk and Blame

The hoarding of hand sanitisers in the US during the pandemic is a fascinating story. In March 2020, as the pandemic began, the Colvin brothers found that they had an opportunity. They cleaned out shelf after shelf at supermarkets. Over a few days, they collected thousands of bottles of hand sanitiser and anti-bacterial wipes. They then listed them on Amazon, selling over 300 bottles at prices between $8 and $70. Amazon removed thousands of listings and suspended sellers' accounts for price gouging, a term used in the US for hoarding and selling at inflated prices. Suddenly, Matt Colvin had over 17,000 bottles of sanitiser and was unable to sell them. This happened elsewhere in the USA and in other countries. After The New York Times' article, Mr. Colvin said he was exploring ways to donate all the supplies [1]. The view that greed is good is attributed to Ivan Boesky and was highlighted in the Michael Douglas movie Wall Street. Ivan Boesky was a Wall Street trader who, along with a few others, was investigated and jailed for insider trading. Before that, he was the image of success on Wall Street. Since then, there have been a number of convictions for insider trading. Boesky spoke at the School of Business Administration at Berkeley commencement in May 1986 [2, 3]: "greed is all right, by the way. I want you to know that. I think greed is healthy. You can be greedy and still feel good about yourself". More than any other disaster, the pandemic has brought out most vividly human behaviour with regard to the perception of risk, and more importantly, the perception of risk based on the gain or loss involved. At the height of the pandemic in New York, Governor Andrew Cuomo was accused by the


Republicans [4, 5] of pursuing the political gain involved rather than taking the advice of scientists or doctors. Some of the decision making during the pandemic was driven by the prospect of losses: businesses, jobs, and the eventual impact on elections. Political leadership made decisions that were politically important, sometimes in contradiction to scientific advice. This clearly was for gain. Price gouging for hand sanitisers and oxygen concentrators and the availability of vaccines on the dark web are stark illustrations of avarice. In many cases, the best side of human behaviour was also on display, and it far outweighed what was driven by avarice or greed arising primarily from the perception of the gain and loss involved. Even with the experience of the 1918 pandemic and smaller disease outbreaks since (H1N1, SARS, MERS, etc.), and despite novels, movies, and warnings, all countries were caught unprepared, specifically because COVID-19-appropriate behaviour rules went against the grain. The sense of denial of the worst that can happen was there to be seen vividly and was clearly risk taking for gain. What is avarice, what is greed, and what is just plain need form a fuzzy but continuous spectrum of "wants" for a human being. This has a predominant influence on the perception of risk as well as on individual and organisational behaviours. The original argument is that risk perception is based on danger and hence leads to risk aversion, but there is a constant trade-off being made by human beings when choices exist in terms of taking a risk for gain. The picture that emerges is of human behaviour: when nudged in their selection of economic choices, some will make the choices that are riskier for possibilities of gain. The fact is that there is always a possibility that this might swing towards deviance. Avarice and greed have had implications for accidents and disasters but have never explicitly been described in accident investigations or in regulation. While greed and the innate need to take a risk for gain are inherent in humans, altruism and taking a risk for somebody else do exist, say when a rescue is needed, but probably much less than risk taken for personal gain. Motivation to accomplish, create, or be successful is not in that sense classified as greed or avarice but does encourage risk taking. When does it turn into avarice? Will it play a role in being the basis for accidents and disasters? In this book, all possible causes for accidents, including human errors, complexity, poor communication, and bad mental models, have been discussed earlier. Additionally, the law of tort and how human error, intentional or otherwise, is regarded in law were discussed. However, what was not discussed earlier was that all of this, and especially deviance, may have a background of avarice and greed among individuals or in an organisation itself and would have an implication for accidents. By the end


of this chapter, one can look at the Boeing 737 Max, the Space Shuttle disasters, the Gulfstream accident, the pandemic, and the use of OxyContin through the lens of how the nudge for gain affects the perception of risk. Much of risk and safety assessment and regulation is focussed on the possibility of human error, while the concept of risk and the associated possibilities of failure are tacitly accepted. For example, in the FAA risk handbook [6], "a key element of risk decision making is determining if the risk is justified". Justified by what is the natural question. In the FAA case, it is justified based on its low probability, while in many cases, it is the gain that justifies the risk. In fact, even in aviation, it is sometimes argued that "excessive conservatism" towards safety can kill the industry. As a result, when the FAA makes regulations, a notice of proposed rulemaking (NPRM) [7, 8] is issued, and the economic impact is also taken into account. It can be seen in many accidents that decision makers have "erred" towards risky actions based on a perception of the gains involved. Human behaviour towards risk is driven by possibilities of reward and fame.
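
The question of whether a risk is justified is commonly operationalised with some form of likelihood and severity matrix. The sketch below is a generic illustration with hypothetical categories and thresholds; it is not the FAA's actual tables.

# A minimal sketch in Python of a likelihood-severity risk matrix.
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}
SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}

def risk_class(likelihood, severity):
    # Classify risk by the product of likelihood and severity ranks.
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 9:
        return "unacceptable"
    if score >= 4:
        return "acceptable only with mitigation"
    return "acceptable"

print(risk_class("remote", "catastrophic"))  # acceptable only with mitigation
print(risk_class("frequent", "critical"))    # unacceptable

What such a matrix does not capture, and what this chapter is concerned with, is the pull of gain on the person doing the classifying.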

8.1 Risky Behaviour and Gain

The Special Air Service (SAS) of the UK, a commando force, has the motto "Who Dares Wins" [9]. It is a well-worn phrase implying that unless one takes a risk, one cannot gain. This view is embedded in the minds of some and provides an impetus for risk taking. As discussed earlier in the book, humans have an innate sense of danger, some primordial, some social, and a lot that is also an evolved sense of danger. However, many of us are willing to live dangerously, not in all things, only in some. We "take risks" in doing many things. In fact, we take 'calculated risks', as the frequently used phrase goes. This means that a risk was taken after an analysis of the risk, impact, and reward. There is even a 1963 British movie called Calculated Risk about a robbery that goes wrong [10]. Why do we do it? Perhaps because life would otherwise be 'dull', with nothing truly to look forward to in comparison with the sense of adventure and gain. However, the type of danger one is willing to ignore and 'take the risk' on is different for each one of us, for each community and culture, even for nations. The risk we take is based on the rewards we want, especially with regard to personal safety. One could be willing to ignore the danger given the reward at a personal level: sky divers, rope climbers, and astronauts. On the other hand, during the pandemic, we have seen people willing to risk contracting the virus or passing it on to others when


eating out in a restaurant or going to a motorcycle rally in Sturgis in the USA [11]. Why do they do it? As judged in behavioural economics, people are willing to confront danger, given some rewards. It is not as if they will avoid it altogether. However, if behaviour were always just transactional, one could put a finger on it. Why would some become astronauts, while others don't? Why would some want to drive in a Grand Prix, while others just watch? On the other hand, many who are completely 'risk averse' in one thing can actually take a huge risk in another, knowing the danger involved. Danger, as discussed in Chap. 4, is something that is primordial, intuitive, and based possibly on a tribal outlook. Risk, on the other hand, is something else; one analyses, one uses 'judgement' about whether something is worth the risk. In cases where data are not explicitly available, the risk trade-off seems to be assimilated qualitatively. Risk can be regarded as being about the perception of gain or loss. In the psychology of risk perception, risk taking is a complex behaviour that could happen for a number of 'reasons', e.g. associations with fun factors such as bungee jumps and skydiving [12]. It can happen if the perception of gain is very high in comparison with what one presently has: lotteries, gambling, etc. It could be attractive if the possibility of fame is also high for some. It also appears that familiarity lessens the sense of danger. Repeated exposure to risky behaviour can enable a sense of complacency, especially when accidents do not occur in that period. Additionally, trust is a vital factor; if information about a certain risk is reliable, there could be higher risk taking, as noted earlier in Chap. 5. Equally important is to know that sometimes people are willing to risk other people's lives, not just their own, as when talking on a mobile phone while driving or not wearing a mask in the pandemic. Many people note the likely outcome as well, calibrating the risk they are willing to take for a reward, like believing a car to be safer than a plane if an accident were to occur, irrespective of whether that is accurate or not. Mental models of the type of risk also play a part; for example, fear of danger is fuelled by fear of the event, believing that a certain type of technology is harmful, say mobile phones for some people, while the same individuals may not worry too much about wearing a mask in the pandemic. Rational choice theory explores how humans continually perform a cost–benefit analysis in choosing actions and options for themselves. The theory suggests that human behaviour is about choosing the best options. The implicit suggestion is that decisions made are rational, but driven by self-interest [13]. Based on Adam Smith, there is also an argument about the role of an invisible hand, where the collective good is a part of the narrative and is accounted for in the choices human beings make [14]. While this theory has


been applied in sociology and philosophy, there are critics who question its basis, even though an aggregation of individual choices produces acceptable social behaviour in many cases. When applied to anti-vaccination choices, there is a question of whether the invisible hand has worked. However, there is also evidence that risk taking for gain, for self-interest, is a choice and is regarded as rational. The rational agent will then perform a cost–benefit analysis using a variety of criteria to arrive at their self-determined best choice of action. The debate is about the goals in rational choice, and these, according to some, could be altruistic or idealistic [15]. Additionally, the definition of rationality is about being goal oriented and consistent. Now, in hindsight, in the case of the Boeing 737 Max disaster, the decision to launch the new aircraft was in a sense a rational choice given the need to compete with the Airbus A320neo, but it was marred later on by decisions that were clearly inappropriate. In the case of the Boeing 737, the competition was from Airbus, and many of Airbus's actions influenced Boeing's decisions in designing the 737 Max. There have been discussions on the effect of extensive cost cutting to enhance shareholder value in the Boeing 737 Max accidents, regarded clearly as corporate avarice [16, 17]. Eventually, the focus on gain had an effect on the culture in design, development, and manufacturing and resulted in safety issues. Therefore, as much as rational choice, it is what game theory says that also needs to be examined, because when choices are made in situations where there is competition, the strategy of others plays a role. Game theory can be regarded as an extension of rational choice theory in the sense that economics and interdependency are factors in decision making. The choices of others, especially competitors, play a part in game theory. Thus, when risk taking for gain occurs, seemingly rational choices made with an awareness of competitors' strategies appear to be embedded in the decision making [18]. While people look at risk based on many issues, much of what we do is also dependent on the economic system we are in. It is also based on where one is in the hierarchy of the economic system. An economic system driven by capitalism generates a certain type of risk taking, and how members of that system behave towards risk could be a characteristic representation of the system. It is here that one sees clear personal risk taking, e.g. scuba diving, bungee jumping, and so on. A sense of adventure is most seen in capitalist systems. However, the need for it is felt very differently in other systems, socialist or communist. The affluent and rich feel that the rewards are worth the risk, but a loss will not break the bank, so their risk perceptions are different. For those who see a possibility to break out of a poor economic
condition, there is a sense of all or nothing and a willingness to go all out, e.g. refugees crossing the seas in a small boat to reach more prosperous countries. These individuals who break out are clearly the ones who are willing to risk the tribal wrath, the possibility of everything or nothing, a gamble, or a "calculated risk", if you may. Therefore, it is not just psychology, or the sociology of a tribe, but behavioural economics as well. How an economic system and the social structure influence risk taking is important to visualise. The cultural theory of risk is based on the work of anthropologist Mary Douglas and political scientist Aaron Wildavsky, first published in 1982 and described in Chap. 5, showing four "ways of life" in a grid/group arrangement. Each of these signifies a social structure and a perception of risk. It also implies a constraint on the people in the group based on their social role. This limits individual freedoms, flexibility, and personal control, driven by compulsions of belonging and solidarity. The four ways of life are hierarchical, individualistic, egalitarian, and fatalist. However, the theory has had its fair share of criticism, especially if we view it through the prism of fast-changing industrial societies, where technology advancement is rapid. This theory in fact shows up in the way risk is taken for gain and in how societies in the West and the East, socialist and communist, view risk. The author argues that the sociologists' view of risk takers and the effects of risk taking is seen in the analyses of science, engineering, technology, and accidents by Douglas and Perrow (Chap. 5), while risk takers continue to take these risks based on an innate sense of risk perception and the belief that learning occurs from failures. There is a question of what is an evolved sense of danger and consequently a perception of risk. Ulrich Beck's Risikogesellschaft [19] provides insight into risk, sociology, and science. He argues that as industrialisation has occurred, there has been a fundamental shift in how we perceive danger, old moral questions about greed have been reopened, wealth generation has seen ups and downs, and hence we see risk today very differently from what was perceived centuries ago. The word misfortune is still used, but for a number of things that cannot be scientifically explained, or when some people choose to ignore science. However, again, the use of misfortune implies an interest in gain, if not avarice. In the author's opinion, risk perception is not static. It is fluid and constantly changing, just as the wants and needs of each generation change, and so do the mental models. Whatever fears the pandemic leaves behind will show up in how perceptions form and evolve, again clearly in terms of what each individual will see in terms of the risk that needs to be taken for any gain. The
perception of the Boeing 737 Max as a safe aircraft has been shaped by communication about the lowering of risk by modifications after the accident, and not just by rebranding of the aircraft. However, some people may never fly on it again. Many others might after a while, as the perception of risk changes and the sense of gain is higher, especially if the decision is economic as well. The common motivational phrase "unless you take a risk, you cannot achieve anything" means that individuals are generally risk averse. There is not a case in the world where a person is willing to face all types of dangers. A person who takes a financial risk could be very averse to bungee jumping and vice versa. However, the statement is a reflection of the fact that there can be no gain without risk. This is ingrained in human behaviour. Understanding risk perception is crucial for policy. Suppose that people falsely think that some risks are quite high, whereas the possibility of others occurring is relatively low. Such misperceptions can affect policy making because governments, and more so politicians, will tend to respond to this view and have resources allocated in line with opinions based on fears rather than on the most likely risk. Vaccine hesitancy represents risk perception and affected policy making in the pandemic. What is lacking in risk regulation, powered as it is by the concepts of scientists and engineers, are methods and processes that can integrate risk perceptions that arise out of individual beliefs, social structures, and peer pressures.
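To make the game-theoretic framing sketched earlier in this section concrete, the toy model below casts two rival manufacturers choosing between a rushed derivative and a careful clean-sheet programme. The payoffs are entirely hypothetical illustrative utilities, not figures from this book or from the Boeing and Airbus case; the point is only that, with such payoffs, mutual rushing can emerge as the equilibrium even though both firms would be better off being careful.

```python
# Illustrative 2x2 game: two rival manufacturers choose between a
# "rushed derivative" and a "careful clean-sheet" programme.
# Payoffs are hypothetical market-share utilities, not real figures.

from itertools import product

strategies = ["rush", "careful"]

# payoffs[(a, b)] = (payoff to firm A, payoff to firm B)
payoffs = {
    ("rush", "rush"):       (2, 2),
    ("rush", "careful"):    (5, 1),
    ("careful", "rush"):    (1, 5),
    ("careful", "careful"): (4, 4),
}

def pure_nash_equilibria():
    """Return pure-strategy Nash equilibria of the illustrative game."""
    equilibria = []
    for a, b in product(strategies, repeat=2):
        pa, pb = payoffs[(a, b)]
        a_can_improve = any(payoffs[(a2, b)][0] > pa for a2 in strategies)
        b_can_improve = any(payoffs[(a, b2)][1] > pb for b2 in strategies)
        if not a_can_improve and not b_can_improve:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria())  # -> [('rush', 'rush')] with these hypothetical payoffs
```

With these assumed payoffs, the only equilibrium is mutual rushing, even though both firms would do better if both were careful, which mirrors how competitive pressure can nudge both parties towards riskier choices.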

8.2 Human Behaviour and Probability

According to folklore, a famous statistician in Moscow argued that during the Second World War, the probability of him being hit by a bomb was one in seven million, as there were seven million people in Moscow. However, he is supposed to have shown up at an air raid shelter one night. When asked why, he responded: there are seven million people and one elephant in Moscow; yesterday, they got that elephant [20]. That in a nutshell describes probability, risk, and perceptions. Therefore, the way in which one looks at risk based on probabilities for decision making can sometimes be controversial. In the case of the Boeing 737 Max, the probability of a failure of a flight control system should be one in many millions of flights, yet the flight control system failed twice, in two accidents of the Boeing 737 Max within a few months. This again provides considerable strength to the position of sociologists about risk.
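A rough back-of-the-envelope calculation illustrates why two such failures are so damaging to the claimed rate. The per-flight failure probability of one in ten million and the figure of roughly half a million fleet flights used below are illustrative assumptions for the sketch, not data from this book.

```python
# How surprising are two catastrophic failures if the claimed rate were true?
# Both numbers below are illustrative assumptions, not figures from the book.
import math

claimed_rate = 1e-7      # assumption: one failure per 10 million flights
fleet_flights = 500_000  # assumption: rough early-service flight count of the fleet

lam = claimed_rate * fleet_flights  # expected number of failures (Poisson mean)

def prob_at_least(k: int, mean: float) -> float:
    """P(X >= k) for a Poisson random variable with the given mean."""
    return 1.0 - sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k))

print(f"expected failures in the period: {lam:.3f}")
print(f"P(two or more failures): {prob_at_least(2, lam):.1e}")
# roughly 1.2e-03: observing two failures is strong evidence against the claimed rate
```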


Then, there is the perception of risk based on the sense of danger, as in fear of vaccines, surgeries, flying in aircraft, and natural phenomena such as earthquakes, tornadoes, and blizzards. Thunder induces a fear of being struck by lightning, although the probability of being struck by lightning is rather low. Human behaviour is also about some people being willing to bet on the low probability of making huge gains while there is a high probability of loss, as seen in gambling and casinos. It appears that while humans still grapple with the methodology to predict outliers, anecdotal information about these outliers, where gain or loss has occurred, has a strong influence on human behaviour. Most of our decisions are made under some form of uncertainty. Decision sciences usually define risk as the uncertainty about several possible outcomes when the probability of each is known. The lineage of probability theory points to Henri Poincaré [21], who believed that everything has a cause and that we as human beings are incapable of knowing all the causes for the events that occur, hence the need for the study of probability. What needs to be pointed out here is that as we enter the domain of scientific analysis of risk, we leave behind the social and psychological framework in which we view danger and look at risk in terms of probability of occurrence; in other words, while we have accepted danger, we calculate risk based on probabilities. The use of risk theories has been controversial with sociologists and in fact some engineers as well. The story of their use in the space programme is particularly fascinating. Harry Jones, a NASA employee, writes about NASA's understanding of risk in the Apollo and Shuttle programmes. Risk analysis was used in Apollo, but it was considered too conservative and discontinued. The Shuttle was designed without using risk analysis, under the assumption that good engineering would make it very safe. The view among the advocates of the risk analysis methods was that this approach led to an unnecessarily risky design, which directly led to the Shuttle tragedies [22]. Although the Challenger disaster was directly due to a mistaken launch decision, it might have been avoided by a safer design. Now, whether the decision to abandon risk analysis of the Apollo era was the cause of the Shuttle tragedies has been a matter of debate. The risk advocates also argue that because probabilistic risk analysis was not used, design scenarios were never evaluated. When statistically predictable failures occurred, the failure investigations focussed on the lowest organisational level and the final stages of the design, and it would have been impossible to reach back to the conceptual design stages as the programme was already operational. Although some design changes were made, the Shuttle design was largely as it was, and the main recommendations were
to improve the NASA organisation, culture, and operations. Some argue that the ultimate cause of the Shuttle tragedies was the choice and decision to abandon the Apollo-era probabilistic risk analysis (PRA) [23] to avoid the negative impact of risk analysis. Other aspects of Shuttle advocacy, such as projecting an impossibly high number of flights to justify projected launch cost savings, also show distortions justified by political necessity. Ultimately, it was generally accepted that the Shuttle was too risky to continue to fly, and the programme was cancelled. Was the Fukushima accident just human error? Was there clearly something completely unforeseen? Risk analysts could advocate that not all risks can be foreseen. A tsunami striking Japan is an event that can be regarded as possible but low on probability, and a combined earthquake and tsunami strike, especially, may have been regarded as of very low probability. Previously, it could have been considered a part of natural disasters, or an act of God, but investigators blame it on human error. Interestingly, the Nuclear Regulatory Commission (NRC) of the US, which is a regulatory agency similar to the FAA for civil aeronautics, has specified that reactor designs have a very low probability of failure (1 in 10,000 years), as seen in Chap. 3. This is the kind of metric used in civil aviation as well, such as a flight control failure of 1 in 10 million flights. However, even a more deterministic approach that looks at the safety of the entire system, as compared to a probabilistic safety analysis (PSA), estimates a very low possibility of occurrence. European safety regulators specify a 1-in-a-million frequency of occurrence [23]. According to Dr. Richard Feynman, the Nobel laureate, many at NASA assessed the risk to be 1 in 100,000, but engineers on the ground argued it to be more like 1 in 100 [24]. Probability of the "toss of a coin" type has been embraced by scientists and, more so, engineers. While there is belief in this, it is said by sociologists that there is a motive here as well and that engineers need probability to justify their positions to policy makers. However, while probability theory dates back centuries, Francis Galton is known to have made a distinction between risk generated by nature and risk generated by human behaviour [25]. But Keynes [26] is known to have been sceptical about the theory of probability and its relevance to real-life situations. In the author's opinion, the difference between the positions of Laplace, Poincaré, and Galton and those of sociologists such as Perrow is that Perrow argues that things will inevitably happen. Now this could be system structure, equipment failure, or truly human behaviour. What needs to be deliberated is whether the low probability of an event happening is what makes people take a certain
risk, tending towards avarice and gain. Interpreting the Keynesian view, especially with regard to economics, probability theory does very little to explicitly model avarice and greed, in a sense what Keynes called animal spirits [26]. It can be argued that these could after all show up in the data, provided such data are available, when the probability distributions are computed.
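Feynman's point about the divergent Shuttle risk estimates quoted above can be made concrete with simple arithmetic. The sketch below assumes independent flights and uses the two per-flight estimates together with the 135 flights the Shuttle programme eventually flew; it is an arithmetic illustration, not a risk model.

```python
# Probability of at least one vehicle loss over a programme's lifetime,
# under two divergent per-flight estimates (independence assumed).

def prob_at_least_one_loss(per_flight_p: float, n_flights: int) -> float:
    """P(at least one loss in n independent flights)."""
    return 1.0 - (1.0 - per_flight_p) ** n_flights

n = 135  # total flights eventually flown by the Shuttle programme
for label, p in [("management estimate, 1 in 100,000", 1e-5),
                 ("engineers' estimate, 1 in 100", 1e-2)]:
    print(f"{label}: {prob_at_least_one_loss(p, n):.3f}")

# management estimate -> ~0.001; engineers' estimate -> ~0.74
# Two orbiters were in fact lost in 135 flights, far closer to the engineers' view.
```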

8.3 Nudge and Risk

The favourite nudge story is of a urinal for men at Schiphol Airport in Amsterdam [27, 28], where a fly was painted at a spot in the urinal to improve users' aim and reduce spillage on the floor. Nudges are thus cultural and social and can in fact make changes to risk perception. For example, many governments have used the idea to nudge the public. The nudge theory is now well known and is regarded as part of behavioural economics. A nudge, as the word indicates, provides indirect suggestion, reinforcement, and influence for a change in behaviour. A nudge is not an order, an enforcement, or a rule. The definition of a nudge goes back to the theory of behavioural economics (work by Thaler and others [29]), which argues that behaviour has an effect on economic activity: people making choices of what to buy. However, there is more to this: when human beings take risks, their decision making is not optimal [30], which contradicts rational choice theory; the work also shows that how choices are framed influences the risk we take, with responses differing depending on whether choices are framed as a gain or a loss. Governments have used the nudge as a tool to implement policy [31]. A wink says it all? A nudge is a subtle way of asking for something to be done. Can a nudge cause accidents or disasters directly? One can be nudged into doing something that can either prevent or cause an accident. However, many accidents are caused by a nudge towards deviance in organisations, leading to ignoring the established process, as also seen in Chap. 2. Societal pressures are nudges, and these play a heavy role in the perception of risk. Belonging to certain clans, social groups, and even professional bodies generates a nudge towards the goals of that group for individuals who are members of the group. This sense of belonging consequently generates a collective perception of risk that could influence what causes the accidents. Thaler also argued that human beings look keenly at the costs already paid, which are referred to as sunk costs [32]. In many domains, consideration of already sunk costs plays a role and is a nudge of sorts. For example, in the Boeing 737 Max case, the manufacturer had sunk costs in the old Boeing
737, which made it attractive to develop a new version at minimum cost; this played out in terms of pricing versus the competition with Airbus. What is also seen in the pandemic is the nudge that causes herd behaviour based on social forces, irrespective of the information available, such as the people who protest lockdowns and refuse masks. The tribal and cultural nudge is a powerful force that can be used for good, as we see in the next chapter. However, it needs to be managed with care. One aspect of human decision making is aversion to potential loss, which can nudge the other way. For some, this aversion to losing can outweigh the value of gaining the same amount of money. There is a debate about whether losses are actually experienced more negatively than equivalent gains or are merely predicted to be more painful while actually being experienced equivalently. There are situations where a particular choice is in terms of cooperation for gain and rewards. The other possibility is fame and reputation as reward or gain, and one can look at rewards in terms of positions in the social structure and being regarded as an icon of sorts. This is clearly about gain. Much of human behaviour is intuitive and automatic versus what is reflective and rational. This has a great bearing on our approach to danger, a sense of risk taking, and the ingredients of possible accidents.
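The asymmetry between gains and losses discussed above can be illustrated with the prospect-theory value function of Kahneman and Tversky [30]. The parameter values below (a curvature of 0.88 and a loss-aversion coefficient of 2.25) are commonly cited estimates from the later literature and are used here purely as an illustrative sketch.

```python
# A minimal sketch of the prospect-theory value function: losses are weighted
# more heavily than equivalent gains, which is the lever behind many nudges.
ALPHA = 0.88           # diminishing sensitivity to both gains and losses
LOSS_AVERSION = 2.25   # losses weighted roughly 2.25 times as heavily as gains

def subjective_value(outcome: float) -> float:
    """Prospect-theory value of a gain (+) or loss (-) relative to a reference point."""
    if outcome >= 0:
        return outcome ** ALPHA
    return -LOSS_AVERSION * ((-outcome) ** ALPHA)

for amount in (100, -100):
    print(f"outcome {amount:+}: subjective value {subjective_value(amount):+.1f}")
# +100 maps to about +57.5, while -100 maps to about -129.5: the loss looms larger
```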

8.4 Nudge and Deviance

As discussed in Chap. 4, deviance is both individual and organisational. Much of it is unsaid and unwritten; nudges are the human behaviour that drives deviance. The nudge need not be for personal gain alone; it is also for acceptance into the fold, for building trust in a peer group, all in a sense a part of the social fabric. A nudge for deviance is also part of organisational culture and behaviour, even though it is many times not termed as such. In organisational behaviour, there is work on designing ethical organisations and setting boundaries [33, 34]. A nudge towards deviance based on the overall social behaviour of the organisation is always a possibility, as most organisations are driven by market demands, always responding to the stock market and competition. Thus, without being explicit, tacit pressures to ignore operating protocols and processes are a nudge for deviance. In other cases, there are intense schedule pressures, cost cutting, and pressures on performance and delivery. As in the Shuttle disasters, normalisation of deviance could occur because the nudge to deviance would have systematically led it to become a new normal. Much has been written about the Boeing 737 Max accident, attributing it
to the complexity of the flight control system, human error by the pilots, faulty cognitive models by designers, faulty training for pilots leading to poor mental models, complexity of the organisation, and deviance in the way the regulator operated. Given the competition from Airbus, was management willing to take the risk of deviance? Can wilful ignorance be regarded as a nudge for deviance? As happens when there is competition, avarice for more market share drives decision making, and a sense of deviance, whether unsaid or explicitly communicated, pervades and percolates in an organisation. Was it the thrill of the chase that got to the management at Boeing as they went into head-on competition with Airbus? There was a market share war between Boeing and Airbus in the narrow-body aircraft space. Both the B737 and the A320 were very successful aircraft. Airbus had just released the A320neo, which promised over 15% savings in fuel. Boeing did not want to invest in a brand-new aircraft but modified its existing B737, whose first version flew in 1967. A modification to the B737 with new engines was envisaged. The programme managers were very keen to enter the market as early as possible, as the A320neo had started to gain market share. The modified B737 promised better efficiencies and received several orders. Many airlines, including past Boeing customers, wanted to increase market share using the promised efficiencies. The schedule pressures and cost cutting [35] seemed to nudge Boeing management towards errors. It also meant that the regulator, the FAA, was nudged into being part of the plan to shorten processes and pass over the need to build redundancies, and, as is now known, the aircraft entered the market with deficiencies. Diane Vaughan's normalisation of deviance, discussed in detail in Chap. 5, provides evidence that organisational behaviour can be shaped towards deviance, starting in subtle ways, and that its growing unstopped and unchecked is always a possibility. There was clearly a nudge for organisational avarice, something that is generally not pointed out. As in the Boeing 737 Max case, economic pressures during competition, the need to maintain schedules (due to both market share and economic pressures, as in the case of the Gulfstream 650 accident), and compulsions to keep an economy open during a pandemic or win an election all show how organisational deviance propagates. Much of it unsaid, this deviance is truly based on a nudge and is many times never truly spotted at the early stage. In many organisations, as was noted in the Shuttle disaster as well as the Boeing 737 Max case, there is probably an evolving culture that shows a nudge for deviance. Why does this happen, or when does this happen, in an organisation that is generally known for being intolerant of deviance? One of the reasons is
that of complacency and a feeling that one can get away with it. The other aspect is the nudge, subtle as it may be, when there is a danger of being overtaken and survival is at stake in the competitive world. However, it is still avarice: pressures to deliver to shareholders, bonuses, stock options, and so on. It should be noted that the possibility of large-scale financial risk led to a nudge for deviance, leading to short cuts in the engineering process, as the case of the B737 Max shows. The pandemic has provided examples where the nudge for deviance is seen. Assuming that distancing and wearing masks are the new norm and nonconformance is deviance, the question is what motivates people to deviate. As seen in religious practices around the world (Jewish groups in Brooklyn, New York [36] or the Kumbh Mela in India [37]), it is existing practice versus the new societal norms. A powerful nudge, instead of an order, is pressure from societal groups, religious groups, and even branches of political parties. As discussed in Chap. 2, in healthcare, the OxyContin case also shows a nudge towards organisational deviance. This nudge was provided by marketing strategies and management, leading to a number of patients becoming dependent on the drug. Dr. Dina Marie Pitta, a physician at McKinsey, which was a consultant to Purdue Pharma, wrote an email to her colleagues saying that the organisation "needs to transform, rather than remediate" and that systems must change to avoid more failures, asking the leadership to take accountability for their role. Phil Murphy, New Jersey's Democratic governor, told reporters that McKinsey's work with Purdue was "beyond the pale", particularly its proposal that Purdue pay pharmacy companies like CVS rebates for OxyContin [38, 39]. In Chap. 6 on human error, there is a discussion of the honest mistake as examined in the law of torts. Why is a mistake classified as honest or not? Keynes, writing in 1936 [26], is sceptical, if not critical, of the view that all human behaviour is reasonable. He argues that deeper passions and insane, irrational levels of crookedness are possible, and talks about animal spirits. In other words, was Keynes hinting that the nudge to deviance always exists? Richard Thaler, in his article on the power of nudges for good and bad [31], notes that behavioural science, when used ethically, can be very useful. The idea is that it should not be employed to sway people into making bad decisions. The nudge should be wholly transparent and never misleading, easy to opt out of, and, more importantly, should have a larger welfare goal, something discussed in the next and final chapter using Ostrom's view of regulating for the commons.

8.5 Blame

Why do we discuss blame? Blame, more than any other behaviour, is typical social behaviour, as discussed in Mary Douglas's work on risk and blame (see Chap. 5). Blame is a consequence of the identification of error [40]. Errors are classified, as explained before in Chap. 6, but the actual assignment of blame is a different thing, as errors are about negligence, ignorance, human limitations in managing complexity, fatigue, mental stress, and so on. Blame is in that sense about pointing to those who caused it, organisations or individuals. If errors are intentional and with malafide intent, then blame is seen in the eyes of the law in terms of punishment. There is an enquiry and the need to hold people and organisations accountable. Blame in a social sense and in engineering is different. In social settings, blame is a complex moral thing. In engineering, accidents lead to investigation, errors are identified, and blame is assigned. All accident investigations point out blame; in other words, human errors are pointed out, e.g. pilot errors for landing on the wrong runway, for which blame is assigned. The author's view is that blame is sanitised in the scientific and engineering process, and most accident reports are specifically written to remain technical, providing that expert view of the cause of a certain event, even in a legal case. However, blame is also to be seen in terms of how oversight in regulation is supposed to work. The idea is to learn from blame to ensure that accidents are not repeated. In engineering, it is predominantly for learning but does not help understand human behaviour, and as sociologists such as Mary Douglas note, it sanitises the real issues, which are swept under the carpet. In many societal situations, even in engineering and in many professions, we have distanced error from blame, for example, in forecasting research on cancer that was later found wrong, because the intent was accepted to be honest. Now, in healthcare, as discussed in Chap. 6, the issue of negligence is a matter of considerable concern, but equally, the ethics of an honest mistake has been debated, and the focus is on learning and training rather than punishment. In Mary Douglas's work on risk and blame, there is a discussion on how certain societies operate in terms of not directly blaming anybody. Professions, engineering, healthcare, and others are to that extent social and cultural constructs. Blame is pointed out to know if there was malafide intent, and whether there was negligence, culpable error, or avarice. Roger Boisjoly was an engineer at Morton Thiokol (later ATK), which was responsible for the design and manufacture of the Space Shuttle booster rockets. Boisjoly pointed out that the O-rings would not survive a launch of the space vehicle in cold weather. When the Shuttle Challenger was launched
on January 28th, 1986, as discussed in Chap. 2, the vehicle disintegrated. After the disaster, Boisjoly testified about the NASA and Morton Thiokol attempt to proceed with the launch irrespective of the alert [41]. He argued that the meeting called by Morton Thiokol managers, which resulted in a recommendation to launch, "constituted the unethical decision-making forum resulting from intense customer intimidation". For his honesty and integrity leading up to and directly following the Shuttle disaster, Roger Boisjoly was awarded the Prize for Scientific Freedom and Responsibility from the American Association for the Advancement of Science. Roger Boisjoly left his job at Morton Thiokol and became a speaker on workplace and engineering ethics but was nearly ostracised by his community for whistle blowing. Blame and whistle blowing are intertwined and are complex social issues. Blame comes with a complex set of moral positions, one in which the duty to point out and expose those responsible for an accident or a disaster is set against loyalty to the organisation. On the other hand, it is also about retaining organisational reputations. In aviation and healthcare, reporting of errors is encouraged, using systems of anonymous reporting. Whistle blowing has had complex reactions and responses in communities [42–44].

8.6 Wilful Ignorance: The Deadly Combination

In Chap. 2, a number of accidents were analysed. Many had a specific set of causes. In some, there were poor mental models; in others, there were actions that were clearly for gain. While sociologists see technology and complexity leading to epistemic causes for accidents, psychologists accept human failings and see these as translating into errors. Engineers acknowledge that errors can occur in operation and that there could be unforeseen scenarios that design would not have anticipated. However, engineers provide systems that catch subsystem errors and build layers of redundancies to ensure that these subsystem failures do not lead to disasters. However, none of them look at avarice, the nudge for gain, and poor mental models as a combination. While behavioural economics explicitly acknowledges that there is a possibility of a nudge for risk taking, it appears to provide that vital ingredient for the aligning of other errors, poor mental models, negligence, and organisational deviance towards a disaster, as seen in the incubation theory of Barry Turner. While wilful ignorance [45], sometimes called wilful blindness [46], has been discussed in sociology, it attains an all-new status in terms of organisational structure and behaviour in alignment with avarice as a basis for risk taking. In
the Shuttle disaster, the Boeing 737 Max, and the pandemic, there have been cases of wilful ignorance. Wilful ignorance is truly the combination of poor mental models and avarice. The issue that will be discussed in the next chapter is how risk regulation can handle this rather deadly combination.

References
1 Nicas, J. (2020). He has 17,700 bottles of hand sanitizer and nowhere to sell them. New York Times, March 14, 2020.
2 Chapman, S. (1987). Boesky, takeovers and the value of rewarding greed. Chicago Tribune, May 1, 1987.
3 James, M. S. (2008). Is greed ever good? ABC News, September 18, 2008.
4 Ortt, R. https://twitter.com/SenatorOrtt/status/1460257539476570114
5 Cillizza, C. (2021). Andrew Cuomo's Covid-19 performance may have been less stellar than it seemed. CNN, January 29, 2021.
6 FAA. (2009). Risk management handbook. U.S. Department of Transportation.
7 Federal register, notice of proposed rulemaking. (2016). https://www.federalregister.gov/documents/2016/07/19/2016-16385/notice-of-proposed-rulemaking
8 Yackee, S. W. (2019). The politics of rulemaking in the United States. Annual Review of Political Science, 22, 37–55.
9 Ferguson, A. (2003). SAS: British special air service. The Rosen Publishing Group.
10 IMDB. (1963). Calculated risk. Movie. https://www.imdb.com/title/tt0056895/. Accessed August 11, 2022.
11 McEvoy, J. (2021). Covid surges nearly 700% in South Dakota after Sturgis motorcycle rally—An even higher rate than last year. Forbes, September 2, 2021.
12 American Council on Science and Health. (2016). Bungee jumping and the art of risk assessment, August 18, 2016. https://www.acsh.org/news/2016/08/19/bungee-jumping-and-the-art-of-risk-assessment. Accessed August 11, 2022.
13 de Jonge, J. (2011). Rethinking rational choice theory: A companion on rational and moral action. Palgrave Macmillan.
14 Allingham, M. (2002). Choice theory: A very short introduction. Oxford.
15 Snidal, D. (2013). Rational choice and international relations. In W. Carlsnaes, T. Risse, & B. A. Simmons (Eds.), Handbook of international relations (p. 87). Sage.
16 Gelles, D. (2022). How Jack Welch's reign at G.E. gave us Elon Musk's Twitter feed. New York Times, May 21, 2022.
17 Gelles, D. (2020). Boeing's 737 Max is a saga of capitalism gone awry. New York Times, November 24, 2020.
18 Myerson, R. B. (1991). Game theory: Analysis of conflict. Harvard University Press.
19 Beck, U. (1992). Risk society: Towards a new modernity (Ritter, M., Trans.). Sage Publications.
20 Bernstein, P. L. (1998). Against the gods: The remarkable story of risk. Wiley.
21 Mazliak, L. (2013). Poincaré and probability. Lettera Matematica, 1, 33–39.
22 Jones, H. W. (2018). NASA's understanding of risk in Apollo and Shuttle. NASA Ames Research Center, ARC-E-DAA-TN60477.
23 Stamatelatos, M. (2000). Probabilistic risk assessment: What is it and why is it worth performing it? NASA Office of Safety and Mission Assurance.
24 BBC. (2016). Viewpoint: Challenger and the misunderstanding of risk, February 1, 2016. https://www.bbc.com/news/magazine-35432071. Accessed September 3, 2022.
25 Bulmer, M. (2003). Francis Galton: Pioneer of heredity and biometry. Johns Hopkins University Press.
26 Keynes, J. M. (2016). General theory of employment, interest and money. Atlantic Publishers.
27 Lawton, G. (2013). Nudge: You're being manipulated—Have you noticed? New Scientist, June 19, 2013.
28 Connor, T. (2019). Helping people make better choices—Nudge theory and choice architecture. Medium.com, March 31, 2019.
29 Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
30 Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
31 Thaler, R. H. (2015). The power of nudges, for good and bad. The New York Times, November 1, 2015.
32 Thaler, R. H. (2016). Misbehaving: The making of behavioral economics. W. W. Norton & Company.
33 Holmström, B., & Roberts, J. (1998). The boundaries of the firm revisited. The Journal of Economic Perspectives, 12(4), 73–94.
34 Epley, N., & Kumar, A. (2019). How to design an ethical organization. Harvard Business Review, 144–150.
35 Robinson, P. (2019). Former Boeing engineers say relentless cost-cutting sacrificed safety. Bloomberg.
36 Stack, L. (2020). Backlash grows in orthodox Jewish areas over virus crackdown by Cuomo. New York Times, October 7, 2020.
37 Ellis-Petersen, H., & Hassan, A. (2021). Kumbh Mela: How a superspreader festival seeded Covid across India. The Guardian, May 30, 2021.
38 Bogdanich, W., & Forsythe, M. (2020). McKinsey issues a rare apology for its role in OxyContin sales. New York Times, December 8, 2020.
39 Hamby, C., Bogdanich, W., Forsythe, M., & Valentino-DeVries, J. (2022). McKinsey opened a door in its firewall between pharma clients and regulators. New York Times, April 13, 2022.
40 Malle, B. F., Guglielmo, S., & Monroe, A. E. (2014). A theory of blame. Psychological Inquiry, 25, 1–40.
41 Martin, D. (2012). Roger Boisjoly, 73, dies; warned of shuttle danger. New York Times, February 3, 2012.
42 Wolff, J. (2006). Risk, fear, blame, shame and regulation of public safety. Economics and Philosophy, 22, 409–427.
43 De Cremer, D., De Schutter, L., Stouten, J., & Zhang, J. (2016). Can employees really speak up without retribution? Organisational Culture.
44 Whittles, J. (2021). FAA says lack of federal whistleblower protections is 'enormous factor' hindering Blue Origin safety review. CNN, December 10, 2021.
45 Mats, A. K., & Schaefer, S. M. (2022). Dynamics of wilful ignorance in organizations. British Journal of Sociology.
46 Bovensiepen, J., & Pelkmans, M. (2020). Dynamics of wilful blindness: An introduction. Critique of Anthropology.

9 And Then There is Dr. Kato: How Does It Look and Where Do We Go from Here?

In March and April 2020, New York City became the epicentre of the pandemic in the USA, and the city was in a much more serious situation than many other cities across the world. New York had received in-bound traffic from COVID-affected zones, and cases had escalated rapidly. Around then, hospitals in New York began postponing previously scheduled surgical operations to accommodate the traffic of COVID-19 cases. Some surgeries could not be postponed, and surgeons took the risk of doing them. At NewYork-Presbyterian, Columbia University Irving Medical Center, Dr. Tomoaki Kato continued to perform surgeries. Dr. Kato was a star surgeon; some called him the hospital's Michael Jordan: a man of exceptional ability, innovative, and a marathon surgeon, operating 20 hours at a stretch. He was also extremely fit and a marathon runner. However, he got COVID in March 2020, became acutely sick, and came close to death a number of times. After two months of illness, which included being on a ventilator, he started to recover and went home in May 2020 to a cheering crowd of hospital staff chanting "Kato". In August, he started performing surgery again, first using robotics, and in September, liver transplants, with his shoulder still strapped in tape. Dr. Kato said he had learnt a lot during his sickness: how patients feel and, more importantly, why he needed to pass his skills on. In other words, he said, his skills needed to be everybody's sometime soon [1]. Dr. Kato's case represents some of the best outcomes of the pandemic. The pandemic is ending, and people have started getting back to normal routines
but are still wary of variants. Vaccines developed in such a short time have provided extraordinary protection. The Boeing 737 Max is back in the air with its malfunctions and bugs sorted out, and the aircraft has been certified by the regulators, internationally, to fly again. Boeing settled many lawsuits, and the FAA acknowledged flaws in policies and started a process of putting in place measures for better oversight. The air crash and the pandemic have provided extraordinary illustrations of human behaviour. While there have been many instances of avarice or poor mental models, many others also have shown incredible courage, altruism, sheer dedication and commitment, and deference to expertise. For example, people such as Dr. Kato provided inspiring commitment to professional work. On the other hand, we heard of cases of hoarding of masks and sanitisers. The book has focussed on the frailties of human behaviour, suggesting there will be some level of inevitability to accidents and disasters occurring, but at the same time showing the promise of preventing them from happening again because of determined efforts. Pandemics will continue to occur, accidents will continue to take place, sociologists will continue to tell you these events are inevitable, while scientists and engineers will continue to develop new technology, using processes and regulation they can envision, with a belief that learning from failures will reduce the occurrence of these events. However, there is more to human error than just incubation to failure or root causes as described by failure analysts. It is about ensuring that human error is minimised by taking care of risk perceptions and understanding their impact. It is also about understanding that human behaviour can be driven by profits, market share, financial gain, and fame. At the end of the day, decision makers need to be accountable, in terms of being aware of the implications of risk perceptions and risk for gain. In the Boeing 737 Max case, and many times during the coronavirus pandemic, there has been considerable confusion on this. This chapter looks at a future that might still resemble the past, but with a wiser regulatory position that is fully aligned to risk perceptions and organisational behaviour, apart from the currently practised conventional scientific and engineering worldviews. Over the various chapters, the strikingly important things have been human behaviour, risk perception, complexity of technology and organisations, fallibility of mental models, and risk taking for gain leading to accidents and disasters. Many of these factors lead to a sense of inevitability among sociologists about accidents. Equally important is how risk perception is viewed by sociologists compared to scientists and engineers. This makes it obvious that risk perception should be included in a regulatory framework. In this chapter, a case is made for the regulation of common and public good based
on Ostrom’s theory [2] and exploring HRO theory as a possible solution to embedding risk perception into technology and processes.

9.1 Human Behaviour

In the book, the issue that has mattered most has been human behaviour: cultural, social, and individual behaviour. It has been emphasised that scientists and engineers effectively ignore the effect of human behaviour in estimating risk (basically risk perception of humans), which has a role in accidents and disasters. Throughout the book, the effect of cultural and social interactions in shaping risk has been discussed on various fronts, from using seat belts in cars to COVID vaccinations. Chapter 4 discussed complexity, and it is clear that human beings, especially those who do not have the experience of science and technology, view risk very differently. Chapter 5 describes the views of Mary Douglas and Perrow and other sociologists on the influence of human behaviour on accidents and disasters and the shaping of risk perception. The pandemic has offered a great opportunity for studying risk perception. There were a number of people who feared vaccinations and are against them (called antivaxxers), sometimes even when they are actually in the healthcare profession; for example, a number of New York nurses refused vaccinations [3]. However, mental models can be changed and refined with training, scenario planning, and simulation, as discussed in Chap. 7. Human behaviour is many times typical: psychological and experience driven on the one hand and social compulsions on the other. In combination, it has always presented a dilemma and challenge to a risk regulator but has never been picked up and worked upon. In reality, human behaviour and social behaviour also include risk taking for gain, something not dealt with explicitly by regulators, except perhaps in financial risk regulation. Risk taking is individual, cultural, and social. In attempting to retain one’s status and position in a social fabric, there could be a nudge towards risk taking. Notwithstanding the social structure, individual risk taking for gain is common and many times part of the evolutionary fabric. An organisational ethos that encourages risk taking for gain and has no independent process in place to contain deviance leads to possibilities for accidents and disasters. While advocates of rational choice theory argue that human behaviour is tailored towards choices for self-interest (as in Chap. 8) and can be termed risk taking for gain, it sometimes includes community interest, organisational interests, or interest groups. The challenge, however, is to ensure that these
choices are optimal for the common good as will be discussed in a later section. Human behaviour can also be seen from an evolutionary perspective in terms of survival of the fittest of the individual, the family, the community, and even the country. Human behaviour is intertwined with culture; some cultures and civilisations see errors, deviance, and even malafide intent differently, some attempt forcible corrections, and some do not even blame. What we learnt thus far is that there will be honest mistakes and human frailties, just as there will be avarice and risk taking for gain. When one takes risk, one takes it as an individual but is heavily influenced by the social fabric. On the other hand, risk taking in the design of a new aircraft or a pilot flying in bad weather takes a risk for self as well as passengers. There are organisational leaders who attempt to take that risk for a group or their entire organisations. Additionally, wrapped around all this is blame, which is intertwined with morality and ethics and truly civilisational worldviews in terms of how intent for gain is viewed. Not much is said by scientists and engineers on these aspects of human behaviour, even less on morals. Blame is human behaviour after all, and when blame is seen in terms of the law, negligence can be established for liability. In the law of tort, the effect of liability is seen in terms of costs and redressal. No regulation, at present, has a process to capture actions based on risk perceptions. Risk regulation presently concentrates on technology failures and operation errors but not much on risk perception. Therefore, the requirement to provide a nudge towards mindfulness and sharing of common resources is more effective than just blame, as we see in the next sections.

9.2 Inevitability

Based on the previous chapters, it can be said that many sociologists feel a sense of inevitability about accidents and disasters, especially in cases where complex technologies are used. Overall, all actions are human behaviour related, and Perrow in his normal accidents theory emphasises this. As noted in Chap. 5, Douglas expresses scepticism about the scientists’ and engineers’ position on risk, as it does not account for human behaviour. Turner refers to the incubation of errors in complex technologies and systems in his book on man-made disasters, and these errors, small as they may be individually, provide that link for an accident to occur. Downer refers to epistemological failure as a cause for accidents, which broadly refers to being unable to foresee. These are important opinions that lead to that sense of inevitability.


The black swan events discussed in Chap. 5 represent the inability of foresight to capture all possible events, and Taleb records the impact of the unforeseeable. Probability and inevitability may appear to be from two ends of the spectrum, but even in a case of low probability of occurrence, these events can happen. In the Boeing 737 Max case, the probability of flight control causing an accident was one in millions. However, two accidents occurred in a few months, reinforcing the view of inevitability. Both the black swan theory and probability theory also indicate inevitability, even when there is an assessment of a low probability of occurrence. The black swan theory acknowledges that human beings cannot foresee everything and that there could be the inevitable. Probability theory is about what can be foreseen, but the chances of it occurring are either low or high. Together, they point to inevitability. Rumsfeld's famous, if convoluted, statements about "known knowns" and "known unknowns", discussed in earlier chapters, effectively acknowledge what scientists and engineers need to acknowledge openly. While they were made in the context of the Iraq war [4], these statements have implications for how we view the possibilities of accidents and disasters. It also means failure of imagination leading to accidents and disasters. The black swan implies "unknown unknowns", which are impossible to imagine and hence predict. Given that science and engineering have limitations, there are "known unknowns". When a system fails, corrections are made; it is also possible that, during project planning, the failure scenario was examined and foreseen, and it is just that the probabilities were low and hence ignored. On the other hand, if the probability of occurrence is very high, it may eventually lead to abandonment of the project. The fact that events can be foreseen but will have low probability is also something that can be associated with risk perception. The penalty can be enormous when the known possibilities of a disaster are not addressed. On the other hand, sociologists paint all these (the known and the unknown) with the same brush of inevitability.

9.3 Scientists and Engineers: The Ostrich with Its Head in the Sand

Scientists and engineers view the world differently as compared to others, based on their training. Fundamental to the typical position of scientists and engineers is that technology and nature can eventually be understood based on reductionist principles and that complexity truly is the aggregation of simple behaviours and that the probability of known events occurring
can be computed. The reductionist philosophy is to structure a problem as much as possible, breaking it down repeatedly, obtaining as much understanding of the subsystems as possible, with the aggregation of their behaviours finally reflecting the behaviour of the whole system. While this worldview has had much success in engineered systems, the aggregated behaviour has sometimes been different from what was envisaged, leading to the view that when the whole is more than the sum of its parts, there is the possibility of failure of foresight in terms of predictability. The lack of inclusion of human behaviour, which involves human errors and risk for gain, in predictions by scientists and engineers is similar to the ostrich with its head in the sand, unable and unwilling to acknowledge the importance of risk perception and its effect on the occurrence of accidents and disasters. This consequently affects regulation too. While there is reluctance among scientists and engineers to accept and believe that black swan events can occur, when they do, they are of course analysed and corrections made. The belief system of scientists and engineers is driven by the use of conjectures and refutations [5], using a predict, test, and evaluate strategy. There is mostly tacit acknowledgement about the possibilities of failure and explicit commitment to learning from failure. Scientists and engineers are committed to probability as a tool, but as Mary Douglas says (see Chap. 5), they are reluctant to look at the risk perception of human beings as a variable and work with it. The question is why this is so when many accidents are caused by human actions. Science and engineering work primarily with what is known, based on which scenarios of operations are developed. Regulation follows the science and engineering process. There is a need for regulation to acknowledge that mental models keep evolving and that human error and mistakes could be actions based on avarice. In engineering, safety education and audits have played a part in refining mental models to control error, but risk perception has not been gauged or monitored in a formal way. The COVID pandemic, as discussed in the previous chapters, shows why positions of scientists and engineers can lead to failures unless the risk perception of the public is taken on board. This applies to climate change and even older issues such as safety rules for seat belts, as seen earlier in this book. As science evolves, convincing a public driven by scepticism and complacency needs to be part of the risk assessment in engineering and science, essentially through regulation. This book has outlined two points: refining mental models based on risk perception and managing avarice for gain that will impact risk, both of which have been outside the scope of science and engineering.


As discussed in Chap. 5, complexity is generally regarded as one of the main causes of accidents and disasters, and even more so, the belief that human beings have limitations in understanding complex systems and technologies and that there is a certain inevitability with respect to failures for this reason. There is scepticism among sociologists regarding the reductionist methods to handle complexity, as discussed in Chap. 4. Complexity can be from nature or technology. While the pandemic could be seen as an illustration of natural complexity, the Boeing 737 Max is an illustration of how technological complexity played a role in the disaster. In many cases, automation is introduced to manage complexity, to reduce workload by human beings, for example, airline pilots, and to reduce errors due to the many actions needed to perform near instantly to avoid accidents. In some cases, automation causes accidents if the design of the automated system is inadequate. Thus, the need for an enhanced framework for regulation that embeds the effects of risk perception among all stakeholders and provides for improved risk understanding to contain accidents and disasters is evident.

9.4 Tragedy of Commons, Ostrom, and Risk Regulation

A new approach to regulation, especially dealing with risk perception, avarice, and risk for gain, could be the emphasis on shared resources and the implications for their availability, be it for an organisation, a nation, or a global entity. Much of what engineers and scientists do is to design for optimal use of resources. Risk regulation is also in the business of regulating resources and reducing the risk to those who use them. The term tragedy of the commons, described in 1968 by Hardin [6], uses the British concept of the commons, meaning common land in each village available for shared use; individuals acting in their short-term interest will bring about a resource catastrophe if the commons is unregulated or managed badly. This implies that it is inevitable to see a run on the resource when it is claimed and not shared. This argument has implications for regulation in terms of being unable to ensure sharing of resources equitably. As the sharing reduces, risk perceptions change, while the goal of risk regulation should be to ensure all stakeholders are able to manage the risk equitably. Hardin's concept also has a ring of inevitability to it regarding the unequal sharing of resources leading to a disaster.


The Nobel laureate Elinor Ostrom's work [2, 7, 8] challenged Hardin's argument with evidence that individuals, groups, and communities have shown the ability to manage the commons, or what is in effect a set of collective resources. She argues that there will be no collapse if common resources are well managed by direct stakeholders and there is regulated access and cooperation. Ostrom's work, when interpreted for organisational good, means that the risk perception of the stakeholders of the organisation is important. In other words, the nudge for deviance will be restricted if one takes the view of being careful of the commons. This leads to the question of who minds the commons. The lessons noted by Derek Wall [9] are about setting boundaries for the commons: shared resources in organisations, nations, and communities. Thus, there is a need to set regulation that has meaning for local issues, enables participatory decision making, and monitors resources, but is embedded within larger networks, which is akin to the system of systems concepts discussed in an earlier chapter. Organisations, irrespective of the power structure, have shareholders and multiple stakeholders. The framework of using the resources for the institution is akin to that of the commons. In that sense, it provides an opportunity to further embed safety practices into the framework. Regulating social behaviour in organisations, even for a safety and accident prevention culture, would probably be unworkable beyond a certain point. While regulation is key, the tricky bit is in ensuring that it is not overarching and intimidating, but provides the framework for organisational common good. An enhanced HRO with embedded self-regulation and the appropriate nudges and checks shows promise.
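The contrast between Hardin's pessimism and Ostrom's evidence can be caricatured with a toy simulation of a regenerating shared resource. Everything in the sketch below (the logistic regrowth, the harvest levels, the time horizon) is an arbitrary assumption chosen only to show the dynamic; it is not a representation of either author's model.

```python
# Toy commons: a logistic-growth shared resource under two harvesting regimes.
# All parameters are illustrative assumptions, not empirical values.

def run_commons(per_user_take, users=10, stock=1000.0, regen=0.15,
                capacity=1000.0, years=30):
    """Simulate a regenerating shared resource under a fixed per-user harvest."""
    for _ in range(years):
        stock -= min(stock, per_user_take * users)          # total harvest this year
        stock += regen * stock * (1.0 - stock / capacity)    # logistic regrowth
        if stock <= 1.0:                                     # effectively exhausted
            return 0.0
    return stock

print(f"unregulated (each takes 20/yr): stock after 30 years = {run_commons(20):.0f}")
print(f"agreed quota (each takes 3/yr):  stock after 30 years = {run_commons(3):.0f}")
# With these assumed numbers the unregulated commons collapses within a decade,
# while the cooperatively managed one settles near a sustainable level.
```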

9.5 What Should Be the Nudge in Risk Regulation?

In Chap. 8, the power of the nudge was described. It was also noted that the nudge for deviance was apparent in many of the accidents described in Chap. 2. As in behaviour economics and public policy, the positive nudge, a nudge for change, has been highlighted in Chap. 8. As noted earlier, in the regulation of risk, the emphasis has been on the use of science and engineering approaches. The only positive nudge science and engineering can provide is training and refinement of mental models as part of the process to be embedded in design, manufacturing, and operation. This is truly how risk regulation is presently structured. However, there is a need to have a positive nudge that constrains deviance and risk taking for gain, when it has
implications for public and common good. In combination with the sociologists' view of how risk is perceived, the embedding of a sense of the commons in risk regulation provides the nudge to reduce the risk. The arguments made by behavioural scientists and sociologists regarding risk perception and its variability, and the nudge for public good, need to be explicitly incorporated in regulation. The nudge towards forming a unified and consistent risk perception is also important. The pandemic is a case in point: the variability of risk perception hurt the management of the pandemic. This included opposition to the vaccines and prompting the use of unproven drugs and procedures. Risk regulation here needed the vital nudge to enable the fragmented viewpoints to consolidate into more rational views, which are formed by refining the mental models. These models need to be consistently retrained with the latest information, evidence, and data. Based on the Ostrom theory, participatory decision making, deference to expertise, and organisational oversight embedded in a regulatory process should enable the nudge to constrain deviance and risk taking for gain and have an influence on public good. These are to that extent possible in HRO theory. Specifically, for scientists and engineers, the nudge can embed risk perception into fault trees and identify possibilities of catastrophic failure based on the faults, if modelled appropriately (a schematic sketch is given at the end of this section). These trees can be continuously refined as mental models improve: as scientific, counter-intuitive learning is embedded, the fault trees change. The other part is to supplement the PRA methodology, system engineering-based reliability, and hazard analysis with the effects of poor risk perception on the possibilities of catastrophic failure. Leadership nudges as an enabler for organisational behaviour and culture to resist deviance, irrespective of individual behaviour, are important. As seen in Chap. 2 among the many accidents, individual deviance needed checks and a sense of mindfulness of organisational good such that the organisation would not be termed deviant. Karl Weick [10] argues for organisational mindfulness, which is very much a nudge that aggregates individual behaviour and eventually reflects the organisational ethos and worldview. It is important to embed these into the organisational structure through a framework.
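As a purely schematic illustration of the suggestion above, the sketch below folds a crude "risk-perception" multiplier into the basic events of a miniature fault tree and shows how the estimated top-event probability responds. The tree structure, the probabilities, and the multiplier are all assumptions made up for this sketch; it is not an established PRA technique.

```python
# Miniature fault tree with a crude adjustment for risk perception / deviance.
# All numbers are illustrative assumptions.

def or_gate(*probs):
    """Event occurs if any input occurs (independence assumed)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    """Event occurs only if all inputs occur (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def top_event(sensor_fail, software_fault, crew_recovery_fail, perception_factor=1.0):
    # Catastrophe = (sensor fails OR software faulty) AND crew fails to recover.
    # perception_factor crudely inflates basic events to reflect, say, deviance
    # or overconfident risk perception; 1.0 means the nominal engineering numbers.
    trigger = or_gate(sensor_fail * perception_factor,
                      software_fault * perception_factor)
    return and_gate(trigger, crew_recovery_fail * perception_factor)

baseline = top_event(1e-4, 1e-5, 1e-2)                        # nominal basic events
stressed = top_event(1e-4, 1e-5, 1e-2, perception_factor=10)  # adjusted upwards
print(f"baseline top-event probability: {baseline:.1e}")   # ~1.1e-06
print(f"adjusted top-event probability: {stressed:.1e}")   # ~1.1e-04
```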

9.6 Why is HRO so Important?

HRO was introduced in Chaps. 4 and 5, and a revisit is now needed so that it can be seen in a new light. In Chap. 4, HRO was introduced as part of navigating and managing complexity; the concern there, as noted by sociologists such as Perrow, was the inevitability of accidents due to human limitations in dealing with complexity.

Automation was supposed to reduce the number of actions a human being would need to take when operating complex systems such as nuclear power stations and aircraft. While automation has contributed to a reduction in workload, there have been cases where it has led to accidents, for example with automated flight control in aircraft and with driverless cars. HROs, in the past, were intended to provide an overarching framework for reducing accidents when using complex technology. Examples are aviation and healthcare, sectors where technological and organisational complexity are very high and there is a need to ensure a safety culture. In that sense, HRO is an acknowledgement that managing complex technology involves human and social behaviour, and a tacit acknowledgement that scientists and engineers need to take on board all possibilities of human behaviour. An updated framework of HRO theory that incorporates social behaviour, the management of deviance, and avarice is therefore useful. It would emphasise the creation of a safety culture embedded in organisations that addresses complexity and risk, and establish the HRO as an organisational norm that allows the appropriate components of a safety culture to be embedded.

The five traits of an HRO, as discussed in Chap. 4, are (1) preoccupation with failure, (2) reluctance to simplify, (3) sensitivity to operations, (4) commitment to resilience, and (5) deference to expertise. It is important to note that all of these are about human behaviour and sociology. While scientists and engineers are committed to safety, not all are preoccupied with failure; in fact, most are preoccupied with success. The risk perception of those preoccupied with success was discussed in detail in Chap. 8 in terms of risk taking for gain. While engineering concepts of fault trees and the cascading effects of subsystem failure signal a preoccupation with failure, they do not embed the risk perception of individuals, or its sociology, in the fault trees. When regulatory oversight is established for any technology, it should take into account the human and social aspects of risk. In engineering systems, these aspects can be incorporated into fault trees, and alerts and checks can be issued (a small illustration follows at the end of this section).

Reluctance to simplify is an acknowledgement of complexity. The conviction that complex systems need to be analysed as such, together with the acknowledgement that rare and unforeseen events do occur, enables agility in responding even to those events. Sensitivity to operations is a clear acknowledgement of human limitations and of the possibility of human error in complex systems; as elaborated in Chap. 6, these are the errors that HRO, with its processes and training methodology, can reduce.

HRO thus provides the opportunity and the nudge for training the operators of a technology, with the understanding that their mental models of that technology will be limited to the information that has been made available to them. Commitment to resilience is about building processes that help the organisation stay agile and innovate within a dynamic environment as events unfold; in effect, a quick-reaction system with communication and a command-and-control structure. It is an acknowledgement of unforeseen events and of the capability to respond as a disaster unfolds. Again, the regulatory environment should enable collaborative teams working in a multidisciplinary environment with flexibility and agility of roles.

One of the casualties during the pandemic was deference to expertise, as it was in the Boeing 737 Max case: even when warnings were sounded, those with decision-making power may have bypassed expertise. The Shuttle disaster, in the case of the Morton Thiokol engineer involved with the O-rings, is a well-known example of lack of deference to expertise and of authority overriding it. In all cases, expertise is also a matter of more refined mental models (see Chap. 7). Expertise, in conjunction with the other facets of HRO, is critical for response, agility, and resilience, and organisations that develop such skill structures are, in general, HROs.

In many ways, HRO, used correctly, is an acknowledgement of a nudge towards preoccupation with failure, and also a nudge towards deference to expertise. Training through HRO should enable improved rational choice, a rational choice being assumed to be an informed one. Rational choice theory argues that individuals make rational choices to maximise their self-interest. In reality, using an updated HRO should result in outcomes that give organisations the best benefit. In combination with the sensitivity to the common interest outlined earlier, using HRO as a guide should enable outcomes that are better for the individual and the community as a whole, while rationalising risk perception.
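As a small illustration of the "alerts and checks" idea mentioned above, the sketch below compares a deviation rate observed in operation against the rate assumed in the safety case and flags when the assumption is breached. This is an assumption-laden example, not a procedure from any regulator or standard: the threshold, numbers, and function name are hypothetical.

```python
# A minimal, hypothetical sketch of an operational check in the spirit of
# "sensitivity to operations": compare the rate of deviations actually observed
# in service with the rate assumed in the safety case, and raise an alert when
# the assumption no longer holds. Threshold and numbers are illustrative only.

def check_assumption(observed_events: int,
                     operating_hours: float,
                     assumed_rate_per_hour: float,
                     margin: float = 2.0) -> str:
    """Return an alert level by comparing observed and assumed event rates."""
    observed_rate = observed_events / operating_hours
    if observed_rate > margin * assumed_rate_per_hour:
        return "ALERT: revisit the fault tree and retrain mental models"
    if observed_rate > assumed_rate_per_hour:
        return "WATCH: observed rate exceeds the assumed rate"
    return "OK: within the assumed rate"


# Hypothetical example: four cross-check bypasses reported in 1,000 operating
# hours, against an assumed rate of one per 1,000 hours.
print(check_assumption(4, 1000.0, 1e-3))
```

Trivial as it is, a check of this kind encodes preoccupation with failure and sensitivity to operations in a form that scientists and engineers already work with, which is the sense in which HRO traits can be given teeth inside an engineering process rather than remaining organisational slogans.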

9.7 Integration of These Concepts into Risk Regulation

The story thus far is that human behaviour, as a whole, must be accounted for and embedded in risk regulation. What we have learned is that human and social behaviour, complex as they are, depend on the prevailing social fabric and culture, which need to be taken into perspective.

A framework that acknowledges this is needed for the regulation of risk. Even in the case of the pandemic or the Boeing 737 Max crashes, the lessons will be forgotten over the years, particularly in terms of organisational and institutional memory, unless they are embedded in a regulatory framework. As discussed in this book, there will be negligence, honest mistakes, and poor mental models, but there will also be avarice and decisions made from personal interest rather than the common good. There will also be choices that are truly opportunistic; the hoarding of sanitisers during the pandemic was the most graphic illustration of the tragedy of the commons. The conclusion, therefore, is that human behaviour will always need to be considered, especially in combination with technological complexity and human frailty. Training appears to provide a way of reducing these risks, but it is the nudge towards recognising the effect on the commons that needs to be embedded through the HRO. Weick and coworkers [10] describe a "collective mindfulness" in organisations that embed the HRO culture, organisations that operate with high reliability despite the risks and the perception of danger.

In the pandemic, corrections have occurred, constant learning has taken place, and regulatory oversight has kicked in. The important point is how to embed human and social behaviour as constraints in regulation. This is not as straightforward as the regulation of technology that scientists and engineers are accustomed to, but HRO provides the best option for building these constraints into regulation. In the Boeing 737 Max case, while there were many checks and much oversight, the organisational structure that should have enabled them had broken down. Sensitivity to operations and a clear acknowledgement of human limitations and human error were missing. Complex technologies, such as flight control systems, were taken for granted, and another HRO tenet, commitment to resilience through building and checking different scenarios, was also ignored, something that could have aided the understanding of pilot behaviour once the aircraft was put into operation. In the pandemic, sensitivity to the common good was sometimes absent, most visibly in vaccine hesitancy: even though vaccines served the common good and protected both the individual and others, hesitancy contributed to hospitals filling up. Over time, however, all the basic tenets of HROs have found value and have implicitly been embedded into the processes of pandemic management.

The possibility of accidents will always be there. However, the cases examined in this book show that many of them, especially those involving organisational, societal, and regulatory combinations, could have been foreseen, though in some cases only with effort.

Further learning and scenario building can transform some of the unknowns into scenarios for which responses can be devised. But some totally unforeseen scenarios will still occur. Can HROs build robustness into regulation against a black swan event? The answer lies in this revisit to HRO: its basic tenets build agility in response, use expertise to recover from the event, and reduce the fragility of the system's response. In a social and cultural context, while a black swan event cannot be foreseen, the response to it can come through the HRO, in terms of agility and system robustness. Overall, the need to expand the scope of regulation to encompass social components and human behaviour, in both organisations and societies, has never been greater. While external and explicit regulation constructed through laws and legislation that incorporate social behaviour and risk perception may always be needed, it is best augmented through self-regulation, mindfulness, and HRO within organisations and communities.

References

1. Grady, D. (2021). 'I had never faced the reality of death': A surgeon becomes a patient. The New York Times, June 3, 2021.
2. Ostrom, E. (2000). Governing the commons: The evolution of institutions for collective action. Cambridge University Press.
3. Barnard, A., Ashford, G., & Vigdor, N. (2021). These health care workers would rather get fired than get vaccinated. The New York Times, September 26, 2021.
4. Graham, D. A. (2014). Rumsfeld's knowns and unknowns: The intellectual history of a quip. The Atlantic, March 18, 2014.
5. Popper, K. (2002). Conjectures and refutations. Routledge.
6. Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248.
7. Ostrom, E. (2009). Engaging with impossibilities and possibilities. In K. Ravi & K. Basu (Eds.), Arguments for a better world: Essays in honor of Amartya Sen, Volume II: Society, institutions and development. Oxford University Press.
8. Ostrom, E. (1998). A behavioral approach to the rational choice theory of collective action: Presidential address. American Political Science Review, 92(1), 1–22.
9. Wall, D. (2014). The sustainable economics of Elinor Ostrom: Commons, contestation and craft. Routledge.
10. Weick, K., Sutcliffe, K., & Obstfeld, D. (2013). Organising for high reliability: Processes of collective mindfulness. Research in Organisational Behaviour, 21, 81–123.