The Cognitive Autopsy
The Cognitive Autopsy A Root Cause Analysis of Medical Decision Making Pat Croskerry, MD, PhD Professor of Emergency Medicine, Director, Critical Thinking Program Dalhousie University, Halifax, Nova Scotia, Canada
Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2020

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Croskerry, Pat, author.
Title: The cognitive autopsy : a root cause analysis of medical decision making / Pat Croskerry.
Description: New York, NY : Oxford University Press, [2020] | Includes bibliographical references and index.
Identifiers: LCCN 2019054280 (print) | LCCN 2019054281 (ebook) | ISBN 9780190088743 (paperback) | ISBN 9780190088767 (epub) | ISBN 9780190088774
Subjects: MESH: Emergencies | Clinical Decision-Making | Diagnostic Errors—prevention & control | Emergency Medical Services | Case Reports
Classification: LCC RC86.8 (print) | LCC RC86.8 (ebook) | NLM WB 105 | DDC 616.02/5—dc23
LC record available at https://lccn.loc.gov/2019054280
LC ebook record available at https://lccn.loc.gov/2019054281

This material is not intended to be, and should not be considered, a substitute for medical or other professional advice.
Treatment for the conditions described in this material is highly dependent on the individual circumstances. And, while this material is designed to offer accurate information with respect to the subject matter covered and to be current as of the time it was written, research and knowledge about medical and health issues is constantly evolving and dose schedules for medications are being revised continually, with new side effects recognized and accounted for regularly. Readers must therefore always check the product information and clinical procedures with the most up-to-date published product information and data sheets provided by the manufacturers and the most recent codes of conduct and safety regulation. The publisher and the authors make no representations or warranties to readers, express or implied, as to the accuracy or completeness of this material. Without limiting the foregoing, the publisher and the authors make no representations or warranties as to the accuracy or efficacy of the drug dosages mentioned in the material. The authors and the publisher do not accept, and expressly disclaim, any responsibility for any liability, loss, or risk that may be claimed or incurred as a consequence of the use and/or application of any of the contents of this material. 9 8 7 6 5 4 3 2 1 Printed by Marquis, Canada
Contents
Foreword (Mark Graber)
Preface
About the Author
Acknowledgments
Introduction
The Cases
Case 1. Christmas Surprises
Case 2. Distraught Distraction
Case 3. The Fortunate Footballer
Case 4. An Incommoded Interior Designer
Case 5. Teenage Tachypnea
Case 6. The Backed-Up Bed Blocker
Case 7. The English Patient
Case 8. Lazarus Redux
Case 9. A Model Pilot
Case 10. A Rash Diagnosis
Case 11. The Perfect Storm
Case 12. A Case of Premature Closure
Case 13. Postpartum Puzzler
Case 14. The Blind Leading the Blindable
Case 15. Pseudodiagnosis of Pseudoseizure
Case 16. Failed Frequent Flyers
Case 17. Explosions, Expletives, and Erroneous Explanations
Case 18. The Representativeness Representative
Case 19. The Michelin Lady
Case 20. An Instable Inadvertence
Case 21. A Laconic Lad
Case 22. The Misunderstood Matelot
Case 23. A Hard Tale to Swallow
Case 24. A Rake's Progress
Case 25. Deceptive Detachment
Case 26. A Search Satisfied Skateboarder
Case 27. The Vacillated Vagrant
Case 28. A Tale of Two Cycles
Case 29. Misleading Mydriasis
Case 30. Bungled Bullae
Case 31. Overdosing the Overdosed
Case 32. The Lost Guide
Case 33. Hazardous Handover
Case 34. Double Trouble
Case 35. Tracking Fast and Slow
Case 36. Alternate Alternatives
Case 37. Notable Near Miss
Case 38. A Stone Left Unturned
Case 39. Sweet Nothings
Case 40. Straining the Strain Diagnosis
Case 41. Missed It
Conclusion: Strategies for Improving Clinical Decision Making
Appendix A: Diagnoses in 42 Cases
Appendix B: Probable Biases and Their Frequencies in 42 Clinical Cases
Appendix C: Analysis of Ordinal Position of Bias in Clinical Cases
Appendix D: Potential Error-Producing Conditions
Appendix E: Analysis of Knowledge-Based Errors in the Case Series
Glossary of Biases and Other Cognitive Factors
Index
Foreword
If you wish to visit a new place, help is available everywhere: maps, guidebooks, internet sites for trip planning, online reviews, personal advice, and more. But nothing beats finding a local guide—someone who lives there and knows the land, its people, its culture, and "how things work here." That is exactly the role that Pat Croskerry plays in The Cognitive Autopsy—he is our guide to the still-emerging arena of understanding diagnostic error.

Pat has spent a long career working on the front lines in emergency medicine, also known as the emergency room or the emergency department (ED). Pat knows this place like the back of his hand, and he is the perfect person to show us around and orientate us to what there is to see. The Cognitive Autopsy is a unique and fascinating collection of cases that span the spectrum of diagnostic error types and causes seen in the ED. Each one is its own self-contained story, with lessons on cognitive and affective biases. Stories are powerful, and it is said that stories may be how our knowledge base is organized. A diagnosis is itself a story—the medical story that corresponds to and flows from the patient's story.

Although this book is about diagnostic errors, let me state for the record that, despite all its challenges, diagnosis in the ED typically works, and works very well. More often than not, the diagnosis is made and is correct. Or, at the very least, the next step in the diagnostic journey is clarified: The patient can return home; or needs follow-up by their primary care provider; or must be seen by a specialist; or requires admission. Diagnosis succeeds in the ED because patients are being evaluated by experts. Beyond their years of training, a health care team has seen thousands of patients, and it is this cumulative experience and expertise that patients count on when they arrive at the ED.
Hospitals and health care centers have incredibly sophisticated laboratory tests and imaging modalities that provide further insight into a patient's condition.

Despite its overall successes, the ED is sometimes referred to as the petri dish for diagnostic errors—it is the perfect laboratory in which to learn about adverse events and study them. Every patient presents a new problem to solve. And time is short—there is an ever-present sense of urgency. One can't dwell on any problem too long because there are so many more patients and problems coming through the doors. The atmosphere tends to be chaotic: people are coming and going, and interruptions abound.

A major constraint is that the diagnostic team in the ED doesn't know the patient. They don't know what the patient was like before their symptoms emerged. The patient may have to communicate that change to paramedics, the triage nurse, and physicians, and that can be difficult to do. Communication issues, both written and verbal, can mushroom from there; for example, the nursing staff need to communicate with the physicians, the physicians with radiology consultants or subspecialists, and so on. Studies of adverse patient safety events invariably find that breakdowns in communication are the most commonly encountered issue, and the same is true in studies of diagnostic error.

Understanding how diagnosis works in the ED is necessary for recognizing when it fails and why it fails. That brings me to one of Pat's other strengths: He is a trained psychologist and our field's foremost authority on how diagnosis emerges from our cognition. Pat has explicated this process elegantly in his other writings: Diagnosis results from the interplay of Type 1 processing, our intuition and ability to quickly recognize routine things, and Type 2 processing, our ability to consciously consider other possibilities and construct plans for how to differentiate the various options. This background in cognitive psychology is invaluable to understanding diagnostic error in the ED, and it is the necessary counterpart to understanding the system-related factors that may be abundant in cases of error.
The concept of "situated cognition" is helpful in pointing out the relevant interrelationships between the cognitive and the system-related factors in cases of diagnostic error: The success of the diagnostic process is invariably tied to the environment in which the process takes place. The environment can support diagnosis or inhibit it. Effective diagnosis, for example, often depends on how the clinician's thoughts and actions are supported by interactions with colleagues and other helpful resources. Conversely, production pressure, distractions, tension, fatigue, and emotions can adversely impact diagnostic performance.

Ultimately, the goal of The Cognitive Autopsy is to improve the diagnostic processes used by health care providers and organizations. The world of diagnostic error is so large that it can seem formidable, even intimidating. Guidebooks such as The Cognitive Autopsy give us confidence: instead of being humbled by the problem, we can set about understanding and addressing it. Everyone involved in diagnosis would benefit from reading this book.

Mark Graber, MD, FACP
Founder, President-Emeritus, and Chief Medical Officer
Society to Improve Diagnosis in Medicine
Preface
For as long as medicine has existed, even back to the prehistoric shamans, knowledge about disease has been stored in the form of narratives, verbal at first and later in the written record. Practitioners of medicine had little to guide them other than their narratives and the brutal lessons learned from trial and error. If something appeared to work, it would be incorporated into their armamentarium. As different disease states came to be recognized, a consensus would develop on specific treatments. Through the past two centuries, this process has evolved with the emergence of the medical literature, with some journals retaining the original idea of publishing case reports with an emphasis on medical management. Over several decades, the Australian Medical Journal published such a series by general practitioner John Murtagh; the series, Cautionary Tales, proved extremely popular.1 Others have focused specifically on the educational value of medical malpractice cases. The online series Medical Malpractice Insights, edited by Charles Pilcher, is an invaluable opportunity for physicians to vicariously learn from others, both patients and colleagues, the first and second victims of such unfortunate events.2

Valuable as these cases are, their focus is often necessarily limited to the tangible and obvious facts of the case; a similar constraint is noted in root cause analysis (RCA) after an adverse event. RCA aims to establish the root causes of the event so that appropriate solutions might be identified. Typically, this involves repeatedly asking the "why" question, moving linearly from proximal to distal events. For example, following the sinking of the Titanic, the first question asks why the ship sank, to which the answer is: It had a hole in its side (a proximal cause). Why did it have a hole in its side? Because it hit an iceberg. Why did it hit the iceberg?
And so on, such that, as we get closer to the distal causes, there is a greater chance that we will find an ultimate cause for which there may be a solution. However, one of the various problems with RCA teams in health care3 is that they rarely, if ever, deal sufficiently with individual decision making and the cognitive issues that underlie it. They might conclude that an insufficient history was taken or that the physical exam was incomplete, which may be legitimate proximal causes of the adverse event, but the search often needs to be taken more distally to get at their respective origins.4 Why wasn't a more complete history taken, and why was the physical exam insufficient?

There are several reasons why we do not go further than the obvious, but one particular bias comes to mind: WYSIATI ("What you see is all there is"), an acronym proposed by Daniel Kahneman.5 This describes the tendency toward cognitive miserliness and is associated with several cognitive biases, including anchoring, ascertainment, unpacking failure, and search satisficing. Among the multitude of reasons for cursory or miserly behavior are time pressures, cognitive overloading, stress, and fatigue—all of which push us toward shortcuts, heuristics, and abbreviated ways of making decisions—any of which may impair decision making.

Adverse event investigators are usually not trained in understanding the human factors that underlie decision making and are therefore unlikely to uncover suboptimal decisions that might have been made. Their tendency is to focus on the tangible and measurable, and on proximal rather than distal causes.6 One of the major impediments in RCA is that the cognitive processes that underlie decision making are invisible. Unlike medication errors and procedural errors, which are tangible and highly visible, errors that underlie cognition can only be inferred from behaviors that may or may not have been witnessed. A cognitive RCA is challenging, but without it our understanding of unsafe events is limited. Nevertheless, that is what I attempt in this book.

Two other recent additions to the literature in this context deserve mention.
Jonathan Howard has published a case-based compendium of cognitive error and diagnostic mistakes in neurology,7 and Cym Ryle has offered cognitive insights into failures in diagnostic reasoning from the perspective of a general practitioner.8 Along with these and other recent initiatives, the current work aims at improving the overall cognitive calibration of the clinician. Becoming less wrong and more rational in clinical decision making is the rising tide that will lift all boats in patient safety.9

A selection of real clinical cases is presented here to illustrate flaws in decision making—the cognitive errors that frequently arise through cognitive and affective biases. They are real, de-identified examples, mostly from my personal experience and mostly from the milieu of emergency medicine (EM), collected over a decade through the 1990s and early 2000s. With occasional updates, the book has been used for clinical teaching at Dalhousie Medical School from 2005 continuously to the present as the Applied Cognitive Training in Acute-Care Medicine (ACTAM) manual. Several cases were added more recently, some contributed by Sam Campbell (Cases 11, 12, and 30), Terry Fairbanks and colleagues (Case 32), and Emil Zamir (Case 14). I am most grateful for these additional contributions; they were too interesting to resist. For the remainder, I take full responsibility. Although some aspects of emergency practice have changed since the inception of the manual, little has changed in the cognitive properties of diagnostic decision making.

The clinical setting of these cases turns out to be fortuitous—EM covers all disciplines and therefore provides a wide variety of cases. Furthermore, when cases are viewed in this setting, they are at their most undifferentiated and more likely to illustrate contextual issues and the intrinsic complexity of the process10 (see Figure I.4 in the Introduction). This contrasts, for example, with an orthopedic or a plastics clinic, in which most cases are already fairly well differentiated once they are referred. When cases are reported in the literature, they are similarly stripped of their detail. Those reported here are also good exemplars of situated cognition, emphasizing Gibson's counsel: Ask not what is inside your head but what your head is inside of.11 The heads of EM clinicians are inside a milieu that has been described as a "natural laboratory for medical error"12 and an ecological war zone for the biases that have been extensively reported in the cognitive sciences literature.

Inevitably, the cognitive analysis offered here will reflect the perceptions and biases of the author. Some might find it an overanalysis of the ways in which physicians think and err. However, in the past, medicine has not spent sufficient time on the process of clinical thinking, so perhaps we can be forgiven for the emphasis offered here. Finally, the irony should not escape us that this book will be published in a year that provides clarity of vision. If it is not biased, 20-20 hindsight has much to teach us.
References
1. Murtagh J. Cautionary Tales: Authentic Case Histories from Medical Practice. New York, NY: McGraw-Hill; 1992.
2. Pilcher CA (Ed.). Medical Malpractice Insights: Learning from lawsuits. https://madmimi.com/s/fe6a85#
3. Peerally MF, Carr S, Waring J, Dixon-Woods M. The problem with root cause analysis. BMJ Qual Saf. 2017;26:417–422.
4. Croskerry P. Our better angels and black boxes. EMJ. 2016;33(4):242–244.
5. Kahneman D. Thinking, Fast and Slow. New York, NY: Farrar, Straus & Giroux; 2011:85–88.
6. Croskerry P. The need for cognition and the curse of cognition. Diagnosis. 2018;5(3):91–94. doi:10.1515/dx-2018-0072
7. Howard J. Cognitive Errors and Diagnostic Mistakes: A Case-Based Guide to Critical Thinking in Medicine. Cham, Switzerland: Springer; 2019.
8. Ryle CA. Risk and Reasoning in Clinical Diagnosis: Process, Pitfalls, and Safeguards. Oxford, UK: Oxford University Press; 2019.
9. Croskerry P. Becoming less wrong (and more rational) in decision making. Ann Emerg Med. 2020;75:218–220.
10. Croskerry P. Adaptive expertise in medical decision making. Medical Teacher. 2018;40(8):803–808. doi:10.1080/0142159X.2018.1484898
11. Mace WM. James J. Gibson's strategy for perceiving: Ask not what's inside your head, but what your head's inside of. In: Shaw RE, Bransford J (Eds.), Perceiving, Acting, and Knowing. Hillsdale, NJ: Erlbaum; 1977:43–65.
12. Croskerry P, Sinclair D. Emergency medicine: A practice prone to error? CJEM. 2001;3(4):271–276.
About the Author

Pat Croskerry is Professor of Emergency Medicine and in the Division of Medical Education & Continuing Professional Development, Faculty of Medicine, at Dalhousie University in Halifax, Nova Scotia, Canada. In addition to his medical training, he holds a doctorate in Experimental Psychology and a Fellowship in Clinical Psychology. He has published over 90 journal articles and 40 book chapters in the areas of patient safety, clinical decision making, and medical education reform. Two of his papers are among the top 3 cited papers in the emergency medicine education literature. In 2006, he was appointed to the Board of the Canadian Patient Safety Institute, and in the same year he received the Ruedy Award from the Association of Faculties of Medicine of Canada for innovation in medical education. He has given over 500 keynote presentations at leading medical schools, hospitals, and universities around the world. He is senior editor of Patient Safety in Emergency Medicine (2009) and senior author of Diagnosis: Interpreting the Shadows (2017). He was appointed Director of the new Critical Thinking Program at Dalhousie Medical School in 2012. He is a Fellow of the Royal College of Physicians of Edinburgh. In 2014, he was appointed to the US Institute of Medicine Committee on Diagnostic Error in Medicine. He was nominated to the Canadian Association of Emergency Physicians' Top Ten List of most impactful Canadian medical educators in 2016.

In addition to his medical career, he has had a lifelong involvement with rowing. He represented Scotland (1967) and was on the Canadian National Team at the World Championships in Nottingham, England (1975) and the Olympic Games in Montreal (1976). He was a National Team Coach for Canada at the Pan American Games in Puerto Rico (1979) and at the World Rowing Championships in Bled, Yugoslavia (1979).
Acknowledgments
My first thanks are to our patients who, although often unknowingly, constantly teach us how to make them safer. I acknowledge my debt to Grant Smith, my doctoral supervisor, and fellow graduate students Gordon Tait and Brian Byrne, who all helped in the journey of discovery about thinking. We are all indebted, probably more than we realize, to the pioneering work of Amos Tversky and Daniel Kahneman for their insights into cognitive bias, and to the other cognitive scientists since: notably Jim Reason for his groundbreaking work on human error, Charles Vincent for its application to patient safety, and Keith Stanovich and his group (and other meliorists) for their more recent work on rationality.

Sherri Lamont at Dartmouth General Hospital immeasurably contributed to the development of these cases in the early days, through her support and technical skills. I also thank Deirdre Harvey for her more recent work. Mike Murphy's early encouragement for this work, Doug Sinclair's support and backing, and the stalwart efforts of leaders in the diagnostic error field—Mark Graber, Karen Cosby, and Gordy Schiff—were invaluable. Special thanks to my emergency medicine colleagues Sam Campbell, George Kovacs, Dave Petrie, and many other warriors for the working day who helped develop and sustain the approach described here. Finally, my thanks to Karen for her love and encouragement and for guiding me over and around technological hurdles along the way.
Introduction
A variety of errors occur in the course of normal medical practice, but many are detected and corrected before they do any harm. Nevertheless, medical error is now estimated to be one of the leading causes of death.1 Errors take several forms, ranging from simple problems, such as miscalculating a dose of medication, to more complex failures, such as misdiagnosing a myocardial infarct or a cerebrovascular accident, or performing wrong-side surgery. Individual errors occur in a variety of areas, but by far the greatest number of errors we make in medicine are in the ways our thoughts and feelings impact our decision making. Yet historically, surprisingly little emphasis in medical education has been put on how to think, and especially on how to think rationally. The tacit assumption is that by the time people arrive in medical school, they are already rational, competent thinkers, invulnerable to a variety of predictable and widespread biases in thinking and other cognitive failures that may lead to error. Unfortunately, this is not the case.

The primary purpose of this book is to focus attention on clinical thinking failures. It should also be remembered that affect or emotion is reciprocally related to cognition—one generally does not occur without the other. Errors in emotion and cognition are collectively referred to here as cognitive errors, although affective error is occasionally used for emphasis in some cases. One of the most important drivers of this book is to focus on these cognitive failures in clinical reasoning and the ambient conditions that enable them. It is now claimed that diagnostic failure is the main threat to patient safety,2 which means that clinical decision making is the main threat to patient safety. In fact, we can refine this further and state that clinical prediction is the main threat.
Diagnosis is about prediction—how accurately can a clinician predict the identity of one (or more) of approximately 12,000 diseases that may underlie the patient’s symptoms and signs? As Pinker notes, “the acid test of empirical rationality is prediction,”3 so we need to ask what we can do to improve the predictive power of our decision making to increase the likelihood of a correct diagnosis and reduce the morbidity and mortality of diagnostic failure. As the philosopher/sociologist Habermas observed, the modernity of the Enlightenment, with rationality as its main driving force,
remains "an unfinished project."4 Completion will be more attainable when a fuller understanding of rationality, in all its forms, is realized. Medicine in particular will benefit from a better understanding of what is needed for rational decision making. A first step lies in understanding the process of diagnosis, the most important of a physician's tasks. Although numerous publications have addressed diagnostic failures within specific disciplines, it is only fairly recently that attention has focused on the process itself.5,6 This is surprising given that getting the diagnosis right is so critical for the safety of patients.

An important distinction needs to be made at the outset—between thinking and deciding. Thinking is generally considered a deliberate act—that is, to think about a problem is to deliberately engage in a conscious process aimed at solving it. However, in everyday life, the strategies we use to solve many problems typically involve shortcuts, approximations, rules of thumb, and even guesses. These are reflexive, autonomous processes that mostly do not reach consciousness and are usually appreciated as potential time-savers. In the majority of instances, they are. It is often unnecessary to laboriously work one's way through a clinical problem when the answer seems readily apparent. However, these processes are imperfect: although they may work most of the time, they are occasionally wrong and may manifest themselves as cognitive biases. Paradoxically, although they are often viewed as time-saving, they may actually increase workload.7 They are not deliberate acts, so we cannot refer to them as thinking per se, even though they may lead to a decision that we deliberately act upon. This is referred to as Type 1 or intuitive processing, whereas decisions that arise from deliberate, thoughtful activity are known as Type 2 or analytical processing. The concept underlying this approach is referred to as dual process theory.
The two processing systems, Type 1 and Type 2, differ from each other in a number of ways8 that have been well delineated (Table I.1).

TABLE I.1 Characteristics of Type 1 and Type 2 approaches to decision making

Characteristic                 Type 1                    Type 2
Cognitive style                Heuristic, intuitive      Systematic, analytical
Cognitive awareness            Low                       High
Conscious control              Low                       High
Verbal                         No                        Yes
Automaticity                   High                      Low
Cost                           Low                       High
Rate                           Fast                      Slow
Reliability                    Low                       High
Errors                         Normative distribution    Few but large
Effort                         Low                       High
Predictive power               Low                       High
Emotional valence              High                      Low
Detail on judgment process     Low                       High
Scientific rigor               Low                       High

Source: From Croskerry.8

In the modern era of decision making, the dual-process view appears to have emerged with the work of Schneider and Shiffrin in 1977,9 although it had been well recognized more than two centuries earlier, in 1794, by Thomas Paine (Figure I.1):

Any person, who has made observations on the state and progress of the human mind, by observing his own, can not but have observed, that there are two distinct classes of what are called Thoughts; those that we produce in ourselves by reflection and the act of thinking, and those that bolt into the mind of their own accord. I have made it a rule to treat those voluntary visitors with civility, taking care to examine, as well as I was able, if they were worth entertaining, and it is from them that I have acquired almost all the knowledge that I have.10

FIGURE I.1 Thomas Paine (1737–1809), early observer of dual-process decision making.

Dual process theory is now the dominant view for examining decision making in medicine (Figure I.2).8,11 Cognitive bias, arguably the most important issue in clinical decision making, underlies many of the cognitive failures in the cases described in this book. Typically, biases "bolt into the mind" in a reflexive, autonomous fashion; are often uncritically accepted; and unfortunately are not always subjected to the scrutiny, caution, and civility that Paine exercised. Often, but not exclusively, they are associated with Type 1 processing. Pohl12 has described five main characteristics of biased decisions (Table I.2).
FIGURE I.2 Dual process model for medical decision making. The two principal modes of decision making, automatic and controlled, originally described more than 40 years ago,9 are now commonly referred to as intuitive and analytical, respectively. Intuitive decision making is seen to be driven by four kinds of Type 1 processes (hardwired, emotional, overlearned, and implicitly learned) and analytical reasoning by a single Type 2 process: a recognized patient presentation engages Type 1 pattern recognition, whereas a presentation that is not recognized engages Type 2 processes (knowledge, rationality, logic), with the output of both passing through calibration toward a diagnosis. Type 2 can override a Type 1 process (executive override), and Type 1 can override a Type 2 process (irrational override). The process is in a dynamic state and can toggle (T) back and forth between the two systems. There is an overall tendency to default to Type 1 processing (cognitive miser function). Source: From Croskerry.11
TABLE I.2 Characteristics of biased decisions

Biased decisions:
Reliably deviate from reality
Occur systematically
Occur involuntarily
Are difficult or impossible to avoid
Appear rather distinct from the normal course of information processing

Source: From Pohl.12

Stanovich13 has described four major categories of Type 1 processing (Figure I.3):

1. Processes that are hardwired: The product of evolutionary forces acting on our distant ancestors in the environment of evolutionary adaptiveness, when we spent most of our time as hunter–gatherers. They have been selected in the Darwinian sense and are, therefore, in our present-day DNA (genetically transmitted). The metaheuristics (anchoring and adjustment, representativeness, and availability) are examples of such inherited heuristics that may be associated with various biases.
FIGURE I.3 The four subsets of Type 1 processing: hardwired processes, emotional processes, overlearned processes, and implicitly acquired processes. Source: From Stanovich.13
2. Processes regulated by our emotions: The basic emotions (happiness, sadness, fear, surprise, anger, and disgust) are also evolved, hardwired adaptations (e.g., fear of snakes is universal in all cultures). They may be significantly modified by learning.

3. Processes established by overlearning: Continued repetition of information and of psychomotor acts eventually leads to habituation, so that they may be performed without conscious deliberation (e.g., reciting multiplication tables or driving a car). Thus, knowledge and skills become firmly embedded in our cognitive and behavioral repertoires through overlearning. This allows these processes to be executed quickly, effortlessly, and reliably when needed, without conscious effort.

4. Processes developed through implicit learning: We generally learn things in two ways—either through deliberate explicit learning, such as occurs in school and in formal training, or by implicit learning, which is without intent or conscious awareness. Implicit learning plays an important role in our skills, perceptions, attitudes, and overall behavior. It allows us to detect and appreciate incidental covariance and complex relationships between things without necessarily articulating that understanding. Thus, some biases may be acquired unconsciously. Medical students and residents might subtly acquire particular biases by simply spending time in environments in which others have these biases, even though the biases are never deliberately articulated or overtly expressed to them (i.e., in the hidden curriculum). Examples include the acquisition of biases toward age, socioeconomic status, gender, race, patients with psychiatric comorbidity, and obesity.

Generally, we talk about thoughts, beliefs, and feelings as being "intuitive," but Stanovich's work allows us to go deeper than that.
We can develop and acquire “intuitive knowledge” in a variety of ways from the multiple sources he describes, as well as from interactions between the various sources. This may have implications for dealing with biases. Those that are hardwired might be expected to generate the most difficulty in mitigation. Recently, there has been a colloquial tendency to describe such biases as originating from our “reptilian
brain”—a term coined by the Yale neuroscientist MacLean in the 1960s.14 In his memoir of spiritual and philosophical awakening, the novelist Lawrence Durrell viewed it as our biggest challenge: “The greatest delicacy of judgement, the greatest refinement of intention was to replace the brutish automatism with which most of us exist, stuck like prehistoric animals in the sludge of our non-awareness.”15 Dealing with our “brutish automatism” and unsticking ourselves from the “sludge of non-awareness” may require extraordinary effort, whereas intuitions that are acquired rather than inherited might be more amenable to mitigation;16 this is discussed further in the closing chapter.

Overall, many of the heuristics that characterize Type 1 decision making serve us well. If the decision maker is well-calibrated—that is, understands the continuous need for vigilance in monitoring the output from Type 1—then the quality of decision making may be acceptable. If monitoring is suboptimal, calibration deteriorates and Type 1 becomes unreliable. The diagnostic failure rate across the board in medicine is 10–15%.17 This sounds better if we frame it as “the success rate of diagnosis is 85–90%,” yet few of us would cross a bridge or make a car trip with those odds. Although not all diagnoses carry life-threatening consequences, many do, and this level of failure is simply unacceptable.

Some have asked how medical error occurs, given that medicine is not rocket science. Ironically, if it were rocket science, it would be much easier.18 Rocket science follows the laws of physics, which are mostly immutable and predictable. Diagnosis is less so. It is estimated that at least six clusters of factors have the potential to influence the diagnostic process.19 The factors identified in the literature number in the 40s, all with the potential for significant interactions with one another (Figure I.4); there are probably more.
Perhaps it is surprising that the process fails only 10–15% of the time. Its inherent complexity makes it difficult to study and difficult to teach. Reductionism typifies the traditional scientific approach to complexity, stripping away as many independent variables as possible to isolate the key one(s). For some research groups, this has led to studies in which the dependent variable, in this case diagnostic reasoning, is studied by reading text vignettes on computer screens. The ecological validity of this approach has been seriously challenged.20–22 As Gruppen and Frohna stated:

“Too often, studies of clinical reasoning seem to take place in a vacuum. A case or scenario is presented to subjects, usually in written form, stripped of any ‘irrelevant’ noise. The traditional methodology of providing clinical cases that are decontextualized and ‘clean’ may not be a particularly valid means of assessing the full range of processes and behaviors present in clinical reasoning in natural settings.”20

That is, this methodology, in ignoring the principles of situated cognition, is a significant threat to the external and ecological validity of these findings, and their relevance to understanding real-life clinical practice is thus seriously questioned. In the cases presented in this book, an effort has been made to preserve the original clinical context as much as possible.
FIGURE I.4 Six clusters of factors (A–F) that influence the diagnostic process. The factors shown include: intellect, knowledge, experience, age, gender, ethnicity, culture, religion, and personality; active open-minded thinking, critical thinking, rationality–experientiality, reflection, reflective coping, lateral thinking, need for cognition, mindfulness, metacognition, perseverance, logicality, and adaptiveness; fatigue, sleep deprivation, sleep debt, stress, affective state, and cognitive load; system design, ergonomic factors, communication, resource allocation, scheduling, team factors, and IT; symptoms, signs, onset, progression, mimics, pathognomonicity, and co-morbidities; and patient, family, caregivers, friends, and other patients. Source: From Croskerry.19
One conclusion from the finding of diagnostic failure across the board in medicine17 is that the training in clinical decision making currently received in most medical schools falls short of what is needed.19 In the domain of patient safety, comparisons are often made with the airline industry. If that industry’s training of pilots resulted in a performance deficit of 10–15%, we would be very quick to say that its training program was deficient, so we should not be reluctant to draw the same conclusion in medicine. One could claim that the diagnostic process has an irreducible uncertainty that will always produce failure of this order of magnitude and that we should live with it; alternatively, we might argue that the normal level of expertise achieved with conventional training in medicine is insufficient. If the latter, then we need to augment the training process so that we can tackle the 10–15% of failures.

What we do not know is what is responsible for the overall failure rate in diagnosis, other than that it is a combination of physician and system factors. Various estimates suggest that the physician accounts for most of the failure—probably approximately 75%. Physician failure varies between disciplines. In the visual specialties (dermatology, radiology, and anatomic pathology), it is approximately 1% or 2%, whereas in the more general specialties [family medicine, emergency medicine (EM), and internal medicine], it is approximately 15%.23 Thus, failure appears to be determined by the nature of the task. To put it in the context of signal detection theory, we suspect that the lower failure rate in the visual specialties reflects less noise around the signal, whereas in the general specialties there is more. Increased noise arises from the complexity described in Figure I.4 and the correspondingly greater challenge to clinical reasoning. So, it appears we need to find ways to improve clinical reasoning.
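The signal detection point can be made concrete with a toy simulation (this sketch is my illustration, not from the text): holding the separation between the “disease present” and “disease absent” distributions fixed, increasing the noise around the signal raises the error rate of even an ideal observer.

```python
import random

def error_rate(separation: float, sigma: float, trials: int = 100_000) -> float:
    """Overall error rate of an ideal observer placing the decision
    criterion midway between two equal-variance Gaussians (equal priors)."""
    random.seed(0)  # deterministic for illustration
    criterion = separation / 2
    errors = 0
    for _ in range(trials):
        if random.gauss(0, sigma) > criterion:           # false positive
            errors += 1
        if random.gauss(separation, sigma) < criterion:  # missed diagnosis
            errors += 1
    return errors / (2 * trials)

# Same signal strength, different noise levels:
print(error_rate(separation=2.0, sigma=0.5))  # low noise: "visual" specialties
print(error_rate(separation=2.0, sigma=2.0))  # high noise: "general" specialties
```

The low-noise error rate comes out near 2% and the high-noise rate near 30%; the 1–2% versus 15% figures in the text are consistent with this qualitative pattern, though the simulation is purely illustrative.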
Within a particular general discipline, one possibility is that 85% of physicians are good at diagnosis and the remaining 15% are not; another is that all physicians fail approximately 15% of the time. Further work is needed to identify where the failure occurs, but the working assumption here is that within a particular discipline, all physicians fail approximately 15% of the time, and our goal is to identify errors when they occur and find ways to deter them from happening in the first place.

The well-known Dreyfus model24 has been used to describe how people acquire competency and expertise. Beginning from a novice stage, they progress through various levels to become experts. The end point is now referred to as “routine” expertise (Figure I.5A), and we might suppose that it is suboptimal in cases in which diagnosis fails. Recent work suggests that the process of acquiring expertise can be augmented to include other features, such as rationality, critical thinking, flexibility, creativity, and innovation, to attain what may now be known as “adaptive” expertise (Figure I.5B).19 Adaptive expertise appears to characterize those who have more highly developed metacognitive skills than routine experts.25 This difference between routine expertise and adaptive expertise can be viewed as a mindware gap.26 Mindware is a term coined by Perkins27 to describe the unique operating system software that each individual brain runs on, a product of genetic and acquired influences. A number of specific interventions to close this gap have been proposed19 and are reviewed in more detail in the concluding chapter of this book.
FIGURE I.5 Augmented effects of multiple cognitive processes to close the mindware gap associated with routine expertise (A) and achieve adaptive expertise (B). Both panels show the progression from novice through advanced beginner, competence, and proficiency, with clinical problem features either recognized or not recognized by a pattern processor and handled by Type 1 and Type 2 processes; in panel B, rationality, critical thinking, metacognitive processes, lateral thinking, and flexibility, creativity, and innovation close the mindware gap beyond routine expertise. Sources: Originally adapted from Dreyfus and Dreyfus24 and subsequently Croskerry.19
This book uses a selection of real clinical cases to illustrate flaws in thinking, the cognitive errors. They are real de-identified examples mostly from the author’s personal experience within the milieu of emergency medicine collected over a decade through the 1990s and 2000s. With occasional updates, the book was used as a clinical teaching manual continuously to the present. Several cases were added more recently. Although some aspects of emergency practice may have changed since the inception of the manual, little has changed in the cognitive properties of diagnostic decision making.

The clinical setting of these cases turns out to be fortuitous—EM covers all disciplines and therefore yields a wide variety of cases. Furthermore, when cases are seen in the emergency department (ED), they are at their most undifferentiated and are more likely to illustrate the complexity of the process. This contrasts with, for example, an orthopedic clinic in which most cases will have become well differentiated once they are referred. Several of the cases described in this book have previously been published in journal articles and book chapters. Four medicolegal cases from the emergency care setting, originally included in the Applied Cognitive Training in Acute-Care Medicine (ACTAM) manual, have been omitted from the current work but may be accessed in the ED Legal Letter,28 in which they were first published. Performing cognitive autopsies on cases from EM is particularly useful because this environment has been described as a “natural laboratory for medical error.”29

There has been some debate in the literature about the use of the word “error.” There is a historical tendency to view it as a negative term, one that suggests someone is at fault and should be blamed. Psychologists, the people who mostly study error, do not view it negatively but, rather, as a piece of human behavior that is worthy of study in its own right.
The current practice in medicine is not to use “medical error” but to talk about “patient safety.” The word “error” is a more direct term, however, and is used in this book in the psychological sense. Errors are pieces of our behavior that need to be studied so that we might learn from them. They are not a tool for attribution or blaming. Similarly, there is a tendency to view “bias” as a negative term; in some cases (e.g., “racial bias”), it carries an obvious and undesirable negative connotation. Many will associate “biased judgment” with negative character traits, and not wishing to be associated with this undesirable attribution has probably driven some people away from accepting it as an important feature of all human behavior—one that deserves our attention and study. This might be especially true of the medical establishment, whose members may view themselves as holding to higher standards of propriety and being less judgmental of others. Again, however, cognitive scientists do not view bias as a negative attribute but, rather, as an aspect of cognitive behavior that needs to be studied to improve our understanding of it. Throughout the cognitive revolution of the 1970s and onwards, cognitive scientists happily identified themselves with the “heuristics and biases” literature. In earlier work, we attempted to circumvent the negative associations of bias by referring to cognitive bias as “cognitive dispositions to respond” and affective bias as “affective dispositions to respond,”30,31 but “bias” appears to have prevailed in the literature.
Such narrative accounts as are presented in this book are something of a tradition in medicine and are recognized as a powerful tool for learning.32–34 This particular collection vividly illustrates some classic errors that physicians will encounter in the course of their careers. They are not atypical or isolated; they happen in every medical setting, every day, everywhere in the world. In some cases, there will be an unambiguous demonstration of a particular bias or flaw in thinking, or some combination of these; at other times the error will be less apparent, and we will need to infer that erroneous decision making has occurred from the circumstances and from what we can construe from the medical and nursing record and the outcome.

Inference is necessary because we never really know what is going through a clinician’s mind. In fact, neither do clinicians themselves; many are unaware of what they are thinking and how they arrive at their decisions. Thus, any process that attempts to examine the ways in which clinicians think will face this problem. However, although thinking and feeling are typically covert processes, this should not discourage us from trying to take a closer look at them and understand some of the pitfalls. In addition to the cognitive errors noted previously, other conditions that occur in the hospital setting (error-producing conditions,35 other systemic errors, transitions of care, resource limitations, etc.) are also described.

Another feature of these cases is that the majority have significant outcomes (Appendix A reads like a “what’s what of emergency medicine,” containing some of the most interesting diagnoses that can be encountered). This is no accident: if a physician misdiagnosed an ankle sprain as an ankle contusion, the outcome would not be particularly different, and the case itself would not be very interesting from a learning standpoint; misdiagnosis usually comes to light only when the outcome is serious.
Consequently, many of the cases presented in this book are characterized by their graphic nature and are mostly unrepresentative of the spectrum of routine cases usually seen in EM. Using such “exaggerated,” distinctive examples has been shown to facilitate the processing of information and enhance learning.36,37 In addition to illustrating a variety of cognitive failings, the cases also contain important teaching points about a wide variety of significant illnesses that students and some clinicians might not otherwise encounter.

The book addresses another major problem in this area: the language that is used to describe error and, in particular, cognitive error. It provides a glossary of definitions and descriptors for specific terms used in the commentaries that follow each case. This is important because the language will not be familiar to many. An effort has been made to keep the terminology as basic as possible, but at times the original terms used by the cognitive psychologists who pioneered this work are retained. It is important to familiarize ourselves with some of these terms because they have already begun to enter the medical literature and become part of the medical lexicon.

Finally, this process is one of hindsight, and hindsight itself may be subject to a significant bias. It is always easier to be wise after the event. There are a variety of interesting psychological experiments that illustrate how unreliable our basic perceptual processes are in the moment. One perceptual phenomenon in particular, “inattentional blindness,” is
demonstrated in several of the cases. However, the problem is further compounded when we recall things. We often become even more vulnerable to inaccuracies and selective reminiscence, which may result in “faking good” or “feeling bad.” Faking good is an effort to make ourselves look better than we were at the time. This may help our egos along a little, but the failure to fully appreciate faults in our performance usually means we are destined to repeat them. On the other hand, when we feel bad about what we appear to have done, we may not be being entirely fair to ourselves. It is difficult to re-create the ambient conditions under which the original event took place, and subtle cues or other factors that significantly influenced behavior at the time may not be apparent in retrospect. Thus, there is a danger of overpunishing ourselves for something we may not have been entirely responsible for, and perhaps of subjecting ourselves to too much self-recrimination. Nevertheless, hindsight allows us an opportunity to learn from our mistakes; the process itself need not necessarily be biased. Through the approach and analysis offered here, there is a good chance the reader will develop an understanding of the frequency and normalcy of cognitive failings and how context and ambient conditions can influence clinical decision making. It is hoped that an appreciation will be gained for the circumstances under which biases and other cognitive failures occur. Such insights provide significant opportunities to evolve realistic strategies for avoiding them or coping with them when they do occur. They are a fact of our lives. General strategies to mitigate cognitive error and produce overall improvement in cognitive habits are reviewed in the closing chapter of this book.
References

1. Makary MA, Daniel M. Medical error: The third leading cause of death in the United States. BMJ. 2016; 353: i2139.
2. Tehrani ASS, Lee HW, Mathews SC, Shore A, Makary MA, Pronovost PJ, Newman-Toker DE. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: An analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013; 22(8): 672–680.
3. Pinker S. Enlightenment Now: The Case for Reason, Science, Humanism and Progress. New York, NY: Viking, 2018.
4. Habermas J. The Philosophical Discourse of Modernity: Twelve Lectures (F. Lawrence, Trans.). Cambridge, MA: Massachusetts Institute of Technology, 1987.
5. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: National Academies Press, 2015.
6. Croskerry P, Cosby K, Graber M, Singh H. Diagnosis: Interpreting the Shadows. Boca Raton, FL: CRC Press, 2017.
7. Moss SA, Wilson SG, Davis JM. Which cognitive biases can exacerbate our workload? Australas J Org Psychol. 2016; 9: 1–12. doi:10.1017/orp.2016.1
8. Croskerry P. Critical thinking and reasoning in emergency medicine. In: Croskerry P, Cosby KS, Schenkel S, Wears R (Eds.), Patient Safety in Emergency Medicine. Philadelphia, PA: Lippincott Williams & Wilkins, 2008; 213–218.
9. Schneider W, Shiffrin RM. Controlled and automatic human information processing: 1. Detection, search, and attention. Psychol Rev. 1977; 84(1): 1–66.
10. Paine T. The Age of Reason. San Bernardino, CA: Minerva, 2018.
11. Croskerry P. A universal model for diagnostic reasoning. Acad Med. 2009; 84(8): 1022–1028.
12. Pohl RF. Cognitive illusions. In: Pohl RF (Ed.), Cognitive Illusions: Intriguing Phenomena in Thinking, Judgement and Memory. Oxford, UK: Routledge, 2016; 3–22.
13. Stanovich KE. Rationality and the Reflective Mind. New York, NY: Oxford University Press, 2011; 19–22.
14. MacLean P. The Triune Brain in Evolution: Role in Paleocerebral Function. New York, NY: Plenum, 1990.
15. Durrell L. A Smile in the Mind’s Eye: An Adventure into Zen Philosophy. London, UK: Open Road Media, 2012.
16. Croskerry P. Cognitive bias mitigation: Becoming better diagnosticians. In: Croskerry P, Cosby K, Graber M, Singh H (Eds.), Diagnosis: Interpreting the Shadows. Boca Raton, FL: CRC Press, 2017.
17. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005; 165(13): 1493–1499.
18. Croskerry P. Not rocket science. CMAJ. 2013; 185(2): E130. doi:10.1503/cmaj.120541
19. Croskerry P. Adaptive expertise in medical decision making. Medical Teacher. 2018; 40(8): 803–808. doi:10.1080/0142159X.2018.1484898
20. Gruppen LD, Frohna AZ. Clinical reasoning. In: Norman GR, van der Vleuten CP, Newble DI (Eds.), International Handbook of Research in Medical Education. Boston, MA: Kluwer, 2002; 205–230.
21. Croskerry P, Petrie DA, Reilly JB, Tait G. Deciding about fast and slow decisions. Acad Med. 2014; 89(2): 197–200.
22. Royce CS, Hayes MM, Schwartzstein RM. Teaching critical thinking: A case for instruction in cognitive biases to reduce diagnostic errors and improve patient safety. Acad Med. 2019; 94(2): 187–194. doi:10.1097/ACM.0000000000002518
23. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008; 121(5 Suppl): S2–S23. doi:10.1016/j.amjmed.2008.01.001
24. Dreyfus SE, Dreyfus HL. A five-stage model of the mental activities involved in directed skill acquisition. Supported by the U.S. Air Force, Office of Scientific Research (AFSC) under contract F49620-79-C-0063 with the University of California, Berkeley, 1980; 1–18 (unpublished study).
25. Carbonell KB, Stalmeijer RE, Könings KD, Segers M, van Merriënboer JJG. How experts deal with novel situations: A review of adaptive expertise. Educ Res Rev. 2014; 12: 14–29.
26. Stanovich KE. Rational and irrational thought: The thinking that IQ tests miss. Scientific American Mind. 2009; 20(6): 34–39.
27. Stanovich K. What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven, CT: Yale University Press, 2009.
28. Croskerry P. Achilles heels of the ED: Delayed or missed diagnoses. ED Legal Lett. 2003; 14: 109–120.
29. Croskerry P, Sinclair D. Emergency medicine—A practice prone to error? CJEM. 2001; 3(4): 271–276.
30. Croskerry P. Achieving quality in clinical decision making: Cognitive strategies and detection of bias. Acad Emerg Med. 2002; 9(11): 1184–1204.
31. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003; 78(8): 775–780.
32. Murtagh J. Cautionary Tales: Authentic Case Histories from Medical Practice. New York, NY: McGraw-Hill, 2011.
33. Pilcher CA. Medical malpractice insights (MMI): Learning from lawsuits. https://madmimi.com/p/fa0e2d?fe=1&pact=76716-148560539-8457174274-2b6f035f60fb4d603a886574b0a5af25e2c8ab1d. Accessed December 29, 2018.
34. Howard J. Cognitive Errors and Diagnostic Mistakes: A Case-Based Guide to Critical Thinking in Medicine. Cham, Switzerland: Springer, 2019.
35. Croskerry P, Wears RL. Safety errors in emergency medicine. In: Markovchick VJ, Pons PT (Eds.), Emergency Medicine Secrets (3rd ed.). Philadelphia, PA: Hanley & Belfus, 2003; 29–37.
36. Dror IE, Stevenage SV, Ashworth A. Helping the cognitive system learn: Exaggerating distinctiveness and uniqueness. Appl Cognit Psychol. 2008; 22(4): 573–584.
37. Dror I. A novel approach to minimize error in the medical domain: Cognitive neuroscientific insights into training. Medical Teacher. 2011; 33(1): 34–38.
The Cases
Case 1
Christmas Surprises
At 15:00 hours on Christmas Day, an emergency physician (EP2) arrived at a general hospital emergency department (ED) to begin his shift. The waiting room was full, and all beds in the department were occupied, with patients on overflow stretchers in hallways. The off-going emergency physician (EP1) looked tired and unwell. They began changeover rounds with the charge nurse. In the cardiac room, a female patient was lying supine in bed, unconscious but breathing spontaneously. The off-going physician said that a neighbor had found her lying on the floor of her living room that morning and called an ambulance. At the scene, her pulse was 40 beats/min, her systolic pressure 70, and a glucometer showed 27 mmol/L (486 mg/dl). She was a known type 2 diabetic, with coronary artery disease. She was responsive to verbal commands and complained of abdominal pain. Her pulse dropped into the 30s at times. An intravenous (IV) was started and she was given atropine at the scene, but without effect. She was transferred to a nearby small, rural cottage hospital as a “cardiac” patient for stabilization. There, an external pacemaker was applied, which achieved intermittent capture. Arrangements were made to transfer her to the cardiology service of the general hospital for further management of her arrhythmia. On arrival, the patient was taken directly to the cardiac room. EP1 was notified in accordance with the departmental protocol, and the cardiologist who had accepted the patient was paged. The patient had a Glasgow Coma Scale of 4. Her pulse was now 77, blood pressure was 111/36, and respiratory rate was 18. The cardiac monitor showed a junctional bradycardia, and it was thought she had suffered a cardiac event. EP1 began to assess the patient, but the cardiologist promptly arrived, indicated that the history and circumstances were known to him, and proceeded to manage the patient. 
After failing to achieve reliable capture with the external pacer, the cardiologist inserted an IV pacer. The patient’s systolic pressure had dropped into the 60s, and a dopamine infusion was started. All beds in the hospital’s coronary care unit (CCU) were full, and arrangements were made to transfer the patient to a nearby tertiary hospital. The off-going emergency physician (EP1) commented to the oncoming physician (EP2) that because he had been very busy, the patient had been mostly looked after by the cardiologist. He noted that the
patient now had a functioning pacemaker and that he had been notified she was in renal failure and was hyperkalemic. He also noted that the patient’s condition was considered “grave.” As an ambulance was imminent for transfer, he did not believe EP2 needed to get involved with the patient. They proceeded to complete rounds, and EP2 took over the care of six sick patients, as well as a number of minor ones. He proceeded to start reassessing the sick patients. Soon afterwards, he was informed by the charge nurse that the tertiary-care receiving hospital had been unable to clear a CCU bed for the patient and that she would have to remain in the ED until the hospital could do so. He decided, therefore, to reassess the patient. Four hours had elapsed since the patient was first brought to the general hospital’s ED. The demeanor of the nurses attending the patient in the cardiac room was consistent with the grave prognosis that he had been given, and it was clearly conveyed to him that future efforts would probably be futile. The cardiologist had left the ED, having assumed that the patient’s transfer to the receiving hospital was imminent. The patient’s breathing appeared slightly labored, with a rate of 19. Systolic pressure was 70, pulse 63, and the monitor showed a paced rhythm. On examination, she appeared cool to the touch, and when the physician checked the chart, he found that no temperature had been recorded at admission either at the cottage hospital or at the general hospital. An ear thermometer did not register a recordable temperature, and a low reading rectal thermometer showed 31.4°C (88.52°F). Immediate efforts were begun to rewarm the patient. The laboratory results were reviewed and showed the following: creatinine was 643 µmol/L; blood urea nitrogen was 46 mmol/L; liver transaminases were elevated, as were lactate dehydrogenase and amylase. Creatinine phosphokinase and troponin were in the normal range. Blood sugar was 18 mmol/L (324 mg/dl). 
Electrolytes were as follows: Na 136, K 7.9, Cl 103, and CO2 5. The complete blood count showed white blood cells 10.9, red blood cells 2.9, Hb 91, and platelets 230. The anion gap was 28. She had received a total of 4 L of normal saline and 300 mEq of sodium bicarbonate. A portable chest X-ray appeared normal. Calcium gluconate by IV infusion was started and IV insulin given. Arterial blood gases (ABGs), blood cultures, and a toxic screen were done, and the patient was intubated. Arterial blood gases were as follows: pH 6.81, pO2 67, pCO2 57, and HCO3 9 on an FiO2 of 100%. A repeat electrocardiogram now showed atrial fibrillation with a competing junctional pacemaker and premature ventricular complexes. Subsequently, the patient’s temperature began to slowly rise, potassium level declined, and repeat ABGs showed improvement. The patient was stabilized further and subsequently transferred to the tertiary care hospital’s intensive care unit. She remained in intensive care for 2 weeks. After a further 4 weeks in hospital, she was sufficiently recovered to be discharged home without a pacemaker, and she resumed independent living. EP2 later reviewed the patient’s history with the patient’s daughter. Several days earlier, the patient had experienced urinary tract symptoms and had seen her primary care provider, who prescribed a sulfonamide antibiotic. She was currently being treated for type 2 diabetes, hypertension, coronary arterial disease, and glaucoma. There was no history of renal failure. On the day prior to her collapse, the patient had been unsteady on her feet, had complained of fatigue, and her speech appeared slow. Her daughter took her to the ED of a cottage hospital
for assessment. Her blood sugar was low at 2.6 mmol/L (46.8 mg/dl), and she was advised to use orange juice to “keep her sugar up.” She improved with the administration of sugar and appeared well up to late in the evening when her daughter left. The following morning, the daughter was unable to reach her on the phone, and it was then that a neighbor was called to check on her and found her in her nightdress on the floor of her trailer park home. She was conscious and complaining of abdominal pain, as well as back pain that had been an ongoing problem. The ambient temperature in the trailer in which she lived was said to be “normal,” but the floor of the trailer on which she was lying in her nightdress was probably cold as the outside temperature was below freezing.
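As an aside, the paired unit values quoted in this case (glucose in mmol/L and mg/dl, temperature in °C and °F) and the anion gap of 28 reported with the laboratory results all follow from standard formulas; here is a minimal sketch of the arithmetic (the helper names are mine, not from the text):

```python
# Standard bedside conversions; the inputs are the values quoted in the case.

def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Anion gap without potassium: Na - (Cl + HCO3), all in mmol/L.
    The reported total CO2 of 5 mmol/L is used as the bicarbonate estimate."""
    return na - (cl + hco3)

def glucose_mmol_to_mgdl(mmol: float) -> float:
    """Glucose: 1 mmol/L = 18 mg/dl (glucose molar mass ~180 g/mol)."""
    return mmol * 18

def c_to_f(celsius: float) -> float:
    """Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

print(anion_gap(136, 103, 5))      # 28, the reported gap
print(glucose_mmol_to_mgdl(27))    # 486 mg/dl, the scene glucometer reading
print(round(c_to_f(31.4), 2))      # 88.52 °F, the rectal temperature
```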
Commentary

Hypothermia is defined as the point at which core body temperature drops below that necessary to sustain functioning of vital organs (usually ≤35°C or 95°F).1,2 As the core temperature drops, predictable cardiovascular changes occur: atrial fibrillation at approximately 31°–32°C, ventricular fibrillation at 22°–28°C, and asystole below 18°–20°C. Osborn or J waves may appear at approximately 32°C and below. They are not pathognomonic of hypothermia and may be seen in a variety of conditions. Deaths due to hypothermia are classified as primary (homicidal, suicidal, and accidental) or secondary, in which the hypothermia complicates systemic disease. The number of deaths due to secondary hypothermia is generally underreported.

In the present case, the hypothermia appears to have been iatrogenic and environmental. The administration of a sulfonamide drug to someone on a sulfonylurea medication may significantly potentiate its hypoglycemic effect, which in this case led to collapse and a prolonged exposure to a cold floor. This case illustrates a number of biases, the impact of which is amplified by numerous transitions of care3 through four levels of the system, from paramedic to cottage hospital to the general hospital to a tertiary care hospital (see Cases 11, 23, and 33 and Box 23.1). They are embedded within and exacerbated by significant error-producing conditions (EPCs) and violation-producing behaviors (VPBs); common ones in the ED are listed in Table 1.1.4,5

The sequence of errors and points of failure are shown in Table 1.2. The first occurs at her primary care provider’s office, where she is prescribed a drug that significantly interacts with one she is already receiving, resulting in hypoglycemia (knowledge-based error). The second error occurs when she goes to the ED of the cottage hospital, where her hypoglycemia is inappropriately managed.
It appears that the medication interaction causing the hypoglycemia was not recognized, and she was given inappropriate advice about its management (knowledge-based errors). In view of the patient's known cardiac history and her current bradycardia and collapse, the paramedics assumed she had a cardiac problem (posterior probability error), failed to take her vital signs (error of omission), and transported her to the cottage hospital, where their diagnosis was incorporated (diagnosis momentum) and a physician applied an external pacemaker. They then communicated with the general hospital where, instead of asking for a thorough assessment of the patient at the ED (unpacking principle), they spoke directly to a cardiologist. Thus, a premature diagnosis (premature closure) was made and incorporated (diagnosis momentum) at the cottage hospital and at the general hospital. At both institutions, instead of following the usual protocol of taking vital signs at admission, her salient presenting feature, bradycardia, was anchored upon, and all efforts were urgently directed at correcting it. This initial anchor did not appear to have been challenged or adjusted until the point at which her true diagnosis became apparent. The diagnosis momentum was perpetuated through to the cardiologist, nurses, and emergency physician at the general hospital, and the patient was subsequently placed in the cardiac room. The persistence of the cardiac diagnosis is reflected in the referral to the tertiary hospital's CCU. Premature diagnostic closure is a powerful phenomenon: once a diagnosis is made, further thinking tends to stop. The failure to elicit an appropriate history at the cottage hospital contributed to the diagnosis momentum and closure once they were established, and is an example of the unpacking principle. Had this detailed history been known, and her hypothermia detected, her management would have been very different. At the second general hospital, the departmental protocol required that the emergency physician be responsible for the assessment of all patients presenting to the ED. However, the prompt arrival of the cardiologist, who himself anchored to the "cardiac" diagnosis and adopted a search-satisficing approach, disrupted this initial assessment. Under normal conditions, full vital signs would have been included in the emergency physician's usual initial assessment, at which time the hypothermia would have been detected. This is an example of the authority gradient effect: the cardiologist was male and very senior, and the emergency physician was relatively young.

20 | The Cognitive Autopsy

TABLE 1.1 Operating characteristics of the emergency department that may compromise patient safety: error-producing conditions and violation-producing behaviors

Error-Producing Conditions

Intrinsic:
• High levels of diagnostic uncertainty
• High decision density
• High cognitive load
• Narrow time windows
• Multiple transitions of care
• Multiple interruptions/distractions
• Low signal-to-noise ratio for disease
• Surge phenomena
• Fatigue
• Dysphoria
• Circadian dysynchronicity
• Novel or infrequent conditions

Systemic:
• Suboptimal ED design
• Suboptimal equipment design
• Inadequate maintenance
• High communication load
• Overcrowding
• Boarded patients
• Throughput pressures
• High noise levels
• Inadequate staffing
• Incompatible goals
• Poor feedback
• Inexperience
• Inadequate supervision

Violation-Producing Behaviors
• Gender
• Individual cognitive factors
• Risk-taking behavior
• Normalization of deviance
• Maladaptive group pressures
• Maladaptive coping behavior
• Underconfidence
• Overconfidence
• Maladaptive decision styles
• Authority gradient effects
• Likelihood of detection

ED, emergency department. Sources: Adapted from Croskerry and Wears4 and Hobgood et al.5
Christmas Surprises | 21

TABLE 1.2 Sequence of events and decision failures

Event: Visit to primary care provider for urinary tract infection and prescription for antibiotic
Nature of error: Decision failure; medication error; failure to inform patient/relatives of medication interaction
Source of error: Error of omission; knowledge deficit; memory; WYSIATI^a

Event: First visit to the cottage hospital for weak spells
Nature of error: Decision failure
Source of error: Poor judgment; unpacking principle

Event: Paramedics fail to record vital signs at scene
Nature of error: Breach of protocol
Source of error: Error of omission

Event: Paramedics assume patient has cardiac condition
Nature of error: Decision failure
Source of error: Posterior probability error; anchoring; search satisficing; premature closure

Event: Second visit to the cottage hospital: physician applies external pacemaker and makes referral to cardiologist at general hospital
Nature of error: Decision failure
Source of error: WYSIATI; search satisficing; diagnosis momentum

Event: Patient bypasses admission procedure at general hospital and does not have temperature recorded
Nature of error: Breach of protocol
Source of error: Direct referral from the cottage hospital; cognitive overload

Event: Emergency physician accedes to cardiologist, allowing takeover of care of patient
Nature of error: Breach of protocol
Source of error: Cognitive overload; authority gradient; affective; fatigue

Event: Cardiologist fails to recognize source of patient's bradycardia and fails to review original rhythm strip on which Osborn J waves were evident, or fails to recognize them
Nature of error: Decision failure
Source of error: Overconfidence; error of omission; diagnosis momentum; déformation professionnelle^b; knowledge deficit (?)

Event: Nurses fail to get involved more in patient's care and accept poor prognosis
Nature of error: Decision failure
Source of error: Affective bias; knowledge deficit; groupthink; authority gradient

^a Acronym for "What you see is all there is."7
^b Seeing something from the point of view of one's own training; in this case, seeing bradycardia as an intrinsic problem with the heart rather than in a broader context such as hypothermia.
Source: Adapted from Croskerry.6

The result was that rule violations occurred; departmental protocols that required vital signs on all patients, and that the emergency physician assess the patient first, were disregarded. The case is also an example of triage cueing error, reflected in the maxim, "Geography is destiny." Once the patient was in the cardiac room, and in the hands of the cardiologist and cardiac nurses, the underlying problem was perceived as exclusively cardiac.

In reviewing the case at handover rounds with EP2, EP1 said he planned to go back to fully assess the patient once the pacemaker had been inserted but that he could barely cope with his other patients due to fatigue. He had been recently diagnosed with multiple sclerosis, associated with a clinical depression. The health of physicians is rarely considered
in discussions of clinical performance, but fatigue and lowered affect appear to have been contributory factors here. As well as diagnosis momentum, there appears to have been an error of overconfidence on the part of the cardiologist in assuming that he knew the cause of the patient's condition. There may also have been a knowledge deficit: when EP2 discovered the patient's hypothermia, he reviewed the rhythm strip obtained earlier at the general hospital and was able to identify Osborn J waves—a slow positive deflection at the end of the QRS complex (Figure 1.1). The cardiologist either did not review the earlier strip (error of omission), did not recognize the abnormality (knowledge deficit), or recognized the abnormality but attributed it to other factors. EP2 thought the second of these possibilities was the most likely because rhythm strips are usually appended to the top of the chart and difficult to miss.

[Figure 1.1 comprises five ECG panels, (a)–(e), illustrating the J point: normal, elevated, and depressed J points, an elevated J wave, and the Osborn wave.]
FIGURE 1.1 Osborn J waves of hypothermia. Arrows indicate beginning of wave following QRS complex in lead 1. Source: Reproduced with permission from Dr. Frank Yanowitz at the ECG Learning Center (https://ecg.utah.edu/img_index).

Overall, the ability of the ED to provide safe management of this patient was compromised by a number of cognitive and affective biases, as well as VPBs and EPCs, both intrinsic and systemic. Intrinsic EPCs are endemic to many EDs, and emergency physicians learn to adapt to them, often normalizing a deviant situation within the microsystem of the ED. The systemic EPCs in this case include overcrowding, unavailability of resources at other points in the system, and no backup system to replace a sick physician. Both CCUs in the receiving hospital and the tertiary care hospital were full, necessitating delays in the transfer of the patient. The workload in the general hospital ED was excessive and probably led to vital signs being incomplete on a patient who was already considered diagnosed. The combined effect of these various sources of error led to the patient being subjected to an invasive and unnecessary procedure (insertion of a venous pacemaker), as well as significant delays in definitive care that exacerbated her condition.

In the aftermath of a significant adverse event associated with diagnostic failure, physicians' reactions often involve self-recrimination, denial, blaming, projection, and other inappropriate responses.8 These are invariably harmful, and the physician may become a second victim in the case.9 Importantly, they contribute little to an understanding of the actual antecedents of the event. Should litigation follow, further harm usually results from the adversarial nature of the process and the blaming that inevitably occurs.

In contrast, the process of a cognitive autopsy such as this, conducted as soon as possible after the event, allows for a more realistic appraisal of events. It illustrates the complex nature of the process and the critical interactions along the way. Much of this detail will remain unexplored in a typical case review at morbidity and mortality rounds, or even in a root cause analysis. An essential component of this approach is the examination of the pivotal influence of common biases in determining clinical management.
Nevertheless, an inherent difficulty with this analysis, as with many other post hoc reviews and analyses, is that it is made in hindsight and therefore subject to bias. After the fact, clinical judgment may appear either better or worse than it was at the time, and occasionally it will be a reasonably faithful reconstruction of events. As noted previously, cognition is mostly a covert process, and we may only infer what physicians were thinking from their behavior.

An alternate and more visual way of characterizing the various errors in a complex case such as this is by using an Ishikawa or fishbone diagram. Ishikawa was an innovator in quality management, one of his major contributions in the 1960s being the development of the fishbone diagram as a basic tool of quality control. It is a schema for identifying potential factors contributing to an overall outcome—the head of the fish. Fishbones—the ribs branching off the main backbone—are the major causes identified. Each cause can be further detailed by using the 5 Whys technique to progress further toward the root cause. Diagnostic failure is particularly amenable to this type of analysis as multiple causes are often involved. The head of the fish in this particular case is diagnostic failure, the spine is the diagnostic process, and the ribs are the various contributory factors. Reilly has provided an excellent application of this approach.10 An illustrated example is shown in Figure 1.2. A summary of the probable biases and other EPCs is provided in Box 1.1.
FIGURE 1.2 Fishbone diagram identifying major causes of diagnostic failure in this case. The head of the fish is diagnostic failure; the ribs are the following cause categories. Source: From Reilly.10

Context factors
• Cold time of year
• Multiple transitions of care
• Community ED very busy
• Tertiary hospital busy
• Bed availability
• Public holiday

Protocol violations
• Failure to do vitals at scene
• Failure to do vitals at Cottage Hospital
• Failure to do vitals at Community ED
• Patient not assessed by ED physician

Knowledge deficits
• By PCP of medication interaction
• By cardiologist of J waves in EKG

Data gathering
• By EMTs
• By Cottage Hospital
• By cardiologist

Cognitive biases
• Anchoring and failure to adjust
• Posterior probability error
• Diagnosis momentum
• Search satisficing
• Premature diagnostic closure
• Overconfidence by cardiologist
• Déformation professionnelle

Affective biases
• Dysphoria
• Fatigue

Team factors
• Groupthink
• Authority gradient
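For readers who want to work with such an analysis programmatically, the fishbone of Figure 1.2 can be captured as a simple mapping from cause categories (the ribs) to contributing factors. This is a minimal sketch in Python; the variable names are illustrative.

```python
# Fishbone (Ishikawa) diagram from Figure 1.2 as a category -> factors mapping.
fishbone = {
    "Context factors": ["Cold time of year", "Multiple transitions of care",
                        "Community ED very busy", "Tertiary hospital busy",
                        "Bed availability", "Public holiday"],
    "Protocol violations": ["Failure to do vitals at scene",
                            "Failure to do vitals at Cottage Hospital",
                            "Failure to do vitals at Community ED",
                            "Patient not assessed by ED physician"],
    "Knowledge deficits": ["By PCP of medication interaction",
                           "By cardiologist of J waves in EKG"],
    "Data gathering": ["By EMTs", "By Cottage Hospital", "By cardiologist"],
    "Cognitive biases": ["Anchoring and failure to adjust", "Posterior probability error",
                         "Diagnosis momentum", "Search satisficing",
                         "Premature diagnostic closure", "Overconfidence by cardiologist",
                         "Déformation professionnelle"],
    "Affective biases": ["Dysphoria", "Fatigue"],
    "Team factors": ["Groupthink", "Authority gradient"],
}

head = "Diagnostic failure"  # the head of the fish

# Print a simple text rendering: the head, then each rib with its factors.
print(head)
for category, factors in fishbone.items():
    print(f"  {category}:")
    for factor in factors:
        print(f"    - {factor}")
```

Each factor could then be elaborated with the 5 Whys technique, for example by replacing each string with a nested list of successive "why" answers.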
BOX 1.1 Probable Biases and Other Error-Producing Conditions
• Knowledge-based error
• Error of omission
• Posterior probability error
• Diagnosis momentum
• Unpacking principle
• Premature diagnostic closure
• Déformation professionnelle
• Anchoring
• Search satisficing
• Authority gradient effect
• Triage cueing
• Fatigue
• Overconfidence
• Error-producing conditions
• Violation-producing behaviors
References
1. Danzl DF. Accidental hypothermia. In: Marx J, Hockberger R, Walls R (Eds.), Rosen's Emergency Medicine: Concepts and Clinical Practice (7th ed.). Philadelphia, PA: Elsevier Mosby, 2010; 1868–1881.
2. Shinde R, Shinde S, Makhale C, Grant P, Sathe S, Durairaj M, Lokhandwala Y, Di Diego J, Antzelevitch C. Occurrence of "J waves" in 12-lead ECG as a marker of acute ischemia and their cellular basis. Pacing Clin Electrophysiol. 2007; 30(6): 817–819.
3. Croskerry P. Making transitions of care safer (a biased perspective). Presentation at Social Media and Critical Care (SMACC) conference, June 23–26, 2015, Chicago, IL. https://www.smacc.net.au/2016/01/making-transitions-of-care-safe-pat-croskerry
4. Croskerry P, Wears RL. Safety errors in emergency medicine. In: Markovchick VJ, Pons PT (Eds.), Emergency Medicine Secrets (3rd ed., Chapter 7, pages 29–37). Philadelphia, PA: Hanley & Belfus, 2003.
5. Hobgood C, Croskerry P, Wears R, Hevia A. Patient safety in emergency medicine. In: Tintinalli J, Kelen G, Stapczynski J (Eds.), Emergency Medicine: A Comprehensive Study Guide (6th ed.). New York, NY: McGraw-Hill, 2004; 1912–1918.
6. Croskerry P. Medical decision making. In: Ball L, Thompson V (Eds.), International Handbook of Thinking and Reasoning. New York, NY: Taylor & Francis, 2017; 109–129.
7. Kahneman D. Thinking, Fast and Slow. Toronto, Canada: Doubleday, 2011; 85.
8. Croskerry P. Perspectives on diagnostic failure and patient safety. Healthcare Q. 2012; 15(Special Issue): 50–56.
9. Wu A. Medical error: the second victim. BMJ. 2000; 320(7237): 726–727.
10. Reilly JB. Educational approaches to common cognitive errors. In: Trowbridge RL, Rencic JJ, Durning SJ (Eds.), Teaching Clinical Reasoning. Philadelphia, PA: American College of Physicians, 2015; 51–76.
Case 2
Distraught Distraction
A 38-year-old female presented to the emergency department (ED) in some distress and was brought straight from triage to an observation bed. She was anxious, trembling, tearful, and extremely distraught. Two nurses attempted to reassure and calm her. She was urged to take some deep breaths and slow her breathing down. Attracted by the commotion, the emergency physician (EP) broke off from a patient he was assessing for chest pain and went to see her immediately. The story emerged that her apartment had been broken into by intruders that evening and a number of possessions had been stolen. Her initial vitals were: temperature 37°C, heart rate 130, respiratory rate 32, blood pressure 140/80, and oxygen saturation 98%. The physician ordered lorazepam 2 mg sublingually and proceeded with an examination. Her chest was clear, with equal air entry and no adventitious sounds; her breathing rate was elevated; cardiovascular examination was normal other than a raised heart rate; the head, eyes, ears, nose, and throat exam was normal; and an abbreviated neurological exam was normal. She began to settle slightly with the medication and constant reassurance from the nurses. She stated that she had no significant medical problems but had been treated in the past for "anxiety" attacks.

When the physician finished with his previous patient and returned to her, she explained that she had been under considerable stress in the building where she lived. Some prostitutes had threatened her in the past, and she believed they were responsible for the break-in. She was in the process of arranging to move to a new apartment. Her past history was reviewed in more detail. She admitted again to having suffered periodically from severe anxiety attacks, which had been treated by her family doctor. She specifically denied any abuse of alcohol or other drugs. She had had some minor surgeries in the past. Her vitals were taken again, which then showed her heart rate had declined to 115, although her respiratory rate was still elevated at 26.
The monitor showed sinus tachycardia and an O2 saturation of 96%. She had by then settled considerably, but she still felt very anxious. She was apologetic for all the “fuss” that she had caused. The EP diagnosed an acute anxiety reaction and questioned an underlying panic disorder. In view of the extreme nature of the presentation, he advised the patient that she should be seen in the acute psychiatric assessment unit. She was agreeable to this and was transferred
for an assessment. She was seen by a psychiatry resident who, in the course of taking a more detailed history, discovered that the patient had taken a "handful" of aspirin to settle herself down. He suspected acute salicylate poisoning and possible suicidal behavior, and he called the ED requesting medical reassessment. She was transferred back to the ED. Arterial blood gases were drawn and showed a significant metabolic acidosis: pH 7.12, pCO2 52, pO2 105, HCO3 14. Her blood salicylate level was 3.5 mmol/L (normal therapeutic range is 1.1–2.2 mmol/L or 15–30 mg/dL). Greater than 7.2 mmol/L or 100 mg/dL at 6 hours post-ingestion is considered potentially fatal. It was not clear how long ago she had taken the salicylate or whether the level was due to a single dose or the cumulative effect of several doses. Nevertheless, the diagnosis was now clear. Salicylate toxidrome was diagnosed, and alkalinization treatment was started in the ED with a sodium bicarbonate bolus of 1–2 mEq/kg followed by a bicarbonate infusion. She was transferred to the intensive care unit and later referred to psychiatry.
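The unit conversions in this case follow directly from salicylate's molar mass (approximately 138.1 g/mol, a standard value; 1 mmol/L is therefore about 13.8 mg/dL, consistent with the therapeutic range quoted above). A quick sketch, with illustrative helper names:

```python
SALICYLATE_MW = 138.12  # g/mol, so 1 mmol/L of salicylate ~ 13.8 mg/dL

def mmol_per_l_to_mg_per_dl(mmol_per_l: float) -> float:
    """Convert a salicylate level from mmol/L to mg/dL."""
    # mg/dL = mmol/L * (mg per mmol) / (10 dL per L)
    return mmol_per_l * SALICYLATE_MW / 10

# The therapeutic range quoted in the text, 1.1-2.2 mmol/L, maps to ~15-30 mg/dL:
print(round(mmol_per_l_to_mg_per_dl(1.1), 1))  # 15.2
print(round(mmol_per_l_to_mg_per_dl(2.2), 1))  # 30.4
# The patient's level of 3.5 mmol/L corresponds to ~48 mg/dL:
print(round(mmol_per_l_to_mg_per_dl(3.5), 1))  # 48.3
```

Note that a different molar mass applies to other analytes; glucose (as in Case 1) converts with a factor of about 18 mg/dL per mmol/L.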
Commentary

The clinical presentation of an acute anxiety reaction shows considerable overlap with that of acute salicylate poisoning. Common features are agitation, tachycardia, premature beats, tachypnea, diaphoresis, nausea, and dizziness. Whereas the hyperventilation associated with salicylate overdose is a physiologically adaptive response that compensates for the metabolic acidosis, the hyperventilation of an acute anxiety reaction leads to a respiratory alkalosis, a maladaptive outcome resulting in dizziness, muscular stiffness, and paresthesias. Arterial blood gases will readily distinguish the two.

On seeing someone breathing quickly, the first reaction of many laypeople, and even those who are medically trained, is to assume that hyperventilation is a behavioral problem, typically due to anxiety. Correspondingly, efforts are invariably made to urge patients to slow their breathing down. This is entirely inappropriate in a number of circumstances in which increased alveolar ventilation is required for physiological reasons—for example, sepsis, diabetic ketoacidosis, pulmonary embolus, heatstroke, heart failure, and others, including the present case of a toxic metabolic acidosis.

Several factors led to the error in diagnosis in this case. At the outset, the patient's arrival in the department was dramatic. The focus immediately fell on her behavior, and the implicit assumption was made that emotional upset was her major problem. The rapid breathing was seen as a behavioral problem and, therefore, potentially controllable by the patient. Thus, she was given reassurance and urged to slow her breathing down. This is an example of the fundamental attribution error: attributing the patient's state to her disposition rather than to her physiological condition.
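The point that arterial blood gases readily distinguish the two presentations can be illustrated with a minimal classifier. This sketch assumes conventional reference ranges (pH 7.35–7.45, pCO2 35–45 mm Hg, HCO3 22–26 mmol/L), handles only simple single disturbances (it ignores mixed and compensated disorders), and is illustrative rather than a clinical tool.

```python
def primary_acid_base_disturbance(ph: float, pco2: float, hco3: float) -> str:
    """Classify the primary disturbance from an ABG (single-disorder sketch only)."""
    if ph < 7.35:  # acidemia
        return "metabolic acidosis" if hco3 < 22 else "respiratory acidosis"
    if ph > 7.45:  # alkalemia
        return "respiratory alkalosis" if pco2 < 35 else "metabolic alkalosis"
    return "pH within normal range"

# The case ABG (pH 7.12, pCO2 52, pO2 105, HCO3 14) classifies as metabolic acidosis:
print(primary_acid_base_disturbance(7.12, 52, 14))
# Pure anxiety-driven hyperventilation (e.g., pH 7.50, pCO2 30, HCO3 24) gives
# respiratory alkalosis:
print(primary_acid_base_disturbance(7.50, 30, 24))
```

The contrast between the two outputs is exactly the discrimination the commentary describes: the same outward sign (tachypnea) arising from opposite acid-base states.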
Although a part of her tachypnea may have been due to anxiety and amenable to behavioral control (i.e., deliberately slowing her breathing down), the principal component to her respiratory drive was a metabolic one over which she did not have control. The second error is the more general one of assuming a psychiatric condition explained her presentation. This is one of the psych-out errors,1 in which a behavioral abnormality is viewed as having a psychiatric origin rather than a medical one (see Case 5 and Box 5.1). Any neurotic disorder is a diagnosis of exclusion and should not be made until a variety of medical
conditions have been excluded. The EP witnessing this scenario at the outset may have incorporated the approach already taken by the nurses and started to lean toward a psychiatric diagnosis. This is referred to as diagnosis momentum, classically demonstrated in Case 1. It is the tendency of diagnoses to gather impetus as they move between people, becoming more refined and defined; such diagnoses may gather momentum without gathering evidence. Furthermore, the physician's impression of what was going on at the outset may have been due to groupthink or the bandwagon effect, reflecting a tendency of people to follow a certain behavior or idea blindly, with no critical challenge of the group consensus. This tendency may be augmented by situational factors—in this case, high levels of arousal.2

The EP also elicited a past history of anxiety attacks, which, in combination with the diagnosis momentum, may have disposed him toward the posterior probability error. This is the tendency to attach a probability to an event on the basis of what has happened in the past—that is, if she had had numerous previous diagnoses of acute anxiety reaction, then there was a good chance that this episode was one as well. He also accepted the patient's denial of the use of drugs, which may have reflected a communication error. Many patients do not view aspirin or salicylate as a "drug" and often may not be aware that they have been taking it. There are currently more than 100 brand name products available over-the-counter (OTC) that contain salicylate. It is therefore always advisable to ask patients if they have taken any OTC products (for an example of chronic salicylate toxicity, see Case 22).

Finally, given that women experience anxiety neurosis at approximately double the rate of men,3 there is a possibility here of gender bias: a prevailing tendency to attribute anxiety disproportionately more often to women than to men.
EDs are where acute cases will most likely present, and emergency caregivers are likely to have experienced (have available to them) anxiety reactions at a significantly higher rate in women compared with men—this is manifested as availability bias. See Box 2.1 for probable biases and other error-producing conditions in this case.
BOX 2.1 Probable Biases and Other Error-Producing Conditions
• Anchoring
• Gender bias
• Availability bias
• Psych-out error
• Groupthink/bandwagon effect
• Fundamental attribution error
• Posterior probability error
• Communication error
• Unpacking failure
• Diagnosis momentum
References
1. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003; 78(8): 775–780.
2. Chapman J. Anxiety and defective decision making: an elaboration of the groupthink model. Management Decision. 2006; 44(10): 1391–1404.
3. Remes O, Brayne C, van der Linde R, Lafortune L. A systematic review of reviews on the prevalence of anxiety disorders in adult populations. Brain Behav. 2016; 6(7): e00497. doi:10.1002/brb3.497
Case 3
The Fortunate Footballer
An 18-year-old male was brought to the emergency department (ED) by ambulance. The paramedics' note stated he had been unwell for the past 10 days with signs of a respiratory infection. He had a sore throat, cough, and congestion. He also had nausea, vomiting, cramps, diarrhea, and general fatigue. At triage, his vital signs were: temperature 37.3°C, heart rate 110, respiratory rate 18, and blood pressure 110/60. He was triaged as a level 3. The triage notes essentially confirmed the paramedics' note and added that he was diagnosed with "pneumonia" that day by a doctor at a football game and given a prescription for clarithromycin. He had taken the first dose. He was triaged to an area of low acuity in the department. He was accompanied by his parents and girlfriend. The department was busy, but he was seen within 20 minutes of his arrival. Routine blood work had already been done but had not yet been reported.

On examination, he looked unwell and pale. He otherwise appeared to be a physically fit male. He recounted a 10-day history of respiratory symptoms. He stated that he was feeling unwell at a football game that day and was prescribed an antibiotic by the team doctor, who told him he had pneumonia. The emergency physician (EP) initially believed the patient was a spectator at the game, but the patient confirmed he was actually playing. The physician was somewhat surprised that someone with a 10-day illness should have been playing in a university football game. The mother noted wistfully that "there is an incredible pressure to perform." The game was a local derby between two town universities, and the physician recalled hearing on the news that day that three players had been suspended from the patient's team for unruly conduct off the field; thus, the team was probably short of players. After the game, the patient was taken to a pharmacy to pick up his prescription and then taken home by his parents. His mother was concerned about his general appearance and by the diagnosis of pneumonia by the team doctor.
She decided to call an ambulance to bring him into the ED for further assessment. Head and neck examination was normal, and his neck was supple. His tonsils appeared slightly enlarged, and his ears were clear. On auscultation, the physician heard some scattered rhonchi but good equal air entry. Cardiovascular examination was normal. At this point, the
32 | T h e C o g n i t i v e A u t o p s y
physician was interrupted by the charge nurse, who asked him to assess a disruptive patient. The situation was resolved within approximately 10 minutes, and the physician headed back to the room with the football player. En route, the physician reflected on his findings so far and decided he would order a chest X-ray. When he returned to the room, the patient’s position was unchanged, and he was lying with his chest and upper abdomen exposed from where the sheet had been pulled down for the cardiovascular exam. The physician later recalled feeling that this triggered an examination of the abdomen. It is unusual for patients not to change their position in the bed unless they are too sick to move, and when he returned to the patient, he found this somewhat odd. He had no clear reason otherwise to examine the patient’s abdomen because the patient had not, thus far, complained of abdominal symptoms (although he had told the paramedics he had abdominal cramps and diarrhea). He was surprised to find diffuse tenderness and guarding on palpation of the abdomen. He asked the patient how long he had had abdominal discomfort, and the patient replied that it was just that day. The physician inquired further if the patient was injured in the game. He replied that, as the team quarterback, he had been tackled and hit several times during the course of the game, but he recalled no specific injury. The physician left the room and checked the patient’s blood work. It was normal other than a low hemoglobin at 10.6 g/dL. He was concerned about the possibility of an intra-abdominal injury and ordered a computed tomography (CT) scan of the abdomen. It confirmed a massive injury to the spleen with extensive bleeding into the abdominal cavity (grade V). The general surgery service was consulted, and a blood type and crossmatch were ordered, as well as a repeat hemoglobin. The patient was promptly taken to the operating room. His hemoglobin came back at 6 g/dL. 
He underwent a splenectomy. He was transfused with several units of blood postoperatively and made a full recovery. His blood work showed a positive heterophile test confirming infectious mononucleosis.
Commentary

Splenomegaly occurs in approximately 50% of cases of infectious mononucleosis and is maximal in the second to third week. Spontaneous rupture of the spleen is extremely uncommon, but rupture may occur with even moderate trauma. The highest risk of injury is within the first 3 weeks of onset of illness.1 Based on the extent of splenic trauma evident on the CT scan, low-grade (I–III) injuries may be managed non-operatively to conserve the spleen. If higher grades (IV and V) cannot be managed with angioembolization, a splenectomy is required.2 Non-operative management has increased significantly in recent years. Immune function appears to be mostly preserved in those who have been managed non-operatively; however, those who have undergone a splenectomy should carry a medical alert bracelet, neck chain, or card and consult their physician for advice on vaccination and immunizations.2

Several error-producing conditions and biases are illustrated in this case. The context in which decisions are made is often highly relevant.3 The present context was a football
game between two local universities. The patient was a football player who was unwell for approximately 10 days with significant systemic symptoms. Like most players in team sports, especially at the varsity level, he did not want to let the team down, especially in this local derby in which there is a long-standing rivalry with the opposing team. He was a quarterback, a key position on the team, and not easily replaced. The situation was exacerbated by several members of the team being currently suspended following some pregame exuberant behavior. His mother commented on the pressure he was under to play. Nevertheless, against her better judgment, and probably his own, he decided to play. He was not performing well and was tackled heavily several times. His general demeanor and appearance prompted an examination by the team physician, who diagnosed pneumonia.

Physicians' engagement in sports medicine occurs at various levels. At a collegiate or professional level, their involvement may be as an independent contractor or even full-time. At a community level, it will be considerably less, perhaps entirely voluntary, requiring no more than attendance at games. During a game, their immediate concern is the health and safety of the players. Secondary but important decisions, as far as the team is concerned, center on the ability to continue to play or return to play (RTP). Given the obvious time and physical constraints on the field, physicians may sometimes be compromised in that they cannot take a full history or conduct an appropriate examination. Neither can they access imaging techniques or other tests to fully evaluate injuries before making a decision about the game in progress. These are similar constraints to those that apply in the informal process of curbside (or corridor) consultations in medicine, which are fairly common. They typically involve a physician or nurse seeking an opinion from another physician on a medical complaint.
Although the advice given may be well-intentioned, a major difficulty is that the rules of the doctor–patient encounter are different. Typically, the physician departs from their regular procedure and does not take an appropriate history or perform a physical examination.4,5 Important things may get missed, often through a failure to unpack sufficient information. Golub describes it thus:

A physician, who is in a noisy, crowded hallway en route elsewhere, or is buttonholed outside his office, that is, "on the curb," may be distracted from offering the kind of thoughtful opinion that may come from a formal consultation or thorough discussion.4

Although the physician at the football field is not exactly curbside, the circumstances share some common features. The agenda of sports medicine physicians follows a different paradigm than that of normal clinical practice. The decision-making process has the additional onus of taking into account issues such as pressure from the athletes themselves, their family, coaches, and team; how critical the game is in the season; and the availability of alternate players.6 These modifiers may exert an influence on the decision-making process that is beyond that generally found in normal clinical practice. Although clinicians who undertake sports medicine have the
34 | The Cognitive Autopsy
benefit of essentially healthy patients, one downside is that their decision-making may be burdened by these additional decision modifiers. In the present case, without the benefit of a more thorough history and examination of the patient, the team physician anchors to the obvious respiratory complaint and misdiagnoses pneumonia. Although he may have observed the footballer being hit several times, it was entirely within context and unlikely to have attracted undue concern. It would have been a long shot to connect a 10-day history of respiratory symptoms with the possibility of infectious mononucleosis, an enlarged and fragile spleen, and a subsequent splenic bleed secondary to trauma on the football field. The patient’s mother was on the opposite side of the RTP issue. She thought it unreasonable that he had been cleared to play in the game in the first place. Furthermore, if she had been sufficiently reassured by the team physician, she may not have sought further assessment, possibly resulting in a less fortunate outcome. When the paramedics picked the patient up at home, they similarly anchored to the respiratory complaint, noting that he had been given the diagnosis of pneumonia at the football field. There was no evidence in their record that the patient’s abdomen had been examined. The diagnosis continued to gather momentum at triage in the ED. The EP initially anchored in the same way, and may well have confined his assessment to the chest had his usual routine not been interrupted. The patient, too, inadvertently contributed to diagnosis momentum by emphasizing his respiratory symptoms. Patients will often characterize their illness as they see it and may inadvertently frame health care providers in the process (see Case 35). A significant consequence of framing is that the framer sets up the person being framed to see things in a particular way. Seeing what you expect to see is referred to as ascertainment bias. 
Once framing occurs, ascertainment bias may not be far behind. A summary of the probable biases and other error-producing conditions is provided in Box 3.1.
BOX 3.1 Probable Biases and Other Error-Producing Conditions
• Context issues
• Return-to-play modifiers
• Anchoring
• Unpacking principle
• Diagnosis momentum
• Framing
• Ascertainment bias
The Fortunate Footballer | 35
References
1. Becker JA, Smith JA. Return to play after infectious mononucleosis. Sports Health. 2014; 6(3): 232–238.
2. Hildebrand DR, Ben-Sassi A, Ross NP, Macvicar R, Frizelle FA, Watson AJM. Modern management of splenic trauma. BMJ. 2014; 348: g1864. doi:10.1136/bmj.g1864
3. Croskerry P. Context is everything or How could I have been that stupid? Healthcare Q. 2009; 12(Special issue): 167–173.
4. Golub RM. Curbside consultations and the viaduct effect. JAMA. 1998; 280(10): 929–930.
5. Croskerry P. Alternatives to conventional medical diagnoses. In: Croskerry P, Cosby KS, Graber M, Singh H (Eds.), Diagnosis: Interpreting the Shadows. Boca Raton, FL: CRC Press, 2017; 55–70.
6. Matheson GO, Shultz R, Bido J, Mitten MJ, Meeuwisse WH, Shrier I. Return-to-play decisions: Are they the team physician’s responsibility? Clin J Sport Med. 2011; 21(1): 25–30.
Case 4
An Incommoded Interior Designer
A 75-year-old female presented to the emergency department (ED) of a community health center complaining of neck pain. For 2 days, she had been feeling generally unwell. She thought she might have “a slight chill” and reported a mild headache. Neck pain had developed during the past 24 hours. She was known to the emergency physician (EP), who lived in the same community. Despite her advancing years, she was active and continued to work as an interior designer. Her husband had died several years earlier and she now lived alone. She was well regarded and popular with her neighbors and her customers. She immediately recognized the physician and appeared happy to see a familiar face in a busy ED. Her vital signs were within normal limits: 37.2, 90, 18, 122/70. She was in no acute distress and apologized for coming to the ED, but her family physician had retired approximately 6 months earlier and she hadn’t yet acquired a replacement. She was on no medication and took no over-the-counter medications or supplements. Her past medical history was unremarkable. Her cardiac and respiratory examinations were normal, as was her skeletal exam. She experienced some discomfort turning her head from side to side, and flexion was slightly limited and painful. Neurological exam was normal, with negative Kernig’s and Brudzinski’s signs. Her headache had mostly settled. She appeared very unassuming and gracious, and, noting how busy the department was, did not want to waste anyone’s valuable time. Given that she hadn’t seen a doctor for more than a year, the EP decided to do some routine blood work and urinalysis. Although he was not unduly concerned that anything serious was going on, he was anxious not to miss anything. The physician continued seeing other patients and eventually returned to review her blood work and urinalysis, which were normal. He explained to her that he had not uncovered anything untoward and that she appeared in reasonable health.
She enquired what he thought might be causing her neck pain, and he ran down his differential with her, which included musculoskeletal injury and meningitis. In fact, there had been two confirmed cases of meningitis in
teenagers recently in the community. She appeared slightly concerned at the mention of meningitis and asked if it was a likely diagnosis for her. The physician reassured her it was not. She did not appear sick, had no fever and no significant headache, and her white blood cells were in the normal range. Also, he noted to her that he had performed two tests on her for meningitis, both of which were negative. She gently asked if that meant there was no possibility of meningitis, and he confessed that they both had low sensitivity and did not rule meningitis out when they were normal. She then asked if it was possible to have meningitis with normal blood work, and again, he admitted that older patients could have normal blood work, and be afebrile, despite significant illness. He explained further that there was a test that required examining the cerebrospinal fluid which would definitively rule out meningitis, but he did not think there was a strong enough case for subjecting her to the procedure. However, it was clear at this juncture that he had raised her anxiety level by discussing the possibility of meningitis, and having reviewed his thoughts out loud, he began to have second thoughts about his judgment. Furthermore, the patient’s general demeanor, her agreeableness, and her pleasant disposition strengthened his desire not to miss anything potentially serious, and he suggested to her that the test might be done after all. The patient was agreeable to the procedure, and it was performed effortlessly. Opening pressure was normal, and the fluid was clear and colorless. Because the laboratory was fairly busy, it was several hours before the results came back. The cerebrospinal fluid was completely normal. The physician reviewed the possibility of a post-lumbar puncture headache with her. She was further reassured and told to follow up through a walk-in clinic until she could get a new family doctor. In all, she had spent almost 4 hours in the ED. 
As she was preparing to leave, she thanked the nurse and physician for their care and thoroughness, and she commented that she had probably strained her neck hanging drapes a few days earlier. Several weeks later, the physician ran into the patient at a store in the community, and she reported she had no side effects from the procedure and was doing well.
Commentary
Although no misdiagnosis occurred and no harm was done in this case, when the physician was later reviewing his management with a colleague, he concluded that his positive affect toward the patient had clearly influenced his judgment (affective bias). She was a bright, unassuming, likeable person with a pleasant demeanor who was known to him, and he felt he had probably overinvestigated her (commission error) for what was probably a minor complaint. By performing a lumbar puncture (LP), he was avoiding the intense chagrin that he would have experienced had he not done the procedure and she actually had meningitis. He thought, too, that the occurrence of two recent local cases of meningitis in high school students might have influenced his decision (availability bias). He admitted that had he not known the patient, and had she been someone else presenting with what were relatively minor complaints, she would likely have been dealt with in minutes rather than the hours it actually took. In short, he exercised an option to minimize chagrin over a patient toward whom he felt positive affect. By adopting an explicit disconfirming strategy, the LP, he removed all doubt about the possibility of meningitis.
An Incommoded Interior Designer | 39
As a general caveat, clinicians should exercise care with patients who are familiar to them. This applies especially to oneself, friends and acquaintances, family members, and colleagues, and may dispose the clinician toward a familiarity bias. More than a century ago, the American Medical Association warned that a family member’s illness may “obscure the physician’s judgement and produce timidity and irresolution in his practice” (quoted in Croskerry1). Checklists for physicians diagnosing physicians have been developed.2 In broad terms, two major types of decisions are made in the care of patients: decisions about diagnosis and those that involve the management of disease. We prefer to believe that these decisions are rational and objective, and for the most part, they are. However, as noted in the introductory chapter, many factors are involved in the diagnostic process, making it the most complex activity in which physicians engage, and a high level of rational decision making requires some conscientiousness. Our affective engagement—that is, how our emotions are involved in the process—is probably one of the more subtle factors, and one in which awareness and understanding may be limited. Many of us are simply unaware of how emotions affect the majority of decisions that we make in life. Chagrin—the upset, sadness, disappointment, and even vexation resulting from a failure or mistake in decision making—is not uncommon in medicine;3 it may affect both the diagnostic process and the management of illness. In the fields of economics and psychology, it is referred to as regret aversion or anticipated regret4—we generally try to reduce the likelihood that we will experience regret in life. The avoidance of regret is a significant modifier of behavior and central to how we learn from experience. Ultimately, it facilitates the refinement of decision making and the attainment of rationality. Choosing wisely is good for both patient and physician.
Although this discussion has focused on the potential chagrin experienced by the clinician, in practice the burden is usually spread by engaging the patient and family/friends in a shared decision-making process in which the clinician can clearly articulate the advantages and disadvantages of different options. This probably applies more to the treatment process than the diagnostic one. Rather than make implicit assumptions about a patient’s preferences, an explicit, verbal articulation of the options and consequences serves to carry the discussion forward in a more analytical, objective, and rational manner. A summary of the probable biases and other error-producing conditions can be found in Box 4.1.
BOX 4.1 Probable Biases and Other Error-Producing Conditions
• Affective bias
• Familiarity bias
• Availability
• Chagrin/anticipated regret
• Error of commission
References
1. Croskerry P. Alternatives to conventional medical diagnoses. In: Croskerry P, Cosby KS, Graber M, Singh H (Eds.), Diagnosis: Interpreting the Shadows. Boca Raton, FL: CRC Press, 2017; 55–70.
2. Rosvold EO. Doctor, don’t treat thyself. https://psnet.ahrq.gov/webmm/case/71/Doctor-Dont-Treat-Thyself. Accessed December 20, 2018.
3. Feinstein AR. The “chagrin factor” and qualitative decision analysis. Arch Int Med. 1985; 145(7): 1257–1259.
4. Loomes G, Sugden R. Regret theory: An alternative theory of rational choice under uncertainty. Econ J. 1982; 92(368): 805–824.
Case 5
Teenage Tachypnea
An 18-year-old female was sent to the emergency department (ED) of a tertiary care hospital from a nearby psychiatric hospital for assessment. She had been recently admitted there for treatment of an anxiety disorder and depression. According to the psychiatrist’s note, she had frequent episodes of uncontrollable hyperventilation, associated with carpopedal spasm and loss of consciousness. Her main complaint was intermittent shortness of breath, and the psychiatric staff wanted to rule out a non-psychiatric diagnosis, specifically a chest infection. The ED was moderately busy at the time she was seen. Initially, she appeared anxious and complained of feeling uncomfortable being in the ED. She was reassured by the triage nurse and encouraged to slow her breathing down. Her vital signs at triage were: 37, 108, 22, 117/65, 94. She initially received a triage-level assignment of 4 and was transferred to a cubicle. She was seen and assessed by a junior resident. She had a history of overdose and was currently being treated for anxiety and depression. She had been treated by her family doctor with buspirone and a benzodiazepine. In the psychiatric short-stay unit, she was being weaned off the benzodiazepine and had been started on the selective serotonin reuptake inhibitor citalopram. She had been diagnosed with asthma in the past but was not currently being treated for it. On exam, she was mildly obese. She stated that she had been experiencing shortness of breath on and off for approximately 2 weeks. Cardiovascular, respiratory, and the head, eyes, ears, nose, and throat examinations were normal. Routine blood work was normal, an electrocardiogram was normal other than a mild tachycardia of 106, and chest X-ray was normal. The resident could see no evidence of pneumonia on her chest X-ray and believed there was no other significant chest infection. 
In the absence of any findings in the patient’s blood work and chest X-ray, the resident believed the patient’s complaints were explainable on the basis of her psychiatric condition, and possibly exacerbated by the recent change in her medications. He believed she could be safely discharged back to the short-stay psychiatric unit.
The patient was subsequently assessed by the attending physician, who confirmed the resident’s history and findings. However, he also elicited a history of heavy cigarette smoking and that the patient was on a birth control pill. He ordered a D-dimer test but was told this would take several hours due to a backup at the lab. Approximately an hour later, almost 8 hours after the patient had first arrived in the ED, she became very agitated and tachypneic. Several nurses were trying to calm her down, and she had been given a paper bag to breathe into. Soon afterwards, she lost consciousness. She was found to have pulseless electrical activity and then went into asystole. She could not be resuscitated. At autopsy, she was found to have pelvic vein thrombosis extending from the femoral vein, and massive saddle emboli in her lungs as well as multiple clots of varying ages.
Commentary
There is a failure at the outset to recognize that this patient was at risk for pulmonary embolism. She had at least three risk factors: she was obese, on the birth control pill, and a heavy cigarette smoker. Clinical risk factors are summarized by the mnemonic THROMBOSIS1 shown in Box 5.1. The first bias in this case is the premature diagnosis in the psychiatric referral of “chest infection.” Although patients on an inpatient psychiatric unit are considered to be “medically cleared,” by sending the patient to the ED the psychiatric staff are correctly questioning whether there might now be a new medical diagnosis but, unfortunately, label it as a “chest infection.” Diagnostic labels tend to get attached to patients, and diagnosis momentum may be established at an early stage. The problem is perpetuated here by the triage nurse urging the patient to slow her breathing down, assuming that the patient’s hyperventilation was due to anxiety and not to an underlying physiological cause—this represents a triage cueing error, and it led to assigning her a low-priority triage score on the basis of a presumed
BOX 5.1 Clinical Risk Factors for Deep Vein Thrombosis
T: Trauma, travel
H: Hypercoagulable, hormone replacement
R: Recreational drugs (intravenous drugs)
O: Old age (>60 years)
M: Malignancy, medical illness
B: Birth control pill, blood group A
O: Obesity, obstetrics
S: Surgery, smoking
I: Immobilization (general and/or local)
S: Sickness
Teenage Tachypnea | 43

TABLE 5.1 Features and manifestations of diagnosis in psychiatric and non-psychiatric patients in the emergency setting

Feature | Non-Psychiatric Patient | Psychiatric Patient
Physical manifestation | May be present | Likely absent
Behavior | Usually cooperative, compliant | Passive, sometimes noncompliant
Attitude of patient | Generally appreciative | Neutral, sometimes unappreciative
Diagnosis | Mostly objective | Mostly subjective
Workup | Relatively fast | Usually slow
Lab/imaging studies | Contributory | Mostly noncontributory
Management | Relatively clear | More challenging
End point | Often definitive | Poor, revolving
Compliance | Usually good | Uneven
Attitude of staff | Good, supportive | Occasionally unsupportive
Presence of bias | Occasional | More common

Source: Adapted from Croskerry and Wears.3
non-emergent psychiatric condition and sending her to a regular bed in the ED. This is an example of the maxim “Geography is destiny.”2 The ED is a physically structured environment in which different locations (geography) are associated with different outcomes (destinies). As discussed in Case 1, placing a patient in a particular location (cardiac room) establishes an ascertainment bias, which sets people up to see what they expect to see (i.e., a cardiac problem). In the present case, a regular bed suggests a routine problem and not a life-threatening one. When any patient is sent to an ED for evaluation, it is preferable that no specific diagnosis be made. In this case, there are multiple, wide-ranging diagnoses on the differential for dyspnea. The least biased way of making a referral, and avoiding ascertainment bias, is to ask for an evaluation of the patient’s signs and symptoms and leave the rest to the emergency physician. Overall, the main biases in this case appear to arise from the patient being referred from a psychiatric service. Generally, psychiatric patients are vulnerable to several categories of error. Once a patient is labeled as “psychiatric,” they tend to be seen differently, in part because they do have some characteristics that distinguish them from non-psychiatric patients, especially in the acute care setting (Table 5.1).3 Often, they become vulnerable to a variety of biases collectively referred to as psych-out errors, further characterized in Box 5.2.4 It seems likely that the resident reading the psychiatric referral would have noted the past history of hyperventilation syndrome, associated with carpopedal spasm and loss of consciousness. It is far easier to attribute the patient’s presenting complaints to a recurrence of this condition rather than search for new explanations, and it is easy to dismiss tachycardia and tachypnea in a psychiatric patient, attributing these symptoms to anxiety. 
This is referred to as posterior probability error—if it has happened repeatedly in the past, there is a greater likelihood that it is the same problem now. However, anxiety state is, at the outset, a diagnosis of exclusion. Had the resident ordered arterial blood gases, he would have easily detected
BOX 5.2 Psych-Out Errors
1. Diagnostic reliability:5 Compared with other diseases, psychiatric diagnoses generally are not associated with clear, tangible, measurable characteristics. Currently, there are no biological measures, blood tests, or imaging modalities that can detect and define a particular psychiatric diagnosis, as can be done with most medical diagnoses. The diagnosis is established largely on the basis of symptoms reported by the patient or collateral reports of their behavior, and establishing a match with a consensus that has been reached by experts. Thus, there is less reliability, less validity, and more uncertainty, ambiguity, and, inevitably, error surrounding psychiatric diagnoses.
2. Medical conditions masquerading as a psychiatric disorder: A significant number of medical conditions are associated with common psychiatric symptoms; for example, anxiety may be associated with hypoglycemia, pulmonary embolism, pheochromocytoma, hyperthyroidism, hypothyroidism, intracranial tumor, hyperadrenalism, and others. Symptoms of depression may be due to an underlying alcoholism, hyperthyroidism, hyperadrenalism, adrenal cortical insufficiency, pernicious anemia, hypoglycemia, intracranial tumors, pancreatic carcinoma, multiple sclerosis, systemic lupus erythematosus, acquired immune deficiency syndrome, and other factors. A variety of other medical mimics of psychiatric disease have been described.6–9
3. Overlooking psychiatric illness in medical patients: This may arise from attributing symptoms in the second group (see No. 2) to an established medical condition rather than a psychiatric condition, comorbid or otherwise. Other examples include focusing on the medical diagnoses of patients with somatic symptom disorders and conversion disorders, and not giving the possibility of an underlying psychiatric condition sufficient consideration. It is often far easier to address the proximal medical symptoms of the patient than focus on the more distal psychiatric issue. Nevertheless, clinicians must make every effort to exclude a medical cause before psychiatric diagnoses such as these are considered.
4. Underestimating medical comorbidity in psychiatric patients: Although psychiatric patients generally have more medical problems than non-psychiatric patients,10 their medical comorbidities are consistently underestimated. Accounts of significant medical problems being overlooked in psychiatric patients are abundant. Generally, it is more likely that a significant comorbid illness will be missed in the psychiatric patient. In the present case, this may have led to the failure to take a history of cigarette smoking and not
noting that the patient was on birth control medication. These are errors of omission.
5. Vulnerability to attribution errors: People have a general tendency to explain the behavior of others by attributing it to either situational factors or personal qualities or traits. Fundamental attribution error (FAE) occurs when we explain other people’s behavior by overestimating their personality characteristics and underestimating the influence of the situation they might be in. Although FAE may occur for a wide spectrum of behaviors, for psychiatric patients it is especially the case in the management of personality disorders, most notably cluster B (borderline, narcissistic, histrionic, and antisocial). It is often difficult for health care personnel not to take personal offense at some of the behaviors of these patients and to remind themselves that the behavior is a manifestation of the disease. It is easier to make the FAE when we believe the person is in control of their own behavior.
6. Stigma of psychiatric illness: In most cultures, there is a widely acknowledged stigma of mental illness among the general public. It remains one of the most deep-rooted areas of bias in the way that disease is perceived. People are generally uncomfortable with mental illness and tend to avoid those who suffer from it. Historically, and perhaps for hundreds of thousands of years throughout our evolution, it has been the mysterious nature of mental illness that has made these diseases frightening. They are different from other health problems—we easily understand and readily sympathize when someone has limitations to their physical health but are less willing to help when someone is incapacitated mentally. Even within medicine there is a bias against psychiatry—it is viewed as unscientific and, as a specialty, attracts the least interest among medical students.
the typical respiratory alkalosis that occurs with hyperventilation syndrome, as well as the probable hypoxemia associated with acute, severe pulmonary embolism (PE). Furthermore, the resident did not see any evidence of pathology on the chest X-ray and may have equated this with normal lungs. Although he was looking for manifestations of a chest infection, it would have been helpful if the patient’s PE had been apparent. However, approximately 25% of cases of proven PE will have a normal chest X-ray (CXR). A CXR alone has inadequate sensitivity and specificity to rule PE in or out, even if the PE is massive.11 PE is one of those conditions in which the absence of findings on CXR should raise an index of suspicion—it is like the dog that isn’t barking (see Case 7). In the present case, there was also a failure to appreciate that if the tachypnea was due to anxiety, then the patient’s oxygen saturation, measured by pulse oximetry, should not have been less than 95%.
BOX 5.3 Probable Biases and Other Error-Producing Conditions
• Ascertainment bias
• Premature closure
• Diagnosis momentum
• Triage cueing error
• Psych-out error
• Error of omission
• Posterior probability error
• Knowledge deficit
• Time delay error
Finally, the delay in assessment due to the low triage assignment, the additional delay due to a resident assessment, and the further delay due to waiting for the result of a laboratory test all contributed to a time delay error. Had she been seen, assessed, and correctly diagnosed promptly by the attending physician, she would have been monitored more closely and might have received treatment that could have changed the outcome. Pulmonary embolism is a diagnosis that causes considerable difficulty. The symptoms and signs are nonspecific and vary significantly. They may mimic many other diseases, and PE may even present with almost no symptoms. One-third of patients may have no chest pain, and up to approximately half do not have tachycardia. In one study, the diagnosis was missed 50% of the time on initial presentation.12 It should always be considered in the context of dyspnea. Physical examination may be entirely normal, especially if there is no infarction. In particular, the lungs may sound completely clear. Patients with PE may have intermittent shortness of breath because of changes in the location of embolic material in the lungs and ongoing and evolving physiological adaptations. Importantly, symptoms may not match the degree of ventilation–perfusion mismatch. Those with a small mismatch may experience marked dyspnea, whereas those with a significant one may have only mild dyspnea. As noted previously, the CXR may be normal in many proven PE cases, and routine blood work and an electrocardiogram will often be normal. A summary of the probable biases and other error-producing conditions in this case is given in Box 5.3.
References
1. Chopra A. Thrombophlebitis and occlusive arterial disease. In: Tintinalli J, Kelen G, Stapczynski J (Eds.), Emergency Medicine: A Comprehensive Study Guide (6th ed.). New York, NY: McGraw-Hill, 2004; 409–418.
2. Perry SJ. Profiles in patient safety: Organizational barriers to patient safety. Acad Emerg Med. 2002; 9(8): 848–850.
3. Croskerry P, Wears RL. Safety errors in emergency medicine. In: Markovchick VJ, Pons PT (Eds.), Emergency Medicine Secrets (3rd ed., Chapter 7, pp. 29–37). Philadelphia, PA: Hanley & Belfus, 2003.
4. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003; 78(8): 775–780.
5. Aboraya A, Rankin E, France C, El-Missiry A, John C. The reliability of psychiatric diagnosis revisited: The clinician’s guide to improve the reliability of psychiatric diagnosis. Psychiatry (Edgmont). 2006; 3(1): 41–50.
6. Knight SR, Mallory MNS, Huecker MR. Medical mimics of psychiatric conditions, Part 1. Emerg Med. 2016; 48(5): 202–211.
7. Knight SR, Huecker MR, Mallory MNS. Medical mimics of psychiatric conditions, Part 2. Emerg Med. 2016; 48(6): 258–265.
8. Dorsey ST. Medical conditions that mimic psychiatric disease: A systematic approach for evaluation of patients who present with psychiatric symptomatology. Emerg Med Rep. September 2002. AHC Media/Relias/Bertelsmann Education Group. https://www.reliasmedia.com/articles/109640-medical-conditions-that-mimic-psychiatric-disease-a-systematic-approach-for-evaluation-of-patients-who-present-with-psychiatric-symptomatology. Accessed December 20, 2018.
9. McKee J, Brahm N. Medical mimics: Differential diagnostic considerations for psychiatric symptoms. Ment Health Clin. 2016 Nov; 6(6): 289–296.
10. De Hert M, Correll CU, Bobes J, et al. Physical illness in patients with severe mental disorders: I. Prevalence, impact of medications and disparities in health care. World Psychiatry. 2011; 10(1): 52–77.
11. Stein PD, Willis PW, de Mets DL. Chest roentgenogram in patients with acute pulmonary embolism and no pre-existing cardiac or pulmonary disease. Am J Noninvasive Cardiol. 1987; 1(3): 171–176.
12. Pineda LA, Hathwar VS, Grand BJ. Clinical suspicion of fatal pulmonary embolism. Chest. 2001; 120(3): 791–795.
Case 6
The Backed-Up Bed Blocker
A 59-year-old male presented to the emergency department (ED) of a community health center with a complaint of constipation. He was triaged low acuity and assigned a non-urgent bed. The department was extremely busy. He was seen and assessed by a nurse covering that area. She approached one of the attending physicians and gave a brief description of the patient and his problem. Given how busy the department was, she had told the patient it would be several hours before he would be seen, but that if the ED physician gave an order for a laxative, he might go home. She noted to the physician that the patient was otherwise well and that the only reason he had come to the ED was because he could not see his family doctor. She conveyed her irritation at what she viewed as an inappropriate visit to the ED. She also said the patient was a little “weird.” When the physician enquired further about this, she said the patient was making grunting noises. The physician was reluctant to allow the patient to be discharged without assessing him but did accede to his being administered an enema, provided there were no contraindications, while awaiting further assessment. Approximately 1 hour later, the physician asked the nurse how the patient was doing and was told the enema had been ineffective. She asked the physician if the patient could be given an enema to take home so they might free up the bed for some sick patients. At this point, the physician went to assess the patient. The patient’s vital signs at admission were stable, but he appeared to be in some discomfort with lower back pain that he attributed to lifting a garbage can 4 days earlier. The physician noted that the patient made sporadic jerking movements of his head and occasional grunting sounds. His wife, who was present at the bedside, apologetically explained that her husband had lifelong Tourette’s syndrome. He also had coronary heart disease and hypertension.
He had no other significant history and notably no gastrointestinal problems. On examination, his abdomen was soft with normal bowel sounds, although there was some lower abdominal diffuse fullness and tenderness on palpation. Chest and cardiovascular examinations were normal. He appeared to have mild bilateral weakness on straight leg raising and reduced lower extremity reflexes. Rectal exam revealed reduced sphincter tone and soft stool.
50 | The Cognitive Autopsy
There was no blood or melena. The patient complained of urinary urgency during the rectal exam and revealed that he had not urinated since the previous day. He continued to be unable to urinate and was subsequently catheterized for 1,000 mL; this explained his abdominal tenderness on exam, although it was not appreciated as a distended bladder at the time. A computed tomography scan revealed congenital narrowing (stenosis) of his lumbar spinal canal, with an L4–L5 disk bulge. He was referred to neurosurgery, which admitted him with suspected cauda equina syndrome (CES). Later, magnetic resonance imaging (MRI) was performed, which showed a large right-sided L1–L2 disk herniation with severe canal compromise. Same-day surgery was performed in which he underwent bilateral L1–L2 laminectomy and L1–L2 right-sided microdiscectomy. He had an uneventful postoperative course and was discharged 2 days later with full bowel and bladder function.
Commentary
The spinal cord terminates at approximately L1–L2. The lumbar, sacral, and coccygeal spinal nerves that resemble a horse’s tail (cauda equina) continue down the spinal canal to innervate the bladder, bowel, and lower extremities. CES occurs when a disk herniates into the spinal canal and compresses these spinal nerves, as graphically illustrated in Figure 6.1. It is estimated
FIGURE 6.1 MRI showing a large disk herniation (arrow) at the L4–L5 disk space causing severe compression of the cauda equina. Source: Reproduced with permission from Samandouras.1
The Backed-Up Bed Blocker | 51
that approximately 2% of disk herniations lead to CES, but the number may be lower because not all disk herniations are seen and assessed to enter the medical record. It is a surgical emergency. If not treated promptly, preferably within 24 hours but not more than 48 hours, lasting damage may result, leading to chronic low back pain and sciatica; motor deficits up to and including paraplegia; sensory deficits; and bladder and bowel dysfunction, including impotence.1 In one series of 332 patients, there was no clear relationship between the incidence of CES and the level of injury (Table 6.1).2 Motor deficits are less common at lower levels. Both bladder and bowel dysfunction typically herald the onset of CES. A variety of factors other than disk herniation are associated with CES: trauma, hematomas, iatrogenic (postoperative) causes, spinal tumors, metastatic tumors, degenerative disease, inflammatory disease, and epidural lipomatosis.1 This case illustrates a number of important issues. The first concerns throughput pressure in an ED. The desire to get patients through the ED and free up beds can exert undue pressure on decision makers, nurses and physicians alike, to short-circuit their decision making and, inevitably, take risks with patients. Hogarth referred to the emergency department as a “wicked environment.”3 Many factors in such environments may contribute to poor decisions. In the present case, while the nurse was responding to pressure to free up a bed for sicker patients, she inadvertently fell into the trap of assuming that the patient had a benign complaint. The patient initially framed his complaint to the triage nurse as constipation, and this appeared in the triage note, becoming a triage cueing error.
His problem was more accurately described as “unable to have a bowel movement,” which for most people translates into “constipation.” That label led to premature closure, at least on the part of the patient’s nurse, who did not consider the complaint appropriate for an ED. Although his complaint was constipation in the sense that he could not have a bowel movement, the actual problem was nerve compression by a herniated disk, which prevented him from having one. This is an unpacking problem: neither the triage nurse nor the patient’s nurse unpacked sufficient information to discover the cause of the patient’s problem; both were simply judging things at face value. There is also a hint of fundamental attribution error (FAE), in which the nurse viewed the patient as having made an inappropriate decision to come to the ED for a problem that should have been dealt with by his family doctor or a walk-in clinic. She augmented this by describing him as “weird,” with the implication perhaps that he was incapable of making

TABLE 6.1 Level of herniation and cauda equina syndrome

Level of Herniation    Incidence of cauda equina syndrome (%)
L1–L2                  27
L2–L3                   9
L3–L4                  26
L4–L5                  16
L5–S1                  22

Source: From Ahn et al.2
an appropriate decision. His “weirdness” was actually due to Tourette’s syndrome, a condition first described in 1885 by the French neurologist Georges Gilles de la Tourette. It is currently classified as a mental disorder in the Diagnostic and Statistical Manual of Mental Disorders and has a prevalence of at least 1% of the population, possibly higher. It is chronic and lifelong, but the severity of symptoms declines in the late teens and early 20s. It is associated with other neurobehavioral disorders: attention deficit/hyperactivity disorder, obsessive–compulsive disorder, and depression. Males are affected three times more often than females. The failure to elicit this critical piece of history was also an unpacking problem. Had the nurse pressed a little further, she would have uncovered an explanation for his behavior. In addition to FAE, part of the difficulty the nurse had with the patient may be related to any of the other psych-out errors described in Case 5 (see Box 5.2). Perhaps, having formed the opinion that the patient’s behavior was abnormal, she categorized him as having a psychiatric problem. People in general show an antipathy toward, and may even be fearful of, those with mental health issues. For the ED, especially when it is busy, patients with tangible, clear-cut, defined medical conditions are viewed as less challenging. The diagnostic process can be augmented by good teamwork, with each member of the team bringing their particular skills to bear on the problem. For the most part, teamwork optimizes the care of the patient, but occasionally the influence of particular individuals may adversely affect outcomes. Fortunately, this did not happen in the present case, but had the physician been junior and/or inexperienced and acceded to the nurse’s pressure to discharge the patient from the ED, the patient might have suffered a lifelong disability.
Finally, we acknowledge the recurring problem in the ED and in family practice where presenting problems are at their most undifferentiated. Thus, serious conditions may manifest as simple ones. It is part of a broad signal–noise problem. Clinicians need to maintain surveillance on all that they see and make sure they have separated critical signals from distracting noise: “Indigestion” may be the presentation of an acute coronary syndrome; “migraine” headaches may be subarachnoid bleeds; a “cold” can be the presentation of cavernous sinus thrombosis, epiglottitis, meningitis, or other life-threatening conditions; and constipation may be the presentation of a dissecting abdominal aneurysm or, in the present case, a neurosurgical emergency. Not infrequently, wolves show up in sheep’s clothing. Clinicians need to beware of how things are framed to them. Patients rarely try to mislead clinicians, but they can only describe their problems in their own terms. Note: This case served as the inspiration for a comic case that was submitted to Annals of Internal Medicine. It was rewritten by Dr. Michael Green and illustrated by Ray Rieck.4 Green and Rieck were originally the creators of Case 41, “Missed It,” published in the same journal.5 A summary of the probable biases and other error-producing conditions is provided in Box 6.1.
BOX 6.1 Probable Biases and Other Error-Producing Conditions
• System overload
• Framing
• Triage cueing error
• Premature closure
• Unpacking principle
• Fundamental attribution error
• Psych-out error
• Signal–noise issue
References
1. Samandouras G. Cauda equina syndrome. In: Samandouras G (Ed.), The Neurosurgeon’s Handbook. New York, NY: Oxford University Press, 2016; pp. 839–841.
2. Ahn UM, Ahn N, Buchowski JM, et al. Cauda equina syndrome secondary to lumbar disc herniation: A meta-analysis of surgical outcomes. Spine. 2000; 25(12): 1515–1522.
3. Hogarth RM. Educating Intuition. Chicago, IL: University of Chicago Press, 2001.
4. Green MJ, Croskerry P, Rieck R. The constipated bed blocker. Forthcoming.
5. Green MJ, Rieck R. Missed it. Ann Intern Med. 2013; 158(5 Pt. 1): 357–361.
Case 7
The English Patient
A 55-year-old male presented to the emergency department (ED) of a community hospital at 03:00 hours with a chief complaint of headache. His vital signs at triage were 37, 88, 18, 150/90, and the emergency physician’s (EP1) notes described him as being “in no significant distress.” The patient described a global headache that came on approximately 4 hours earlier while he was brushing his teeth, just before going to bed. He recalled initially experiencing zigzag lines in his vision and nausea. He admitted to occasional headaches but nothing like this before. Specifically, there was no history of migraine. The physical examination was normal with no neck stiffness, and a focused neurological exam was also normal. His past medical history was unremarkable. He was treated with intravenous prochlorperazine and subsequently showed some resolution of his headache and nausea. He went to sleep and, because the department was not busy, the physician decided to let him rest until the morning. At changeover rounds at 08:00 hours, EP1 described the case to the oncoming physician (EP2), suggesting that the diagnosis was probably migraine. He put some emphasis on the symptom of teichopsia (scintillating scotoma) as a characteristic feature of migraine. The patient
had remained asleep and the off-going physician believed that the patient could be discharged when he awakened. At 08:30 hours, the patient was awake and was reassessed by EP2. His vital signs were normal: 36.5, 72, 16, 140/80, and a repeat neurological exam was completely normal. There appeared to be some slight neck stiffness, which the patient attributed to having slept in an ED bed. His headache was still present but substantially reduced in intensity. He was a stoical Englishman and extremely apologetic at having inconvenienced the ED with his visit. His wife, equally stoical, had remained at his bedside throughout the night, quietly reading a book. They asked if they could now go home. EP2 did not feel comfortable discharging the patient but was not sure why. He was impressed with the patient’s stoicism and worried that he may have been downplaying his symptoms. He was not reassured by the relief the patient had experienced with prochlorperazine because such treatments may equally well relieve non-benign headaches. He called a neurologist colleague and asked if teichopsia was a pathognomonic migraine symptom or might be explained by something else. The neurologist thought the visual auras were most likely due to migraine but added, helpfully, that any intracranial vascular problem might cause migraine-like symptoms. EP2 reassured the patient and his wife that, in all probability, the headache was benign but that he would like to do some further investigations to be sure. He did not have a computed tomography (CT) scanner at his hospital, so he transferred the patient to a tertiary care hospital for further assessment. A CT scan was done, which revealed a subarachnoid hemorrhage (SAH), and the patient was transferred to neurosurgery. He had surgical repair of his aneurysm that day. The patient’s wife was later told that any further delay might have proved fatal. She subsequently contacted EP2 and thanked him for his care.
Commentary
Primary SAH results from the rupture of a blood vessel into the subarachnoid space. Eighty percent of cases are due to ruptured saccular aneurysms. Progressive weakness of the vessel explains the majority of cases, and it is common in the age range 40–60 years.1 Arteriosclerosis and hypertension may be contributing factors in older patients. Approximately 1–4% of all patients presenting to the ED with headache have SAH. It is more common in women, but men predominate in the younger than 40 years age group. Prior to rupture, there may be a history of prodromal pain in or around one eye and/or visual symptoms arising from pressure of the aneurysm on the third, fourth, fifth, or sixth cranial nerves. Pressure on the optic tract or chiasm may produce specific field deficits, although visual auras are not typically reported. The prototypical headache of SAH has been described as “thunderclap,” due to its abrupt onset, which may be initially severe, constant and unremitting, and typically nuchal or occipital. A history of exertion in the 2-hour period prior to headache onset is reported in approximately 20% of patients. There may be a period of altered level of consciousness, syncope, nausea and vomiting, dizziness, and abnormalities of pulse rate and respiratory rate. Nuchal stiffness and positive Kernig and Brudzinski signs may be evident, but focal neurologic
The English Patient | 57
findings are absent in approximately 75% of cases. The classification scheme of Hunt and Hess grades SAH from I (asymptomatic or minimal headache with mild nuchal stiffness) to V (coma, decerebrate, and moribund).2 The prehospital mortality rate is estimated to be approximately 26%, but it is probably higher because, unless the deceased received a full autopsy, medical examiners and coroners will likely attribute sudden death to more common causes, such as a pre-existing cardiac condition. In the discussion that followed this case, EP1 clearly recalled considering SAH in his differential diagnosis and was able to identify a number of reasons why he did not pursue the possibility at the outset:
1. He considered the presentation atypical for SAH. There were not enough distinctive features to convince him it was a definite possibility. This and the relatively low probability of SAH (playing the odds) combined to make him think the diagnosis unlikely.
2. He recalled feeling pushed toward the diagnosis of migraine by the patient’s description of his visual symptoms (representativeness). Teichopsia was a very salient feature of this patient’s presentation and is not one of the visual symptoms typically associated with SAH. In fact, it is frequently, near-pathognomonically, representative of migraine. The term “teichopsia” was originally used by the English physician Hubert Airy in 1857 to replace the terms “hemiopia” and “hemiopsia.”3 It literally translates from the Greek teichos (“wall”) and opsis (“vision”) as “town wall,” or fortification, describing its battlement appearance (Figure 7.1). Although such visual auras are not typical of SAH, migraine nevertheless is recognized as a mimic of SAH.1
3. The ED was in a community hospital that did not have a CT scanner. On previous occasions when this physician had requested a CT scan from another hospital in the early hours of the morning, he invariably met with resistance from the gatekeeping
FIGURE 7.1 Bourtange Fort in the Netherlands near the German border, built in 1593 by William the Silent. Source: Image from Wikipedia Commons (https://ru.wikipedia.org/wiki/%D0%A4%D0%BE%D1%80%D1%82_%D0%91%D0%B0%D1%83%D1%80%D1%82%D0%B0%D0%BD%D0%B3%D0%B5#/media/Файл:Fortbourtange.jpg).
radiologists. Aside from being woken up at unreasonable hours, radiologists’ demeanor is not improved by any degree of vagueness in the clinical presentation that is relayed to them. They do not take kindly to clinical hunches, “just in case” scenarios, or other equivocations. Zebra retreat may result (see Box 13.1).
4. He knew that if the CT scan was normal, the patient would be returned to his hospital to have a lumbar puncture performed. He admitted that he had performed few of these and was uncomfortable with the procedure.
5. The department was extremely busy. He described feeling overwhelmed with some complex cases and was working alone (resource limitations). Finally, the patient was stable and had shown no signs of deterioration since his arrival. In fact, he had appeared to improve with treatment (confirmation bias).
Overall, the physician’s judgment was reasonable but wrong. Although none of these reasons might be sufficient by themselves to deter the physician from pursuing an admittedly remote diagnosis, in the early hours of the morning he retreated from it under the potentiating influence of a combination of factors, including sleep deprivation and fatigue (see Figure 12.2). There are several reasons why unlikely diagnoses are not pursued. Presentations of disease, illness, or injury to the emergency department fall onto a manifest continuum (Figure 7.2).4 At one end, the problems are often visually evident and highly manifest (abrasions, lacerations, dislocations, and foreign bodies) and the signal:noise ratio is high. Prototypical presentations are also highly manifest because they present with classic signs and symptoms with which most clinicians are familiar. There is little ambiguity in the presentation of these problems, and they are often readily diagnosable.
At the other end of the continuum, however, the origin of the presenting complaint may be obscure and not at all manifest (abdominal pain, headache, chest discomfort, dizziness, and weakness) and the signal:noise ratio is low. Atypical presentations are often highly ambiguous and associated with high levels of uncertainty. The relationship between diagnostic performance and signal:noise ratios is shown in Figure 7.3 in the context of signal detection theory.5 In condition A, there is minimal overlap between the signal and the noise; the signal is therefore highly manifest, and
FIGURE 7.2 Manifest continuum. At the high signal:noise end: abrasions, burns, lacerations, dislocations, foreign bodies, and prototypical presentations; at the low signal:noise end: headache, weakness, syncope, abdominal pain, chest discomfort, and atypical presentations.
FIGURE 7.3 Diagnostic performance under different signal:noise ratios. Distributions of non-diseased cases (noise) and diseased cases (signal) are shown with the corresponding receiver operating characteristic (ROC) curves: (a) condition A, high diagnostic performance; (b) condition B, moderate diagnostic performance; (c) condition C, poor diagnostic performance. Source: From Croskerry et al.5
the receiver operating characteristic (ROC) curve shows a high level of diagnostic performance. In condition B, there is more overlap of the signal by noise, and the caliber of diagnostic performance declines. In condition C, in which there is complete overlap, the degree of manifestness is virtually nonexistent and the performance no better than chance. When the differential diagnosis is generated, the less manifest the illness, the wider the differential, the greater the uncertainty, and the more difficult the decision-making process. Inevitably, when differentials widen, they are more likely to include some rare and esoteric disorders. These are referred to as zebras, a term in widespread colloquial use in the ED. The New Dictionary of American Slang defines the medical use of the term zebra as an “unlikely, arcane or obscure diagnosis.” For a variety of reasons, clinicians may be reluctant to make a rare or esoteric diagnosis, referred to as making a zebra retreat (see Box 13.1). There is also a particular problem when what appear to be pathognomonic symptoms are actually mimicking another disease. In the present case, teichopsia for most clinicians signifies migraine and was probably the principal reason why EP1 did not consider SAH more likely. Furthermore, although extremely unlikely, it is also conceivable that patients with migraine can have a concomitant SAH. The annual estimate of SAH from a ruptured aneurysm in North America is approximately 1 in 8,000 or less than 0.013%. However, its reputation makes it a highly available diagnosis in the ED, in the sense that ED physicians can readily bring it to mind. But as noted
previously, of all headaches that present to the ED, it will occur in less than 4%, and an ED physician may go a long time before seeing one. Thus, its incidence qualifies it as a zebra. An integral component of clinical decision making is the likelihood or probability of a diagnosis, and emergency physicians should be acquainted with the base rate of a particular disease. Several factors influence the physician’s estimate of base rate: the physician’s knowledge of the disease, the experience they have had with it directly or indirectly (through colleagues or morbidity and mortality rounds), the particular time period in which the disease occurs, the geographical area where the physician is practicing, and cognitive phenomena such as availability and representativeness. Furthermore, emergency physicians often follow a ROWS (rule out worst-case scenario) strategy, which means that even though the base rate for a particular disease is low, it will still be included on the differential because many follow the maxim that “the buck stops in the ED”—that is, if a significant finding is not made at an ED visit, subsequent caregivers might believe that there is nothing significant to find. The availability heuristic is one of the three main classes of heuristic originally described in a seminal paper by Tversky and Kahneman in 1974.6 It refers to the availability, immediacy, or ease of recall of a particular concept, image, or fact to the decision maker’s mind. As James Reason noted, things are judged more frequent the more readily they spring to mind.7 Certainly, a migraine diagnosis would have been more available to EP1 than an SAH. Availability can distort estimates of base rate and lead to base rate neglect. For example, if a physician is exposed to a case involving a dissecting abdominal aneurysm, it becomes more available to the physician in terms of recent memory.
This may lead to a greater tendency to look for and overdiagnose it in the period following the exposure to it. Exposure here could include direct clinical experience, hearing of a case from a colleague or at rounds, reading of a case in a journal, or other means. The result is that the physician’s estimate of the probability of that disease becomes inflated for that period. The converse is also true (out of sight, out of mind).7 If a physician’s exposure to a particular disease is reduced, it becomes less available and there would be a corresponding tendency to underdiagnose it for a period until something changes its relative availability. Availability rises and falls depending on exposure. A number of factors may influence availability and, therefore, estimates of base rate. Some adaptive value lies in emergency physicians overestimating the true base rate of disease under certain conditions. This is inherent in the ROWS strategy. Many emergency physicians adopt this approach to err on the side of caution so that a worst case is never missed. Every wrist injury is a scaphoid fracture until proved otherwise (UPO), every chest discomfort a cardiac syndrome UPO, and no headache is benign UPO. A physician’s reputation is established not so much by the number of acute myocardial infarcts the physician has correctly diagnosed as by the number the physician has not missed, an exception to Francis Bacon’s maxim that we are more likely to be excited by affirmatives than negatives. Thus, it behooves all emergency physicians to cultivate a degree of overcautiousness. The second major heuristic in this case is representativeness, the overwhelming tendency to match patterns that we see to templates that we have stored in memory (see Case
18). Thus, ducks should come to mind when one hears duck-like creatures making duck-like sounds, and horses should come to mind when one hears hoofbeats. Teichopsia is highly representative of migraine so that diagnosis should prevail unless there is good reason for thinking otherwise. However, hoofbeats in the ED may sometimes signal the arrival of other ungulates, and emergency physicians have to be aware of, and have strategies for, dealing with the zebras noted previously. A good approach is to work through the differential diagnosis, taking account of respective base rates, and evidentially eliminating the more common possibilities. When the zebra is reached, its base rate and the pertinent evidence need objective evaluation to determine if it is a rational contender. It must be given a fair assessment because we know that zebras are out there. Some clinicians have a tendency, at times, to dwell a little too much on zebras and give them undue emphasis beyond ROWS—perhaps this reflects a degree of clinical fantasizing to make life more interesting, or even serves as a fire drill to rehearse what physicians would do if they did confront a rare disorder. Representativeness can result from building an additive pattern for a particular disease, especially if it has mnemonic value. Thus, we are more likely to diagnose gallbladder disease in a female who is fair, fat, febrile, fecund, and forty. It has been proposed that “familial” replace “forty.”8 Sometimes, physicians respond heuristically to a particular feature without realizing it. For example, physicians are generally aware of the patient’s age in assessing their symptoms, and for many diseases, there is a corresponding awareness that advancing patient age increases their likelihood (cardiovascular events, stroke, and dementia). For such diseases, the underlying relationship is continuous—that is, as age increases, the likelihood of disease increases, although not necessarily in a linear fashion. 
So, for heart disease, most ED physicians would suspect a smooth continuous increase in likelihood of the disease with age, which is what the data show.9 All other things being equal, this should translate into a smooth continuous increase in physicians’ ordering of tests as patients age. However, in practice, a major study showed that physicians adopt a heuristic of “40 and above” when ordering investigations— that is, there is a discrete jump in the ordering of investigations once the age of 40 years is reached.9 Physicians do not seem to be aware of the heuristic, although their behavior clearly indicates they are using it. The use of the heuristic has significant implications for the detection of heart disease around the heuristic threshold (see Case 18). Other clinically significant applications of the representativeness heuristic (e.g., in multiple trauma) have been described.10 In the present case, an error was made by EP1 in not correctly diagnosing the underlying condition. Had he discharged the patient at the end of his shift, it is likely there would have been a fatal outcome. However, the patient was saved by EP2. When the case was reviewed later, EP2 said that he was particularly struck by the patient’s stoicism. According to his wife, he was someone who rarely sought medical help for anything. Yet he had come to the ED in the early hours of the morning. He was the opposite of a “frequent flyer” (see Case 16). This aspect of the case is important and may well have been critical to the outcome. An integral part of patient assessment, in the clinician’s view, is the credibility of the patient: How
BOX 7.1 Probable Biases and Other Error-Producing Conditions
• Playing the odds
• Availability
• Representativeness
• Resource limitations
• Confirmation bias
• Sleep deprivation/fatigue
• Zebra retreat
reliable is their account, how severe is their discomfort, and how intense is their pain? This is important for both diagnosis and management. We sometimes ask patients to rate their pain on an analog scale from 1 to 10, where 1 is no pain and 10 is as bad as it could possibly be. Not infrequently, however, patients will rate their pain a 10 even when in no obvious discomfort, so clinicians may question their credibility and how to rate their rating. In contrast, the patient’s stoical response to his symptoms in the present case was very significant for EP2—he formed an impression from a “negative fact” just as Sherlock Holmes did with the curious incident of the dog in the nighttime (the dog that did not bark when a prize racehorse was being stolen).11 He was also impressed with the neurologist’s statement that “any intracranial vascular problem might cause migraine-like symptoms.” Once he was told that, the patient’s report of teichopsia could no longer be considered exclusively pathognomonic of migraine, and the worst-case scenario had to be excluded. Cognitive biases in this case are summarized in Box 7.1.
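The distorting effect of availability on base rate estimates, discussed above, lends itself to a short Bayes’-rule sketch. The sensitivity, specificity, and prior values below are illustrative assumptions (not figures from the text), chosen only to show how an availability-inflated prior moves the post-test probability of a diagnosis such as SAH:

```python
# Illustrative sketch only: how an availability-inflated prior distorts
# the post-test probability of a diagnosis via Bayes' rule. All numbers
# below are assumptions for demonstration, not data from this case.

def post_test_probability(prior, sensitivity, specificity):
    """P(disease | positive finding) by Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Assumed true base rate of the disease among comparable presentations.
true_prior = 0.01
# Prior inflated by a recent, memorable case (the availability heuristic).
inflated_prior = 0.10

# Hypothetical clinical finding with assumed operating characteristics.
for prior in (true_prior, inflated_prior):
    p = post_test_probability(prior, sensitivity=0.90, specificity=0.80)
    print(f"prior {prior:.0%} -> post-test probability {p:.1%}")
```

The same hypothetical finding shifts the post-test probability only modestly from the assumed true base rate but dramatically from the inflated prior; that gap is the practical footprint of base rate neglect.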
References
1. Kwiatkowski T, Alagappan K. Headache. In: Marx JA, Hockberger RS, Walls RM (Eds.), Rosen’s Emergency Medicine: Concepts and Clinical Practice (7th ed.). Philadelphia, PA: Elsevier Mosby, 2009; 1356–1366.
2. Hunt WE, Hess RM. Surgical risk as related to time of intervention in the repair of intracranial aneurysms. J Neurosurg. 1968; 28(1): 14–20.
3. Eadie MJ. Hubert Airy, contemporary men of science and the migraine aura. J R Coll Physicians Edinb. 2009; 39(3): 263–267.
4. Croskerry P. Medical decision making. In: Ball L, Thompson V (Eds.), International Handbook of Thinking and Reasoning. New York, NY: Taylor & Francis, 2017; 109–129.
5. Croskerry P, Petrie DA, Reilly JB, Tait G. Deciding about fast and slow decisions. Acad Med. 2014; 89(2): 197–200.
6. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974; 185(4157): 1124–1131.
7. Reason J. Human Error. Cambridge, UK: Cambridge University Press, 1990; 38.
8. Bass G, Gilani SN, Walsh TN. Validating the 5Fs mnemonic for cholelithiasis: Time to include family history. Postgrad Med J. 2013; 89(1057): 638–641.
9. Coussens S. Behaving discretely: Heuristic thinking in the emergency department. Working paper. Cambridge, MA: Harvard Kennedy School, 2017. Available at scholar.harvard.edu/files/coussens/files/stephen_coussens_JMP.pdf. Accessed January 31, 2019.
10. Kulkarni SS, Dewitt B, Fischhoff B, et al. Defining the representativeness heuristic in trauma triage: A retrospective observational cohort study. PLoS One. 2019; 14(2): e0212201. https://doi.org/10.1371/journal.pone.0212201
11. Conan Doyle A. The Adventure of Silver Blaze: The Memoirs of Sherlock Holmes. New York, NY: Oxford University Press, 1993. (Original work published 1893)
Case 8
Lazarus Redux
A 65-year-old male presented to the emergency department (ED) of a teaching hospital late in the evening with left-sided weakness that had started approximately 2 hours earlier. He also complained of mild pan-cranial headache and nausea, both of which started at the same time as the weakness. He had no other neurological symptoms, and he had never had similar symptoms in the past. His past history included lung cancer for which he was currently undergoing chemotherapy, adult-onset diabetes, and hypertension. He had no allergies. Medications included dexamethasone, ondansetron, metoprolol, and ranitidine. His vitals were: 37, 130, 16, 170/96, 95%, and Glasgow Coma Scale 15. His speech was normal, and he had no cranial nerve abnormalities. Chest, heart, and abdomen were unremarkable. Marked weakness and increased reflexes were evident on the left side. Sensory exam was normal. His glucose was 9.6 mmol/L. A computed tomography (CT) scan was completed and appeared normal. A referral was made to neurology, describing the stable condition of the patient and the diagnosis of a non-hemorrhagic cerebrovascular accident. The physician then gave a verbal and a written order for 10 mg Maxeran (metoclopramide) intravenous (IV) for nausea and headache. The physician was called to the patient’s bedside 10 minutes later because the patient had become unresponsive with a respiratory rate of 8. Given the rapid deterioration of the patient, with sudden loss of consciousness and bradycardia, the emergency physician assumed the patient was experiencing “a stroke in evolution” and was exhibiting Cheyne–Stokes respiration—often a terminal pattern of respiration. A discussion followed with his family about the gravity of his apparently deteriorating condition. A decision was made to intubate him, repeat the CT scan, and reassess him at that time.
The physician noticed soon afterwards that the medication vial attached to the patient's IV bag contained not metoclopramide but, rather, midazolam, a benzodiazepine. The two medications are very similar in appearance: Midazolam comes in a brown glass 2-mL vial with an orange cap and a white label; Maxeran (metoclopramide) comes in a brown glass 2-mL vial with a silver cap but has an orange-and-white label (Figure 8.1).

FIGURE 8.1 A vial of midazolam and a vial of Maxeran (metoclopramide) for intravenous use.

The patient was given the benzodiazepine antagonist flumazenil IV, whereupon he immediately regained consciousness and appeared to recover to his previous state. The physician immediately disclosed the error to the family, with a full explanation of events. They were reassured that the patient had probably not suffered any ill effects. One family member became extremely angry but eventually settled with reassurances from the other family members. The neurology resident was forewarned of the error before meeting with the family.
Commentary
This case was originally reported in a series of medication error cases from the ED.1 The medication error may have been a simple action slip or execution failure, caused by fatigue, distraction, or "attentional capture" by something other than the task at hand. In any event, it represented some type of cognitive failure. In the investigation that followed, no specific explanation for the error emerged, although the nurse noted that the department was extremely busy and that she was engaged in a number of other tasks (rapid task switching) while she attempted to follow the medication order (excessive cognitive load) (see Case 29). This is a state of cognitive overloading known to increase the likelihood of error. It might have been a sound-alike error, such that the nurse misheard the order as "midazolam" instead of "Maxeran." However, this is unlikely because she said she believed she was giving Maxeran, not midazolam, and the order was clearly written. The vials of the two medications were color-coded differently and easily distinguishable from each other, but their overall appearance was similar, so this might have been a "look-alike" error. They were also alphabetically
organized, and therefore adjacent to each other in the same drawer, so a spatial discrimination error in selecting the correct vial might have occurred.

Medication errors are abundant. They are the most common error in medicine, although many are inconsequential. Nevertheless, an estimated 7,000–9,000 deaths due to medication errors occur annually in the United States,2 and cognitive failures are responsible for many of them. Prevailing conditions in the emergency medicine setting may predispose to a variety of errors, including medication errors.3 Almost 50% of medication errors occur at the first stage of a six-stage process (Box 8.1). It seems likely that the error in the present case involved the transcription stage, the dispensing stage, or both. The nurse believed it was likely due to the proximity of the two medications to each other in the storage compartment, perhaps facilitated by their like-sounding names and similar appearance.

Being unaware of the medication error, the physician's assumption that the patient was suffering an evolving stroke when his condition suddenly changed was not unreasonable, but it was wrong nevertheless. In many situations, we attempt to attribute causation to events, often assuming that if B follows A, then A must have caused B—that is, a post hoc fallacy. A classic example in medicine was the observation that women receiving combined hormone replacement therapy (HRT) had a lower incidence of coronary heart disease (CHD). This led to an assumption that HRT was protective against CHD. However, randomized controlled clinical trials subsequently showed that HRT actually increased the risk of CHD.
When the original studies were reviewed, it was found that women taking HRT came from higher socioeconomic groups and enjoyed better diets and more exercise.4 The reasoning in the present case runs: the patient has suffered a stroke; sudden deterioration is known to occur with stroke syndromes; therefore, the deterioration is due to the stroke. This logical failure is not infrequent in medicine (for further discussion of cause–effect misunderstandings, see Case 21). As with the HRT example, the actual cause of the deterioration was quite different from that supposed.
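The HRT story can be made concrete with a small simulation. Everything below is hypothetical and illustrative only: the socioeconomic effect sizes, HRT uptake rates, and risk ratio are invented to show the mechanism of confounding, not to model the actual trials.

```python
import random

random.seed(1)
N = 100_000

# Hypothetical parameters (illustrative only, not real epidemiology):
# high socioeconomic status (SES) raises the chance of taking HRT
# and independently lowers baseline CHD risk; HRT itself multiplies
# CHD risk by 1.3 (i.e., it is mildly harmful).
P_HIGH_SES = 0.5
P_HRT = {True: 0.8, False: 0.2}          # P(takes HRT | SES)
BASE_CHD = {True: 0.05, False: 0.20}     # P(CHD | SES, no HRT)
HRT_RISK_RATIO = 1.3

def chd_rate(people):
    return sum(p["chd"] for p in people) / len(people)

people = []
for _ in range(N):
    high_ses = random.random() < P_HIGH_SES
    hrt = random.random() < P_HRT[high_ses]
    risk = BASE_CHD[high_ses] * (HRT_RISK_RATIO if hrt else 1.0)
    people.append({"ses": high_ses, "hrt": hrt,
                   "chd": random.random() < risk})

# Naive observational comparison: HRT users look *protected* ...
hrt_users = [p for p in people if p["hrt"]]
non_users = [p for p in people if not p["hrt"]]
print(f"observed CHD rate: {chd_rate(hrt_users):.3f} (HRT) "
      f"vs {chd_rate(non_users):.3f} (no HRT)")

# ... but within each SES stratum (which is what randomization
# approximates), HRT users have *higher* CHD rates.
for ses in (True, False):
    u = [p for p in hrt_users if p["ses"] == ses]
    n = [p for p in non_users if p["ses"] == ses]
    print(f"high SES={ses}: {chd_rate(u):.3f} (HRT) "
          f"vs {chd_rate(n):.3f} (no HRT)")
```

The naive comparison reproduces the observational finding (apparent protection), while stratifying by the confounder reverses it, just as randomization did in the actual trials.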
BOX 8.1 Six-Stage Process of Medication
Ordering/Prescribing
↓
Documenting
↓
Transcribing
↓
Dispensing
↓
Administering
↓
Monitoring
BOX 8.2 Predisposing Conditions to Medication Error in the ED
• Multiple patients being treated concurrently
• Frequent reliance on verbal orders
• Wide range of drugs in use
• Variety of administration routes
• Wide variety of dangerous drugs
• Time pressures
• Throughput pressures
• Interruptions/distractions
• Tight coupling*
• ED dispensing
• Physician administration of medication
• Team communication problems
• Laboratory errors

*Tight coupling refers to medication administration that is time dependent, has a rapid onset of action, and offers limited opportunity to notice a problem and intervene before it has an effect.
Sources: From Croskerry and Sinclair3 and Peth.6
What was clear was that the patient's level of consciousness had changed, as had his breathing rate. However, this does not necessarily represent true Cheyne–Stokes breathing, because there was no documented dysrhythmia in respiration with associated recurrent apneic spells (pattern mislabeling). These would result from a new instability and disruption of feedback control of the chemical regulation of breathing.5 The observation was only that the respiratory rate had dropped. In this context, this may have been enough for the physician to interpret it as the onset of an agonal pattern (ascertainment bias), and it amounts to confirmation bias. But, in fact, a reduced level of consciousness associated with a slowed respiratory rate is also well explained by sedation. Benzodiazepines may have a profound depressant effect on respiration, which is what happened in this case. It takes some presence of mind when catastrophic events are unfolding to step back and ask, "What else might this be?"

A variety of conditions contribute to medication errors in the ED (Box 8.2). Many of these are due to the unique operating characteristics of EDs. Safer and more effective use of medications may be achieved through automated dispensing systems and related support devices, as well as through the involvement of clinical pharmacists in patient care.7 Probable cognitive biases and other sources of error for this case are listed in Box 8.3.
BOX 8.3 Probable Biases and Other Error-Producing Conditions
• Medication error
• Rapid task switching
• Cognitive overload
• Post hoc fallacy
• Pattern mislabeling
• Ascertainment bias
• Confirmation bias
References
1. Croskerry P, Shapiro M, Campbell S, LeBlanc C, Sinclair D, Wren P, Marcoux M. Profiles in patient safety: Medication errors in the emergency department. Acad Emerg Med. 2004; 11(3): 289–299.
2. Bhimji SS, Scherbak Y. Medication Errors. StatPearls; 2018.
3. Croskerry P, Sinclair D. Emergency medicine—A practice prone to error? CJEM. 2001; 3(4): 271–276.
4. Lawlor DA, Davey Smith G, Ebrahim S. Commentary: The hormone replacement–coronary heart disease conundrum: Is this the death of observational epidemiology? Int J Epidemiol. 2004; 33(3): 464–467.
5. Cherniack NS, Longobardo G, Evangelista CJ. Causes of Cheyne–Stokes respiration. Neurocrit Care. 2005; 3(3): 271–279.
6. Peth HA. Medication errors in the emergency department: A systems approach to minimizing risk. Emerg Med Clin North Am. 2003; 21(1): 141–158.
7. Fairbanks RJ, Rueckmann EA, Kolstee KE, et al. Clinical pharmacists in emergency medicine. In: Henriksen K, Battles JB, Keyes MA, et al. (Eds.), Technology and Medication Safety (Vol. 4). Rockville, MD: Agency for Healthcare Research and Quality, 2008.
Case 9
A Model Pilot
A 43-year-old male presented to the emergency department (ED) complaining of blurred vision. His appearance and demeanor were normal, and he was in no apparent distress other than looking slightly apprehensive. His vital signs at triage were: temperature 36.6°C, heart rate 84, respiratory rate 22, blood pressure 140/85. He was placed in the ears, nose, and throat (ENT) room. According to the emergency physician's (EP1) notes, the patient had experienced blurring of vision in both eyes during the past few days. There were no significant findings on physical examination. Specifically, his eye examination was noted to be normal. On the basis of his increased respiratory rate and apprehension, a diagnosis of anxiety state was made, and he was discharged home to be followed by his family physician.

The following day, he presented again to the same ED with the same complaint. He was again triaged to the ENT room by a different triage nurse, who noted that he had been seen the previous day with an eye problem and diagnosed as having an anxiety disorder. Again, his vital signs were normal other than a tachypnea of 22/min. The emergency physician (EP2) at this visit found a healthy-looking male in no distress, seated in the examination chair. The patient reported that the visual blurring had begun 3 days earlier. There was no history of discharge from the eyes, no history of exposure to any noxious material in the eyes, and no allergies. He had not previously experienced any eye problems and did not wear eyeglasses. Both eyes appeared normal on examination; pupils were equal and reactive to light, and a slit-lamp examination was normal. Visual acuity was measured at 20/60 in both eyes.

Although the patient appeared slightly anxious, the persistent tachypnea puzzled the physician, and he took a detailed respiratory and cardiovascular history, which proved to be noncontributory. He then performed a chest and cardiovascular examination.
Air entry was equal on both sides with no adventitious sounds, and cardiovascular examination was normal. Nevertheless, the physician was not comfortable and ordered a chest X-ray, electrocardiogram (ECG), arterial blood gases (ABGs), and routine blood work. The chest X-ray and ECG were both normal, but ABGs showed the following: pH 7.23, pCO2 26, pO2 95 and bicarbonate 8. Base excess was –14.
The remainder of the blood work showed an anion gap of 26 and a markedly elevated serum creatinine of 650 µmol/L. The physician returned to the patient and attempted to elicit any past medical history or ingestion of substances that might have produced a wide anion gap metabolic acidosis. He specifically asked the patient if he had ever had any kidney problems, diabetes, or a drinking problem, and whether he had consumed any methanol- or ethylene glycol-containing products or any iron or salicylates. The patient admitted to taking the "occasional" drink but strenuously denied drinking methanol-containing products or taking any other substances. However, on further questioning, he stated that his hobby was model airplanes and that, in the process of siphoning some model aircraft fuel several days earlier, he had inadvertently taken some into his mouth and swallowed it. Further blood work was done, including methanol, ethylene glycol, ethanol, and formate levels, and an infusion of intravenous ethanol was started. He was subsequently transferred to the intensive care unit.
Commentary
Model airplane fuel may contain up to 77% methanol (Figure 9.1). As little as 15 mL of a 40% solution has been reported to result in death. This patient's visual disturbances and apprehension appeared to be manifestations of methanol poisoning. In view of the highly elevated serum creatinine, uremia was initially considered as a possible cause of the wide anion gap metabolic acidosis. However, toxicology references revealed that model airplane fuel contains significant amounts of nitromethane, which interferes with the Jaffe reaction in the assay of serum creatinine, falsely elevating it. The common causes of an anion gap metabolic acidosis are ketoacidosis, lactic acidosis, renal failure, and ingestion of toxins.
FIGURE 9.1 Model airplanes powered by a fuel that contains 70% methanol.
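The acid–base arithmetic in this case follows two standard bedside formulas: the anion gap and Winters' formula for expected respiratory compensation. In the sketch below, the bicarbonate of 8 is taken from the case; the sodium and chloride values are hypothetical, chosen only to reproduce the reported anion gap of 26.

```python
# Anion gap: AG = Na+ - (Cl- + HCO3-); normally about 8-12 mmol/L.
def anion_gap(na, cl, hco3):
    return na - (cl + hco3)

# Winters' formula: expected pCO2 (mmHg) for a compensated
# metabolic acidosis = 1.5 * HCO3 + 8, plus or minus 2.
def winters_expected_pco2(hco3):
    return 1.5 * hco3 + 8

hco3 = 8            # measured bicarbonate from the case
na, cl = 140, 106   # hypothetical electrolytes consistent with AG = 26

print(anion_gap(na, cl, hco3))       # 26 -> wide anion gap
print(winters_expected_pco2(hco3))   # 20.0 (expected range ~18-22)
```

For a bicarbonate of 8, Winters' formula predicts a compensatory pCO2 of roughly 18–22 mmHg; the measured pCO2 of 26 is in that vicinity, consistent with the tachypnea representing respiratory compensation for a metabolic acidosis rather than a primary respiratory problem.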
There are several mnemonics for the variety of factors that can cause an anion gap metabolic acidosis; a common one is CAT MUDPILES:

C Congestive heart failure, carbon monoxide, cyanide
A Aminoglycosides
T Theophylline, toluene
M Methanol
U Uremia
D Diabetic ketoacidosis, alcoholic ketoacidosis, starvation ketoacidosis
P Paracetamol (acetaminophen), phenformin, paraldehyde
I Iron, isoniazid, inborn errors of metabolism
L Lactic acidosis
E Ethylene glycol
S Salicylates (acetylsalicylic acid, aspirin)

Triage cueing, made at each ED visit, is prominent in this case. Both triage nurses anchored on the symptom of blurred vision in an otherwise normal patient, and the patient was sent to the ENT room, which is dedicated to problems associated with the ears, nose, and throat. In most EDs, it contains a fully adjustable dental chair, slit lamp, eye chart, and an ENT medication and instrument cart. The assumption made was that blurred vision meant there was an eye problem. Subsequently, a physician going to see a patient in this room will assume the patient has an ENT problem, resulting in ascertainment bias—the physician sees what they expect to see, and the patient gets a complete eye exam for their visual symptoms. However, the symptom of blurred vision can arise from a variety of illnesses, several of which are not associated with the peripheral visual system.

To place a patient with blurred vision in a room dedicated to ENT examinations is an example of Sutton's law—going for where the money is. The law is named after the famous Brooklyn bank robber Willie Sutton (Figure 9.2). When asked by a judge why he kept robbing banks, he is alleged to have said, "That's where the money is." Another way of saying this is the well-known medical adage: When you hear hoofbeats, think horses, not zebras.
Errors associated with the application of Sutton’s law are referred to as Sutton’s slip.1 An important error in this case is representativeness, which may arise through the use of the representativeness heuristic. In everyday life, as well as in medicine, we tend to intuitively assess things in terms of how similar or how representative their features are of a particular class or parent population. If there is a reasonably good match, there will be a tendency to treat them as members of that class. Often, this pattern matching is achieved with little or no conscious effort, accomplished in Type 1 decision making (see Figure I.1). Where the pattern is highly manifest, it works very well (see Figure 7.2). Like all heuristics, the strategy achieves economy of effort, but without appropriate calibration it will fail when an illness presents atypically—that is, as unrepresentative of the illness. Not surprisingly, the representativeness heuristic is used widely in medicine.1–3
FIGURE 9.2 Willie Sutton shown cheerfully displaying the bank charge card that he was given by the New Britain Bank and Trust Company in Connecticut. Judicious use of the card would presumably obviate the need for robbing the bank.
This patient did not look like the type of patient whom a triage nurse or emergency physician would associate with methanol abuse. Drinking methanol-containing substances such as gasoline, antifreeze, or windshield wiper antifreeze is a desperate and sometimes last means of intoxication for those with alcohol dependency. The prototypical patient is usually unkempt, with poor hygiene, and often presents in a derelict state. If such a patient presented at the ED with symptoms of blurred vision, many physicians would probably make the connection with methanol fairly quickly.

Symptoms of toxicity may be delayed by up to 72 hours following ingestion, especially if there has been coingestion of alcohol. Toxicity typically manifests as central nervous system depression, gastrointestinal complaints of pain, nausea and vomiting, visual disturbances, and malaise. However, gastrointestinal symptoms may be absent in some cases of significant ingestion, and visual disturbances may be delayed by up to 36 hours following ingestion. Neither is the discrete complaint of blurred vision particularly representative of this condition. The ophthalmology examination might reveal significant abnormalities (nystagmus, fixed and dilated pupils, optic disc hyperemia, and retinal edema), but patients complaining of visual problems may have a normal eye examination, even in fatal cases.

The cumulative nonrepresentativeness of this particular case (atypical general appearance, absence of a significant history of alcoholism or substance abuse, absence of the constellation of cardinal symptoms, and absence of ophthalmology findings) increased the likelihood of misdiagnosis at the first visit. Thus, in terms of general appearance, symptomatology, and signs, this patient was not representative of someone who might have ingested methanol. In contrast, he appeared to be a model patient: oriented, clean, appropriately dressed, civil, and compliant.
As noted previously, the triage nurse at the first visit focused on the discrete symptom presented to her. The complaint of blurred vision in a patient of normal appearance and in no distress results in a low category of triage and placement in a non-urgent and treatment-specific
area of the department. Although this patient received a low-priority triage assessment, he proved to be the sickest patient in the department that day.

At the first visit, the emergency physician found a fully dressed patient, seated upright in an examination chair, in a room dedicated to investigating ENT complaints. He conducted an eye examination in accordance with what was expected, but he found nothing. By failing to account for the true basis of the patient's symptoms and attributing them to a psychiatric condition, the physician committed one of the psych-out errors: misdiagnosing a medical condition as a psychiatric one (see Box 5.2). Anxiety disorder is a diagnosis of exclusion, and the physician had not done enough to exclude other medical causes of this patient's symptoms. The diagnostic process was therefore closed prematurely and incorrectly. It could also be classified as a search satisficing error.

At the second visit, the patient was again directed to the ENT room. In her notes, the triage nurse recorded the visit the previous day and that the patient had been diagnosed as having an anxiety disorder. However, she persisted with the original belief of her colleague that the patient had an eye condition. Perhaps she believed that it was missed the first time around and/or that it had now progressed to a point where it might be diagnosed. At the outset, the second EP went down the same road as the first and conducted an eye examination.
In his discussion of the case at morbidity and mortality rounds, he recalled that after this step, which included a slit-lamp examination, and having found nothing other than decreased visual acuity bilaterally, he consciously asked himself if he could be missing something—a cognitive forcing strategy.4 He recalled feeling uncomfortable with the diagnosis of anxiety state and reminded himself: "Beware the patient who returns to the ED—they are either wasting our time or we have missed something." This strategy of stopping and reassessing what one is thinking falls into the general category of metacognition, essentially thinking about what you are thinking.

Subsequently, in retracing his steps and re-examining the two visits, his attention was drawn to the subtle but persistent tachypnea, a respiratory compensation for the patient's toxic metabolic acidosis. The pursuit of this abnormal vital sign probably saved the patient from significant morbidity and possibly death. A summary of the probable cognitive biases involved in this case is given in Box 9.1.
BOX 9.1 Probable Biases and Other Error-Producing Conditions
• Triage cueing
• Ascertainment bias
• Representativeness bias
• Search satisficing
• Premature diagnostic closure
• Psych-out error
References
1. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003; 78(8): 775–780.
2. Kulkarni SS, Dewitt B, Fischhoff B, et al. Defining the representativeness heuristic in trauma triage: A retrospective observational cohort study. PLoS One. 2019; 14(2): e0212201. https://doi.org/10.1371/journal.pone.0212201
3. Coussens S. Behaving discretely: Heuristic thinking in the emergency department. Working paper. Cambridge, MA: Harvard Kennedy School, 2017. Available at scholar.harvard.edu/files/coussens/files/stephen_coussens_JMP.pdf. Accessed August 31, 2018.
4. Croskerry P. Cognitive forcing strategies in clinical decision making. Ann Emerg Med. 2003; 41(1): 110–120.
Case 10
A Rash Diagnosis
A 16-year-old female was brought to the emergency department (ED) by her parents in the mid-afternoon. She was triaged with lower abdominal pain, diarrhea, and vomiting. Her vital signs were: temperature 37.4°C, heart rate 110, respiratory rate 18, blood pressure 112/62, and an O2 saturation of 99%.

The emergency physician picked up her chart and went to her room. He was in the habit of deliberately not reading the triage note before seeing patients, to avoid the possibility of being influenced by what the note said. Usually, he would read the triage and nurses' notes after seeing the patient. When he entered the room, the patient was sitting quietly and did not appear to be in any distress. He was immediately struck by a bilateral rash on her face and neck. She said that the rash had appeared within the past hour or two. She had vomited seven times that day and had had a headache for the past 24 hours.

The physician proceeded to examine the patient. Her neurological examination was normal, and her head and neck examination was normal other than the rash. The rash was prominent, bilateral on the face, and extended into her anterior neck. It did not blanch under pressure from a glass that the physician pressed on the skin, and therefore it appeared to be petechial. It was similar to the one shown in Figure 10.1 but more prominent. Chest, cardiovascular examination, and extremities were all normal. The patient's past medical history included childhood asthma and several concussions but was otherwise unremarkable.

The physician had seen a similar rash before, when he was a medical student on elective. That patient, a young male, had meningococcal meningitis with disseminated intravascular coagulation and died within hours. He remembered being told that time was of the absolute essence and that early antibiotics were critical to avoid a fatal outcome. He ordered blood work and immediately called the infectious disease service. He was advised that the physician on call was tied up with another patient and to call back later.
Believing that he had no time to lose, he quickly explained the situation to the patient and parents and ordered blood cultures, ceftriaxone 2 g intravenous (IV), vancomycin 1 g IV, and dexamethasone 10 mg IV. While these were being prepared, he quickly performed a lumbar puncture (LP) on the patient. The LP was atraumatic, and the fluid was clear and colorless.
FIGURE 10.1 Post-vomiting petechiae on face (Valsalva purpura). Source: Image courtesy of Dr. Art Papier at VisualDx (http://www.visualdx.com).
The cerebrospinal fluid had to be sent to another laboratory for analysis, and results were usually not available for 1 or 2 hours. The first emergency physician's (EP1) shift had now ended, and with the blood work and LP results pending, the patient was transferred to an oncoming physician (EP2), who was similarly impressed with the rash and who had not seen meningococcemia previously.

The following day, at the start of his next shift, EP1 enquired what had happened with the patient. The charge nurse said she had been referred to the surgery service. Apparently, the LP results were normal. Her blood work showed an elevated white cell count of 16.3 × 10⁹/L with 93% neutrophils, but electrolytes and renal function were normal. Urinalysis was normal other than showing mild hematuria. Pregnancy test was negative. When EP2 had reassessed the patient, she appeared to be well other than some right lower quadrant (RLQ) pain, which she said was settling. With meningitis no longer on the differential, she was transferred to the surgery service with fever, nausea, vomiting, and RLQ pain as probable appendicitis.

On examination by the surgery resident, she was tender in both lower quadrants, but more so on the right than the left. Her ultrasound was considered normal, and the appendix was not visualized. A computed tomography scan was not performed because of her age and the risks associated with radiation exposure. Nevertheless, the presumed diagnosis remained appendicitis, and after further discussion with the patient and parents, she was taken to the operating room and underwent a laparoscopic appendectomy. The appendix appeared normal, and no other pathology was seen. A final diagnosis for her abdominal pain was not made. Her blood cultures later came back normal. At a follow-up appointment with surgery a month later, she had entirely recovered and was asymptomatic and in good health.
Commentary
Petechiae are small (