Risk, Disaster, and Vulnerability: An Essay on Humanity and Environmental Catastrophe
S. Ravi Rajan
University of California Press
University of California Press, Oakland, California
© 2023 by S. Ravi Rajan
Cataloging-in-Publication Data is on file at the Library of Congress.
ISBN 978-0-520-39262-5 (cloth : alk. paper)
ISBN 978-0-520-39263-2 (pbk. : alk. paper)
ISBN 978-0-520-39264-9 (ebook)
Manufactured in the United States of America
This book is dedicated to the victims and survivors of Bhopal, Chernobyl, and the many other industrial disasters of the twentieth and twenty-first centuries.
Contents

List of Figures ix
Preface xi
Acknowledgments xiii
1. Setting the Stage 1
2. Risk 8
3. Disaster 49
4. Vulnerability 81
5. Looking Ahead 115
Bibliographic Essays 125
Index 145
Figures

1. The risk paradigm 14
2. A simplified diagram illustrating the risk paradigm 15
3. Risk knowledge, consent 21
4. The condition of post-normal science in risk management 40
Preface
On the night of December 2–3, 1984, a toxic gas leak from a pesticide factory owned by the Indian subsidiary of a US multinational, the Union Carbide Corporation, killed an estimated two to six thousand people in Bhopal, the historic capital city of the state of Madhya Pradesh in central India. Within a few fateful hours on a cold December night, Bhopal became a universal metaphor for catastrophe. An estimated half-million survivors and their progeny continue to suffer from exposure to the gas to this day. I was barely out of my teens when Bhopal happened. Like many, I spent two years volunteering for a nonprofit organization assisting disaster survivors. However, in my subsequent academic career, I largely avoided writing about the disaster. Given that academic artifacts enable professional mobility, I could not escape the feeling that publishing about that horrific event would mean that I would, at least inadvertently, be capitalizing on a human tragedy. I therefore deliberately built my academic research around topics that have been more than an arm's length from Bhopal. The few
articles that I wrote were all but coerced out of me by professional colleagues. Yet the disaster has haunted me almost every single day of my adult life, as it has many people who witnessed it. In response, I served with environmental nonprofit organizations that work to address the broader topics raised by the events of December 1984. I also sought to educate myself about risk, disaster, and vulnerability. I studied the journal literature in esoteric corners of fields initially foreign to me, including risk assessment and management, organizational sociology, disaster studies, and resilience and vulnerability studies, in disciplines ranging from the physical sciences and engineering to anthropology and geography. I realized in the process that the public understanding of the vulnerabilities associated with risky technologies mirrored wider ideological predilections. In contrast, the academic and policy literatures addressed a wide range of topics relating to the broader implications of Bhopal and sparked reflexive thought and action. As a humanist, I felt that these ideas needed to be rendered legible to lay audiences so that they could be understood, debated, and, where appropriate, critiqued. It is with these thoughts that I decided to write this book, which, in essence, aims to explain to humanists, generalist policymakers, and members of the public the philosophical implications that underlie the vast, complex, and disparate academic writings on the societal impact of environmental risk, disaster, and vulnerability. I hope that the book serves as a point of departure for readers coming to these topics with fresh eyes, and that readers engage and dispute the ideas presented in a manner that enables a more robust and critical public discourse on topics of global significance.

Santa Cruz
October 2022
Acknowledgments
This book owes a great deal to several influences. At the very outset, I am indebted to some extraordinary scholars and activists, including Dunu Roy, Sagar Dhara, Imrana Quadeer, Praful Bidwai, Dilip Simeon, Shekhar Singh, Jayanta Bandhopadhay, Sunny Narang, Madhulika Banerji, Rohan D'Souza, Archana Prasad, Shiv Viswanathan, Ashis Nandy, Veena Das, and JPS Uberoi, who in different ways helped me to make some sense of the world I found myself in during that dystopian year 1984, when the Bhopal gas disaster occurred alongside other tragic events in India, Sri Lanka, and elsewhere. I have also been greatly fortunate to have had some amazing teachers as a master's, doctoral, and postdoctoral student at Delhi, Oxford, Berkeley, and Cornell Universities, respectively, and at the Max-Planck-Institut für Wissenschaftsgeschichte. Among them are R. K. Gupta, Robert Fox, Anna Guagnini, John Darwin, Barbara Harriss-White, Sheila Jasanoff, Peter Taylor, Michael Watts, Anthony Oliver Smith, Susanna Hoffman, Richard Norgaard, Kären Wigen,
Michael Lewis, Lise Sedrez, José Augusto Padua, Leena Srivastava, Geoff Bowker, and Susan Leigh Star. These extraordinary scholars impressed me by demonstrating the power and significance of the academic path and trained me in the grammars of the disciplines that shaped my thinking, including analytic philosophy, modern history, social anthropology, geography, political ecology, and science and technology studies. In recent years I have profited a great deal by conversing with some of the key scholars and activists in the field of environmental justice as part of my Liminal Spaces podcast. I am especially indebted to David Pellow, whose works I have greatly profited from reading. I have been particularly influenced by Hermino Martins, who introduced me to Continental European metaphysics, and gently nudged me, with humor and sustenance, to grapple with some eternal questions about ethics, reason, and the importance of the sacred. I am also grateful to the many wonderful scholars whom I cite, who have de facto been teachers even though I have not had the good fortune of knowing them personally. I have also been fortunate to have had several generations of undergraduate, master's, and doctoral students as well as colleagues at the University of California, Santa Cruz; TERI University in New Delhi; and the National University of Singapore. All have listened, sparred, contested, and raised the stakes, and thereby enabled me to master the material at hand and hopefully offer a tangible and useful narrative. More immediately, this book owes a great deal to encouragement and support from Stacy Eisenstark and Naja Pulliam Collins at the University of California Press, the excellent copyediting by Catherine Osborne, and David Peattie. I also acknowledge with great gratitude support from the National Science Foundation, the Aphorism Foundation, and the Reid Hoffman Foundation. I am extremely grateful to the many wonderful nonprofit organizations with which I have been associated, including Kalpavriksh, the Bhopal Group for Information and Action, the Pesticide Action Network of North America, the International
Radio Project, and Greenpeace International. I am indebted to my wonderful friends, especially Reid Hoffman, Michelle Yee, Lorcan Kennan, Bharati Chatturvedi, Ajay Bisaria, and Judith Wilson. Last, but by no means least, I acknowledge, with tremendous gratitude, my immediate family—Radha Rajan, Priya, Om, and our two kitties, Vanilla and Jo, for their love and good cheer.
1 Setting the Stage
Introduction

Over the course of the past few decades, there has been sustained reflection about environmental risks, disasters, and vulnerability. Some commentators have remarked that humanity finds itself in a challenging new epoch called the "technocene," characterized by complex technologies with accompanying hazards that can potentially harm human societies and their living environments on historically unprecedented scales. The dystopian threats of the technocene have precipitated a host of crucial questions. Just how safe is humanity in a world of toxic chemicals and industrial installations with destructive potential? To what extent is it feasible to contain chemical, nuclear, and other pollutants? Is it at all possible to prevent runaway disasters in highly complex industrial technoscapes? Historically, such questions have been addressed from the vantage points of specific intellectual and ideological traditions. There have been religious perspectives ranging from fatalism to providence; cultural views that span denial of harm to extreme fear; and
political doctrines that span caution and precaution, on one end of the spectrum, and a Panglossian risk-taking attitude, on the other. However, recognizing that risk, disaster, and vulnerability associated with industries such as chemical and nuclear are novel, historically unprecedented, and anthropogenic in origin, scholars and thought leaders have responded with a wide range of new ideas, explanatory frameworks, and calls to action. These include traditional risk analysis, variants of the precautionary principle, industrial sociology, and environmental justice. There have also been many studies of the role of bureaucracies and technocracies, and of the passions, interests, and economic and political entanglements between corporate and governmental entities. In the last few decades, entirely new research traditions have consequently been formulated, adding nuance and, in many cases, reframing questions. Among these are vulnerability theory in anthropology and geography; the sociology of illness and the environment in public health research; organizational theory applied to understanding the ideology of disasters; a multifaceted inquiry called Science and Technology Studies (STS); and a field called development studies which, among other things, explores the tradeoffs and collateral damage related to industrial progress. There have also been vigorous debates about the relationship between the technocene and politics; reflexivity in civic thought and political discourse; science, technology, and democracy; the role of the media; and big data and the manipulation of thought. Underlying all these concerns are questions about the politics and political economies of risk acceptance and propagation; the role of economic systems; and human rights and accountability.
Points of Departure

These intellectual adventures have, however, tended to be enmeshed in disciplinary genres, and as in the classic fable of the blind person
and the elephant, most scholarly communities are captivated by one or another feature or theme. In the process, they have often ignored the synergies and complexities that can be explored through broader transdisciplinary conversations, leaving room for creative osmosis. Moreover, their jargon renders them inaccessible to a general audience, thereby shutting the public out of an important debate about the choices that face them. The purpose of this book is to address these gaps. By elucidating and synthesizing disparate literatures and points of view, it enables scholars and citizens not versed in the extant literature on risk, disaster, and vulnerability to understand the social and theoretical import of these works, and their implications for global and planetary ideologies, institutions, and arrangements. The sources for this book include documents produced by governmental regulatory agencies, nonprofit advocacy organizations, investigative journalists, and the works of academics from a wide variety of disciplines and institutions, including the physical and biological sciences, engineering, public policy, and the social sciences. I have also read many philosophical works that engage, critique, and reflect on some of these works, and on the wider human condition in the technocene. Moreover, I have, during the past decade and a half, discussed some of the central ideas with scholars and practitioners in these fields. The book you are reading now therefore aims to serve as an intellectual witness to some of the big ideas and debates, an attempt to organize historically disparate literatures in a dialogue of ideas, render technical discussions in simple language, and thereby enable ordinary citizens to understand and reflect on what might otherwise seem esoteric concepts. I draw on three broad genres of research and practice, each speaking to one theoretical term in the title of this essay. The first of these stems from a simply phrased question—how safe is safe enough? This question springs from the recognition that industrial toxins in the environment have, in one form or another, wreaked havoc on the lives and livelihoods of people all over the world. For example, air and water pollution have had significant public health
consequences. There have been dramatic instances when sections of rivers have literally caught fire. Many pollutants bioaccumulate, with traces of chemical toxins found far away from where particular polluting industries are physically located. For example, freshwater lakes have seen aquatic ecosystems devastated due to pollution from distant sources, and traces of toxic chemicals have been found in mountains, rivers, oceans, and even in the ice at the poles. The traditions of research and analysis that have addressed the problem of toxins and pollution have sought answers to the question of the human and environmental health implications of the increased presence of chemical toxins in the environment. More recently, uncomfortable questions about the distributive effects of environmental risks, and especially the question of the relationship between pollution, poverty, and social justice have also been raised, giving birth to a new field that calls itself environmental justice. The second research tradition addressed in this book is concerned with the idea that technology can go out of control and adversely damage human communities, societies, and even nation states. The concept picked up steam in the aftermath of the Bhopal and Chernobyl disasters. Although iconic, Bhopal, Chernobyl, and more recently Fukushima are only a few lowlights of a sequence of industrial disasters the world over. Some are relatively small in scale, killing a few people and maiming others, but others, such as disasters at chemical plants in Flixborough, United Kingdom, and Seveso, Italy, to take but a couple of examples, have had significant impacts. There have also been planetary consequences due to the release of toxic chemicals, as evidenced by phenomena such as acid rain. The third thread of this book seeks to map and understand the social, cultural, economic, and political vulnerabilities associated with the chemical industry, and more broadly, high-risk technological systems. It raises the question of who is most vulnerable and why. It seeks to understand the political economies that define vulnerabilities, and specifically, the roles played by chemical and fossil fuel corporations and their enablers, and the media and public relations
industries. Moreover, looking into the future, these genres of research ask a crucial series of questions relating to the adequacy of expertise and infrastructure to mitigate deleterious impacts when and where they occur. They also compel us to ponder the policy, public, and civic discourses about the meaning of progress, development, and the teleology of the general good, and consider the implications of the technocene for democracy, rights, and accountability.
The Structure of the Book

The purpose of this book is to explicate the humanistic import of the literatures on risk, disaster, and vulnerability for an audience consisting of people who are not experts in these domains. Its best methodological anchor is therefore the academic subfield called environmental humanities. This emerging genre encompasses literature, intellectual history, philosophy, and history, among others. Like many scholars in this field, I have found the work of Raymond Williams to be very useful. Williams suggested that the world of thought can be understood by interrogating its prominent words and exploring their interlocking, contrarian, contradictory, and contested meanings. Accordingly, I have organized the book as a series of short essays, each exploring one keyword that coalesces families of terms occurring in the underlying literatures. Keywords, in this analysis and methodology, serve literally as narrow keys. Such devices, as many generations of humanistic scholars have learned to appreciate, can open wide doors, helping frame, discuss, explicate, and analyze the stakes in the various debates and literatures described. In this book, I use this tool to explicate the big picture, the humanistic import, if you will, of the critical discourses about the origins, causes, and impacts of environmental risks, industrial disasters, and the vulnerabilities to human societies around the world owing to such events. The first of three main parts, entitled "Risk," addresses the social,
cultural, psychological, and political impacts of synthetic toxins in the environment. It explores debates about how industrial pollution can be regulated and prevented, sketching the arc from a belief that risks can be contained within safe thresholds to one that holds that there are no thresholds for true safety and that the only course is precaution. The second part, "Disaster," addresses technological complexity, human organizational capacity, and catastrophic events. Here, the big debate is over the extent to which disasters are inevitable, and whether some novel cultural arrangements within institutions might prevent the worst possible outcomes even in highly risky contexts. The third part, "Vulnerability," examines how environmental risks and disasters create a range of vulnerabilities for the human order. These include situations such as chemical spills and gas leaks, which, in addition to damaging their proximate physical environment, have consequences for entire industries, supply chains, jobs, and complete verticals of the global industrial and financial system. They also include forms of vulnerability that exacerbate inequities within the economy, as happens when exposures to toxins in the environment differentially impact the ability to live and work, often mapping on to preexisting factors of social stratification, such as race, gender, class, ethnicity, and nationality. These insights offer new challenges for negotiating the future and pose ethical dilemmas of a kind that humanity has not encountered before, especially in the arena of acceptable collateral damage and tradeoffs. Finally, a brief concluding chapter, entitled "Looking Ahead," sketches some emerging approaches to understanding risk, disaster, and vulnerability. In writing this book, I have deliberately chosen not to discuss two critical events that loom large in the public consciousness—the COVID-19 pandemic and climate change. The reason is that these topics of planetary import require a different kind of treatment, anchored around global geopolitics. The topical domain for this book is therefore deliberately constrained to address more conventional issues surrounding risk, disaster, and vulnerability, leaving a discussion of planetary-scale risks for later. The length of the book,
likewise, is deliberately chosen to enable a wider audience, ranging from undergraduate classes to professionals with little time at their disposal, who might not have the capacity to read a larger work. Two final methodological points are worth making. First, in keeping with my humanistic goals, I have chosen to privilege storytelling and narrative to describe the central concepts of any discipline or genre. My goal here is to delve into many relevant subfields and bring them into conversation with others, rather than writing from the vantage point of any given discipline, however important or influential they have been. This book is also not about the actual techniques of risk analysis or disaster management. Rather, it is a humanistic exercise in inter- and transdisciplinary synthesis, attempting to extract philosophical insights from otherwise technical discussions. I have chosen, for the bulk of the book, to focus on what is regarded by scholars in risk and disaster studies as canonical. It is these literatures that inform the narratives in the three substantive chapters. In recent times, important works have explored relevant topics such as reflexivity, democracy, infrastructures, justice, and human rights. I dedicate chapter 5 to exploring these frontiers. Second, this book eschews footnotes in favor of bibliographic essays. My goal in these essays is to describe the sources and literatures that form the basis of this book, offer explanations about important assumptions, and provide definitions of some central concepts. By describing my sources, and in many cases, providing exegetical genealogies, I hope to provide systematic guides to further reading. One final comment is worth making. This book engages with publications in academic, public policy, and legal realms primarily from the United States, and to a lesser degree from the United Kingdom and Europe and other contexts where publications are in English and other languages in which I am proficient. Unfortunately, this does mean that many important regulatory and philosophical traditions that are inaccessible to me are not considered or analyzed in this work.
2 Risk
Introduction

The Industrial Revolution promised economic growth, prosperity, and freedom from drudgery. Although there were naysayers from the get-go, writing about the conditions within the so-called satanic mills and other adverse collateral impacts of growth, the overwhelming urge was to see factories and machines as harbingers of progress. Indeed, from the second half of the nineteenth century onward, smokestacks and black soot were seen as signs of prosperity and development across Europe, North America, and many other parts of the world. By the late 1950s and early 1960s, however, mounting evidence of the adverse health and environmental consequences of pollution made some scientists take notice. The evidence they collected was diverse and widespread, from rivers on fire to pesticides such as DDT that were killing birds and animals and, eventually, poisoning human beings, to poor air quality in cities around the world. By the
late 1960s, considerable public concern had mounted, and charismatic activists and advocacy organizations emerged, spurring further research, evaluating the causal evidence, and ultimately staging corrective events from hearings in legislative houses to significant lawsuits. A decade later, consensus had built in certain scientific communities that anthropogenic sources were despoiling the physical environment to the detriment of human health and the functioning of vital ecosystems. This consensus resulted in the first wave of environmental protection legislation, the creation of environmental protection agencies, and a general call to action. However, translating an activist sensibility to public policy was anything but a straightforward exercise of commissioning scientists and engineers to address and solve the problem. The early policymakers soon discovered that they had stumbled into philosophical problems about the bounds of reason, human nature, cultural and ideological standpoints, technological optimism and pessimism, fear, doubt, uncertainty, ambiguity, and ignorance, and ultimately, reflexivity and humility. What began as a simple case of adopting technological solutions for ostensibly technical problems turned out to stir up a deep swell of passion. Embedded within the debate about technical and legal solutions were deep humanistic concerns, some stemming from the beginning of thought. The purpose of this chapter is to address some of the big debates in the mainstream public policy disciplines that took place from the late 1960s onward. It is organized by keywords that follow the emergence of the central ideas. The very first is reason, which initially formed the basis of public policy, usually referred to as the risk paradigm, which sought to address and solve the problem of pollution. Next are a cluster of terms—bias, culture, and fear—that explore some of the early critiques of that approach, stemming from works in psychology and anthropology. Finally, I discuss how a few decades after the so-called risk paradigm was first articulated, amid new evidence of the deleterious impacts of industrial pollution, some of the
risk paradigm’s central assumptions began to be questioned. The three keywords for this last section of the chapter are doubt, reflexivity, and caution.
Reason

In the beginning was the word, and the word was reason. Its salience lay in common sense. A reasoned approach, as it emerges in risk literature, usually entails ascertaining the facts, weighing the remedial options, deciding on the best tradeoffs possible, and then communicating the decisions clearly to the various stakeholders, including the public. This strategy is the essence of one of the earliest attempts at tackling the problem of toxins in the environment in the United States and other countries. The risk paradigm, as the process has been called by some subsequent commentators, consists of a three-part process—risk assessment, risk management, and risk communication. A good example of the risk paradigm in action is the process of the United States Environmental Protection Agency (EPA)—which forms the basis of the description here. Risk assessment is the attempt at ascertaining the facts. After scoping the nature of the problem—in this case, pollution—and deciding on the methodologies for assessment and analysis, scientists measure the extent and impact of contaminants, and predict how they might behave in the long term. Risk assessments are concerned with three factors. First, they determine the extent of the physical presence of a polluting chemical in the air, soil, or water. Next, they examine the impact of the exposure on either human beings or the landscapes and other biotic entities on which human beings depend, now and in the foreseeable future. Third, they evaluate the toxicity inherent in any contaminant. Here, the goal is to estimate the impact of the chemical at a given level of exposure. This, in turn, enables the prediction of the nature of risk and its magnitude.
In essence, risk assessments are a quest to determine these probabilities. The actual process of conducting risk assessments is to a great extent codified in the procedures adopted by environmental protection agencies around the world. In the United States, for example, human health risk assessments comprise four key steps after the initial planning and scoping process. The first stage is hazard identification, that is, determining the potential of a chemical to cause harm both in general and under specific circumstances. The next step is referred to in the risk community as the dose-response assessment, essentially a calculation of the relationship between the chemical and its impacts on the environment and human health at various levels of concentration. The third step, exposure assessment, estimates the frequency, timing, and magnitude of human contact with the chemical. Finally, an attempt is made to characterize risk based on the three previous steps. The EPA's process for assessing risks to human health is an exercise in understanding what the extant data convey about the nature and impact of exposure to any given chemical contaminant, also called an environmental stressor. Risk assessments involving ecological health—or how the biophysical environment might be impacted by factors such as chemicals, disease, invasive species, or climate change—are quite similar. Again, after the initial scoping process, the first step of problem definition entails the collection of information to understand which plant and animal species are at risk and decide on the priorities for protection. The next phase is analytical and involves research to paint a picture of the nature and extent of exposure and the possible harmful effects. The last stage is risk characterization, which involves estimating scenarios of exposures and effects, on the one hand, and the interpretation of these scenarios to ascertain the nature and extent of harm, on the other. The goal of risk assessments is to provide risk managers, politicians, and other decision-makers with a clearheaded, science-based understanding of the potential environmental threats.
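To make these steps concrete, the sketch below shows, in Python, the kind of screening-level arithmetic that links the dose-response and exposure assessment steps to a simple risk characterization for a noncarcinogenic contaminant. It is an illustrative sketch only, not a reproduction of any particular agency's procedure, and every numerical value in it is a hypothetical placeholder.

```python
# Illustrative sketch only: hypothetical values and simplified formulas.
# A screening-level hazard quotient compares an estimated average daily
# dose of a contaminant with a reference dose below which no adverse
# effect is expected.

def average_daily_dose(concentration_mg_per_l, intake_l_per_day,
                       exposure_freq_days_per_year, exposure_duration_years,
                       body_weight_kg, averaging_time_days):
    """Exposure assessment: estimated dose in mg per kg body weight per day."""
    return (concentration_mg_per_l * intake_l_per_day *
            exposure_freq_days_per_year * exposure_duration_years) / (
            body_weight_kg * averaging_time_days)

# Hypothetical scenario: an adult drinking water containing 0.005 mg/L
# of a contaminant over thirty years.
dose = average_daily_dose(0.005, 2.0, 350, 30, 70, 30 * 365)

reference_dose = 0.0003                   # hypothetical reference dose (dose-response step)
hazard_quotient = dose / reference_dose   # risk characterization step

print(f"Average daily dose: {dose:.6f} mg/kg-day")
print(f"Hazard quotient: {hazard_quotient:.2f} (values above 1 flag potential concern)")
```

The point of the exercise is simply that the characterization step compares an estimated dose with a reference level derived from toxicity data; everything beyond that comparison belongs to the management and communication stages described later in this chapter.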
Human health risk assessments, for example, seek to identify the various types of environmental hazards (chemical, physical, biological, microbiological, nutritional, or radiological) and characterize the types of health problems associated with an environmental stressor. They examine the sources of these contaminants, which might be point sources, such as smokestacks or effluent discharge locations; non-point sources, such as automobile exhaust; or natural sources. They also look at the various pathways and routes of exposure in the human body. They estimate the probability that a particular demographic will experience health problems when exposed to chemicals at specific levels and times of exposure. In theory, exposures below a given threshold do not pose a human health risk. Human health risk assessments also ask how the human body responds to a stressor, and in particular, the processes of absorption, distribution in the body, metabolism, and excretion; how these processes result in health effects and diseases of crucial organs; and the time period over which such diseases progress. Crucially, they also evaluate whether factors such as age, race, gender, culture, economic status, place of work or residence, and genetics play a role in exacerbating or diminishing the threat posed by a contaminant. Ecological risk assessments similarly explore possible causes and impacts and create scenarios based on scientific research. The second part of the risk paradigm is referred to as risk management. In this process risk managers, such as officers of agencies charged with environmental protection, examine the evidence and scenarios stemming from the risk assessment stage; consult other stakeholders including elected officials, business leaders, and private citizens; and arrive at decisions about the best strategy to address an environmental risk. A great deal of risk management involves weighing options and alternatives. Virtually every public policy decision involves tradeoffs. They might be economic, necessitating an understanding of costs and benefits of various actions and of the distributional effects of decisions across the various strata of a given society. They might also be societal, with a variety of social factors
entering the fray—such as income, race, community values about land use and lifestyles, and the relationship between demographics and susceptibility to illness from a given set of stressors. Risk management also often entails the need to weigh political factors, involving both the dynamics and interactions between agencies at various levels, and macropolitical considerations such as ideology, the passions and interests of groups, and geopolitics. A slate of laws and prior legal decisions typically define or constrain the ability of an environmental protection agency to undertake any particular action. The availability and cost of remedial technologies is also an important consideration. Given the number, diversity, and complexity of variables, it is not uncommon for risk management processes to be iterative—soliciting and addressing gaps in data, reexamining assumptions about economy and society, and reevaluating options on this basis. The third facet of the risk paradigm is risk communication, the process of informing the various stakeholders about potential hazards and explaining the basis and rationale of remedial decisions made by the agency concerned. Effective risk communication involves the contextualization of risk and effective comparisons of hazards and threats. It also necessitates understanding how human behavior might contribute to the reduction of the risk to society. Crucially, as the phrase suggests, risk communication also entails a process of dialogue that enables the public to feel engaged and involved. The key assumption here is that, in democratic societies at least, clear and honest communication is a prerequisite for social compliance and effective behavior. Figure 1 illustrates the relationship between the three components of the risk paradigm. To summarize, the risk paradigm envisages that a potential risk be first understood and characterized in scientific terms, leading to risk assessments. The latter are then fed through a management policy-formulation process, culminating in a policy or a program. Finally, risk communication helps with public compliance, which is a critical element for the success of strategies to implement the remedial
policy. At various points from the conceptualization stage onward the entire process iterates, until the goals of regulation are reached and social compliance is stable and predictable.

Figure 1. The risk paradigm. The diagram shows human health risk assessment (planning and scoping; hazard identification and dose-response; exposure assessment; toxicity; risk characterization), ecological risk assessment (planning and scoping; problem formulation; stressor response and exposure analysis; risk characterization), acute hazards, risk management, and risk communication. Source: https://19january2017snapshot.epa.gov/risk/risk-communication_.html.

In theory, the risk paradigm is a clear, logical, and commonsensical method. It has the merit of separating fact and value, with the determination of facts left squarely to scientists, and the process of evaluating and assessing options delegated to managers. The risk paradigm purportedly enables a dispassionate approach to policy, so that values do not contaminate the discovery process and discussions about tradeoffs and optimizations can be hashed out openly through a democratic political process (figure 2). The foundational beliefs of the risk paradigm, starting in the early 1970s, were that a) the methodologies for measuring air and water pollution were sound, and b) there was adequate scientific ability to
set reasonable standards for enforcement by state and federal agencies. If there was a concern during the early days, it was over whether the individual US states, which had traditionally been responsible for environmental protection, would be able to perform this role in the face of demands for economic growth and industrial investment. It was to address this concern that a strong role in enforcement by federal agencies came to be envisaged.

Figure 2. A simplified diagram illustrating the risk paradigm, linking the problem (toxins in the environment), science (facts / risk assessments), the policy process (risk management and communication), implementation, and a feedback loop of changes (debug if changes are inadequate). Source: Author, based in part on a 1997 lecture at UC Santa Cruz by Piers Blaikie.

It did not take too long, however, for serious questions to arise about the risk paradigm. The writings of William Ruckelshaus, the first Administrator of the US EPA, about ten years after his first term ended in 1973, show these concerns in sharp relief. Reflecting on the experience of the first ten years of the agency, he observed that regulatory policy was at an important point of disjuncture. On the one hand, the interface between the demands of law and the realities of doing science was messy, in that the laws that the EPA had to follow demanded more certainty of protection from harm than science could provide at that point in time. The laws, and the governmental and public mandate that the EPA had to regulate the
chemicals and contexts that threaten public health and the environment as swiftly as possible, embodied what he called the public's "thirst for certitude." However, he observed ruefully, the scientific process took time to resolve the riddles of nature, with some of the interim debates seeming chaotic and controversial. Even if the EPA could somehow get a group of scientific practitioners to agree on a consensus position, it would, by default, only be tentative and open to revision as further evidence came in. Given that public officials often do not have the kind of time that scientists need to settle disputes, the EPA, from its earliest years, had to function under conditions of significant scientific uncertainty. The full implications of this problem were partially masked at the beginning because the kind of pollution that the EPA was trying to control was so blatant. However, with pollution eventually becoming less visible to the naked eye, regulation became a real challenge, especially since the EPA, as a public agency, had to function under full public scrutiny in a context in which the public believed that the nature of pollutants and their adverse effects could be precisely known, measured, and controlled. Although there had been considerable success with many categories of pollutants, he observed sanguinely, this was not the case for all pollutants. On the contrary, the toxicity of several chemicals remained unknown. In the ensuing four decades, the nature of the conflict between science, law, and public policy was articulated further, and claims about what the risk paradigm can and cannot do began to be better understood. Today the US EPA, in describing risk assessments, acknowledges on its website that there are significant limitations, with key data needed for risk assessments in real life conditions often not available. This means that many risk estimates are, to some degree or another, uncertain. Accordingly, the agency calls for transparent presentations of uncertainties about the reliability of risk analysis–based estimates. The issue of scientific uncertainty, and consequently, the blurring of the fact/value distinction, has increasingly become a topic exercising the minds of policymakers concerned with mitigating the
adverse impacts of toxins in the environment. The ensuing discussion on the topic has been wide-ranging. However, even as scientific uncertainty began to be acknowledged as a significant problem in risk analysis, another critical question, central to risk management and communication—the issue of risk perception—constituted an important facet of the debate during the early days of conceptualizing policy responses to environmental risk mitigation.
Bias

As policymakers began conceptualizing the risk assessment frameworks in the early days of the risk paradigm, it became quite clear that the topic of risk was as much about cultural predilections as it was about scientific assessments. One key question was, and is, "how safe is safe enough?" More broadly, tensions arose about how to resolve tradeoffs between safety and the economy. An early approach to addressing this question represents a genre of argument that has endured. In an article in Technology Review, physicist Richard Wilson employed the techniques of comparative statistical analysis. He began by observing that the average life expectancy of human beings had improved over the course of the twentieth century, and that many of the large, potent risks of the past had been eliminated, but that day-to-day life in the modern world posed significant risks for human life. At the time of his writing, the simple act of getting out of bed in the morning and turning on the light was associated with the morbid statistic of five hundred deaths per year due to electrocution. Walking down the stairs for breakfast was not risk free, for falls killed sixteen thousand people per year, mostly in domestic accidents. Other simple activities, such as drinking a cup of coffee, or eating a peanut butter sandwich, or washing clothes with detergent, had carcinogenic risks.
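The force of such comparisons comes from putting very different hazards on a common scale. The short sketch below is illustrative only and is not taken from Wilson's article; it converts the annual death counts quoted above into approximate individual annual risks, and the population figure it uses is an assumption introduced purely to make the arithmetic concrete.

```python
# Illustrative sketch: convert annual death counts into an approximate
# per-person annual risk so that disparate hazards can be compared.
# The population figure is an assumed round number for the United States
# around the time Wilson wrote, not a value taken from his article.

US_POPULATION = 220_000_000

annual_deaths = {
    "domestic electrocution": 500,
    "falls": 16_000,
}

for hazard, deaths in annual_deaths.items():
    individual_risk = deaths / US_POPULATION  # probability per person per year
    print(f"{hazard}: roughly 1 in {round(1 / individual_risk):,} per year")
```

Expressed this way, everyday hazards and industrial risks can be set side by side, which is the kind of comparison on which Wilson's argument for quantitative upper limits rests.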
Wilson argued that there is no human activity, however mundane, that does not carry some risk; that there is no way to eliminate all risks; and that any such attempt would create "incentive for ignorance, not an incentive for safety." After comparing a wide range of possible risks, he concluded that the best approach for a cogent public policy on managing them was to measure them quantitatively and determine an upper limit on a given risk when there is a degree of uncertainty. This, of course, was the essence of the risk paradigm—understand the problem scientifically, and then formulate a policy after a careful consideration of the tradeoffs. Subsequent commentators built on these thoughts and identified some conceptual challenges. For example, Stephen Derby and Ralph Keeney, channeling the framework of the risk paradigm, began by stating that ideally, the question ought to be addressed by clearly identifying alternatives, specifying the objectives and measures of effectiveness, understanding the possible consequences of each alternative, quantifying them, and then analyzing the alternatives to select the best choice. However, they pointed out, in many instances neither the alternatives nor the objectives and appropriate measures are known. The sheer complexity of the actions of pollutants in the environment also means that there are many uncertainties about the nature of their impacts on human health. This, in turn, implies that the consequences of any particular course of action are unknown. Moreover, they argued, a wide slate of questions about values emerges: whose values ought to be used, how ought these values be determined, and how should value conflicts be resolved? By the end of the 1980s, there were many publications ruminating about the implications of scientific uncertainty and the ambiguities in identifying and evaluating alternatives across a range of contaminants in various industries. Some of the most significant contributions about the perception of risk, and consequently, the topic of tradeoffs, were made by psychologists. Of particular importance is the framework of techniques known as the psychometric paradigm. Drawing on other traditions, the psychometric paradigm, as developed by Paul Slovic and many colleagues, used a range of techniques to deliver quantitative
measures of how risk was perceived across a wide range of societal actors, from experts to lay publics. The psychometric paradigm was a theoretical approach that assumed that risk is defined subjectively by individuals influenced by a variety of societal impulses. It used survey instruments to identify these factors and quantify how they interrelated with each other, and built models to understand how individuals and societies respond to the hazards that they face. A critical finding in this research was that human beings are conditioned by biases. Most people, the research found, have what it termed anchoring biases, which form their first approximations, or starting points, and which they then adjust over the course of exposure to other impulses. People also had hindsight biases after the fact. For example, with little access to statistical evidence, people often resort to heuristics that help them make sense of the world around them. One such heuristic, the availability bias, involves reliance on events that have either occurred recently or frequently. The implication was that laypeople either over- or underestimate risks, engendering fear and paranoia on one end of the spectrum, and complacency on the other. Another kind of heuristic is overconfidence. Slovic and his colleagues pointed out that laypeople tend to be very confident about their judgments and are often unwilling to concede that they might not have adequate information to back up their claims. Interestingly, experts, who differ from laypeople in relying on statistical evidence as the defining factor in understanding risk, are as prone to the overconfidence heuristic as laypeople, and this in turn has significant societal consequences in terms of actual accidents. A third heuristic is the desire for certainty. Both laypeople and experts, according to this research, tend to deny uncertainty as an anxiety-reducing strategy. In doing so, however, they often end up fueling overconfidence in their beliefs. Slovic and his colleagues also studied the extent to which risks across a wide range of hazards were known or unknown, and the extent to which these risks were dreaded, or not. They concluded that the subjective perception of risk was demonstrably
correlated with these two factors. They argued that both dread and familiarity, based on a variety of impulses including media coverage, tend to bias experience, and lead to the accentuation or diminishing of the perception of certain risks. Their research also found that there are some differences between the perceptions of laypeople and experts. For laypeople, the more a risk was dreaded, the more they wanted to see it reduced. There was also a direct correlation between the perception of a given risk's likelihood and the desire to reduce it. Moreover, fears about catastrophic loss of life were an extremely important factor in public perceptions of risk. In essence, the psychometric research found that the concerns and anxieties of laypeople are deeply conditioned by a wide range of heuristics and other cultural assumptions, often amplified by the media. The experts, on the other hand, looked primarily at statistical data on mortality. Consequently, expert judgments could, in certain instances, conflict with those of laypeople. Significantly, these disagreements were not easily resolvable through evidence alone, especially in cases of rare hazards, where definitive data is hard to obtain. Here, as in other instances, weaker information was often interpreted to reinforce existing viewpoints and beliefs. The implication is that it is not an easy task to reassure the public and bring their views in line with those of experts. The task is rendered more difficult by the fact that experts and laypeople alike tend to be conditioned in their thinking by a wide range of factors—beliefs about the relationship between safety and the economy, or propensities to fear or optimism, or indeed their political proclivities. The research suggested that the best course forward is for laypeople to be better informed, and for experts to recognize the cognitive limitations of their analyses.
Culture

What explains differing perceptions of risk? While the psychologists mentioned in the previous section gave their answer, others had
quite a different take. A case in point is the work of anthropologist Mary Douglas and political scientist Aaron Wildavsky. In 1982 they wrote an influential book together, Risk and Culture, but each had already placed their stakes in the ground previously. They observed that there is considerable disagreement about the nature of the problem in the West, where people are culturally and politically concerned about totally different things, such as war, pollution, employment, or inflation. Because of this divergence in the conception of risk, they argued, it is virtually impossible to forge a consensus on action. Nor is there a way to know whether any action is adequate. Here, an argument made by Jerome Ravetz, a leading British sociologist and philosopher, is worth noting. Ravetz observed that it is quite difficult to ascertain that enough is being done to prevent a given hazard from occurring. Besides, even after the fact, the question of what else could have been done is often asked, as is the question of whether such action would have been within the bounds of "reasonable" behavior given what was known at the time. Douglas and Wildavsky, for their part, argued that there isn't really a credible way to rank dangers, as that activity requires knowledge about what to address and agreement on the criteria. They illustrated their point with the help of a diagram (figure 3).

Figure 3. Risk knowledge, consent. The diagram is a two-by-two matrix crossing knowledge (certain or uncertain) with consent (complete or contested). Certain knowledge with complete consent: problem is technical, solution is calculation. Uncertain knowledge with complete consent: problem is information, solution is research. Certain knowledge with contested consent: problem is (dis)agreement, solution is coercion or discussion. Uncertain knowledge with contested consent: problem is knowledge and consent, solution is unknown ("?"). Source: Mary Douglas and Aaron B. Wildavsky, Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers (Berkeley: University of California Press, 1982), 5.

This diagram plots risk along the axes of knowledge (about what
is likely to happen or what risks are likely to ensue), on the one hand, and consensus about the ideal goals of addressing those contingencies and eventualities, on the other. When looked at this way, there are four possibilities. The first is the prospect that there is a good sense of what is likely to happen, and societal consent about what action to take (top left quadrant). In this ideal situation, all that one needs to do is ascertain the risks and take the societally sanctioned action. Next (top right quadrant) is a scenario in which there is societal consensus on what to do if a set of threats emerge, but uncertain knowledge or insufficient information about that prospect. This need not necessarily pose any conceptual difficulty for public policy, as it can be addressed through further research. The bottom left quadrant outlines a scenario in which there is scientific consensus about what the outcomes might be, but societal disagreement on how to respond. For Douglas and Wildavsky, this is a problem stemming from lack of consensus about how the consequences ought to be valued. The way forward here is to proceed through further debate and discussion or even, in some instances, through coercion. Thus far, these three quadrants do not significantly challenge the risk paradigm. The problem posed in the fourth quadrant (on the bottom right), however, does. Here, there is neither knowledge about possible outcomes, nor consent about how to address them, should they arise. Douglas and Wildavsky argued that the dilemma about risk, as they framed it, is best understood when approached through what they called a “cultural theory of risk perception.” The social environment, the selection principles about what is or is not risk, and the perceiving subjects, are all interrelated. Some dangers are emphasized above others, connected via a societal narrative with purported moral defects, and justified in several ways, including deploying mortality and morbidity data in a very selective manner. To illustrate their point, they offered a few examples. When they were writing, asbestos was feared more than fire. At a time when there was a great deal of concern with cancers after exposure to chemical toxins,
the data on skin cancer from leisure sunbathing was widely ignored. Douglas and Wildavsky concluded that what to fear and what not to fear was dictated not by data but by common values that led to common agreements. From their point of view this was not that surprising, as social organization is predicated on cultural agreement on critical issues, among which are risk-taking and risk aversion. Such agreement is produced by social dialogue, which helps societies organize "some things in and other things out." The upshot of the cultural theory of risk perception is that placing cultural biases out in the open enables a better understanding of where societies stand, and what differences can and cannot be reconciled. The task for students of risk and society is to explain how and why societies decide to isolate some hazards and threats and concentrate on mitigating others. Put differently, the question is why people deploy claims about risks to change the behavior of other people. For Douglas and Wildavsky the answer, in essence, is that the judgments of people are primarily societally and culturally determined. They contended that people tend to rationalize the risky choices that they make voluntarily, while getting angry when damage is caused by others, especially entities who profit while professing innocence. Douglas and Wildavsky thereby argued that the line between voluntary and involuntary is movable, and with it, discourses about what is and is not just. The fungibility of this line, they claimed, is related to the issue of control: who is in control of whom and what, in ways that affect and impact the lives of people. The cultural theory of risk perception led Douglas and Wildavsky to a cornucopian approach to environmental risks. For example, Wildavsky began one paper by rhetorically asking whether people should be allowed to take risks at all, or instead, whether everything should be engineered to minimize risks. Observing that the richest, most protected, most resourceful, and most technological nation on earth was also the one that was most easily frightened, and that virtually everything there was in dispute, he asked whether this was due to something new in
the environment, or if it reflected something new in social relations instead. Wildavsky argued that the public was being irrational in demanding no risk. Adducing examples from popular media at that time, he pointed out that the hazards the public feared most were often not those that were statistically riskiest. He was disparaging of experts who were ostensibly unable to resolve disputes about the impacts of given technologies. Wildavsky argued that the consequence of this inability to quickly ascertain facts and decide matters was that lay citizens were unable to trust in science and the state and resorted to believing whatever they felt inclined to. He thus posited that the uncertainty over facts and consequently values was due not to the external environment but to a crisis in the culture. In saying so, he questioned what he felt was a risk-averse culture by claiming that people today were better off than previous generations across a range of parameters—from diseases to the economy. To him, this begged the question of why there was an obsession to remove "the last few percent of pollution or the last little piece of risk." Wildavsky's rhetoric reflected his economic conservatism and the overall climate of the Cold War, wherein environmentalists were perceived by some in the national security systems as weakening the state. It was predicated on three key tenets. The first of these was that the ideology of risk prevention constitutes a slippery slope to socialism. He began this line of argument by claiming that accidents and health rates are dependent on personal behavior and individual negligence. Yet, environmentalists seek remedies in governmental regulation of industry. Wildavsky cited the example of product liability insurance, which, he claimed, significantly impacted the bottom line of companies, especially small firms, contending that such policy proposals meant that the general taxpaying population subsidized the costs entailed by the carelessness of some users. Based on these assumptions, he questioned why risk was being collectivized rather than being a matter of individual responsibility.
The nub of his argument was that if individuals fail to assume personal responsibility but ask the government to constrain behavior, such as by mandating seatbelts and imposing speed limits, or to protect them against impact by, for example, paying medical bills or providing drug rehabilitation programs, the burden of risk is shifted from the individual to the state. This, in turn, collectivizes risk, and in the process ushers in socialism. He rhetorically questioned where governmental control would end if it aimed to avoid every imagined evil and claimed that an "anticipatory democracy" would diminish democracy, as it would force the citizenry to trust the moral authority of elected authorities such as the president to make crucial choices. The bottom line, for Wildavsky, was that risk could not be reduced for everyone, and "awful things that do occur must be vastly exceeded by the virtually infinite number that may occur." Therefore, like most issues in public policy, tradeoffs needed to be acknowledged and weighed before any regulatory initiative was adopted. Wildavsky's second tenet was about risk displacement, the idea that reducing risk for some people somewhere often means that it increases for others somewhere else. For example, resisting nuclear energy does not decrease the demand for energy. Instead, energy gets sourced from other entities, such as dams, which in turn causes flooding, among other negative effects. Similarly, eliminating pesticides and fertilizers might result in less food or higher food prices. Another type of displacement, Wildavsky argued, is visible in conflicts among different governmental agencies charged with reducing risk. For example, the Occupational Safety and Health Administration might order the meat industry to install guardrails between carcasses and people who are employed to slaughter animals; and yet, the Food and Drug Administration's rules forbid guardrails because they could potentially be a source of disease transmission. Risk management thus had overlapping impacts, making the issue of tradeoffs quite complex. Wildavsky also contended that high aversion to risk leads to a breakdown in community. The ideology of prevention, he wrote,
implies a lack of confidence in society and social relations, and in relations of trust between people and with public institutions. This, in turn, can lead a segment of the population to a dependence on others, especially leaders such as experts or politicians. For another group, it might lead to a reaction against such dependence, and a diminished willingness to follow these leaders. Moreover, he wrote, the ideology of prevention will ultimately result in coercion, as environmentalists will dictate the scope and limits of individual behavior, such as whether one can drive an ostensibly gas-guzzling automobile. He also countered the question of how one has the right to enrich oneself while endangering future generations by asking how one can impoverish the future by denying it choice, and what one would need to believe about the future in order to sacrifice the present for it. Concerns about the environment, he reiterated, are cultural constructs, and reflect the cosmology of the society that uses them to justify its core beliefs. Similarly, the risks we fear are social, and restrictions imposed in the name of risk mitigation entail a surrender of our own freedom.
Trust
When it was first articulated and developed, the risk paradigm appealed to reason. As the debate about risk and society evolved, it became increasingly evident that reason alone does not suffice in contexts in which the passions and interests of people are ever present, and especially when social behavior is mediated by biases, belief systems, objects of faith, and standpoints (stemming from optimist/cornucopian and pessimist/catastrophic predilections) about modern industrial technologies. Against this backdrop, a keyword of critical importance in cultural constructions of both fear and safety is trust. A significant writer on the subject is the sociologist William Freudenburg, whose writings are in sharp contrast with those of Wildavsky. He wrote that when something goes wrong, say with a chemical toxin in the environment, the public struggles to locate a
“responsible” person or organization who can be trusted to tell the truth. He called this the problem of recreancy. Recreancy has Latin roots: re- (“back”) and credere (“to entrust”). The Oxford English Dictionary defines the term as, in part, “apostasy, treachery; mean-spiritedness.” In choosing to use this word, Freudenburg was picking up on a broad public sentiment in Western democracies during the 1970s and 1980s, which was skeptical about the motivations of firms, especially in the chemical and nuclear industries, and indeed, in the honesty and integrity of regulating agencies. Freudenburg himself used the word as signifying a retrogression or failure to follow through on a duty or a trust: “the failure of an expert, or for that matter a specialized organization, to do the job that is required.” He was referring here to the sociological research suggesting that there is a significant problem of trust in scientific experts among lay or common publics. This failure of trust, he claimed, is one of the main reasons that the public is often seemingly opposed to and fearful of chemical and other industries deemed risky. According to Freudenburg, lack of public trust in experts becomes particularly evident during controversies. These are moments in which the public demands unambiguous answers to two questions that the risk paradigm promises to answer: a) how safe is the technology? and b) is that safe enough? Freudenburg suggested that risk analysts sometimes fail to offer unproblematic answers to these two basic questions. What makes things worse is that they are often unable to address the question of whether critical factors—unknowns, as well as unknown unknowns (blind spots)—are being overlooked. The latter point is particularly important, he said, because we “not only . . . fail to see something, but we fail to see that we fail to see.” An example of this is in the Exxon Valdez oil-spill case, where an obscure risk “disappeared” as a rounding error, but when this one in a million error did occur, it caused a significant catastrophe. Even a conservative risk assessment would not have helped here, as the overall probability would still have been higher than one
Freudenburg observed that there are thus limits to what experts can do. Scientific experts can answer narrow technical questions, as the strength of the expert is in the technical details. However, as the psychometric approach described earlier suggests, most of the important questions about risk and technology are neither narrow nor technical, as they embody culturally and socially situated assumptions about the nature of knowledge and of reality itself. This implies that policymakers should understand the limits of what science can answer, and therefore what kinds of questions are appropriate to pose to scientists. For example, questions about values and blind spots need to be answered more broadly, and not by risk analysis alone. Again, it is important to avoid misusing science by passing off value judgments as science and by ignoring blind spots.
Freudenburg's concerns were echoed by many other scholars and commentators from the 1980s onward. These statements of concern, in turn, fueled a reassessment of the risk paradigm, and challenged two key unspoken assumptions. The first of these was that a quantitative scientific analysis can determine and dictate the threshold of pollution, in effect scientifically answering the question of how safe is "safe enough." This assumption was based on the theory or supposition that natural processes and systems can either absorb pollutants or assimilate them in a manner that renders them relatively harmless up until a particular level of dose and exposure. Further, it was assumed that this threshold of safety could be scientifically determined. The second assumption was that the appropriate target for regulation ought to be an individual chemical or industrial facility, and the goal of regulation ought to be to minimize contamination by a given chemical at a particular site. This, in turn, led to what has been termed the "pollution permit," the license that legally establishes the maximum release rates of given chemicals from given installations. Rarely, as with DDT or lead in gasoline and paint, was a case
made to ban or phase out a chemical due to the overwhelming scientific evidence about its ill effects on the environment and human health.
The challenge to these assumptions came not just from philosophical or sociological concerns but from scientific evidence about the limitations of this approach. For example, Joe Thornton, a scientist and risk policy analyst, has compiled evidence that several classes of chemicals, ranging from pesticides to refrigerants and solvents, have not been absorbed by ecological systems and processes. Instead, they persist in the environment for decades, possibly centuries. Many chemicals are also bioaccumulative: they build up in living organisms, increase in concentration as they proceed up the food chain, and ultimately attain quite high concentrations in larger animals and human beings. Worse, these accumulations are now on a planetary scale; these persistent organic pollutants (POPs) are present everywhere—even at the poles, in the open oceans, and in isolated rainforests where there are few industries present. Human beings globally are affected by these bioaccumulating substances, and chemicals are found throughout their bodies, including in blood, fat, and breastmilk, to take some examples. Crucially, these trends are a result not just of the actions of a few chemicals that are the ostensible bad actors: in the Great Lakes alone, 362 synthetic chemicals have been identified. An estimated 700 xenobiotic organic chemicals have been found in the adipose tissues of the general population of the United States. At least 188 organochlorine pesticides, solvents, plastic feedstocks, specialty chemicals, byproducts, and metabolites have been specifically identified in the blood, fat, milk, semen, urine, and/or breath of the general US and Canadian population—people with no special workplace or local exposures to these substances. Moreover, for bioaccumulative substances, most of the average individual's exposure—greater than 90 percent—comes through the food supply, primarily from animal products. The cumulative evidence is that even small discharges of chemicals, at levels acceptable by regulatory standards, are unacceptable for human
beings and natural systems alike. Put differently, the only acceptable discharge, the argument proceeds, is zero.
Thornton argued that the second assumption can also be questioned on the basis of the scientific evidence. For him, the risk paradigm's regulatory approach, wherein the unit of analysis and regulation is largely local and the goal is to reduce exposure to an accepted level in the proximity of a discharging facility, encourages the dissipation of pollutants using measures such as larger discharge pipes or smokestacks. However, it does not take into consideration the impact of the traces of pollution farther away. The consequence is that while pollution might be decreased locally, it increases regionally and globally due to thousands of facilities pursuing the same policy. Moreover, because of bioaccumulation, the risk burden on human beings and other species alike is significant. What makes this problem worse, according to Thornton, is that there is no established or accepted method to calculate and analyze the cumulative impact of thousands of global sources of contamination.
A related concern is about the synergistic effects of toxic chemicals interacting with each other in the environment. It is extremely difficult, methodologically, to precisely characterize and explain the complex processes at play when thousands of chemicals bioaccumulate in the environment. What is known scientifically is that pollutants do interact in novel and impactful ways, and that dose-response assessments made for individual chemicals often do not hold up in the context of a physical environment or a human organ awash with other chemicals too. The approach of regulating chemicals individually is therefore not necessarily the best one if the health of the environment and human beings is the goal. What matters more is the total toxic burden, the various modes through which chemicals, individually and collectively, impact biological systems. Therefore, this argument proceeds, an effective regulatory method ought to go beyond straightforward cause-effect and single-chemical models. Thornton also drew attention to the poor quality of available
data about the effects of chemical substances. As of 1997, according to a report by the National Research Council in the United States that he cites, 70 percent of high-volume chemicals that had been subject to regulatory action, and thereby ostensibly studied comprehensively, lacked even basic chronic toxicity data. For more than half of these chemicals, reproductive toxicity tests were unavailable; for two-thirds, neurotoxicity information was lacking; and for close to 90 percent, immunotoxicity data did not exist. There was even less information about the derivative substances formed as byproducts. Compounding the problem of missing data is a stark statistic: while the US national toxicology program produces assessments for between ten and twenty substances a year, no fewer than five hundred to a thousand chemicals are introduced into commercial production over the same period, with the result that what we know about the impacts of chemicals is progressively diminishing.
The problem of data is exacerbated by regulatory processes that conflate the absence of data about risk with the proposition of no risk. Thornton observed that this lack of data implies that the risk paradigm is unable to advance the goal of protecting public health. He pointed out that following its precepts means that synthetic chemicals are presumed harmless until proven otherwise, and that the absence of toxicological data is often taken in risk analysis to imply that a substance poses zero risk. The consequence is that the risk paradigm often functions "on ignorance rather than knowledge." On this basis, Thornton concluded that assurances of safety derived from the methods of the risk paradigm are unfounded.
What these challenges point out is that the risk paradigm has limited efficacy even in the best-case scenarios. It struggles when and where POPs interact and persist in the environment, and it has limited ability to address bioaccumulative pollution on a global scale. Moreover, even the engineering systems created to help address pollution fail at times, as is the case with most artifacts of human design. As Thornton put it,
Human error, aging equipment, and fluctuations in operating and environmental conditions can all result in unexpectedly large releases of chemicals to the environment during capture or disposal. Landfill liners decay and leak, incinerators undergo upset conditions and explosions, chemicals are spilled, and so on. With the use of synthetic products expanding globally, especially in developing countries where regulatory and technological infrastructures are less developed, the scenario of optimally designed, operated, supervised, and maintained control and disposal technologies becomes highly unrealistic.
Given these constraints, public trust in regulatory policy and expert judgments is further diminished, especially among communities that are at the receiving end of a public health catastrophe due to environmental pollution.
Doubt
A key assumption underlying the risk paradigm is that risks can be clearly and scientifically determined. In other words, it should be possible, through the application of the scientific process, to determine the slate of possible outcomes, positive and negative, relating to the impacts of toxins in the environment, and ascertain the probabilities associated with each of these potential outcomes. While there are several approaches and methodologies for the determination of these probability distributions, the broad idea behind risk assessments, theoretically speaking, is not in dispute. The consensus goal is simply to understand and characterize risks scientifically. In the decades after the risk paradigm was developed, however, it became increasingly evident to the environmental risk management community that instead of reliably yielding verities, risk assessments usually exposed a large number of unknowns. Quite often, the effects of certain chemicals are not fully clear—especially concerning how they interact with other chemicals in the environment,
how they disperse or persist, and what they do to the organs and biological systems of living beings. Far from a scenario where science could be deployed for a risk analysis and the results fed into a valuation and management process, the risk assessment process was often contested, with reason, by scientists and the general public alike. In terms of the four problems of Douglas and Wildavsky's risk framework (figure 3), a great deal of risk decision-making is mired in the bottom two quadrants, characterized either by certain knowledge and contested consent, or worse, by uncertain knowledge and uncertain consent, as opposed to being in the top left quadrant, that of science-based risk assessments.
This raises several issues. At the very outset, there is the problem of lack of adequate information. For example, the sociologist Phil Brown, writing about public health and the environment, points out that barring very few instances, such as people employed in hazardous industrial contexts, healthcare assessments do not systematically conduct inventories of past exposures to chemicals. There is also very little by way of comprehensive data cataloging the medical histories of the interactions that human beings routinely have with toxic chemicals in their day-to-day lives, such as while gardening or cleaning. Again, even though estimating dose-response relationships constitutes one of the fundamental pillars of risk assessment, a surprisingly large number of chemicals do not have valid and uncontested dose-response curves (graphs correlating the impact that a chemical has at any given dose), especially at low doses. As we have seen, we do not always understand how chemicals interact with each other, either. Brown argued that the science of toxicology, as things stand today, is often not able to sift apart what he termed these synergistic effects. There is also the issue of etiological uncertainty, the inability to conclusively establish that a specific disease is caused by exposure to chemicals at defined doses. For example, tobacco smoke, radon, and asbestos are all sources of lung cancer. It is extremely difficult to conclusively prove that a person with cancer in a community exposed to
toxins got the disease due to that exposure, instead of due to a personal choice like smoking. Equally, it is difficult to demonstrate with certitude that a trend of cancers in a community is due to particular environmental influences. Consequently, physicians often face a considerable burden with clinical diagnosis, which is predicated on the ability to connect a wide range of potential exposures (some unknown or uncharted) with an individual person presenting with specific symptoms. When uncertainty abounds, reductionist scientific methods that form the basis of conventional risk analysis are inadequate for clinical practice. Brown thus observed that there is a considerable amount of guesswork, speculation, and even editorializing by medical professionals, who often must operate without adequate knowledge of factors such as exposure histories, dose-response predictions, accurate understanding of synergistic effects, valid etiology models, and diagnostic capabilities. As a result, those affected by illness suffer further collateral damage.
The diversity of risks is also rendered complex by the way they are bundled. For example, it is seldom the case that risk management decisions are tradeoffs between only one or two forms of hazard. Taking the case of the energy sector, Andy Stirling and Sue Mayer pointed out in a 2000 article that the spectrum of risks is broad, ranging from greenhouse gas emissions to ambient waste; they manifest in many ways; and they have implications not only for physical and biological systems but for the cultures, societies, and economies of real people and their communities. These difficulties and complications show that the objective of precisely identifying the sources of risks through a straightforward scientific process is not quite as simple as the risk paradigm initially promised. Risk management, whether using legal, economic, or other instruments, does not have objective and uncontested information with which to proceed. Risk managers tend to substitute for the lack of objective measurements by selecting a commensurable yardstick. While they argue that the process is scientific, the very fact of such a selection, or scientific reductionism, makes such methods seem somewhat
subjective in nature. Indeed, a range of judgment calls are buried amid claims of scientific validity, with the result that a variety of impacts are excluded from consideration. Stirling and Mayer thus observed that only "a minority of risks are captured by measuring mortality and morbidity, or monetary metrics." On the contrary, risks, for the most part, have many dimensions. They also manifest in myriad ways. Some are voluntary while others are not; some are familiar while others remain poorly understood; and some are acute while others are chronic. The result of selection and exclusion is that the ensuing risk analysis accentuates some priorities and diminishes others. Depending on the analyst or the institution that does the analysis, or the broader political context, Stirling and Mayer point out, different relative priorities are attached to different effects such as toxicity, carcinogenicity, allergenicity, occupational safety, biodiversity, and ecological integrity. The same is true for the differential weights placed by risk managers on different demographic groups, especially those in marginal and liminal spaces, such as workers in toxic industries, racial minorities, and future generations. There are also no simple analytical techniques to guide how such selections or prioritizations ought to be made, for these are ultimately value judgments as much as anything else, often involving the comparison of proverbial apples and oranges. As a result, they pose some of the most vexing, and ultimately unresolvable, problems in social choice theory. Stirling and Mayer succinctly concluded that the sheer complexity and subjectivity of environmental health risks imply that the idea of a clear and distinct "science-based" approach to regulation is naive and misleading.
Stirling and Mayer therefore offered an expanded rendering of the Douglas-Wildavsky matrix, emphasizing the significance of uncertainty, ambiguity, and ignorance. They began by defining four key terms (a simple grid arranging them follows the list):
1. Risk: A state in which the slate of possibilities is known and probabilities determinable.
2. Uncertainty: A state in which the possible set of outcomes is known, but there is neither a theoretical basis nor data to assign probabilities for any of the outcomes.
3. Ambiguity: A state in which the possible outcomes can be characterized only ambiguously—even when there is a degree of confidence that one or another possible impact might occur.
4. Ignorance: A state in which there is neither a good enough characterization of the slate of possibilities nor a way of determining the probabilities of various outcomes.
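These four definitions vary along two dimensions: how well the possible outcomes can be characterized, and whether probabilities can be assigned to them. The grid below is my own arrangement of the terms along those two axes, not a table reproduced from Stirling and Mayer; in particular, placing ambiguity in the "probabilities determinable" column is an inference from the definitions rather than something they state.

\begin{tabular}{l|ll}
 & Probabilities determinable & Probabilities not determinable \\
\hline
Outcomes well characterized & Risk & Uncertainty \\
Outcomes poorly characterized & Ambiguity & Ignorance \\
\end{tabular}

Read this way, the formal machinery of the risk paradigm has full purchase only in the top left cell.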
Stirling and Mayer made several critical points about the limits of the risk paradigm. First, they argued that the analytical tool kit with which to address uncertainty remains a work in progress. An example is scenario analysis, one of the key approaches to resolving uncertainty. While the various scenarios might be identified and characterized, it is often difficult to rank them owing to the lack of information on their likelihoods. In other words, while we might identify the set of possible outcomes, we just don't have the data to pinpoint the probability that any of them might occur. Second, ambiguity remains a critical challenge. Taking the case of greenhouse gases in the atmosphere to make this point, Stirling and Mayer suggested that although we know the broad slate of climatic, ecological, and socioeconomic impacts arising from the human-enhanced greenhouse effect, the exact implications for a particular region might be ambiguous, given the difficulty of understanding the precise dynamics within a particular geographic space well enough to make accurate predictions. Third, there is a need to comprehend the nature of ignorance, for the real world often throws up novel challenges that were hitherto foreign to the human experience, such as stratospheric ozone depletion, endocrine-disrupting chemicals, or "mad cow disease."
Stirling and Mayer argued that in contexts of uncertainty and ignorance, formal risk management processes tend to mask doubt. Their point was that uncertainty, ambiguity, and ignorance are
addressed in practice by using probabilistic techniques of risk assessment, even though the methodologies involved often lack a grounding in facts (data) and a consensus about methods of research and analysis. Stirling and Mayer concluded that such techniques, in effect, constitute a fundamentally erroneous and manifestly inapplicable form of masking, covering up real problems with a thin veneer of scientific authority.
The shortcomings in the quality of data, the nature of intrinsic complexities, and apprehensions about the misuse of scientific authority collectively raise doubt about the very core of the risk paradigm. Doubt is made even more salient by several factors that characterize the technocene. Among these are the facts that the nature of contamination is global and planetary in scale, not local; that the impacts are long term, often lasting decades and centuries; that data on the effects of toxins in the environment is "radically inadequate"; and that the phenomena are novel, complex, and variable, and therefore not well understood. Further, it is often not possible to derive simple resolutions through scientific theories or experiments alone. Often, the best that regulatory entities can do is to rely on mathematical models and simulations. To a significant degree, the data used to drive these models do not stem from indubitable experimental or field studies; they are only the best numbers that an analyst could come up with, and in some cases, they are guesses by experts. Moreover, in many instances, models are run using standard software that is incapable of engaging in a serious and deep causal understanding of natural processes. If we applied classical philosophical criteria about the scientific method, such analyses would face serious intellectual challenges. And yet forecasts at least partly based on assumptions and speculations are what is available to policymakers who need to resolve vital issues relating to human and planetary health, often in contexts of extreme urgency. This is a point that has been repeatedly made in recent times with reference to emergent threats, such as those posed by genetically modified organisms, nanotechnology, and endocrine-disrupting chemicals, to take a few examples.
Reflexivity
In his iconic work The Structure of Scientific Revolutions, Thomas Kuhn posited what he termed normal science, or day-to-day scientific work, which operates within an agreed framework of theory, practices, and methods, the object of which is to resolve problems posed by the natural world. He argued, however, that there are some circumstances in which this agreed framework can be contested, resulting in considerable debate, or in some cases, scientific revolutions, in which a new explanatory framework is adopted to replace or significantly modify the existing one. He argued that such revolutions have occurred when the observed data was not accounted for fully by the available theoretical and procedural framework. In recent times, scholars have applied the idea to understanding the difficulties faced by the risk paradigm in contexts such as uncertainty, ambiguity, and ignorance—states where there is no paradigm that all practitioners agree on.
An important concept here is that of post-normal science, advanced by Jerome Ravetz and Silvio Funtowicz. Post-normal science is characterized by five key factors: a) facts are uncertain; b) values are in dispute; c) the stakes are high; d) the decisions need to be made urgently; and e) it is difficult to frame the problem, owing to the absence of an agreed-upon methodology to provide a clear solution. Two contrasting scenarios illustrate the difference between normal and post-normal science. In normal science, decision stakes (the impacts of scientific outcomes on the health and well-being of general publics) are low, and the basis of knowledge is not very contested. Quality assurance is done using traditional processes in science, such as peer review or refereeing, where the stakeholders are mostly within the scientific community, and the identity and role of scientific experts are clearly defined.
In sharp contrast is the condition of post-normal science. This is a scenario in which the decision stakes become higher, with science needed to resolve an external public dispute in contexts of scientific
uncertainty. There are more stakeholders here, and they determine quality in disparate and heterogeneous ways, almost always bringing values into play, such as by interjecting cultural beliefs about safety or cost into a risk assessment discussion. Therefore, a result produced validly under one set of conditions might be seen as utterly inappropriate in others. For example, measurements of a toxicant are usually given as an average over time, space, or exposed populations. This might well be adequate for regulatory purposes, but not so to affected communities, who could argue that this approach ignores damaging peak concentrations or harm to them. In post-normal science, scientific investigations are not always owned by or available to the public; they are instead often vested in the interests of others, for example, corporate entities (see the case studies in chapter 4). In such a context, quality assurance is no longer a simple issue, driven by the integrity of peer review. Instead, it becomes controversial, incorporating a wide range of concerns, from confidentiality to other nonscientific factors—with political economy being the main driver. This is no longer a scenario of science being applied to public policy, but a much broader struggle involving administrative power, political tussles, and the rights of citizens. Therefore, quality control becomes a much wider affair, often including journalists and activists who contest official or corporate claims. The condition of post-normality is not helped by ambiguity about the role of experts, who often find their scientific and public roles and interests in conflict. Many are forced to operate under constraints of time and resources—not ideal for scientific investigation—and compromise their standardized routines as a result.
Funtowicz and Ravetz wrote that this dire scenario does not mean that traditional science needs to be jettisoned. Rather, they called for a rethinking of how to deploy science in high-stakes situations marked by uncertainty, ambiguity, and ignorance. They argued for an acknowledgment of uncertainty and complexity, on the one hand, and the explicit rendering of values, on the other. This, they suggested, could be the basis for a new model of public engagement with science, based on an interactive dialogue.
[Figure 4 elements: science; the problem; changes; competing representations; policymakers; civil society; policy process; implementation/resistance.]
Figure 4. The condition of post-normal science in risk management. Source: Author, based in part on a 1997 lecture at UC Santa Cruz by Piers Blaikie.
Such a dialogue would, in principle, take into account the salience and importance of social context, incorporating local history and the experiences of citizens and civic entities. New problem-solving strategies might come into view, embodying different fact/value hybrids and incorporating local knowledge, personal knowledge, citizen science, and democratic principles of knowledge production and value resolution. Instead of the simplicity and elegance of the situation depicted in figure 2, the condition of post-normal science involves a messier and more complex situation, in which dead certainties give way to competing representations, resulting in a cacophonic iterative loop. Figure 4 is an approximate summary.
The idea of post-normal science has gained momentum in the global movement for science and democracy. This movement encompasses a wide range of concerns. It nonetheless offers important insights into how environmental risks might be addressed in conditions of uncertainty, ambiguity, and ignorance. Stirling and Mayer offered an excellent summary of the prospects, listing a host of criteria.
The first is humility. They urged a "culture of humility" in which knowledge claims that purport to be complete or definitive are replaced with a more guarded open-endedness and willingness to listen. Next is the importance of a quest for completeness: to widen the scope of regulatory appraisal, and to include in the analysis not only the traditional processes of causality but also the cumulative, additive, complex, synergistic, and indirect effects that spawn doubt and apprehension. It is also important, they argued, to systematically identify not only the benefits of an action but its adverse consequences in a wide range of contexts. Moreover, they called for a comparative approach rather than a case-by-case one, so that public policy decisions can be made on the basis not just of the case in question but of the experience gleaned from others that are similar.
The last four criteria are specifically about the public interface in decision-making. Stirling and Mayer called for a "full engagement by all interested and affected parties, to learn the full extent of existing knowledge and understand the priorities and assumptions of all the stakeholders." They therefore emphasized the need to map the consequences of different value judgments and framing assumptions. Next, they demanded transparency in methods, including detailed audits of information and decision-making flows. Finally, they emphasized the importance of diversity, seeing the inclusion of a wide range of experiences, opinions, and social perspectives as a hedge against uncertainty and ignorance.
Caution
Apprehension, doubt, and reflexivity together urge caution. In the context of environmental risk mitigation and management, this impulse has manifested itself in the concept of precaution. The precautionary principle has emerged as one of the key tenets of an alternative to the risk paradigm, at least in contexts involving scientific uncertainty, high stakes, and low consensus among scientific experts.
In its essence, it captures the idea that prevention is better than cure. It embodies and builds on many of the concepts associated with reflexivity—such as humility, the acknowledgment of complexity and variability in the natural world, and the recognition that planetary systems critical for human survival are often precariously vulnerable. Some advocates of the precautionary principle go further and speak of the intrinsic salience of all life. There is also a body of work in the field of environmental human rights that argues that natural entities, including trees and rivers, have legal standing. Precaution is thus at once a principle, a set of practices, and an umbrella movement that emphasizes an approach to regulation involving long-term, holistic, and inclusive perspectives, and therefore scrutiny of all claims about benefits, costs, justifications, and alternatives.
The precautionary principle has been described succinctly by the philosopher Per Sandin in the following manner (it is restated symbolically after the list): an action A is precautionary with respect to something undesirable X, if and only if:
1. A is performed with the intention of preventing X;
2. the agent does not believe it to be very probable that X will occur if A is not performed; and
3. the agent has externally good reasons
   a. for believing that X might occur,
   b. for believing that A will in fact at least contribute to the prevention of X, and
   c. for not believing it to be certain or highly probable that X will occur if A is not performed.
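For readers who find symbolic shorthand useful, Sandin's conditions can be compressed into a single biconditional. The notation below is my own rough sketch, not Sandin's: Int(A, preventing X) says that A is performed with the intention of preventing X, B(...) marks what the agent believes, and R(...) says that the agent has externally good reasons for the attitude in parentheses.

\begin{align*}
\mathrm{Prec}(A,X) \iff\ & \mathrm{Int}(A,\ \text{preventing } X)\\
 \wedge\ & \neg B(\text{$X$ is very probable if $A$ is not performed})\\
 \wedge\ & R(\text{believing that $X$ might occur})\\
 \wedge\ & R(\text{believing that $A$ will at least contribute to preventing } X)\\
 \wedge\ & R(\text{not believing that $X$ is certain or highly probable if $A$ is not performed}).
\end{align*}

Nothing of substance turns on the symbols; they simply make visible that the definition combines one condition about intention, one about the agent's beliefs, and three about the reasons the agent can give for those attitudes.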
While there are many variations of the precautionary principle, they are all conceptually founded on the syllogism above. They also share other common elements. First, they agree that for the principle to apply, there must be a clear, salient, and visible external threat to human health or the environment. Next, they concur that the actual extent and impact of this threat must be uncertain or ambiguous
for scientific investigators. Critically, they also agree that because of the potentially destructive nature of the threat, some kind of action is needed. Put differently, the precautionary principle mandates that in some contexts, the absence of scientific evidence establishing the extent and impact of a hazard must not stop us from acting to prevent the worst consequences.
Philosophically, the precautionary principle has three elements. The first of these is intentionality. Any action must be performed with the explicit intent of mitigating a defined threat. The hazard in question must, however, be more than a mere logical possibility—there must be a basis, whether historical, comparative, or observational, for making the case that the threat is real and significant. Next, the precautionary principle only applies when there is scientific uncertainty. In deploying the principle, one should not believe it to be certain or highly probable that a threat will materialize if the precautionary action is not performed. It is this uncertainty that makes the precautionary approach different from traditional approaches to risk management, which assume certainty. The third criterion is that of reasonableness. For a regulatory action to be precautionary, the risk manager must have good external reasons not only to believe that the threat will otherwise materialize, but also to believe that the regulatory action might help mitigate its worst impact. It is important to note here that precaution is not the same as pessimism or unsubstantiated fear: the latter is about beliefs, whereas the former is about our actions.
to be used. Instead, precaution, with its reversal of conventional burdens of proof, acts as a speed bump, preventing harm. Third, precaution demands the consideration of safer alternatives, and proposes "alternatives assessments," with any regulation required to weigh all the possible ways of meeting a given goal. In emphasizing these elements, the precautionary paradigm aims to ensure that chemicals without safety data are not allowed to be deployed or marketed. The goal here is to achieve higher levels of protection, reducing, for example, the number of industrial toxins abundantly found in the open seas and distant tropical forests. It has a democratic and people-centered mandate, asking for consultation, transparency, the consideration of worst-case scenarios, and a preference for the health and well-being of people and their environments over the economic interests of industries.
The precautionary approach is, strictly speaking, not a paradigm but a spectrum of options and approaches. The stronger versions of the precautionary, or no-regrets, philosophy, which I have described thus far, mandate action even when there might be significant costs. The 1982 United Nations World Charter for Nature and the Wingspread Declaration of 1998 are both good examples. On the other end of the spectrum, the weak version allows preventive measures to be taken in contexts of uncertainty, but without a requirement for action. Here, actions might be postponed because of, for example, the prospect of economic harm. Examples of the weaker version of the precautionary principle are the 1992 Rio Declaration on Environment and Development and, in the same year, the United Nations Framework Convention on Climate Change. There is some debate about whether the weaker version is an example of the precautionary principle at all, with some commentators arguing that it softens the hard mandatory edge of the principle itself. Some of this debate stems from the fact that a legal principle is a source of law, and therefore carries a mandate in law, with courts able to apply it in enforcing actions, as is the case in the European Union. An "approach," by contrast, is merely suggestive.
While the debate on this subject continues, it is worth recognizing that some version of the precautionary principle has been adopted in practice in several contexts. Among international instruments are the World Charter for Nature, adopted by the UN General Assembly in 1982 and arguably the first international endorsement of the precautionary principle; the 1987 Montreal Protocol; and the just-mentioned 1992 Rio Declaration. Examples of the precautionary approach in national and regional law include the 1983 Swiss Federal Act on the Protection of the Environment, the New South Wales Protection of the Environment Administration Act of 1991, the 2000 European Commission Communication on the precautionary principle, and the 2005 Charter for the Environment in France. Other variants of the precautionary approach are seen in Japan, the United States, the Philippines, and India, among other countries. Some companies have also adopted the precautionary principle as part of their stated chemicals strategy.
In the past few years, however, the precautionary approach has evolved and expanded in scope, addressing perceived gaps in the regulation of toxic substances. To cope with the issue of persistent and bioaccumulative substances, a principle of zero discharge has been proposed, with the goal of eliminating all such releases. One way of achieving this is clean production, the redesign of products and processes to eliminate toxic chemicals before they are released into the environment. The idea here is to use the cleanest technologies available. Advocates argue that technological alternatives exist in many sectors, and have even been implemented in some industries, such as dry cleaning. They therefore demand more widespread searches for such alternatives across industrial sectors. At a philosophical level, many explicitly advocate replacing the regulatory approach that considers chemicals on a case-by-case basis with one in which the burden of proof is reversed. The so-called reverse onus principle requires those who propose to produce a chemical to demonstrate its safety. The idea here is that under the risk paradigm, as pointed out earlier, lack of
data was often conflated with evidence of safety, and chemicals were used or emitted until evidence of their harm was documented. Advocates of reverse onus argue that this is not as radical an idea as it might seem at first glance, as it is the principle underlying the approval of pharmaceutical products. Another, related approach is to shift the unit of regulation from individual chemicals to classes of chemicals. The concept here is that if chemicals of a given class, such as organohalogens and metallic pollutants, have proved harmful in the past, a new chemical of the same class ought to be considered harmful at the outset. Over time, all synthetic chemicals belonging to classes for which a prima facie case for hazard can be made, based on history, would be progressively reduced, a process termed chemical sunsetting. This would gradually transform current industrial technological systems to make them compatible with human and ecological health. Because the process is gradual, it need not destroy industries, but would instead transform them over a period of time.
Conclusion
The risk paradigm that emerged in the mid-twentieth century articulated a commonsense approach to a novel public policy question: how to manage the deleterious impacts of toxins in the physical environment. The proposition was simple and straightforward: understand the underlying problem by studying causality scientifically, determine the best way to mitigate the worst effects using all the economic and legal tools at the disposal of environmental protection agencies, and execute the ensuing policy with clear communication to the public. In adopting such an approach, it consciously avoided conflating the two vocations of science and politics that Max Weber so eloquently characterized more than a century ago.
Yet, this careful and consensual approach to public policy soon became a subject of considerable debate and soul searching.
The psychological work on biases indicated that risk perception was not a simple matter of digesting the scientific facts, and that people—lay and expert alike—had biases that guided how they understood risks, interpreted communications, and responded to rules and regulations. Risk management also engendered a significant debate about individual responsibility versus state coercion as the way to govern. Moreover, it starkly contrasted two divergent ideological perspectives, Panglossian conservatism and reflective precaution. The former insisted that regulatory moves ought to be made based on verities—or in other words, that society should do nothing until negative effects actually happen, risking the possibility of significant damage to the public good. The latter, on the other hand, held that public policy should not wait for negative impacts to occur—but this raised the further issue of costs, and of the displacement effects involved with such policies.
With the passage of time, more questions arose about the efficacy of the risk paradigm, including some that struck at its very core, when it became evident that risk assessments themselves were frequently challenged by a complete lack of data, or by scientific uncertainty and ambiguity, or in some cases, even by a lack of clarity about investigative methods. This was no longer a scenario in which the scientific method could be applied in a traditional, straightforward manner. Recognizing this, some commentators, like Ruckelshaus, observed that risk assessment is "something of an intellectual orphan." Scientists were especially uncomfortable when the public and politicians asked for clear guidelines where they saw uncertainties and the need for more research. Alas, what policymakers asked for was in effect to use scientific information in a way that lies outside the "normal constraints" of science. Science thereby became entangled with politics, with all the tensions and contradictions that Weber had identified. In particular, the neat division between facts and values that stood at the core of the risk paradigm became, in many instances, a chimera. With decisions increasingly being about values, science could only do so much.
Such a scenario made risk management extremely difficult, and the first administrator of the EPA argued in his second term that there was a need for managers to be given more flexibility, and for the public to be engaged in debate about choices. Not surprisingly, these debates continue to this day. During the decades since the risk paradigm was first developed, it has been modified in many ways, but the core issues it identified in its early days remain with us. Among these is the question of how to make public policy in the context of scientific uncertainty. These questions eventually gave rise to the precautionary framework. This is not a unified approach that displaces the risk paradigm, but rather a set of concepts that broaden the scope for public policy solutions. These include, among other things, a reflexive understanding of the nature of uncertainty and ambiguity; an understanding of bioaccumulation and the global nature of pollution and harm; the careful phaseout of toxic chemicals grouped into classes of similar toxins; the articulation and implementation of weak and strong versions of the precautionary principle in law and public policy; and new forms of industrial engineering that embrace the concept of zero waste and pollution. Needless to say, such interventions do not address the political economies that shape the distribution of risks in society, an important question that will be addressed in chapter 4.
3 Disaster
Introduction
Even as scientists and policymakers grappled with the public health challenges posed by the presence of chemical toxins in the environment, the world was rocked by a series of major industrial disasters such as Bhopal, Chernobyl, and Fukushima. These dystopian nightmares, which embodied the sheer dread of the technocene, occurred at the intersection of three broad trends. First, there are sociological arrangements—economic, political, cultural, gender, and racial, among others—which underlie inequities and stratifications both at the global level and within nations and communities. Next is human frailty, with roots both in social psychology and in the dynamics of institutional interactions. Third, there are the technological systems themselves, human-engineered infrastructures of great complexity that episodically fail to perform safely and consequently cause mass casualties. Together, they combine to create the potential for catastrophe, exacerbated in some contexts by the state of nature.
When industrial accidents happen, the tendency, especially in the media and public discourse, is to look for scapegoats. Time after time, a catastrophic event is reported in gory detail, followed by a search for a cause, a clamor for retribution, a degree of corrective action, and then closure—until the next event, when this sequence recurs. Hidden largely from the public gaze, however, is academic research that suggests a much more complicated and nuanced understanding of how and why industrial accidents happen. There is, to begin with, a literature on the nature of human error, which speaks to human psychology and the cognitive capabilities and limitations of our species. Then, there is research exploring the nature of organizational dynamics, including the conditions under which mistakes and misconduct occur, and the role played by the very structure of firms, administrative units, and other groups in emergent disasters. Yet another thread that explains the causal arc of industrial accidents is that of bureaucratic malevolence, stemming from missing expertise, entrenched corruption, and discourses of apathy. Together, these literatures combine to create novel ways of theorizing industrial accidents. Among them are normal accident theory and its counterpoint, high reliability organization theory.
The purpose of this chapter is to explore each of these threads, with a view to gleaning broader philosophical narratives that can shape a humanistic conversation on the nature of industrial accidents. As in chapter 2, the method will consist in examining some of the keywords that are deployed in the fields of psychology, sociology, politics, and management, among others, to describe and characterize the core concept. Not surprisingly, for the most part, these keywords are ordinary language terms used in a technical sense.
Human Error
A classical expression of frailty is on display whenever a search for a scapegoat following an accident or disaster points to the actions of
a set of individuals. The concept of human error is an oft-recurring keyword in such contexts. It is, however, a problematic term, as many catastrophic events are caused by systems and processes rather than by individuals. This section will explore the notion of human error, taking into consideration the significant psychological literature on the concept.
A place to start is with the idea of error itself. Of the five distinct meanings of the word identified by the Oxford English Dictionary, two are particularly relevant: "The condition of erring in opinion; the holding of mistaken notions or beliefs; an instance of this, a mistaken notion or belief; false beliefs collectively," and the idea of a mistake, as in the following definitions: "Something incorrectly done through ignorance or inadvertence; a mistake, e.g. in calculation, judgment, speech, writing, action, etc.," and "A mistake in the making of a thing; a miscarriage, mishap; a flaw, malformation." These meanings of the word have been in common usage since the fourteenth century. Social scientists who study organizations have, however, tended to use the word more specifically. They refer to contexts in which planned sequences of activities, either physical or mental, do not meet their intended outcome. Further, this failure is not attributable to either chance or external agency. It is worth noting that the concept of error applies only in the case of intentional actions—because errors, by definition, cannot exist if the action itself is unintentional.
The academic literature in psychology identifies at least two specific types of error: active and latent. The former consists of errors associated with the role of frontline operators in complex technological systems. They might be crews in control rooms of industrial plants, pilots of airplanes, executive officers on ships' decks, air traffic controllers, or others in similar positions where their decisions are critical for safety and whose actions are likely to be immediately felt. Latent errors, by contrast, are caused by entities not immediately in the line of action. They might be the result, or the culmination,
of decisions about processes made or infrastructures installed in the past, which at a critical juncture might combine with other factors to cause an accident. A poorly designed instrument panel might lie dormant across many use cycles. At a particular moment when something starts to go wrong, however, it might end up playing a critical role in the unfolding of an event, such as when a concerned crew have difficulty comprehending what that panel is telling them. Latent errors are the results of decisions usually made by people who are distanced either spatially, temporally, or both, from direct interaction with the unit. For the most part, they tend to be designers, higher-level decision-makers, and managers. However, at the time of an event, it is the workers on site who make the decisions and choices that result in an unwanted outcome, even though they are, for the most part, unaware of the structurally hidden dangers created by design and commissioning choices made by others farther up the hierarchy.
Another set of important keywords is slips, lapses, mistakes, and violations. The terms slips and lapses are derived from the concept of errors. Slips and lapses are errors stemming from a failure of execution and can occur even if the design or plan is adequate. Mistakes occur earlier in the timeline; they stem from failures of judgmental and/or inferential processes related to the selection of an objective and the identification of the means with which to achieve it. Mistakes can occur whether or not the actions conform to the plan or the original design. They are therefore likely to be more complex and less comprehensible on the face of things than slips and lapses, as they arise out of thought processes or mental states that might be the preserve of the individuals involved at a particular time and place. Consequently, they pose significant threats, as they are not so easy to grasp or detect. Violations, the fourth category, are almost by definition intentional, for the very idea of a violation implies deliberation and intent. Some violations are routine, whereas others are exceptional. Routine violations either stem from the human tendency to choose the path
of least effort, or from environments that are indifferent about punishing violations or, for that matter, rewarding observance. A common argument in the psychological literature on violations is that workers often find it convenient to violate rules in order to meet their objectives more quickly, especially if their transgressions concern what they see as trivial and rarely sanctioned safety procedures. Exceptional violations, in contrast to the routine, are often a product of specific contexts, producing what are called "system double binds"—operational circumstances that result in violations regardless of how well-intentioned or motivated the operator is.
Last, but by no means least, is the idea of fallibility—especially concerning decisions. Important and often influential choices made by designers and managers, from the highest levels of corporate headquarters to those in charge of a small unit of a given plant, sometimes turn out, at least in hindsight, to be poor. This is true even for the best-run companies and organizations. Compounding this apparent fact of life is the balancing act that organizations are often forced to make between the goals of production efficiency, on the one hand, and human and environmental safety, on the other. While these two goals are not necessarily incompatible in theory, there are many instances in real life where paucity of resources entails tradeoffs, at least in the short term. Further, given that accidents generally happen unannounced and randomly, and that many industrial systems are large and complex, designers, managers, and operators are often unaware of the imminence of any specific threat or danger, and indeed rarely have a complete comprehension of the overall safety profile of a particular plant. Making matters worse is an element of subjectivity in how decision-makers interpret the feedback about safety that they receive. Some tend to switch on their defensive filters, buffering them from bad news and deflecting any problems to outside influences, be they unions, market forces, or resource shortages. Defensiveness is also evident in units and firms where bad safety records are attributed to the actions of operators. In contexts
where defensiveness is a trait of an organization or a subunit, effective remedies for problems are not actively pursued or acted on. In such cases, the problem of the original fallible decision is further exacerbated. Poor design can lead to bad outcomes if the line managers and workers at critical subsystems are unable to compensate for these flaws with astute actions at critical moments. Good and effective line management can result in better or safer outcomes, but this often necessitates investments. An example is quality training, which can give workers the ability to work effectively under time pressure, a better perception of hazards, good skills, and knowledge of rules; investments in good management systems likewise result in effective scheduling, personnel management, and procedures. Addressing safety also necessitates creating processes that clearly identify and address the preconditions of unsafe acts and building a work culture that recognizes that unsafe acts will occur despite the best measures. Such recognition can help create mindsets that enable operators to act effectively to prevent unfortunate outcomes. The keywords just discussed—latent and active error, slips, lapses, mistakes, violations, and fallibility—help psychologists explain and characterize the concept of human error. The upshot is that while there are indeed many instances in which the error concerned is caused by failures in execution, a great deal of "human error" can be understood as a result of problems beyond the acts of an individual or a group of individuals. The idea of latent error, for example, points to the existence of design faults, often visible only in hindsight. Again, the concept of a mistake, rooted as it is in failures of judgment, recognizes that even well-meaning and well-intentioned operators can cause errors. Moreover, while judgments about the actions of individuals can be made in hindsight, it is important to be aware of the challenges of making critical decisions in the heat of the moment. Particularly important here are the interpretive difficulties in such contexts, stemming from ambiguities in the signals operators receive from systems.
The case of violations is slightly different, although here too there are often reasons why they occur. A regular but generally nonthreatening alarm, for example, might be ignored with little or no consequence on a day-to-day basis—except on the rare occasion when it combines with some other event to trigger a sequence that leads to an accident, and in the process leaves operators second-guessing. The psychological research on human error thus contends that in complex engineered and high-risk systems, the rules of engagement are not always cut and dried. In some instances, due to the nature of the systems and the cognitive and other psychological limitations of the human actors, the narrative arc of the story might, in hindsight, seem almost inevitable. In others, the case can be made that effective processes created a context of robustness in understanding, interpretation, and action, leading to better outcomes. These psychological insights are therefore quite important for understanding industrial accidents and the disasters that stem from them. However, there is an important twist to this story—the role of human social arrangements, as elucidated in the social scientific work on the dynamics of organizations. The keyword organizational deviance captures this concept succinctly.
Organizational Deviance

Studies of organizations have been an important concern for sociologists since the origins of the discipline, when one of its founders, Max Weber, warned that "a society dominated by organizations imbued with legal-rational authority would suffer negative consequences." Subsequent scholarship posited that with the rise of formal organizations came the prospect of organizational mistake, misconduct, and disaster. These adverse outcomes stemmed from unanticipated consequences of actions, some of which were contrary to the stated objectives. The fact that suboptimal outcomes can arise in formal organizations with formal structures and processes
created to ensure certainty, conformity, and goal attainment gives rise to the concept of organizational deviance. Diane Vaughan provides a useful working definition: organizational deviance is "an event, activity, or circumstance, occurring in and/or produced by a formal organization, that deviates from both formal design goals and normative standards or expectations, either in the fact of its occurrence or in its consequences, and produces a suboptimal outcome." This definition includes behavior by individual members of the organization and applies regardless of whether they are conforming or deviant. Organizational deviance lies at the heart of what Vaughan calls routine nonconformity, which, she observes, is "a predictable and recurring product of all socially organized systems." Routine nonconformity, in turn, is a consequence of three factors: the environment of organizations; organization characteristics (structure, processes, tasks); and the cognitive practices of individuals within them. Vaughan states that the social origins of routine nonconformity lie in the interconnections between these three elements. The upshot of her argument is that "aspects of social organization typically associated with the bright side also are implicated in the dark side." The subsections to follow explore each of these three factors. As always, it is useful to start with some definitions of ordinary-language words used in a technical sense. At the outset are the words mistake, misconduct, and disaster. With the definition of organizational deviance as a starting point, mistakes are defined as "acts of omission or commission by individuals or groups of individuals, acting in their organizational roles, that produce unexpected adverse outcomes with a contained social cost." This definition recognizes the inherent uncertainties in outcomes. Mistakes in a hospital, for example, may harm individuals differently. Misconduct, by contrast, involves "acts of omission or commission by individuals or groups of individuals acting in their organizational roles who violate internal rules, laws, or administrative regulations on behalf of organization goals." Vaughan points out that the nature of these adverse consequences
will depend on the act itself, with the result that the social costs could be either extensive or limited. The third keyword here, disaster, is defined as a "type of routine nonconformity that significantly departs from normative experience for a particular time and place. It is a physical, cultural, and emotional event incurring social loss, often possessing a dramatic quality that damages the fabric of social life." For an incident to be defined as a disaster, it needs to be large-scale, unexpected, and extraordinarily costly to the general public. According to Vaughan, mistakes and misconduct precede accidents and disasters. The latter, in turn, are distinguished from mistake and misconduct by the nature and extent of the element of surprise, and of social cost. With these broad definitions of organizational deviance and its constituent concepts (mistake, misconduct, and disaster) established, the stage is set to discuss three more keywords that help characterize routine nonconformity: the environment, organizational characteristics, and cognition/choice.

The Environment

In organizational theory, the term the environment refers to the forces or institutions in and around an organization that affect its performance. It includes an organization's resources, networks, and other relational structures, as well as the wider social, political, economic, legal, demographic, cultural, ecological, and technological contexts. Given the number of entities involved, the environment is almost by definition complex, dynamic, and turbulent, which can make for a considerable degree of uncertainty. Consider, for example, novelty or newness in various forms—such as new organizations, employees, programs, products, or services. In each instance, novelty and its implication—the absence of established routines—can potentially result in the creation of new roles, processes, procedures, and people, be they colleagues or clients. These new processes and procedures can result in further complexity and uncertainty. In such a condition, there could be ambiguity in the
understanding of the state of a system among operators—workers, line managers, and others—resulting in mistakes. Another important dimension of the role of the environment is the way prevailing belief and value systems become institutionalized, creating cultural rules governing the roles and goals of actors, be they organizations or individuals. Although rules, in theory, are created as means to ends, they sometimes become ends in themselves. When this happens, there can be conflicts between the way the rules are applied and the needs of specific contexts. This, in turn, can lead to suboptimal outcomes. A final important aspect of many organizational environments is the role of power struggles, particularly over resources made scarce by a host of factors, including competition, disruptions in supply chains, and the activities of regulators. Responses to such contexts and conflicted attempts to cope with the challenge can potentially disrupt stable structures, leading to compromises and ultimately making the organization deviate from its original goals. Mistakes, misconduct, and disaster often stem from environmental contexts. At the macro level, mistakes often arise due to competition for scarce resources. A competitive environment can also engender mistakes at the micro level. For example, norms of efficiency can be correlated with the demands of the bottom line. Or mistakes could arise due to production deadlines, or institutional goals and metrics that reward meeting economic targets. Over time, mistakes can become "quasi-institutionalized," resulting in the deployment of unproven or untested technologies into the marketplace. Competition for resources can also result in workforce reductions, leading to inadequate supervision and the mistakes that follow. The environment can also engender misconduct. Taking the example of the regulatory environment, Vaughan argues that the "sources of regulatory failure are socially organized and systematic across cases, thereby undermining the efficacy of deterrence." As with mistakes, power is a key determinant. Organizations with clout
can shape both the broad regulatory environment and specific outcomes. Such organizations are successful because they create narratives that justify and legitimate their actions in the public domain. On the opposite end of the spectrum, organizations with limited or blocked opportunities for success are likely to violate rules. It is important to note here that the competition that causes misconduct is not restricted to the economic realm; it extends to a much broader range of scarce resources critical to the survival of any given entity. Not surprisingly, power and politics play significant roles in accidents and disasters. Conflicts of interest within organizations often result in poor design decisions, which then engender suboptimal outcomes. It is easy to see why. Political conflicts—whether due to scarcity or other factors—can result in ineffective monitoring, investigation, and sanctioning of offenders, and in less than optimal responses to incidents. Over time, these increase the likelihood of accident or disaster. Common tradeoffs between cost and safety made in the context of competition and scarcity result in a decline in overall safety records. Moreover, offending organizations might be able to retain their cultural legitimacy by denying blame or displacing it from themselves to some other entity ("operator error"). In some instances, this is done by "the creation of 'fantasy documents,' official plans to respond to accidents that are culturally reassuring but lack appropriate resources, strategies, and knowledge for an effective response to crisis." The twin issues of power and culture are amplified in what are called high-velocity environments, where factors such as demand, competitors, technology, and information are in flux or uncertain. In such contexts, disasters arise when responsible agents charged with the task of monitoring such changes fail to do so.

Organization

Processes in an organization also contribute to the production of outcomes that are contrary to the goals and norms of system
designers. Here too, power plays a role in process-related errors. For example, decisions made by higher-level managers concerned with corporate balance sheets might create processes that endanger the safety of employees and clients. Power struggles between subunits can set up a context in which initiatives and programs promoting health and safety are ignored. Subunits may also pursue divergent goals, producing an outcome that is not in sync with that of the organization as a whole. Many organizations are also rigid or lack the capacity to correct errors through robust feedback processes. Both these tendencies lead to the routinization of deviance and result in what has been described as deviance amplification, a process in which small deviations grow over time, with significant unexpected consequences. Moreover, decisions to correct errors could develop into what have been described as error-amplifying decision traps. Sociological research has documented cases in which responses to mistakes by powerful interests end up exacerbating or amplifying error. Those higher up in organizational hierarchies might suppress mistakes and deny responsibility in a bid to protect the status and power of individuals, subunits, the organization, or even a profession. As a result, formal and informal patterns of behavior might emerge within organizations and get institutionalized over time, creating workplace cultures antithetical to safety. Research also shows that such organizational cultures are often engendered by competitive, regulatory, and cultural environments. For example, performance pressures might result in insufficient resources being allocated to safety, or in the adoption of processes that help conceal or even encourage behavior that violates laws and norms. Organizational cultures also produce process-based violations. According to one theoretical framework, employees are socialized in groups—at least some of which encourage or normalize the violation of the rules, laws, and administrative regulations of the organization. Through such processes of onboarding and routinization, subunits, and in some instances the organization itself, reward deviance and nonconformity while punishing those who attempt to play by the
rules. Rank, informal cliques, and differences between professionals are contributing factors in the shaping of deviant cultures. Processes of deviation caused by environmental and cultural factors can potentially result in disaster. As mentioned earlier, environmental strains can help create internal processes that are characterized by conflicting goals, performance pressures and deadlines, and reward systems that reinforce the goals of productivity over those of safety. Some of these processes develop over long gestation periods during which rule violations and discrepant events accumulate to the point where they become routine and not noteworthy, especially in contexts in which cultural beliefs about hazards, nurtured as part of the cultural mores of the organization, allow them to be ignored or rationalized away. In some instances, "practical drift" occurs—a process, often relevant to tightly organized subunits, in which formal and documented procedures devised to tackle worst-case scenarios are incrementally modified, and therefore "uncoupled." The resulting gap between rules and action can impair effective response in a crisis. Conversely, rule conformity can also contribute to accidents and disasters. Interestingly, such pathologies are more widespread than one might imagine, including in otherwise well-run organizations. To summarize, routine nonconformity can be a product of mistake, misconduct, informal organization, and cultures. Deviance from norms, and as a consequence, mistakes, misconduct, and disastrous outcomes, are also evident in the discharge of day-to-day tasks in the workplace. This, in turn, has several dimensions. First, negative outcomes are often a product of skill levels, practice, and the social contexts in which operators work. Equally important are how risks are distributed among the various roles in a job, how organizations delegate and subdelegate, and how they understand and manage mistakes. Deviance is also often a byproduct of gaps in knowledge and communication between various actors in the workplace. Relevant here is the role of decision-making in contexts of scientific and technical uncertainty, or "imperfect knowledge." This is particularly true for engineered systems that,
even in the best of times, can be "unruly" in some contexts. Engineers often work under conditions of ambiguity, and their work is driven as much by practice and prior experience as by any given set of rules. The significance of practice is acknowledged in an important keyword in the literature on organizations—tacit knowledge. The term describes common understandings derived from practice, a "core set" of experiences that are used by operators during their work. It captures the reality that ad hoc strategies, methods, workarounds, and "local" knowledges shape workplace practice in ways that formal systems are unable to capture because of their sheer distance from the scene of work. Conversely, the very nature of tacit knowledge means that people higher up the chain of command are often not fully cognizant of the active rationales of those who operate in local contexts. These decision-makers thus do not possess the "core set" of local tacit knowledge, and therefore have an imperfect understanding of how the system works in practice. This lack of understanding, in turn, leads to what has been described as "interpretive flexibility," with different agents or actors in a system interpreting the meanings of technological signals differently. The gaps in understanding and interpretation, and the resultant ambiguities, create the basis for suboptimal, unanticipated consequences. Mistakes can arise in such contexts when errors of commission stemming from skill or practice, or indeed from poor communication, accumulate over time. In certain instances, some mistakes can be compounded by others, until the negative consequences become irreversible. Misconduct, on the other hand, can arise when the very ambiguities in novel, complex technological systems are exploited to suit the purposes of some miscreants. Examples to illustrate this point can be found, among other places, in studies of financial fraud involving new technological systems, and of "technocrime" involving computers, electronic surveillance, and accounting technologies. In some instances, workers and managers manage to "convert disorder to order," comprehend the workings of the system,
and take corrective action. However, there are many examples in which workers are unable to do so, either because the workings of a technological system mystify them or because operators are simply not equipped to cope with inconceivable occurrences, as training for adverse outcomes often addresses single failures rather than complex interactive ones. They might also lack the requisite coping resources, or work in cultures that accept or tolerate what are seen as minor flaws or errors in order to meet deadlines and other commitments. The way workers understand, interpret, and make sense of risk amid uncertainty and ambiguity is also relevant to this discussion. At the outset, the social location of the agent—worker, expert, or manager—determines how they understand and make sense of risk. Also important are cultural beliefs stemming from social context and location that engender what have been termed "failures of foresight." Indeed, social stratification and cultural perceptions can influence the interpretation of information, with what are called "disqualifying heuristics" leading decision-makers to neglect information that contradicts their understanding of the safety of a technical system. The key point here is that people tend to believe in, and band together around, a core set of values stemming from their status, training, and physical roles in each industrial context. The resultant "normalization of deviance" can nullify signals of risk and danger, and in effect deter people from speaking up in contexts of institutional and organizational mandates that forbid such activity. Significantly, an adverse outcome can change these cultures. What were seen by decision-makers as "ill-structured" problems prior to an accident might seem "well-structured" in the aftermath. The literature on high reliability organizations is, to a degree, an offshoot of this insight.

Cognition and Choice

The literature in organizational theory also raises another topic of importance—the role of cognition and choice in the production of
deviance and nonconformity. Problems of routine nonconformity or deviance on account of cognitive choices arise primarily from the gaps in understanding and interpretation discussed above. Indeed, understanding and interpretation are based, as noted earlier, on cultural contexts that reflect histories and deeply embedded social relations. Often, these belief systems do not contradict or modify what came prior. In this way, they get institutionalized within particular social contexts, with some groups arriving at shared ways of making sense of the risks around them and acting appropriately in response. Ultimately, such practices get socially reproduced, institutionalized, and routinized. There is a practical, commonsensical reason why cultural knowledge is significant: it renders complexity legible and actionable. When institutionalized, such cultural beliefs help individuals determine what kind of action is rational at any given time, as their beliefs are sanctioned within the wider worldview of the organization in which they work. Such localized understandings can, however, run counter to designed or normative standards. Individuals might, in such circumstances, perceive their own conduct as conforming to norms because, seen from the point of view of their cultural framework of understanding, their actions seem quite unproblematic, being harmonized with the subcultures and social expectations of the units in which they work. The problem is that the same actions might look deviant when viewed from a different vantage point. These ideas have been repeatedly corroborated by both psychological and sociological research. For example, scholars in social psychology's symbolic interactionism tradition argue that the idea of an objective reality, in which "truth" and "error" can be easily and reliably ascertained, is a chimera. Rather, they contend that meaning-making in social groups is a product of interpretive work, spontaneous action (sometimes nonrational), situated and emergent epistemologies, and symbolism. Thus, meaning-making, or making
sense of a given situation, is often a product of iterative creative processes, negotiation, intentional and unintentional deception, and the reordering of the social domains of the engineers and line workers operating a given system. The social processes of framing a problem, involving a careful selection of a particular set of concepts as the central points of focus and attention, further exemplify the problem. When such framing turns out to be erroneous, there is potential for suboptimal outcomes. For these reasons, some theorists and practitioners, especially from the high reliability school discussed later in this chapter, make the concept of sense-making and subjective understanding by individual actors central to their analysis, arguing that the multiple pathways between information and interpretation impact action in unanticipated ways. They emphasize the salience of the moment-to-moment ways in which people perceive problems and select what to focus their attention on as they attempt to make sense of a given situation. One of the key insights that these theoretical frameworks offer is that deviance from norms, rules, and rationally laid-down procedures is not so much a result of the immorality of actors as an artifact of how they perceive issues and are thereby acculturated to understanding them. In essence, the processes of neutralization and naturalization of certain kinds of mistakes render them routine and unremarkable, so much so that they do not appear to be violations at all, at least not in a conscious, cognitive sense. Moreover, these theorists claim that there can be contrary and even contradictory suppositions and understandings within any given organization, setting up tensions not only between the general rules and local cultural understandings, but also between the interpretive and normative frames of subgroups and units in disparate areas of a given organization. Evidently, these contrary and contradictory understandings can in some contexts perpetuate a lack of understanding in times of crisis.
Normal Accidents

In 1984, in the aftermath of the Three Mile Island nuclear event, sociologist Charles Perrow wrote a paradigmatic book on the nature and causes of industrial disasters. It presented the frightening possibility that accidents lie latent in many complex technological systems. While the conclusions might not have caught theorists of mistake, misconduct, and disaster by surprise, Perrow's normal accident theory (NAT) was novel in many important ways. To grasp his argument, it is important to understand some of its central ideas and premises. At the very outset is the concept of high risk, which, for Perrow, refers to accidents at installed plants, such as nuclear or chemical facilities, as opposed to the impacts of pollution or other sources of risk to human life and the environment. Thus, high risk applies to "the operators, passengers, innocent bystanders, and . . . future generations," and specifically to "enterprises [that] have catastrophic potential, the ability to take the lives of hundreds of people in one blow, or to shorten or cripple the lives of thousands or millions more." A term that is particularly relevant to the concept of high risk is accident. Perrow defines an accident as an unintended event that results in damage to people, either in a direct, physical sense, or to their economic investments, such as homes or infrastructures. In his analysis of industrial accidents, Perrow distinguishes four levels of aggregation in technological systems. First, there are parts, such as valves and meters. Second are units, or collections of parts with a particular function within a subsystem, such as a generator. The third level consists of combinations of units, which comprise subsystems. A secondary cooling subsystem, for example, might include a steam generator, a condensate polisher, pumps, pipes, and so forth. At the fourth or highest level are complete systems, comprised of many subsystems. Failures restricted to a single part or unit are termed incidents, whereas Perrow uses the term accident to describe failures or disruptions at the subsystem or complete system level. Further, he distinguishes between "discrete
failures," which involve a single, isolated failure, and "component failures," which "involve one or more component failures (part, unit, or subsystem) that are linked in an anticipated sequence." A second cluster of terms critical to Perrow's analysis is linear and complex systems and interactions. Linear systems, as Perrow defines them, consist of many parts that are linked in a linear fashion. On an assembly line, for example, subsystems are segregated spatially. Controls and connections tend to be designed for single purposes and are also segregated. The operators in linear systems, for the most part, understand how the entire system works. They are therefore able to substitute for each other and troubleshoot problems across much of the system. Moreover, feedback loops in linear systems tend to be local, rather than global, in scope. This implies that they can be controlled in a decentralized manner, and that the information used to control any component of the system is likely to be direct and unambiguous. Complex systems, by contrast, usually involve subsystems that are interconnected, often for reasons of efficiency. When a unit that interconnects the subsystems fails, both are impacted. An example is a heat exchanger that is meant to remove heat from one subsystem and transfer it to another. While, in principle, an entirely new heat source could be designed, it is far more efficient to recycle heat. However, such interconnecting units have the potential to disrupt both related subsystems, should they fail. Such failures are termed "common-mode," and they cause diagnostic concerns, as they present operators with two sets of systems that are intermingled. To avoid common-mode failures, designers often include a wide range of safety devices, such as check valves that prevent backflow. Such devices can themselves fail, though. A check valve normally operates by passing the flow from the first tank, which is at a higher pressure, to the second, which is at a lower pressure. When the pressure difference reverses, the valve might not work. There might be debris blocking the flow in that direction, or corrosion, or something as minor as a spring that has been weakened, having been compressed
for a long period of time. At a critical moment, it just might not release, precipitating a failure and causing a bigger problem downstream that could cascade into a catastrophe. Complex systems also often feature unusual interactions that are hidden and therefore not immediately comprehensible. The reason these interactions are hidden is that the sheer complexity of the systems poses a design challenge. Designers can choose to expose all the facets of the system to the operator, and thereby make the system totally transparent. This, however, comes at a cost in terms of an increased workload for human operators, who need to process a huge amount of information and manage many control systems. The alternative is to simplify the tasks of operators by automating a range of subsidiary interactions. This, however, makes these interactions potentially invisible to the operator. Compounding the problem is the difficulty of designing large control panels that can be assembled and repaired easily while remaining easily legible to the operators. Moreover, in some instances in complex, nonlinear systems, critical information cannot be easily obtained. During the Three Mile Island nuclear event, for example, the level of coolant in the reactor was not directly measured and had to be inferred. A combination of these factors, as well as the proximity of some parts and units that are not in a production sequence, implies that at critical junctures, operators might have a limited understanding of some processes. Another dyad of terms in Perrow's analysis is tight and loose coupling, or the nature and extent of slack in any given system. In tightly coupled systems, the interactions between the component elements happen rapidly, affording human actors few opportunities to intervene or change course. Tightly coupled systems are characterized by invariant sequences and very little slack. Delays in processing can therefore lead quickly to disaster. On an assembly line, for example, a single point of failure can end up shutting down the entire system. Loosely coupled systems, by contrast, involve slower and more flexible interactions, allowing operators time and
opportunity to intervene. A university is an example of a loosely coupled system: if a subsystem is impacted—say, by a teacher not teaching their classes—the end goal, students learning the material, is not necessarily impacted, as the department chair or dean can figure out alternative pathways. Buffers and redundancies are thus available to prevent the worst outcome. Perrow's NAT built on these conceptual foundations. It applies especially to complex and tightly coupled systems. It posits that even a few small failures can interact in ways that cannot be anticipated a priori by designers or planned for with processes, procedures, and training. These unexpected interactions can also defeat redundancies, buffers, alarms, and other safeguards. Moreover, in tightly coupled systems, they can allow the failures to cascade swiftly to unmanageable proportions, leading to disaster. As these interactions are inherent parts of complex systems, Perrow argued that the catastrophes they cause are not accidental in the classic dictionary sense. They stem from, and are in that sense normal to, the system itself; they are, as he phrases it, normal accidents. Perrow is quick to observe that NAT applies to only a subset of high-risk industrial accidents. It does not apply to linear systems, or, for that matter, to loosely coupled systems. There is an array of possibilities involving linear and complex interactions, on the one hand, and loose or tight coupling, on the other, constraining and pinpointing the domain of applicability of NAT. Examples of systems that are linear and loose include assembly lines, most manufacturing, and single-goal agencies such as the post office or motor vehicle departments. In such systems, there are very few chances for unexpected interactions, and when they do occur, there is time to diagnose the causes and fix the root problems. Next, there are systems with complex interactions and loose coupling. Examples include mining operations, multi-goal agencies such as welfare offices, and universities. Here, even though unexpected interactions could occur, there is usually time to make amends. To recycle the aforementioned example, in a university, an absent teacher can be replaced with a substitute,
and students, for the most part, have alternative pathways to learning their material. Moreover, if a mistake is made, such as in scheduling, there is usually time to correct it without adversely impacting the career trajectories of the students. There are also systems that are linear but tightly coupled. Power grids and transportation systems are examples, as are dams. These are tightly coupled systems in that, in theory, the failure of a critical component at a particular time could precipitate others downstream and lead to a disaster. However, because of the linearity of these systems, they are comprehensible to engineers and operators, and in well-run establishments, potential failures of components can be anticipated and addressed with substitutions of components, materials, processes, or training. Last, but by no means least, are systems that are complex and tightly coupled. Nuclear establishments, space missions, and some categories of chemical plants are examples. In such contexts, the complexity of interactions can create conditions in which the nature of the problem is incomprehensible to operators in real time, although it can be understood in hindsight. Moreover, the tight coupling can make a quick intervention doubly difficult, as there can, in the worst instance, be neither an understanding of what needs to be done nor the time to intervene. Perrow illustrates this prospect with the example of the Three Mile Island accident. It was, he argued, a result of four failures. Three of them had happened before, and the fourth was a consequence of the failure of a new safety device. Had they occurred separately, they could have been addressed quite easily. However, they interacted in an unanticipated manner. The system sent correct signals, but these misled the operators, who, by behaving in the way they had been drilled, ended up furthering the progression of the mishap. It took the providential insight of a fresh operator to prevent the meltdown of the core and a catastrophe. The central insight of NAT, thus, is that trivial errors and small problems can, in highly specific contexts, trigger catastrophes.
The traditional belief about modern industrial contexts is that well-designed, well-engineered, and well-maintained systems can help mitigate such outcomes. According to NAT, however, despite the best efforts of designers, system planners, and engineers, system designs are not infallible, and no engineer is so omniscient as to anticipate every possible situation that might arise—some of which can lead to a disaster. Normal accidents thus have the following set of characteristics. First, they are unpredictable and unavoidable. In normal accidents, no single failure is clearly identifiable as foretelling an accident that is to come. An alarm that sounds routinely, and therefore is interpreted as trivial, might on a bad day signify something entirely different—but owing to its normality, it might be ignored. Therefore, seeking to ascribe blame to an operator for ignoring that signal might not be particularly useful. Normal accidents are blind spots, or "failures without conspicuous errors," an ontologically unpredictable fact in a technological system out of control. For this reason, they cannot be prevented. Next, normal accidents are most likely to occur in complex and tightly coupled systems. Third, normal accidents are seldom likely to recur, at least in the same form. Each normal accident is unique, a result of the proverbial billion-to-one chance of a particular set of components interacting in a specific manner to fail disastrously. There is indeed very little chance that any given configuration of such failures will be reproduced in another system, although the very same components can combine with others and lead to a different kind of normal accident. Further, normal accidents seldom result in paradigmatic changes in conceptual understanding in engineering design. Individual errors, such as a faulty valve or a misleading alarm, happen in most systems. A stuck valve in the sequence chain of a normal accident might not reveal anything novel about the way it ought to be designed in the future, for it is not the failure of the valve that caused the accident in the first place. A corollary to this is that normal accidents do not change the worldview or design paradigms of engineers, as they are the result of unpredictable and improbable sequences of
errors. Finally, the term normal accident does not connote commonness; rather, it emphasizes inevitability—not in a probabilistic sense, but as an existential potentiality. NAT suggests that some high-risk systems have significant catastrophic potential, with a capacity to kill many people and lay waste to large natural environments, further endangering life—in some cases intergenerationally. Such systems, Perrow proposed, ought to be redesigned completely, either to render the interactions in them more linear or to render them more loosely coupled, helping prevent failures from spreading out of control. That said, Perrow argued that this prescription poses intractable dilemmas, as many systems and processes need complex interactions and tight coupling to work efficiently and economically. The most critical of these dilemmas concerns age-old debates about the relative merits of centralization and decentralization. Tight coupling necessitates centralized decision-making to afford the top levels of the system a complete view of its state at any given point in time. However, to cope with complex interactions in contexts of uncertainty, there is a need for decentralized decision-making, as only lower-level operators have the situational knowledge to grasp and comprehend the import of small failures. Perrow observed, however, that it is difficult, and perhaps impossible and contradictory, to have a system that is both centralized and decentralized. Given the proclivities of designers and managers to favor centralization of power over decentralization, Perrow argued, risky systems err on the centralization side and neglect the advantages of decentralization. That said, immediate, centralized responses to failures also have their advantages. No clear solution to the dilemma, beyond massive redesign and accompanying inefficiencies, is apparent. Thus, Perrow concluded, high-risk systems that fall in the NAT category might need to be totally abandoned or at least vastly scaled back. This is akin to the argument made by those who seek to ban and replace toxic chemicals with significant bioaccumulative potential (chapter 2). Again, Perrow was careful
to clarify that not all high-risk situations fall under the NAT category. Indeed, many accidents are not systemic, inevitable, or rare. A large proportion of accidents are what he termed component failure accidents, which in principle are preventable. Here he included the Challenger space shuttle disaster, the Chernobyl nuclear plant explosion, the Exxon Valdez oil spill, and, with a caveat, the Bhopal gas disaster. There is an important coda to NAT. Sociologists of science and technology have pointed out one more systemic vulnerability in high-risk systems—that of scientific uncertainty. This raises the prospect of a different type of system accident. A case in point is Aloha Airlines Flight 243, which suffered a catastrophic hull failure at 24,000 feet on April 28, 1988. A portion of the left side of the roof ruptured; the cockpit door broke away, the captain could see "blue sky where the first-class ceiling had been," and the decompression tore off a further substantial section of the roof, from the cockpit to the forewing area, spanning 18.5 feet. Miraculously, only one fatality occurred. In a paper on the event, John Downer drew on the example of the materials used to construct the hull to illustrate the many problems with that aircraft's design, but argued that the disaster was not a normal accident. Instead, it was what he called an epistemic accident. Downer argued that, like normal accidents, epistemic accidents are unavoidable, but for different reasons. Whereas normal accidents occur because engineers cannot predict the plurality of potential interactions in complex, tightly coupled systems, epistemic accidents happen because designers are sometimes forced to build technologies around fallible theories, judgments, and assumptions. Epistemic accidents occur in novel and highly innovative systems, which stretch the boundaries of settled theory and experience. Aloha 243 was but one example; in another, involving Delta Flight 191 near Dallas/Fort Worth airport on August 2, 1985, more than a hundred fatalities occurred because the phenomenon of microbursts, which caused the accident, was unknown to the designers and operators of aircraft at that time.
Another important contrast between normal and epistemic accidents is that whereas the former are "one of a kind" events, the latter are highly likely to recur. An epistemic accident reveals a gap in knowledge, which can be addressed with future research, as was the case following the Aloha and Delta disasters. A corollary is that epistemic accidents clearly challenge existing design paradigms and their underlying theories and evidence, in contrast to normal accidents. Epistemic accidents challenge investigators to extend the domains, constraints, and limits of their theories and knowledge bases. The possibility that they might recur forces engineers to "leverage hindsight" so that it can be converted into foresight and used to prevent future accidents. The very prospect of prevention renders epistemic accidents different from normal accidents. System accidents, whether normal or epistemic, point to the many vulnerabilities that we have chosen to accept as part of what Downer, citing Jasanoff, called our "civic epistemology of technological risk." Put differently, humanity is "deeply implicated in societal choices about technology." What is worrisome is that the public understanding of disasters, with its expectation of technological infallibility and its belief that the causes of disasters lie in the failure of bad actors—individuals and institutions—is at odds with the much more complex, difficult, and even intractable specter of vulnerability painted by system accident theorists.
High Reliability

NAT offered an important cautionary note about the risks of complex engineered systems. However, subsequent research has shown that disasters are not inevitable even in such systems. There are several examples of highly vulnerable entities—from aircraft carriers to hospital intensive care units to banks—that work with high degrees of reliability and safety despite having many of the characteristics of NAT systems. How does that happen? In response to this question,
there is an engaging body of literature and a counter-theory to NAT that speaks to the positive potential of creating and nurturing safe institutions. This work examines what it calls high reliability organizations (HROs). HRO theory identifies three fundamental characteristics of such organizations. First, they aggressively seek to learn about what they don't know. Second, their reward and incentive systems are designed in a way that recognizes the costs of failures as well as the benefits of reliability. Third, they emphasize communication—both by conveying the big picture of what the organization is attempting to do and by ensuring that everyone in the organization is equipped to communicate with each other about how they fit into that big picture. To address the first point, the research on entities identified as HROs shows that they are adept at discerning what they don't know. This marks them out as distinct from organizations that are accident-prone. The outlook of many HRO managements is that strange and unpredictable events can occur, with various kinds of results, including some that are adverse. They therefore ask their employees to be on the lookout for unusual or nonroutine things, and discourage them from assuming that seemingly small or trivial phenomena somehow do not matter. HRO managements also believe that for all their best efforts and intentions, system designers and organizational planners cannot anticipate all possible outcomes. Indeed, negative outcomes can occur even when great efforts are expended to identify possible problems and address them in advance. HRO managements also recognize that human beings can make mistakes even in carefully designed systems. HROs therefore train their people to scan regularly for anomalies, and to have the know-how to respond to a variety of problems. Staff learn to decouple systems when problems are discovered, and consequently to minimize the harm caused by the initial accident to the total system. Crucially, they are empowered by the organization to act and intervene to prevent adverse outcomes. Training also teaches people how to respond
to situations that are not routine and therefore not in a conventional training manual. Such preventive training involves recognizing decoys or false trails, so that staff are aware that not everything is as it appears. Employees in HROs therefore learn to recognize situations that may be getting out of control and have the ability to detect unusual or unplanned problems. The training of operators is underpinned by informal but strong cultures that recognize that the system may not be so well designed that safeguards will take care of any anomaly. HROs emphasize a work culture in which staff members "own" a problem until they, or someone else, fixes it. An important strategy adopted by HROs to inculcate a preventive safety culture is that of staging simulations. Like fire drills, simulated accidents help HROs prepare staff for an adverse event that might one day materialize. Such training reinforces the idea that accidents can happen at any time, and that staff, and indeed members of the community, must not become complacent. Furthermore, it gives people throughout the organization the opportunity to see what responses work and how, so they can locate areas where changes may be needed to successfully cope with the normal accidents that will eventually happen. Failure simulations augment other formal and informal training processes by emphasizing to all the stakeholders the need to be heedful of the possibility of accidents. They also create cultures of open-mindedness regarding how accidents are caused and how they might be prevented or stopped before becoming catastrophes. Moreover, HROs encourage the building of organizational memory of the causal bases of previous events, and constant and iterative scanning of systems for redundancies. There are many concrete examples in the HRO literature that address these features. A case in point is the pediatric intensive care unit (ICU) at the Loma Linda University Medical Center in Southern California. Doctors and administrators there empowered nurse practitioners, who often have grounded knowledge about the states of their patients, to act in order to respond rapidly to the
dynamic, changing circumstances that characterize ICUs. The ICU was also designed to be flexible, so that teams could respond efficiently to changing patient loads. Moreover, they built an overall culture that emphasized the prospect of failure in contexts such as ICUs and encouraged doctors to collaborate and to seek and share knowledge with staff members, such as nurses, who were historically lower in the care hierarchy. Decision-making about patient care thereby incorporated knowledge at lower organizational levels, where context- and experience-based knowledge was prevalent. Another insight in HRO theory is that organizations with good safety records carefully balance rewarding efficiency and reliability. Knowing that the focus of managerial attention is often on what gets measured, they consciously seek to balance and evaluate the ostensible costs and benefits of short-run profitable strategies, on the one hand, and safety, on the other. They seek to make it possible for all stakeholders in the organization to pursue both these objectives. Some industries deliberately build redundancy against failures, even at additional expense. Commercial airlines, for instance, employ more flight crew than they technically need to operate an aircraft, in case of sudden illness and to help respond in the case of an emergency. Other examples include ports that require a specially trained pilot, in addition to the ship's captain, to direct a ship to its dock, and air traffic controllers, who often work in pairs to ensure that at least two sets of eyes are on the aircraft in the sky. To achieve the goals of high redundancy and safety, HROs adopt a number of managerial tools. Among these are interviews, focus groups, and employee surveys that ensure the goals of the organization serve the public interest; the incorporation of safety performance into employee evaluations; and innovative systems of accounting that ensure the benefits of avoiding adverse events are reflected in financial valuations. HROs consistently seek to communicate their objectives to their employees and others to ensure that all stakeholders understand the big picture about safety. Managers expend energy, time, and money
“developing and maintaining an effective communication capability.” Moreover, they generally establish so-called Incident Command Systems with clearly articulated decision rules. These systems curate processes and enable fluid organizational structures that help coordinate the work of all entities involved in a bid to “keep them from getting in each other’s way” and ensuring “a common reporting structure.” The consequence of such organizational frameworks is higher reliability, as all stakeholders are aware of how their actions and words impact the organization’s goal and purpose. Actors in HROs therefore seek to be actively connected with others, resulting in everyone knowing the big picture, and communicating their knowledge and understanding to their colleagues. Ultimately, when things do go wrong, it is not brushed away, but becomes the focus of attention by staff.
Conclusion

There are three key takeaways from this chapter. First, it is evident that there is an important gap between the safety demands of the technocene, with its complex engineered systems, and the psychological abilities of human practitioners. Human error—in active and latent forms—is not just about failures by individuals that can simply be explained after the fact, but a consequence of a host of decisions about the design of infrastructures that are more often than not visible only in hindsight. Mistakes, likewise, flow from unanticipated design factors that end up leading even operators with good training, motivation, and intentions to err, especially while they make crucial decisions under pressure. Exacerbating their problems are interpretive difficulties arising out of ambiguous signals from the system. Violations, too, often arise from common psychological bases; further, they might not have any significant consequences, and for this reason are ignored, until on a particular day they cascade into a disaster.
A second takeaway lies in the social structures and institutional norms that drive the functioning of risky technological systems. Here, there are often norms, rules, and clear procedures. Yet disaster stems from the violation of these processes—not necessarily due to egregious acts, but because of differences in how some practitioners perceive and act on issues. Especially important is the role of routinization, because of which some deviant acts do not appear to be violations at all, at least not in a conscious, cognitive sense. Such processes are exacerbated by the fact that there are often contradictory suppositions and understandings within any given organization. Consequently, there are sometimes significant tensions between the general rules and local cultural understandings, and between the interpretive and normative frames of subgroups and units in disparate areas of a given organization. During times of crisis, these contradictory understandings can be exposed and cause serious problems, as local operators need to respond with shared, not disparate, understandings. Third, a host of implications stem from the NAT and HRO approaches. The scope of NAT is limited. It identifies a particular set or category of accidents that are for the most part "unavoidable." But most accidents do not fit this description and can be prevented by approaches such as those taken by HROs. Therefore, the NAT and HRO approaches do not necessarily contradict each other; NAT addresses complex systems that will inevitably fail, whereas HRO theory identifies and studies organizational systems where good processes can help avoid catastrophic accidents. There are nonetheless significant debates and disagreements between adherents of these two perspectives. Important among these are the consequences of centralization and decentralization, tight and loose coupling, and redundancy. It is far from clear what exactly the conditions are that help produce safety or increase or decrease the probability of accidents. There is, however, considerable scholarly innovation around both approaches, separately and together. For example, there is work on the relationship between structure, process, and cognition;
comparative studies of failures and successes in organizations; studies of intrinsically risky industries; and comparisons of structures, processes, performance, and accidents. There is also a move to study social contexts, and especially the roles of complexity, coupling, regulatory environments, and slack. Interdisciplinary and integrative research also recognizes that hard-earned insights in studies of disasters in the technocene derive from historically distinct disciplines, including psychology, sociology, political science, geography, anthropology, history, and the managerial sciences. Significant as these insights are, there is yet another arena to explore in order to understand the consequences of risk and disaster in the technocene. The important keyword here is vulnerability. The term refers to the historic patterns in the evolution of a given society, including politics, political economy, culture, ideology, race, gender, and other forms of social stratification, that result in members of a society being differentially impacted. The next chapter explores these ideas.
4 Vulnerability
Introduction

The preceding chapters have explored some of the salient theoretical frameworks that have helped shape expert understandings of environmental risks and disasters in the technocene. The purpose of this chapter is to understand the consequences of environmental risks and disasters for society. Do some people, communities, and species feel the brunt more than others? Are there some common threads among risky and disastrous events across geographies and sociologies? If so, why?

A great deal of disaster research has historically focused on the nature of the causative agents, whether earthquakes or chemical spills. In recent times, however, there has been a decided emphasis on tracing and analyzing patterns of vulnerability that explain why the impacts of environmental risks and industrial disasters vary by factors such as race, class, and gender. For the most part, this theoretical corpus has addressed case studies across a range of natural disasters, including floods, cyclones, hurricanes, and similar
phenomena, such as fire and earthquakes. However, the concept of vulnerability can be adapted to understand environmental risks and disasters too. This chapter will explore the key insights in the natural hazards literature by discussing some iconic industrial events. It begins by explaining the emergent vulnerability theory of natural disasters. It then attempts to apply this to chemical risks and industrial disasters. In doing so, it focuses on four distinct elements: the role played by the political economy of global capital in the production of catastrophic events; the sociology and politics underlying the distributive dynamics of risk and disasters, or, to put this differently, the roots of environmental injustices; the dynamics of expertise, state, and community in the production and perpetration of such injustices; and the discourses and ideologies that frame and undergird vulnerabilities. Like the preceding chapters, this one builds a narrative by focusing on a few central keywords, starting with the term vulnerability itself. Unlike the others, however, it develops these through case studies of actual events, rather than advancing a theoretical framework from the start.
Vulnerability

According to the Oxford English Dictionary, the word disaster has its etymological origins in the mid-sixteenth century, with French and Italian roots. The OED defines the word as "an event or occurrence of a ruinous or very distressing nature; a calamity; esp. a sudden accident or natural catastrophe that causes great damage or loss of life," and "the state or condition that results" from this kind of event. The related word crisis has Latin and Greek roots, and also entered the English language in the sixteenth century. Its primary meaning lies in pathology; it describes a stage in the progression of a disease "when an important development or change takes place which is decisive of recovery or death; the turning-point of a disease
for better or worse; also applied to any marked or sudden variation occurring in the progress of a disease and to the phenomena accompanying it." The word also has a figurative meaning, connoting a "vitally important or decisive stage in the progress of anything; a turning-point; also, a state of affairs in which a decisive change for better or worse is imminent; now applied esp. to times of difficulty, insecurity, and suspense in politics or commerce." The question is, when and why do disastrous events precipitate crises in societies that experience them?

Disastrous events are precipitated by hazards. The word hazard suggests the "possibility of danger or an adverse outcome." Some hazards have their origins in the natural world—for example, wildfires, earthquakes, cyclones, hurricanes, floods, and tsunamis. Other hazards, from road accidents to chemical spills and nuclear meltdowns, are a result of humanity's use of technologies. Sometimes, two or more natural hazards occur at the same time and in multiple places, as with flooding and landslides. At other times, natural and technological hazards combine, as was the case in the Fukushima disaster, which involved a tsunami and a nuclear reactor. The hazards we face also vary significantly, being intense or severe in some contexts, and relatively mild in others. The big question is: why do some hazardous events, large and small, result in catastrophic crises, while others do not? For example, why is it that an earthquake of magnitude 6 on the Richter scale in California does not cause much disruption in social and economic life, while one of the same magnitude results in a major crisis in other parts of the world?

Traditionally, among laypeople and expert communities alike, the relationship between hazards and disasters has been construed sequentially, with hazards seen as triggering disastrous events that morph into catastrophes. A great deal of early academic work on disasters referenced the fire and fury of nature. A variation on this theme, involving industrial events, argued that human irrationality, hubris, and irresponsibility led us to misunderstand the order of nature. When viewed in this manner, industrial disasters were
seen as inevitable consequences of such thought processes, and the actions that followed from them. Neo-Malthusian emphasis on population growth as the culprit and theories that blamed industrialization and modern technology are both examples of such arguments. Missing in such narratives, though, is an explanation about why different people encounter hazards differently, or more precisely, whether societal stratification, and its consequences in terms of social organization and politics, creates the conditions that shape why disasters are experienced differentially. It is in this context that the concept of vulnerability is useful. Vulnerability theory is an analytical framework for understanding natural disasters, developed primarily by geographers, anthropologists, sociologists and environmental historians working within the political ecology tradition. According to this literature, the risk of disasters is a complex function of three factors: natural and technological hazards; demographics (the number of people potentially impacted in a particular space and time); and underlying societal contexts, the mundane, day-to-day features of societies that are magnified during catastrophic events. According to this theory, hazards and disasters are inherent and integral parts of both environmental and human systems. They are, in a way, litmus tests of whether and to what extent human societies are environmentally and socially resilient and sustainable. To quote Anthony Oliver-Smith, a leading anthropologist of the subject, “disasters have variously been considered a ‘natural laboratory,’ as the fundamental features of society and culture are laid bare in stark relief by the reduction of priorities to basic social, cultural, and material necessities.” These “revelatory crises” make starkly visible the dormant and unstated conditions within a society. Disasters, then, are states in which “the essential functions of the society are interrupted or destroyed, which results in individual stress and social disorganization of varying severity.” The result can be both physical and social: physical infrastructures can be destroyed, and so can the organizational fabric of a society. This approach sees disasters as culminations of processes or events
“involving a combination of a potentially destructive agent(s) from the natural and/or technological environment and a population in a socially and technologically produced condition of environmental vulnerability.” There are many types of vulnerability, and the rest of this chapter is an attempt at exploring the multidimensionality of the concept. Given their ubiquity around the world, forest fires are a good place to start. For more than a century, professional forest and fire departments in Europe, North America, Asia, and elsewhere believed that the key to addressing the problem was to devise bigger and better approaches to fire suppression. In this view, the problem was the hazard itself: the fire. However, more recent work has pointed out that fires serve an ecological function, helping the natural regeneration of many species of plants and trees. In such ecosystems, fire suppression strategies are quite harmful, and if anything, create the conditions for the further accumulation of biomass, meaning potentially bigger fires in the future. Another dimension is the settlement patterns of people. What might historically have been landscapes without much habitation, allowing fires to follow their natural course, now host homes, industries, and farms. In building like this, humans have placed a great deal of social and economic value right in the middle of an active fire zone, and bet on the efficacy of fire suppression methods. When those fail, the resultant catastrophe poses crises for those communities. In this example it is not the natural trigger—the fire—that is the root cause of the catastrophe. Instead, it is a combination of that hazard, patterns of thinking (in this case, fire suppression), and human settlement patterns, that turns a natural event into a disastrous crisis. There are many similar examples of such socially produced or “constructed” patterns of vulnerability with respect to other hazards, such as earthquakes and floods. Another pattern of vulnerability involves the policies and actions of nation-states. An illustrative example can be found in a gripping book by Mike Davis, who examines the famines that devastated many countries across all continents in the nineteenth century. According
to Davis, between thirty-two and sixty-one million people died from these famines in China, India, and Brazil, and these countries were by no means alone. Based on these numbers, he claims that these famines were "the greatest human tragedy since the Black Death." Davis's analysis of these famines points to a natural trigger: the El Niño-Southern Oscillation, which precipitated droughts and set in motion a chain of events that led to the collapse of agriculture. Yet the natural trigger in itself did not cause the famines. These were, instead, a consequence of public policies. The British imperial state in India, for example, consciously withheld public assistance that could have kept many people alive. Various measures could have helped prevent the famine, including free rations, work programs to enable people to earn income to purchase food, and the creation of infrastructure to get food to people. But imperial governing policies, stemming from conservative ideologies, did not allow for welfare. Neither well-developed economic markets nor public distribution systems reached the affected people, nor were there opportunities to participate in democratic processes that could help them address their future. Moreover, in many instances the imperial state had destroyed the basis of precolonial rural economies, where mutuality, sharing, and other community values could perhaps have prevented a famine. This example illustrates again that it was not the natural trigger, in this case drought, but a pattern of state policies and ideologies that created vulnerability. Such patterns were not restricted to imperial states. Some of the biggest famines in the twentieth century happened in Communist China in 1958–61, resulting in an estimated thirty million deaths; in the Soviet Union during the 1930s; in Cambodia in the 1970s; and in North Korea and sub-Saharan Africa in the not-so-distant past. In each case, what caused a natural trigger to precipitate into the crisis of famine was a pattern of vulnerability involving a combination of state policies and the lack of a capacity for public participation and political power.

Neoliberalism affords another example of political economic ideologies causing patterns of vulnerability.
The term refers to a cluster of developments including, among other things, the globalization of production in the wake of the economic crises of the 1970s, the deregulation of financial and labor markets and the reduction of social and environmental protections, and the application of the methods of cost-benefit analysis as the basis of public policy at the cost of other ways of understanding the meaning-making that constitutes the human experience.

One example of patterns of vulnerability caused by neoliberal policies lies in the reconstruction of the city of New Orleans in the aftermath of Hurricane Katrina. Federal agencies involved with reconstruction awarded no-bid contracts to private companies for the provision of a wide range of services. What seemed like a clear example of crony capitalism was justified via the "market liberalization" and "privatization" mantras of neoliberalism. The outcome did not serve the needs of disaster victims but did increase the valuation of some of the companies that were involved. Authorities in New Orleans also used neoliberal arguments and justifications about profitability to demolish public housing or convert it into mixed-income developments. Low-income people, who constituted the backbone of the service sector, a critical part of the city's tourism economy, were severely impacted by the loss of public housing. In this case, the disaster—the inability of poor service-sector workers to live and work in the city, and the consequent impacts on the economy—was not caused by the hurricane itself. Rather, it was a consequence of a pattern of vulnerability created by ideologically motivated public policies stemming from neoliberalism.

At a broader level, the very structures and organizations of societies the world over, from the smallest village to the global economy, engender a range of vulnerabilities. Societies are deeply stratified along the lines of ethnicity, caste, economic class, race, gender, and age. The ability of individuals, families, and groups to access and use resources critical to survival is often very limited. Landowners, to take the classic example in political economy, own the productive basis of land and gain their wealth from it, and as a result, can
employ others to do the hard work of tilling, sowing, and harvesting. The laborers who work in their fields have less freedom, being dependent on wages. Another classic example is that men typically have more access to resources and opportunities than women, with the consequence that the latter are less well-off across a range of parameters, including income, health, ability to participate in political life, and the availability of time for leisure or education. Essentially, rights to economic, social, legal, and cultural resources are inequitably distributed among people. Social stratification affects access to and the distribution of resources among people even in the best of times. Moreover, the underlying politics of stratification often forms the basis for the structure and organization of states, especially the creation and deployment of legal frameworks, the definition and enforcement of rights, the elaboration of governance ideologies impacting public distribution, and the use of the police and the military. When hazardous events occur against the backdrop of social stratification, the impact of any physical trigger—such as an earthquake, tsunami, landslide, cyclone, or a chemical spill or a nuclear accident—on human populations is magnified, reflecting both the physical trigger and the organization of the society. Disasters, therefore, tend to have differential impacts. Facing the same physical event, people higher up the resource hierarchy, with greater access to finances, natural assets such as land or cattle, technologies and tools, information, and social networks that they can leverage and mobilize, are able to ride out a disastrous event much better than those who cannot. For example, when a cyclone hits, a family with access to early warning systems, the transportation to enable them to evacuate, better physical infrastructure such as a well-built home, surplus resources in the form of extra food and clothing, a network of similarly well-resourced support that can provide shelter and assistance, and other forms of collateral (perhaps cash or credit cards), can escape the impact of the event. When they return to a damaged home, they can mobilize these finances and human power to rebuild
their economic lives. In the very same cyclone, a poor family with few of these resources might find themselves losing all their assets and have nothing left to rebuild and reconstruct their lives. The most vulnerable people, as defined in terms of their status and location in the economic and power dynamics of a stratified society, therefore have the greatest challenges in the aftermath of disastrous events. They have less access to defense mechanisms, fewer pathways to regaining livelihoods and solving other problems relating to their survival, fewer tools and equipment, little or no access to legal resources to enforce whatever rights they might have, diminished social networks since their wider community is similarly impacted, and less ability to overcome stress. In other words, they struggle to cope, and are often unable to mobilize resources to meet their needs, expectations, and ends. Vulnerability to disasters or, on the other end of the spectrum, security in the face of them, is for these reasons strongly correlated to poverty and wealth, measured in both economic and cultural terms. This is of course the case even in normal or nondisaster times, which reinforces the central point of vulnerability theory: that disasters exacerbate preexisting patterns of vulnerability.

Another example of preexisting patterns of vulnerability can be seen by understanding the nature of victims. There are some (event victims) who are impacted because they live or work in places that are proximate to the physical cause of the event. Then there are those (context victims) who suffer from the material and psychological consequences of being confronted by the disaster. There are also peripheral victims who did not directly face the impact because they were not physically resident in that space, but who lost property or kin at the site. Last are entry victims who come into a disaster zone to help, such as doctors and paramedics or volunteers, but who suffer from both physical damage and psychological stress.

The dynamics of political organization and mobilization around natural disasters also illustrate patterns of vulnerability. In some contexts, disasters can alienate communities along existing fault lines,
reflecting preexisting power relations, relationships between people and the state, and political consciousness. But they also can provide the opportunity for a reorganization of the polity and, consequently, a renegotiation or rearrangement of such political arrangements. In such instances, new forms of political solidarity, ideas, and agendas emerge. Disasters therefore have the potential to foster social, economic, and political change, and these changes bring forth new patterns of vulnerability, reflecting the extent to which the postdisaster rearrangements impact a society's ability to mobilize and distribute resources, and care for its community and individuals.

An important facet of vulnerability is the societal (in)capacity to mobilize internally and externally to meet material losses in the aftermath of disaster. In some societal contexts, individual behavior can be collectively selfish, and result in critical supplies and services not flowing to the needy in a timely manner. Alternatively, individual behavior can be constrained by state policies and actions. In other contexts, altruism, mutuality, reciprocity, trust, contracts, and principles of distribution result in more egalitarian outcomes, reflecting the oft-quoted dictum that disasters represent "the best of times and the worst of times." The organization of societies prior to a disaster can thus impact social solidarity in the aftermath. Moral conflicts over resource allocation are often dynamic. Following the Peruvian earthquake of 1970, for example, the community began altruistically donating their assets, such as animals, to the public good. However, with the arrival of foreign aid, predisaster social stratification patterns resulted in arguments and conflicts over how the new resources were to be allocated, with some claimants demanding differential access. Another dimension of postdisaster recovery is reconstruction. Here, the nature of reconstruction, critical for the resumption of normality, reflects the patterns of social organization that existed prior to a disaster: some win, others lose, and there is considerable contestation over priorities. After the Exxon Valdez disaster off the coast of Alaska in March 1989, for instance, the community was split about whether to accept cleanup
employment with a company that destroyed their environments and livelihoods. Although vulnerability theory was initially adopted to understand and explain natural disasters, it can illuminate catastrophic industrial events too. The key is to focus as much on human social, cultural, and political dynamics, as on a trigger event itself. There are at least four types of sociocultural and political trends that can help explain and understand industrial events during the technocene: the role of unaccountable corporations, especially multinationals, in the creation of a global risk society; the distributive underpinnings of industrial risk and disaster, especially the differential impacts of disasters; the nature of scientific, technological, and governance expertise, and the capacity of state systems to address the consequences of disastrous events, especially on marginal peoples; and the ideological and discursive frames that justify the preceding three trends. The sections to come examine each of these points in turn.
Accountability

One of the most iconic examples of industrial catastrophe in our time is the Bhopal gas disaster. The disaster unfolded in a chemical plant in India that was owned by the American multinational Union Carbide. The Bhopal plant was established in 1969 to synthesize a range of pesticides and herbicides; from its inception to the December 1984 gas leak, the plant was plagued with accidents. On November 24, 1978, for example, the Alpha-Naphthol storage area had a huge fire that took ten hours to control. On December 26, 1981, the plant operator Mohammad Ashraf was killed by a phosgene gas leak. In January 1982, another phosgene leak caused severe injuries to twenty-eight people. On April 22, 1982, three electrical operators were severely burned while working on a control system panel, and on October 5 of the same year, methyl isocyanate (MIC) escaped from a broken valve and seriously affected four
workers, besides causing irritation to several nearby residents. Two similar incidents were reported in 1983.

Considering this history, what happened on December 2–3, 1984, was clearly not accidental in the sense of a chance, random, unpredictable event. To begin with, the company was not unaware of the consequences of its choices on issues such as risk and safety. In addition to worker protests and the litany of accidents that plagued the plant, an internal investigation in May 1982 by a team of three Carbide experts raised several important safety concerns. It found, among other things, that the tank relief valve could not contain a runaway reaction, that many pressure gauges were defective, that valve leakages were endemic, and that there was no water spray system for fire protection or vapor dispersal in the MIC operating or storage area. Several articles in the local press had also predicted an impending disaster.

Unsafe working conditions meant that between half and two-thirds of the engineers who had been hired when the plant was commissioned had quit by December 1984. The result was a great reduction in operator strength. In the MIC plant, for example, the number of operators had been reduced to six from the original eleven. In its control room, normally staffed by two people, there was only one operator at the time of the accident, with the virtually impossible job of checking the seventy-odd panels, indicators, and controllers. The resignation of qualified engineers had also resulted in the company forcing underqualified and underpaid workers to operate highly complicated and risk-ridden technological systems. T. R. Chouhan, for example, was an operator who was transferred to the MIC plant in 1982. At the time of his appointment, Chouhan, according to his own testimony, did not have the necessary background to operate the plant. He had neither a degree in science nor a diploma in engineering, the company's prerequisites for the position. To make up for this, he was promised six months of training before being put on the job. After a week of classroom training and a month of work, however, Chouhan was ordered to
take charge as a full-fledged MIC operator. When he refused, he was treated as a recalcitrant worker and reported to superior authorities. In Chouhan's words,

During my transfer to the MIC plant, I was given a letter mentioning a minimum training period of six months, so when the manager of industrial relations, D. S. Pandey, called me in, I showed him the letter. Two more officials . . . joined Pandey and the three of them tried to convince me that I was being unreasonable in insisting on further training. . . . But I stood my ground and flashed the letter before them as often as I could. Finally, it was resolved that I would be given three weeks of training for handling the storage unit of the MIC plant and then I would be in sole charge of the storage unit.
Chouhan was not alone in seeking adequate training and working conditions. The labor force protested on several occasions with similar demands and drew attention to the abysmal safety record. The company's response was to deploy force to dispel what it saw as labor unrest.

For Union Carbide, the Bhopal event was not unique or unprecedented. On the contrary, its record, throughout its international operational history over nearly a hundred years, suggests that its organizational culture puts profit over safety. Its long record of environmental negligence includes the infamous Hawks Nest Tunnel Incident in the 1930s and the Oak Ridge Mercury contamination problem from the 1950s. It also includes the TEMIK poisonings on Long Island in the 1970s, the Kanawha Valley pollution controversy in the 1970s and 1980s, and several other incidents in the United States, Puerto Rico, Indonesia, Australia, France, and India. There is also a discernible pattern in the company's response to harmful events. In upstate New York, for example, the company was involved in a case of pollution and deployed a five-pronged strategy to deal with the situation. First, it attempted to deny the problem, releasing minimizing statements: "Minute traces of . . . TEMIK aldicarb have been recently detected in wells in the vicinity . . . in
amounts measurable only by ultra-modern technology.” Second, the company attempted to put the problem “in perspective” by arguing that “it is well known that much larger residues of other agricultural chemicals . . . have been found in the same water for many years.” Third, the company attempted to redirect the focus of the problem by blaming “a hysterical public” with statements such as: “some people have an unarticulated worry over the possibility of unspecified future health impairment . . . however unjustified.” In the same breath, the company also overtly blamed the victim: “Victims mislead in claiming that TEMIK is a poison,” although it is they who have “refused to take prophylactic measures to protect themselves.” Fourth, Union Carbide attempted to divide and conquer the plaintiffs in the lawsuit against it by telling farmers and developers that they had conflicting interests and would do better by settling with Carbide. Finally, it attempted to settle with the government when it became less expensive to do so. Strikingly, Union Carbide adopted a similar strategy in Bhopal— although whether this happened consciously or not cannot be established. Many of its senior employees denied any responsibility for the accident from the beginning. The Union Carbide India Limited (UCIL) Works manager, J. Mukund, for example, told the additional district magistrate on the night of the accident that “The gas leak can’t be from my plant. The plant is shut down. Our technology just can’t go wrong. We can’t have such leaks.” This initial denial soon evolved into a strategy of claiming employee sabotage, a claim so evidence-free that it was abandoned by UCIL’s legal team in the Bhopal court that heard the criminal charges of culpable homicide against the company. Despite this, the company continued for several years to invoke the sabotage theory as its explanation to the media and the public in the United States and elsewhere. Union Carbide also attempted to put the accident “in perspective” and to blame the victims. As the latter poured in to the Hamidia hospital, L. D. Loya, the company’s medical officer, told frantic doctors
that “the gas is non-poisonous. There is nothing to do except ask the patients to put a wet towel over their eyes.” Again, the works manager, J. Mukund, told the media barely fifteen days after the accident that “MIC is only an irritant, it is not fatal. It depends on how one looks at it. In its effects, it is like tear gas, your eyes start watering. You apply water and you get relief. What I say about fatalities is that we don’t know of any fatalities either in our plant or in other Carbide plants due to MIC.” The company claimed subsequently that the large mortality was due to a combination of undernourishment and a lack of education among the people affected. It also claimed that the persistent morbidity had to do with baseline diseases like TB in the gas-affected areas and that the victims were responsible for their own plight because they maintained poor standards of public hygiene. Union Carbide subsequently began to downplay the potency of the gas in the media and in courts. Following the disaster, it sponsored research on the toxicological impact of the gas on the physiology of the Bhopal survivors, to counter the data of state hospitals and other NGO clinics. Union Carbide also employed a divide-andconquer strategy in Bhopal by adopting political campaigning; it hired several prominent persons, including a former British MP, for this purpose. The lingering criminal and civil cases and the continued support they were getting from environmental, labor, and consumer movements internationally were portrayed to Western governments as a dangerous consolidation of anti-Western and anticapitalist forces. As a result of sustained lobbying, the then–US administration brought enormous pressure to bear on the Rajiv Gandhi administration in India, then desperately interested in attracting global capital. Ultimately, the Indian government succumbed to the pressure. Without any consultation with victims or their representatives, the Government of India offered a settlement package to Carbide in the chambers of the Chief Justice of the Supreme Court of India, R. S. Pathak, in the spring of 1988. Union Carbide’s postdisaster strategy
is palpably visible in the settlement ledger. In the aftermath of the accident, victims' organizations in Bhopal made a claim of US$10 billion, based on standards in the United States. The Indian government, meanwhile, claimed US$3.3 billion. Union Carbide's initial offer was US$300–$350 million and the final settlement was worth US$470 million. The cost to Union Carbide was a mere forty-three cents a share. In its annual report following the settlement, Union Carbide boasted that "The year 1988 was the best in the 71-year history of Union Carbide, with a record $4.88 earnings per share which included the year-end charge of 43 cents a share related to the resolution of the Bhopal litigation." The parent company then proceeded to sell its entire 50.9 percent stake in UCIL to the Calcutta-based McLeod Russell India Ltd., clearing the way for it to exit India without any further involvement in Bhopal.

Another facet of the pattern of vulnerability that arises out of unaccountable corporate power lies with the public relations industry. A case in point is the public relations company Burson-Marsteller (B-M), which was hired by Union Carbide Corporation in the aftermath of the Bhopal accident to manage the public image of the company. Founded by Harold Burson and Bill Marsteller more than forty years ago, B-M had previously been deployed by Babcock and Wilcox in the aftermath of the Three Mile Island nuclear accident and by A. H. Robins during its problems with the Dalkon Shield contraceptive device. B-M had also advised Eli Lilly over the Prozac controversy, and Exxon after the oil spill. According to B-M's corporate brochure in the mid-1990s,

Often corporations face long term issue challenges which arise from activist concerns (e.g. South Africa, infant formula) or controversies regarding product hazards. . . . B-M issue specialists have years of experience helping clients to manage such issues. They have gained insight into the key activist groups (religious, consumer, ethnic, environmental) and the tactics and strategies of those who tend to generate and sustain issues. Our counselors around the world have helped clients counteract activist-generated . . . concerns.
To deal specifically with environmental issues, B-M has established a Worldwide Environmental Practice Group (EPG)—an international network of professionals who specialize in various aspects of environmental communications. Harold Burson explicitly presented B-M's outlook in a paper arguing that "being the professional corporate conscience is not part of the job description of other executives. It is part of the job description of the chief public relations officer." Burson added: "A corporation cannot compensate for its inadequacies with good deeds. Its first responsibility is to manage its own affairs profitably," and "we should no more expect a corporation to adopt a leadership role in changing the direction of society than we should expect an automobile to fly. The corporation was simply not designed for that role."

The case of Union Carbide Corporation in Bhopal is iconic given the sheer scale of the disaster. However, the various issues it raised have continued to persist in different parts of the world. While it is not possible to generalize, many corporations globally have been accused of placing profits over the safety and well-being of workers and local communities. This is true across sectors and industries, including banking, finance, infrastructure, consumer goods, agriculture, pharmaceuticals, information technology, and prisons. Journalists and scholars have also documented the work of corporate public relations firms and especially their ability to manipulate public opinion. There is also specific work with respect to the emergence of social media networks and the impacts they have in framing issues, even affecting the way people vote. Because of such developments, communities locally and within nation-states are impacted, with their land and water sources polluted, and their human and dignity rights threatened. If there is a silver lining, it is the emergence of new approaches to corporate management, emphasizing multiple bottom lines (not only the financial) along with public support for ethical corporate governance. What is unclear is how impactful such developments will be in the future, and whether they can reduce vulnerabilities arising from unaccountable corporate governance.
Inequality

Corporations such as Union Carbide are unaccountable because of the power they wield, and their ability and willingness to pursue their interests, even at the cost of communities of citizens, and in many cases local and national governments. This unequal power dynamic results in a host of issues related to inequality addressed by the environmental justice movement, which mobilizes around the human rights implications of adverse living and working environments. According to Bunyan Bryant, a leading American pioneer in environmental justice studies, environmental racism refers to institutionalized rules, regulations, and policies, and governmental and corporate decisions that very consciously disproportionately expose marginal and liminal communities to toxic wastes, while excluding people of color from decisions made about their immediate neighborhoods. Environmental equity refers to equal protection under environmental laws, as opposed to the unequal protection that results when, for example, public action and government agencies and fines in the United States and other countries favor white communities far above areas with largely black and/or otherwise minoritized populations. Where either environmental racism or environmental inequity prevails, environmental injustice occurs.

Here I also use a case study methodology, with reference to three iconic cases in the United States. The first concerns the history of silicosis in the twentieth century. An important early event was the Hawks Nest disaster of 1935, in which an estimated 764 workers employed in digging tunnels at Gauley Bridge, West Virginia, died. The workers had been laboring since 1930 on a three-mile tunnel, being drilled to divert water from the New River to a hydroelectric plant that would power metals plants in the town of Alloy—owned by Union Carbide, the same company responsible for the later disaster in Bhopal. According to some estimates, at least two-thirds of the workers were African Americans. They labored in a mountainous environment in which the rocks contained high levels of silica, and
the drilling technique used in the operation released large amounts of silica dust into the air. Contemporary reports mention black diggers emerging from the hole in the mountain covered in white dust, and the interior of the tunnel itself was lined in silica, impairing vision and clogging the lungs of workers. Reports also mention the abysmal conditions of work within the tunnel—characterized by poor ventilation and the limited availability of personal protective equipment. As a result of all this, many workers developed silicosis, a then-incurable lung disease that caused shortness of breath and was ultimately fatal. Investigations by the US Congress revealed that workers who died were buried in unmarked graves at the side of the road outside the tunnel.

The Hawks Nest event was but one node in a sordid history of silicosis-related morbidity. During the Great Depression of the 1930s, the disease was the "king of occupational diseases," and impacted tens of thousands of American workers and communities. It became a national crisis, with significant public, governmental, and media interest. Thousands of lawsuits were subsequently filed, directed at a range of industries including metal mining, construction, foundries, and potteries. In response, industries lobbied state governments, and convinced many to incorporate silicosis into worker compensation schedules. In effect, this move meant that workers no longer had the right to sue in court. They were instead subject to decisions by expert panels, who often looked on workers' claims with skepticism. Moreover, workers were not fully cared for due to the provisions of state worker compensation bills that addressed silicosis. For example, in New York State, no compensation was provided for partial disability, and a maximum amount of $3,000 was allowable for total disability. From the enactment of this bill in 1936 to the end of the decade, fewer than eighty workers were compensated. With the passage of state worker compensation bills, silicosis retreated from public memory. However, the industrial boom in the 1940s brought with it a spate of new kinds of injuries in workplaces and
in workers’ communities. This, in turn, prompted the passage of the Occupational Safety and Health Act (OSHA) in 1970, and the creation of the National Institute of Occupational Safety and Health (NIOSH). One of the earliest investigations by the new agency was of silicosis. The 1974 report found that the existing threshold limit values that guided regulation were weak and afforded very little protection to affected populations. Accordingly, it revised the permissible exposure limit and advocated the banning of silica in blasting and similar industries. Industry lobby groups responded by creating and funding a Silica Safety Association (SSA), ostensibly to “investigate and report on possible health hazards involved in the use of silica products and to recommend adequate protective measures considered economically viable.” In the months that followed, the SSA argued that silicosis was not a serious problem, and that the hazards associated with its use could be addressed by the adoption of “proper protective devices.” This was despite a study conducted some months earlier which found that even in a plant owned by an SSA member and meeting its own definition of good working conditions, nearly half the samples were above the threshold limit values. But the SSA was successful in its lobbying efforts during the Carter presidency, and the issue was removed as an OSHA priority in the succeeding Reagan administration. However, in the 1970s large numbers of people, including a significant percentage of nonunion contract workers, were hired to do sandblasting. A new epidemic of silicosis broke out soon after. This triggered a spate of lawsuits in the late 1980s and early 1990s, this time directed at sand providers and equipment manufacturers, who, unlike the employers protected by employment compensation rules, were vulnerable because under common law in Texas, vendors of dangerous products had a duty to warn workers. While there were some racial dimensions in these suits concerning Anglo juries and Mexican laborers, some of these legal actions were successful. Silicosis subsequently became an issue in the coal mines in the Eastern United States, and a new chapter in the drama commenced,
complete with the creation of a new industrial silica coalition and lobbying efforts.

Lead poisoning affords another example of the distributive or environmental justice aspects of vulnerability. The topic began to gain national attention in the United States in the 1950s as epidemiologists, public health officials, and community activists noticed patterns linking lead poisoning to wider societal problems such as poverty, malnutrition, slums, and racism. Through the 1950s and 1960s, multiple cases of lead poisoning emerged in children's hospitals in several cities, including New York, Baltimore, and Chicago. The cases involved children, often as young as two years, who lived in old, rented properties and who, it was believed, were exposed to paint flakes in their homes. Many cities ignored such cases, and when some, such as Baltimore, attempted to implement minor changes like lead paint warnings, they ran into considerable opposition. Reforms to address public health concerns were also thwarted by controversies. For example, what constituted normal lead levels came into question, with scientists commissioned by the lead industry claiming that the element is naturally found in human tissues, and that there is no evidence to establish that the concentration of lead in urine or tissues implies lead poisoning. After decades of effort, lead was established as a highly dangerous element and removed from paint, gasoline, and other consumer goods. However, it still lingers in older homes, in the soil, and in infrastructure like pipes.

In our time, Flint, Michigan, has been an iconic example of environmental justice involving lead poisoning. This originated when, in 2014, the city of Flint switched from the Detroit water system to that of the Flint River in an attempt to save money. Long before the switch, though, the Flint River, which runs through the town, had made headlines as effectively an unofficial waste disposal site for local industries, ranging from meatpacking plants to car factories and lumber and paper mills, and sewage from the city itself, besides toxic elements leaching from landfills to agricultural and urban
runoff. The river was also rumored to have twice caught fire.

From the mid-twentieth century onward, Flint has been a major industrial city. It was the birthplace of General Motors, and for much of that period was a bustling town populated by workers in the auto and other industries. However, rising oil prices in the 1980s resulted in the decline of US auto exports, and this, in turn, meant that many workers were either laid off or relocated, precipitating an economic crisis. By 2011, 45 percent of its residents, mainly African American, lived below the poverty line, and nearly one-sixth of homes were abandoned. The city of Flint had incurred a $25 million deficit and fell under state control. Michigan Governor Rick Snyder appointed a manager to cut city costs. One of the decisions made by the new regime was to terminate the practice of procuring treated piped water from Detroit. Instead, they adopted the cheaper alternative of pumping water from the Flint River, although it was known that this river water was highly polluted and not suitable for human consumption. This move was meant to be temporary, pending the construction of a new pipeline from Lake Huron.

As the pumping began, the highly corrosive river water reacted with the old pipes, causing them to leach lead into thousands of homes. Soon after the water source was switched in April 2014, residents began to demonstrate the truth of their complaints by showing officials jugs of discolored and foul-smelling water. A study by researchers at Virginia Tech showed that nearly 17 percent of samples taken from 252 homes had lead concentrations above fifteen parts per billion (ppb), the level at which federal guidelines mandated corrective action. Moreover, 40 percent of these samples showed a level above five ppb, indicating a serious problem. There was further evidence of a major public health crisis. In September 2015, the Flint pediatrician Mona Hanna-Attisha reported elevated lead levels in children citywide; these levels had doubled and even tripled in some neighborhoods since 2014. Nearly nine thousand children had been contaminated with lead for more than eighteen months,
exposing them to lifetime health consequences. In addition to lead, the polluted water resulted in an outbreak of Legionnaires' disease, a severe form of pneumonia that was responsible for twelve deaths and significant illness between June 2014 and October 2015. The city's attempt at addressing this development—the addition of more chlorine—resulted in elevated levels of the carcinogen total trihalomethanes (TTHM).

Faced with the response of city and state officials who, during the early stages of the Flint crisis, had insisted that the water was safe, and the broader failure of regulatory agencies, including the Environmental Protection Agency, Flint residents mobilized. In early 2016, a coalition of citizen organizations and other institutions—including the Concerned Pastors for Social Action, the Natural Resources Defense Council, and the American Civil Liberties Union of Michigan—filed a lawsuit demanding that city and state officials provide safe drinking water. Later in 2016, they filed a further motion in a bid to ensure that all residents, especially those unable to access the city's free water distribution centers, would get some form of safe drinking water. In response, a federal judge ordered the implementation of a door-to-door water delivery service for those unable to gain access to potable water. In March 2017, a major settlement required the city to replace the lead pipes with funding from the state government. Moreover, there was a mandate to guarantee comprehensive water testing, the installation of filters in faucets, continued education and health outreach programs, and free bottled water. While the situation in Flint has improved since then, all is not entirely well, as thousands of residents still get water via lead pipes. Since the vast majority of impacted people are poor African Americans, the Michigan Civil Rights Commission concluded that the Flint crisis was a "result of systemic racism." However, Flint is by no means a one-off case. Recent studies have shown that community water systems across the country are in violation of federal drinking water laws relating to lead contamination, and that there are many contaminants and
potential sources of public health crises that are not even monitored or regulated. As these case studies suggest, there is often a direct correlation between economic class and vulnerability to risk, both within regional contexts in developed and the developing worlds, and across the economic divide that separates nation-states within the world system. On the one hand, industries that pose environmental and occupational risks have increasingly moved during the past two decades from the developed to developing parts of the world. On the other, part of the reason they have moved is that developing countries, for a variety of reasons ranging from economic priorities to a lack of an adequate environmental institutional infrastructure, are often unable to enforce adequate regulatory protection. Within developing countries, in turn, hazardous industries are often located in sites closest to liminal or marginal populations. These trends are illustrated excellently in the case of the Bhopal disaster. The people most exposed to the gas leak were those who were economically and politically marginal. There are two facets to this. First, those who lived near the Union Carbide plant did so because it was the only place they could afford. Being in an industrial location, with the associated problems of noise, air, and water pollution, meant that the land around the Union Carbide plant was the least desirable, and consequently, the least expensive. Moreover, the people who were most exposed to the gas were those who lived in semipermanent dwellings. These were shanty houses whose windows and doors did not seal tightly enough to effectively keep the gas out. So, the economic status of those who became gas victims forced them to live in hazardous conditions. Second, those who were affected by the gas leak were politically marginal. Union Carbide built its 1979 MIC plant within its existing facility, which was located next to a densely populated neighborhood and a heavily used railway station. In doing so, it violated the 1975 Bhopal Development Plan, which had stipulated that hazardous industries such as the MIC plant ought to be located in the northeast
end of the city, away from and downwind of the heavily congested areas. According to M. N. Buch, one of the authors of the plan, UCIL's initial application for a municipal permit for the MIC plant was rejected. The company, however, managed to procure approval from the central governmental authorities and went ahead with its plan to build the MIC unit in a dense urban settlement. The political economy that defined the relations between the company and the decision-making elites within the state was an important reason for the power of the corporation relative to that of the community. Relatives of several powerful politicians and bureaucrats were either employed by the company or had received illegal favors from it. The company's legal adviser at the time of the gas leak, for example, was an important leader of the then-ruling Congress (I) party, and its public relations officer was a nephew of a former education minister of the state. The chief minister himself was facing a court case over claims that he had personally received favors from the company, and his wife had received the company's hospitality during visits to the United States in 1983 and 1984. Moreover, the company's plush guesthouse was regularly used by the chief minister and had been placed at the disposal of the Congress party for its 1983 convention. All this, among other things, meant that the state government often looked the other way when Union Carbide violated environmental regulations or cracked down on the worker protests.

Underlying the correlation between environmental injustice and economic and political status in Bhopal, Flint, and other places around the world, is a political economy that manufactured both a physical and a moral metaphysic of environmental violence. This metaphysic had structurally built-in potentialities for serious risk to the workers and the community. When not physically actualized, the agents responsible for creating the metaphysic were morally lucky. However, when the potentiality was actualized, as happened in December 1984, the moral metaphysic underlying the environmental injustice in Bhopal produced a catastrophic disaster. Adding to the problem is the absence of competent expertise, both scientific and
administrative, capable of recognizing, empathizing with, and addressing the public health problems of communities. The next section addresses these underlying issues.
Missing Expertise

An important type of vulnerability in the technocene is the fact that the production of the potential for risk is not often matched by a concomitant creation of the expertise and institutions with the wherewithal to help mitigate a crisis, should one ensue. Environmental justice advocates in the United States, for example, have long pointed out that working-class and racially liminal communities lack the basic infrastructure for monitoring pollution and danger. They argue that assumptions about contamination are made based on measurements taken significant distances away from the places of livelihood and work of impacted people, so that the pollution estimates produced by agencies such as the EPA are inaccurate. The absence of tangible data about the threats faced by such communities means that they are vulnerable, not only due to the toxins present in their environment, but due to their inability to learn about the threats they face and receive help from scientific experts. Globally, the absence of mitigating expertise is ubiquitous, such that people often live and work in environments that are extremely hazardous. Moreover, when disasters ensue, the absence of infrastructure and relevant expertise amplifies the negative impacts of the actual event.

A case in point is that of Bhopal. My ethnographic work there revealed three types of missing expertise—contingent, conceptual, and empathetic. By contingent expertise, I mean the preparedness of governments or governmental agencies to respond immediately and effectively to a disastrous event. The relevant institutions include warning systems, evacuation procedures, and other measures that help mitigate the societal impact of disaster in the immediate aftermath. Contingent expertise was largely missing
in Bhopal. The state government was unable to evacuate the population from the scene of the gas leak, despite a policy decision to do so after the accident had been confirmed. Government agencies were also, on the night of the disaster, unable to communicate effectively with the people by informing them, through the radio or other means, about how to respond to the gas leak. It took forty hours for the government to set up the first coordination meeting of secretaries and heads of departments to develop an effective relief strategy. The Indian army, which was deployed by the state to evacuate the gas-stricken area on the night of the disaster, and a variety of other agencies and individuals, including medical professionals and voluntary service agencies, made a heroic effort, but were overwhelmed by the sheer scale of the problem.

This raises the broader question of preparedness. Bhopal was missing three critical features present in the successful adaptive systems set up to meet the threat of disaster. The first of these was hazard awareness. The administration in Bhopal did not invest in understanding the full potential of the hazard posed by the Union Carbide pesticides factory, and consequently, the state government failed to scope out potential hazards and generate systematic data on possible threats. The second missing feature involved the absence of efforts, based on such awareness, to minimize either the onset of the threat or its impact when a cataclysmic event occurs, such as through effective monitoring of the plant. The third missing feature was the nonexistence of the infrastructure needed to effectively respond to a disaster, should one ensue. Such infrastructure might have included the deployment of appropriate technological systems; the provision of adequate training to designated staff; and effective risk communication procedures. The absence of effective contingency planning for tackling a scenario such as the gas leak meant there was no agency or scheme that had the training, method, theory, or the infrastructure to do the job. The experience of erecting functional institutions to deal with conventional disasters of low intensity, however, indicates that
there is no a priori reason why novel and large-scale disasters cannot equally be subject to effective contingency planning. Missing expertise, in such contexts, therefore reflects an absence of societal and cultural prioritization of the need to build such expertise. Such absences speak to a wider problem in the culture of risk and the political economy of hazard. In this sense, the Bhopal gas disaster is indeed a canary in the mine, pointing to a more entrenched and perhaps intractable set of social factors that underlie how risk and vulnerability are framed and tackled. The second type of missing expertise, conceptual expertise, is the kind needed to devise long-term rehabilitation strategies and to troubleshoot them in practice. It is particularly salient in novel industrial disasters, such as Bhopal or Chernobyl, where there is no template for action based on prior experience. Traditional disasters (such as, say, a forest fire or a hurricane) manifest themselves as sudden, catastrophic events, but have defined endpoints after which recovery and rehabilitation are possible. In contrast, novel industrial disasters might metamorphose into chronic events, affecting communities over months and in some cases years and decades. Such disasters demand a wide range of expertise, over and beyond the contingent, and an ability to build an iterative and adaptive long-term rehabilitation program. Missing conceptual expertise in Bhopal is best illustrated by how the state government approached the task of social and economic rehabilitation. The first attempts to address this immense problem involved tested strategies conventionally used in responding to natural disasters, such as ex-gratia payments to the victims' families to help them through the immediate crisis, and the distribution of clothes, food, blankets, and other material goods. When it became clear that such piecemeal methods were not going to suffice because of the lingering, chronic character of the disaster, the administration realized that it was faced with a fundamental challenge: to devise an innovative economic rehabilitation strategy. By the first anniversary of the gas disaster, the state
administration faced mounting public pressure to launch an effective rehabilitation program. In a bid to address this pressing problem and cope with a public relations crisis, the government proposed a strategy that combined three regional development programs already underway elsewhere in the state. The problem was that these schemes had been conceived of in the absence of adequate socioeconomic and other relevant data on the survivors, and without systematic feasibility studies. The programs consequently began to unravel almost immediately. Similar failures were visible in the governmental medical rehabilitation program. Unlike the case of contingent expertise, the problem of the absence of conceptual expertise needs to be addressed with a dynamic and pragmatic approach to governance, especially one that builds institutions that expand the role of government beyond the traditional domains of preserving law and order and collecting taxes. Effective rehabilitation programs cannot emerge in such a conceptual vacuum. The third type of missing expertise is what I term empathetic expertise. This is the capacity in agencies and institutions to act and intervene based on observations of the dynamics of social suffering caused by the daily commerce of interactions between victims and bureaucracies. In Bhopal, there was a palpable distance between the victim and bureaucrat, virtually blinding the latter to the suffering faced by the former. One arena where this blind spot was particularly visible was the process involved in getting the bureaucracy to formally acknowledge someone was a victim and could benefit from the governmental rehabilitation schemes. For that to happen, a claimant needed to procure the appropriate certificate issued by the administration. This crucial piece of paper, however, was only given when a series of other documents, testifying to everything from proof of residence to exposure to the gas, were produced—not easy for the demographic of victims in Bhopal, who were largely working-class poor, with many among them recent migrants from the rural hinterland. Even in cases where a person possessed the requisite documentation, they had to negotiate a series of further hurdles. For example,
they had to fill out a form applying to be recognized as a victim. The complexity of the form, and the accompanying hassles of document procurement, made it difficult for the average Bhopal survivor to satisfy the requirements of the claims process. Victims often became totally dependent on expensive touts for an act as basic as filling out the claim form. The government-run medical rehabilitation programs also posed challenges. Many gas victims claimed that medical doctors in the government hospitals were poor listeners and brushed aside their testimonies of bodily pain and ailments. The gap in social status between the educated middle-class doctors and the victims played a role in this. Gas victims therefore often went to private practitioners, many of whom had no formal certification. But at least some of them were good listeners and provided not just drugs, but a space where the victims got a sense that their subjective testimonies did count and that they could express their pain in the vernacular. If empathetic expertise is the ability to gain a contextual and compassionate understanding, and to act based on such experience, then in Bhopal its existence could have meant a radically redesigned claim form, or training to change the nature of the doctor-patient interaction. Attention to details such as these could have altered the lifeworlds of the victims in tangibly positive ways.
Discourses
The technocene also throws up what might be termed discursive vulnerabilities, which on the face of things might seem abstract, but nevertheless have important material manifestations. Discursive vulnerability in Bhopal, for example, had at least three dimensions. The first of these was the violence of what may be described as developmentalism. Perhaps the best illustration of the nature of this type of vulnerability can be found in a statement of shared concern signed by several Indian environmentalists in the aftermath. "The Bhopal disaster," they wrote,
has stunned those responsible for pollution control and put fear in the hearts of millions of industrial workers and people living near factories. But Bhopal is not the only disaster. Subtle and invisible processes continue to undermine the human and natural resource base. . . . The most brutal assault has been on the country’s common property resources, on its grazing lands, forests, rivers, ponds, lakes, coastal zones and increasingly on the atmosphere. The use of these common property resources has been organized and encouraged by the state in a manner that has led to their relentless degradation and destruction. And sanction for this destructive exploitation has been obtained by the state in the name of “economic advancement” and “scientific management.”
When Union Carbide workers were protesting and petitioning for better safety in the operation of the plant prior to the disaster, a state labor minister, Tara Singh Viyogi, argued that the factory was “not a stone which I could lift and place elsewhere. The factory has its ties with the entire country. And it is not a fact that the plant is posing a major danger to Bhopal or that there is any such possibility.” Implicit in such a statement was a primordial commitment to a risk assessment strategy that, as the environmentalists wrote later, put “economic advancement” above all else. A second facet of discursive vulnerability in Bhopal concerned the appropriative violence of some of the nonprofit organizations that worked there in the aftermath. For a number of these groups, Bhopal provided an opportunity to highlight their central outlooks and ideologies. These ranged from a concern with the negative consequences of the spread of multinational organizations and the world capitalist system, to a deep-rooted skepticism of technologies such as the green revolution. Their approach to political mobilization, therefore, was to use an immense human tragedy to bring to public consciousness the issues that concerned them. In doing so, they often described themselves as “victims’ organizations” and spoke on behalf of those affected by the gas disaster. However, they failed to take notice of the day-to-day needs of the victims. Although their rhetoric accused the state and the company of not doing enough in
this regard and of being part of an "anti-people" conspiracy, they never managed to provide any details or mobilize expertise that would help shed light on how viable relief could be provided. The result was the appropriation of the pain and the voice of the victims, with no tangible benefits to them. Bhopal also displayed vulnerabilities related to discursive absence. When the government first attempted to create a rehabilitation program, it appealed to various Indian universities and institutions of higher learning for help. Although anthropologists and political scientists decried the discourse of development, the nature of the world capitalist system, and the "econometrics of suffering" produced during the state rehabilitation process, they did little in tangible terms to help address and solve the complex problems associated with rehabilitation. No social scientist, for example, committed to conducting long-term research toward this end. Even those institutions that did make more of a tangible effort to respond, such as the Tata Institute of Social Sciences, which the government commissioned to conduct a socioeconomic survey, could not adequately deliver. These material absences are symptoms of a wider phenomenon that I have described elsewhere as categorical politics. There was no middle ground between the heady discourse of technocratic optimism, on the one hand, and the equally vehement antidevelopmental pessimism, and a social scientific discourse obsessed with describing power gradients and discourses of governmentality and environmentality, on the other. There was no prevalent culture or praxis in the social sciences that could tangibly intervene in the thicket of pragmatic detail that putting together a rehabilitation program demands. Arguably, the case of Bhopal is unique, both because of its scale and because of its contextual features—the timing of the disaster (toward the end of the Cold War era) and the particular constellation of ideologies then prevalent within India. That said, it serves to illustrate the concept of discursive vulnerability—which, by its very nature, tends to manifest in different ways in various contexts.
There is, however, one aspect of discursive vulnerability that appears to be more ubiquitous, and increasingly global, and this concerns the role of public relations firms. The section on corporate accountability already alluded to this, but more can be said about the nature of media manipulation. In a 2001 book, Trust Us, We're Experts!, Sheldon Rampton and John Stauber explored the role of ostensibly independent experts. Among other things, Rampton and Stauber argued that rather than old-fashioned advertisements, public relations firms now employ what they call the "third party technique," in which an ostensibly neutral third-party entity, such as a university professor, a doctor, or a nonprofit watchdog organization, is mobilized to make claims that appear to be credible, via artifacts such as research documents or opinion editorials or news shows with a veneer of objectivity. Rampton and Stauber offered a number of sobering examples. They claimed, for instance, that the drug company Bristol-Myers Squibb paid $600,000 to the American Heart Association to display the latter's name and logo while promoting a cholesterol-lowering drug. Likewise, SmithKline Beecham paid the American Cancer Society $1 million to promote Nicoderm CQ and Nicorette anti-smoking products. A different type of example involves the role of a prestigious university in obfuscation. Rampton and Stauber recounted that in 1997, Georgetown University's Credit Research Center published a study claiming a widespread prevalence of spurious bankruptcy filings by companies who wanted to escape their obligations to creditors. But the center was bankrolled in its entirety by credit card companies and other related financial entities, who also funded the study in question to present results that were in their interest while making a claim of objectivity. Famously, too, tobacco companies paid scientists significant amounts to write letters to influential medical journals supporting their claims. Examples like these, and indeed others, show the workings of a large industry that uses very sophisticated approaches: employing public figures, manufacturing studies
of dubious research provenance, and otherwise manipulating public opinion to accept products and policies that might not only be against the public interest, but might maim and even kill consumers. During the past two decades, the advent of powerful social media, backed by advanced algorithms and psychological methods, has led to the further refinement of such approaches, leading in some cases to the overthrow of elected governments and the undermining of elections.
Conclusion
The literatures on risk and disaster discussed in the preceding chapters were attempts to conceptualize the nature and causes of impurity and danger in the technocene, and to identify public policy solutions to them. Despite the complexity of the matter that they addressed, it is possible to read them in the abstract, for the landscape of ideas that they offer. Addressing vulnerability, however, requires a different approach because, almost by definition, the concept is tied to the tangibly real worlds of communities of people. This sheer materiality implies that there can be many different iterations, manifestations, and patterns of vulnerability in different local contexts around the world. It is with this in mind that I adopted a case study approach in this chapter. The examples were chosen because they are iconic, and therefore representative of some of the biggest challenges that humanity has faced thus far during the technocene. That said, it is arguable that some of the themes discussed in this chapter, namely, the vulnerabilities owing to unaccountable corporations, distributive vulnerabilities, vulnerabilities due to missing expertise, and discursive vulnerabilities, are resonant across geographies and contexts, although the actors, politics, and other manifestations will vary. Indeed, the work of countless scholars and organizations documenting these cases around the world stands testimony to the fact that these vulnerabilities are widespread.
5 Looking Ahead
Introduction
The discussion thus far has engaged the canonical literatures on risk, disaster, and vulnerability in the social sciences and public policy. However, it would be remiss to end this book here. Important ideas of a broader, philosophical import have been advanced by more recent thinkers, scholars, and practitioners reflecting on the dystopias of the technocene. They argue that remedies for the vulnerabilities associated with environmental risks and disasters cannot flow just from narrowly construed technical tweaks, and that it is equally important to grapple with the messier aspects of human agency and capacity. They challenge us to reimagine a host of assumptions, and in doing so, rethink some critical concepts, such as reflexivity, democracy, and justice. The purpose of this final chapter is to explore these emerging themes and ideas.
A Quick Recap
It is useful to begin by quickly summarizing the salient features of the discussion on risk, disaster, and vulnerability thus far. The so-called risk paradigm emerged as a rational approach to addressing pollution: determining scientific facts to learn the nature of the problem and then applying policy tools to devise a solution. Not long after it was formulated, the paradigm encountered many conceptual challenges. Psychological research identified the existence of significant biases that informed how both experts and laypersons understood risks. Philosophical disagreements already extant in society about the relative importance of individual responsibility versus the role of the state framed conversations about managerial approaches and priorities. Questions emerged about the quality of available data relevant to risk assessment and management strategies, the nature of scientific uncertainty and ambiguity, and the challenges posed by bioaccumulation. The ensuing public policy debate on managing and regulating environmental risks generated some novel ideas, including the precautionary framework. Studies of disasters, undertaken by scholars and policymakers who were mostly not associated with the formulation of the risk paradigm, led to an appreciation of the issues and difficulties of ensuring safety in complex engineered systems. A central keyword was human error, stemming from the recognition of the psychological predilections of human actors amid the challenging contexts in which they operate. Rather than treating errors as failures by individuals, as commonly perceived, this literature posited that human error is better understood by situating actions within a larger context of decisions made by others, who are not necessarily present during the frenetic phase of any given unfolding accident. Studies of industrial disasters also explored the social structures and institutional norms that drive the functioning of risky technological systems. The upshot of this research was that disasters often ensue when norms, rules, and clearly laid down procedures are violated. Significantly,
however, many such violations are not the result of nefarious acts but are often due to genuine differences in understanding among practitioners. Here, normal accident theory (NAT) and high reliability organization (HRO) theory pushed the envelope in important but subtly different ways. NAT identifies categories of accidents that are ostensibly "unavoidable." HRO theory, on the other hand, argues that some organizational systems can be established to avoid catastrophic accidents. Finally, various case studies on vulnerability pointed primarily to five issues: unaccountable corporations; inequality and environmental injustice stemming from racial, gender, and class-based liminality; inadequacies in scientific, technological, and administrative expertise; differential power dynamics in different geographies, based on the nature and extent of the distribution of wealth and power; and discursive vulnerabilities stemming from the power of rhetoric in justifying ideological approaches to economies and polities. Each of these is an important factor in the political economies underlying public policy and governance choices that impact risk and disaster. The rest of this chapter sketches the salient points in newer and noncanonical approaches to understanding risk, disaster, and vulnerability. Three important themes here are reflexivity, democracy, and justice. The purpose here is not to offer a comprehensive analysis, as the extent and quality of the literature on these subjects warrant a book in itself. Instead, this chapter will briefly survey some of the possibilities for alternative futures in these debates.
Reflexivity
Following the Chernobyl nuclear disaster in Ukraine, Ulrich Beck, a sociology professor in Munich, wrote a thought-provoking book, Risikogesellschaft, in 1986. In the book, which was published in English translation as Risk Society: Towards a New Modernity in
1992, Beck argued that environmental risks were not merely the necessary collateral damage of industrialization, unpleasant but manageable though they might be, but the primary artifact of industrial society. He saw the societal consequences of events such as Chernobyl as unique in human history, in that they appear to defy older social orders. For example, being a member of a higher social stratum does not now necessarily diminish one's chances of being exposed to environmental contaminants. And wealth might not now enable people to escape the consequences of disaster, as deploying resources to mitigate risk depends on knowing about the nature and extent of the risk in the first place—which is not often the case in the situations of ambiguity discussed in chapter 2. Beck postulated, therefore, that a feature of "risk societies" is that social class or place does not necessarily protect one from the impacts of industrial pollution, which affects all of society. This proposition generated considerable debate, with some scholars, notably Anthony Giddens, pointing out that class structure continues to be a relevant category in industrial and postindustrial society. These scholars, and Giddens in particular, took the debate in a different direction, arguing that dynamic economies and innovative societies need to think through the consequences of deploying complex technologies in societal contexts and find ways of making tradeoffs carefully and thoughtfully. This is the essence of their argument: that the processes of modernization ought to be reflexive. This genre of thinking has become an increasingly important component of environmental policy in the European Union and elsewhere.
Democracy
If vulnerability is the key ingredient in the recipe for disasters, the creation of resilience via reflexive policy responses is the insulation, or perhaps vaccine. One method toward this end can be found in scholarly and advocacy-based traditions that focus on innovations in
democracy. In environmental politics and studies of risk and disaster, discussions of democracy tend to focus on science and technology and the wider issues of recreancy and faith in experts discussed in chapter 2. There is now a significant body of literature in Science and Technology Studies (STS) that shows that—contrary to conventional wisdom—science, as practiced, is not a value-free enterprise, but is significantly mixed up with broader societal contexts, reflecting public values and the passions, interests, and politics of a range of actors including laypeople, governments, and corporations, among others. The domain of democracy, again contrary to conventional thought, also extends beyond electoral processes, and its study now includes a range of contestations over the nature and shape of institutions and infrastructure in public spaces. Such disputes include, among other things, the siting of industrial facilities; the design of landscapes, transportation, and housing systems; and budgetary allocation priorities. There are also conflicts between experts and laypersons, and competing and sometimes contradictory values emerge during public consultation processes. The politics and values of expert communities in these and other domains, such as climate change, genetic engineering, nanotechnology, and other cutting-edge controversies, are often different from those of other societal stakeholders. The academic and civic discussions on science and democracy ultimately aim to identify where power resides and how it operates in a manner that augments vulnerabilities to risk and disaster. They also explore ways of circumscribing this kind of power to enable safer and less dangerous working and living environments. These deliberations, and many civic experiments around the world, have spawned at least two broad types of initiatives. The first attempts to increase public participation in a manner that enables a better appreciation of how a wide range of actors, including communities impacted by pollution and danger, frame and understand the underlying stakes. The many elements here include the articulation of the way risks and vulnerability are differentially distributed in society, leading to forms of social learning that might result in more equitable outcomes.
In recent times, a new field of academic inquiry has emerged at the intersection of reflexivity and democracy. Critical Infrastructure Studies, as this field is called, has scholars from around the world "looking at the world through the concept of infrastructure—of things and systems made, built, shaped, crafted, interwoven, old, new, lived, loved, hated, sustained, or resisted." How does seeing this way enable new thinking about the way we live in the world that human beings have created? Rather than seeing infrastructures as passive artifacts, this promising interdisciplinary enterprise, involving scientists, engineers, social scientists, humanists, and artists, carefully examines how infrastructure can include and exclude people, enhance or diminish human rights, and enable or disable democracy. Among the many initiatives in this domain are attempts to study how infrastructures can include local communities, and especially, the question of how such infrastructures can draw on community expertise, including skills, techniques, and worldviews. There is also a very promising genre of academic research and public policy experimenting with building what has been termed inverse infrastructures. The goal here is to reimagine the process of building infrastructure, using participatory consultative processes to deliver projects that reflect the actual stated needs of communities, rather than the judgment of experts. There are many examples of such experiments around the world, in fields ranging from agriculture, irrigation, and urban planning to workplace safety initiatives that help redesign production processes.
Justice
Chapter 4 offered examples of social movements advancing environmental justice. The case studies are only a few of many related ideas and initiatives around the world. Environmental justice advocates have engaged in extensive mapping projects, showing that the brunt
of the toxic burden is faced by poor populations from working-class communities, who are disproportionately members of historically disadvantaged racial groups. Data-driven exercises such as these bring a tangible and visceral sense of reality that can speak truth to power, and force policymakers to initiate significant changes. Environmental justice campaigners have also undertaken to identify the nature and quality of extant data, and especially to document the absence of monitoring and mitigation expertise and infrastructures in such communities. They have also countered this absence with citizen science initiatives that aim to address these important gaps. In parts of the world where environmental justice is not the formal framework within which differential risks are discussed, there are other kindred "people's science" initiatives. The past few years have also given rise to strong transnational efforts at advancing accountability. For the most part, these interventions focus on financial institutions that fund toxic industrial complexes and exert pressure on governments and corporations that abet them. There has also been a significant rethinking of the teleology of development, especially as it is articulated in public policy. While some of this work has been done in the West, focusing on colonial, postcolonial, and neocolonial ideologies and frameworks, there is also a powerful literature from the Global South, offering tangible alternative ways to imagine human development and emancipation. More recently, the legal profession has taken an active interest in issues relating to risk, disaster, and vulnerability. Courts the world over have taken up significant cases and produced important rulings on these matters, driven by a legal movement advocating human rights and the wider rationale of advancing "dignity rights."
Prospects
Reflexivity, democracy, and justice are three frameworks that have emerged during the last few decades to help place risk, disaster, and
vulnerability within a wider perspective and to offer ways of thought and action beyond the managerial. These ideas and frameworks have also engendered considerable grassroots mobilization. The environmental justice advocates and legal professionals mentioned in the previous paragraphs, while influential agents of change, are but the proverbial tip of the iceberg. There are many novel experiments in local governance, new forms of social entrepreneurship, movements toward corporate citizenship and responsibility, and national and international coalitions, including human rights and environmental organizations, that seek to effect better environmental governance, and thereby address the big issues raised in this book. While only the future will reveal the impact of these initiatives, it is possible to speculate on a few scenarios. At the cynical end of the spectrum, there is the real possibility of business as usual, reflecting the economic and social divisions that shape our world. If this scenario transpires, the world will likely witness more Bhopals and Chernobyls. At the more optimistic end of the spectrum, a new multilateralism and a global commitment to addressing existing and imminent threats might well result in a series of policy and technological changes reflecting the many ideas promoted by those engaged in these thoughtful public policy discussions. Somewhere in the middle is the prospect of moderate actions, informed by the sustainability calculus and technological substitution approach, which might well reduce pollution and mitigate the potential for large catastrophes without necessarily addressing the morbidity that the environmental justice movement has identified as characterizing the unequal distribution of pollution and danger in the technocene. The world we live in is witness to each of these scenarios. There are some countries and places where risk, disaster, and vulnerability are well-understood and managed, and others where even rudimentary elements of the risk paradigm are absent. There is considerable multilateralism, and at the same time, a culture and politics of denialism that renders even basic policy changes and adjustments on critical planetary issues such as climate change difficult to achieve.
There are some amazing technological achievements in sustainability focused on the principle of substitution, without equity and justice being a part of the vision for change. There is also a belief, especially among many sustainability advocates in the industrial West, that the way forward is via consumer education and the shaping of individual choices. But examples such as Bhopal point in a different direction, and argue for the need for political mobilization and tangible collective action. We live, in effect, in a multiverse, with contrary imaginaries of the past, present, and future, and differing capacities to mitigate risk, disaster, and vulnerability. Can humanity, as a collective, redress and remedy the historically determined universe that has produced our divided planet? Only time will tell, but the conceptual foundations for such change do exist.
Bibliographic Essays
The bibliographic essays that follow, organized by chapter, perform a dual role. First, they list the principal sources that I have specifically used. Second, they provide guides to further reading, describing sources from a wide range of literatures, offering explanations of important assumptions, and giving definitions of some central concepts. In adopting this format rather than a citation-based approach, I have followed the lead of other works that I have enjoyed, including Peter Sahlins's Forest Rites: The War of the Demoiselles in Nineteenth-Century France (Cambridge, MA: Harvard University Press, 1998) and Richard White's The Organic Machine: The Remaking of the Columbia River (New York: Hill and Wang, 1995). Given the glut of articles and books about environmental risks, disasters, and vulnerability, any bibliographical selection will necessarily be subjective and driven by my narrative. The constraints of space, too, dictate that not everything I have read during the years of my research can be mentioned. To those wonderful authors who
have been inadvertently omitted, I offer a sincere apology, and point out that their works are often indeed mentioned in the review articles that I do cite. I urge the reader to take these bibliographic essays as starting points for their own perambulations. (Note: all URLs provided were active as of October 12, 2022.)
1. Setting the Stage
As an environmental historian undertaking a project in the environmental humanities, I explore physical pollution, industrial disasters, and their impacts on the environment and human health via an analysis of the discourses and ideas underlying the concepts of risk, disaster, and vulnerability. This method has long been adopted by environmental historians. For example, see Clarence Glacken, Genealogies of Environmentalism: The Lost Works of Clarence Glacken (Charlottesville: University of Virginia Press, 2017), and Traces on the Rhodian Shore: Nature and Culture in Western Thought from Ancient Times to the End of the Eighteenth Century (Berkeley: University of California Press, 1967); Donald Worster, Nature's Economy: A History of Ecological Ideas, 2nd ed. (Cambridge, UK: Cambridge University Press, 1994); Richard Grove, Green Imperialism: Colonial Expansion, Tropical Island Edens and the Origins of Environmentalism, 1600–1860 (Cambridge, UK: Cambridge University Press, 1995); Keith Thomas, Man and the Natural World: Changing Attitudes in England 1500–1800 (Oxford: Oxford University Press, 1996); John Prest, The Garden of Eden: The Botanic Garden and the Re-creation of Paradise (New Haven: Yale University Press, 1982); and S. Ravi Rajan, Modernizing Nature: Forestry and Imperial Eco-Development 1800–1950 (Oxford: Clarendon Press, 2006). It is in the past few decades that concepts such as the anthropocene and technocene have emerged, connoting the sense of a disjuncture in the relationship between humanity and the rest of nature. There are many excellent reviews of the concept of the
anthropocene. See, for example, John Green, The Anthropocene Reviewed: Essays on a Human-Centered Planet (New York: E. P. Dutton, 2021); Erle Ellis, Anthropocene: A Very Short Introduction (Oxford: Oxford University Press, 2018); and Julia Adeney Thomas, Mark Williams, and Jan Zalasiewicz, The Anthropocene: A Multidisciplinary Approach (Cambridge, UK: Polity, 2020). For the term technocene, I am using a very specific interpretation of the concept, developed by the philosophical sociologist Herminio Martins in The Technocene: Reflections on Bodies, Minds, and Markets (New York: Anthem Press, 2018). Martins refers not just to the transformation of nature by technology, but to the contemporary social, economic, and geopolitical configurations in which science and technology are embedded. In particular, he emphasizes the role of markets in the shaping of research priorities, and the way artifacts and technological affects are deployed. I use the term here as an epochal marker characterizing the era in which we live, comprising complex technologies with accompanying hazards that can potentially harm human societies and living environments on historically unprecedented scales. Another important starting point for me in thinking about the technocene was Mary Douglas's classic Purity and Danger: An Analysis of Concepts of Pollution and Taboo (New York: Praeger, 1966), which sought to understand and explain why different societies and historical periods differed on the meaning of the pure and the impure, and in doing so, introduced the importance of culture in perceptions about risk and disaster. I will now turn to the specific subjects that I have mentioned in the introduction. The literature on science, technology, and democracy, and related topics such as the role of big data and media, is too vast to summarize here. However, a curious reader is advised to consult the website of the Science and Democracy Network, https://sts.hks.harvard.edu/about/sdn.html. The relevant literatures on risk, disaster, and vulnerability, and on economic systems, human rights, and accountability, are described in the bibliographic essays for the chapters to follow. The field of environmental humanities is
protean, incorporating diverse disciplines. Environmental history, environmental ethics, and environmental literature have their origins in quintessentially humanist disciplines, but anthropologists, geographers, and sociologists too have contributed in significant ways to the development of the field. They offer critiques of the cultural and societal order that produces socioenvironmental problems and discuss alternative framings and discourses that help new imaginaries emerge. For some recent examples of works that illustrate the breadth and depth of this field, see Rob Nixon, Slow Violence and the Environmentalism of the Poor (Cambridge, MA: Harvard University Press, 2013); Robert Boschman and Mario Trono, On Active Grounds: Agency and Time in the Environmental Humanities (Waterloo, Ontario: Wilfrid Laurier University Press, 2019); Jodi Frawley and Iain McCalman, Rethinking Invasion Ecologies from the Environmental Humanities (Abingdon, UK: Routledge, 2014); Jennifer Hamilton and Astrida Neimanis, “Composting Feminisms and Environmental Humanities,” Environmental Humanities 10, no. 2 (2018): 501–27; Ursula K. Heise, Jon Christensen, and Michelle Niemann, The Routledge Companion to the Environmental Humanities (London: Routledge, 2017); Christopher Schliephake, The Environmental Humanities and the Ancient World: Questions and Perspectives (Cambridge, UK: Cambridge University Press, 2020); and Alfred K. Siewers, Re-imagining Nature: Environmental Humanities and Ecosemiotics (Lanham, MD: Bucknell University Press with Rowman & Littlefield, 2014). Environmental justice is also a field with diverse intellectual roots. Although the term was coined in a North American context, it now incorporates a wide range of inquiries, including racial disparities in exposure to toxic chemicals, the struggles for survival by indigenous populations all over the world, movements against industrial infrastructures that displace people, and popular protest against coercive regimes that exclude people from accessing natural resources, among others. These subjects have been extensively addressed by scholars in a wide range of social science disciplines,
and articles and books can be found in journals and monograph series in all the fields described in the previous paragraph. There are also interdisciplinary journals and series that bring scholars together, in fields such as environmental justice, political ecology, environmental labor studies, and gender and the environment, to take a few examples. In invoking the work of Raymond Williams, I am referencing particularly his classic Keywords: A Vocabulary of Culture and Society (London: Fontana, 1976). There is also a very useful web-based resource, the "Keywords Project," involving a collaboration between the University of Pittsburgh and Jesus College at Cambridge University; see https://keywords.pitt.edu/williams_keywords.html. My use of the term is partial, in that I use keywords to organize the book and frame the ideas, without necessarily exploring every term in all its semiotic depth.
2. Risk
There is a great deal of work on critiques of the Industrial Revolution and its ostensible satanic mills. For a relatively recent work that spans the history of such critiques, from the original Luddites to contemporary ones, see Steven E. Jones, Against Technology: From the Luddites to Neo-Luddism (London: Routledge, 2006). For a good account of smokestacks and the conflicts between early advocates of progress and environmentalists, see David Stradling, Smokestacks and Progressives: Environmentalists, Engineers and Air Quality in America, 1881–1951 (Baltimore, MD: Johns Hopkins University Press, 1999). For an excellent account of the controversies surrounding DDT, see Thomas R. Dunlap, DDT: Scientists, Citizens, and Public Policy (Princeton, NJ: Princeton University Press, 1981). For a survey on climate change science and policy at the turn of the millennium see Stephen Henry Schneider, Armin Rosencranz, and John O. Niles, Climate Change Policy: A Survey (Washington, DC: Island Press, 2002), and for useful transnational accounts of
environmental legislation and policy, see Lynton K. Caldwell and Robert V. Bartlett, Environmental Policy: Transnational Issues and National Trends (Westport, CT: Quorum Books, 1997) and Barry L. Johnson, Environmental Policy and Public Health (Boca Raton, FL: CRC Press, 2007). The US EPA offers an excellent summary of the risk paradigm on its web page on risk assessment (https://www.epa.gov/risk/about-risk-assessment). More resources on risk analysis are available on the website of the Society for Risk Analysis (https://www.sra.org). Risk Analysis is the flagship journal of the society, and its articles form the backbone of my understanding of the evolution of the field. My account of some of the salient observations made by William D. Ruckelshaus is from "Science, Risk, and Public Policy," Science New Series 221, no. 4615 (September 9, 1983), 1026–28; "Environmental Protection: A Brief History of the Environmental Movement in America and the Implications Abroad," Environmental Law 15 (1985): 455–755; and "The Beginning of the New American Revolution," The Annals of the American Academy of Political and Social Science 396, no. 1 (1971): 13–24. The phrase "how safe is safe enough?" was often used in discussions about risk in the early years of the risk paradigm. In the text I cited Richard Wilson, "Analyzing the Daily Risks of Life," Technology Review (February 1979): 41–43 and Stephen L. Derby and Ralph L. Keeney, "Understanding 'How Safe Is Safe Enough?,'" Risk Analysis 1, no. 3 (1981): 217–24. In my description of the psychometric approach, I draw heavily on Paul Slovic, Baruch Fischhoff, and Sarah Lichtenstein, "Rating the Risks," Environment 21, no. 3 (April 1979): 14–39; Daniel Kahneman and Amos Tversky, "Judgment under Uncertainty: Heuristics and Biases," Science 185, no. 4157 (1974): 1124–31; and a series of articles by Paul Slovic: "Perception of Risk," Science New Series 236, no. 4799 (April 17, 1987): 280–85; "The Psychology of Risk," Saúde E Sociedade 19, no. 4 (2010): 731–47; "Affect, Reason, and Mere Hunches," Journal of Law, Economics, and Policy 4 (2007): 191–465; "Rational Actors and Rational Fools:
The Influence of Affect on Judgment and Decision-Making,” Roger Williams University Law Review 6, no. 1 (2000): 163–212. Finally, I recommend a wonderful book by Slovic, The Perception of Risk (London: Routledge, 2000). For the risk and culture section, I drew on the classic text by Mary Douglas and Aaron B. Wildavsky, Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers (Berkeley: University of California Press, 1982), and Aaron Wildavsky, “Views: No Risk Is the Highest Risk of All,” American Scientist 67, no. 1 (January–February 1979): 32–37. For more on Ravetz’s work discussed in the text, see Jerome R. Ravetz, Scientific Knowledge and Its Social Problems (Oxford: Clarendon Press, 1971), and The Merger of Knowledge with Power: Essays in Critical Science (London: Mansell, 1990). I also greatly profited from reading Kathleen Tierney’s The Social Roots of Risk: Producing Disasters, Promoting Resilience (Stanford, CA: Stanford University Press, 2014). The idea of reflexivity has gained great currency in recent years and has had a wide range of authors and influences. See Margaret S. Archer, Making Our Way Through the World: Human Reflexivity and Social Mobility (Cambridge, UK: Cambridge University Press, 2007); Malcolm Ashmore, The Reflexive Thesis: Wrighting Sociology of Scientific Knowledge (Chicago: University of Chicago Press, 1989); Steve Bartlett and P. Suber, eds., Self-Reference: Reflections on Reflexivity (Springer, 1987); Pierre Bourdieu and Loïc J. D. Wacquant, An Invitation to Reflexive Sociology (Chicago: University of Chicago Press, 1992); R. K. Merton, Social Theory and Social Structure, rev. ed. (Glencoe, IL: The Free Press, 1957); S. Woolgar, Knowledge and Reflexivity: New Frontiers in the Sociology of Knowledge (London: Sage, 1988); Ulrich Beck, World at Risk, trans. Ciaran Cronin (Cambridge, UK: Polity Press); Zygmunt Bauman, Globalization: The Human Consequences (New York: Columbia University Press, 1998); and Ulrich Beck, Anthony Giddens, and Scott Lash, Reflexive Modernization: Politics, Tradition and Aesthetics in the Modern Social Order (Stanford, CA: Stanford University Press, 1994).
For the section on fear, I drew largely upon the work of William R. Freudenburg, and especially "Risky Thinking: Irrational Fears about Risk and Society," Annals of the American Academy of Political and Social Science 545 (May 1996): 44–53, and Joe Thornton, Pandora's Poison: Chlorine, Health, and a New Environmental Strategy (Cambridge, MA: MIT Press, 2000). For an excellent account of the philosophical issues underlying the "rounding error" described by Freudenburg in the Exxon-Valdez case, see Kristin S. Shrader-Frechette, Risk and Rationality: Philosophical Foundations for Populist Reforms (Berkeley: University of California Press, 1991) and "Technological Risk and Small Probabilities," Journal of Business Ethics 4 (1985): 431–45. On public health, data, and risk, the works of Phil Brown are compelling. See Toxic Exposures: Contested Illnesses and the Environmental Health Movement (New York: Columbia University Press, 2007); No Safe Place: Toxic Waste, Leukemia, and Community Action (Berkeley: University of California Press, 1990); Phil Brown, Rachel Morello-Frosch, and Stephen Zavestoski, Contested Illnesses: Citizens, Science, and Health Social Movements (Berkeley: University of California Press, 2017); and Phil Brown, J. Stephen Kroll-Smith, and Valerie J. Gunter, Illness and the Environment: A Reader in Contested Medicine (New York: New York University Press, 2000). The Andy Stirling and Sue Mayer article I discuss at length is entitled "Precautionary Approaches to the Appraisal of Risk: A Case Study of a Genetically Modified Crop," and is a part (pp. 296–311) of an excellent review section: Carl Smith et al., "The Precautionary Principle and Environmental Policy: Science, Uncertainty, and Sustainability," International Journal of Occupational and Environmental Health 6, no. 3 (Oct./Dec. 2000): 263–330. The article on post-normal science I discuss is Silvio O. Funtowicz and Jerome R. Ravetz, "Science for the Post-Normal Age," Futures 25, no. 7 (September 1, 1993): 739–55. Figure 4 is based on my notes from a lecture delivered at the Department of Environmental Studies at UC Santa Cruz by Piers Blaikie in the late 1990s. The philosophical treatment of the idea of
precaution that I have cited here is Per Sandin, "The Precautionary Principle and the Concept of Precaution," Environmental Values 13 (2004): 461–75. I chose this piece as it offers the logical precision of an analytical philosopher to explain the stakes involved in the principle. However, a wider range of articles describe the concept. The Carl Smith et al. review cited earlier is an excellent introductory resource on the precautionary principle. Other recent works on the subject include Timothy O'Riordan and James Cameron, Interpreting the Precautionary Principle (London: Cameron May, 1994); Indur M. Goklany, The Precautionary Principle: A Critical Appraisal of Environmental Risk Assessment (Washington, DC: CATO, 2001); Poul Harremoës, The Precautionary Principle in the 20th Century: Late Lessons from Early Warnings (London: Earthscan Publications, 2002); Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle (New York: Cambridge University Press, 2005); and Mary O'Brien, Making Better Environmental Decisions: An Alternative to Risk Assessment (Cambridge, MA: The MIT Press, 2000). The references to Max Weber are from the seminal papers Science as Vocation and Politics as Vocation, both found in From Max Weber, trans. and ed. by H. H. Gerth and C. Wright Mills (New York: Free Press, 1946). Last, but by no means least, there is a vibrant literature on emergent threats. See, for example, Sheldon Krimsky, GMOs Decoded: A Skeptic's View of Genetically Modified Foods (Cambridge, MA: The MIT Press, 2019); Geoffrey Hunt and Michael Mehta, eds., Nanotechnology: Risk, Ethics and Law (London: Routledge, 2006); and Nancy Langston, Toxic Bodies: Hormone Disrupters and the Legacy of DES (New Haven, CT: Yale University Press, 2011).
3. Disaster
For excellent overviews of theories of disasters in general see Kathleen Tierney, Disasters: A Sociological Approach (Cambridge, UK: Polity, 2019); Anthony Oliver-Smith and Susanna Hoffman, eds., The Angry Earth: Disaster in Anthropological Perspective, 2nd
ed. (London: Routledge, 2019); Susanna Hoffman and Anthony Oliver-Smith, Catastrophe and Culture: The Anthropology of Disaster (Albuquerque: University of New Mexico Press, 2002); and Ben Wisner, Piers Blaikie, Terry Cannon, and Ian Davis, At Risk: Natural Hazards, People's Vulnerability and Disasters, 2nd ed. (London: Routledge, 2003). There are many lists and accounts online describing some of the major industrial disasters of the past century. An excellent paper that comprehensively analyzes these events is Efthimia K. Mihailidou, Konstantinos D. Antoniadis, and Marc J. Assael, "The 319 Major Industrial Accidents Since 1917," International Review of Chemical Engineering (November 2008): 1–12. For Bhopal, see S. Ravi Rajan, "Toward a Metaphysic of Environmental Violence: The Case of the Bhopal Gas Disaster," Violent Environments (2001): 380–98, and "Bhopal: Vulnerability, Routinization, and the Chronic Disaster," in The Angry Earth, ed. Oliver-Smith and Hoffman, 257–77. For the nuclear accidents, see Thomas Filburn, Three Mile Island, Chernobyl and Fukushima: Curse of the Nuclear Genie (Cham, Switzerland: Springer, 2016); Alexey V. Yablokov, Vassily B. Nesterenko, and Alexey V. Nesterenko, eds., Chernobyl: Consequences of the Catastrophe for People and the Environment (Hoboken, NJ: Wiley-InterScience, 2009); David Marples, "The Chernobyl Disaster," Current History 86, no. 522 (1987): 325–28, and The Social Impact of the Chernobyl Disaster (Edmonton, Canada: University of Alberta Press, 1988); Alla Yaroshinska, Chernobyl, the Forbidden Truth (Lincoln: University of Nebraska Press, 1995); Majia Holmer Nadesan, "Nuclear Governmentality: Governing Nuclear Security and Radiation Risk in Post-Fukushima Japan," Security Dialogue 50, no. 6 (2019): 512–30; and The Independent Investigation on the Fukushima Nuclear Accident, The Fukushima Daiichi Nuclear Power Station Disaster: Investigating the Myth and Reality (London: Routledge, 2014). The extensive literature in psychology on human error is excellently summarized in James T. Reason, Human Error (Cambridge, UK: Cambridge University Press, 1990). Given its resonance across
academic, industry, and regulatory audiences, I have largely relied on this book and its comprehensive bibliography. The literature on organizational deviance also has some outstanding review articles and books with thorough bibliographies, and I have particularly learned a great deal from reading many works by Diane Vaughan, such as "The Dark Side of Organizations: Mistake, Misconduct, and Disaster," Annual Review of Sociology (1999): 271–305; Controlling Unlawful Organizational Behavior: Social Structure and Corporate Misconduct (Chicago: University of Chicago Press, 1983); The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (Chicago: University of Chicago Press, 1996); "The Trickle-Down Effect: Policy Decisions, Risky Work," California Management Review 39, no. 2 (1997): 80–102; "Theorizing Disasters," Ethnography 5, no. 3 (2004): 315–47; and "The Role of the Organization in the Production of Techno-Scientific Knowledge," Social Studies of Science 29, no. 6 (December 1999): 913–43. For normal accident theory, see Charles Perrow, Normal Accidents: Living with High-Risk Technologies (New York: Basic Books, 1984); "Normal Accident at Three Mile Island," Society 18, no. 5 (1981): 17–26; and "Normal Accidents," in the International Encyclopedia of Organization Studies (2008). For epistemic accidents and associated concepts, see John Downer, Anatomy of a Disaster: Why Some Accidents Are Unavoidable (London: London School of Economics, 2010), and Christopher R. Henke, "Situation Normal? Repairing a Risky Ecology," Social Studies of Science 37, no. 1 (January 6, 2007): 135–42. There are many accounts of the Delta Flight 191 accident, and an accessible one was issued by the National Weather Service and published online at https://www.weather.gov/fwd/delta191. High reliability organization theory is becoming increasingly widespread in several domains, including business studies, but some of the formative works remain excellent introductions to the genre. See, for example, Karlene H. Roberts, Robert Bea, and Dean L. Bartles, "Must Accidents Happen? Lessons From High-Reliability Organizations [and Executive Commentary]," The Academy of
Management Executive 15, no. 3 (January 6, 2001): 70–79; Karlene H. Roberts, "Some Characteristics of One Type of High Reliability Organization," Organization Science 1, no. 2 (January 6, 1990): 160–76; Todd R. LaPorte and Paula M. Consolini, "Working in Practice but Not in Theory," Journal of Public Administration Research and Theory 1, no. 1 (January 6, 1991): 19–48; and Karl E. Weick and Karlene H. Roberts, "Collective Mind in Organizations: Heedful Interrelating on Flight Decks," Administrative Science Quarterly 38, no. 3 (January 6, 1993): 357–81. An excellent book on the subject with many useful case studies is Karl E. Weick, Managing the Unexpected: Sustained Performance in a Complex World, 3rd ed. (Hoboken, NJ: Wiley, 2015).
4. Vulnerability
The vulnerability theory of disasters emerged in the works of geographers, anthropologists, and others writing in the political ecology tradition. See Anthony Oliver-Smith, "Anthropological Research on Hazards and Disasters," Annual Review of Anthropology 25 (1996): 303–28 and work by Anthony Oliver-Smith and Susannah M. Hoffman, and Ben Wisner et al., cited in the essay on chapter 3. For vulnerability on account of fire see Stephen J. Pyne, World Fire: The Culture of Fire on Earth (Seattle: University of Washington Press, 1997), and many other works by Pyne focusing on specific geographies. For an excellent analysis of settlement patterns and vulnerability, see B. Newell and R. Wasson, "Social System or Solar System," UNESCO Document SC.2002/WS/53, 2002, pp. 3–17, https://unesdoc.unesco.org/ark:/48223/pf0000128073. The Mike Davis work cited in the text is Late Victorian Holocausts: El Niño Famines and the Making of the Third World (London: Verso, 2017). For useful discussions on Hurricane Katrina, race, ethnicity, and the impact of neoliberal ideologies, see A. Christophe and M. Adams, "Return to a State of Nature, Compassionate Conservatism, Failed Response, and Their Impact on Race, Ethnicity, and the U.S.
Economy: Hurricane Katrina Case Study," in Cities and Disasters, ed. Davia C. Downey (Boca Raton, FL: CRC Group, 2015), 105–25, and Alice Fothergill et al., "Race, Ethnicity, and Disasters in the United States," Disasters 23, no. 2 (1999), 156–73. The references to Peru and Valdez are from the works of Anthony Oliver-Smith cited earlier. For a discussion of neoliberal ideologies more generally, see Naomi Klein, The Rise of Disaster Capitalism (London: Penguin Books, 2008). For a thorough overview of grassroots struggles for accountability, see Jonathan Fox and L. David Brown, The Struggle for Accountability: The World Bank, NGOs, and Grassroots Movements (Cambridge, MA: MIT Press, 1998). The case study of Bhopal is based on my articles cited earlier, and the evidence of T. R. Chouhan is documented in T. R. Chouhan, Claude Alvares, Indira Jaising, and Nityanand Jayaraman, Bhopal: The Inside Story (Lexington, KY: Apex Press, 2004). The quote cited in the discussion of the settlement figures related to Bhopal is from the Union Carbide Annual Report, 1988, and the one by Harold Burson is from "The Role of the Public Relations Professional," http://www.bm.com/fileslper !PER-R07a.html. See also Harold Burson, "Social Responsibility or 'Telescopic Philanthropy': The Choice Is Ours," Garrett Lecture on Managing the Socially Responsible Corporation, Columbia University Graduate School of Business, March 20, 1973. On Bhopal and Burson-Marsteller see Josh Halliday, "Burson-Marsteller: PR Firm at Centre of Facebook Row," The Guardian, May 12, 2011; Burson-Marsteller-Operations-Crisis Management–Bhopal, https://www.liquisearch.com/burson-marsteller/operations/crisis_management/bhopal; and Joyce Slaton, "Bhopal Bloopers: How Dow and Burson-Marsteller Made a Big Stink Even Stinkier," San Francisco Chronicle, January 9, 2003. For greenwashing in general around the time of Bhopal see The Greenpeace Book of Greenwash (Greenpeace International, 1992). For a description of the TEMIC and other cases involving Union Carbide, see Ward Morehouse, M. Arun Subramaniam, and Citizens Commission on Bhopal, The Bhopal Tragedy: What Really Happened and What It Means for American
Workers and Communities at Risk (New York, NY: Council on International and Public Affairs, 1986). Environmental justice is now a vigorous field for both civic action and scholarly research. For the latter, see Bunyan I. Bryant, Environmental Justice: Issues, Policies, and Solutions (Washington, DC: Island Press, 1995); Joni Adamson, Mei Mei Evans, and Rachel Stein, The Environmental Justice Reader: Politics, Poetics, & Pedagogy (Tucson: University of Arizona Press, 2002); Michael Mascarenhas, ed., Lessons in Environmental Justice: From Civil Rights to Black Lives Matter and Idle No More (Los Angeles: Sage, 2021); and David N. Pellow, Resisting Global Toxics: Transnational Movements for Environmental Justice (Cambridge: MIT Press, 2007). For a vivid account of the Hawks Nest Tunnel disaster, see Patricia Spangler, The Hawks Nest Tunnel: An Unabridged History (Proctorville, OH: Wythe-North Publishing, 2008). For silicosis, see the excellent collection edited by Paul-André Rosental, Silicosis: A World History (Baltimore: Johns Hopkins University Press, 2017). My account also draws significantly on Janet Siskind, “An Axe to Grind: Class Relations and Silicosis in a 19th Century Factory,” in Illness and the Environment: A Reader in Contested Medicine, ed. Stephen J. Kroll-Smith, Phil Brown, and Valerie J. Gunter (New York: New York University Press, 2000), 145–61, and David Rosner and Gerald E. Markowitz, “From Dust to Dust: The Birth and Re-birth of National Concern about Silicosis,” in the same volume at pp. 162–74. For Flint, see Julie Knutson, Flint Water Crisis. Unnatural Disasters: Human Error, Design Flaws, and Bad Decisions (Ann Arbor, MI: Cherry Lake Publishing, 2021); Benjamin J. Pauli, Flint Fights Back: Environmental Justice and Democracy in the Flint Water Crisis (Cambridge, MA: MIT Press, 2019); and Flint Water Crisis: Impacts and Lessons Learned: Joint Hearing Before the Subcommittee on Environment and the Economy and the Subcommittee on Health of the Committee on Energy and Commerce, House of Representatives, One Hundred Fourteenth Congress, Second Session, April 13, 2016 (Washington, DC: Government Publishing Office, 2017). I use
the terms potentialities and actualities in the sense described by Aristotle; see Aristotle’s Metaphysics, trans. Joe Sachs (Santa Fe, NM: Green Lion Press, 1999). The discussion on missing expertise is from S. Ravi Rajan, “Missing Expertise, Categorical Politics, and Chronic Disasters,” in Hoffman and Oliver-Smith, Catastrophe and Culture, 237–62. The case study on Bhopal at the end of the chapter and the section on discourse are also adapted from my various papers on Bhopal cited earlier. The book cited on the corruption and misuse of expertise is Sheldon Rampton and John C. Stauber, Trust Us, We’re Experts! How Industry Manipulates Science and Gambles with Your Future (New York: Jeremy P. Tarcher/Putnam, 2001).

5. Looking Ahead

There is a rich literature on reflexivity and reflexive modernization. A good formulation of the central idea can be found in U. Beck, W. Bonss, and C. Lau, “The Theory of Reflexive Modernization: Problematic, Hypotheses and Research Programme,” Theory, Culture & Society 20, no. 2 (2003): 1–33. Other key works include Ulrich Beck, Risk Society: Towards a New Modernity, trans. Mark Ritter (London: Sage Publications, 1992); Richard V. Ericson and Kevin D. Haggerty, Policing the Risk Society (Toronto: University of Toronto Press, 1997); and a variety of works by Anthony Giddens, including The Consequences of Modernity (Cambridge, UK: Polity Press, 1990); Modernity and Self-Identity: Self and Society in the Late Modern Age (Cambridge, UK: Polity Press, 1991); and The Third Way: The Renewal of Social Democracy (Cambridge, UK: Polity Press, 1998). Also of relevance in the sociology of risk, albeit from the vantage point of systems theory, is the large corpus of work by Niklas Luhmann. His archive online is a good place to start: see https://niklas-luhmann-archiv.de. There is a vast literature addressing different facets of the relationship between science, industrial risk, and democracy. Among these are Sheila Jasanoff, “Technologies of Humility: Citizen Participation
in Governing Science,” Minerva 41, no. 3 (2003): 223–44; David H. Guston, “The Essential Tension in Science and Democracy,” Social Epistemology 7, no. 1 (1993): 3–23; Mark B. Brown, Science in Democracy: Expertise, Institutions, and Representation (Cambridge, MA: MIT Press, 2009); Massimiano Bucchi, Beyond Technocracy: Science, Politics and Citizens, trans. Adrian Belton (Dordrecht, The Netherlands: Springer, 2009); Yaron Ezrahi, The Descent of Icarus: Science and the Transformation of Contemporary Democracy (Cambridge, MA: Harvard University Press, 1990); and Richard E. Sclove, Democracy and Technology (New York: Guilford, 1995). For work on conflicts and differences between experts and laypeople, see Timothy Mitchell, Rule of Experts: Egypt, Techno-Politics, Modernity (Berkeley: University of California Press, 2002); Arturo Escobar, Encountering Development (Princeton: Princeton University Press, 2011); Brian Wynne, Rationality and Ritual: Participation and Exclusion in Nuclear Decision-Making (London: Routledge, 2010); Alan Irwin and Brian Wynne, Misunderstanding Science? The Public Reconstruction of Science and Technology (Cambridge, UK: Cambridge University Press, 2009); Brian Wynne, “Sheepfarming after Chernobyl: A Case Study in Communicating Scientific Information,” Public Understanding of Science 1 (1992): 281–304; Brian Wynne, “Uncertainty and Environmental Learning: Reconceiving Science and Policy in the Preventive Paradigm,” Global Environmental Change 2, no. 2 (1992): 111–27; Melissa Leach, Ian Scoones, and Brian Wynne, Science and Citizens: Globalization and the Challenge of Engagement (London: Zed Books, 2005); and Harry Collins and Robert Evans, Rethinking Expertise (Chicago: University of Chicago Press, 2007). The website of the Science and Democracy Network, started by Sheila Jasanoff and other key thinkers, also offers important insights and bibliographies; see https://sts.hks.harvard.edu/about/sdn.html. Among the salient works in the field of “Infrastructure Studies” are Geoffrey Bowker, Karen Baker, Florence Millerand, and David Ribes, “Towards Information Infrastructure Studies: Ways of
Knowing in a Networked Environment,” in International Handbook of Internet Research, ed. Jeremy Hunsinger, Matthew Allen, and Lisbeth Klastrup (The Netherlands: Springer, 2010); Susan L. Star and K. Ruhleder, “Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces,” Information Systems Research 7, no. 1 (1996); Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences (Los Angeles: SAGE, 2014), especially chapter 2; Paul N. Edwards, Steven J. Jackson, Geoffrey C. Bowker, and Cory P. Knobel, “Understanding Infrastructure: Dynamics, Tensions, and Design” (2007), https://deepblue.lib.umich.edu/bitstream/handle/2027.42/49353/UnderstandingInfrastructure2007.pdf; Susan L. Star, “The Ethnography of Infrastructure,” American Behavioral Scientist 43, no. 3 (November 1999): 377–91; Geoffrey C. Bowker and Susan Leigh Star, Sorting Things Out: Classification and Its Consequences (Cambridge, MA: MIT Press, 1999); Paul Edwards, “Infrastructure and Modernity: Force, Time, and Social Organization in the History of Sociotechnical Systems,” in Technology and Modernity: The Empirical Turn, ed. Philip Brey, Arie Rip, and Andrew Feenberg (Cambridge, MA: MIT Press, 2002); Langdon Winner, “Do Artifacts Have Politics?” Daedalus 109, no. 1 (Winter 1980): 121–36; and Marilyn Strathern, “Robust Knowledge and Fragile Futures,” in Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems, ed. Aihwa Ong and Stephen J. Collier (Malden, MA: Blackwell Publishing, 2005), 464–81. For the concept of inverse infrastructures, see Tineke M. Egyedi and Donna C. Mehos, eds., Inverse Infrastructures: Disrupting Networks from Below (Northampton, MA: Edward Elgar, 2012). For participatory and community expertise, see, for example, Kulbhushan Balooni and Makoto Inoue, “Joint Forest Management in India,” IIMB Management Review 21, no. 1 (2009): Reprint No. 09101. There is also an excellent and comprehensive bibliography of infrastructure studies at the website Critical Infrastructure Studies, https://cistudies.org.
The literature on environmental justice has been referenced earlier in this section. There are also several interesting web-based mapping tools on environmental justice and human rights. Among these are EJ Atlas (https://ejatlas.org); EJScreen (https://www.epa.gov/ejscreen); and Justice Map (http://www.justicemap.org). In addition to these self-defined environmental justice projects are a host of movements around the world on many topics. These include accountability; for example, see Jonathan Fox and L. David Brown, eds., The Struggle for Accountability: The World Bank, NGOs, and Grassroots Movements (Cambridge, MA: MIT Press, 1998). For development, violence, and state hegemony, see Ashis Nandy, ed., Science, Hegemony and Violence: A Requiem for Modernity (Oxford: Oxford University Press, 1989) and Arturo Escobar, Encountering Development: The Making and Unmaking of the Third World (Princeton, NJ: Princeton University Press, 2011). For colonialism, indigeneity, and environmental justice, see Max Liboiron, Pollution Is Colonialism (Durham, NC: Duke University Press, 2021) and Dina Gilio-Whitaker, As Long as Grass Grows: The Indigenous Fight for Environmental Justice from Colonization to Standing Rock, reprint edition (Boston: Beacon Press, 2020). For environmental human rights and related topics, such as dignity rights, see Erin Daly, Dignity Rights: Courts, Constitutions, and the Worth of the Human Person (Philadelphia: University of Pennsylvania Press, 2012); Jona Razzaque, Dinah L. Shelton, James R. May, Owen McIntyre, and Stephen J. Turner, eds., Environmental Rights: The Development of Standards (Cambridge, UK: Cambridge University Press, 2019); Donald K. Anton and Dinah Shelton, Environmental Protection and Human Rights (New York: Cambridge University Press, 2011); Bridget Lewis, Environmental Human Rights and Climate Change: Current Status and Future Prospects (Singapore: Springer, 2018); Romina Picolotti and Jorge Daniel Taillant, Linking Human Rights and the Environment (Tucson: University of Arizona Press, 2003); Anna Grear, “Legal Imaginaries and the Anthropocene: ‘of’ and ‘for,’” Law and
Critique (2020): 1–16; Anna Grear, “Foregrounding Vulnerability: Materiality’s Porous Affectability as a Methodological Platform,” in Research Methods in Environmental Law: A Handbook, ed. Andreas Philippopoulos-Mihalopoulos and Victoria Brookes (Cheltenham: Edward Elgar Publishing, 2017), 3–28; and Anna Grear and Louis J. Kotzé, “An Invitation to Fellow Epistemic Travellers—Towards Future Worlds in Waiting: Human Rights and the Environment in the Twenty-First Century,” in Research Handbook on Human Rights and the Environment, ed. Anna Grear and Louis J. Kotzé (Cheltenham: Edward Elgar Publishing, 2015), 1–6.
Index
Note: Page numbers in italics refer to figures. accidents: definition of, 66; epistemic, 73–74. See also industrial accidents; normal accident theory (NAT) accountability. See corporate accountability African Americans, and Hawks Nest disaster (1935), 98–99 A. H. Robins Co., 96 Aloha Air flight 243 crash, 73, 74 appropriative violence in Bhopal gas leak disaster, 111–12 availability bias, 19 Babcock and Wilcox, Inc., 96 Beck, Ulrich, 117–18 Bhopal gas leak disaster, ix, 91–97; academics’ inability to contribute to rehabilitation, 112; bureaucratic obstacles to aid, 109–10; as component failure disaster, 73; death toll from, ix; and discursive vulnerabilities, 110–12; gaps in social status between doctors and
victims, 110; history of accidents at plant, 91–92; and illegal location of hazardous industry near marginal population, 104–5; impact on author, ix–x; Indian government settlement deal with Union Carbide, 95–96; issues raised by, as ongoing, 96–97; lack of expertise available for crisis mitigation, 106–10; and need for political mobilization, 123; as one of many industrial disasters, 4, 49; plant’s difficulty keeping qualified staff, 92; plant’s reliance on unqualified operators, 92–93; and underlying metaphysic of environmental violence, 105; Union Carbide’s awareness of safety issues, 92; Union Carbide’s poor global safety record, 93; Union Carbide’s public relations campaign following, 95, 96–97; Union Carbide’s sale of Indian division following, 96; Union Carbide’s strategy of downplaying and
deflecting blame, 93–95; Union Carbide’s ties to Indian politicians, 105; variation of impact with victim’s economic level, 104 biases in risk assessment, 19–20 bioaccumulation: and difficulty of determining safe levels, 29–30; and global impact of toxins, 4 Bristol-Myers Squibb, 113 Brown, Phil, 33–34 Bryant, Bunyan, 98 Buch, M. N., 105 Burson, Harold, 96–97 Burson-Marsteller (B-M), 96–97 categorical politics, 112 certainty, desire for: as bias in risk evaluation, 19; and impossibility of zero risk, 17–18, 24; and uncertainty, ambiguity, and ignorance as inescapable factors, 35–37. See also uncertainty Challenger space shuttle disaster, 73 Charter for the Environment (France, 2005), 45 chemicals: chemical sunsetting, 46; individual exposures, lack of data on, 33; new, lack of toxicity data on, 31. See also toxins Chernobyl nuclear disaster: as component failure accident, 73; consequences of, as unique in human history, 118; as one of many industrial disasters, 4, 49; scholarship on, 117–18 Chouhan, T. R., 92–93 civic epistemology of technological risk, 74 climate change, 6 cognitive practices of individuals within organization, and organizational deviance, 56 Cold War: concerns about environmentalism, 24; concerns about political ramifications of risk management, 24–25 competition, and organizational deviance, 59 component failure accidents, 73 computer models of pollution’s effects, dubious accuracy of, 37
conceptual expertise in crisis mitigation, 108–9 conservative ideology: and famine in British India, 86; and Wildavsky’s environmental views, 24. See also neoliberalism contingent expertise in crisis mitigation, 106–8 corporate accountability: Bhopal gas disaster and, 91–97; corporate power and, 98; emerging corporate focus on, 97; lack of, as factor in environmental disasters, 97, 117; and prioritization of profit over community well-being, 97; public relations industry and, 96–97; transnational efforts to increase, 121; vulnerability theory on, 91–97 corporations, public’s lack of trust in, 27 COVID-19, 6 crisis, definition of, 82–83 Critical Infrastructure Studies, 120 culture’s effect on risk perception, 20–26; and choice of concerns, 22–24; and Cold War concerns about political ramifications of risk management, 24–25; and lack of consensus on concerns, 21–22; limits to scientific risk analysis and, 28; and political effects of excessive demand for risk prevention, 25–26 Dalkon Shield contraceptive device, public relations campaign, 96 data for risk assessment: and complexity and diversity of risks, 34–35; doubts about adequacy of, in risk paradigm, 32–37, 47, 116; scientific reductionism to compensate for lack of, 34–35 Davis, Mike, 85–86 Delta flight 191 crash, 73–74 democracy’s interface with science: and participatory consultative processes for infrastructure, 120; and risk prevention, 118–19 Derby, Stephen, 18 developmentalism, and discursive vulnerability, 110–11 development studies, 2
4; notable examples of, 4; questions surrounding, 6 environmental equity: definition of, 98; increase in, with increased reach of disaster damage, 118 environmental humanities, 5 environmentalism: Cold War concerns about, 24; range of perspectives on, 1–2 environmental justice: issues surrounding, 4; lead poisoning vulnerability and, 101–4. See also inequality, and vulnerability environmental justice movement, 98; data-drive research by, 120–21; people’s science initiatives globally, 121; on poor neighborhoods’ lack of resources to measure risk, 106 environmental protection agencies, creation of, 9 Environmental Protection Agency, U.S. (EPA): inaccurate measurement of contamination in poor neighborhoods, 106; risk assessment procedures, 11; and risk paradigm, 10; unrealistic demands on, 15–16 environmental racism, 98 environmental risk: current unprecedented levels, 1; as primary artifacts of industrial society, 118; questions surrounding, 3–4, 5–6. See also environmental justice; newer and noncanonical work on risk, disaster, and vulnerability environment of organization: definition of, 57; and institutionalized rules, 58; and organizational deviance, 56, 57–59 EPA. See Environmental Protection Agency, U.S. epistemic accidents, 73–74 error-amplifying decision traps, 60 European Commission Communication on the precautionary principle (2000), 45 European Union, environmental policy, and reflexivity, 118 expertise required for crisis mitigation, 91, 106–10, 117; absence in Bhopal gas
leak disaster, 106–10; conceptual expertise, 108–9; contingent expertise, 106–8; empathetic expertise, 109–10; global lack of, 106; novel disaster types and, 108–9 experts: importance of recognizing cognitive limitations, 20; public’s lack of trust in, 27 Exxon Valdez oil spill: community debate on accepting cleanup employment with Exxon, 90–91; as component failure accident, 73; Exxon public relations campaign on, 96; and risk assessment, blind spots in, 27–28 famine, nation-state policies as factor in, 85–86 feedback processes, inadequate, and organizational deviance, 60 Flint, Michigan: contaminated water supply, and environmental injustice, 101–4; economic decline in, 102 food, and bioaccumulation in humans, 29 forest fires as disasters, vulnerability theory on, 85 Freudenburg, William, 26–28 Fukushima nuclear disaster: combining of cultural and technological hazards in, 83; as one of many industrial disasters, 4, 49 Funtowicz, Silvio, 38–40 future of risk, disaster, and vulnerability, scenarios for, 121–22 Georgetown University’s Credit Research Center, 113 Giddens, Anthony, 118 global impact of toxins, 3–4; bioaccumulation and, 4; and cumulative effects, lack of method to analyze, 30; persistent organic pollutants (POPs) and, 29 Global South, and corporate accountability, 121 global unity, prospect for, 123 government agencies, public’s lack of trust in, 27 grassroots activism in disaster prevention, 122
greenhouse gases, ambiguity inherent in analysis of effects, 36 Hanna-Attisha, Mona, 102 Hawks Nest disaster (1935), 98–99 hazards: awareness of, and preparedness for disaster, 107; definition of, 83; as factor in disaster risk, 84–85; location of Bhopal gas plant near large marginal population as, 104–5; reasons for developing into disaster, 83–84; types of, 83 healthcare: etiological uncertainty in, 33–34; lack of data on chemical exposure of individuals, 33 heuristics in risk evaluation, 19–20 high-reliability organizations (HROs), 74–78; and centralization vs. decentralization, 79; characteristics of, 75; and disaster prevention, 117; emphasis on communication, 75, 77–78; vs. normal accident theory, research and debate on, 79–80; reward and incentive systems in, 75, 77; training of staff to identify and act on anomalies, 75–76; work to learn about what they don’t know, 75–77 high-velocity environments, and organizational deviance, 59 hindsight bias, 19 HROs. See high-reliability organizations human error, 50–55; active vs. latent forms of, 51–52; definition of, 51; and difficulty of identifying future faults in complex systems, 53, 54, 116; fallibility, 52; keywords, 52–53, 54; mistakes, 52; psychological responses to risk information and, 53–54, 78; and realworld balance of safety with cost and efficiency, 53, 59; slips and lapses, 52; training and, 54; types of, defined, 52–53; value of robust system of detection and response in preventing, 54, 55; violations, 52–53, 55. See also organizational deviance humans, bioaccumulation in tissues of, 29 India, British policies and famine in, 86. See also Bhopal gas leak disaster
Loma Linda University Medical Center pediatric ICU, 76–77 Loya, L. D., 94–95 lung cancer, difficulty of isolating cause of, 33–34 Marsteller, Bill, 96 Mayer, Sue, 34, 35–37, 40–41 media: and bias in risk evaluation, 20; and scapegoats for industrial accidents, 50; and Union Carbide downplaying of Bhobal gas release, 95. See also public relations industry methodology of this book, 5, 7 misconduct by organizations: definition of, 56–57; organizational environment and, 58–59; technological crime by rogue employees, 62–63 mistakes, as type of human error, 52 mistakes by organizations: compounding of, 62; definition of, 56–57; organizational environment and, 58; quasiinstitutionalization of, 58 mitigation of environmental disasters, questions surrounding, 5. See also expertise required for crisis mitigation Montreal Protocol (1987), 45 Mukund, J., 94 NAT. See normal accident theory National Institute of Occupational Safety and Health (NIOSH), 100 nation-state policies as factor in disasters, vulnerability theory on, 85–87 neoliberalism: definition of, 87; and nation-state policies as factor in disasters, 86–87 newer and noncanonical work on risk, disaster, and vulnerability, 117; calls for reflexivity, 117; and corporate accountability, 121; Critical Infrastructure Studies, 120; in environment justice, 120–21; interface of democracy and science in risk prevention, 118–20; inverse infrastructures and, 120 New Orleans post-hurricane reconstruction, and political and social structures as factor in vulnerability, 87
New South Wales Protection, of the Environment Administration Act (1991), 45 nonprofit organizations, and appropriative violence in Bhopal gas leak disaster, 111–12 normal accident theory (NAT), 66–74, 117; on accidents due to scientific uncertainty, 73; on accidents other than normal accidents, 73; on centralized vs. decentralized control, 72, 79; on component failure accidents, 73; definition of accident, 66; vs. highreliability organizations, debate on, 79–80; on high risk accidents, 66; on levels of system malfunction, 66–67; limited scope of systems addressed by, 79; on linear vs. complex systems, 67, 69–70; on normal accidents, characteristics of, 71–72; on normal accidents, definition of, 69; and normal vs. epistemic accidents, 73–74; research undermining conclusions of, 74; on tight vs. loose coupling of systems, 68, 69–70 normal accident theory, on complex, tightly-coupled systems: accidents in, as normal to system, 69; examples of, 70; impossibility of preventing normal accidents in, 71; need to redesign or abandon those with potential for catastrophic failure, 72; potential for small malfunctions to mushroom into catastrophe, 70–71 normal accident theory, on complex systems: and common-mode failures, 67–68; invisibility of sub-operations to operators, 68, 70 novel disaster types, expertise needed for rehabilitation, 108–9 Occupational Safety and Health Act (OSHA), creation of, 100 Oliver-Smith, Anthony, 84 organizational deviance, 55–65, 79; caused by environment of organization, 56, 57–59; caused by individual cognitive practices, 56, 63–65; caused by organization characteristics, 56,
59–63; compounding of mistakes, 62; definition of, 56; deviance amplification and, 60; disqualifying heuristics and, 62; and divergent meaning-making in social groups, 64–65, 79; evolution of workplace cultures antithetical to safety, 60–61; gaps in knowledge and communication and, 61–62, 64; inappropriate responses to mistakes and, 60; ineffective processing of risk information and, 62; inevitability of, 55–56; keywords in, 57; and strategies for deflecting blame, 59; types of, 56–57 organization characteristics, and organizational deviance, 56, 59–63 organizations, as focus of sociological study, 55. See also high-reliability organizations (HROs) overconfidence in risk evaluation, 19 Perrow, Charles. See normal accident theory (NAT) persistent organic pollutants (POPs): and bioaccumulation, 29–30; global spread of, 29 Peru, earthquake of 1970, 90 political change, disasters as opportunity for, 90 political conflicts, and organizational deviance, 59 pollution: cumulative effects, lack of method to analyze, 30; dubious accuracy of computer modeling of effects, 37; early action on, 9; industrial, growth of concern about, 8–9; permits for, 28–29. See also precautionary principle; “safe enough” levels of pollutants POPs. See persistent organic pollutants post-normal science: characteristics vs. normal science, 38–39; criteria for, 40–41; greater importance of cultural factors in, 39, 40, 41; greater number of stakeholders in, 39, 41; and nonpublic corporate risk evaluations, 39; and transparency, 41; usefulness in complex, uncertain situations like risk assessment, 39–41, 40
in, 108–10; inequality and contestation in, 90–91 recreancy, 27 reflexivity: in civic thought and political discourse, 2; and democracy, 120; and post-normal science, 38–41; and precautionary principle, 42; in processes of modernization, calls for, 118; and set of concepts replacing risk paradigm, 48 regulatory failure, organizational environment and, 58–59 resilience, reflexivity and, 118 reverse onus principle, 45–46 Rio Declaration on Environment and Development (1992), 44, 45 Risikogesellschaft (Beck), 117–18 risk, environmental: current unprecedented levels, 1; as primary artifacts of industrial society, 118; questions surrounding, 3–4, 5–6. See also environmental justice; newer and noncanonical work on risk, disaster, and vulnerability Risk and Culture (Douglas and Wildavsky), 21, 21–23 risk assessment: blind spots and, 27–28; and complexity and diversity of risks, 34–35; and doubt about adequacy of risk data, 32–37; doubts about analytic methods, 35–37; goal of, as not in dispute, 32; post-normal science in, 39–41, 40; scientific, as fallacy, 35–37; scientific, limits to, 28; subjectivity in, 35; uncertainty, ambiguity, and ignorance as inescapable factors in, 35–37. See also data for risk assessment; risk paradigm, risk assessment in risk communication, in risk paradigm, 10, 13, 14 risk inherent in complex technology, public’s failure to grasp, 74. See also normal accident theory (NAT) risk management: Cold War concerns about political ramifications of, 24–25; difficulty of knowing adequacy of, 21; formal, masking of inherent uncertainty, 36–37; inevitability of tradeoffs in, 17, 25; overwhelming
nature of risk and, 32; political effects of excessive demand for prevention, 25–26; and risk displacement, 25–26. See also risk paradigm, risk management in risk paradigm, 10–14, 14, 15, 116; characteristics vs. precautionary principle, 43–44; cultural influences on risk perception and, 20–26, 47; discovery of problems with, 15–16, 28–32, 46–47, 116; doubts about adequacy of risk data and analytic methods, 32–37, 47, 116; foundational beliefs of, 14–15; inability to provide absolute assurance, 27–28, 47; as iterative process, 14; precautionary principle as alternative to, 41–42, 48, 116; as purely scientific approach, 46; risk communication in, 10, 13, 14; separation of fact finding and response, 14; set of concepts replacing, 48; three-part process of, 10; trust issues in risk perception and, 26–32 risk paradigm, risk assessment in, 10–11, 14; for ecological risk, 11, 12, 14; goals of, 11–12; for human health risk, 11–12, 14; and lack of data, 18, 47; psychological factors in, 18–20, 47, 116; variations in risk tolerance and, 17–18 risk paradigm, risk management in, 10, 12–13, 14; factors complicating, 12–13; unrealistic demands for, 15–17 risk perception: gaps in knowledge and, 21–22; psychological factors in, 18–20, 47, 116; in risk paradigm, trust issues and, 26–32. See also culture’s effect on risk perception Risk Society: Towards a New Modernity (Beck), 117–18 risk tolerance: culture’s effect on choice of concerns, 22–24; psychological factors in, 18–20, 47, 116; variation in, 17–18. See also “safe enough” levels of pollutants routine nonconformity, and organization deviance, 56 Ruckelshaus, William, 15–16, 47
rules, institutionalized, and organizational deviance, 58 “safe enough” levels of pollutants: focus on individual chemicals at specific locations, inadequacy of, 28–32; lack of adequate data on, 18, 31, 33; lack of information on synergistic effects, 30, 33; lack of method to analyze global cumulative effects, 30; and public’s lack of trust in experts’ opinions, 27; risk paradigm and, 17–18; and “safe until proven otherwise” approach, 31; subjectivity of, 18; and tradeoffs between safety and economy, 17, 25 Sandin, Per, 42 scholarship on environmental risk, 2; clear summary of, as goal, 3, 5, 7; compartmentalization of, 2–3 science: gaps in evidence on pollutants, 15–17; limits to, and inability to calculate “safe enough” pollutant levels, 28–32; range of social forces influencing, 119 science, democracy’s interface with: and participatory consultative processes for infrastructure, 120; and risk prevention, 118–19 Science and Technology Studies (STS), 2, 119 scientists, conflicting roles in post-normal science, 39 Silica Safety Association (SSA), 100 silicosis in workers, 98–101; and disadvantageous workers compensation rules, 99; government regulation and, 100; Hawks Nest disaster (1935), 98–101; industry struggle against government regulations, 100; lawsuits on, 99, 100 Slovic, Paul, 18–19 Smith Kline Beecham, 113 Snyder, Rick, 102 social media, and corporate manipulation of public, 97, 114 society: and disasters as opportunity for change, 90; disasters as test of resilience of, 84; engagement with, in post-normal science, 39–40, 40;
toxins: bioaccumulation and, 4; and cumulative effects, lack of method to analyze, 30; global impact of, 3–4, 44; persistent organic pollutants (POPs) and, 29; toxicity data, inadequacy of, 31, 33. See also lead poisoning; silicosis in workers trust, and risk perception, 26–32; experts’ inability to provide absolute assurance, 27–28; skepticism about corporate and government motives, 27 Trust Us, We’re Experts! (Rampton and Stauber), 113 uncertainty: environment of organization as source of, 57–58; as inescapable in risk assessment, 35–37; inherent, risk management’s masking of, 36–37 uncertainty, scientific: etiological uncertainty in healthcare, 33–34; normal accident theory (NAT) on accidents due to, 73; public policy on environment and, 15–17, 48; value of postnormal science in, 39–41, 40 Union Carbide: and Hawks Nest disaster (1935), 98–99. See also Bhopal gas leak disaster United Nations Framework Convention on Climate Change (1992), 44 United Nations World Charter for Nature (1982), 44 Vaughan, Diane, 56–59 victim blaming, in Bhopal gas leak disaster, 94–95 victims of disaster, types of, 89 Viyogi, Tara Singh, 111 vulnerability theory: definition of, 84; on demographic factors in vulnerability, 84–85; on disaster as opportunity for social and political change, 90; on disaster response, patterns of vulnerability exposed in, 89–90; on disasters as exacerbation of preexisting patterns of vulnerability, 84–85, 89; on disasters as test of social resilience and sustainability, 84; on hazards as factor in disaster risk, 84–85; on postdisaster recovery, inequality and contestation
in, 90–91; scholarship on, 84; on social stratification and vulnerability to risk, 87–90; on societal factors in disaster vulnerability, 84, 85–90; on three factors in disaster risk, 84–85; on victim types, 89; on vulnerability types, 85 vulnerability theory, on industrial accidents: corporate accountability and, 91–97, 117; and discursive vulnerabilities, 91, 110–14, 117; on expertise required for crisis mitigation, 91, 106–10, 117; focus on political and sociocultural dynamics, 91; and socioeconomic inequality and vulnerability, 91, 98–106, 117 vulnerability to environmental disaster: different manifestations globally, 114;
keywords, 82; questions surrounding, 4–5, 6; role, 81–82; variation across nations, 117; variation by socioeconomic status, 84, 87–89, 117. See also newer and noncanonical work on risk, disaster, and vulnerability Weber, Max, 46, 47, 55 Wildavsky, Aaron, 21, 21–26, 33 Williams, Raymond, 5 Wilson, Richard, 17–18 Wingspread Declaration (1998), 44 women, unequal rights and opportunities, 88 workers compensation, disadvantageous rules for silicosis, 99 World Charter for Nature (1982), 45 zero discharge principle, 45
About the Author
S. Ravi Rajan is Olga T. Griswold Chair and Professor of Environmental Studies at the University of California, Santa Cruz. He is the founder of the Global Environmental Justice Observatory at UC Santa Cruz, and hosts the Liminal Spaces Podcast. In the past, he served as Provost of College Eight (Rachel Carson College) at UCSC, Senior Research Fellow at the Asia Research Institute at the National University of Singapore, Visiting Senior Fellow at The Energy and Resources Institute (TERI), New Delhi, and Visiting Professor at TERI University. He is on the editorial board of the journal Environmental Justice, and was previously on the editorial boards of Environment and History and Society and Natural Resources. He has also held leadership positions in nonprofit organizations, including as President of the Board of Directors of Pesticide Action Network, North America (PANNA), member of the Board of Directors of Greenpeace International, and member of the Board of the International Media Project. For more, see https://ravirajan.sites.ucsc.edu.
Founded in 1893, University of California Press publishes bold, progressive books and journals on topics in the arts, humanities, social sciences, and natural sciences—with a focus on social justice issues—that inspire thought and action among readers worldwide. The UC Press Foundation raises funds to uphold the press’s vital role as an independent, nonprofit publisher, and receives philanthropic support from a wide range of individuals and institutions—and from committed readers like you. To learn more, visit ucpress.edu/supportus.