The Risk Society Revisited: Social Theory and Risk Governance [Illustrated] 1439902585, 9781439902585

Risk is a part of life. How we handle uncertainty and deal with potential threats influences decision making throughout our lives.


English Pages 264 [265] Year 2013


Table of contents:
In Memoriam
Contents
Foreword
Preface
Acknowledgments
Introduction
Part I. Social Science Foundations of Risk
1 Meta-Theoretical Foundations
2 An Evolution of Risk: Why Social Science Is Needed to Understand Risk
Part II. Risk and Social Theory
3 Overarching Perspective: The Rational Action Framework
4 Reflexive Modernization Theory and Risk
5 Risk in Systems: The Work of Niklas Luhmann
6 Jürgen Habermas and Risk: An Alternative to RAP?
Part III. Risk Governance: Links between Theory and Strategy
7 The Emergence of Systemic Risks
8 The Three Companions of Risk
9 Risk Governance: A Synthesis
10 An Analytic-Deliberative Process
Conclusion: Risk Governance as a Catalyst for Social Theory and Praxis
References
Index

The Risk Society Revisited

The Risk Society Revisited Social Theory and Governance

Eugene A. Rosa, Ortwin Renn, and Aaron M. McCright

temple university press Philadelphia

TEMPLE UNIVERSITY PRESS Philadelphia, Pennsylvania 19122 www.temple.edu/tempress

Copyright © 2014 by Temple University
All rights reserved
Published 2014

Library of Congress Cataloging-in-Publication Data
Rosa, Eugene A.
The risk society revisited : social theory and governance / Eugene A. Rosa, Ortwin Renn, and Aaron M. McCright.
pages cm
Includes bibliographical references and index.
ISBN 978-1-4399-0258-5 (cloth : alk. paper)
ISBN 978-1-4399-0260-8 (e-book)
1. Risk—Sociological aspects. 2. Technology—Social aspects. 3. Social structure. 4. Sociology. I. Renn, Ortwin. II. McCright, Aaron M. III. Title.
HM1101.R66 2014
361.1—dc23

2013016620

 The paper used in this publication meets the requirements of the American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI Z39.48-1992 Printed in the United States of America 2 4 6 8 9 7 5 3 1

To our inspiring colleague and dear friend Gene Rosa, who passed away before this book was published. We will miss him.

In Memoriam

Eugene A. Rosa, a pioneer in environmental sociology, died on February 21, 2013, at seventy-one. Gene was committed to linking the leading edge of the social sciences to the ecological and earth systems sciences, as well as to engineering. His work is truly interdisciplinary and is influential among scholars who span the social, ecological, and physical sciences. At the same time, his work is foundational to contemporary thinking in structural human ecology, the sociology of risk, and the sociology of energy.

Gene began his sociological career with his graduate work at the prestigious Maxwell School at Syracuse University, where he studied with Allan Mazur. Gene’s dissertation work was in what he termed “biosociology” to emphasize that he was studying the influence of the social on the biological, in contrast to the genetic reductionism of sociobiology. This work presaged the current interest in neurosociology. Allan and Gene published one of the first articles to demonstrate that energy consumption was decoupled from the quality of life in industrial economies. It may be the first macro-comparative analysis in environmental social science. It spawned further analysis and started to shift our understanding of energy consumption in contemporary societies. Gene continued to publish extensively on energy, which led to three other major themes in his work: nuclear power, risk, and structural human ecology.

After completing graduate school and a two-year postdoctoral fellowship at Stanford University, Gene moved to Washington State University. At the time, the Department of Sociology had an amazing cluster of scholars working on the environment and closely related issues: Bill Catton, Riley Dunlap, Lee Freese, Bill Freudenburg, and Jim Short. Gene thrived in this environment and collaborated
regularly with these colleagues, particularly on topics in nuclear power and risk. Nuclear power has been a contentious issue since the 1970s, and Gene and his colleagues were pioneers in developing a sociology of nuclear power through a series of influential articles and edited books. Recently, Gene led a distinguished collaboration of scholars who raised the importance of social science perspectives in assessing the nuclear waste issue. As a result, he was asked to testify before President Barack Obama’s Blue Ribbon Commission on America’s Nuclear Future. Although his health prevented him from testifying, other colleagues from the team he led did. Consequently, the commission’s reports were much more attentive to social science than otherwise would have been the case.

Gene’s interest in nuclear power led naturally to his foundational work in the sociology of risk, where he made immense and wide-ranging contributions. His cross-cultural comparisons of risk perceptions have been cited as an exemplar of comparative research methods, and his U.S.-based work on risk perceptions has been highly influential in both scholarship and policy. He made perhaps his most important contributions to the sociology of risk by engaging our basic conceptualizations of risk and risk policy. His famous article on the ontology and epistemology of risk, “Metatheoretical Foundations of Post-Normal Risk,” continues to spark discussion. One of his monographs, Risk, Uncertainty, and Rational Action, won the 2000–2002 Outstanding Publication Award from the Section on Environment and Technology of the American Sociological Association. This, his last book, engages current thinking on societal risk and offers both theoretical advances and suggestions about risk governance, while one of his last papers lays out a logic for broad comparisons of some of the most important risks facing society, including climate change and terrorism.
One of his major accomplishments in the field of risk research was to link the work on social theory of risk championed by Ulrich Beck, Anthony Giddens, and Niklas Luhmann to the risk management and governance community. He wanted risk theory to become practical for improving risk management and fostering sustainability. This interdisciplinary outreach has been a major stimulus for building bridges between the often very abstract risk theory camp in sociology and the more practical organizational- or governance-oriented studies of applied risk sociology. Overall, Gene published more than forty articles and book chapters on various aspects of risk, and this summary necessarily touches only a few of the themes he engaged. For the past two decades, Gene has been a leader of a research program in structural human ecology, an effort intended to bridge the social and ecological sciences in the analysis of human drivers of environmental change. With his collaborators Richard York and Tom Dietz, Gene established an analytical logic that evaluated the contributions of population, affluence, technology, institutions, culture, and other factors in shaping environmental stress. The work was a continuation of his pioneering analysis of energy consumption and quality of life and was germinal in advancing a new macrosociology of the environment. Gene’s work in structural human ecology has been published in journals across the social and ecological sciences. An edited volume, Human Footprints on the Global
Environment: Threats to Sustainability, examines structural human ecology and related approaches to global environmental change and won the Gerald R. Young Book Award from the Society for Human Ecology. The most recent thread in this work—examining the efficiency with which societies produce human well-being relative to the stress placed on the environment—is deeply resonant with his pioneering work on energy and quality of life. Gene considered it a new way to think about sustainability.

It is not surprising that so accomplished a scholar won many accolades. Gene was the Edward R. Meyer Distinguished Professor of Natural Resources and Environmental Policy in the Thomas S. Foley Institute of Public Policy and Public Service at Washington State University, where he was also the Boeing Distinguished Professor of Environmental Sociology and Regents Professor. He was a Fellow of the American Association for the Advancement of Science. He was one of only two people to twice win the Outstanding Publication Award of the Section on Environment and Technology of the American Sociological Association. The other two-time winner is one of his students, Richard York.

In addition to his scholarship, Gene was an accomplished artist and was very proud of his appointment as an Affiliated Professor of Fine Arts at Washington State University. His sculptures, which he described as Ecolage, have appeared regularly in the annual Faculty of Fine Arts Exhibition and were the subject of a solo exhibition at Washington State. Gene was also an avid collector; he had converted the top floor of his home in Moscow, Idaho, into a gallery for his collection of contemporary art.

Gene was an extraordinary friend and colleague. Whether with new ideas for research, sage advice about professional life and ethics, or gourmet cooking and his incredible collection of wines, his generosity was unfailing. Every conversation with Gene sparkled with new ideas and his unflagging good humor.
Having come from a working-class family in the Finger Lakes and Lake Erie region of New York State, Gene always had a sense of wonder at the social and intellectual journey he was taking. He was proud of his family and his heritage. He established the Luigi Gastaldo and Flora Brevette Rosa Endowment, named for his parents, at Washington State’s Museum of Art to fund transportation to the museum for children who otherwise might not be able to experience an art museum.

Thomas Dietz, Michigan State University
Aaron M. McCright, Michigan State University
Ortwin Renn, University of Stuttgart
Richard York, University of Oregon

Contents

Foreword: Risk Society as Political Category, by Ulrich Beck  xiii
Preface  xxv
Acknowledgments  xxix
Introduction: Sketching the Contemporary Era  1

Part I. Social Science Foundations of Risk
1 Meta-Theoretical Foundations  13
2 An Evolution of Risk: Why Social Science Is Needed to Understand Risk  33

Part II. Risk and Social Theory
3 Overarching Perspective: The Rational Action Framework  49
4 Reflexive Modernization Theory and Risk: The Work of Ulrich Beck and Anthony Giddens  69
5 Risk in Systems: The Work of Niklas Luhmann  102
6 Jürgen Habermas and Risk: An Alternative to RAP?  110

Part III. Risk Governance: Links between Theory and Strategy
7 The Emergence of Systemic Risks  123
8 The Three Companions of Risk: Complexity, Uncertainty, and Ambiguity  130
9 Risk Governance: A Synthesis  150
10 An Analytic-Deliberative Process: A Proposal for Better Risk Governance  170

Conclusion: Risk Governance as a Catalyst for Social Theory and Praxis  195
References  203
Index  231

Foreword: Risk Society as Political Category

When finishing the manuscript for my initial book, Risk Society: Towards a New Modernity, in 1986, I could not have expected that more than twenty-five years later, in 2012, three international social scientists specializing in different fields of risk research and management would publish The Risk Society Revisited: Social Theory and Governance. This is, first of all, an honor. Second, it is an indication that in the course of the 1990s, and especially at the beginning of the twenty-first century, a considerable number of “global risks” have emerged, writing ever new chapters of the “world risk society” script. Third, it proves the liveliness of the debate. Today, a vast body of literature across nations and academic disciplines is in dialogue with Risk Society (1992c), World Risk Society (1999), and World at Risk (2009b) and has used this framework for a great variety of issues.1 They include social inequality and class (Atkinson 2007b; Curran 2013; Goldthorpe 2002; Mythen 2005a),2 politics and power (Fischer 1998; Franklin 1998; Hajer 1997; Hood et al. 1999), work and labor markets (Allen and Henry 1997; Edgell 2012; Ekinsmyth 1999; Mythen 2005b; Webb 2006), international relations (Grande and Pauly 2005; Hajer 2009; Jarvis 2007), war and military (Heng 2006; Rasmussen 2006; Williams 2008), business and management (Fitchett and McDonagh 2000; Hutter and Power 2005; Matten 2004; Shrivastava 1995), insurance (Ericson and Doyle 2004; Ericson, Doyle, and Barry 2003),3 law and justice (Hudson 2003; Shearing and Johnston 2005), criminology (Ericson 2007; Ericson and Haggerty 1997; Kemshall 2003), terrorism (Amoore and de Goede 2008; Rosa et al. 2012), youth (Roberts 2010; Woodman 2009), media (Allan, Adam, and Carter 2000; Cottle 1998, 2009), technology and culture (Lash 2000; Loon 2002; Ong 2004), environmental politics and global warming (Bulkeley 2001; Goldblatt 1996; Hulme 2010; Urry 2011; Voß, Bauknecht, and Kemp 2006; Wynne 2002), social theory (Adam 1998; Adam, Beck, and Loon 2000; Arnoldi 2009; Clarke 2006; Jasanoff 1999; Latour 2003; Mythen 2004; Sørensen and Christiansen 2012; Strydom 2002; Urry 2004; Watson and Moran 2005; Zinn 2008), and, last but not least, globalization and cosmopolitanism (Blank 2011; Calhoun 2010; Chang 2010; Delanty 2009; Gilroy 2010; Han and Shim 2010)—just to name some of the more recent examples.

Let me illustrate the last topic. Modern societies—Western and non-Western alike—are confronted with qualitatively new problems, which create the “cosmopolitan imperative”: cooperate or fail! This cosmopolitan imperative arises because of the spread and scale of global risks: nuclear risks, ecological risks, technological risks, economic risks, informational risks created by radicalized modernity and insufficiently regulated financial markets, and so on. These new global risks have at least two consequences. First, they mix the “native” with the “foreign” and create an everyday global awareness. Second, the concept of imagined communities, so beautifully outlined by Anderson (1991), has to be redefined. However, the result of global interconnectedness is not a “pure” normative cosmopolitanism of a world without borders. Instead, these risks produce a new hybrid cosmopolization—the global other is in our midst.

1. In what follows, I refer to and list only the English-language literature.
2. For the discussion with Atkinson on class and individualization, see Beck 2007. For the discussion on class and risk with Curran, see Beck 2013a.
What emerges is the universal possibility of “risk communities” that spring up, establish themselves, and become aware of their cosmopolitan composition—“imagined cosmopolitan communities” that might come into existence in the awareness that dangers or risks can no longer be socially delimited in space or time (Beck 2006a, 2011).

What makes this book by Eugene A. Rosa, Ortwin Renn, and Aaron M. McCright outstanding is first that the authors are comparing “the thinking of key European scholars to describe the contours . . . of this world risk society. A primary aim of this book was to capture the essence of their writings, critically evaluate their logic and theoretical implications, and identify key gaps that need to be filled. We distilled and summarized the works of Ulrich Beck, Anthony Giddens, Niklas Luhmann, and Jürgen Habermas and attempted to work through the logic of their theories as they bear on risk” (pp. 195–196 herein).4 Secondly, they share the “common foundation for describing the current era as a categorical break with the past” (p. 196 herein), thereby proving an interdisciplinary explanatory framework for the new character of risks and the institutional failure of modern societies and states in tackling risks. Consequently, they are thirdly looking for a way out of the risk dilemma by writing “a proposal for better risk governance” (p. 170 herein).

These are undoubtedly important steps that may help sociology, whose self-understanding is rooted in the experiences of the nineteenth and twentieth centuries, to become more receptive to the new realities at the beginning of the twenty-first century. Today one often observes a general avoidance reflex among social scientists in the face of a global risk situation whose upheavals overtax the familiar instruments of theory, of firmly established expectations of social change, and of the classical means of politics. Moreover, this avoidance reflex ensures that the social sciences are irrelevant for contemporary public debates (Burawoy 2005). This is especially true of many sociologists who in their theories and research mainly pursue the question of the reproduction of social and political order while ignoring the transformation of this order. We face the epistemological challenges of a “changing order of change.”5 But most sociology asks, in view of the present and the future and through all upheavals, not how the class system is uprooted, but how it is reproduced (Atkinson 2007b; Bourdieu 1984; Goldthorpe 2002; Scott 2002), how the system of power is reproduced (Foucault 1980), and how the (autopoietic) system is reproduced (Luhmann 1995), etc.6 To be sure, these authors focus on the normal functioning of society and politics and are acute observers of the resistance of established institutions to change. Nevertheless, it must be asked how far this very resistance of their categorical outlook to transformation blinds them to the explosive dynamic of transformation that is currently changing the world.

3. For the discussion with Ericson and Doyle (2004) on terrorist risk and insurance, see Beck 2009b: chap. 8.
4. In fact, the list of named risk theory authors could or should include Douglas 1994; Ewald 1986; and Foucault 1980 (see Arnoldi 2009; Mythen 2004; Wilkinson 2010).
What I mean by this becomes abundantly clear when one thinks of the major risk events of recent decades: Chernobyl, 9/11, climate change, the financial crisis, Fukushima, and the Euro crisis. Three cosmopolitan features are common to them all. First, because they give rise to a dramatic radicalization of social inequality both internationally and intra-nationally, they cannot continue to be conceptualized in terms of the established empirical-analytical, conceptual instrumentarium of class analysis as “class conflicts in the class society.” By contrast, they indeed vary the narrative of discontinuity as contained in the theory of the world risk society. Second, before they actually occurred, they were inconceivable. And third, they are global in character and in their consequences and render the progressive networking of spaces of action and environments tangible. These “cosmopolitical events” were not only not envisaged in the paradigm of the reproduction of the social and political system, but they fall outside of this frame of reference in principle and as a result place it in question.

In contrast, my theory of the “world risk society” consciously starts from the premise of the self-endangerment of modernity. It attaches central importance to the question of how, in view of impending catastrophe, the social and political system of the nation-state is beginning to crumble and how the understanding of the basic concepts of modern sociology and society—hence, the understanding not only of risk, but also of power, class, the state, the nation, religion, family, the household, work, and love—are undergoing open-ended transformations. For more than twenty-five years now I have been arguing that it is not the social reproduction of class that is the master framing, but it is the social explosiveness of global risk that is the frame for understanding the transformations of modernity.

This debate on class and risk, or class and individualization, suffers from the fact that class theorists, trapped in the “class logic,” can conceive of the antithesis to the persistence of classes only in terms of the “disappearance of classes”—specifically, in terms of a decrease in inequality and an increase in equality. However, that is precisely not my perspective. The antithesis to the sociology of class that I propose and develop attaches central importance, on the contrary, to the radicalization of social inequality. Class is too soft a category to capture the explosiveness of social inequality in world risk society.

5. This change in the framing of change is what the theory of reflexive modernization is about. This transformation is propelled by industrial modernity and represents a natural outgrowth of its success rather than its failure and crisis. It is the victory of industrial capitalism which produces outcomes and secondary consequences that are undermining their own institutional foundation. “By virtue of its inherent dynamism modern society is undercutting its formation of class, stratum, occupation, sex roles, nuclear family, plant, business sectors and of course also the prerequisites and continuing forms of natural techno-economic progress” (Beck, Bonss, and Lau 2003; Beck, Giddens, and Lash 1994: 2; Beck and Lau 2005).
6. Interestingly, this is not true of the founders of sociology, who were interested in the transformation of social and political systems, as the works of Auguste Comte and Karl Marx, though also of Max Weber, Georg Simmel, Émile Durkheim, etc., impressively testify.
This forces us to overcome the epistemological monopoly of the concept of class over social inequality and to uncouple historical classes from social inequality, something which is evidently inconceivable for analysts of class (Beck 2007; Atkinson 2007b; Curran 2013).

Eugene A. Rosa, Ortwin Renn, and Aaron M. McCright are to be congratulated for a major effort to overcome at least some of the obstacles which block sociologists from understanding the “inconceivableness” of a world in transformation. I have never read anything as sophisticated in comparing highly abstract social theories and building bridges to include and combine those theoretical reflections with empirical and political issues and dilemmas of risk management. In fact, I myself, participating in the debate on risk, didn’t quite realize that theories of modernity can be read as contributions to the problématique of world risk society, highlighting one of the major debates in modern European sociology, as the authors of this book convincingly do. However, confronting the theories with the question “How can societies develop the institutional and political means for governing [new] risk effectively?” (p. 9 herein), the authors seem to be rather disappointed:

If we raise the simple question “What is the route out of this undesirable state of affairs?” the common answer by Beck, Giddens, and Luhmann is, “There is none. This is the fait accompli of advanced modernity.” This is the inescapable state of the human condition at this time. The most we can hope to do is to monitor and mitigate the worst effects. For Luhmann this presumably would be one of the results of the self-referential examination of the environmental consequences of system functioning. Giddens and Beck, providing for a more active role for institutions, agree that the vehicle for traveling this route can only be a reconstituted form of democracy—certainly not any version of socialism. They also see a crucial role to be played by social movements. But Giddens and Beck are far more successful at theorizing the nature of the risk society and developing a perspective than at explicating the means for addressing its dislocations and moral issues, other than to adumbrate a fairly vague vision around a revitalized form of democracy. The contours of such a revitalization have yet to crystallize. (pp. 198–199 herein)

The “war against risks” (the anticipation of a catastrophe) goes on, but contemporary risks are first and foremost “problems resulting from techno-economic development itself” (Beck 1992c: 19). The most alarming of risks are now the products of problem-solving, one can say; the indifference to the political dimension of risk society in our risk-infested society gives rise to “the solution as problem” (Bauman 1992: 25). World risk society is a political category in itself and involves the social explosiveness of global risk conflicts. “The ‘social explosiveness’ of global financial risks,” as I wrote in 1999, “is becoming real: it sets off a dynamic of cultural and political change that undermines bureaucracies, challenges the dominance of classical economics and of neoliberalism and redraws the boundaries and battlefields of contemporary politics” (Beck 1999: 8).
It was the Lehman Brothers bankruptcy in 2008 that actually plunged the global financial system into a crisis that threatened its very survival. Combined with the global climate risk, the global financial risk—now exacerbated into threatening the existence of the Euro and of a united Europe—has generated a difficult historical moment that conforms to Antonio Gramsci’s definition of crisis. “Crisis,” Gramsci (1971: 276) says, “consists precisely in the fact that the older order is dying and the new order cannot be born.” This is what we are currently experiencing: the rupture, the interregnum, the simultaneity of collapse, and a new open-ended departure which the paradigm of reproduction of order fails to grasp. For this is a matter of state-mediated redistributions of risk across nation-state borders which cannot be forced into the narrow pigeonhole of “class conflict.” The risks posed by big banks are being socialized by the state and imposed on retirees through austerity dictates. Risk redistribution conflicts are breaking out between debtor countries and lender countries not only in Europe, but across the world. This tension certainly cannot be read in terms of a “class conflict between countries.” These risks are instead giving rise to highly explosive political risk conflicts between and within European member states, as can be observed in real time over the past two years in the case of the almost endless Euro crisis (Beck 2013b).

In the following I will discuss some of the ambiguities in the conceptual architecture of this book by emphasizing the dualisms: (1) risk versus harm–catastrophe; (2) threat versus risk–manufactured uncertainty; (3) the logic of war versus that of global risks.

Risk versus Harm; Risk versus Catastrophe

As far as I can see, the authors fail to clearly recognize the key distinction between risk and catastrophe, and risk and harm. We are currently experiencing an inflation in talk of “catastrophes,” “breakdowns,” or the “looming end.” Yet, here we must distinguish carefully between catastrophe and the rhetoric of catastrophe. For example, the Euro, at least at the present time of writing (August 2012), has not yet collapsed. The point, therefore, is that a possible catastrophe, which could occur in the future, is to be prevented by its anticipation in the present. This is exactly what is meant by the concept of risk in my theory of the world risk society. It stresses the need to distinguish between the harm or catastrophe that has actually occurred and the impending harm or catastrophe—that is, the risk.

Therefore, risk society, to be precise, is not a catastrophe society or a Titanic society. On the contrary, the concept of risk not only contains descriptions of how (to develop the image) “the iceberg can be circumnavigated” and hence the sinking of the Titanic prevented. It is based instead on an image of the world that replaces the fateful catastrophe, the “too late,” by the exhortation to act. The world risk society is a society perceiving itself to be encouraged and even forced to search for alternatives to Max Weber’s “iron cage” of risk capitalism. Why is this so? This becomes necessary in order to prevent “the inconceivable,” the threat to humanity. Visions of alternative modernities which have not been taken seriously so far are suddenly getting on the agenda of mainstream international and national politics: “Tobin tax,” “green economy,” “renewable energy,” “European minister of finance,” “United States of Europe,” “cosmopolitan democracy,” etc.
There is to be discovered an “elective affinity” (Wahlverwandtschaft) between my theory of world risk society and Ernst Bloch’s (1995) “The Principle of Hope.” Risk society comprises the dialectics between the “realistic dystopia” of catastrophe to humanity and the “realistic utopia” of reinventing the political for the “Global Age” (Albrow 1996). It is, therefore, the opposite of the “postmodern constellation.” It is a self-critical, highly political society in a new sense: the transnational dialogue and cooperation between politics and democracy—and perhaps even sociology—becomes a matter of survival.

Generally, the talk is of “crisis.” But here, by contrast, we speak in terms of “risk.” What is the relation between these two conceptual schemes? The concept of “risk” contains the concept of “crisis,” but goes beyond it essentially in three points. First, the concept of crisis effaces the distinction between (staged) risk as the present future and catastrophe as the future future (about which we cannot ultimately know anything). Speaking in terms of crisis “ontologies,” as it were, it is the difference between anticipated and actual catastrophe, risk and harm, to which the theory of the world risk society attaches central importance. Second, today the use of the concept of crisis misleadingly suggests that it is possible to return to the status quo in the process of surmounting crises. In contrast, the concept of risk reveals the “difference of the century” between what is threatening globally and what answers are possible within the framework of nation-state politics. This, however, also implies, third, that risk as it is understood is not a state of exception—like crisis—but is instead the normal situation, and hence the motor driving a major transformation of society and politics.

Bryan S. Turner (like many others) criticizes the theory of world risk society on the basis of what could be called a kind of “naïve risk realism,” because it ignores the conceptual difference between risk and harm/catastrophe. As Turner notes:

A serious criticism of Beck’s arguments would be to suggest that risk has not changed so profoundly and significantly over the last three centuries. For example, were the epidemics of syphilis and bubonic plague in earlier periods any different from the modern environment illnesses to which Beck draws our attention? That is, do Beck’s criteria of risk, such as their impersonal and unobservable nature, really stand up to historical scrutiny? The devastating plagues of earlier centuries were certainly global, democratic, and general. Peasants and aristocrats died equally horrible deaths. In addition, with the spread of capitalist colonialism, it is clearly the case that in previous centuries many aboriginal peoples such as those of North America and Australia were engulfed by environmental, medical, and political catastrophes which wiped out entire populations.
If we take a broader view of the notion of risk as entailing at least a strong cultural element whereby risk is seen to be a necessary part of the human condition, then we could argue that the profound uncertainties about life, which occasionally overwhelmed earlier civilizations, were not unlike the anxieties of our own fin-de-siècle civilizations. (Turner, quoted in Elliott 2002: 300)

Others, by contrast, very carefully distinguish at the level of empirical analysis between harmful events and risk situations, for they operationalize these events and situations in terms of empirical indicators of spatial vulnerability (Bankoff, Frerks, and Hilhorst 2004; Blok 2012). For example, we repeatedly find that the poorest citizens live in the geographical zones in which the risks of toxic exposure, risks of inundation, and risks of destruction through hurricanes, cyclones, and landslides are the greatest, whereas the wealthiest, relatively speaking, monopolize the least vulnerable residential areas (Curran 2013: 14–15).

Threat, Risk, and Manufactured Uncertainty

The authors note, “The reality that risk has always been and will continue forever to be a universal feature of the human condition is nearly unassailable” (p. 195 herein). It therefore seems necessary to me to distinguish between three types of uncertainty: threats, risks, and manufactured uncertainties. Risk is a modern concept that presupposes human decisions and humanly made futures (probability, technology, modernization). Risk-as-anticipation is the turning point for modern technology, as it has to embrace the future as an extended present. While the confidence in large-scale planning and regulation has proved deceptive, the concept of risk calls for an engagement with the future which is both less speculative and less careless, and opts for a political commitment to responsibility and accountability. This modern concept of risk has to be distinguished from “manufactured uncertainties,” as Anthony Giddens (1994b) and I argue.

Typically today, communication and conflict flare up around this particular type of new manufactured risk. Natural disasters—threats—coming from the outside and thus attributable to God or nature, such as prevailed in the pre-modern period, no longer have this effect. Nor do the specific calculable uncertainties—risks—that are determinable with actuarial precision in terms of a probability calculus backed up by insurance and monetary compensation. These were typical of early modern industrial society (in Europe and the United States). At the center of attention today, by contrast, are “manufactured uncertainties.” They are distinguished by the fact that they are dependent on human decisions, created by society itself, embedded in society, and thus not externalizable, collectively imposed, and thus individually unavoidable.
Perceptions of them break with the past, break with experienced risks, and break with institutionalized trust and security norms; they are incalculable, uncontrollable, and in the final analysis no longer (privately) insurable: “it makes no sense to insure oneself against a global recession” (Beck 1999: 8). From this two things follow. First, the realistic dystopia of catastrophe has to be prevented by all means. Second, the utopia of fundamental change, therefore, becomes realistic and rational.

Threat, risk, and manufactured uncertainty can be differentiated in ideal-typical terms as outlined here. But in reality they intersect and commingle. In fact, the problems of drawing hard and fast distinctions between these politically very differently valued aspects of future uncertainty comprise a decisive focus and motor of risk conflicts.7

The world risk society is distinct from nation-state modernity in this crucial respect: the “social compact” or “risk contract” is increasingly broken down (Ewald 1986). As regards the state, this implies the following. Confronted with global risks, the side effects of the success of capitalistic modernity—the leading patterns of political organization that, since the Peace of Westphalia in 1648, have governed society in terms of its spatial-political and economic configuration—are now being eroded by economic and political activities that occur between states and by processes that are not bound to the state. The outcome is the transition from a Westphalian-based system of government to a post-Westphalian system of governance, where the bounds of the state and its capacity effectively to regulate and control all manner of processes, risks, and externalities are soundly compromised (Eberlein and Grande 2005; Grande and Zangl 2011; Hajer 2009; Jarvis 2007; Peters 2005; Voß, Bauknecht, and Kemp 2006).

This “risk governance” is exactly a main topic of Rosa, Renn, and McCright. They ask:

What actions can be taken to deal with the problématique [of risk]? Traditional theorizing is typically framed to address “what is” questions, to explain the past or current state of the world, and to identify the social forces that have made it so. So, too, for the risk society theorists. From their theoretical standpoint, assessing and managing specific risks is of little intrinsic interest. While they may adumbrate broad social intentions and practices in the risk society, they leave the dirty work of risk governance and management to others. Therefore, analysts, policymakers, and practitioners must do the work of anticipating risk and managing it. Stated differently, the key goal of this book is to integrate the lofty whiteness of risk society theory with the sooty details of risk decision making. (pp. 4–5 herein)

But the crucial question then becomes: Who are the deciders? Who are the constituents of risk governance?

In the world risk society “methodological nationalism” becomes antiquated not only as a conceptual framework but also in its relationship with the governed. This is a fundamental reason why highly professionalized social science, on the one hand, loses itself in a nirvana of abstractions and, on the other, breaks down into an esoteric fragmentation of unrelated, highly detailed, empirical research projects (Burawoy 2005). In methodological nationalism the nation-state categorical frame of research coincides, unknown to the researchers, with specific national actor categories (for example, social classes, unions, parties, civil society movements, national governments, etc.). This definitely does not hold for the relationship of the various actors of global risk governance. Rather, the subjects of transnational risk governance:

are cosmopolitan coalitions of actors in all their diversity. They are to be found in these heterogeneous, permanently fluctuating coalitions which include governments, national and international ‘sub-politics’, international organizations, informal gatherings of states (such as the G-8, G-20), etc. It is in such coalitions that hegemonic struggles are fought out between conflicting projects, all claiming to represent the universal and to define the symbolic parameters of social life. And in forging such coalitions across borders of all kinds, new spheres and spaces of political action open up, under conditions which still have to be researched. Confronted with a new quality of global dependencies and interdependencies, no single player can expect to win on his/her own; they are all dependent on coalitions, alliances and networks (Beck 2005). This is the way, then, in which the hazy power game of ‘global domestic politics’ opens up its own immanent alternatives and oppositions and creates novel collective identities and political subjects. (Beck and Grande 2010: 435–436)

7. In order to illustrate the necessity of a “cosmopolitan turn” to risk theory and analysis, it is helpful to distinguish between “self-induced” and “externally induced” dangers. This allows us to locate the problems of inequality and dominance within the concept of globalized risks itself. In this way, it is possible to grasp more precisely that there is a radical inequality in the situations of decision-makers and of those affected by risks and/or dangers. With the cosmopolitan turn it becomes evident that the distinction between self-induced risk and external risk is a cosmopolitan flash-point, insofar as the relation of whole regions of the world to one another can be analyzed in terms of the externalization of self-produced dangers, i.e., by shifting them onto others. The powerful produce and profit from the risks, whereas the powerless are affected to the core of their being by the “side effects” of decisions taken by others (Beck and Grande 2010: 423; Mason 2005; see also the special issue “Varieties of Second Modernity,” British Journal of Sociology 61(3), 2010).

The Logic of War versus the Logic of Risk

“Risk society” is a political category. It contains (as I argued before) a transformation of “the political.” This can also be shown by distinguishing between the logic of threat inherent in war and the logic of threat inherent in risk (Beck 2011). The threat logic of war is a matter of the antagonism between nations. The strength and the survival of the nation depend ultimately on the readiness of each member of the nation to sacrifice his or her life for it. Action in world risk society is based on the complete opposite—that the interest in survival of all becomes the self-interest of each individual. The threat logic of global risk, by contrast, is concerned with cross-border cooperation to stave off catastrophes.

Wars by their very logic spark the us-versus-them conflict between nations. Living and surviving within the horizon of global risk, by contrast, follows precisely the contrary logic. Here it becomes rational to overcome the us-versus-them conflict and to recognize us-with-others as cooperating partners. Risk, therefore, directs our attention to the explosion of a global plurality that the logic of war negates. World risk society opens up a moral space out of which a civic culture of responsibility reaching across borders and antagonisms could (but by no means must) emerge. As the bipolar world fades away, we are moving from a world of enemies to one of dangers and risks.

A further difference between these two conflict logics lies in the identifiability of the threat. In the threat logic of war, the enemy can be clearly identified. Within the framework of the threat logic of risk, often it is not possible to identify a concrete actor, an antagonistic intention, or the causal sequences of risk. The threat is not direct, intentional, and certain, but indirect, unintended, and uncertain. Risk, not war, is the determining factor of power, identity, and the future.

The scope of the theory of world risk society is ultimately a function of the following. The socially constructed memory of past catastrophes (the Holocaust, world wars, colonialism, Hiroshima, Chernobyl, 9/11, Fukushima) initiates an “ethics of never again.” The human catastrophe of yesteryear was not prevented. The human catastrophe of tomorrow can and must be prevented. This is the ethics of the never-again which is inherent in the logic of global risk (Beck and Sznaider 2011).

To summarize, what does “risk society” as a political category mean in the context of research? In response to my critics, I have earlier outlined potential lines of further inquiry:

I argue for the opening up to democratic scrutiny of previous depoliticized realms of decision-making and for the need to recognize the ways in which contemporary debates of this sort are constrained by the epistemological and legal systems within which they are conducted. This then is one of the themes which I would like to see explored further, preferably on a comparative transnational, transcultural, potentially global level. It would entail that we reconstruct the social definition of risks and risk management in different cultural framings, that we find out about the (negative) power of risk conflicts and definition where people who do not want to communicate with each other are forced together into a ‘community’ of shared (global) risks, and that we therefore combine it with the questions of organized irresponsibility and relations of definition in different cultural-political settings. This, it seems to me, would be a worthwhile new conceptual and political social science. (Beck 2000c: 226–227)

The routines of the mainstream obscure how breathtakingly exciting sociology could become again.
The early sociologists were fascinated by the newly discovered, but yet to be surveyed, continent called “society.” A reflection of this fascination could reappear if the curiosity of discovering and testing with highly developed professional methods the unexplored landscapes, enthusiasms, contradictions, and dilemmas of world risk society and its resources and perspectives for governance and action were to revive the sociological imagination—a process that the authors of this book have already begun.

Ulrich Beck, University of Munich

Preface

Our life of fishing is so perilous that even though we worship all the gods in the world, many of us still die untimely deaths.
—Noriko Ogiwara, Dragon Sword and Wind Child (1993)

The twentieth century was largely shaped by successive waves of a technological torrent. Comprising a remarkable range of technological changes, the torrent could hardly have been anticipated, whether by those humans who came before or by those actually living in this revolutionary century (Hughes 2004). Included among the many developments were motorized personal transport, air travel and space flight, a remarkable range of synthesized chemicals, the ability to harness the atom for commercial uses, high-speed personal computing, and the myriad forms of electronic communication that now crisscross the world. The century witnessed breakthroughs not only in our understanding of the genetic structures of many species but also in the capacity to modify organisms in a variety of ways to serve human purposes. Now part of the biological toolbox are such techniques as xenotransplantation of organs, creation of synthetic organs, and sophisticated genetic testing to anticipate the abnormalities of fetuses or sensitivity to toxic exposure. Nanotechnology, the ability to control matter at the atomic or molecular scale, is already operational in a variety of consumer products. Any one of these innovations would have astounded the most visionary observers of technological change, such as science fiction writer H. G. Wells.

But all scientific developments that drive technological change are accompanied by an unshakeable traveling companion: risk. Each innovation entails the possibility not only of gain or enjoyment but also of loss or misfortune. Each embeds human action in an arena of uncertainty. Motorized ground and air transport are accompanied by the risk of injury or death. Generating electricity with nuclear power is accompanied by the possibility of radiation exposure at any stage of the fuel cycle. Modifying the genetic structure of organisms
is accompanied by the risk of uncontrollable hybridizations. Transplanting organs or creating new organs is accompanied by the risk of animal viruses crossing the species barrier and causing human disease or the risk of creating hybridized humans. And creating motors at the molecular level is accompanied by the risk of creating self-replicating technological devices that may escape human control.

In the twentieth century the world population grew from 1.6 billion to over 6 billion people, a nearly fourfold increase. During the same period the gross domestic world product went from $2.9 trillion to over $60 trillion, a more than twentyfold increase. These two key drivers, along with the rapid technological changes described above, resulted in yet another entirely new species of risk: grand risks that threaten the entire planet. Perhaps none of these species of risk is more daunting than the risk of outpacing nature’s capital and services. Humans in the twentieth century placed a greater burden on the earth’s ecosystems than at any time in previous history. We now face the distinct possibility that the planet will not provide the ecological capital and services that humans will need for long-term survival.

So, the twentieth century was not only a century of great invention but also one of increased awareness of the risks associated with technological innovation, unprecedented human intervention into nature, and economic growth. The realization that many inventions carried unwanted side effects or produced unintended consequences came into sharp focus. Patterns of global warming, species loss, desertification, deforestation, and resource exhaustion signaled that human survival itself is at risk. The practice of anticipating and assessing risks reached a level of sophistication unknown to our forebears. A global “risk conscience” emerged. A variety of techniques were developed to characterize risks.
Risk characterization and management are now routine practices in a wide variety of settings—private sector, policy arenas, financial markets, military operations, and regulatory regimes. This emergence of risk assessment and management as an institution began with the technical tasks of estimating both the degree of uncertainty—the probability that some outcome would happen—and the consequences if an outcome did occur (Kaplan and Garrick 1981; Rowe 1977; Wilson and Crouch 2001).

Also appearing early with the institutionalization of risk was a focus on the psychological features of risk. Perceptions, attitudes, beliefs about risks, and risk behavior dominated the social science of risk (Boholm 1998; Covello 1983; Heimer 1988; Jaeger et al. 2001). This body of research was augmented by research on human decision making under conditions of uncertainty (Kahneman, Slovic, and Tversky 1982; Kahneman and Tversky 1979, 1982). As a result we now have a considerable body of cumulative knowledge about the psychological features of risk. We know that a variety of qualitative factors shape perceptions and interpretations of risk (Slovic 1987, 2001). We know that laypeople typically differ from experts in their beliefs about risk (Dietz and Rycroft 1987; Slovic 1999). We know that a variety of social, political, and other factors can
amplify or dampen risk judgments (Kasperson et al. 1988; Machlis and Rosa 1990). And we know that trust in the institutions responsible for managing risks markedly influences people’s attitudes toward risk (Flynn et al. 1992; Slovic 1999). Moreover, social and cultural values as well as worldviews color our propensity to emphasize the benefits, on the one hand, or the risks, on the other hand, of an activity or a technology (Douglas and Wildavsky 1983; Rayner 1992; Thompson, Ellis, and Wildavsky 1990; Wildavsky and Dake 1990).

The cumulative knowledge delineating the risk conscience has been invaluable for bridging the divide between lay and expert knowledge about risk. Furthermore, it set the stage for developing effective techniques for combining both types of knowledge in risk decision-making. It has been invaluable, too, for developing effective campaigns of risk communication. And in a limited way it has been useful in directing behavioral change. But such attempts at social engineering have their limitations.

The twentieth-century risk conscience further produced an awareness that the rapid growth and global spread of risks have major consequences for society. A concern about these consequences was, to some extent, embedded in the process of risk characterization. For example, take a comparison of the estimated fatalities of two risks, X and Y. If their probabilities were comparable but one, say X, was expected to have a greater number of fatalities, other things being equal, we would say that it had a higher social cost than Y. We might even be in a position to say that the risk and cost are higher for some segments of society and lower for other segments. But we would be at a loss to address such questions as: How did this risk arise in the first place? Who or what is responsible for risk? Who benefits, and who loses? What are the institutional processes that embed risk?
How do considerations of risk enter the political process, and what are the means for the effective governance of risk? What does the global reach of common risks mean for international regulatory practices? And perhaps most challenging of all, to what extent has the growth and spread of risks changed the fundamental character of society?

These are core sociological questions. They are the raw material that animates the sociological imagination. Early in the process of the academic institutionalization of risk—with its technical and psychological focus—these questions remained unaddressed. That all changed with the emergence of attention by American and some European social scientists on specific technological risks, such as nuclear power and other technologies (Dunlap, Kraft, and Rosa 1993; Freudenburg and Rosa 1984; Perrow 1999), on toxic wastes (Clarke 1989; Edelstein 1988), on disasters (Erikson 1994), on technological systems (Perrow 1999), on general social processes (Short 1984), on ecological risks threatening the sustainability of all societies (Catton 1980), and on others.

But the real impetus from sociology came from social theories about risk, especially in Europe. The works of Ulrich Beck, Anthony Giddens, and Niklas Luhmann beginning in the 1980s and 1990s and, in the cases of Beck and Giddens, continuing into the twenty-first century, set the stage for broad sociological thinking about risk. Beck’s neologism “Risk Society” provided the collective theme of these separate
theorists. The first aim of this book is to capture this theme, to critically evaluate its logic and theoretical implications, and to identify key gaps that need to be filled.

For addressing many of those gaps—especially in the connection between risk and its governance—we lean on the work of Jürgen Habermas. From the American pragmatist John Dewey, Habermas adopted and refined the idea of the public sphere. The public sphere, in its basic form, is an arena that mediates between the private sphere and formal political structures (parliaments and other elected bodies). It is the arena of open discourse that steers political action. True democratic governance emanates from this sphere. And this is the second focus of this book: governance. In particular, we examine and critically evaluate governance where judgments about risk are central to public decisions that need to be made. What are fair and effective procedures for making risk decisions in a democratic society?

Eugene A. Rosa, Washington State University
Ortwin Renn, University of Stuttgart
Aaron M. McCright, Michigan State University

Acknowledgments

We began working on this project in 1997. It was initially pitched as an article-length manuscript explicating the theoretical contributions on societal risk of Ulrich Beck, Anthony Giddens, and Niklas Luhmann—bringing in the work of Jürgen Habermas at the end to provide insights on the democratization of risk management. Over time, we realized that our objectives would be better served by expanding the scope of our project and pitching it as a book-length manuscript.

At various stages of this project, we received many valuable comments on different versions of our manuscript from Robert Brulle, Tom Burns, Thomas Dietz, Jürgen Habermas, Carol Heimer, Michael Howes, Raymond Murphy, Luigi Pellizzoni, Lauren Richter, Arthur Stinchcombe, and Thomas Webler. Additional thanks are due to Scott Atran, Chip Clark, Paul Ehrlich, Carlo Jaeger, Ursula Heise, Roger Kasperson, Ilan Kelman, Donald Kennedy, Andreas Klinke, Joseph Kremer, Thomas Leschine, Nora Machado, Allan Mazur, Amy Mazur, Edward Miles, Hal Mooney, Susanne Moser, Richard Moss, Bonnie Ram, Pia-Johanna Schweizer, Piet Sellke, and Paul Stern. We also thank Sarah-Kristina Wist for her diligent editing of our references section and Nina Fischer for her meticulous work creating our index.

We extend a special thanks to Micah Kleit, our editor at Temple University Press. He has been extraordinarily patient, understanding, supportive, and flexible as we completed our chapters—always long past the deadlines for which we aimed.

Eugene A. Rosa was supported by the Boeing Distinguished Professorship of Environmental Sociology, by the Edward R. Meyer Distinguished Professorship of Natural Resource and Environmental Policy, and by Julie Roberts.

Ortwin Renn has been supported by the Helmholtz Alliance “Future Infrastructures for Meeting Energy Demands: Towards Sustainability and Social Compatibility.”

Aaron M. McCright has been supported by the Dutton Fellowship in Science and Technology Studies from Lyman Briggs College at Michigan State University.

Introduction
Sketching the Contemporary Era

Democracy cannot dominate every domain—that would destroy expertise—and expertise cannot dominate every domain—that would destroy democracy.
—Harry Collins and Robert Evans, Rethinking Expertise (2007)

Countless thinkers, especially in philosophy and the humanities, throughout the ages have sought to capture and document the “human condition.” What does it mean to be human? Who are we in the context of personal, social, and historical circumstances? What are the material, cultural, and spiritual conditions that shape us? What resources and constraints confront us not only in our daily lives, but also in our relationships with the institutions and the processes that connect us all? What is the range of choices, what decisions do we make when faced with those choices, and why? Countless characterizations have filled the caldron quest that seeks an answer to the question of the human condition, the question of who we are—as individuals, as social animals, and as a species.

Risk: Taming Uncertainty

Whatever specific answer scholars may put to the question, one ineluctable feature of the conditions in which humans have found themselves is uncertainty. It has always been this way. It will always be this way, even as the context of individual and societal conditions inevitably changes. Since our evolutionary origins, Homo sapiens always have been confronted with choices under conditions of uncertainty—that is, states of the world where the circumstances are unknown or undeterminable but decisions must be made. Often, that uncertainty stems from threats to humans and what they value—that is, the threats we now call risk.

As we have outlined, the advanced modernity of the twentieth century witnessed a hyper-accelerated change in the number of risks produced, their magnitude, and their global spread. Early modernity, of course, laid the groundwork for this evolutionary past. A considerable body of research has documented the now well-known structural changes associated with modernity: the rise of science and engineering, industrialization, economic growth, urbanization, demographic shifts, the emergence of markets and their globalization, new transportation infrastructures, and the expansion of global communications.

But often neglected in modernity studies is the most fundamental change of all: a change in worldview. In particular, serving as both a cause and an effect of modernity was a shift in how we thought historical processes unfolded. Premodern thinking was dominated by the belief that life was cyclical; modern thinking, in contrast, was dominated by a belief in directionality. “In a premodern society, where the purpose of life is understood to be the reproduction of the customs and practices of the group, and where people are expected to follow the life path their parents followed, the ends of life are given at the beginning of life. . . . Modern societies do not simply repeat and extend themselves; they change in unforeseeable directions, and the individual’s contribution to these changes is unspecifiable in advance” (Menand 2001: 399).

Three key elements of this worldview change directly set the stage for risk and risk governance: uncertainty, future projections, and decision making. Removing the cyclical cake of custom that predetermines the direction of individual and societal change, humans now lived in a world with an uncertain but shapeable future. Future directions, once assured by past patterns, with modernism became probabilistic estimates, not determined processes.
And individual and collective decisions, which were obviated by the deterministic processes of premodernity, now became a central governance challenge for societies. Putting the matter succinctly, our ideas about risk could have crystalized the way they have only with this worldview change.

All concepts of risk have one element in common: the distinction between possible and chosen actions (Renn 1992). Philosophers refer to this as the ability of humans to anticipate virtual “future” contingencies. It is based on the assumption that humans have agency to shape the future rather than just experience it. At any time, an individual, an organization, or a society as a whole faces several options for taking action (including doing nothing), each of which is associated with potential positive or negative consequences. If option A is not taken, a possible future pathway is (deliberately) excluded. Thinking about risks helps people to select the option that promises at least a marginal surplus benefit compared with all other available options. Humans thus have the ability to design different futures. We can construct scenarios that serve as tools for us to anticipate consequences and to adjust, within the constraints of nature and culture, our course of actions accordingly. Hence, risk decision making is at the core of human agency. Thinking about “what could happen” is one of the major distinctions between instinctive and deliberate actions. In this book, then, we specifically address the connections between risk and the human condition.
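The decision logic described above—comparing options by their anticipated consequences and selecting the one with a surplus benefit—can be given a minimal numerical sketch. The options, probabilities, and payoffs below are hypothetical values invented purely for illustration; they are not drawn from the text or from any actual risk assessment.

```python
# Hypothetical sketch of risk-based option selection: each option
# leads to possible outcomes with estimated probabilities and
# payoffs (negative = loss). An agent that "thinks about risk"
# compares options by expected value rather than simply
# experiencing whatever future arrives.

options = {
    "do_nothing":   [(1.0, 0.0)],                  # (probability, payoff)
    "build_dam":    [(0.9, 50.0), (0.1, -200.0)],  # usually helps, rarely fails badly
    "small_levees": [(1.0, 20.0)],                 # modest but certain benefit
}

def expected_value(outcomes):
    """Sum of probability-weighted payoffs for one option."""
    return sum(p * payoff for p, payoff in outcomes)

scores = {name: expected_value(outcomes) for name, outcomes in options.items()}
best = max(scores, key=scores.get)

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} expected payoff = {score:+.1f}")
print("chosen option:", best)
```

Of course, collapsing each option into a single expected value is precisely the rational-actor simplification that the book goes on to interrogate; qualitative perceptions, trust, and distributional questions are not captured by such a calculation.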

Few features of the human experience can compete with risk for its ubiquity over time and across place. Consider the risk of saber-toothed tigers to our distant ancestors or the risk of hostile conditions or hostile peoples encountered by ancient mariners. Or consider the risk of mine collapses threatening our industrial forebears or the risk from recurring natural disasters such as the volcano Krakatoa in the nineteenth century. Think about natural disasters still in the public memory, such as the Indian Ocean earthquake and tsunami in 2004, the earthquake and tsunami off the Pacific coast of Japan in 2011, Hurricane Katrina in the U.S. Gulf Coast in 2005, and human disasters such as the British Petroleum (BP) oil spill in the Gulf of Mexico in 2010. Even consider the growing recognition of the risk of climate change, the financial consequences of collateralized debt obligations and credit default swaps, and the hidden risks of new technology such as genetic engineering. These examples and many more like them illustrate that risk always has been and always will be a central feature of the human condition. There is no other option. Indeed, it would not be entirely far-fetched to write a history of the human condition as a narration of human challenges and responses in understanding uncertainty and in managing risks.1

1. A partial such history is Diamond 2005.

We share the experience of risk with all of our forebears. What distances the contemporary era of advanced modernity—where societies nearly everywhere are undergoing fundamental structural transformations—from previous ages, first, are the types of risks that concern us. While we no longer worry about routine attacks by wild animals, we do worry about the quality of air and water everywhere and about the possibility of runaway nanotechnology facing our descendants. Further distancing this present age of structural ferment from the past are the pace, scale, and spread of risks around the world (see Rosa, McCright, and Renn 2010). There is virtually no place left on Earth where one can completely avoid complex technology, its interactions, and—importantly—its risks. For instance, a peasant in the remote rice fields of rural Vietnam faces not only the risks of the traditional vagaries of farming—such as weather and markets—and possible famine but also a whole host of new risks of advanced modernity, including the risk of illness due to the toxicity associated with the introduction of pesticides and herbicides; the risk of acid rain from the burning of high-sulfur coal in India; the risk of frequent and excessive flooding from a dam built for electrical power; the risk of radiation exposure from a nuclear accident in nearby China; and the risk from global climate change induced by carbon dioxide emissions thousands of miles away.

Risk is now such a central feature of societies everywhere that its presence is frequently camouflaged by its ubiquity and embeddedness. As a result, mounting risks are typically submerged below the conscious awareness of the individuals who are exposed to them. Added to the incremental growth of technological risks is the growth of abrupt risks introduced by terrorists around the world (Atran 2010). The capacity for imposing many terrorist risks is made possible by the use of modern technology (e.g., jetliners and bioengineered viruses) and by the global connections enabled by technology.

How do we characterize this era of pervasive risks of greater magnitude, spread, and awareness than any time in the past? To this question the German theorist Ulrich Beck provided the answer in the title of his 1986 book Risk Society. A name can identify an era, but it cannot explain it. What is clear is that the pervasive and widespread pattern of risk is the result of economic, social, and political forces that transcend the individual psyche and experience. They are social forces that beg sociological investigation. What is our sociological understanding of risk in this era of advanced modernity? What is our understanding of the risk society? In this book we grapple with these questions.

What Is Going On?

The principal efforts to address this pivotal sociological question have come from the European tradition in sociological theory. The idea of "the risk society," first advanced by Beck, who coined the term, also has been taken up by other leading European theorists—particularly Anthony Giddens and Niklas Luhmann. While they differ in the details, Beck and Giddens share a vision about risk that stems from the process of globalization, from the boundless reach of risk, and from the effect risk has on individual identity and civic involvement. Luhmann, however, provides a stark contrast in orientation. He extends his social systems approach to the topic of risk, seeking to understand how systems are socially constructed, how they demarcate between risk and danger, and how they manage risk. For Luhmann, there is no place for agency—that is, for individual identity or action—in risk systems.

This book provides, for the first time in one place, a critical, pedagogical exposition of these theorists. This exposition is augmented by theoretical principles we three authors have developed to address issues neglected or underdeveloped by these risk society theorists. In particular, the thirty-thousand-foot-high abstraction of risk society theorizing has left far below—too far for them to see—the sizable and growing literature accumulated in the risk characterization, assessment, and management field. We draw insights from that literature to provide a ground-level complement to abstract European theory. The disciplined comparison of these three theorists, along with the integration of other theoretical principles by the authors, ensures that this book will be a unique contribution to the contemporary problématique centered on risk.

What to Do about It: Risk Governance

What actions can be taken to deal with the problématique? Traditional theorizing is typically framed to address "what is" questions, to explain the past or current state of the world, and to identify the social forces that have made it so. So, too, for the risk society theorists. From their theoretical standpoint, assessing and managing specific risks is of little intrinsic interest. While they may adumbrate broad social intentions and practices in the risk society, they leave the dirty work of risk governance and management to others. Therefore, analysts, policymakers, and practitioners must do the work of anticipating risk and managing it. Stated differently, the key goal of this book is to integrate the lofty whiteness of risk society theory with the sooty details of risk decision making.

The sooty work that constitutes all phases of risk decision making is captured in the idea of risk governance. This term links the experience of risk with the individual and institutional mechanisms of risk generation, observation, consequence anticipation, and management. The general concept of governance "refers to a complex of public and/or private coordinating, steering, and regulatory processes established and conducted for social (or collective) purposes where powers are distributed among multiple agents, according to formal and informal rules" (Burns and Stöhr 2011: 173). In the case of risk, this collection of processes is set in motion by the recognition that a risk—say, of a toxic substance in the environment—may be present and needs to be characterized and managed collectively.

The general process of making and implementing collective decisions (governance) is as old as the human species itself. It encompasses the traditions and institutions that are the vehicle and outcome of these decisions and has a long past.2 But that long past was interrupted by centuries of inattention. In earlier decades, the broad charge embedded in the idea of governance devolved into a much narrower idea that refers to the administrative functions of government bodies and formal organizations. Governance took on a pedestrian quality. Recent events have changed all that.
Entirely new forms of coordination and regulation have emerged in response to rapidly changing societal conditions, such as globalization. Boundaries between the public and private spheres, between formal governmental bodies and informal political actors (especially nongovernmental organizations), and between markets and business interests and the regulatory needs of society all are blurred. At the same time, due to the growing recognition of the increased scale of collective problems, the domains of sovereignty shifted upward to supranational bodies.3 Owing to these and other changes, the idea of governance has been re-elevated to its original—broad—scope while attracting unprecedented cachet.

A number of key events are responsible for that elevation. Among them is a general rejection of the word "government" in favor of "governance" in postmodern thought on political and economic institutions. Others include the adoption of "governance" in the official parlance of the European Union. Still more specific actions include the prominent place (including its own title) the term holds in the prestigious independent organization in Geneva: the International Risk Governance Council (IRGC).

2. The etymology of the term dates back to ancient Greek times (Halachmi 2005; Kjaer 2004). Plato used the term kubernan as a reference to leadership, which was assimilated in Latin as gubernare. The notion evolved along various trajectories. In addition to its meaning in English, it is part of, among others, the French, Spanish, and Portuguese vocabularies.
3. For an elaboration, see Burns and Stöhr 2011.

Drawing on an analysis of a selection of well-established approaches to what traditionally has been called "risk analysis" or "risk handling," the IRGC already has made an attempt to develop a framework of risk that integrates the concept of governance (International Risk Governance Council 2005, 2007; Renn 2008b: 47ff.). This framework promises to offer both a comprehensive means of integrating risk identification, framing, assessment, characterization, management, and communication and a tool that can compensate for the absence of—or weaknesses in—risk governance structures and processes.

The risk governance process is understood to include, but also to go beyond, the three conventionally recognized elements of risk analysis (assessment, management, and communication). Governance thus includes matters of institutional design, technical methodology, administrative consultation, legislative procedure, and political accountability on the part of public bodies and social or corporate responsibility on the part of private enterprises. But it also includes a more general provision on the part of government, commercial, and civil society actors for building and using scientific knowledge, for fostering innovation and technical competences, for developing and refining competitive strategies, and for promoting social and organizational learning.

The governance framework builds on the logical structure of four consecutive phases of risk analysis: pre-assessment, appraisal, characterization/evaluation, and management. In addition, risk communication accompanies all four phases (Renn 2008b). Within each of the phases, specific criteria are listed that are deemed essential for meeting the requirements of "good" or effective governance.
This simple framework is consistent with almost all other competing concepts and ensures the compatibility of the framework with professional codes and risk governance legislation. Moreover, the framework renders the established linear structure—in common with other contemporary conceptions of risk governance—into an "open, cyclical, iterative, and interlinked process" (Renn 2008b: 47). This conceptualization of risk governance is illustrated in Figure I.1. It consists of the five phases identified above: the four core phases (pre-assessment, appraisal, characterization/evaluation, and management) and the fifth (risk communication), which bridges all phases. The four core phases make up two complementary activities: generating and evaluating knowledge (vertical axis) and decision making and management (horizontal axis).

[Figure I.1 The Five Elements of Risk Governance: pre-assessment, appraisal, characterization/evaluation, and management, with communication linking all four phases.]

The importance of governance for risk has been highlighted by the co-awarding of the 2009 Nobel Prize in Economics to Elinor Ostrom for her work on governance of the commons—decision making about common pool resources. (Ostrom's co-winner was the economist Oliver Williamson.) To quote the Nobel Committee, "Common-pool resources . . . are resources to which more than one individual has access, but where each person's consumption reduces availability of the resource to others" (Royal Swedish Academy of Sciences 2009: 8). Decisions about the disposition of such resources are clearly about governance, as the committee underscored by naming its award to Ostrom "Economic Governance." But the connection between the use of common pool resources and risk, as is the case elsewhere, is less transparent. It is nevertheless present in virtually all decisions involving the allocation of commons goods or bads. How we manage chemicals in the immediate environment bears directly on human health. How we use finite resources, including common pools such as the air and water of the planet, bears directly on the risk of making the planet unsustainable. How we choose our fuels that emit greenhouse gases bears directly on the systemic risk of climate change. Few risks can compete in magnitude with these systemic ones.

To summarize, we have argued for risk as a constant of the human condition; claimed that the risks of advanced modernity exceed in pace, magnitude, and spread the risks of all previous generations; and put forth the need for a link between risk theory and a theoretically grounded basis for risk governance. Risk theory, as noted above, is provided by the European risk society theorists. The theoretical underpinning of governance, we will then argue, is provided by the German philosopher, social theorist, and public intellectual Jürgen Habermas. We make this argument under a banner of self-recognized irony: Habermas never explicitly dealt with risk or provided a blueprint for its governance. Nevertheless, it is a small step from his remarkable insights about the transformation of the spontaneity of the Lebenswelt (lifeworld) into the rational calculus of the political and social world—that is, to the calculus of the risk society (see Rosa, McCright, and Renn 2010). Furthermore, the Habermasian theory of communicative action (HCAT) provides a foundation for arriving at a rational and legitimate consensus over relevant knowledge and preferred action. It also presents a theoretically informed framing for risk governance within a
democratic context. The HCAT is one of the most refined elements of Habermas's lifelong project. That project has been to integrate the positivist tradition of the physical and social sciences with the semantic hermeneutics of the humanities to arrive at, after Wittgenstein, consensual knowledge and action. Risk provides an especially appropriate context not only to draw on the indispensability of science but also to take into account citizens' preferences for the most appropriate science and to arrive at consensus about appropriate action in the light of that science.4

Our goal of combining the theoretical insights of the three risk society theorists with the Habermasian basis for risk governance represents an unprecedented step. Only by the reception of readers will we be in a position to judge whether this is a step forward or backward.

Outline of the Book

In Part I ("Social Science Foundations of Risk"), we explicate the meta-theoretical foundations of risk (Chapter 1), first defining the concept and then elaborating on the different ontological and epistemological positions that risk theorists have taken. This will aid our comparison of the work of risk theory scholars in Part II. In Chapter 2, we present a brief, recent history of social-science work on risk, focusing on the pivotal role played by the risk crisis of 1986: the Chernobyl catastrophe, the Challenger accident, and the pollution of the Rhine River after a fire destroyed a chemical storage building in Basel, Switzerland. After 1986, the risk society frame became firmly implanted in the social-science scholarship on risk through the continued work of Ulrich Beck, Anthony Giddens, and Niklas Luhmann and the increased appearance of systemic risks everywhere.

In Part II ("Risk and Social Theory"), we explore the theoretical insights about societal risk that are offered by several leading social theories. We begin in Chapter 3 with a critique of the rational action framework, the perspective guiding technical risk assessment and much work on risk governance in recent decades—generally proceeding quite independently of theory. In the two chapters that follow, we critically analyze the possible contributions that the reflexive modernization theories of Ulrich Beck and Anthony Giddens (Chapter 4) and the systems theory of Niklas Luhmann (Chapter 5) can make to our sociological understanding of societal risk in advanced modernity. Rather than conduct a comprehensive examination of any one of these perspectives, we instead investigate more specifically how the theories address the following three questions:

1. What social order emerges in advanced modern societies where technological and ecological risks are virtually everywhere?
2. What is our knowledge about these risks?
3. How can societies develop the institutional and political means for governing and managing risk effectively?

While Luhmann's systems theory precludes acknowledging that there may be an effective overall system of governance in advanced modernity, Beck and Giddens each call for a deeper democratization of risk governance. While Beck and Giddens provide a theoretical rationale for such an approach, the details, where devils often lie, are left unaddressed. In Chapter 6, we turn to the critical theory of Jürgen Habermas for a detailed analysis of the theoretical underpinning of governance. His HCAT provides a foundation for arriving at a rational and legitimate consensus over relevant knowledge and preferred action. It also presents a theoretically informed framing for risk governance within a democratic context.

4. Ortwin Renn has taken broad lessons from Habermas to develop a theoretically informed but practical framework for executing democratically based risk governance (see Aven and Renn 2010; Renn 2008b; Renn and Walker 2008).

Building on our arguments for a sociology of risk from Part I and responding to the risk society theorists' calls for more democratic risk governance discussed in Part II, we lay out our ideas for enhancing risk governance in Part III ("Risk Governance: Links between Theory and Strategy"). To better understand the grand challenges we increasingly face, we first discuss the emergence of systemic risks (e.g., the world financial crisis of 2008, global ecological vulnerability), which threaten to undermine entire systems (Chapter 7). Further, in Chapter 8, we discuss in greater detail three features of systemic risks that make them especially difficult to understand and to govern: complexity, uncertainty, and ambiguity. This provides a foundation for understanding the special challenges that systemic risks impose on advanced modern societies. In Chapter 9, we first explore the genesis and analytical scope of risk governance.
We then introduce a framework for adaptive and integrative risk governance for these new systemic risks that aims to address the features of complexity, uncertainty, and ambiguity in a much more effective way than traditional risk management practices. We also derive lessons for each stage of this risk governance process: pre-estimation, interdisciplinary risk estimation, risk characterization, risk evaluation, and risk management (with risk communication and public involvement occurring in each stage). We argue that the combination of analytic rigor and deliberative democracy is the most effective risk governance strategy for dealing with systemic risks. Thus, in Chapter 10, we characterize the main features of—and conditions for—successful implementation of an analytic-deliberative process. With this approach, decisions on risk reflect effective regulation, efficient use of resources, legitimate means of action, and social acceptability. We summarize and wrap up our argument in the Conclusion.

Part I. Social Science Foundations of Risk

1 Meta-Theoretical Foundations

What, then, is time? If nobody asks me, I know. But if I try to explain it to one who asks me, I don't know.
—Saint Augustine of Hippo, The Confessions of St. Augustine (1963)

Since our goal is to provide a logically grounded basis for effective forms of risk governance, our first task is to identify and disambiguate key disagreements about the meaning of risk itself. Once that is done, the next task is to demarcate the logical status of risk itself from the status of our knowledge about risk; the two are not the same. Then, we need to establish certain first principles for demarcating different levels of knowledge certainty about risks. Those tasks completed, we will be in a position to lay out the connections between our understanding of risk—including a large input from science—and appropriate governance structures and procedures for managing risks—including such grand risks as global climate change.

What Saint Augustine says about time in the epigraph can also be said about risk—even more so. Risk and time are universal human characteristics, a species-shared experience. Fundamental features of the human experience, like these, are often the most elusive to logical rigor. The issue is much more than just a definitional challenge; it goes to the core question of whether risks exist, and if they do, under what conditions. It raises the question of how we come to recognize and give meaning to risk. It conditions the choices humans have about how to deal with the risks they face. At rock bottom, these are questions about a state of the world of growing importance and how humans interpret and react to that state.

Large portions of this chapter are drawn from Rosa 1998a, 1998b, and 2008 and from Rosa and Clarke 2012.


Epistemological Debates

What is the state of the world in the context of risk?1 This question opens a window on a major epistemological debate in the risk field: is risk an objective state of the world or merely a social process that produces a collective judgment (social construction)? A simplified way to look at the problem is to compare two contradictory meta-theoretical presuppositions.2

1. There is a world independent of human understanding of it.
2. The world exists in large part because of human understanding of it.

We can think of the first presupposition as "mind-independent" (Boghossian 2006) and the second as "observer-relevant" (Searle 1995). The first claim summarizes the realist position, while the second summarizes the postmodern and social constructivist position. As for risk, the question becomes: is risk a state of the world or a state of the world as we see it? Is seeing believing, or is believing seeing? The logical point here is that there cannot be a world that is both independent of our understanding of it and, simultaneously, dependent on that understanding, although we will see later that a variety of theoretical orientations embed this very contradiction into their conceptualizations.

There are good reasons to be concerned about such pivotal presuppositions. They become master frames for how we apprehend, reflect on, and explain the world. Logicians have long told us that systems of ideas containing a fundamental contradiction can be used to deduce any conclusion whatsoever, no matter how absurd. Perhaps more important than such insurmountable logical challenges are the political consequences of this line of reasoning. Not only can contradiction vitiate the logic of our knowledge claims, and not only can it lead to incoherence in principles of governance, but also, more dangerously, it can open a large window for political abuse.
It allows the powerful and unscrupulous to adopt just about any terminology—risk or otherwise—to impose their will on others with impunity.3

1. We define "state of the world" as it applies to humans as a set of environmental and social forces at a given point in time and place that conditions available choices. The states of water provide a useful parallel in the physical world. Water's various states (liquid, solid, gas) are determined by the volume of molecules in motion (heat), which in turn condition the use to be made of that form—whether to drink, cool a drink, or humidify a room.
2. Regardless of the logic developed to support one or the other of these claims, there is no way, on the basis of first principles or a priori logic, to make a foolproof case for one claim over the other. Hence, they must remain as presuppositions and must be justified on other grounds.
3. While the foundations of our critical comments are grounded in analytic philosophy and empirical science, the delineation of their political implications is consonant with a wide range of views, from George Orwell to, remarkably, the postmodern thought of Foucault (1980), who argued that words—the elements of knowledge claims and discourse—are a form of power.

The first claim, the realist one, is underpinned by a wide variety of indisputable human experiences. Some things are universally and indisputably accepted as real; they are, in the words of the analytic philosopher John Searle (1995), "brute facts." Certainly, airplane crashes are real; certainly, the cancers and premature deaths of those who smoke or who are exposed to toxic chemicals or high levels of radiation are real. Certainly, breached dams that lead to the destruction of property and people are real. Certainly, there are many similar events in the world that are unassailably real. In earlier times, our forebears may have attributed their demise from volcanic eruptions to angry gods, but they are just as dead as if they understood plate tectonics. In many instances, human interpretations of factual events are essentially correct. If that were not so, we would have hardly survived as a species.

The second claim finds grounding in an unavoidable limitation in human understanding of the world. First, unless we believe in divine, magical, or cosmic revelation, all of our knowledge of the world must come from the social world. In other words, all knowledge can be traced to the social world.4 That is, it must come from the collective experience of humans. Furthermore—and this point cannot be overemphasized—there can never be a perfect isomorphism between the world and its states and our understanding of it. Our perceptions are inherently subject to distortion and fallible interpretation from a variety of social and other forces. Since we can never have perfect knowledge of the world—even an independent world—all of our knowledge is always socially shaped and mediated. In that restricted sense, then, all knowledge is socially constructed.

The dual reality of risks as both real and constructed presents a paradox. This paradox leads to the key epistemological challenge for risk: how can we connect our socially mediated perceptions of the world, subject to distortion and fallible interpretation, to the harsh reality of realized dangers in social life?
How can we reconcile the realism of certain events in the world, such as the sickened and dead bodies of war, with the constructions we create around them? The two antithetical presuppositions establish the poles of a continuum connecting the real with the constructed. There are two logical choices of how to proceed from the two sides of this fundamental challenge. We can begin with a presupposition of constructivism and build to a case for realism. Or we can begin with the presupposition of realism and build to a case for constructivism.

4. This is a key contribution and reminder from the relativistic, social-constructivist, postmodern school of risk and one that formal risk analysis often has inappropriately ignored.

Constructed Realism?

Let us try to address these challenges by beginning at the constructivist end of the epistemological continuum and moving toward the realist end. We could begin with the unassailable reality that the only way to understand that reality is via social construction. Risks, then, such as the examples above, are real enough, but they remain social constructions. "Seeing" them is a collective, subjective process. Otherwise, we could neither perceive nor understand them at all, for facts seldom speak for themselves. This approach poses three logical problems.

First, if all knowledge is socially constructed, saying that risk is a social construction is a tautology.5 Tautologies are mischievous, for they leave no way to map the content of statements to the external world. Indeed, their logic is a self-contained denial of an external world; the only reason there are dead bodies is because we think so. Put differently, saying that risk is a social construction means that one can substitute "social construction" for "risk" in a sentence. The result of our substitution—"social construction is social construction"—is a pure tautology. Furthermore, framing the issue in this way precludes any continuum since, by assuming the realism pole away, both poles have been fused into one.

Second, if our constructions result from collective judgment, how do we ever know when risks really exist? How do we know enough to anticipate untoward futures with those sickened or dead bodies? Without the disciplining traction of an external reality, on what basis should we choose among alternatives? Pushing this skepticism to its conclusion exacerbates the problem, as one reformed constructivist observed: "by definition, the logic of skeptical argument defeats any amount of evidence" (Collins 2009: 31).

Third, there is the bane of analytic philosophy: the infinite regress. If a risk is socially constructed—say, the risk of an airplane crash—then our understanding of that risk is a social construction. There is the risk that the plane may crash because we collectively think that it will. Likewise, for consistency we must say that our understanding of the crash is a construction as well. Our understanding of, for example, plane crashes must come, if not from something independently real, from other social constructions. After all, this is the very presupposition embedded in the constructivist viewpoint. Whence will these others come?
They must come from still other social constructions that themselves come from still other constructions. This reasoning sends us into an infinite regress, leading us to the absurd land of ad infinitum.

5. There is perhaps no better exemplar and expositor of best practices in science than Stephen Jay Gould (1977: 40), who observes that "tautologies are fine as definitions, but not as testable scientific statements—there can be nothing to test in a statement true by definition."

Realism for Constructions

An alternative strategy proceeds in the diametrically opposite direction, from a presupposition of realism to social constructions. A realist view of risk presupposes a world independent of humans, even if it is humans that produce risks. The natural world could not care less about whether humans or cockroaches inhabit it. That world just is. But humans make and transform it and then make up stories about the things we have made and transformed. In turn, our ideas about what is at risk originate from real-world events—from states of the world independent of the percipient observer. Seen in this light, our mental constructions of risk and the stories we develop about risk are emergent properties that arise out of experience, not out of a circumscribed, self-referential mind.

Irreconcilable Challenge?

Even if we assume a world "out there," our understanding of it, as underscored above, is never isomorphic with it: human understanding only approximates the world we seek to understand. Thus, claims to knowledge about our world are always subjective—and fallible. Social construction cannot be avoided. Many phenomenological thinkers and strong constructivists claim, on the basis of this fact, that all knowledge about our world is relative. One claim is generally as good as any other. We disagree. Some facts, especially brute facts that attract universal agreement, are too real to contest. Some ideas conform more to our observations in the world than others—as we will show later—while others may be entirely fanciful.

Developing a case for a "realism to construction" strategy requires us first to demonstrate that a world external to us exists. The strategy of beginning with realism and then adding constructions requires, before anything else, a case for such an independent world, a case for realism. That is our task here. Ontological realism presupposes, as already noted, that a world exists independently of percipient actors. As the distinguished philosopher Paul Boghossian (2006) puts it, there are things out there that are simply "mind-independent." Since the case for ontological realism was thoroughly addressed by Eugene Rosa (1998a, 1998b, 2008; Rosa and Clarke 2012), we do not repeat the argument in full here. Instead, we summarize it by answering the following pivotal questions. On what foundation can we base ontological realism? What class of signals from the world, whether conceived as independent or dependent on our constructs, can we use to illustrate realism? Which signals give us the strongest evidence for this illustration? Do the signals cohere around discernible underlying principles? We address the middle two of these questions first.

Gravity

The most compelling class of signals in the case for realism is the one that, after N. Katherine Hayles (1995), represents "constraints." Constraint is meant to convey the notion of limitation—limitation imposed by a world beyond us. Many "physical" constraints may, in fact, be unalterable limitations beyond human agency. The issue of whether constraints should be viewed as disembodied interpretation of conditions in the world, versus conditions resting in an embodied construction of the world, need not be resolved to take advantage of the power of constraint to reveal knowledge. By ruling out some possibilities, constraints provide a focused way to apprehend reality.6 Gravity is a prime example of such a constraint. On this, Hayles (1995: 52; emphasis added) writes:

    Consider how conceptions of gravity have changed over the last three hundred years. In the Newtonian paradigm, gravity is conceived differently than in the general theory of relativity. For Newton, gravity resulted from the mutual attraction between masses; for Einstein, from the curvature of space. One might imagine still other kinds of explanations—for example, a Native American belief that objects fall to earth because the spirit of Mother Earth calls out to kindred spirits in other bodies. No matter how gravity is conceived, no viable model could predict that when someone steps off a cliff on earth, she will remain spontaneously suspended in midair. Although the constraints that lead to this result are interpreted differently in different paradigms, they operate universally to eliminate certain configurations from the realm of possible answers.

Gravity, like any other concept, is always and inevitably a representation. Yet within the representations we construct, some are ruled out by constraints and others are not.7 If we ferret out the continuities in the cultural representations of gravity rather than focus on alternative, context-bound accounts, we can discern a

6.  Many of the highly heralded discoveries of science have this quality. The formulation of the second law of thermodynamics (which forever dashed the hopes of building a perpetual motion machine) and Werner Heisenberg's uncertainty principle (that the position and speed of subatomic particles cannot both be measured precisely at the same time) are two of the most obvious examples.

7.  One thoughtful but sympathetic critic opined that the gravity example was trivial. This objection is unassailable for those who are already committed to a realist ontology. However, there is a much larger community of constructivists who resist such examples of realism, however simple. Our illustration is directed toward those thinkers who have identified themselves as social constructivists without reflecting on the presuppositional and consequential implications that this position entails. Second, gravity provides an example of the unbridgeable gulf between the workings of a real world and our understanding of it—the isomorphism impossibility described in the text. Those who have followed the "science wars" will recognize that Alan Sokal, the perpetrator of the hoax in the postmodern journal Social Text (1996), fashioned his ruse around the topic of quantum gravity, his field of specialty. Quantum gravity is a speculative theory to describe the space–time continuum on scales of a millionth of a billionth of a billionth of a centimeter. Those who have followed developments in the sociology of scientific knowledge will be familiar with Harry Collins's (2004) three decades of work on the science of gravitational waves. It illustrates the difficulty of finding evidence that is not entirely ostensible, because of a low signal-to-noise ratio, and that is barely repeatable, appearing only in bursts of short duration separated by long intervals.
Third, since we adopted a “realism to constructivism” strategy for developing a hierarchical knowledge continuum, it makes the most sense to anchor the extreme pole of realism on the continuum with an example, such as gravity, that attracts virtually no disagreement.


principle for addressing the first question—namely, the foundational basis for realism. The principle to be discerned is contained in the context invariability (over historical time in Western science and between Western and native representations) in the shared recognition of gravity's constraint. Importantly, the principle contains two sine qua non elements: ostensibility and repeatability.

Consider a common observation of gravity, the dropping of a pencil. First, when our pencil falls we know that gravity is present because the action is ostensible: we can point to it. Furthermore, if we ask an independent observer, "Did you see what I saw?" we will almost certainly hear "yes." Hence, there is intersubjective reliability, a collective agreement about an observable event. Second, the act is repeatable. If the other observer missed it the first time, we can repeatedly drop the pencil, and the result will be precisely the same. And again, others would likely agree.

In essence, gravity enjoys pan-cultural recognition (meaning similar perception and interpretation across gulfs in time and culture).8 Such intersubjective agreement about this constraint, widely dispersed across history and collective experience, suggests that this physical feature of the world is sending compellingly similar signals to percipient observers—wherever or whenever they are. Moreover, it implies that the source of these signals is outside our own phenomenological context of interpretation.

Realism-Antirealism Consequences

The convergence of perceptions and representations across time and cultures is not empirical proof of realism.9 Adopting a realist-objectivist perspective instead reflects the logical choice between two competing presuppositions. On the one hand, if we argue that convergences such as these are evidence of an independently existing reality, we are already presupposing that reality via the reality of the convergences. It thus defeats the very point of our attempt to develop an independent proof of an external reality. Viewing evidence as real embeds the presupposition of ontological realism into the argument at the outset rather than establishing it on independent grounds, for this evidence is unproblematically real only within the presupposition that a world exists independent of us. The argument thus begs the question. On the other hand, if we presuppose an entirely constructed, or culturally conditioned, reality, we also

8.  In this instance, we can go even further and speak of the "universal" recognition of the constraint of gravity. The term "pan-cultural" is preferred because it is meant not only to subsume the universal but also to account for instances in which there is widespread intercultural agreement—which falls short of universality because of marginal cases.

9.  Indeed, there is no foolproof argument that an external world exists. There is, likewise, no foolproof argument that it does not. But even within this limitation, formal reasoning would seem to favor the realist view because of the extreme difficulty of proving a negative proposition such as that of the phenomenologically based constructivist position that claims there is no separate world "out there."


are presupposing a reality independent of all social constructions that provides the raw material out of which these constructions are formed.10 Even if our sensations are always unreliable, they are being activated by some external source. Searle (1995: 191) puts the matter this way: "the ontological subjectivity of the socially constructed reality requires an ontologically objective reality out of which it is constructed. . . . [A] socially constructed reality presupposes a nonsocially constructed reality. . . . It is a logical consequence of the main argument . . . here that . . . you cannot have institutional facts [socially constructed facts] without brute facts [empirical facts]."

Tying Searle's point back to our gravity example, we can see that pan-cultural and time-distanced convergences occur because, however many layers of construction we remove, the signals, wherever they are coming from, must be coming from somewhere independent of the constructions. Furthermore, the signals are received with such similarity that their representations are similar across time and place. It makes sense to think of that "wherever" as external reality.

In sum, based on the principle of pan-cultural convergence and on the logical choices associated with observed pan-cultural practices, we have sketched a reasonable case for ontological realism. Later we take that case to risk.11

What Is Risk?

We have made a case for an independent, real world. What about risk? Is risk also real? These questions call for a definition of "risk."12 A starting point is the widely shared presupposition that distinguishes between reality and possibility.

10.  The idea of universal discovery or verification in science is often used to justify the realism-objectivism assumption (Cole 1992). But this reasoning, too, suffers from a question-begging fallacy. Even if this logical problem is overlooked, universally shared scientific discovery or verification seems less compelling than pan-cultural convergences that are less institutionally structured. After all, scientists work within paradigms (Kuhn 1970) so that all scientific observations are, to use Karl Popper's (1959) term, theory-impregnated—meaning, among other things, the adoption by different scientific observers of the same conceptual template. That being so, we should not be surprised to find the same template producing similar findings, however distanced in time or place they may be.

11.  Searle makes an alternative but consistent case for realism. He observes that to abandon realism, we are forced to abandon common sense. From modern computer jargon he borrows the term "default" positions to refer to "views we all hold prereflectively so that any departure from them requires a conscious effort and a convincing argument" (Searle 1998: 9). He deepens his point by explicating precisely our key default positions: "there is a real world that exists independently of us, independently of our experiences, our thoughts, our language. We have direct perceptual access to that world through our senses, especially touch and vision. Words in our language, words like rabbit or tree, typically have reasonably clear meanings. Because of their meanings, they can be used to refer to and talk about real objects in the world. Our statements are typically true or false depending on whether they correspond to how things are, that is, to the facts of the world; and, Causation is a real relation among objects and events in the world, a relation whereby one phenomenon, the cause, causes another, the effect" (Searle 1998: 10).

12.  Definitions of risk abound. Aven and Renn (2009a) identify at least ten, but there are more. The net result, in our view, is not clarification of key terms but befuddlement.


If the future is either predetermined, as in the premodern worldview described in the Introduction, or independent of present human activities, the term "risk" makes no sense. Thus, the foundation of our definition is the notion that certain states of the world, which are possible and not predetermined, can objectively be defined as risk. The fact that these states are not predetermined means they are probabilistic and therefore embedded with some degree of uncertainty. As a feature of a state of the world, independent of the human observer, we understand uncertainty (a term widely used but seldom defined) in the broadest sense to be an indeterminacy between cause and effect.13

Despite this "realist" foundation, our ability to identify, measure, and understand risks is limited, as already noted, by cognitive and social constraints. Our knowledge ranges from the putatively certain to the entirely uncertain. To the extent that we are limited in these abilities, risk will appear less like an objective state of the world than like a social construction. Furthermore, what individuals or societies perceive as risk and choose to concern themselves with are shaped not only by the objective state of risk but also by social, cultural, and political factors—as well as by the precision of our analytic tools for identifying risk in the first place.

To the idea of possibility—and, concomitantly, uncertainty—we add a second presupposition consisting of two elements: risk exists only when the uncertainty involves some feature of the world, whether in natural events or in human activities, that affects human reality in some way. This is the unavoidable reality of the human condition outlined in the Introduction.
Combining these three elements yields what Terje Aven and Ortwin Renn (2009a) identify as the most widely adopted definition of risk in the social sciences:

Risk is a situation or an event where something of human value (including humans themselves) is at stake and where the outcome is uncertain. (Rosa 1998a, 2003, 2010)14

The definition seems reasonable enough. Few would disagree with its core meaning, even if there were objections to specific elements. The straightforward interpretation of this definition is that risk is real, an ontological state of the world that results from the presupposition that some outcome is possible. A linguistic conundrum here for the constructivist position on risk, but not for the ontological realism position, emerges with the ordinary-language assertion that someone or something is at risk or that certain situations are risky or hazardous. A person is at risk because she is subject to an uncertain outcome

13.  The sources of the indeterminacy are many: spurious relationships between cause and effect, an inability to determine the time ordering between cause and effect, a long time delay between cause and effect, systems too complex to determine which are causes and which are effects, and more.

14.  Aven and Renn (2009a, 2009b) and Aven, Renn, and Rosa (2011) have addressed the metatheoretical issues identified here and have given new rigor to the ontological and epistemological status of risk.


in the world where the outcome holds something at stake for that individual. However, if risk is entirely a social construction, a collective agreement about what risk is, what does it mean to say that an individual is at risk? Do we say the person is at a social construction? If someone is at risk, does it mean that the person is simply in a position to violate a collective agreement termed "risk"? Within a constructivist orientation it is difficult to know exactly what such assertions mean but easy to see the strained awkwardness that orientation produces.

To sum up these important features of what we mean by risk, we can first recognize that the proposed definition expresses an ontological realism that specifies which states of the world are to be conceptualized as risk. The definition captures the three defining elements found in nearly all conceptualizations of risk—even when the term is left undefined and must be inferred.15 The first is the notion that risk expresses some state of reality where some outcome is possible but not predetermined. Second, since outcomes are not predetermined, it is virtually impossible to talk about risk absent the notion of uncertainty. Therefore, both environmental and health risks have uncertainty as a defining feature. We worry that there is some likelihood, never absolute certainty, that certain human activities will affect the environment and our health in untoward ways—the types of fuel we use and exposure to certain chemicals have some likelihood of causing sickness or death. Third, an uncertain outcome is only a risk when humans have a stake in the outcome. A number of phenomena in the physical world are embedded in an unavoidable uncertainty (e.g., Heisenberg's indeterminacy or uncertainty principle), but they are not risk because the outcome does not affect humans in any way.
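The three defining elements just enumerated can be restated as a minimal predicate. The sketch below is our own illustration, not a formalism from the risk literature; the function and argument names are ours:

```python
def is_risk(outcome_possible: bool, outcome_uncertain: bool,
            humans_have_stake: bool) -> bool:
    """A state of the world counts as risk only when all three defining
    elements hold: a possible (not predetermined) outcome, uncertainty
    about that outcome, and a human stake in that outcome."""
    return outcome_possible and outcome_uncertain and humans_have_stake

# Heisenberg indeterminacy: uncertain, but no human stake -> not risk.
print(is_risk(True, True, False))  # False
# A possible, uncertain outcome in which humans have a stake -> risk.
print(is_risk(True, True, True))   # True
```

The conjunction makes the point in the text explicit: remove any one element (predetermination of the outcome, certainty, or the absence of a human stake) and the state of the world is no longer risk.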
If Suzy neither smokes tobacco nor is exposed to second-hand smoke, there is no possibility that she will contract lung cancer due to smoking. If unrelated, distant strangers are smoking cigarettes, Suzy has no stake in their risk—so it is not a risk to her. Only if Suzy is among the small fraction of those who never smoked or have had limited exposure to second-hand smoke but who nevertheless acquire lung cancer is she subject to this risk.

Risk assessment often includes a calculation of probabilities and expected values. To be faithful to the universal, neutral context of uncertainty embedded in risk, a basic definition must be devoid of, yet simultaneously accommodating of, the idea of mathematical probabilities. Indeed, the need is to avoid the idea of probability itself, since that knowledge is neither part of past cultures nor part of many cultures today. Uncertainty is an inherent feature of the independent world. Probability is not. Uncertainty is an ontological constraint. Probabilities, likelihoods, and odds are a human invention—social constructions—to represent that uncertainty (Hacking 1975). They are epistemological features of risk. They are a subjective aspect of risk, or Searle's

15.  Defining risk in this way is consistent with the widely accepted, standard technical definition of risk in Kaplan and Garrick 1981.


"observer-relative" presupposition (Searle 1995: 12). A universal context also argues for favoring a categorical representation of risk over a normative one, since normative values vary widely among cultures. This is not to deny the importance of normative considerations: they are important requisites for the decision-making and governance features of risk characterization and management, but not of risk per se. Hence, faithful to a categorical framing, we have explicitly left out normative elements in defining risk.
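The point that probabilities are constructed representations of an underlying uncertainty can be seen in the arithmetic of a standard expected-value calculation. The sketch below is our own; all probabilities and loss values are hypothetical:

```python
# Expected loss: the probabilities are assigned by analysts
# (epistemology) to stand in for an indeterminacy in the world
# (ontology). All figures below are hypothetical.
outcomes = [
    ("no harm", 0.90, 0),          # (label, assigned probability, loss)
    ("minor harm", 0.09, 1_000),
    ("severe harm", 0.01, 100_000),
]

# Sum of probability-weighted losses: 0 + 90 + 1000 = 1090
# (up to floating-point rounding).
expected_loss = sum(p * loss for _, p, loss in outcomes)
```

Changing the assigned probabilities changes the expected loss without any change in the state of the world itself, which is precisely the sense in which probability is an epistemological, not an ontological, feature of risk.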

Cartooning a Realist Conception of Risk

We can summarize the argument thus far with a cartoon sequence. The sequence illustrates the distinction between risk as a state of the world and risk governance as a social construction. While the sequence portrayed is of individual actors, the distinctions and connections between them apply to collectivities of any size.

Illustrated in Figure 1.1 is the "common terminology" of governance in the risk literature. Governance comprises three key elements: risk perception (a recognition of a risk), risk assessment (a determination of probability and consequence), and risk management (decision making). All three elements are the products of socially mediated processes: (1) we perceive something—say, risk—and verify with others that a particular state of the world is present; (2) we interpret and judge with others that this state of the world has a likelihood of producing certain consequences; and (3) we exercise a decision based on our judgment. In terminology adopted from the long tradition of analytic philosophy, each of these terms would be considered part of epistemology—that is, the means of acquiring and using knowledge. But this sequence begs the fundamental question raised above: what are the conditions in the world that lead us

Figure 1.1  Three Elements of Risk Governance: Perception, Assessment, and Management (Source:  ScienceCartoonsPlus.com; reprinted with permission)


Figure 1.2  State of the World: Comprising Possible Outcome, Uncertainty, and Human Stakes (Source: ScienceCartoonsPlus.com; reprinted with permission)

Figure 1.3  State of the World: Meeting Three Criteria of Risk (Source: ScienceCartoonsPlus.com; reprinted with permission)

to label something "risk"? What is this strolling couple perceiving, assessing, and managing?

The answer is provided in Figure 1.2. The set of conditions, or state of the world, here comprises possibility, uncertainty of outcome, and human stakes—the very elements of our definition of risk. Importantly, those conditions exist independently of whether the couple sees the boulder or not. There is the distinct possibility that the boulder will fall. There is indeterminacy about whether the boulder will actually fall; it is uncertain. And there is indeterminacy about the consequences if it does. But the strolling couple clearly has a stake in the outcome. Hence, we can replace the question mark in Figure 1.2 with the word "risk" in Figure 1.3.

Combining Figures 1.1 and 1.3 produces the sequence in Figure 1.4. The first panel in the sequence is a state of the world of risk independent of the couple's awareness. Again, in the honored tradition of analytic philosophy, it is an ontological reality. It represents the conjunction of possibility, uncertainty of outcome, and human stakes in the outcome. The following three panels, the epistemological counterpart, illustrate both the risk to the couple (they now do have stakes in the outcome) and their response. All three activities—perception, assessment, and management—are entirely social and quite subjective. Note that the couple apparently judges the potential consequences as severe, because their management decision is to flee.

A final distinction can be made between the real and the constructivist features of this risk-characterization sequence if we wonder how the boulder arrived at its precarious position in the first place. One obvious possibility is that a natural event, such as a lightning strike, edged the boulder toward the


Figure 1.4  The Coupled Risk State of the World (Ontology) with the Governance of Risk (Epistemology) (Source:  ScienceCartoonsPlus.com; reprinted with permission)

edge of the precipice. So the source of risk is an event in nature independent of human agency or construction. Risk here is due to events in an external world. At the other end of possibilities is that a jealous husband has learned about his wife's repeated rendezvous at this location and wishes to take revenge on the lover. Hence, he manages to position the boulder, perhaps with the help of others or with equipment, in the precarious location in which we find it. In this instance, the reason we find the boulder there is clearly the outcome of a social process: it is a construction.

Nevertheless—and this point cannot be overstated—the process by which the boulder reaches its resting place, whether by nature or by a social process, is irrelevant to the resulting state of the world. It is an independent state that comprises the key defining features of risk: possibility, uncertainty of outcome, and human stakes in the outcome. It is risk—a brute fact.

Constructivist Objections

Who are you going to believe, me or your lying eyes?
—Groucho Marx in Duck Soup (1933)

There are at least two possible objections to the distinction we draw, in talking about risk, between a realist, ontological state of the world and our understanding of that world. One is semantic. Some scholars, such as Luhmann (discussed in Chapter 5), would claim that the first panel is not risk at all, but danger. This position poses several problems. First, if we try to maintain a semantic continuity in the four-panel sequence, we are challenged to maintain layperson language and common usage in the risk field. So, for example, we would have to relabel


the panels "danger," "perception of danger," "danger assessment," and "danger management." This is a possible set of descriptions, but it is not the language of professionals or laypeople—or of governance. Semantic nuance also sends us into a definitional circularity, since virtually all dictionaries define danger in terms of risk, and vice versa. Consult any reputable dictionary and you will find, for example, that risk is "the possibility of suffering harm or loss; danger," and danger is "exposure or vulnerability to harm or risk." Lacking a clear semantic demarcation between the terms denies their usefulness as distinct theoretical concepts.

A second line of critique, building on the foundations of continental philosophy, rejects the analytic separation of ontology and epistemology and fuses them into one concept: interpretation. This towering philosophical act was best summarized perhaps by Nietzsche's well-known claim that "there are no facts, only interpretations" (quoted in Ferry and Renaut 1990: 6; Glock 1999: 21) and elaborated by Foucault in the claim, "If interpretation can never be achieved, it is simply because there is nothing to interpret . . . since after all everything is already interpretation" (quoted in Ferry and Renaut 1990: 6). This curse on both philosophical houses is the foundational presupposition of postmodern thought.

The Suspension Heuristic

The version of constructivism that fuses ontology and epistemology is identified by Paul Boghossian (2006) as the most influential constructivist approach: the fact-constructivists. This is the group who claim that no fact stands independent of human society and its contingent needs and interests. The Pied Piper of fact-constructivism, Bruno Latour, provides a telling example, as told by Boghossian (2006: 26):

One famous constructivist, the French sociologist Bruno Latour, seems to have decided to just bite the bullet on this [all facts are contingent] point. When French scientists working on the mummy of Ramses II (who died c. 1213 bc) concluded that Ramses probably died of tuberculosis, Latour denied that this was possible. "How could he pass away due to the bacillus discovered by Robert Koch in 1882?" Latour asked. Latour noted that just as it would be an anachronism to say that Ramses died from machine gun fire, so it would be an anachronism to say that he died of tuberculosis. As he boldly put it: "Before Koch, the bacillus had no real existence."16

In our context here, on the principle of symmetry, we would need to conclude that gravity did not exist until it was shown by Isaac Newton in the seventeenth century. But this requires a cognitive heuristic that has escaped the cognitive and decision sciences—the suspension heuristic. To accept Latour's claim, we must reject the scientific claim for what seems to be a straightforward cause-and-effect relationship. We are required to abandon our common sense, to suspend our unmediated judgment about the reality of gravity and the judgments of science about subtler features of the world. After Searle, this requires the abandonment of a key "default position"—a view of the world that is so ostensible that it is held pre-reflectively.

Each position, summarized in the two presuppositions at the beginning of this chapter, is antithetical to the other. And neither can be established a priori or proved on logical or empirical grounds. Therefore, this remains a source of considerable disagreement among scholars and practitioners. Nevertheless, we can observe pragmatically that the realist position on risk underpins virtually all formal risk characterization. Furthermore, it is, in our view, more consistent with the way laypeople see the world. The most promising forms of governance analyzed and promoted in this book, based on the Habermasian notion of the public sphere and discourse, require the broad participation of all citizens and stakeholders. That framework, in turn, requires a language understandable to all—a common language between experts and laypeople, when possible.

Here is a simple example of layperson realism at work. Sit at a soccer game and try to explain to fellow spectators that the uncontested goal they just observed was not a fact "out there" on the field but an interpretation "in here" within ourselves, the outcome of shared judgment alone. This is an argument one is very unlikely to win.

16.  It is, of course, an anachronism in the chronology of human history but not in physiological reality.
Again, it is important to preserve layperson presuppositions because those presuppositions set the stage for the master frames of communicative action essential to the Habermasian governance structure we adopt here. There is an intimate connection between how we position ourselves in the world and the language we use to understand and describe that positioning. Pragmatically, because the direction over the past decades has been to engage laypeople more directly in the assessment and governance aspects of risk, often on an ad hoc basis, there is all the more need for presuppositional and semantic symmetry among all involved experts, citizens, and other stakeholders. Hence, a definition of risk that rests on a realist presupposition makes not only logical sense but also practical sense for governance.

HERO: Hierarchical Epistemology, Realist Ontology

There is a final problem to consider. How do we distinguish risks that are truly real from those that are partially or totally constructed? Having made the case for ontological realism—what might be called, with humility, the objective side of risk—what about the issue of epistemology, the subjective side? Insofar as


our argument for a realist-objectivist ontology of risk is sound, it immediately raises a crucial question: does it ineluctably follow, on the principle of symmetry, that the epistemology of risk is also realist-objectivist? In other words, if the world is ontologically real, is our understanding of it also real? The extreme positivist position inherent in the "God's-eye view" of science and in early technical risk analysis answers this question with a resounding "yes." In contrast, we argue that the proper sociological answer to this question is a categorical "no."

Knowledge claims about risk may be realist-based or constructivist-based depending on the evidentiary basis of our claims to knowledge. That there is not necessarily a one-to-one correspondence between the ontology and epistemology of risk is due fundamentally to the intervening role of human activities: culture, values, institutions, perceptions, and interpretations of our realist world of risk. In short, an amplification process intervenes between risk and our knowledge of risk. As a consequence, the epistemology of risk constitutes a continuum that ranges from realism/objectivism to relativism/subjectivism.

Meta-Theoretical Underpinnings of Major Risk Perspectives

In bare-bones form, there are four possible combinations of ontological and epistemological presuppositions. Each of the four represents a different knowledge predicate. The ontological refers to predicates of entities, while the epistemological refers to predicates of judgment. The four logical combinations, then, are (1) objective in an ontological and epistemological sense (e.g., Mount Rainier is in the Washington State section of the Cascade Mountain Range); (2) objective ontologically but subjective epistemologically (Mount Rainier is more beautiful than Mount Hood); (3) subjective ontologically but objective epistemologically (I have a pain in my arm); and (4) subjective in both the ontological and epistemological sense (Duchamp is a better artist than Picasso).

Figure 1.5 displays the four possibilities. The horizontal dimension, the ontological, identifies the predicates of entities, and the vertical dimension, the epistemological, identifies the predicates of judgment—via their social origins. Each cell thus represents the combination of ontological and epistemological predicates or presuppositions that orient our knowledge claims about risk.

Figure 1.5  Combinations of Ontological and Epistemological Presuppositions (Predicates)
  Cell 1 (realist ontology, realist epistemology): standard model of science; formal risk analysis
  Cell 2 (constructivist ontology, realist epistemology): Beck's and Giddens's reflexive modernization theory
  Cell 3 (realist ontology, constructivist epistemology): Rosa, Renn, and McCright; risk as a hybrid of real harm and our mental models of it
  Cell 4 (constructivist ontology, constructivist epistemology): Luhmann's systems theory; cultural theory

Above, we developed a case for risk as a combination of ontological realism and epistemological constructivism. Risks are real, but our understanding of them is entirely a social process. That position lies in cell 3 of the figure. Cell 1, a combination of a realist ontology with a realist epistemology, is still a dominant view for many scientists (see, e.g., Weinberg 2001) and is the underpinning of much of the work in risk assessment. Cell 4 is reserved for the entirely constructed systems view of the world, including risk. The chief proponent of this point of view is Niklas Luhmann, whose systems theory we examine in Chapter 5. Cultural theory (Douglas 1986; Douglas and Wildavsky 1982) also is situated here, though we do not address it as fully as we do Luhmann's work. Cell 2, representing reflexive modernization theory, the work of Ulrich Beck and Anthony Giddens (Chapter 4), assumes that risks are socially constructed but somehow real nevertheless (Rosa, Diekmann et al. 2010).
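The two-by-two classification in Figure 1.5 can be encoded directly. The sketch below is our own illustration; the short labels are condensed paraphrases of the figure's cell entries:

```python
# Meta-theoretical positions keyed by (ontology, epistemology),
# following Figure 1.5. The labels condense the figure's cells.
PERSPECTIVES = {
    ("realist", "realist"): "standard model of science; formal risk analysis",
    ("constructivist", "realist"): "reflexive modernization theory (Beck, Giddens)",
    ("realist", "constructivist"): "real risk, socially constructed knowledge (HERO)",
    ("constructivist", "constructivist"): "systems theory (Luhmann); cultural theory",
}

def perspective(ontology: str, epistemology: str) -> str:
    """Look up the risk perspective for a pair of presuppositions."""
    return PERSPECTIVES[(ontology, epistemology)]

print(perspective("realist", "constructivist"))
# real risk, socially constructed knowledge (HERO)
```

The lookup makes plain that the two dimensions vary independently: fixing a realist ontology still leaves the epistemological question open, which is the logical independence the HERO position turns on.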

Limitations to Human Knowledge

Because it cannot be overstated, we have repeatedly pointed out that human perceptual and cognitive capabilities are inherently limited. As a consequence, we can neither generate perfect knowledge about the world—that elusive God's-eye view—nor create a "true" understanding of our physical and social environments, risky ones or otherwise. Facts seldom speak for themselves, and ambiguity may surround even the most basic facts. The world "out there" and our understanding of it can never be isomorphic, as we have said, so knowledge claims are ineluctably subjective and fallible. Also as noted, many phenomenological thinkers and strong constructivists take this fact as a basis for claiming that all knowledge claims are so relative that one claim is generally as good as any other.

Our position, in contrast, is a hierarchical epistemology. Connecting it to the ontological realism of our framework, we refer to their combination as hierarchical epistemology and realist ontology (HERO). Epistemological hierarchicalism does not deny the fallibility of all knowledge claims. It denies that all knowledge claims are equally fallible. Indeed, if all knowledge claims were equally fallible (or equally valid), we all would be living behind a veil of ignorance where there would be no knowledge at all, including our own claims in this chapter, as well as the claims of all those with whom we compare our views. Again, we note that our survival as a species argues strongly against the radical constructivist view. Hierarchicalism consists of variations in the quality of knowledge claims along a continuum, ranging from those characterized by


considerable agreement to those characterized by great disagreement. Knowledge claims, while always short of absolute truth, admit to degrees of approximation to what is true. An important result of our logic thus far is to make the ontology and epistemology of risk not conjoint but logically independent complements to each other. HERO is consistent with epistemological relativism but also goes beyond those versions that equalize all knowledge claims or deny the privileging of particular claims. It admits to differences in the types, quality, and aptness of our knowledge. Admitting those differences means we need to explicate a principle consistent with HERO for judging the placement of knowledge claims along the epistemological continuum. To do that, we must first identify the principle's proper scope of applicability.

Ostensibility and Repeatability as the Demarcation Principle for Epistemological Hierarchicalism

We have sketched an epistemological continuum—the realism-to-constructivism continuum—to account for the variation in our knowledge claims about risk from the unassailably real to the unassailably social. How do we decide on the placement of knowledge claims on this continuum of epistemic agreement? We need a principle that facilitates a hierarchy of risk judgments that range from realism to relativism, a continuum of epistemological realism to epistemological constructivism. And we need to align the resultant hierarchy with consonant paradigms for characterizing risk. We propose to develop this principle on the basis of the consistency of signals from the external world. The earlier example about gravity, with its consistency of signals, provides a solid basis for deducing such a principle. To cement our earlier point, the underlying realism of the gravity example is the ostensibility ("I can point to examples") and repeatability ("The examples can be repeated") of the relevant phenomena. Furthermore, the ostensibility and repeatability (O&R) principle is consistent with attempts to develop scientific explanation, developed as they are on the basis of evidence that ultimately can be perceived by the human senses—that is, empirical evidence. The ostensibility criterion asks the question about the pencil (as above), "Do you see what I see?"17 If the answer is "yes," we have a high probability of intersubjective agreement. The greater the agreement, the higher the placement of this knowledge claim in our hierarchy. But if the answer is "no" or "maybe" or "I'm not sure," the repeatability criterion responds, "Just wait and you will have another opportunity to observe what I see." To the extent the subject observation is truly ostensible, the repeatability criterion almost ensures intersubjective

17. The “seeing” may be with the naked eye, as here, or with the aid of instrumentation—for example, to observe the recurrent patterns of celestial bodies.


agreement at some point.18 Should particular evidence fail these criteria, support for epistemological realism also fails. Then we need to look away from epistemological realism and toward constructivism and related perspectives as a way to understand. The O&R principle is straightforwardly applicable to risks, a basis for judging their placement on the realism–constructivism continuum. Together, ostensibility and repeatability provide the underpinning of one of the strongest, most widely held scientific principles: predictability. The strength of scientific knowledge lies in its ability to make accurate, conditionalized predictions. In addition, because of its commitment to precision, science often prefers to express relationships in functional, quantitative forms.19 The extent to which some phenomenon is ostensible and repeatable would seem to give solid clues about the degree to which the phenomenon can be quantified. While O&R reflects the fundamental demands of the scientific method, it leaves the door open for alternative approaches when the conditions of O&R are not met. Indeed, it insists on the inclusion of a wide range of alternative orientations under conditions of low ostensibility or low repeatability. Furthermore, it provides a decision rule for distinguishing conditions that support a realist epistemology from circumstances that do not. The O&R criteria also establish scope conditions to determine the boundaries of aptness of different classes of knowledge claims. The principle provides a matching of our state of knowledge about given risks with the toolbox of paradigms at our disposal. As evidence becomes increasingly thin, there is an accompanying rise in the applicability of social constructivism, discourse analysis, and other knowledge systems. Figure 1.6 displays the epistemological scope conditions of risk. The horizontal axis reflects our knowledge about the probability of occurrence and the vertical axis reflects our knowledge about consequences.
The three conditions are labeled “grounded realism,” “synthetic realism,” and “social construction.” The placements of the curved lines on the graph are rough approximations, not sharp dividing lines. The proposed demarcation principle, O&R, is stated at a level of abstraction appropriate to theoretical explication, not to operational precision.

Grounded and Synthetic Realism

The types of risk that meet the criteria of grounded realism, the first segment in Figure 1.6, are well known. The highly ostensible and repeatable injury and accident events of automobiles and airplanes are orderly enough for the actuarial

18. The expectation of widespread intersubjective agreement about risk is not merely an unrealistic hope, as the accumulating empirical evidence shows. Cross-cultural studies of risk perceptions comparing Americans with Hungarians (Englander et al. 1986), with Norwegians (Teigen, Brun, and Slovic 1988), with the French (Karpowicz-Larzeg and Mullet 1993), with Poles (Goszczynska, Tyszka, and Slovic 1991), with Hong Kongese (Keown 1989), and with Japanese (Kleinhesselink and Rosa 1991) show a continuity in the structure of the cognitive maps between cultures, even as the content of those maps varies.
19. Such as in the definition in Kaplan and Garrick 1981: Risk = {⟨si, pi, ci⟩} (i = 1, 2, . . . , N), where si is a scenario of what can go wrong, pi its probability of occurrence, and ci its consequence.


[Figure 1.6 appears here: the horizontal axis is labeled "Basis of Knowledge Claims about Uncertainty" and the vertical axis "Basis of Knowledge Claims about Outcome Stakes," each running from high O&R to low O&R; curved lines mark off three regions labeled "Grounded Realism," "Synthetic Realism," and "Social Construction."]

Figure 1.6  The Realism–Constructivism Continuum of Knowledge Claims about Risk
Note:  O&R = Ostensibility and Repeatability. Because the diagram compresses four variables—ostensibility, repeatability, uncertainty, and outcome stakes—into two dimensions, the orientation of the axes is high to low rather than the typical orientation.

tables of insurance companies to provide reasonable estimates of their future occurrence, as well as what premiums to charge. The risks of complex technologies, such as a chemical plant, whether publicly acceptable or not, are often determined synthetically, as in the second segment. For example, seldom is the failure probability of every part of every subsystem known for the specific operations of a complex technology. Nevertheless, disciplined judgments can be, and are, made based on the operating experience of that part or subsystem in some other application.

Rare events, such as hurricanes, floods, tsunamis, bridge or dam failures, and terrorist attacks, are a different story. Such risks, to the right of synthetic realism, are the most challenging to understand and, in many ways, the most interesting for theoretical imagination and investigation. What makes these risks so challenging and so political is that, without anchoring in independent criteria or principles, they are free-floating judgments where it is difficult, if not impossible, to distinguish real risks from politically motivated ones—or, worse, from chimera.

Having laid out and evaluated key meta-theoretical underpinnings of risk (its status as a state of the world and as a knowledge system), we can now address the role of social science in risk characterization and governance. We turn to that topic in Chapter 2.

2
An Evolution of Risk
Why Social Science Is Needed to Understand Risk

State a moral case to a ploughman and a professor. The former will decide it as well and often better than the latter because he has not been led astray by artificial rules.
—Thomas Jefferson, letter to Peter Carr, August 10, 1787

The Risk Crises of 1986

The year 2011 marked the twenty-fifth anniversary of three major technical disasters: the Chernobyl catastrophe, the Challenger accident, and the pollution of the Rhine River after a fire destroyed a chemical storage building in Basel, Switzerland. These three events had lasting repercussions on public opinion. Even before 1986, many surveys in the United States, Canada, and most of Europe had shown an ambivalent position among most people toward the opportunities and the risks of large technological systems (Covello 1983; Gould et al. 1988; Lee 1998; Slovic 1987). Risk perception studies and investigations of popular attitudes toward technologies showed that people were concerned about the environmental and health-related impacts of large-scale technology but, at the same time, assigned a fair proportion of trustworthiness to the technical and political elite. Although trust had been eroding since the nuclear accident at Three Mile Island in 1979 and the continuing debate on nuclear waste, at least in the United States (Bella, Mosher, and Calvo 1988; Kasperson, Golding, and Kasperson 1999; Rosa and Clark 1999), most Americans and Europeans were convinced that large-scale technology such as nuclear power or waste incinerators was a necessary, but highly unwanted, manifestation of modernity. Furthermore, opinion polls provided evidence that the "culture of experts" was credited with

This chapter is a revised and updated version of Renn 2008c: 53–66.


technological competence and know-how but did less well with human concerns and moral issues (Barke and Jenkins-Smith 1993; Otway and von Winterfeldt 1982). Ecologists and technology critics, by contrast, were seen as sincere and brave underdogs with convincing arguments, even if they lacked real technical knowledge. The lasting public image was the rationality of the science and technology expert versus the morality of the ecologist. Technological experts seemed to have the stronger public support, and the technical elite certainly seemed to dominate official policy. Their risk assessments provided sufficient “objective” reassurance that the intuitive perception of immanent threats by critics was unwarranted. The technical elite not only were able to reassure the public that design criteria and risk-management practices would be sufficient to contain the catastrophic potential of large-scale technology; they also were successful in convincing governments and public management agencies that the technology had a legitimate role to play in modern society. Nuclear power and similar large-scale technology offered many benefits to society. The official line was that, as long as the risk of a major catastrophe was small, society had to accept the risk. Despite a large number of initiatives against the highly unpopular nuclear industry, persistent protests against the building of new chemical plants and the expansion of airports, and new alternative movements springing up over many landscapes, the movers and shakers in the technological elite were able to influence Conservative, Liberal, and Social Democratic parties in all Western countries. In Germany, more and more nuclear power stations became functional; in Switzerland, all referenda before 1986 decided in favor of keeping nuclear power stations in operation; and in Sweden, a referendum decided in favor of a limited term operation of the existing nuclear power stations. 
Other European countries slowed down the pace of developing this unpopular technology, but on the whole, there was no sign of a moratorium, and even less of any political U-turn.

This picture changed dramatically after the three disasters in 1986. Supporters of large-scale technology were on the defensive, while skeptics began to define a new way to think about risk. The new thinking pointed out that "objective" estimates of risk were not so objective after all; many of the facts underpinning risk assessments did not speak for themselves, and qualitative features of risk important to citizens escaped the reductionistic calculus of objective risk. Now the experts were taken to task not only for lacking morality, but also for lacking rationality. An immediate consequence of this was that virtually all European countries, with the exception of France, deferred the development of nuclear energy. In Germany, after long and acrimonious arguments, the project of reprocessing nuclear waste was completely abandoned. Later, the new government of 1998 decided to phase out nuclear power altogether. In Austria, the building of nuclear power stations was halted by a referendum, and in Switzerland a moratorium on further development of nuclear plants was enforced. But nuclear energy was not the only technology that fell into discredit after thorough scrutiny. There was a massive mood of non-acceptance toward the


chemical industry, waste recycling plants, road-building schemes, airport expansions, and the establishment of the first laboratories and production plants for applying genetic engineering (Bastide et al. 1989; Hampel, Klinke, and Renn 2000; Sjöberg et al. 2000). The magic words of the late 1980s were "decentralization," "supply close to the consumer," "renewable sources of energy," "ecological farming methods," "expansion of local public transport infrastructures," and "development based on 'soft' technology." Underlying all of this technology was an expectation of reduced risks of catastrophe. This new perspective on risk was further expressed through tougher safety criteria and the rigorous implementation of the precautionary principle (Sand 2000). The politics of risk were now based on the principle of "better safe than sorry." In the decade between 1986 and 1996, hardly anyone wanted to be seen as a supporter of large-scale technology. Remaining arguments in favor of existing plants were based on purely economic reasoning. Extensive technological risk was tolerated, at best, as a transitional phenomenon. Large-scale technology inadvertently had been put on the defensive; a number of technology experts disappeared from view and concealed their ambitious projects from the hostile Zeitgeist.

In essence, the decade between 1986 and 1996 was characterized by a clear defensive attitude of the risk assessment community, a growing distrust in scientific expertise and risk management agencies, and the formation of a powerful counter-elite who challenged the official risk assessments of the former experts and demanded new directions in risk evaluation and technological policies (cf. Stern and Fineberg 1996). Many people felt that the risk assessments of the pre-1986 period were discredited by the events of the year 1986.

Sociological Reflection on the 1986 Risk Crisis

More by coincidence than by systematic marketing, two important sociological books on risk were published in 1986: Ulrich Beck's Risk Society (see Beck 1992c) and Niklas Luhmann's Ecological Communication (see Luhmann 1989), followed by Risk (see Luhmann 1993). Both books were written before the three accidents, but their appearance shortly after the accidents—while the accidents were still very much alive in the public mind—could not have been timelier. Although the two authors represented vastly different theoretical backgrounds and pursued very different lines of thinking, they offered an explanation for the obvious gap between the reassuring risk assessments of the technical elite and the experience of disasters by the public. Beck's book entered the bestseller list in Germany within six months, and Luhmann's work was reprinted several times, despite the fact that people unfamiliar with the terminology of systems theory had a hard time—and still have a hard time—understanding the language. What was the reason for their immediate success? In the tension between private values and perceptions and the "rational" strategies of the "culture of experts," the disillusioned citizenry found the basis for their newly bred skepticism in the analyses by these two German


sociologists. Intuitive laypeople's judgments now had a home in the articulated foundation of these theoretical frames. Their arguments resonated with a population disgusted with the seemingly unshakeable beliefs of the technological elite and technological fixes (Cohen 1999; Horlick-Jones and Sime 2004).

Luhmann elevated layperson thinking in his central argument that the "culture of experts" had no more claim to ultimate truths than any other branch of culture or science. The decision to make a risk acceptable was simply a product of selective perception, as well as a result of partial rationalization processes in society (Luhmann 1993). Based on the observation that life expectancy of individuals was increasing in modern Western societies while perceptions of security were decreasing, Luhmann introduced the distinction between danger and risk (Luhmann 1990). Danger is what people are exposed to; risk is danger that people have weighed and chosen to accept. Since those who create risks expose others to dangers, an unshakeable incongruence existed between the risk takers and the risk bearers. This incongruence is independent from the perceived magnitude of risks or the socially constructed choices on how to reach a balance between danger and safety. Risk management institutions are not at fault—as Beck would claim below—but are inadvertently caught in the inevitable dilemma of regulating risks that decision makers have chosen to take while dangers are what people fear most. In this dilemma, risk creators (and, as a means for exculpation, public risk managers) use probabilistic reasoning for justifying and legitimizing their risk-taking behavior, while the risk bearers use perceptions of threat or alternative professional judgments as a justification for rejecting dangers. Both lines of reasoning are incompatible with each other, and there is no methodology available to reconcile the two different positions on risk-taking.
As a consequence, the political system is unable to resolve risk conflicts. We will return to Luhmann's arguments and their inherent paradox in more detail in Chapter 5.

In contrast, Beck's analysis pointed toward a different bias in technical risk assessments. Risk assessors in science and government systematically underestimate the real threats to society. This is because they rely on a methodology that truncates the full range of risk elements to meet the demands of risk assessment formulas. The result is to legitimize the ubiquitous exposure of society to incalculable risks (Beck 1994a). Beck argued that the expert's strategy is to legitimize political and economic interests by "relativizing" the incalculable and unlimited risk exposures. The multiplicative association between probability and the extent of potential damage is, according to Beck, a strategy of immunization on the part of the technology enthusiasts against rationally valid arguments for reasonable precautions against risk and, above all, against empirical evidence. The logic of the probabilistic analysis technique "invites" accidents to happen at any time, even in conditions of minimal probability. This makes the idea of antidotes—the subtitle of the German sequel to Beck's The Risk Society—particularly attractive. His critique allows for new methods of rationalizing risk and provides ammunition against those who neglect the cultural and political


features of risk prevention and follow the inexorable logic of expanding technological modernization (Beck 1995). Charles Perrow's work on organizations or Steve Rayner's work on fairness made similar claims even before 1986 (see Perrow 1999; Rayner 1984; Short and Clarke 1992). But the time was not yet ripe for a receptive audience. After the three major disasters, especially Chernobyl, Beck succeeded in bringing different traditions and empirical findings of sociology together, thereby providing a consistent and rounded explanation for the public confusion about risk that dominated the times. In addition, by placing the risk society in the context of reflexive modernization (to be described later), a subject that has been a top priority on the agenda of leading European sociologists such as Anthony Giddens (1990, 1994a: 56–109) and Scott Lash (2000; Lash and Urry 1994), he integrated the risk issue within an overarching sociological theme. We further discuss the sociological contributions of Beck and Giddens to our understanding of risk within the framework of reflexive modernization in Chapter 4. For now, we situate Beck in the historical and cultural context of 1986.

Beck's analysis has been a starting point for many new ideas in sociology. Although it took almost six years before his book was translated into English, the term "risk society" became a general theme of sociological work in Europe and, to a lesser extent, in the United States during the late 1980s and throughout the 1990s. The concept of risk society provided a new theoretical framing and vocabulary for understanding traditional sociological concerns, such as social class, as well as providing a broader purchase on the dominant features of advanced modernity. Moreover, Beck's interpretation of risk resonated with the predominant feelings of ambivalence about large-scale technologies and their proponents among many citizens in modern Western societies.
Finally, the works by Beck and Luhmann broadened our understanding of risk away from a narrow technical focus to a social scientific one. This shift of the risk debate to issues of equity, distributional effects, organizational constraints, and political legitimacy—and away from issues of magnitude and probabilities—left technical professionals with a much more limited role to play. As a result, risk became the domain of the social sciences, attracting the work of psychologists, sociologists, political scientists, and policy analysts. Nobody has better expressed this new paradigm than Sheila Jasanoff (1999: 150):

I have suggested that the social sciences have deeply altered our understanding of what 'risk' means—from something real and physical, if hard to measure, and accessible only to experts, to something constructed out of history and experience by experts and laypeople alike. Risk in this sense is culturally embedded in texture and meaning that vary from one social grouping to another. Trying to assess risk is therefore necessarily a social and political exercise, even when the methods employed are the seemingly technical routines of quantitative risk assessments. . . . Environmental regulations call for a more open-ended


process, with multiple access points for dissenting views and unorthodox perspectives.

Uneasy Balancing Act

The shift in emphasis toward social science was led by a great number of cognitive and social psychologists who, beginning in the 1970s (see, e.g., Fischhoff et al. 1978), had raised the study of lay perceptions of risk to the status of a principal research focus (for reviews, see Boholm 1998; Renn 2008c: 98ff.; Rohrmann and Renn 2000; Slovic 1987). These distinguished psychologists were able to demonstrate scientifically that the experts' preferred way of perceiving risk as a result of probabilistic thinking was diametrically opposed to the intuitive perception and evaluation of risk by laypeople (Fischhoff et al. 1978; see also Fischhoff 1985; Slovic, Fischhoff, and Lichtenstein 1981a, 1981b). Moreover, cognitive psychologists demonstrated that probabilistic thinking was difficult even for experts (McNeil et al. 1982). Scientific risk analysis aims at assessing the average expected damage per unit of time, independent of the subjective context of risk. Laypeople's concerns, however, emphasize qualitative notions such as the amount of dread associated with a risk, voluntary consent to risk taking, control over risk, how the risk is distributed across the population, and, above all, the potential for disaster (the worst possible scenario, independent of the probability of its occurring). These features carry far more weight in perceiving and evaluating risk than the cold numbers of probability or outcome (Boholm 1998; Rohrmann and Renn 2000). To resolve this conflict between expert and layperson, many analysts from the social sciences advocated mutual coexistence based on an implicit consensus among risk scientists. Until recently, that rapprochement was accepted worldwide: for a rational governance of risk, it is necessary to recognize both elements—scientific risk assessment and risk perception by laypeople—as two legitimate parts of the process.
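The expert notion referred to here—risk as average expected damage per unit of time—can be written out explicitly (a standard textbook formulation, not a formula taken from the studies cited):

```latex
% Expert (probabilistic) notion of risk: sum over possible adverse
% outcomes i, each with probability p_i per unit of time and damage d_i:
\[
  R_{\text{expert}} \;=\; \sum_{i} p_{i}\, d_{i}
  \qquad \text{(expected damage per unit of time)}
\]
```

The lay evaluation described in the same paragraph weights additional qualitative attributes—dread, voluntariness, controllability, catastrophic potential—that do not reduce to this product of probability and magnitude, which is precisely why the two rationalities talk past each other.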
Thus, risk managers began looking for a third way to manage risk, one based neither exclusively on expert opinion nor exclusively on lay perception. The risk psychologist Peter Sandman, an influential adviser to the U.S. government and industry, found a simple formula for this need: risk is to be understood as a function of expert risk analysis ("hazard") and public outrage (Sandman 1988: 164).

The coexistence of analysis and perception matched the presupposition of postmodern thinkers critical of scientific truth claims with their illusion of objectivity. The implicit endorsement of this broad and influential tradition of thought added depth to the claim for balance. After all, any opinion or truth claim could be seen as acceptable or not depending on one's social position and perspective. Hence, competing versions of truth, justice, and fairness could be dealt with only on a weighted basis, taking into account the various versions (Jasanoff 2004; Liberatore and Funtowicz 2003; Rosa 1998a). But Luhmann diagnosed the dilemma of postmodern society as the inability to fully integrate the variety of social life designs and rules for truth into a


reasonably balanced concept. If this is true, then society would have no choice but to live with pluralist and competing concepts of risk. In this postmodern world, experts would have no more clout than any other group in society. Truth and morality are thus negotiable, and risk becomes a question of individual and collective perception shaped by competing worldviews. (For a review, see Jaeger et al. 2001: 193–208.) If an influential group of people perceives a technology as too risky, then this risk has to be minimized, no matter how beneficial the technology is expected to be according to expert estimates. Failing this test, the technology has to be sacrificed on the altar of unacceptability. Such was the fate of nuclear energy in many countries. In such a situation, attitudes toward risk and risk politics are largely the end product of a process of social communication.

The promise of coexistence lost its attraction during the 1990s. After more than a decade, memories of the three accidents of 1986 faded, with the result that old fears lost their power over the public conscience. Times changed. The call to return to technical expertise rather than to be guided by public concerns appeared in a multitude of risk publications and speeches (cf. Breyer 1992; Coglianese and Lazer 2003; Graham and Wiener 1995). Perhaps no single event more succinctly captured this transformation than the speech given by John Graham at the annual European Society for Risk Analysis conference in 1996, held at the University of Surrey, Guildford. Assembled members nearly choked on their canapés when Graham, president of their international headquarters, gave the lunchtime address (Graham 1996). Instead of showering them with the usual pleasantries, Graham launched an attack on his social-science colleagues.
He argued that far more people than necessary were dying because environmental and health policies are based on the irrational perceptions and fears of laypeople rather than on objective risk calculations conducted by experts. In this way, great risks such as nicotine and even the pollution of homes from radon are downplayed, while negligible risks, such as the development of dioxin in incineration processes, are exaggerated into catastrophes. This trend, Graham continued, was made worse by the growing (false) notion of democratizing specialist knowledge. Out of an understandable wish to conform to an egalitarian Zeitgeist, risk social scientists had opened a veritable Pandora's box that would lead to a misleading fusion of the differences between reality and what is perceived as reality. What was needed was a sharp turning away from that pleasing illusion of equality between expert knowledge and lay perception. Graham stressed that the perception of risk by laypeople should not play a central part in the governance process of risk regulation.

This speech left none of the social sciences untorched. It was aimed not only at sociologists such as Beck and Luhmann and at the psychologists who were so instrumental in elevating the role of lay perceptions in risk debates, but also at social constructivist scholars such as Brian Wynne and cultural theorists such as Mary Douglas and Aaron Wildavsky. The attack was to be evenly distributed among all those disciplines that had the temerity to say that people's thoughts and feelings should trump hard numbers.


The Comeback of Technical Analysis

During the first decade following the environmental disasters of 1986, technical risk professionals were placed in a defensive position. Many technical professionals either withdrew into their own communities or tried (sometimes desperately) to incorporate public outrage into their decision or assessment models. But after 1996, the wind changed direction again. The previously rejected opinions of experts reemerged with a certain amount of Schadenfreude, or satisfaction over the decline in influence of the cultural approach to understanding risk. The disasters of 1986, in retrospect, were not such terrible disasters after all. The Rhine recovered from the Schweizerhalle accident much more quickly than even the most optimistic opinions would have dared to predict. Until very recently, only one additional disaster associated with space flights could mirror the Challenger episode. And, according to expert toxicologists, even the great reactor accident of Chernobyl may have caused fewer deaths than the public was led to believe (International Atomic Energy Agency 2006). Thus, the seemingly apocalyptic incidents of the year 1986 became simply technological examples in a long succession of tragic but ultimately familiar disasters—such as dams breaking, hurricanes, floods, and earthquakes—that were embedded in the fabric of societies. That fabric was fashioned from a human history replete with disasters from its very beginning.

So have we come full circle? It appeared as if many risk scientists were once again reverting to their conventional style: that risk decisions and the politics of risk would have to be measured in real damage expected per unit of time. Does this hail the end of the risk society? Our short answer here, to be elaborated throughout the book, is "no."

The sudden revitalization of an older view about the significance and role of science and technical expertise for risk analysis and management did not occur by chance.
There were several reasons for this new development: (1) limited budgets; (2) evolving risk perception studies; (3) a lull in technological opposition; (4) disenchantment with growth; and (5) the recognition of unrealistic goals.

Funds for risk reduction became far scarcer than they had been in the 1980s and early 1990s. As long as risk managers had large budgets to spend, it was relatively easy to please both the experts and the public. Money was spent simultaneously on the top entries of the experts' and the public's lists of risk reduction priorities. After the collapse of the new economy, governments were forced to place more emphasis on efficiency. Every penny spent on minimizing risk today would be used at the cost of realizing another of society's objectives. The cost of improvements to environmental quality accelerated. At the same time, there was little money for other essential areas, such as education and preventative health care. One example has been the debate about tougher limits to electromagnetic radiation (such as from cell phones), whose implementation would


cost millions (Kemp and Greulich 2004; Kunsch 1998). There was little doubt among the experts that forgoing such measures was the appropriate way to make policy, since there was no evidence of a health risk. Policymakers quickly concurred, since implementing tougher measures could not be justified under conditions of tight budgets. With tighter budgets and less public attention to environmental issues, risk managers were unable and unwilling to please all “cultures” simultaneously (for the United States, see Zeckhauser and Viscusi 1996; for Germany, see Renn 2006). Facing the choice between respecting public perceptions and following the advice of risk scientists, many policymakers opted to follow scientific advice, even in the face of strong public opposition. Satisfying the public turned out to be more difficult than first anticipated. During the 1970s and 1980s, data on public risk perception were often collected at a highly aggregate level (see Slovic, Fischhoff, and Lichtenstein 1981a, 1981b; a critical review at the time was Otway and Thomas 1982). The priority list of public concerns was normally based on the average values of individual risk perceptions, reflecting either estimates of small groups of respondents or mean values of larger samples. Most risk managers were not aware that these mean values tended to obscure the substantial variance found among individuals and social groups. More sophisticated research designs revealed a wide array of risk estimates among individuals and social groups and a variety of risk priorities depending on group affiliation, personal values, and social orientations (Dake 1991; Drottz-Sjöberg 1991: 163; Gould et al. 1988). Which estimate should then be used for risk management: the average of all groups, the mean value of all respondents with college degrees, or the average of all women (since they tend to respond more cautiously than men to most technological risks)?
The “gap” between experts and the public was thus transformed into numerous “gaps” among experts and among publics (Fischhoff 1996). Confused by this spectrum, many risk managers abandoned the idea of public input altogether and returned to the cloistered haven of institutional or technical expertise. Public opposition to technology and other risk-inducing activities became less pronounced than in previous years. Many former opponents of technology became professionals in risk management and adopted at least parts of the risk assessment methodology of their former opponents (Dietz, Stern, and Rycroft 1989). In Western Europe, particularly in Germany, the environmental movement and the Green parties absorbed many risk issues. Similar to the delegation of social concerns to professional groups in the past, many environmentalists delegated their risk-related concerns to professional caretakers of environmental risks. With the Green Party’s success in Germany in shaping policies from inside the political system came the need to compromise and forge political deals. Once adapted to the language, reasoning, and culture of the administrative and political elite, representatives of Green parties experienced a growing distance between themselves and their fundamentalist base. The base, for its part, diverted much of its energy from risk issues (except for a continuing concern for nuclear risks) to issues of sustainable development


(such as Agenda 21) or to active involvement in local siting controversies. The latter activities tend to have little influence on national or even regional politics (Brion 1988; Rosa 1988). With less public support, the concerns of environmentalists and other public groups became less visible in the political arena and, as a result, less important for designing risk policies. The belief in unlimited growth, a driving vision of modernity, proved to be unrealistic. Pioneering work on risk and the environment helped bring about this realization. As early as 1972, the Club of Rome pointed to the limits of economic growth (Meadows, Meadows, and Randers 1972, 1992). The dream of continuing growth and increasing prosperity was replaced with the risk of a looming ecological breakdown. For two decades there was some awareness of ecological limits (Rosa, Diekmann et al. 2010). But during the late 1990s, other limits became visible and palpable in all areas of society: the limits of the welfare state, of economic competitiveness (in Germany, the catchphrase is “Standort Deutschland”—Germany as a global player but also a slave to global market conditions), of social integration, and of developing a more pluralist morality. Frugality, cost efficiency, and instrumental priorities became the key policy concepts in societies that now had to compete in the global marketplace (Marshall 1999). An approach to risk governance that tries to satisfy all societal demands for risk management was no longer feasible in the face of competing objectives and globalization (Coglianese 1997). With a globalized economy came the problem of growing industry mobility. To circumvent risk regulations at home, transnational firms located facilities in poor countries with weak regulations or lax enforcement. These “pollution havens,” in turn, nudged the recipient countries into a race to the bottom.
For the exporting countries, this meant a loss of jobs at home. For example, Ciba-Geigy (Chemische Industrie Basel-Geigy) finally gave up the long struggle to get the people of Basel to accept a biotechnology plant and instead moved to a nearby site in France. Only later did the citizens of Basel come to the painful realization not only that they had lost a large number of jobs and a hefty tax income in this victory, but also that they were still bearing a large share of the risk because of the proximity of the plant. The most ironic outcome was that Ciba-Geigy and Sandoz (whose Schweizerhalle plant was the site of the 1986 accident) merged to become Novartis in 1996. By the end of the 1990s, the pendulum appeared to swing back to a new era of expert domination in risk policies. At the same time, however, many policy analysts and social scientists warned that ignoring public perception may not only alienate those who have a stake in the decision-making process and violate democratic principles but also forfeit the potential input into the decision-making process that the public could provide (Dietz, Stern, and Rycroft 1989; Jasanoff 1999). A failure to find a practical procedure for integrating expertise and laypeople’s input could rekindle conflict between the “culture of expertise” and those bearing the risks. That conflict could prevent society from governing risks in accordance with rational criteria of risk reduction and fair burden sharing.


The Appearance of Intentional and Systemic Risks

Other significant changes took place in the second half of the twentieth century. At the dawn of the new millennium, the focus of the risk debate moved toward the field of social risks—in particular, terrorism, sabotage, mobbing (workplace bullying), depression, suicide, and other difficult-to-grasp causes of potential harm (Organization for Economic Cooperation and Development 2003). This change in focus defused the debates about expertise versus perception, social constructivism versus realism, and science-based versus concern-based policy responses. The new social risks seemed to render past conflicts nearly obsolete. In contrast to the plausible assumption that risk consequences are real but their causes are somehow socially constructed, some of the new risks tend to reverse this relationship: the resulting risk symptoms appeared to be socially or psychologically constructed, while many of the causes were quite real (as with many psychosomatic diseases). For example, when France decided to accept post-traumatic stress as a disease that car insurers were obliged to compensate, the number of post-traumatic stress patients more than quadrupled within two years (Renn and Jaeger 2010; Young 1995). Social constructions such as fundamentalist convictions and sacred beliefs, not differences between experts and laypeople, are the main drivers of terrorist risks (Atran 2010). While the causes are socially constructed, the effects are anything but: the effects of bombs and other forms of violence are all too real. Probabilistic modeling of such risks is, at best, only marginally helpful in understanding them (Rosa, Diekmann et al. 2012). It became clear that this new collection of risks added challenges to the already complex process of risk governance.
It also became clear that social-science expertise was badly needed to help understand the psychological bases and social reasons for these risks (Aven and Renn 2009a). Yet the most dramatic challenges to governing risk came with the emergence of a new type of risk, called systemic risk (International Risk Governance Council 2005; Organization for Economic Cooperation and Development 2003; Renn and Keil 2009), discussed in greater depth in Chapter 7. Briefly, systemic risks are states of uncertainty in which an entire system is under threat of breakdown or collapse. Environmental examples include threats to the global climate system due to the emissions of greenhouse gases and threats to the oceans due to acidification. Economic examples include the threat of collapse of entire financial systems or markets. Technological examples include threats to power grids and air traffic control systems. Systemic risks have evolved from increased vulnerabilities and interconnections among geographic areas, as well as functional dependencies among the various sectors of society, such as the coupling of the physical world with the economy, social relationships, and political cultures. Globalization and world trade have increased the potential for systemic risks. Given their upward trajectory, there is every reason to expect systemic risks to become the major challenge of governance in the years to come.


At a time when terrorism and disaster potential are on the increase, the resilience and coping mechanisms of many societies appear to have become less effective, and there are deep questions about whether societies can develop effective mechanisms of governance. The governance challenge is deepened by an additional challenge: a rise in vulnerability due to demographic and other structural changes in society. Vulnerability has increased as a result of the following factors:

• The speed of urbanization: probably two-thirds of the world’s population will live in cities after 2020 (United Nations 2009).
• Insufficient response in building infrastructure to cope with expanding urbanization and vulnerability to hazards (Organization for Economic Cooperation and Development 2003: 44ff.).
• The accelerated pace of resource use and the production of wastes (Rosa, Diekmann et al. 2010).
• The coupling of independent risk sources due to multiple interactions of natural disasters with chemical, technological, lifestyle, and social risks (Ezell 2007).
• The loss of traditional management capabilities due to increasing geographic mobility and cultural de-rooting (Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen 2000).
• The increase in social pressures and conflicts (Giddens 2000a).
• The insufficient capacity for mitigation and contingency management (International Federation of Red Cross and Red Crescent Societies 2000).

Given these recent challenges, societies urgently need to develop governance strategies for dealing with the new social and religious risks, with systemic risks, and with increased vulnerability. In particular, new conceptual frameworks and methodologies, as well as institutional solutions involving the different levels of governance (local, regional, national, international, and global), are needed to provide tools to understand and manage these risks.
The new challenges became even more pronounced in the aftermath of the earthquake and tsunami in Japan in March 2011, which resulted in the release of radioactivity into the air as nuclear fuel rods melted in reactors at the Fukushima Daiichi plant (Hirano et al. 2012). The combination of a natural disaster coupled with a technological failure, the complexity of the technical system, and the organizational inability to cope with the disaster are all indicators that technical efforts alone to assess and manage risks of that magnitude were inadequate, to say the least. The new risk landscape has presented unprecedented challenges to risk governance. One of the most challenging features of the landscape is the interpenetration of physical, environmental, economic, and social risks. The central question now is how to design suitable approaches and instruments for characterizing the new suite of risks, such as terrorism and systemic risks, with


their herculean potential. Those approaches will need to take into account the health-related, environmental, financial, and political aspects of risks. They must also take into account strategic policy concerns embedded in these risks. The challenge is magnified by the new systemic risks that are highly complex in their ripple effects on different levels of the physical and social world. It is further magnified by the wide-ranging uncertainties over long-term effects and the numerous viewpoints, opinions, and convictions of different concerned actors (Renn et al. 2002; Renn and Keil 2009). The new risk landscape makes it perfectly clear that it is no longer sufficient to look at risk with the truncated vision of technical risk analyses. It is insufficient to focus exclusively on probabilities and untoward consequences when investigating and managing risks. Risks may trigger amplifying ripple effects that expand through a whole sequence of secondary and tertiary domains of a social system or to other linked systems. (On social amplification of risk, see Kasperson 1992; Kasperson et al. 2003.) The tendency for systemic risks to trigger ripple effects across traditional policy boundaries has fueled much concern and fear, particularly in relation to financial institutions and climate change. Unfathomably complex cause-effect chains have already turned insurance policies into a lottery for many companies. As the number of risks from new technologies, terrorism, or systemic interdependencies grows, the boundaries between the traditional domains of risk assessment, risk perception, and social coping mechanisms become less pronounced. Another complicating feature has to be taken into consideration: systemic risks are likely to be transboundary or even global in their reach (Linnerooth-Bayer, Löfstedt, and Sjöstedt 2001). There is much concern and fear among citizens about complex global interconnections.
The recent, often violent demonstrations against global multinational organizations and institutions reveal the cultural divide between those who believe they will benefit and those who believe they will lose from a global governance model. This links back to social risks that have their roots in societal changes with respect to identity, justice, and legitimacy. The promises of new developments and technological breakthroughs need to be balanced against the potential ills that the opening of Pandora’s box may entail. This balance is not easy to strike, as opportunities and risks emerge in a cloud of complexities, uncertainties, and ambiguities. (Chapter 8 elaborates these three issues.) The dual nature of risk as a potential for technological progress and improved well-being, on the one hand, and as a real threat to society and its members, on the other hand, demands a dual strategy for risk governance. Public values and social concerns may act as the driving agents for identifying those topics for which more refined assessments are judged necessary or desirable. As much as new scientific assessment methods are needed to broaden the scope of research targets and to improve the handling of uncertainty, the expertise of the social sciences will remain necessary to inform policymakers, as well as the various attentive audiences in a plural society, about new social trends and emerging public concerns. So armed, we will be better prepared to


develop methods of social foresight and to provide models for improved risk communication, which are designed to bring technical analyses in line with the social and cultural needs of societies. There is no shortage of new problems and challenges in contemporary risk governance that require the expertise of social science. We next look at the underlying foundation that dominates the social science of risk, as well as critiques of it.

II Risk and Social Theory

3 Overarching Perspective
The Rational Action Framework

It is impossible for the behavior of a single, isolated individual to reach any high degree of rationality. The number of alternatives he must explore is so great, the information he would need to evaluate them so vast, that even an approximation to objective rationality is hard to conceive. Individual choice takes place in an environment of “givens”—premises that are accepted as the bases for his choice; and behavior is adaptive only within the limits of these “givens.”
—Herbert Simon, Administrative Behavior (1976)

The Foundation of the Rational Action Framework

Any theoretical examination of risk must begin with a core recognition of the logical structure underpinning the concept. The field of risk has been dominated by an overarching framework with both behavioral and normative dimensions. That framework is founded on the principal insight that humans are percipient organisms capable of assessing key features of their environments, determining alternative courses of action, and thoughtfully choosing one of those courses—one that best meets an objective. So the expectation is that rational humans will take actions—such as avoiding tobacco smoke or wearing seatbelts—because doing so lowers their health and safety risks. Since good health and safety are unquestioned desiderata, it is rational for the individual to take actions that maximize them. We refer to this general orientation in the risk field as the rational action framework (RAF).1

1. Sociology from its inception has had a love-hate relationship with the rational action framework. The founding trinity of sociology—Max Weber, Émile Durkheim, and Karl Marx, with added ammunition from Georg Simmel—was deeply skeptical of such agency-based, individualized theories. Herbert Spencer, in contrast, attempted to integrate the rationality of the market with evolutionary change in society. George Homans, recognizing the disappearance of the individual and the individual’s agency, embraced both the foundations and fundamental features of the rational action framework in his theoretical framing. For a review of this history, see Jaeger et al. 2001: 58–68.

Below, we outline


the general contours of this framework, followed by its more refined structure and its specific theoretical specifications. We then examine the strengths and weaknesses of this framework. The philosophical idea underlying RAF for explaining risk has three levels of abstraction: rational action as worldview (RAW); rational action as paradigm (RAP); and rational action as specific theory (RATh). The levels are nested within one another. At its broadest level, RAW presupposes that human beings are capable of weighing alternatives to act in a strategic fashion by linking decisions with outcomes (Dawes 1988; Shrader-Frechette 1991). Humans are assumed to be purposive agents who are goal-oriented, have options for action available, and select options that they consider appropriate to reach their goals. In this broadest form, most people in the Western world take this worldview as a presuppositional given. With the exception of postmodern theories, there is hardly any disagreement among the different schools in the social sciences that rational action is not only possible but also a “touchstone” of human action (Barnes and Bloor 1982; Rosa 1998a). If the postmodernists were correct, risk studies based in social science would make little sense except as socially constructed knowledge systems amenable to rhetorical criticism or deconstruction. The second, more refined level of rational action is RAP. It is an overarching framework for theorizing human risk choices and actions. It embeds a whole set of more specific assumptions. Many of the variety of theories on risk and uncertainty (RAThs) are framed by this second level of abstraction—that is, by RAP and its assumptions. The defining assumptions of RAP refer to human actions based on individual decisions. The most important of them are:

• One-dimensional view of rationality (all actions can be reduced to choice situations).
• Analytical separability of means and ends (people as well as institutions can in principle distinguish between ends and means to achieve these ends).
• Goal-attainment motivation (individuals are motivated to pursue their own goals when selecting decision options).
• The maximization or optimization of individual utility (human actors select the course of action that promises to lead to more personal satisfaction than any other available competing course of action).
• The existence of knowledge about potential outcomes (people who face a decision can make judgments about the potential consequences of their choices and their likelihood).
• The existence of human preferences (people have preferences about decision outcomes based on values and expected benefits).
• The predictability of human actions if preferences and subjective knowledge are known (rational actor theory is not only a normative model of how people should decide but also a descriptive model of how people select options and justify their actions). (Jaeger et al. 2001)


This set of fundamental assumptions linked to individual behavior is also extrapolated, on the basis of methodological individualism, to the context of collective decision making or the collective effects of individual decisions. These contexts refer to three classes of phenomena: (1) human actions that reduce or enlarge the potential for actions of others (external effects); (2) a multitude of rational (individual) actions that create social structures such as markets or governance institutions, where the aggregate effect of many rational actions provides predictability and consistency even in the absence of a “self-conscious” collective will;2 and (3) actions that are designed and can be implemented by more than one actor, or other forms of actions that lead to or stem from interactive effects among a variety of individual actions (structural effects such as class biases). The rational action framework has a third level of abstraction: RATh. Here it is assumed that humans maximize their satisfaction or utility by choosing from among different options the one that provides the maximum or optimum payoff. Just as the invisible hand of the market coordinates individual actions into a beneficial social outcome, so do individual risk choices lead to such an outcome (Wright 1984). In its collective version, rational behavior by an individual leads to a social and economic equilibrium within a web of the rational actions of others—as long as humans have equal access to resources and information and are allowed to compete with one another. This third version of RAF is closely linked to classical economic theory—for example, utility theory. But it also includes utility theory in psychology, rational choice theory in political science, and rational action theory in sociology. Its core presupposition, framed by RAF, rests, of course, on the foundation of rationality outlined above.
Rationality is the presumed process used for the efficient selection of means to reach predefined goals. The main thrust of the third level is the aggregation of factors that govern individual actions to the realm of collective action. This aggregation process has implications for the institutions and social structures engaged in governance. Along with the assertions about individual actions above, the realm of collective actions within the third level of RAF constitutes a set of additional claims:

• Methodological individualism (all aggregate social actions can be interpreted as a complex network of individual actions).
• Treatment of organizations or social groups (via methodological individualism) as “virtual” individuals (organizations as persona ficta act like individuals by selecting the most efficient means to reach predetermined goals).
• The possibility of extending individual preferences to aggregate preference structures (institutions that aggregate individual preferences, such as markets or governance bodies, resemble not only the sum of individual preferences but also their combined collective interest).
• Availability and effectiveness of organizational principles and practices (markets, democracy, negotiations, and others) that provide a systematic link between individual utility maximization and social welfare (in particular, the so-called invisible hand of markets).
• Reasonable knowledge of individual actors about the effect of social interferences (actions of others that interfere with one’s actions) on one’s own probability of attaining goals (such as the handling of complexity).
• Indifference to the genesis and promulgation of values and preferences (values are seen as preexisting or exogenous; RAP can make predictions only on the premise that preferences are given, not created in the decision process).
• Independence between allocation of resources and distributional effects (it is rational for societies to assign priority to the most efficient allocation of resources, regardless of distributional effects, before redistributing the wealth among their members according to preconceived principles of social justice). In some laissez-faire versions of RAP, a Darwinistic selection rule for distributing wealth is assumed to be most appropriate because the most successful entrepreneur should also reap most of the benefits. Modern versions inspired economists of the neoliberal school to emphasize the distinction between allocation of resources and equitable distribution (Hayek 1948). RAP models are appropriate and perhaps necessary for ensuring the most effective and efficient production and exchange of goods and services. However, they are not sufficient. They need auxiliary ethical norms or moral principles to distribute equitably the added value among society’s members.3 (Renn et al. 1999)

2. “Social facts,” in the language of Durkheim (1982: 50–59), represent collective behavioral patterns that prevail beyond individual actions and constitute complex structures.

What follows is a critical examination of the rational action framework. We avoid a criticism of RAW, since it is the virtually universally accepted presupposition of the Western world. We also avoid a detailed critique of the variety of risk theories (RAThs) that speak to the topic of risk. Rather, our critique is at the paradigmatic level (RAP), and that is the terminology we adopt for our arguments. At the same time, it is important to recognize that the various levels of abstraction constituting the rational action framework are nested. Thus, while a critique at the paradigm level—RAP—does not assault rational action as worldview (RAW), it is simultaneously a critique of all the specific theories (RAThs) framed by that paradigm.

3. This sufficiency complement to the logic of RAP, we argue later, is provided by Habermas’s theory of communicative action (Habermas 1984, 1987) as discussed in Chapter 6.


A Critique of RAP

The Optimization Principle

Underlying the individual as well as the collective presuppositions of RAP is the basic idea that all human actions can be described as problems of maximization or, in more contemporary versions of RAP, optimization. The social world is divided into countless decision problems, each requiring the generation of options for future actions and some type of algorithm (decision rule) to choose among available options. This algorithm is meant to guide an individual or collective body to optimize its own benefits. The algorithm may be well founded in many social arenas, such as markets and political debates. It may, however, be unfounded for other social and governance structures, such as those that create mutual trust among actors, build individual and social identity, gain ontological security, construct solidarity among people with similar interests, or monitor compliance with collective decisions. Although these latter social activities are certainly goal-oriented, thus fitting into the first level of abstraction (RAW), they do not lend themselves to a process of optimization. The assumptions of neither the second nor the third level of RAP are met under circumstances in which maximization or optimization is not the primary goal of action. It makes no sense to think of trust, identity, solidarity, or monitoring as resources to be maximized or optimized, as presumed in the RAP tradition. These social phenomena are products of communication and mutual understanding, elements of social life itself that require preexisting cultural meaning and constant feedback to reach acceptable decisions and stability. They are always endangered by perceptions of strategic or disruptive social actions and need to be fueled by reciprocal actions and an exchange of symbols that emphasize shared values and convictions and a willingness to cooperate.
This process of offering, sustaining, and symbolically reinforcing cannot be described adequately as a calculated means-ends process. Nevertheless, a range of rational actor theorists have made attempts to frame these phenomena in the language of RAP (e.g., Opp 1989). These attempts include many additional ad hoc assumptions and qualifying conditions beyond the RAP framework. As the historian of science Thomas Kuhn (1970) pointed out in his classic study of scientific paradigms, a scientific theory becomes weak and is eventually abandoned when it needs too many ad hoc adjustments to subsume phenomena that do not fit theoretical predictions. A classic historical example is the Ptolemaic view of the solar system, with its ad hoc epicycles, which eventually was replaced with the Copernican, heliocentric view (Rosa 1998a). Our main point here is that the principal limitation of the rational action framework is its extension beyond its scope of applicability. RAP is inapplicable to the many areas of social life and governance that are not underpinned by instrumental gain, areas that cannot and should not be regarded as maximization or optimization problems. It assumes that individuals pursue the


three requisite steps of decision making: option generation, evaluation of consequences, and selection of the most beneficial option. Without a doubt, many social situations can be described, or at least simulated, in such a fashion, making RAP a reasonable theoretical strategy for a well-defined range of human actions. A prospective purchaser of an automobile would be wise to follow the prescribed steps of RAP, deciding on transportation and style needs and comparing models and prices in reaching a final choice. However, there are many other situations in which the model of decision making as an act of optimizing outcomes appears as a weak, if not misleading, descriptor of actual behavior, let alone of what actors project will happen. Furthermore, social reality becomes impoverished if all actions have only one common goal: to maximize or optimize one’s own utility. Balancing social relations, finding meaning within a culture, showing sympathy and empathy to others, and being accepted or even loved by other individuals belong to a class of social phenomena—termed the “lifeworld” by phenomenologists and adopted by Jürgen Habermas—that do not fit neatly into the iron rule of RAP-based theory (Habermas 1970). Individuals may consciously or unconsciously behave in accordance with the means-ends optimization process of RAP in appropriate decision arenas, but certainly not all the time. It is this fault line that leads to a major challenge to how we understand and govern risks. A major task of post-RAP theories is therefore to identify and define additional schemata of social actions that are based on intentionality but rely on non-RAP processes to describe the categorical and normative processes of governance choices. This criticism is not directed at the idea of RAF per se, for there clearly are decisions that are well served by that instrumental rationality.
The criticism is directed at the imperialistic extension of RAP to social phenomena that do not meet its assumptions. Importantly, this unsuitability occurs at both the individual and collective levels. Systematic empirical deviations from the predictions of RAP are treated as anomalies or noise. It is our judgment that the number and breadth of these anomalies have reached the paradigm-challenging level Kuhn (1970) associated with scientific theory. More fundamentally, the practice of treating deviations from theoretical prediction as anomalies brings into sharp relief the question of whether optimizing strategies underlie all classes of human and social actions. This question has not been adequately addressed by the defenders of RAP.
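For concreteness, the three-step schema can be rendered as a small expected-utility calculation. The sketch below is purely illustrative; the options, probabilities, and utilities are invented, and real formulations of RAP are of course far richer:

```python
# Illustrative sketch of the RAP decision schema.
# Step 1: option generation -- enumerate candidate courses of action.
# Step 2: evaluation of consequences -- probability-weighted utilities.
# Step 3: selection -- pick the option with the highest expected utility.
# All options, probabilities, and utilities below are hypothetical.

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over an option's outcomes."""
    return sum(p * u for p, u in outcomes)

def rational_choice(options):
    """Select the option with the highest expected utility (step 3)."""
    return max(options, key=lambda name: expected_utility(options[name]))

# Hypothetical car-purchase options, echoing the example in the text.
# Each option maps to a list of (probability, utility) pairs.
options = {
    "compact car": [(0.8, 60), (0.2, 20)],
    "family sedan": [(0.9, 50), (0.1, 40)],
    "luxury SUV": [(0.5, 90), (0.5, -10)],
}

print(rational_choice(options))  # prints: compact car
```

On these invented numbers the compact car wins (expected utility 52 versus 49 and 40); the point is only that the schema is a mechanical optimization once options and utilities are fixed, which is precisely the assumption this chapter goes on to question.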

RAP and Individual Choice

The most prominent debate about RAP stems from the empirical evidence that humans more often violate than conform to the rules of rational action (Jaeger et al. 2001). RAP seems plausible as a normative standard for judging individual action if it can be framed as a decision problem. But as a descriptive tool for predicting people’s actions, it has been shown over many studies to have limited validity (Gigerenzer and Selten 2001). This is true even if the analyst has access to the preferences and subjective knowledge of the individual
decision maker. Most psychological experiments demonstrate only modest correlations between rationally predicted and intuitively chosen options (Dawes 1988; Tversky and Kahneman 1974; von Winterfeldt and Edwards 1986). Furthermore, when people are asked in “thinking-out-loud” experiments how they arrive at their decisions, they articulate a wide variety of rationales. Only a few of these have any resemblance to the presumed thought processes of rational actors (Gigerenzer 2000). The accumulation of observed deviations from RAP-based decisions led two leading cognitive scientists to summarize the empirical findings with this conclusion: “the logic of choice does not provide an adequate foundation for a descriptive theory of decision making. We argue that the deviations of actual behavior from the normative model are too widespread to be ignored, too systematic to be dismissed as random error, and too fundamental to be accommodated by relaxing the normative system. We conclude from these findings that the normative and descriptive analysis cannot be reconciled” (Tversky and Kahneman 1987: 82ff.). In response to this empirical challenge, proponents of RAP in psychology and economics have proposed five modifications that would bring the theory more in line with the actual observations of behavior. First, they have claimed that the procedures prescribed by RATh serve only as analytical reconstructions of the intuitive choice processes in humans. Whether humans follow these prescriptions consciously or not does not matter as long as the outcome of the decision process is close to what rational theories predict (Esser 1993; Hedstrom 2005).
Second, people use simplified models of rationality, such as the lexicographic approach (i.e., they first choose the most important attribute that the options should accomplish and then choose the option that fares best on this specific attribute), elimination by aspects (they choose the option that meets most of the aspects deemed important), or the satisficing strategy (they choose the first option that reaches a satisfactory standard on most decision criteria). All of these are strategies within the Simon and March (1958) idea of bounded rationality (elaborated by Gigerenzer 2000; Gigerenzer and Selten 2001). Under most conditions, such choices produce suboptimal outcomes (Simon 1976; Tversky 1972). These suboptimal outcomes are acceptable to the individual because the increase in utility would be less than the cost of reaching such a decision, or because the time saved by a satisficing solution is more valuable than the additional utility expected from an optimal decision. Recent experiments have shown that people use more complex and elaborate models of optimization when the decision stakes are high (large potential payoffs), but they prefer simplified models when the decision stakes are low (Miller 2006). Introducing simplified models of suboptimal decision making substantially increases the validity of predictions about which decision options will be chosen. Third, most applications of RATh equate utility with an increase in material welfare. However, there is a large and rapidly growing literature in economics and psychology that challenges conventional measures of welfare (Fischhoff, Goitein, and Shapiro 1982; Frey 2006; Green and Shapiro 1994; Sen 1977).
Furthermore, considerable empirical evidence demonstrates an upper bound to objective and subjective conditions of welfare at modest levels of income (Knight and Rosa 2011). Still further, people often feel an increase in satisfaction absent material gain, such as when they act altruistically or when they enhance their reservoir of symbolic gratifications. Although risk and other decision-making experiments are usually designed to exclude these factors (or to keep them constant), it is not clear whether symbolic connotations (such as accepting money for a trivial task) may play a role in the decision-making process (Gigerenzer 2000). Some of the “thinking-out-loud” experiments mentioned earlier revealed that some subjects chose the most cumbersome decision option because only then did they feel they deserved the promised payoff. Fourth, participants in such experiments often respond to the conditions of the situation by taking on a strategic posture (Camerer 2003). They assume that their choices will depend on actions and reactions of others even in experimental settings designed to observe individual actions without any interference from other actors. By deliberating over how others could influence their preferred choices, participants may select a suboptimal solution because such a solution avoids the strategic response assumed of others. Game theory models, although normatively inappropriate for these conditions, may actually offer better predictions than those based on expected utility. Fifth, some analysts claim that anomalies can be traced not to problems of the rational logic but to a common problem in experimental research: the problem of external validity. Can we expect laboratory results to mirror actual, real-world behavior?
Critics claim that the artificial situation of a laboratory and low motivation, as well as the “playful nature” and compliant disposition of the participants (normally undergraduate students), are the main reasons for the many observed deviations from the predictions of RAP (Heimer 1988). This criticism was sustained in a recent study (Prior and Lupia 2008) in which subjects were randomly assigned either to a control group that mimicked conventional surveys or to an experimental group whose members were paid $1 for every correct answer to questions about political knowledge. Compared with the control group, members of the incentive group answered 32 percent more questions correctly. Hence, the simple motivation of a small monetary reward apparently led participants to engage more fully in the task, resulting in significantly better performance. Aside from the lesson to be learned here about the effects of motivation on performance in the artificial setting of experimentation, this finding points to a key interpretation we have of RAP—namely, that under well-defined, restricted conditions, behavior does match prediction. Hence, in real-life situations with real stakes and real people, individuals might be more inclined to use rational, or at least boundedly rational, models to select their preferred options. But such cases are the exception and not the rule. Yet comparative studies among students, laypeople, and experts show that while there are intergroup differences, all exhibit perceptions that deviate
from the theoretical expectations of RAP (Slovic, Fischhoff, and Lichtenstein 1981a, 1981b). Our compass here precludes a detailed review of all of these arguments. However, two implications emerge from them. First, if utility maximization includes all aspects of decisions that matter to people, then the model becomes tautological and cannot be falsified. This is not to say that humans do not act intentionally or that they do not perform utility calculations in certain circumstances. But if altruism, feelings of solidarity, the struggle for meaning, and the desire for mutually acceptable decisions are all manifestations of utility, the concept of utility itself becomes meaningless and trivial. Furthermore, if closeness to real-life situations and the inclusion of bounded rationality indeed improve the predictability of decision choices, as some experiments suggest, then it is reasonable to conclude that many individual decisions can be explained by a modified rational actor approach. The modifications to RAP include the adoption of simplified models that remove the tautological conditions (they exclude many potential options from the selection). Under these circumstances, the utility measurements are still defined in terms of actual payoffs. Finally, even granting that rational choice occurs at the individual level in many instances, many other human actions do not follow the optimization processes presumed by RAP. Laboratory experiments narrowly frame the situation as a decision-making context. Left out of this context are many of the social forces that define humanness. Many human actions are motivated by cultural imperatives—most notably, habituation (e.g., imitation, conditional learning, emotional responses, and subconscious reactions). These actions are not perceived as decision situations; hence, rational strategies are not even taken into conscious consideration.
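The simplified decision strategies introduced earlier (the lexicographic approach, elimination by aspects, and satisficing) are themselves procedures, and a brief sketch makes their mechanics concrete. The attributes, scores, and thresholds below are invented, and the elimination-by-aspects function follows the classic sequential-elimination form rather than the one-sentence gloss given above:

```python
# Sketches of three bounded-rationality heuristics. Options, attributes,
# scores, and thresholds are hypothetical, for illustration only.

# Each option scores 0-10 on attributes ordered by decreasing importance.
ATTRIBUTES = ["price", "safety", "comfort"]
OPTIONS = {
    "A": {"price": 9, "safety": 5, "comfort": 4},
    "B": {"price": 7, "safety": 8, "comfort": 6},
    "C": {"price": 9, "safety": 4, "comfort": 9},
}

def lexicographic(options):
    """Pick the option that fares best on the most important attribute;
    break ties by the next attribute, and so on."""
    return max(options, key=lambda o: [options[o][a] for a in ATTRIBUTES])

def elimination_by_aspects(options, threshold=6):
    """Walk through attributes in order of importance, discarding options
    that fall below the threshold, until one option remains."""
    remaining = list(options)
    for attr in ATTRIBUTES:
        survivors = [o for o in remaining if options[o][attr] >= threshold]
        if survivors:
            remaining = survivors
        if len(remaining) == 1:
            break
    return remaining[0]

def satisficing(options, threshold=6):
    """Take the first option that meets a satisfactory standard on most
    criteria, without examining the rest."""
    for name, scores in options.items():
        if sum(1 for v in scores.values() if v >= threshold) > len(scores) / 2:
            return name
    return None

print(lexicographic(OPTIONS), elimination_by_aspects(OPTIONS), satisficing(OPTIONS))
# prints: A B B
```

Note that the three heuristics can disagree with one another (and with full optimization) on the same data, which is one reason treating them as mere approximations of RAP understates the difference.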
In addition, anecdotal evidence tells us that many actions are functions of both cognitive balancing and emotional attractiveness. The fragmentation of the psychological profession into communities of clinical, cognitive, and analytical psychologists has kept these researchers from testing the relative importance of these factors in human thought and actions.4 Despite its considerable limitations as a theory of human decision making, RAP remains effective as a normative framework for guiding strategic actions. Hence, it seems quite appropriate to conclude that rational actor theories can provide a normative algorithm for decision making. Also, RAP can describe some important aspects of human actions that center on decisions—in particular those decisions that involve known options and measurable outcomes. The real issue is the extent to which it ignores other important aspects of social life that shape how we think about risk and take risk actions, and how this affects forms of risk governance. To ignore those aspects would be to empty any theory of its theoretical breadth and depth.

4.  A thoughtful, theoretically grounded alternative based on human values and social norms is the VBN (value-belief-norm) model of Paul Stern and Thomas Dietz (1994; Stern, Dietz, and Kalof 1993; Stern et al. 1998).

RAP and Collective Choice: A Critical View from Systems Theory

These problems of explicating the limitations and boundaries of RAP are especially relevant for two applications that extend its scope beyond the individual level: to collective decision making and governance and to the effects of individuals’ choices on other individuals. There are considerable conceptual problems for RAP when applied to collective decision making. At the most fundamental level, all rational actor theories (RAThs) imply that individuals have sufficient knowledge about the consequences of their potential courses of action to make reasoned choices. Among other strong presuppositions embedded in this assumption is that individuals know not only their current preferences but also their future preferences. This assumption and others have been strongly contested by critical theorists, such as Habermas (1975; see also Chapter 6 in this volume), and systems sociologists, such as Luhmann (1990, 1993; see also Chapter 5 in this volume). Systems theorists claim that accurate predictions are difficult in a complex world even in the absence of other actors. The presence of a multitude of other actors, who are likewise making strategic choices and thus able to affect the outcomes of one’s own actions in myriad directions, makes it almost impossible to predict the consequences of any individual’s action. The individual decision maker thus is trapped in a web of contingencies and uncertainties. In such instances, RAP does not provide a very meaningful orientation, and RATh does not offer accurate predictions. The belief that rational action is possible and normatively required turns out to be an “ideological” element of those social systems (or subcultures) that would like their members to believe that the world is governed by rational decisions (Rayner 1987). Furthermore, there are clearly social contexts in which RAP seems out of place on other grounds.
It is not uncommon for someone to have little or no idea of the effects of various decision options (such as accepting one of several job offers). Still further, information-seeking strategies may turn out to be insufficient or too time-consuming to produce clarity about options and potential outcomes. In this case, RAP would suggest guessing or pursuing a trial and error strategy. Since not even subjective probabilities can be assigned to each envisioned decision outcome, there is little a decision maker can do to make a rational choice. Social theory and systems theory in particular would suggest, however, that social systems need more coherence and predictability of individual actions than this seemingly random strategy would provide. If human actions were random occurrences, uninformed by options, as in the case of insufficient knowledge, social integration would be jeopardized. In response to this problem, systems theorists claim that conformity and predictability are accomplished through a variety of functionally equivalent procedures of which
rational choice is only one. In particular, orientation through reference group judgments, the adoption of social norms, and secondary socialization by organizations and subcultures provide selection rules for options independent of the expected outcome for the individual (Giddens 1994a). In addition, these selection rules include other motivational factors, such as emotions or social bonds, that play no role or only a minor one in RAP. Again, option selection does not necessarily result from an optimization process; it can result from social or cultural practices. Individual choices are made on a social basis, such as judging the social desirability of or proximity to the aspired lifestyle of one’s reference group. The conflict between expected and experienced outcomes is likely to increase with the degree of social and cultural complexity. This leads, on the one hand, to an increased variability of human action and thus to a growing number of potential choices that are open to each individual. Increased complexity, on the other hand, necessitates an increased effort for coordinated actions—and a greater challenge to effective governance. This dilemma of coordinating expanding choices has been resolved in modern societies. The resolution lies with the evolution of semiautonomous systems that provide a network of orientations within each distinct social grouping (Bailey 1994: 243ff.). Such systems organize and coordinate the necessary exchange of information and services through specialized exchange agents. Depending on the cultural rules and images of these social groupings, rational expectations may play a large or small role in shaping these orientations. That is why appealing to rational decision making is attractive to some groups (e.g., bankers) but repulsive to others (e.g., evangelists).
Absolute rationality—the strongest form of RAF—has lost its integrative power over the diverse system rationalities that each group has accepted as binding reference points for action and legitimation (Luhmann 1989). Complex, modern societies are characterized by the coexistence of multiple rationalities, RAF being just one of them; these rationalities compete with each other for social attention and influence. This trend toward multiple rationalities is reinforced by the disintegration of collectively approved and confirmed social knowledge. According to Luhmann (1989), each subsystem of society—economic, political, cultural—produces its own rules for making knowledge claims. These rules determine which claims are justified as factual evidence and which are seen as mere constructions, ideology, or myths. Furthermore, they govern the process of selecting those elements from an abundant reservoir of knowledge claims that seem relevant to the group members and match the body of previously acknowledged and accepted claims.

RAF Claims in the Light of Competing Social Theories

The struggle for a comprehensive and overarching framework of rationality has been the focus of several new social theories, most prominently the theory of communicative action (Habermas 1984) and the theory of communitarian
responsibility (Etzioni 1993). Their common goal is to address the theoretical challenge outlined above—namely, the integration of the multiple rationalities of subsystems into a coherent governance framework. The challenge is not merely a matter of intellectual preoccupation; it bears on the most pressing problems of contemporary societies. Despite the widespread use of RAF in the risk literature, and despite a renaissance of RAF-based models in the social sciences, its framing fails to provide a common base for a meta-rational integration of competing subsystem claims. Hence, it is very doubtful that this widespread use will be effective, for a variety of reasons. The growing uncertainty and complexity of modern societies obscures the relationship between rationally derived expectations and actual outcomes. Too many social-system variables intervene between the axioms of expected utility and the experienced cause-effect relationships in everyday life. These discrepancies make the application of RAF less convincing for members of the various self-governing subsystems and thus weaken its potential power as a meaningful interpreter of and predictor for social responses. High-risk technologies are the most prominent examples. After the nuclear disaster in Fukushima in March 2011, for instance, iodine tablets to protect against radioactive iodine were sold out in Germany, a country more than six thousand miles away from the accident (Renn in press). There is no rational explanation for this behavior, and there is no possible risk-benefit estimation that could justify the consumption of iodine tablets. The weakness of RAF approaches in offering a commonly accepted rationale for designing and legitimizing public policies and governance and for managing risks grows rapidly. With this weakness becoming clearer in the eyes of most observers, it has further challenged RAF’s hegemony over risk knowledge claims.
It also has challenged RAF’s legitimacy as a policy, governance, and management tool. Thus, RAF is threatened by three key assaults on its foundations (Renn et al. 1999). First is the inability of an individual in a complex society to foresee the consequences of one’s actions. Second is the need for functional equivalents that create conformity and commonly accepted selection rules without reference to expected outcomes. Third is the incapacity of RAF to provide a framing—a meta-rationality—to harmonize the different subsystems. These problems are further aggravated by the contemporary trend toward emphasizing personal development and the self-realization of individuals. The dissonance that individuals experience when they desire to accomplish one goal but achieve another conflicts with the expectation of self-realization. One social mechanism to cope with this conflict is to reinterpret actual outcomes as equivalent to what was desired in the first place: “isn’t what I got what I really wanted or needed?” Religions frequently use this reinterpretation when they try to explain why bad things happen to good people. Either the allegedly good people had sinned after all or their bad fate is a blessing when seen in a different light. Another mechanism is to offer a system of symbolic gratifications and incentives that compensate for the experience of conflict, such as receiving a compliment for a positive attitude when performance remains weak despite hard work, or being recognized as a valuable citizen in spite of the fact that all efforts
to influence political outcomes have failed. However, all of these post hoc methods of resolving this conflict perform poorly in creating social conformity and individual well-being at the same time. The failure of RAF to produce the requisite conformity attracts a key theoretical alternative: critical theory. Critical theory is oriented not only toward understanding society and culture but also toward critiquing the contradictions and fault lines of both. It suggests that with the decline of a universal rationality proposition, new social norms and values need to be generated that provide collective orientations but do not conflict with personal aspirations (Habermas 1975). Similar suggestions have been proposed by the new communitarians (Etzioni 1991). In contrast to systems theory, in which such new norms are part of an evolutionary process remote from any individual voluntaristic influence, critical theory believes in the integrative potential of a free and open discourse. Such discourse is not an arena for resolving conflicts about competing claims (as is practiced in conflict resolution models based on RAF); it is an arena for the establishment of commonly agreeable social norms or values (Webler 1995). All participants voluntarily agree to accept the quest for common principles for evaluating validity claims. They further agree to comply with these principles via discourse because they perceive them to be intuitively valid and socially rewarding. Systems theorists are extremely skeptical about this approach. They claim that each system develops its own language, reference system of knowledge, and norms, none of which can be amalgamated under the umbrella of procedural or substantive meta-norms—since the norms will vary by system. Normative agreements need to be based on some commonalities.
If these are missing, or if participants in a discourse are unable to understand what they are, let alone accept the arguments of other (language) systems, then discourse becomes nothing more than window dressing; everybody talks, but nobody understands anyone else (Luhmann 1993). Under these circumstances, agreements remain elusive; they are merely products of chance, strategy, or power. Regardless of whether systems theory or critical theory is correct in this debate (Habermas and Luhmann 1971), both agree that RAThs cannot offer the solution, because normative discourse (aimed not at optimizing utilities but at ensuring social cohesion) is outside their conceptual universe. The evolution of norms and the genesis of values are explicitly excluded from the body of knowledge within RAF. For RAF, they are simply “givens” or “out there”—exogenous to any particular context of choice. One of the major drawbacks of RAF, therefore, is its inability to explain one of the most prominent conflicts in modern societies: how to accomplish normative coherence, and how to provide solutions for coping with plural claims for collectively binding moral principles.

Structural Pressures on Individual Behavior

RAF also faces major problems when it comes to structural influences on individual behavior. The problem is conceptualized in two ways. First, many individual actions occur in a restricted social context in which the variety and
quantity of decision options are limited or are perceived as limited by individual actors because of the decision-making environment. Physical and social environments set the boundaries of individual choices. But such a limitation is not a serious challenge to RAF. The well-known RREEMM (resourceful, restricted, evaluating, expecting, maximizing man) model suggested by Siegwart Lindenberg (1985) depicts humans as rational actors within social constraints (Esser 1991). Problems arise if the decision maker feels guided by context variables—norms, obligations, values, habits, and so on—and does not perceive the situation as one of individual choice. This applies not only to habitual behavior in the form of personal routines that are performed quasi-unconsciously in everyday life but also to cultural routines that are based on complex stimulus-response mechanisms. The enactment of culturally shaped behavior occurs mainly below conscious awareness and does not imply any type of internal cost-benefit analysis—certainly not the conscious calculation of RAF. Second, the outcomes of individual actions produce external effects for other individuals; they interfere with others’ choices. These unintended and very often unpredictable side effects limit the ability of actors to anticipate or predict the consequences of their own actions, as noted above. They also form the structural conditions for collective actions in the future. A major reason transpersonal institutions are needed in society is to ensure predictability and social orientation even in the presence of the unpredictable side effects of individual actions (Jaeger et al. 2001). RAThs do not deny the existence or the relevance of these structural elements. Rather, they regard them either as constraints on rational choice that can be traced to individual actions of the past or as a social learning process that teaches individuals to cope with interference by predicting interactive effects more accurately.
In addition, if collective action is rather homogeneous, RAP treats social aggregates as if they were rational individuals via its assumption of methodological individualism. Organizations are seen as individual actors, as persona ficta. They are entities that have goals, evaluate their options, and pursue an optimization strategy. They select the most efficient means for reaching predefined goals, similar to individual decision makers. The transfer of individual choice to collective action via methodological individualism characterizes the RAF approach. All aggregate phenomena are interpreted as if they can be reduced to the decisions of an individual actor. Methodological individualism often has been criticized on the ground that social actions cannot be reduced to individual actions alone (Hodgson 1986). Georg Simmel claimed that in sociology, the house is more than the sum of its stones. Accordingly, the appropriate theoretical method is not methodological individualism but methodological holism. But RAF theorists eschew holism. They emphasize that complex social actions can indeed be explained by referring to the same terms of reference that have explained individual actions. The claim for the universal applicability of RAF is not justified by postulating that individual and social behavior show an identical or isomorphic internal structure. If so, the two action categories would necessitate identical or interchangeable terms and theories, which is clearly not the case. RAF does postulate, however, that complex social actions can be decomposed analytically into a variety of individual actions. In turn, these actions provide the database for explaining and predicting actions of aggregates. In one sense, then, rational action theorists would agree with the statement that the house is more than the sum of its stones. The additional quality, however, can be derived from studying the sum of stones and the mortar holding them together rather than from investigating the house as an entity sui generis.

Examples

Here are two examples that may help to illustrate the RAF understanding of complex social systems. The first comes from economic behavior. In the field of economics, a rational actor must accept the rules of the market, the present price structure, and the availability of resources as external constraints. These constraints are a product of all of the actions of other rational actors. Learning in this context is a function of (intelligent) trial and error, in which individual actors learn to cope with the effects of interference and improve their ability to predict future outcomes more accurately. The second illustration is in the political arena. In politics, rational actors must accept norms, laws, and rules of decision making as external constraints. Again, these constraints are presumably generated by other rational actors in the first place and the precedents they establish. Learning takes place here through elections, where individual candidates can learn how their expectations of popularity are measured in votes. After a defeat, a candidate may learn to revise the political platform or communication method to improve the chances of gaining voters’ support. In general, observing the individual actor in the context of other actors provides enough insight to understand the structural effects of all the collective actions relevant in each respective economic, political, or even social arena. In contrast, structural theorists in sociology emphasize the large institutional practices and networks that organize social life (Burt 1982). They do not deny the possibility of looking at structural phenomena from an individualistic perspective. However, they prefer the alternative orientation of methodological holism.
They assume that treating social structures and institutions as entities sui generis provides more adequate insights and more explanatory power for understanding collective actions than the atomistic view of methodological individualism (Parsons 1951). Structural theories claim instead that they have found similarities and regularities in the behavior of aggregate structures that are difficult or impossible to explain by individual actions (although they clearly emanate from and affect individual actions). Germinal studies on institutions, economic or regulatory styles, class structure, and other aggregate structures have identified many of these structural phenomena that apparently influence individual behavior without entering into the rational calculations of each actor and that were never “invented” by actors through rational choices.

Nevertheless, social scientists who adhere to RAThs are convinced that treating aggregate phenomena as manifestations of individual actions may indeed help to get us closer to a “unified social theory.” Other social scientists believe that such an approach is bound to fail: it will produce trivial results; it will be applicable only to a limited range of social phenomena; or—as is the practice now—it will overextend the theory beyond its proper scope of application. One attempt to combine the individual focus of RAF and the structural focus of many macro-sociological theories is the structuration theory of Anthony Giddens (1979). Giddens (1984: 25) describes this approach as one of duality—a synergy between the actor as agent and social structure: “the constitution of agents and structures are not two independently given sets of phenomena, a dualism, but represent a duality. According to the notion of the duality of structure, the structural properties of social systems are both medium and outcome of the practices they recursively organize. Structure is not ‘external’ to individuals. . . . Structure is not to be equated with constraint but is always both constraining and enabling.” Giddens rejects the idea that individuals calculate the expected utilities of the various consequences of behavioral options. Instead, they orient themselves within a complex arrangement of traditions, individual routines, and sociocultural expectations. Each actor is part of the forces that shape structures and the future context of actions for self and others. At the same time, each individual is bound to structural constraints that are the outcome of past actions and choices of others. Such an open system would tend to be chaotic if society did not develop consistent patterns of behavior. Like markets of economic exchange, such patterns act as invisible guidelines for individuals in making choices.
These patterns are not simply an aggregation of individual actions but instead develop a structural logic of their own. For example, traditional norms do not promise maximum payoff or even an improvement of individual satisfaction but ensure system continuity and stability through patterned processes. Likewise, power structures are often cherished even by those who lack power because they provide actors with ontological security (the expectation of a stable mental state derived from a sense of continuity about the state of the world and events in one’s life). Giddens’s (1987) main argument is that individuals do have agency. They have choices to orient themselves within different social frames, such as traditions, special institutions, and system rationalities. But the frames constitute developments of structural forces that go beyond individual actions and their effects on others. They provide the continuity necessary for patterned expectations and interactions.

External Effects: Deviations from Normality?

A final concern for RAP is the treatment of external effects. Although they are discussed frequently in the RAF literature, their treatment points to a major weakness in the assumptions RAThs make. RAF presumes that, under certain conditions, the pursuit of individual rational actions will lead almost automatically to collectively rational outcomes. But the opposite often occurs in reality. Rational actions by individuals frequently lead to unintended and socially undesirable outcomes (Hardin 1968). This has always been recognized within RAP as a deficiency not of RAP itself but of the conditions under which choices are made. Perhaps the best-known example of such failures is the free-rider problem (Olson 1971).

The common RAP response to the problem of individual versus collective rationality, and hence to the potential conflict between the pursuit of individual rationality and the common good, is twofold. On one hand, rational action theorists believe in the “invisible hand”: provided that external effects can be internalized as costs to the individual actor, the outcome of individual actions will be equivalent to the overall social good. On the other hand, corrective actions are mandated if the market fails to allocate social costs to those who cause them. In such a case, market-compatible instruments are readily available to impose external costs on the responsible party, among them the extension of property rights to all affected parties (Coase property rights) and the simulation of market prices that include social costs (e.g., Pigouvian taxes).

The same argument can be made for the political sector. Individual rationality is sufficient to provide a framework of laws and regulations that will enhance the common good. This derives from the premise that each individual has the same right to influence political outcomes. In both cases, economics and politics, collective action enhances the common good if the conditions of rationality and perfect market structure are met. But the real world is devoid of such perfections. Seldom if ever do we find a perfect market or a perfect voting process. Thus, the conditions for collective welfare are virtually never fully present.
The strategy of “blaming” the conditions for impeding the congruence of individual and collective rationality effectively builds a wall of immunization around RAThs. Since all conditions for a perfect market or a truly rational political system are never met, deviations from theoretically derived predictions can always be explained away by those imperfect conditions. Assuming perfect conditions also reduces the explanatory power of RAF, since it explains neither how markets or political structures arise in the first place nor how people behave in an imperfect world.

The main problem with the RAF treatment of deviations from rationality as a consequence of imperfect economic or political conditions is that it does not match our empirical knowledge. It rests on an unsubstantiated assumption of a hidden social tendency toward an equilibrium stage of perfect market conditions and political structure (Jaeger et al. 2001). If this were true, social systems would tend to orient themselves in line with the guiding principles of rationality in their pursuit of the goal of perfect conditions. In reality, however, many systems and institutions profit from imperfection. Even if we assume that they behave like rational actors, their particularistic interests motivate them to make sure that imperfect conditions in the economic or the political market do not change. Imperfection creates losers and winners. It is not clear, then, whether or why social systems would seek to enforce a movement toward more “perfection.”

In addition to system self-interest, there may even be some good normative reasoning to allow for some imperfection in society. The beneficial consequences could include social stability, solidarity, or cohesion. Finally, many structural barriers, such as market segmentation, incomplete knowledge, possibilities of manipulation, and market scale effects, make it unlikely, if not impossible, to meet the assumptions of perfect market conditions. Political will, even if available, is not sufficient to assure a transition to a world of such conditions. These barriers are impossible to overcome without sacrificing other highly esteemed social goods, such as social solidarity and stability. The real world will always consist of economic and social structures that do not resemble the vision of perfection that rational action theorists share.

One may even raise the question of whether perfect conditions in economy and politics would constitute a desideratum for societies. Rather than a vision for the future, the idea of a RAF equilibrium could turn out to be a nightmare. We might conceive of a creature that looks like a utility-maximizing automaton, one who enjoys perfect knowledge, preprogrammed preferences, and equal opportunities but who knows nothing about habit, culture, commitment, and sociability—the core elements of human systems. Such a creature would fit perfectly Oscar Wilde’s definition of a cynic as a man who knows the price of everything but the value of nothing.

Conclusion

Our discussion leads to the following conclusion. Despite its well-organized logic and widespread use, the rational action framework has serious limitations in theorizing risk and informing governance. The most prominent problem for RAF is the assumption that all human behavior can be modeled as variants of optimization or maximization procedures. There is sufficient doubt that such a framing, superimposed on all human actions, can offer a satisfactory perspective for studying and explaining such phenomena as trust, solidarity, identity, affection, and, of course, risk.

Furthermore, major problems for RAF lie in the fact that more and more areas of social life demonstrate specific patterns of behavior that make sense neither in the light of RAF’s own assumptions nor with respect to actual human choices. Outcomes have become less predictable, so orientations for future behavior can be based only partially on expected consequences. Social actions are more and more determined by the actions of others, which creates a great deal of interference and constraint. Most individual actors are not able to calculate the likely impacts of their own choices in situations where many actors are likely to interfere and structural conditions are likely to change constantly. Under such conditions, social processes for the optimal allocation of resources take a back seat.

In essence, RAF presumes a world that does not exist. Social systems and processes have created a complex world where the conditions for applying RAF are seldom met. Hence, there is a growing discrepancy between model and reality. As a result, many RAF theorists have become ardent proponents of changing the world in the direction of the model rather than adjusting the model in the direction of the world (Klein 2007). The latter practice is common for any good scientific theory. Theories must always live with anomalies. Good theories are not proved wrong over time but proved either irrelevant in the face of new developments or applicable only to special cases. Similarly, RAF may not collapse because of its weaknesses but may be swallowed up by a new paradigm. RAF would then be one important framing within the master framing of the new paradigm.

What does this mean for the recent debate on the role of social science for risk governance? What does it mean in the context of our argument about risk and rationality? Explanations based on RAF may help risk managers to understand why individuals behave in certain ways when they face uncertain outcomes of their actions. Such an explanation, however, can yield valid results only if the individual perceives the risk problem as a problem of optimizing outcomes and if the conditions of choice generally meet RAF assumptions. If risk behavior is grounded in solidarity with others, motivated by reference group judgments, or deeply shaped by prevailing norms, RAF does not make much sense. If the focus is on collective risk behavior, theories based on RAF are less than convincing. They may even serve the function of disenfranchising individuals from political actions or restricting freedom, since these actions are deemed “irrational.” The main problem, as noted above, is that RAF presupposes stable preferences and knowledge about outcomes beyond the individual’s aspiration levels. Furthermore, there is the assumption that the sum of individual actions would tend toward equilibrium. The evidence for all of this is extremely weak and suspect. In view of the key limitations in RAF we have outlined, we argue for alternative approaches.
We argue for a social science approach to research on risk that does not eschew RAF but that includes more than RAF and its unsupported assumptions. Risk behavior is much more than rational calculation. All theoretical challengers to RAF—most notably, systems theory, critical theory, and postmodernism—are themselves not entirely free of theoretical and practical problems in dealing with risk. They do, however, provide the missing links that RAF cannot supply. Until we have a unified social theory of risk, we are forced to live with a patchwork of different theoretical concepts. This is not a plea for eclecticism but a pragmatic position for matching theories with substantive problems.

In the end, social theory will be judged according to its potential to explain social responses to risk, not by its ability to please risk managers. Regardless of whether risk managers prefer to set risk policies according to the numerical results of risk assessment studies, they will be faced with social reactions to their policies. Some will challenge or even counteract the basis of those policies. Social scientists must provide meaningful explanations for these responses and promote a critical reflection on risk policies in the light of the research results. RAF-based research is likely to continue to play a role in this endeavor. But we would caution ardent proponents of RAF neither to overstress its explanatory power nor to overextend it—or to rely too heavily on its capacity to inform effective governance.

In the remaining chapters in this section, we review and elaborate alternative theories that can, at least in part, address some of the key limitations of RAF and that provide the proper theoretical vocabularies to reintegrate fundamental social processes into risk characterization and governance. In particular, we critically evaluate the possible contributions that the reflexive modernization theories of Ulrich Beck and Anthony Giddens (Chapter 4), the systems theory of Niklas Luhmann (Chapter 5), and the critical theory of Jürgen Habermas (Chapter 6) can make to our sociological understanding of societal risk in advanced modernity.

4 Reflexive Modernization Theory and Risk
The Work of Ulrich Beck and Anthony Giddens

All life is an experiment. The more experiments you make, the better.
—Ralph Waldo Emerson (1909), Journals, November 11, 1842

The Era of Reflexive Modernization

Modernity is typically defined as the evolutionary outcome of a transition from traditional forms of social and political organization accompanying the industrialization and urbanization of societies. The sine qua non of modernization is a change in worldview at the most basic level of human thought. It begins with the rejection of life as a cyclical process. In its place is the notion of change—often change that is normatively judged as progress. Modernization, whether seen simply as change or more systemically as a process of evolutionary change, is permeated with an overarching worldview, elaborated in Chapter 3, that emphasizes the role of rationality in legitimizing social institutions and actions.

An influential theory in this era of advanced modernization is reflexive modernization theory (RMT). Scholars within the reflexive modernization tradition endeavor to explain the seismic changes at the end of the twentieth century that launched us into this era of heightened or advanced modernity. It is also an era punctuated with a radicalization of modernity’s core institutions. This era, with its stream of unintended and unanticipated negative technological and ecological consequences, is one of individual, organizational, and institutional tumult. The institutions of advanced modernity have attempted to harness this tumult with new systems of rationality, perhaps even hyper-rationality. Reflexive modernization scholars argue that the hyper-rationality of modernity (i.e., instrumental rationality, efficiency, justice through economic growth, and steady improvement of individual living conditions through scientific and technical progress) has lost its legitimizing power. To borrow from Max Weber, it also has resulted in disenchantment.

This disenchantment has been caused by several developments within the transition from simple modernity to our current era of reflexive modernity (cf. Knight and Warland 2005; Lash 2000; Renn 2008a, 2008b, 2008c; Zinn and Taylor-Gooby 2006: 33ff.). It is reflected in four separate but complementary social processes:

• Dominance of negative side effects. The promises of modernization—progress and greater welfare—have been undermined by the experience of negative side effects such as environmental degradation, the introduction of unprecedented risks, inequitable distribution of resources, and a flattening of life satisfaction.
• Individualization of lifestyles and social careers. Individuals have more occupational and lifestyle options than ever before but lack a collective identity (such as social class), social orientation, and ontological security when faced with behavioral or moral conflicts.
• Pluralization of knowledge camps and values. Contemporary societies are characterized by antithetical systems of competing knowledge claims, moral judgment codes, and behavioral orientations.
• Absence of overarching objectives and goals. All collective actions are challenged as being driven by political choices that are incongruent with individuals’ beliefs, values, or convictions. This leads to the need for more legitimization, but the reservoir of legitimation may be too empty to fill this role.

Reflexive modernization theory, which attempts to explain these societal changes, emerged in the 1980s and 1990s, largely through the scholarship of the German sociologist Ulrich Beck and the British sociologist Anthony Giddens.
Prior to this, Beck had been an obscure theoretician of education, while Giddens already had an international reputation as a theorist covering a considerable range of thought. Shortly after his qualification as a university lecturer in 1979, Beck took over the editorship of the sociological journal Soziale Welt (Social World) in 1980. Beck’s contribution to RMT began with his publication of Risikogesellschaft: Auf dem Weg in eine andere Moderne (Risk Society: Towards a New Modernity) in January 1986, which described the era of high modernity as the risk society. The thesis that all citizens were threatened by mega-risks that knew neither class nor geographic boundaries could not have been timelier. The worst nuclear power plant accident in history had occurred on April 26, 1986, a meltdown of reactor number four at the Chernobyl Nuclear Power Plant in the Ukraine (then the Ukrainian Soviet Socialist Republic). The substance and tone of his treatise immediately resonated with a European public disillusioned with experts’ pronouncements of security that were empty of meaning.


Remarkably uncommon for academic books, Risk Society was on the bestseller list in Germany within six months and elevated Beck to the status of a grand theorist of worldwide renown.1

Giddens launched his career in the 1970s by critically interpreting the works of sociology’s founders, before developing his theory of structuration in the 1980s to address the recurring theoretical issue of agency and structure. The central theoretical challenge occupying social theorists was how to reconcile the liability of individual action (agency) with the patterned outcomes and institutional structures that emerge. Giddens’s contribution to RMT began with his publication of The Consequences of Modernity in 1990, the initial book in the latest stage of his intellectual trajectory. In this pivotal work, Giddens offered theoretical insights on the development, trajectory, and effects of key modernization processes, marking his first systematic engagement with the dynamics of technological and ecological risks.

We do not attempt to review the totality of RMT in this chapter. Indeed, such an endeavor would most certainly require its own book-length treatment. Instead, we pursue a more refined objective: we examine only how RMT addresses the three key questions about societal risk in advanced modernity that we outlined in the Introduction. The questions regard social order, risk knowledge, and institutional and political responses, and we repeat each in full as we discuss it. As a pedagogical aid to readers, we separately interrogate Beck’s and Giddens’s answers to each of the three questions before discussing how the two scholars converge and diverge.2 Both have contributed—although quite differently—to the development of RMT, and each has a significant body of work that directly bears on our three questions.3 While their main insights have many similarities,4 they are nevertheless characterized by subtle, and sometimes not so subtle, differences. Indeed, a key difference is their intellectual styles: the straightforward pragmatism of Giddens’s Anglo tradition and the opaque, paradox-ridden playfulness of Beck’s German tradition.

We begin with an examination of Beck’s and Giddens’s answers to our three risk questions, then proceed with an analysis of the convergence and divergence of their insights on risk. After identifying the scope conditions of RMT and summarizing its main contributions, we close by illuminating the strengths and weaknesses of RMT for understanding risk.

1. Not long after the English translation of Risk Society (1992) circulated across North America, Jeffrey Alexander and Philip Smith (1996) attributed Beck’s popularity to his apocalyptic language and his elevation of risk to a mythical or religious status.
2. For early examinations of Beck’s and Giddens’s ideas on risk and society, see Mol and Spaargaren 1993; Pellizzoni 1999.
3. Beck’s English-language publications on risk and RMT are approximately three times as numerous as those of Giddens. Indeed, as past director of the Research Center on Reflexive Modernization in Munich, Beck is more intellectually invested in the success of RMT (see Beck, Bonss, and Lau 2003: 29).
4. Beck and Giddens have developed their similar RMT arguments largely independent of each other, and, to be sure, neither can be accused of excessively citing the other. Yet their paths have crossed. The most obvious connection is Reflexive Modernization (Beck, Giddens, and Lash 1994). This was the first of more than ten books that Beck has published with Polity Press, the publishing house Giddens co-founded in 1985. Beck (1992b) has even reviewed a few of Giddens’s early works in RMT (e.g., Giddens 1990, 1991). Finally, as director of the London School of Economics (1997–2003), Giddens oversaw the hiring of Beck as the British Journal of Sociology Centennial Professor in the Department of Sociology in 1997, a position Beck still holds.

Beck and the Risk Society

Social Order: Reflexive Modernization

What social order emerges in advanced modern societies where technological and ecological risks are virtually everywhere? Beck answers our first question about the social order of advanced modernity with the term “risk society.”5 As his master frame, the risk society is an amalgam of three related elements: (1) risk; (2) reflexive modernization; and (3) detraditionalization and individualization.

Modern risks are products of human decision making; for Beck, as well as for Giddens, risks are manufactured uncertainties. Beck’s early work (e.g., Beck 1992c) focused almost exclusively on the unintended consequences of decision making (e.g., hazards at nuclear power plants) and, more recently, on global warming (Beck 2010a, 2010b, 2010c). Publications after 9/11 (e.g., Beck 2009b) also emphasize the importance of intended consequences (e.g., terrorism). Risks in the advanced modern era are no longer circumscribed spatially or temporally; rather, they transcend all boundaries—termed entgrenzt in the original German (Beck 1992c). Figure 4.1 identifies key characteristics that define these new risks.

• Transboundary effects: Modern risks transgress sectoral, social, national, and cultural boundaries. They may originate in one country or one sector but will then proliferate into other areas and sectors (e.g., bovine spongiform encephalopathy, or BSE).
• Globalizing effects: Risks tend to affect everybody and frequently involve irreversible harm (e.g., climate change).
• Increase of penetrating power: Risks tend to penetrate and transform social and cultural systems significantly and to change social behavior (e.g., genetically modified organisms in agriculture).
• Incalculable nature: Due to the lack of boundaries and the complex global consequences of taking risks, the instruments and tools for calculating risks are inadequate and inaccurate, so that even insurance companies are unable to calculate premiums that are proportional to the respective risks (e.g., nuclear power plants).
• Lack of accountability: Potential victims of risks are being unduly burdened without their consent and without any institution or person being accountable for future damage (e.g., car exhaust).

Figure 4.1  Five Major Characteristics of Risk in the Contemporary Era (Beck 1992c)

Risk is pivotal to Beck’s master frame, since the radical features of these new nuclear, chemical, genetic, and ecological risks reshape the social order via a discontinuous break with the past—demarcating advanced modern society (“risk society,” in Beck’s terms) from its premodern and modern predecessors. Yet we continue to use risk assessment and management techniques of the early twentieth century—methods built on temporally, spatially, and socially limited accidents.6 According to Beck (1996a: 31), the risk society begins “at the moment when the hazards which are now decided and consequently produced by society undermine and/or cancel the established safety systems of the provident state’s existing risk calculations.”7 In other words, the characteristics of new risks identified in Figure 4.1 overrun existing norms of risk calculation and security and signal the demise of the sovereignty of nation-states (as far as risks are concerned). In this context, Beck perceives large, multinational insurance companies to be the proverbial canaries in the coal mine, signaling what is and is not insurable at the boundary of the risk society.8

5. Beck sees the “risk society” as the decaying vestige of industrial society and as the beginning of a new stage of modernization that he referred to first as “reflexive modernity” (Beck 1992c) and later as “second modernity” (Beck 1998, 2005; Beck and Lau 2005) to emphasize his belief that we are still in a modern age—not postmodernity, as some authors claim. Turning his gaze to globalization more recently, Beck (2000a, 2000b, 2005, 2006a; Beck and Grande 2007a; Beck and Sznaider 2006) has argued that nation-states must shift from a national paradigm to a cosmopolitan paradigm within this “second age of modernity,” where we recognize and reintegrate “the other” (Beck 2005; see also British Journal of Sociology 57, no. 1, the special issue on cosmopolitanism within sociology).
6. Beck (1994a: 127) identifies new types of mega-hazards that abolish the four pillars of the calculus of risks. First, they cause global, often irreparable damage, so the concept of monetary compensation fails. Second, precautionary after-care is excluded, and the concept of anticipatory monitoring of results fails. Third, the accident becomes an event with a beginning but with no ending. Fourth, standards of normality, measuring procedures, and the basis for calculating the hazards are abolished, so a lack of accountability exists for them.
7. Beck (1994b: 5, 1996a: 27) acknowledges that the seemingly abrupt shift from modern industrial society to risk society (which is also industrial) masks an intermediate stage he calls the “residual risk society.” During this middle stage, social actors—via their decision making—produce the defining risks of the contemporary era, but they do not yet make risk the focus of widespread political conflict. Briefly, the residual risk society contains the actual negative consequences of mega-hazards without the widespread societal preoccupation with uncertainty that appears later, in the full-blown risk society (Beck 1996a). The characteristics of this stage help to defray criticism from scholars (e.g., Edelstein 2000; Picou and Gill 2000) who argue that the U.S. public reaction to technological and ecological catastrophes in the 1970s and 1980s fell short of what would be expected in the risk society.
8. Beck (2009b: chap. 8) has recently responded to critics of his private insurance principle.


We live in a risk society not only because innumerable decisions produce intended and unintended consequences, but also because “the idea of the controllability of decision-based side effects and dangers which is guiding for modernity has become questionable” (Beck 2009b: 15). Three prevailing axes of risk that erode this idea of controllability are global ecological threats (e.g., climate change), global financial crises (e.g., world debt crisis), and the threats from terrorism (e.g., global terror networks) (Beck 2002), prompting Beck (1996b, 1999, 2006b, 2008b, 2009a, 2009b, 2009c) to shift from “risk society” to “world risk society,” his latest term to emphasize the global scope.9 World risk society acknowledges the global scope of risks, problematizes our notion of “society,” challenges the legitimacy of nation-states, and forces what he calls “cosmopolitanism” (noted above and discussed in more detail below). For consistency, and because it is the principal identity of his theoretical frame, we continue to use the term “risk society” in this chapter and throughout this book. Figure 4.2 identifies the key characteristics on which risk society differs from earlier modern society. We discuss the distinctions throughout this section.

Across Beck’s numerous works, the historical discontinuity of reflexive modernization is simultaneously a process he describes, the time period in which we live, and a way forward that he prescribes. Beck’s (1994b: 6) best-known definition denotes reflexive modernization as “self-confrontation with the effects of risk society that cannot be dealt with and assimilated in the system of industrial society—as measured by the latter’s institutional standards.” In other words, reflexive modernization is the autonomous transition from industrial society to risk society, driven by the negative, unintentional risks of modernity. In this sense, Beck ascribes agency to the risks themselves.
This transition contains the potential for the destruction of industrial society, not through a revolution, but through the “the victory of Western modernization” (Beck 1994b: 2). Often used synonymously with risk society, reflexive modernization entails “a radicalization of modernity, which breaks up the premises and contours of industrial society and opens paths to another modernity” (Beck 1994b: 3). In this sense, reflexive modernization occurs when modernization problematizes its own foundations (Beck, Bonss, and Lau 2003). The social order in the industrial era centered on the distribution of economic output and improved welfare—the distribution of “goods.” Social class determined the unequal distribution of goods. In contrast, the social order in the risk society centers on the distribution of “bads”—in particular, the distribution of risks. Once risks exceed earlier spatial and temporal limitations and threaten the bases of property rights and ecological and financial capital through what Beck (1990) calls the “boomerang effect,” it becomes nearly impossible to distinguish “not affected” from “affected.” Thus, with many technological risks (e.g., nuclear fallout), the distribution of bads is not, like social 9.  Beck (2009a, 2009b) is quick to point out that the first two (ecological threats and financial crises) match well with his conceptualization of risks as the unintended effects of reflexive modernization, while the third (terrorism) is associated with intentional catastrophes.

Modern Industrial Society

Risk Society

Origin of mega-risks

Earthquakes and floods are caused by nature or God

Climate change is caused by human decision making

Modernization process

Simple modernization from agrarian to industrial society (modernization of tradition)

Reflexive modernization from industrial to risk society (radicalization of modernization and rationalization of rationalization)

Prevailing type of differentiation

Class positions (rich versus poor) from distribution of goods

Risk positions (different grades of “affected”) from distribution of bads

Transformation of human actors

Detraditionalization and early individualization

Heightened individualization

Human perception of risks

Retention of cognitive sovereignty

Loss of cognitive sovereignty (dependence on science to tell us how badly we are in trouble)

Emerging mode of science

Primary scientization

Reflexive scientization

Main line of conflict

Relations of production

Relations of definition

Preeminent political paradigm

Retention of national sovereignty

Emergence of cosmopolitanism

Management of risks

Attempts are appropriate to the magnitude and scope of risks

Magnitude and scope of risks outpace conventional attempts to manage them

Figure 4.2  Key Characteristics of the Risk Society

76

Risk and Social Theory

class, unequally distributed.10 Indeed, radiation offers no privilege to any social grouping. For Beck, this is the key disjuncture with the past. Whereas social class and the incidence of risk directly correlated in industrial society (e.g., the riskiest, dirtiest often jobs went to the most powerless members of society), this is no longer so in advanced modernity due to the ubiquity of risk (Beck 1992c).11 This distribution of risk is now orthogonal to class lines. Yet, more recently, Beck (2008a, 2010a, 2010b, 2010c) has acknowledged that some global risks, such as climate change, do exacerbate existing inequalities between rich and poor and between core and periphery.12 This fundamental shift in the foundation of the social order negates class identity and, for Beck (1992a: 109), signals “the end of the other.” It is no longer “us” and “them”; rather, we all are in this (a situation in which the new risks are everywhere) together. Stated boldly, virtual risk equality supplants class inequality. This shift in the social system, in turn, produces a change in culture (conceived as “shared meaning”)—a change in the formation of consciousness because “in class positions being determines consciousness, while in risk positions, conversely consciousness (knowledge) determines being” (Beck 1992c: 53). Briefly, as described below, the mobilization of different social actors to produce or challenge knowledge claims that define, assess, and manage risks is central within the risk society. The weakening of class position and class consciousness is one instance of a process of individualization,13 which builds on and supersedes detraditionalization (the term for modernization’s breakdown of traditional practices in Beck 1994b). It summarizes the shift from agrarian to industrial society when traditional roles—especially gender roles—were disrupted via urbanization, economic development, secularization, and democratization. 
The severing of traditional ties has radicalized the individual social actor (Beck and Willms 2004). Mass media, mass education, and mass consumerism of advanced modernity contribute markedly to the disembeddedness of individuals (Beck 1992c). Stripped of traditional social identities—gender roles, class position, and so on—in the risk society, the social actor becomes, in Beck's terms, individualized. The subjectivities of such social actors, embodied in self-expression, are increasingly the product of self-selected networks and interaction among participants (Beck, Bonss, and Lau 2003). More and more social actors are cut loose from the social, cultural, and economic moorings of tradition and modernity. This destabilizes identities and sends individuals on a search for meaning (Jaeger et al. 2001) and for mental anchors that can provide them with a feeling of ontological security and stability (Mythen 2005b). The individualized actor, now set adrift, must make risk decisions and reflect not only on the social institutions responsible for shaping those decisions, but also on the degree of trust they engender. The lines of conflict in the risk society increasingly realign over what Beck refers to as the relations of definition of risk,14 which are competing social constructions of risk (Beck 1992c). This sets the stage for a new form of political action: a radically individualized, ad hoc politics from below.

10. While this is true for some risks, others, such as toxic waste sites, are unevenly distributed geographically, often placing a greater burden of risk on disadvantaged groups in society.
11. This point attracts considerable opposition from environmental sociology (see, e.g., Schnaiberg 1980), and especially from the environmental justice literature (see, e.g., Bullard 1990; Pellow 2000).
12. For a recent clarification of his ideas on risk equality, see Beck 2009b: chap. 10.
13. For a critique of Beck's and Giddens's ideas on individualization, see Adams 2003.

Risk Knowledge: Reflexive Scientization and Unawareness

What is our knowledge about these risks? The quintessential risks of advanced modernity—for example, genetically modified organisms, nanotechnology, and climate change—are increasingly complex in their causes and effects (replete with synergistic and threshold effects) and temporally and spatially expansive in their scope. Because of their complexity and the time lag in manifestation, they are fundamentally more open to social construction than are earlier types of risks. These characteristics erode the cognitive sovereignty of individuals in the risk society, as people lose the ability to know their risk position, the exact nature of their exposure, or the seriousness of the consequences (Beck 1992c). This set of structural features leads to the beginning of Beck's answer to the second question regarding risk knowledge.

There is little doubt that our knowledge of risk is highly dependent on science,15 the central institution in risk assessment. But science in the risk society plays an increasingly paradoxical role. Corresponding to Beck's distinction between the modernization of tradition and reflexive modernization of industrial society is a differentiation in the relationship of scientific practice and the public sphere: primary scientization and reflexive scientization (Beck 1992c). In the first phase of scientization, modern science trumped lay rationality and traditional ways of knowing on its way to create new bodies of knowledge and drive waves of economic and technological development. Yet this primary scientization of discovery and invention—especially in the physical and engineering sciences—is significantly responsible for the growth of risks. Scientific applications create uncertainties embedded in new, increasingly complex technology. For example, the dawn of advanced modernity is often traced to the dropping of atomic bombs in 1945, at the end of World War II. This fundamental understanding of the structure of the atom opened the way for a variety of technology, such as the generation of commercial energy via nuclear fission. But these remarkable achievements in knowledge and technology planted the seeds of uncertainty into a wide variety of unprecedented risks: nuclear safety, safeguarding of bomb grade materials, detection of dirty bombs, and the disposal of nuclear waste, to name a few. More recently, science has entered a second, highly reflexive phase in which it is confronted with the products and problems of its own creation. This science is the primary social system entrusted with identifying the dark side of technological innovation, with knowledge claims about risk—the expert system on which we depend to identify risks and determine standards of safety. Beck (1992c: 155) asserts that the developmental logic of reflexive scientization "extends scientific skepticism to the inherent foundations and external consequences of science itself" so that "both its claim to truth and its claim to enlightenment are demystified." The defects of expert rationality become increasingly recognizable and questionable when the sciences examine their own foundations and consequences. Along with this demystification, "a momentous demonopolization of scientific knowledge claims comes about: science becomes more and more necessary, but at the same time, less and less sufficient for the socially binding definition of truth" (Beck 1992c: 156). Scientists often confront each other over assumptions, methods, results, and interpretations. Different sectors of society—most notably, risk producers and risk bearers—employ their own scientists to raise or refute charges (Beck 1994a; Dietz and Rycroft 1987).

14. Beck would have been well advised to have chosen an alternative description, such as the matrix of risk definition.
15. Beck, as well as the other theorists, largely treats science as a monolithic institution rather than as the multifaceted social institution conceptualized by science studies (cf. Hess 1997).
While Beck (1995) acknowledges that this practice occurred in earlier industrial society, he argues that it morphs into a signature feature of the risk society, whereby scientists (and science itself) find themselves both the means and the ends of political debate. For Beck (1992c), this is a positive development, as it serves to increase the autonomy of other institutions and laypeople vis-à-vis science, and it makes reality more intelligible to non-scientists. This Janus-like role for science—as both creator and assessor of risk—is where the fundamental structural dislocation lies for Beck. When science is no longer privileged (let alone operating as a unified whole), individuals in risk societies are left to muddle through among competing epistemologies—different ways to understand the increased risks that define those societies. In this way, the main line of conflict shifts from relations of production (in the Marxist sense) in modern industrial society to relations of definition in the risk society. By relations of definition, Beck (1999: 149–150) means the standards, rules, and capacities that facilitate the identification and assessment of risks. Among these are the standards of proof, burden of proof, and standards of compensation. In the risk society, producers and bearers of risk conflict with each other (and debate among themselves) over relations of definition that are largely unchanged from modern industrial society, which are inadequate for the risk society (Beck 1995).


Relations of definition become exposed and politicized with each new catastrophe that reminds us of the risk society and as the logic of risk permeates our experience (Beck 2009b). The combination of antiquated relations of definition and the politicization of science produces organized irresponsibility, a situation in which individuals, organizations, and institutions escape accountability for a wide range of hazards and disasters, despite a preponderance of laws and regulations (Beck 1995: 63–65). For example, the more pollutants that are emitted by multiple social actors, the less likely it is that any one of the polluters will be found legally culpable by our current relations of definition, in Beck's terms, or, more simply, by standards and regulations in policy terms. This is especially true when the multiple polluters (and victims) mobilize their "stables" of dueling experts. In this sense, widely shared responsibility (co-responsibility) results in organized irresponsibility. This brings us back to reflexive modernization—a set of processes emergent within the current era of uncertainty where the constant threat of a growing array of local and global risks and mega-hazards necessitates the continuous reinvention of scientific and political institutions. A key to Beck's conceptualization of reflexive modernization is a distinction between overlapping meanings of what is meant by "reflexive" (Beck 1996a: 29, 1998: 84). The more superficial meaning identifies institutional reflection as enabling critical evaluation of the world—a world that no longer enjoys the predictability of modernity. The deeper meaning is the idea of reflex as the recognition of the limits to our knowledge. Beck recognizes, with Giddens and Lash (1994), that advances in knowledge disrupt traditional patterns, release individuals from old structures, and open new avenues of decision making and action.
While increased knowledge does disrupt the world, for Beck the unintended consequences of knowledge (i.e., the risks themselves) deepen reflexivity and provide the crucial break with the past. Hence, risks must be added to advancing knowledge to round out the full meaning of reflexivity. Furthermore, in parallel with Giddens (2000a), Beck views advanced modernity metaphorically as a vast juggernaut and its dynamic of globalization as a "runaway world." As in the Hindu legend, some structures and processes are carried forward with the cart (juggernaut) of advanced modernity, while others are crushed under it. This breakneck industrial process of "more, bigger, and faster" leads to the rapid production of even more human-generated, complex risks, creating ever more uncertainty—an internal or "manufactured uncertainty" (Beck 1992c; Giddens 1991, 1994b). The pivotal uncertainty for Beck is about the unintended and unanticipated consequences of scientific knowledge and technological innovation. The focus on unintended consequences provides the opportunity to open the more complex and theoretically challenging space of "unawareness," since "the medium of reflexive modernization is not knowledge, but unawareness" (Beck 1998: 90). "Unawareness" is an abstract rubric for collecting together the range of limitations to knowledge: the selective reception and transmission of knowledge; the inherent uncertainty in all knowledge; mistakes or errors; and, especially, knowledge that is unknowable or for which there is an unwillingness to know (Beck 1999).16 Probabilistic methods, according to Beck, provide perfect justification for types of technology that have the potential for considerable negative consequences, as their potential failures are framed as "random events"—and the responsibility of individual actors is replaced by abstract "scientific" explanations.17

Beck's (1998: 97–102) idea of unawareness embeds a deep and enduring paradox. On the one hand, it emphasizes the inherent limitations to knowledge, especially the reality that some knowledge is unknowable or does not attract a willingness to know. That nanotechnology, bioengineering, and other types of emergent technology contain not only knowable risks but also risks we cannot yet know provides a clear window on fundamental limitations in society's ability to perceive and govern risks. This unawareness, this inability to predict untoward consequences with precision, poses a key challenge to industrial research: whether to proceed with the potentials of science, such as genetic engineering and other advanced applications.18 Furthermore, because it is typically denied or papered over in industrial research and development, this unawareness increases the potential for intensifying endangerment. Because we either fail to recognize the increased risks of advanced modernity (a persistence of unawareness) or deny them, the world is a far more dangerous, perhaps fragile, place (Stehr 2001). On the other hand, the actual occurrence of unintended consequences—most obvious with the realization of "worst case" outcomes (Clarke 2006), such as technological disasters—provides knowledge that was unknowable prior to the event, which had been perceived as posing no risk.
(Deep-sea oil drilling platforms may explode, kill eleven workers, and cause countless millions of gallons of oil to spill into the Gulf of Mexico, leading to significant long-term ecological degradation—some of which is irreversible.) Unknowability is especially true of terrorism, a heightened feature of advanced modernity (Beck 2002, 2003; Ginges et al. 2007).19 Such outcomes create two forms of knowledge. The first is the obvious knowledge that risks can be realized, that untoward outcomes happen. The second is the less obvious but more powerful knowledge of what it was that we had been unaware of before the untoward outcome. Here, too, we see a realized risk precisely. Figure 4.3 contains a surreal summary of this in an infamous announcement by former U.S. Secretary of Defense Donald Rumsfeld. The realization of unintended consequences now becomes part of knowledge and sharpens our awareness of our previous unawareness. It, too, sharpens our recognition that unawareness is deeply woven into the social fabric of advanced modernity. Indeed, according to Beck (2009b: 115), "World risk society is a non-knowledge society. . . . Hence, living in the milieu of manufacturing non-knowing means seeking unknown answers to questions that nobody can clearly formulate." The resultant paradox telescopes what, for Beck, is the key problématique of reflexive modernity: how do social systems deal with unawareness, or the inability to know? By arguing that we are far from answering this question, Beck not only underscores a central dislocation of advanced modernity, but also implicitly answers the question we have raised about whether societies in this era are generating risks faster than they can understand, govern, or manage them. Yes, definitely, says Beck.

"The introduction of unawareness as the key conflict of reflexive modernization. . . . Reflective unawareness . . . follows the pattern that one knows that one does not know and what one does not know. Thus, knowledge and unawareness are separated within knowledge. . . . On the other hand, repressed or unknown awareness ultimately means ignorance. One is unaware of what one does not know." —Ulrich Beck (1998: 98)

"As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know." —U.S. Secretary of Defense Donald H. Rumsfeld, press briefing, February 12, 2002

Figure 4.3  An Operationalization of Unawareness

16. For an in-depth elucidation of several forms of non-knowing, see Beck 2009b: chap. 7.
17. There is, perhaps, no better illustration of this than the BP Deepwater Horizon oil spill in the Gulf of Mexico (April 20–July 15, 2010), where a probabilistic risk assessment indicated a zero chance of a serious accident (Det Norske Veritas 2010).
18. Informally in the United States and formally in the European Union and elsewhere, this problem is addressed with the precautionary principle that, stated formally, says that where there are threats of serious irreversible harm, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.
19. The risk of terrorist threats is impossible to estimate accurately. The means, time, and place of an attack are virtually unknowable, and that unknowability is exacerbated by the strategic response of terrorists to precautionary actions by potential victims.

Institutional and Political Responses: Subpolitics

How can societies develop the institutional and political means for governing and managing risk effectively? Despite Beck's pessimistic conclusion about the imbalance between the pace of risk generation and our ability to know, govern, and manage risks, he does envision alternative paths toward the development of institutional mechanisms for more effective risk management. The most immediate avenue is opened by the breaking apart of the unified voice science enjoyed in modern industrial society. Here Beck envisions a space for public participation in risk decisions.


Between the remnants of scientific voices is an increased role for citizens to identify, assess, and manage technological and ecological risks: "its goal would be to break the dictatorship of laboratory science . . . by giving the public a say in science and publicly raising questions" (Beck 1995: 16). In the risk society, knowledge claims about risks need to be negotiated among scientists, political stakeholders, and laypeople—in effect, a negotiation among different epistemologies. On another level, Beck both describes and prescribes a wider and deeper democratization. Affirming the realist part of his epistemology, he is quick to remind us that many new risks (e.g., nuclear power plants, climate change) have a democratizing power, since exposure to these risks cuts across established social divisions. As a result, they create the potential for new alliances that provide heightened opportunities for public participation (Beck 1995). This leads us to Beck's answer to our third question about the issue of risk governance and management: it is a call for greater public involvement and for the extension of democratic processes in assessing and managing risks. By relying on the sciences alone and granting an inordinate role to experts, democratic societies have failed to provide viable institutions for the democratic governance and management of risks. It, too, is an answer that joins a growing number of voices converging on the same conclusion: that to be effective, advanced modern risk governance must be democratized (Funtowicz and Ravetz 1985, 1993; National Research Council 1996; Renn, Webler, and Wiedemann 1995; Rosa and Clark 1999). Beck believes that industrial society has produced a truncated democracy in which decision making is adulterated. This can be overcome by more democracy: producing accountability, redistributing the burdens of proof, and dividing power between the producers and the evaluators of hazards.
In his vision of an ecological democracy (Beck 1999), the relations of definition that prevailed under industrial society are replaced with more appropriate relations of definition, which are sufficient for the task of governing risks in the risk society. For instance, public deliberation of the costs and benefits of technological development should occur before final decision making, not as the retroactive application of a rational reason for the development in the first place. Risk producers should bear the burden of proof, and standards of proof should be revised via public participation in an ecological democracy (Goldblatt 1996: 172). Yet at other times, Beck is hesitant merely to prescribe more public science and greater democratization within the institutional framework of industrial society. Since the risk society entails a crisis of the institutions that emerged in industrial society, the conventional political forces, representative bodies, and scientific institutions of industrial society clearly fall short. In fact, they are the very source of the risks and the reasons that social actors feel crippling uncertainty (Beck 1994a). Central here is Beck's notion of the "social explosiveness of hazard," his shorthand for how large-scale technological and ecological disasters overwhelm nation-states' political and economic institutions and delegitimize their assurances of security in the risk society.


Indeed, this failure of nation-states (via legitimation crises) is occurring at the same time as the globalization of risks. The latter, magnified by global media coverage, means we are increasingly recognizing that "we are all trapped in a shared global space of threats—without exit" (Beck 2009b: 56). Again, this is what Beck now refers to as "cosmopolitanism," a concept developed to elaborate his ideas of subpolitics (discussed below), which he promotes widely (Beck 2000b, 2005, 2006a, 2007a, 2009a, 2010b; Beck and Grande 2007a, 2007b; Beck and Sznaider 2006). Via the process of cosmopolitanism, social actors begin to realize that we are all neighbors sharing the world, which may prompt them to reintegrate "the other" first into their line of sight, and then into their moral consideration. Beck's (1994b, 1997a) accentuation of greater democratization outside institutional parameters reflects his emphasis on global social movements and subsystem politics. He believes that mass media—particularly television news—help to raise the risk consciousness of individuals, and thus to empower social movements, through dramatic coverage of catastrophes (Beck 1992a, 2009b).20 Beck (1999) maintains that, because of their longevity, organizational assets, and focus, social movements play a central role in this democratizing process. Furthermore, he believes that nongovernmental organizations (NGOs) have the capability to act across borders and truly globally. In this sense, they can become equal counterweights to globally operating corporations. Cosmopolitan influence can be exercised by subpolitical actors because they are not bound to national territories or regional confinements. Indeed, much domestic and foreign policymaking over the past several decades has been shaped from below by grassroots pressure from environmental, women's, and peace movements that are transnational in scope.
Further, global social movements, the carriers of cosmopolitanism, have placed the themes of the future (e.g., the survival of our species within the global ecosystem) on the global political agenda against the institutionalized resistance of existing political and economic structures of industrial society (Beck 1992c, 1997b, 2000b, 2005).

Yet reflexive modernization necessitates an even more fundamental reinvention of politics. For Beck (1994b, 1997b), this means subpolitics. It is an emergent structural element where political processes are conducted outside established representative institutions at social sites previously deemed nonpolitical: "the decoupling of politics from government" (Beck 2009b: 95). While the agents of traditional politics are almost always collective agents focusing their attention toward nation-states, the agents of subpolitics are diffuse, ad hoc networks of individuals acting within an emerging global civil society. Bypassing existing political institutions, subpolitics shapes political action from below through spontaneous, ad hoc individual participation in political decisions (Beck 1997a, 1997b). This concept of subpolitics includes a broad range of political and economic activities outside institutionalized channels (Beck 1996b). Given the pervasiveness of risks in the private sphere, this sphere can no longer be perceived as nonpolitical. Individualized social actors seek to manage their exposure to risks by engaging in politicized economic behavior (e.g., buying carbon offsets with one's airplane ticket), which is made possible by new partnerships between environmental and social justice NGOs and economic corporations. The best-known examples of these are third-party certification of organic, fair trade, and various eco-friendly standards and the mass consumption of various household products made without hazardous chemicals and marketed as "safe" (Beck 2000b). The politicized consumption of such things as milk containing no synthesized bovine growth hormone (e.g., recombinant bovine somatotropin, or rBST) is the type of self-expression by individualized social actors expected by Beck in the risk society. Subpolitics also often comprises spontaneous, emergent activities that attract the convergence of disparate factions over specific issues. The mass consumer boycott of Shell Oil in Western Europe in the mid-1990s and the ongoing protests by diverse, ideologically conflicting groups (from labor unions and political anarchists to organic food growers) against the policies of the World Trade Organization illustrate the present efficacy and potential strength of subpolitics when dissimilar actors mobilize. The emergence of subpolitics will continue to call into question those traditional political institutions (e.g., political parties, legislative bodies) that are increasingly unable to manage the ubiquitous risks of advanced modernity. For Beck (1997a), this radical, global democratization ultimately means not only the reinvention of politics but also the potential destruction of those modern political institutions that are ineffective.

20. For a critique of Beck's ideas on the mass media, see Cottle 1998.

Giddens and the Consequences of Modernity Social Order: Manufactured Uncertainty and Institutionalized Risk Environments What social order emerges in advanced modern societies where technological and ecological risks are virtually everywhere? With The Consequences of Modernity, Giddens explained the essential dynamics of the modern era and launched his contribution to RMT.21 The top part of Figure 4.4 identifies three characteristics of modern social institutions that, for Giddens (1990), explain why they are qualitatively different from those in earlier eras. The bottom part of Figure 4.4 lists those factors that he believes 21.  Giddens (1990: i) defines modernity as the “modes of social life or organization which emerged in Europe from about the seventeenth century onwards and which subsequently became more or less worldwide in their influence.”

Reflexive Modernization Theory and Risk

85

Three Distinctive Characteristics of Modern Social Institutions (Giddens 1990b: 6) • Extreme rapidity of change (e.g., the pace of technological change) • Vast extensiveness of change (i.e., near-global reach) • “Intrinsic nature of modern institutions,” some of which did not exist in earlier eras (e.g., the nation-state) Three Factors Explaining the Dynamism of Modernity (Giddens 1990b: 16–17) • Separation of time and space (e.g., worldwide diffusion of mechanical clocks and standardization of calendars) • Disembedding of social systems via two mechanisms: symbolic tokens (e.g., money) and expert systems (e.g., science) • Reflexive ordering and reordering of social, political, and economic relations (e.g., reflexivity in knowledge and reflexivity in action) Figure 4.4  The Discontinuity and Dynamism of Modernity

explain modernity’s dynamism. In contrast to the evolutionary narratives (a defining feature of modernity) promoted by most classical social theorists, Giddens (1985, 1990, 1991) argues that modernity represents a discontinuous break from past eras. The institutions and processes in Figure 4.5 form the core of his discontinuity thesis. Central to Giddens’s characterization of modernity is his analysis of the contemporary era’s four core institutions (top of Figure 4.5). Analytically separating capitalism and industrialism as distinguishable institutions has led Giddens (1990) to identify industrialism as the main force that transforms traditional societies into modern ones. The bottom of Figure 4.5 lists what Giddens sees as the defining features of industrialism, which—through its application of science and technology—is the key driver of widespread technological and ecological risks in modernity (Giddens 1990, 1994b).22 Giddens is also adamant in his opposition to other scholars’ assertions of postmodernity. Much to the contrary, he argues that we are experiencing the modernization of modernity—captured in his terms “high modernity” (Giddens 1990: 196, 1991: 3, 1994b: 91), “late modernity” (Giddens 1991: 3), and “radicalized modernity” (Giddens 1990: 149–150). Two factors producing the erratic character of modernity are unintended consequences (aggravated by systems complexity) and the reflexivity or circularity of social knowledge, whereby new 22.  Giddens (1985) argued that modern environmental problems were caused by the conjunction of industrialism and capitalism, before he shifted to assign primary blame to industrialism. For a critique of Giddens’s ideas regarding capitalism and industrialism and the causes of environmental degradation in modernity, see Goldblatt 1996: chap. 1. Goldblatt argues quite convincingly that Giddens’s shift to industrialism as the central cause is not plausible.

86

Risk and Social Theory

Institutional Dimensions (and High-Consequence Risks) of Modernity (Giddens 1990b: 59, 171) • Capitalism:  Capital accumulation in context of competitive labor and product markets (high-consequence risk: collapse of economic mechanisms) • Industrialism:  Transformation of nature; development of built environment (high-consequence risk: ecological decay or disaster) • Military power:  Control of means of violence in context of industrialization of war (high-consequence risk: nuclear conflict or large-scale warfare) • Surveillance:  Control of information and social supervision (high-consequence risk: growth of totalitarian power) Core Features of Industrialism (Giddens 1985: 139) • Inanimate sources of power are mobilized in the production and circulation of commodities • Mechanization of products and economic processes such that there are routinized processes creating a flow of produced goods • The use of inanimate sources of material power in either production or in processes affecting the circulation of commodities • Workplaces are centralized, wholly devoted to productive activity, and separate from the domestic locale Figure 4.5  The Institutions of Modernity

knowledge alters the nature of the world (Giddens 1990). The overall trend, in which the world seems increasingly out of our control, especially as institutions come up against their own limits, leads Giddens, as we have noted, to compare modernity to a juggernaut (Giddens 1990) and a “runaway world” (Giddens 2003c). The concept of risk as manufactured uncertainty occupies a central place in Giddens’s RMT works, buttressing his discontinuity thesis. In traditional societies, danger derived primarily from an extrinsic, often capricious nature— that is, the vagaries of climate and natural disasters often controlled the destinies of humans. However, Giddens does not consider these as constituting risks. Rather, he terms them “external threats.” According to Giddens (1990), the phenomenon of risk is “manufactured uncertainty,” a point underscored by the fact that the word “risk” came into common usage only in the modern period. The uncertainty people feel in modern times is manufactured through human activities and decisions (Giddens 2003c). Hence, risk is an inevitable element in a system that depends on human decision making and domination over nature (Giddens 1991). Industry, science, and technology—the very institutions that have reduced premodern threats such as famines and that have produced

Reflexive Modernization Theory and Risk

87

1. Risk comes from human decisions in the created environment (technological risk) or socialized nature (ecological risk) 2. Perception of risk as risk 3. Well-distributed awareness of risk (i.e., many of the dangers we face collectively are known to wide publics) 4. Development of institutionalized risk environments affecting the life chances of millions (e.g., investment markets) 5. Globalization of risk leads to increased intensity (e.g., nuclear war can threaten survival of humanity) 6. Globalization of risk leads to increased extensiveness (e.g., changes in the global division of labor affects large numbers of people on the planet) 7. Awareness of the limitations of expertise (i.e., no expert system can be wholly expert in terms of the consequences of the adoption of expert principles) Figure 4.6. The Risk Profile of Modernity (Giddens 1990b: 124–125)

many benefits for humankind—are paradoxically the genesis of this manufactured uncertainty (Giddens 1994b). They are the prime causes of risk, the result of the long-term development and maturation of the industrial order. For Giddens, the natural catastrophes or external threats in traditional societies are trumped in the modern era in frequency, magnitude, spread, and catastrophic potential by human-created risks. Giddens’s (1990) description of modernity’s “risk profile” (described in Figure 4.6) summarizes the genesis of risks. It also guides the remainder of this section.

Giddens (1991) describes modernity as a “risk culture.” He chooses this description not because life is necessarily more dangerous now than it was in earlier times (indeed, he acknowledges that for most of us, it is not),23 but because of how central the concept of risk has become to how experts and laypeople alike organize their worlds. We increasingly think of risk as we make routine decisions (e.g., which buttery spread to use, how long to spend in the sunlight without sunscreen, when to lock the house and car). Indeed, awareness of risk grows rapidly and widely when there are public debates about collective risks such as those associated with financial markets, new technology, and ecological problems (Giddens 1998). Provoking a great deal of dread in particular are low-probability, high-consequence risks, such as the risk of a commercial nuclear accident. Giddens associates such risks with each major institution of modernity, as described in the top part of Figure 4.5. Because of their breach of spatial and temporal limits, high-consequence risks exemplify the boundless danger of manufactured uncertainty (Giddens 1994b).

23. This is a key element of a paradox that we take up in more detail in our incorporation of the work of Habermas in Chapter 6.

For Giddens (1990, 1991), the increased degree of manufactured uncertainty and the pervasive spread of risk have led to the institutionalization of risk environments. Institutionalized risk environments are systems that “are constituted through risk, rather than certain risks being incidental to them” (Giddens 1991: 118). Clear examples include financial markets, regional electrical grids, and arenas of pharmaceutical drug development. Two qualities of institutionalized risk environments are particularly noteworthy. The first is that virtually everyone is exposed to risks, regardless of whether they enjoy benefits from them or are involved in choices about their generation and acceptability (Giddens 1991, 2003c). As Giddens (1991: 22) is fond of saying, “No one can ‘opt out.’” For example, some of the air pollution of Northern California can be traced to industrial emissions in China (United Nations Economic Commission for Europe 2007). The second is that, given the lingering threat of low-probability, high-consequence risks, these institutionalized risk environments carry with them the possibility of extreme disaster (Giddens 1990), as witnessed with climate change (Giddens 2009a, 2009b) or with the near-collapse of the world financial economy in what has become known as the “Great Recession.” The sheer scale of catastrophes in institutionalized risk environments reminds us of the limits of our institutions. “The more calamitous the hazards they involve,” Giddens (1991: 22) writes, “the less we have any real experience of what we risk: for if things ‘go wrong,’ it is already too late.” Giddens’s enduring focus on ecological problems helps illustrate these dynamics.
Until the modern era, nature was an external system that dominated human activity mostly because of its unpredictability—for example, through floods, droughts, and infectious diseases (Giddens 1990). Yet with modernization, humans have come to dominate nature via science and technology in service to industrialism, what Giddens (1990, 1991) calls “the socialization of nature.” Most ecological processes (e.g., the dynamics of major river watersheds, forest succession) are, at least partially, products of human decision making to the extent that humans actually live in eco-social systems, or “socially organized environments” (Giddens 1994a, 1994b). Because humans are producing intentional and unintentional changes to the environment, “socialized nature is in some fundamental respects more unreliable than ‘old nature’” (Giddens 1991: 137). In other words, the socialization of nature means that any eco-social system has become an institutionalized risk environment with the systemic risk of ecological disaster unavoidably embedded in it. Examples of such ecological disasters include climate change, ocean acidification, and destruction of rainforests (Millennium Ecosystem Assessment 2005).

Globalization, for Giddens, increases the intensity and extensiveness of risks around the world. Defined as “the intensification of worldwide social relations which link distant localities in such a way that local happenings are shaped by events occurring many miles away and vice versa,” present-day globalization, according to Giddens (1990: 64), is no longer a process of unidirectional imperialism. Instead, it is disorderly and chaotic (Giddens 2003c).24 The global spread of industrialism, however uneven and sporadic, has produced the global spread of manufactured uncertainty and high-consequence risks (Giddens 1990). Globalized risks are more intensive than traditional or local risks because the pace, scale, and consequences are of substantially higher magnitude. A catastrophe today can produce many more deaths than in the past, and the real potential still exists for annihilating the entire planet abruptly via nuclear arsenals or slowly via a gradual warming. The possible risks we face have the capability to affect the lives of all humans since, for Giddens (1990: 126), virtually no one can disengage completely from global institutionalized risk environments.

With the continual possibility of a massive catastrophe, crises become a normal state of affairs (Giddens 1991). There is no shortage of manufactured catastrophes—industrial and nuclear accidents, oil leaks, mine explosions—that feed a hungry media. Ecological and other risk crises, which were episodic and manageable when threats were external and embedded in the caprice of nature, now become endemic to the normal functioning of the advanced modern era. Thus, in his answer to the first question, Giddens distinguishes the advanced modern era from its predecessors on the basis of manufactured risks that are global with no escape and that have the potential for unprecedented catastrophe. Such high-consequence risks are defining elements of “the runaway, juggernaut character of modernity” (Giddens 1990: 131).

Risk Knowledge: Ontological Security and Reflexivity

What is our knowledge about these risks? Giddens, like Beck, views science and related industrial systems of technical expertise as both the creator of risks and the primary system on which we have depended for knowledge about risks (Giddens 2003c). Similar to Beck, Giddens further characterizes the advanced modern era as one of growing disenchantment with science, mounting distrust in expert systems, and widespread feelings of uncertainty and insecurity. And like Beck, Giddens calls this regnant knowledge system of science and other expert systems of technical expertise into question, although he is less radical in his prescriptions for change than is Beck.

Giddens identifies three reasons for the limitations of science and other expert systems in advanced modernity, which also explain what he sees as sharp declines in laypeople’s trust in expert systems.25 First, experts and laypeople recognize that science—as organized skepticism mobilized on the principle of doubt—is always in flux. Giddens (2003c: 31) refers to this as its “essentially mobile character.” He writes, “No matter how cherished, and apparently well established, a given scientific tenet might be, it is open to revision—or might have to be discarded altogether—in the light of new ideas or findings” (Giddens 1991: 21). Due to its organized skepticism, science itself is inherently unsettling. What we know scientifically often changes rapidly, especially as attention is directed to emerging risks (Giddens 1994b). Such change is often unsettling because it does not point in the direction of greater or deeper understanding. Briefly, the halo that science enjoyed in earlier decades has been cast aside by laypeople, with their growing recognition that in the case of risk, science can deliver only probabilities, not certainties (Giddens 1990).

Second, increasing specialization in science has accelerated these trends. On one level, greater specialization means that experts—even the best of them—are laypeople on most issues most of the time, signaling a fundamental limitation of synthetic expertise. On a deeper level, greater specialization makes it harder to see and control consequences beyond the narrow domain of expertise (Giddens 1991).

This relates to the third source of the limits of expert systems. Such systems of expertise may be undermined by the dynamics of their very own subject matter. For instance, the actual occurrence of catastrophes challenges the legitimacy of scientific assurances of safety (Giddens 1990). Most technological and ecological risks are controversial, with competing stakeholders expressing conflicting claims. Expert risk assessment cannot be made with the precision that laypeople increasingly desire (Giddens 1990).

24. For a critique of Giddens’s conceptualization of globalization, see Lewandowski 2003.
25. For a strong critique of Giddens’s claim about the recent drop in laypeople’s trust in expert systems, see Wynne 2002. Wynne argues that Giddens overstates the degree of trust in expert systems that the general public had earlier in the modern era.
Indeed, risk assessment itself becomes risky: “risk calculation has to include the risk of which experts are consulted, or whose authority is to be taken as binding” (Giddens 1994a: 87). New forms of manufactured uncertainty (e.g., industrialization of animal agriculture, centralized packaging of green leafy vegetables) seem to evoke incalculability. In fact, even whether a risk exists at all is likely to be disputed (Giddens 2003c). Further, the risk assessment process is complicated by the fact that “the bigger a potential disaster, the more likely governing authorities and technical specialists are to say that it ‘cannot occur’” (Giddens 1994b: 220), as was the case with the British Petroleum (BP) Deepwater Horizon oil spill. And often, according to Giddens, experts are not even sure what “it” is. Furthermore, to the extent that risk can be assessed and threats can be controlled, such assessment and control can be done only for the present, because the accuracy levels of assessment fall to zero when one tries to assess ecological risks far into the future—such as the risks of nuclear waste whose radioactive hazards last for as long as a million years. Thus, in answering the second question on risk knowledge, Giddens, like Beck, points to the limitations of our understanding when we depend solely on science and other institutionalized expert knowledge.

How, in turn, does the awareness of risk affect individuals? For the most part, Giddens (1990: 92) believes that acknowledging an even infinitesimal probability of the occurrence of a catastrophic event is a serious threat to our ontological security: “the confidence that most human beings have in the continuity of their self-identity and in the constancy of the surrounding social and material environments of action.” High-consequence risks have a very low probability of occurrence but potentially disastrous consequences. The fact that most of these risks are buried below the conscious awareness of social actors in normal day-to-day life serves to maintain a belief in ontological security. In effect, unawareness, in Beck’s terms, functions to serve ontological security—an emotional phenomenon rooted in the unconscious (Giddens 1979, 1991). Events that scare people, such as nuclear disasters, the spread of bovine spongiform encephalopathy (BSE), or highly visible terrorist acts that target them, excavate latent ontological insecurity. Indeed, one could argue that the fundamental purpose of terrorism is to promote continuous ontological insecurity, the more insecurity the better. This uneasiness, in turn, forces heightened reflexivity among experts and laypeople.

Giddens believes that such reflexivity is an additional means by which social actors acquire knowledge of the risks that confront them in advanced modernity, even while he points to fundamental limitations in the acquired awareness and assessment of risks. Giddens conceptualizes two related dimensions of reflexivity (O’Brien 1999), operating at the macro and micro levels, respectively.
First there is institutional reflexivity, or the reflexivity of knowledge and meaning (Giddens 1991), which relates to Beck’s (1992c) reflexive scientization.26 Giddens (1990: 38) writes, “The reflexivity of modern social life consists in the fact that social practices are constantly examined and reformed in the light of incoming information about those very practices, thus constitutively altering their character.” In advanced modernity, this reflexivity is radicalized to cover all aspects of our lives. Knowledge claims by scientists and technical experts routinely enter society, leading to iterative changes in social processes and the knowledge claims themselves over time. Expert claims and scientific knowledge are especially open to revision and refutation, as science and other expert systems are presumed to be cumulative, self-correcting systems. The Enlightenment goal of rational ordering and increased control over social and natural environments is essentially undercut by the heightened uncertainties of modernity’s reflexivity (Giddens 1991). For Giddens, then, more knowledge does not translate into greater control of our systems and more effective governance. Yet non-experts must still navigate through conflicting knowledge claims, competing experts, and the realization that any information may be revised in the future, often while coping with threats to their ontological security. All of these factors promote a continuous state of uncertainty.

26. Reflexive scientization accounts for the substance or definition of what is structured and is clearly grounded in Giddens’s (1984) earlier notion of the double hermeneutic, where sociological concepts explaining social life routinely enter and transform the interpretive vocabulary of laypersons. Apparently, in his scheme, scientific concepts surrounding risk do the same.



This leads to Giddens’s second dimension of reflexivity—self-reflexivity or the reflexivity of action—where individual social actors accentuate the monitoring of their own conduct (Giddens 1974, 1990).27 That the self is a reflexive project is intrinsic to all human activity and accounts for the recursive production of everyday life in all eras (Giddens 1991). What is distinctly different about self-reflexivity in advanced modernity is the radically increased scope of knowledge and information that science and communication technology make available to actors—in the face of the rise of manufactured risks and the decline of tradition that provided ready-made answers in the past. The Internet, for example, has revolutionized the base of global knowledge and access to it. Indeed, reflexive practices are applied to all aspects of human life. Because of this, Giddens sees self-reflexivity as a crucial mechanism for navigating the risk consequences of advanced modernity.

Institutional and Political Responses: Life Politics, Social Movements, and Dialogic Democracy

How can societies develop the institutional and political means for governing and managing risk effectively? In addressing the third question about the personal and institutionalized means for governing and managing the technological and ecological risks that now pervade the world, Giddens only adumbrates a broad perspective that involves the micro and macro levels. Giddens (1990, 1991) implies that we must acknowledge the existence of natural limits, and he believes that a more reflexive practice of science and expertise—incorporating uncertainty and ambiguity—should play a larger role in responding to global crises than Beck does. Giddens’s micro-level solution involves the emergence of life politics and the resultant production of ecological (green) lifestyles, while his macro-level solution involves the humanizing of technology and a system of governance that further “democratizes democracy.” The mobilization of social movements should serve as the bridge between the two levels.

At the micro level, Giddens is quite explicit. He provides a directly prescriptive mechanism that parallels the one offered by Beck. Giddens (1990) argues that we must move from emancipatory politics to life politics. Emancipatory politics is concerned with enhancing life chances by liberating individuals and groups from structural and system constraints (Giddens 1991). These processes serve to eliminate or reduce oppression, exploitation, and inequality. The emergence of the prescribed life politics presumes that a certain level of emancipation has already occurred so that life chances are not simply enhanced but are also relatively equalized (Giddens 1994b).

27. For a critical analysis of Giddens’s notion of reflexivity and the self, see Adams 2003.


Similar to Beck, Giddens argues that the diminishment of the importance of social class leads to a trend toward individualization.28 Thus, life politics becomes the politics of lifestyles, or the life decisions of a reflexively mobilized collectivity. Life politics means that individualized social actors pursue self-expression through their lifestyle decisions and through a proclivity to join social movements that embody similar values (Giddens 1991). One prescribed answer Giddens (1990) describes is the notion of ecological lifestyles, which involve the construction of ecological self-identities monitored through reflexivity (see also Giddens 2003a).

Like Beck, Giddens (1990, 1991) believes that mass media heighten individuals’ awareness of risks by bringing to their attention catastrophes around the world that they cannot experience directly. Media have virtually always provided this vicarious experience, but in advanced modernity, characterized by global communication technology and high-resolution imagery, such experiences are shown in real time and with remarkable vividness. This spread of awareness and reflexivity among a growing number of individuals with common concerns, Giddens (1991, 1998) argues, sometimes generates social movements.

Social movements carry a substantial responsibility for shaping the actions of institutions responsible for managing risks (Giddens 1990). They can change the public agenda and bring attention to unacceptable risks while providing a vision of the social transformations needed to eliminate or address them (Giddens 1994b). For example, environmental movements often shoulder most of the responsibility for bringing about the humanizing of technology, since they emphasize a heightened awareness of the high-consequence risks associated with globalizing industrialism (Giddens 1990).
In general, then, social movements are also the means for bringing public attention to life politics in all societies (Giddens 1998).29 Social movements therefore provide the link between macro-structural and micro-agency-level risk awareness that, in turn, leads to governance structures and management strategies. Hence, in contrast to Beck, who theorizes that the source of institutional influence in individual actions is carried out through systems of subpolitics, Giddens theorizes that the source of this influence is collectivized individuals: individuals who are prepared to join social movements about unacceptable risks, about systems to govern them, or about societal management of them.

At the macro level, Giddens prescribes trends for the humanization of technology and the democratization of democracy. Since we cannot escape from the world that industrialism has produced, we must attempt to humanize the technology that underpins it (Giddens 1994b). Awareness of ecological problems that have originated from the expansion of control over nature must drive us to question the basis of technology and science in their service role to industrialism. A major dilemma with technological innovation is that it is no longer strongly dependent on industrialism; it has inertia of its own (Giddens 1990). The humanizing of technology involves “the increasing introduction of moral issues into the now largely ‘instrumental’ relation between human beings and the created environment” (Giddens 1990: 170).

The life politics agenda, advanced by social movements, brings to the fore additional moral and existential questions derived from the more encompassing question of “How shall we live?” that emerged largely with the Enlightenment project (Giddens 1991). These types of deep and penetrating questions in advanced modernity signal the need for a new form of governance, a “dialogic democracy.” Open dialogue in the tradition of John Dewey is the context for the “recognition of the authenticity of the other, whose views and ideas one is prepared to listen to and debate, as a mutual process” (Giddens 1994a: 106). Basically, this involves “creating a public arena in which controversial issues—in principle—can be resolved, or at least handled, through dialogue rather than through pre-established forms of power” (Giddens 1994b: 19).30 But the potential for dialogic democracy is dependent, in large part, on a heightened awareness of the ubiquity of risks and on democratic governance that is open to innovation (Giddens 1994a).

Dialogic democracy is but one manifestation of Giddens’s wider prescription for the radical democratization of democracy,31 which he thinks is essential to overcome the failings of conventional governance structures in managing risk. Essential elements of this radicalization of democracy include, among others, improved anticorruption measures and greater transparency in government, experimentation with alternative avenues for participation besides voting, and the renewal and expansion of civil society (Giddens 1998, 2003c).

28. For a critical analysis of Giddens’s ideas on social class, see Atkinson 2007a.
29. For a critique of Giddens’s work on social movements, see Bagguley 1999.
Looking toward the future, he believes this new form of governance, underpinned by the democratization of democracy, is essential for guiding us through the emergence of a global cosmopolitan society (Giddens 2003c).

Contribution of Reflexive Modernization Theory

We conclude our discussion by comparing the insights of Beck and Giddens (and by identifying their major strengths and weaknesses in dealing with the risk problématique), then by providing a more general overview of the contribution of RMT to improving our theoretical understanding of risk and how that knowledge can lead to improvements in governance. As is evident in the discussions in this chapter, Beck’s and Giddens’s ideas about risk have many key commonalities. Yet only an explicit comparison can identify crucial differences that signal essential variation in RMT insights on risk. Figure 4.7 identifies ten key points of comparison between Beck and Giddens.

30. Giddens’s notion of dialogic democracy parallels Habermas’s (1991b) concepts of “ideal speech situations” and, more recently, “unconstrained discourse conditions.” The primary distinction is that Giddens’s conceptualization is more proximate and circumscribed because he strives to avoid the embedding of his concept in any macro theoretical framework.
31. Giddens (1998, 2000b, 2001, 2003b) has widened his focus to seek a broader approach to politics. His perspective—“the third way”—is a renewal of social democracy in a world where there are no viable or recognizable alternatives to capitalism. Briefly, his framework of thinking and policymaking is an attempt to transcend the divide between classical social democracy and neoliberalism. For a critical analysis of Giddens’s notion of the third way, see Lewandowski 2003.

Comparing Beck and Giddens on Risk

At its core, RMT is a theory of “the self-transformation of modern society” (Beck 2009b: 55), with an emphasis on enduring and emerging social conflict and rapid technological and social change. Critics have challenged RMT for its lack of sensitivity vis-à-vis historical questions (Goldblatt 1996) and for its inattention to the sizable professional risk literature. Even Beck acknowledges that RMT has been quite Eurocentric (see Beck, Bonss, and Lau 2003).

Both Beck and Giddens envision the contemporary era as one of fundamental structural ferment, and both are adamant that this is due to the extension of the processes of modernization, not to some shift to postmodernity. Yet they differ somewhat on their identification of this source of discontinuity with the past. For Beck, the changing nature of risk (especially the emergence of mega-hazards that are not spatially, temporally, or socially limited) is no less than the defining characteristic of our age. It is fair to say that Beck has overstated the case that modern risks are novel in their characteristics (Boyne 2003; Campbell and Currie 2006; Lupton 1999); that risk equality transcends existing class inequality (Alario and Freudenburg 2003; Elliott 2002; Fischer 1998); and that these risks factor in the everyday activities and consciousness of social actors in advanced modernity (Alexander 1996; Tulloch and Lupton 2003).32

For Giddens, the dynamic nature of modern social institutions and the processes of globalization are the causes of the break from past eras. Indeed, it is fair to say that Giddens’s interest in risk is subordinate to his focus on modernity’s changing institutions. Central to our purposes here is his emphasis on the causal primacy of industrialism over capitalism in explaining the rise of technological and ecological risks—in particular, the emergence of institutional risk environments of global scale. Beck also emphasizes the industrial nature

32.  Beck’s work has attracted a wide array of detractors, who generally criticize what seem to be logical contradictions and the lack of clarity and precision in his ideas (see, e.g., Cottle 1998). Frank Fischer (1998: 113) bluntly calls out Beck’s “unfortunate inclination toward hyperbole,” while David Goldblatt (1996: 154) is a little more delicate, writing that Beck’s early works on the risk society “are not so much rigorous, analytical accounts of modernity as surveys of the institutional bases of the fears and paradoxes of modern societies.” In response to critics’ charges of opaqueness and contradictions in his ideas, Beck, Wolfgang Bonss, and Christoph Lau (2003) operationalize several claims of RMT for direct empirical testing.

Key Characteristics | Ulrich Beck | Anthony Giddens
Theoretical emphasis on order/stasis or conflict/change | Emphasizes conflict and change | Emphasizes conflict and change
Continuity or discontinuity with the past | Magnitude and scope of risks bring discontinuity with the past | Institutionalized risk environments of global scale force discontinuity with the past
Structure or agency | Structural changes create new opportunities for agency in noninstitutional locations | Structuration is synergy between actor agent and social structure
Descriptive or prescriptive | Descriptive and prescriptive | Mostly descriptive, but a little prescriptive
Ontology and epistemology of risk | Realist ontology; opposes naïve realist epistemology; expresses partially social constructionist ideas | Realist ontology; critical realist epistemology
Prevailing characterization of risk | Risk as master frame; defining characteristic of advanced modernity; risk exposes paradoxes of advanced modernity | Risk as manufactured uncertainty is emblematic of modernity
Emphasis on which social actors as risk agents | Individuals and institutions; largely ignores organizations | Individuals and institutions; largely ignores organizations
Are we creating risks more rapidly than we can understand and manage them? | Yes | Probably
Tone | Wants to be optimistic but actually is quite pessimistic | Is mostly optimistic
What is the way out of our predicament? | Radicalization of modernization | Modernization of modernization

Figure 4.7. Comparing Beck and Giddens on Risk Theory Insights


of society vis-à-vis the creation of novel risks.33 Hence, both Beck and Giddens seem to give disproportionate attention to global risks and mega-hazards, systematically ignoring more local, mundane risks that remain bounded in scope and often seem to be aligned with social class (Freudenburg 2000).

Theoretically, Beck and Giddens both wrestle with the interaction of structure and agency. To be sure, Giddens’s RMT writings consistently extend his earlier structuration theory (Giddens 1984). Accordingly, he contextualizes structuration as the synergistic link between social actors and social structure. Each individual actor is part of the forces that shape the future context of actions for others. At the same time, each individual is bound to structural constraints and path dependencies that are the outcome of the past actions and choices of others. Beck’s approach to the problem of structure and agency is closely related to his emphases on how structural changes in society create new opportunities for agency in noninstitutional locations (i.e., subpolitics). These actions by individuals may crystallize into new structures that facilitate or constrain later social agency (e.g., third-party certification of eco-friendly standards).

Furthermore, both scholars offer a mix of prescriptions within their descriptive accounts, although such prescriptions appear much more in Beck’s than in Giddens’s works. Indeed, “is” statements and “ought” sentiments are often intermingled in Beck’s writings, a reflection of the continental tradition, while they are identifiably distinct in Giddens’s.

At their foundation, Beck and Giddens believe that risk is real. That is, they both believe that the quintessential risks of advanced modernity can exist independently of the senses of social actors—although there are many cases that cannot. Nevertheless, they are fully committed to the idea that risk emerges out of social constructions.
Our attention is drawn to some dangers in the world over others because of social processes that identify them and place them on the public agenda. Indeed, both seem troubled by the extent to which our perceptions of these risks often do not align with the risks’ objective reality.

Recently, Beck explicitly reflected on epistemology and developed a four-category typology that combines presuppositions about realism (ontology) with presuppositions about constructivism (epistemology) (Rosa, Diekmann et al. 2010). The category with the highest level of realism is “strong (naïve) realism,” whose representative cases are human ecology (Catton and Dunlap 1978), environmental sociology, and ecological modernization theory (Spaargaren and Mol 1992). The next category, which emphasizes an ontological realism with an epistemological constructivism, is “weak (critical/reflexive),” found in green social theory (Burns and Dietz 1992; Dickens 1992; Rosa 1998a; Rosa and Clarke 2012). The third category, “weak constructivism (constructive realism),” emphasizes an epistemological base (risks are constructions) with an ontological outcome (risks are real). Beck places his work and that of Giddens—as well as the work of Klaus Eder (1996) and Bruno Latour (2004)—in this category.34 Finally, the last category, “strong (naïve) constructivism,” rests entirely on the presupposition that risks are not real but only constructed. The exemplars here are rational choice theory (Esser 1990), cultural theory (Douglas and Wildavsky 1982), and autopoietic systems theory (Luhmann 1993).

To be sure, Beck opposes what he considers “naïve” realist epistemology (in essence, the business-as-usual model of the biophysical sciences) and the “naïve” strong constructivist epistemology (particularly the systems theory of Luhmann). And he, along with Giddens, agrees that risks are real; there are unwanted illnesses and accidents and dead bodies. As noted in Chapter 1, the process of their connection between ontology and epistemology is from the latter to the former. Risks are constructed, but because they are believed to be real, they must be real. The key to, and serious flaw in, this “construction to real” line of reasoning is that it embeds the reasoning with the logical fallacy of “begging the question”—namely, it presupposes but does not demonstrate why a constructed risk is necessarily real. The fatal flaw in the logic of the “construction to real” line of reasoning can be corrected with a shift in the priorities of the process, by a shift to a “realism to construction” sequence, as demonstrated by Rosa and Clarke (2012) and in Chapter 1. As for Giddens, although he spends much less space on epistemology in his works, he arrives at a constructed realism position similar to Beck’s and is properly placed in the Beck typology.

For Beck and Giddens, low-probability, high-consequence risks are the signature risks of our new age—the risk society for the former, and high or late modernity for the latter. On a more abstract level, the notion of risk itself forms an essential element in both of their contributions to RMT, although it seems to be more transcendent in Beck’s works. Risk is Beck’s master frame, the linchpin of his entire theory. For Giddens, risk—as manufactured uncertainty—is emblematic of modernity. Risk as we know it simply did not exist in earlier eras, according to Giddens. As the defining feature of advanced modernity, risk exposes the innumerable paradoxes of our time.

Both Beck and Giddens largely ignore the roles of formal organizations in the risk landscape of advanced modernity, analyzed especially in the scholarship of Charles Perrow (1999), Lee Clarke (1989), James Short and Lee Clarke (1992), and Diane Vaughan (1996), among others. Rather, both perceive that individualized social actors (sometimes mobilized in social movements) are the quintessential risk agents in advanced modernity. In particular, Beck and Giddens emphasize the dynamic interplay between individuals and social institutions that is increasingly fraught with tension as high-consequence risks

33. Indeed, much to the chagrin of Marxists and non-Marxists alike, neither scholar implicates capitalism deeply in his insights on risk (Goldblatt 1996). Nor do Beck and Giddens advance a political-economic framework, drawing the ire of scholars who theorize the relations between the state and the economy.
On a more abstract level, the notion of risk itself forms an essential element in both of their contributions to RMT, although it seems to be more transcendent in Beck’s works. Risk is Beck’s master frame, the linchpin of his entire theory. For Giddens, risk—as manufactured uncertainty—is emblematic of modernity. Risk as we know it simply did not exist in earlier eras, according to Giddens. As the defining feature of advanced modernity, risk exposes the innumerable paradoxes of our time. Both Beck and Giddens largely ignore the roles of formal organizations in the risk landscape of advanced modernity, analyzed especially in the scholarship of Charles Perrow (1999), Lee Clarke (1989), James Short and Lee Clarke (1992), and Diane Vaughan (1996), among others. Rather, both perceive that individualized social actors (sometimes mobilized in social movements) are the quintessential risk agents in advanced modernity. In particular, Beck and Giddens emphasize the dynamic interplay between individuals and social institutions that is increasingly fraught with tension as high-consequence risks

threaten to undermine institutional legitimacy. For both, although especially for Beck, people have lost faith in the promises of modernity. They have come to expect that the price to pay for prosperity, greater uncertainty and risk, will haunt them later on when risks become apparent.

This leads to Beck's and Giddens's answer to one of the guiding questions that motivate this book: are we creating risks more rapidly than we can understand and manage them? "Yes," Beck proclaims. Indeed, his affirmative answer is central to his theoretical argument. Giddens is less sure that the answer to this question is "yes," but his acknowledgment of high-consequence risks and the globalization of institutionalized risk environments makes it likely that his answer is at least "probably."

This feature of their theoretical frames defines the difference in the tone of the two scholars' works. It is a difference that follows from their respective meanings of the concept "reflexive." Compared with Beck, Giddens seems quite optimistic about our current state of affairs—and about the future. Yet, as Beck (2009b) himself points out, Giddens's theorizing of reflexivity ignores what Beck considers the "reflex" dimension and its frightening concomitant of non-knowing. Across Beck's works, one gets the sense that he would like to be optimistic about our current state of affairs, but his focus on limits, unawareness, and phenomena such as the boomerang effect yields a pervasive pessimism (especially in his earliest works). Yet even the deeply pessimistic Beck agrees that there can be a way out of our predicament. Indeed, Beck and Giddens agree that the desirable way forward, the one that may help us to assess and manage risks most effectively, is indeed just that: going forward (or further) into modernization.

34. Beck (2009c) is simultaneously committed to this realist and social constructivist view of risk based on the intellectual influences of Kant, Hegel, Weber, and Frege (Ulrich Beck, interview by Eugene A. Rosa, Munich, November 17, 2000).
In particular, their common prescription is for the further transformation of our governance systems, for the further democratization of democracy itself.35 Uncovering this general agreement, however, reveals three subtle differences in their more specific prescriptions. First, Beck sees only a limited role, at best, for conventional science, as we know it now and in the future. Giddens, however, envisions a more substantial role for science, especially one that is interdisciplinary, inclusive, and integrative. Second, while both theorists point to the role of social movements in the democratization process, Beck is less sanguine about their centrality, emphasizing instead the ad hoc political action via subpolitics by otherwise disconnected social actors. Third, while Giddens seems comfortable crafting solutions to technological and ecological risks mostly within existing political institutions, Beck is willing to discard those ineffective political institutions of reflexive modernization and reinvent governance anew.

35. There have been few attempts to test RMT hypotheses regarding enhanced public participation empirically. Carlo Jaeger, Ralf Schüle, and Bernd Kasemir (1999), however, do propose a hybrid methodology of focus groups embedded within a process of integrated environmental assessment, and their research findings largely support the expectations of RMT.


The Upshot

Beck and Giddens agree that there is a clear discontinuity between advanced modern society and its predecessors due to the acceleration of risks produced by humans and human systems—internal or manufactured risks. Advanced modernity is a modernity of unprecedented pace, scale, and uncertainty. Advanced modern technology is larger, more sophisticated, and much higher in risk potential than ever before—with the capacity to affect far greater numbers of people, the things they value, the areas they occupy, and the ecosystems that sustain them. Furthermore, because risks are global and impossible to avoid, they have markedly increased the vulnerability of social systems everywhere. These infrastructural changes have reshaped society. In advanced modernity, then, technological, ecological, or terrorist catastrophes are increasingly transcending geographical, political, and economic boundaries.

Beck and Giddens highlight the challenges for individuals when they look to science for knowledge about risk. Indeed, both thinkers acknowledge that individualized social actors are increasingly more reflexive, recognizing science's paradoxical role—as simultaneously the creator and the assessor of risks. They argue that individuals no longer accept scientific assessments uncritically but combine scientific knowledge with experiential knowledge, with political experience, and with the trustworthiness of science to arrive reflexively at some judgment about risks.

How can societies develop or reconfigure institutions and practices to govern and manage these pervasive and highly consequential risks? Beck and Giddens converge on the same general conclusion: that we need to develop new institutional mechanisms that extend the arena of decision making. We need innovations in governance to further democratize risk choices.
Because advanced modern risks embed not just probabilities and other analytic elements, but also a multiplicity of other social, political, and psychological factors, their proper governance and management require consideration of these factors. Risk governance in advanced modernity demands a radicalized democratization of democracy.

A core idea that lashes the work of Beck and Giddens together is the central role of human agency as the foundation of governance. This is an extension of the basic transformational driver of the seventeenth- to eighteenth-century Enlightenment in France and Europe more generally: the elevation of the rights of all humans, not simply privileged classes. These political ideals had a far-reaching influence, not just the well-known influence on the American Declaration of Independence, the U.S. Bill of Rights, and the French Declaration of the Rights of Man and the Citizen, but also a further impact on, for example, the Declaration of the Rights of the Polish-Lithuanian Constitution of 1791. These ideals were closely tied to the shift in worldview that was a key element of modernization, the shift from a world of repeated cycles to one of direction. And this shift, based on rational thought and especially science, would foster continuous progress. For the risk society, the prescribed catalyst for that progress is a new form of governance that grants additional agency to democratic actors.

The historical antecedents and logic of Beck and Giddens disappear in the next chapter, where systems theory, the theoretical frame of Niklas Luhmann, is examined. For Luhmann, the individual does not exist. Agency does not exist. Individuals are simply embedded elements in autopoietic systems—systems of economics, governance, culture, and even psychological systems—that operate by their own logic. Based on that premise, an agent-based type of governance is obviated.

5 Risk in Systems
The Work of Niklas Luhmann

Too sharp a division between theoretical and experimental work can lead to mutual misunderstanding, even in so-called exact sciences; in softer disciplines, it is bound to bring about a total impasse.
—Wassily Leontief, winner of the Nobel Memorial Prize in Economic Sciences in 1973, in Science (February 25, 1983)

The Gap between Risk Assessment and Perception: A Product of Systems Rationality

The German sociologist Niklas Luhmann, like Beck and Giddens, has devoted considerable time and effort to the study of risk. As we discussed in Chapter 2, Luhmann's Ecological Communication was published in 1986, soon after the Chernobyl, Challenger, and Sandoz Laboratory accidents. In this treatise, written before the three accidents, Luhmann offered a theoretical explanation for the significant gap between the reassuring risk assessments of the technical elite and the experience of disasters by the public. In a later work especially, Luhmann (1993) further deepened and broadened the sociological understanding of risk.

As a grand theorist of social systems, Luhmann situates his focus on risk at the interface and conflict between different social systems. His principal claim is that human societies are organized as a variety of self-referential or autopoietic systems that define their own reality, as well as an image of the world outside (Luhmann 1986, 1989; see Bailey 1994; Fuchs 2001).1 Systems comprise functional subsystems such as the law, the economy, and the political order. Since all systems and their subsystems depend on internal order and systemic

interactions with other systems in the outside world, they have generated special communication media (e.g., legal codes, money, and power) to serve these needs. These media operate to ensure internal order and to provide the necessary exchange with other systems (Luhmann 1982). The sustainability of social systems depends on the function of media exchange.

Media form the (binary) code of interaction within and between systems. According to Luhmann, availability of media is an issue of "yes" or "no." Either one has access to the medium or not. This does not exclude the possibility that an individual may possess more or less of a specific medium, such as wealth or power. However, this is not Luhmann's point. The system provides the means to generate or activate media, regardless of how these are distributed among various actors within the system. The binary code that Luhmann proposes implies that a system has the potential to communicate via its specific medium, and this is either available or unavailable. There is no in between.

Our purpose in this chapter is not to review the totality of Luhmann's systems theory. Rather, we narrow our focus to an examination of how Luhmann's systems theory addresses the three key questions regarding societal risk in advanced modernity that we outlined in the Introduction. The questions relate to social order, risk knowledge, and institutional and political responses. We repeat each in full as the discussion unfolds.

1. "Autopoiesis," a term coined in biology, refers to the autonomous process of self-organization presumed to dominate organic processes. Similarly, the social world is presumed to consist of autonomous, or semiautonomous, processes of self-organization.

Social Order: Dangers, Risks, and System Boundaries

What social order emerges in advanced modern societies where technological and ecological risks are virtually everywhere? Luhmann (1990) argues that an important new feature to emerge within systems in the era of advanced modernity is the radical social construction of risk. Risk is not ontologically real in any way; it is entirely a construction by systems. A core feature of this construction is the distinction between risk and danger. Social systems and subsystems define the conditions for internalizing or externalizing "threats to health and the environment" from one system or subsystem to another. In Luhmann's terms, hazards perceived as external threats to a system are called "dangers," while hazards perceived to pose internal (and thus manageable) threats are called "risks."

This seemingly trivial semantic distinction embeds a deep theoretical demarcation in Luhmann's framing. It addresses the recurrent contradiction where people perceive human life as becoming increasingly riskier while other indicators of life chances, such as life expectancy, continue to improve in all affluent and even most developing countries. From this perspective, neither hazards nor "objective" threats have necessarily increased in terms of lives lost. Rather, social systems increasingly have internalized external threats—that is, dangers—and have thereby transformed

them into risks. The increased transformation of dangers into risks, via internalization, creates the impression that risks are increasing. But fates, acts of God, random events, catastrophes of all sorts, or capriciousness of nature remain as constructs for describing dangers. Similar to Beck and Giddens, Luhmann argues that the more social systems act to shape the future and influence fate, the more dangers are internalized and, axiomatically, the more risks are created. Danger is what people are exposed to; risk is what they have chosen to take. For example, climate—once thought to be a function of nature's caprice, a danger—is now viewed as significantly shaped by humans and is, therefore, a risk.

The increased transformation of dangers into risks poses serious system-level problems. With the rapid and uninterrupted growth in internalized dangers (risks), the orderly functioning of social systems is threatened. Owing to the growing diversity of knowledge claims and social norms, social systems tend to circumscribe the experience of their members within their own system, limiting their exposure to external systems. As a consequence, agency, too, is circumscribed. This exclusion function was fairly successful in the early phases of evolving modernization, which Luhmann (1984) interprets as a differentiation process from a single societal system to a plurality of systems. Mature modern—perhaps advanced modern—societies tend to expose people to a multitude of systems and their incompatible rationales. As a consequence, decision makers and managers at all levels of society are challenged to govern a multitude of knowledge claims and a multitude of moral systems. Yet they have no integrative system—or anything approaching a meta-system—at their disposal for dealing with the evolved plurality, diversity, and complexity that interfere with intersystem communication.
Governance consists of administrative and procedural rules for making intersystemic exchanges possible (Luhmann 1983). However, a reliance on procedure is effective only as long as the main strategic tool of modernization—dominant rationalities such as the efficiency of the market—is not challenged. Luhmann (1993) argues, however, that in advanced modernity, the diversity and plurality of systems have reached a point where the coexistence of competing rationales (generated by each system) within a society cannot be ignored. System agents have become more and more aware of not only the risks within their systems, but also the risks outside their systems (dangers) seen through the eyes of other systems (e.g., malnutrition in developing countries, financial risks in business failures, and risks of technological failure in disasters). Especially acute in this context are systemic risks, such as those inherent in a global financial system. This is another reason that many people believe risks are increasing when, on many objective measures, they are not.

The potential for a specific phenomenon to be constructed as a risk in one system and a danger in another not only impedes communication between systems but draws into relief the conflict over what are considered rational rules. Nuclear power illustrates this point. Shareholders of a nuclear power plant perceive the possibility of a nuclear

accident as an external danger, not a risk (they typically live far from the plant and trust that the manager-engineers running the plant know what they are doing). Simultaneously, these shareholders worry about a systemic risk, the risk of a stock market crash that could render their investment in the nuclear power plant worthless (Luhmann 1989). Typical residents around a nuclear plant may care less about the stock market, an external danger, but live under the fear that they may be exposed to radiation or other unacceptable risks.

Since those who create risks expose others to dangers, there is in Luhmann's perspective always the potential for tension between the risk takers and the risk bearers. Because this problem is real, it is independent of the perceived magnitude of risks or the (socially constructed) balance between risk and danger. The resultant dilemma is not the fault of risk governance institutions as, for example, claimed by Beck (1992c) and Giddens (1994a). Instead, systems and their managers are caught in the inevitable contradiction of devoting their energy to the governance of the increasing dangers that are internalized (now risks) while it is external dangers (risks in other systems) that people typically fear the most.

To deal with this systemic dilemma, risk creators (and, as a means for exculpation, public risk managers) use formal tools of risk analysis to justify and legitimate the acceptance of risks, while risk bearers use perceptions of dread and consequences, cultural interpretations, or competing professional judgments as a justification for rejecting dangers.2 Each line of reasoning is incompatible with the other, and the real problem is that no communication medium is available to reconcile these opposing positions on risk taking.3 As a consequence, the governance system is unable to resolve all risk conflicts. System functioning is thereby continually threatened, as is communication among systems.

Risk Knowledge: Radical Constructivism

What is our knowledge about these risks? Luhmann (1984) identifies himself as a radical constructivist. He does not deny the possibility that systems can produce "objective" knowledge about the world but points out that each observer is imprisoned in a social system that provides only constructed meaning, rationality, and identity. Indeed, even the notion of individuality or agency, anointed with primacy by the Enlightenment, by English-speaking social philosophy, and by some sociological theory, is an outcome of systems rather than a prime element in creating systems (Fuchs 2001). It is therefore impossible for any system to predict accurately the outcome of its activities beyond its own system boundaries, since the consequences of these activities are co-determined by the actions of other systems, a feature Luhmann refers to as double contingency (Luhmann 1982). Each system is thus trapped in a web of external (double) contingencies and uncertainties.

The increasing multiplicity of systems and system agents markedly increases the number of contingencies; there are more system actions that become contingent contexts for all systems. The challenge is exacerbated by the heightened degree of social and cultural complexity and differentiation within and among systems. Together, these evolutionary developments introduce additional uncertainties that produce conflicts between expected and experienced outcomes. This increased complexity and uncertainty necessitate a recurrent effort to coordinate actions.

This dilemma has been addressed in contemporary societies with the evolution of semiautonomous autopoietic systems that provide a network of orientations within each system's distinct logic (Bailey 1994; Fuchs 2001; Luhmann 1989). Systems organize and coordinate the necessary exchange of information and services through specialized exchange agents (i.e., actors) located at the border of a system that are capable of activating resonance in neighboring systems. One example may be public relations agencies that are fluent in the code of their clients (e.g., industry) and their audience (e.g., regulators). Depending on the cultural rules that systems adopt, different phenomena that impinge on systems are constructed or reconstructed as risks or as dangers. The best-known institutional example of this pattern is the insurance industry.

2. One of the most durable findings in the social science of risk is a consistent difference in the risk perceptions of laypeople versus those of experts (Slovic 1987).
3. "With respect to the risk perspective in the future," as Luhmann (1993: 159) put it, "neither consensual agreement on facts nor on values will be of any help; on the contrary they will further aggravate the conflict."
With fire insurance, the possibility of losing property to a fire is externalized to the party now managing the risk, the insurer, and this exchange is communicated through the medium of money.

Coping with the multiplicity of actors and with increased system complexity has produced a discernible trend toward reinforcing system-specific rationalities. In turn, this trend is reinforced by the disintegration of collectively approved and confirmed social knowledge. Each system produces its own rules for making knowledge claims about risk (Dietz and Rycroft 1987). These rules determine which claims are justified as factual evidence versus which claims are seen as procedural issues or as mere constructions, ideology, or even myths. Furthermore, system-specific rules govern the process of selecting those claims from an abundant reservoir of knowledge claims that seem relevant to system governance and that match the body of previously acknowledged and accepted claims.

In short, the demarcation between danger and risk provides Luhmann with a theoretical framework that is independent of the actual or objective threats to life or whatever else people may value. Risks are nothing more than those dangers that systems choose to internalize. Risk, therefore, is thoroughly a social construct. It is what systems believe they are able to produce, govern, manage, reduce, or manipulate, regardless—more or less—of the probability of occurrence or severity of outcome.


Institutional and Political Responses: Dialogic Resonance across Systems

How can societies develop the institutional and political means for governing and managing risk effectively? Luhmann is not especially optimistic about resolving the societal risk dilemma. Nor does he view reflexivity, as do Beck and Giddens, as a means for resolving it. People and agency, the linchpins of reflexivity, are outcomes of systems in Luhmann's framing, not foundational elements driving them. Human agency as a driver and governor of systems disappears in the very systems it serves. Nor does Luhmann advocate more democratization or better risk communication as the solution. The systemic divides between danger and risk and between risk creators and risk bearers, in the absence of an effective medium of communication, are insurmountable. For example, the terrorist risks imposed by a fundamentalist Islamic, Jewish, or other social system are incomprehensible to a secular Christian system of risk bearers. This ineluctable dislocation between systems poses a serious, perhaps irresolvable, challenge to risk governance.4

How does Luhmann assess the potential for resolving risk conflicts? As noted above, the challenge of governance at all levels is inevitably caught between systems that distinguish between danger and risk. Thus, systems are unable to provide any sensible rationale that will bridge the gap between them. Furthermore, purely legalistic or procedural approaches cannot resolve the issue as they did during the emergence of modernization, with its singular, overarching rationality. The only opportunity for a communicative exchange between systems is to provide opportunities for conveying the rationality of one system to the rationality of another. Luhmann (1989) is skeptical about the potential power of discourse to lead to a resolution. Nevertheless, he still advocates deliberative dialogue between the various systems.

Far from resolving or even reconciling conflicts, deliberative dialogue in this context has the potential to decrease the pressures for conflict, to provide a platform for making and challenging claims, and to guide the actions of governance. It will not ultimately resolve the fundamental tension, but it does relieve pressures. Deliberations help reframe the decision context, make the governance system aware of public demands, and enhance the legitimacy of collective decisions. Legitimacy is markedly enhanced by the mere presence of ostensible channels of communication (Skillington 1997). In this version of deliberation, reaching a consensus is neither necessary nor desirable. Deliberative actions
Far from resolving or even reconciling conflicts, deliberative dialogue in this context has the potential to decrease the pressures for conflict, to provide a platform for making and challenging claims, and to guide the actions of governance. It will not ultimately resolve the fundamental tension, but it does relieve pressures. Deliberations help reframe the decision context, make the governance system aware of public demands, and enhance the legitimacy of collective decisions. Legitimacy is markedly enhanced by the mere presence of ostensible channels of communication (Skillington 1997). In this version of deliberation, reaching a consensus is neither necessary nor desirable. Deliberative actions 4. Luhmann’s inability to visualize alternative systems, structures, or institutions, it should be noted, is not shared by other systems theorists. For example, the Uppsala School of systems theory has addressed the issue of parliamentary reform and other institutional responses in the context of postmodern technological risks (see, e.g., Andersen and Burns 1995).

108

Risk and Social Theory

represent opportunities for the open expression of system values and goals. For Luhmann, the process of communication, of systems talking to each other, of gaining empathy for other viewpoints, of exchanging arguments, and of widening system horizons is the extent of what a deliberative process can accomplish. It is an experience of mutual learning without necessarily a substantive outcome or governance solution. Systems theory is effective in providing explanations for system behavior when potential consequences are either uncertain or contested. However, it offers few clues to how contemporary social systems operating under vastly different schemes of rationality reach closure or collectively binding standards or decisions. Yet it is obvious that such decisions are made routinely. For example, virtually all nations reach conclusions about tolerable levels of health risk and establish regulations for enforcing standards. In addition, the regulatory agencies of the political system are forced to establish standards and rules that cut across a variety of social systems. Indeed, it is not uncommon to arrive at agreement across large, self-interested systems—for instance, nation-states that sign international treaties, such as the Bretton Woods treaty for regulating international financial transactions, and international laws, such as the law of the sea. Social systems select risks from multiple systems and conceptualize their commonality by defining the demarcation lines between dangers and risks. Autopoietic independence may indeed be the major goal of individual systems within a society, but the vitality and functionality of society as a whole rests on the effectiveness of intersystemic communication—a form of communication that is common and often effective. But Luhmann, of course, does not believe in crosscutting communication media. Each system is more or less autonomous and responds only to changes in its own communication mode (which Luhmann calls resonance). 
A communication mode such as physical data may be dramatic for one system (such as the human ecosystem) but have no resonance with another system (such as the economy), so long as the first system does not translate it into the operational code (such as money) of the latter system. Thus, the success of risk governance is measured either in terms of externalizing risks by exporting them to other systems (i.e., transforming them into dangers for the exporting system) or by successful translation into the respective codes of each system, leading to a social agreement or collective perception among systems that the remaining risks are controllable, manageable, or worth taking.

For Luhmann (1989), effective risk governance for the remaining internalized risks is nothing other than the successful translation of codes to gain dialogic resonance from one system to another. But he cautions that meta-criteria for creating resonance across all systems and regulating risks between them face the potential of permanent delegitimization. Reality demonstrates, however, that in spite of these systemic predictions, social organizations do communicate with each other and arrive at decisions that attract collective compliance (Renn 1999). The ability of pluralistic and socially differentiated systems to arrive at (and to carry out) collectively binding decisions is thus outside the

theoretical range of Luhmann's version of systems theory. Since systems can only engage in dialogue, and not fully communicate, according to Luhmann, an intractable gulf and tension always exists between them. In his grand theory, then, there can be no effective overall system of governance. Thus, to develop an effective system of societal risk governance, another theoretical perspective is required. That perspective, we argue, can be found in the theory of communicative action described in the next chapter.

6 Jürgen Habermas and Risk
An Alternative to RAP?

In essence, democracy implies that those vitally affected by any decision men make have an effective voice in that decision. This, in turn, means that all power to make such decisions be publicly legitimated and that makers of such decisions be held publicly accountable.
—C. Wright Mills, The Sociological Imagination (1959)

Risk as a Sign of Legitimation Crisis

The German theorist Jürgen Habermas has been remarkably silent about risk. He does set the groundwork for recognition of this central topic in Legitimation Crisis (1973), with his anticipation of the problem of climate change (“the limit of the environment’s ability to absorb heat from energy consumption”) and his recognition that population and economic excess are the key drivers of environmental degradation (“an exponential growth of population and production—that is the expansion of control over nature—must some day run up against the limits of the biological capacity of the environment”), propositions supported by later empirical evidence (Habermas 1975: 42; see also Rosa, York, and Dietz 2004; York, Rosa, and Dietz 2003). However, just as with the fields of ecological science and risk, where—like Hawthorne’s ships—there is barely any touching, Habermas never conceptualizes risk as an organizing concept or elevates it to a central element in his sociological thinking. Despite this, we argue here that Habermas’s version of critical theory (Habermas 1970, 1971), which he later named the theory of communicative action (HCAT; see Habermas 1984, 1987), provides an effective meta-framework and the operational conceptualization needed to harness risk theory to effective democratic management processes. The compass of HCAT is sufficiently broad to offer both a theoretical framework for understanding the risk society and a moral directive toward alternative solutions to the democratic governance challenge of the risk society. In this chapter, we first provide a broad overview of critical theory, into which we situate HCAT. We then describe the core elements of HCAT and elaborate on the intended insights that HCAT offers for democratic risk governance and the likely unintended insights that it offers for understanding three recurring paradoxes in the risk field.

Critical Theory and Habermas

Habermas developed HCAT partially as a complement to and partially as a critique of critical theory (Geuss 1981). Theodor Adorno and Max Horkheimer, the two major advocates of critical theory, had positioned themselves against the Weberian tradition of a value-free, nomological concept of sociology. Critical theory was inspired by a neo-Marxian concept of power and interest in society, by a combination of analytic and normative perspectives of the social sciences, and by the idea of enlightenment as a way to individual and social emancipation (Horkheimer 1982; Horkheimer and Adorno 1972). In the course of his scholarly life, Habermas distanced himself from the Marxian idea of dialectic development and based his universal belief in the potential of emancipation on the power of discourse as a means to resolve cognitive, expressive, and normative claims (Webler 1995). Critical theory, a body of thought consisting of an extended critique of modernity that emphasizes the contradictions and untoward consequences of advanced or late capitalism, is based on a set of normative assumptions that are the most explicitly teleological among sociological schools of thought.1 First, it argues that since there is no clear boundary between facts and values, the proper goal of theory is to unite scientific inquiry with political action within a single theoretical framework. Furthermore, Habermas (1970) does not eschew science but views it as one form of rationality, and like all forms of rationality, scientific knowledge is mediated by social experience—culture. In one contrast to Luhmann, Habermas emphasizes the importance of communication between systems in a public sphere. In another, he argues for empirical evidence as the rudder to correct the course of reason and theory that has gone astray.
Second, critical theory is based on a systems approach with an overarching rationality that bridges the partial rationalities of other theories and of the institutions found in a pluralist society. Critical theory suggests that, due to the decline of the Enlightenment belief in a universal rationality, new social norms and values need to be generated. The fundamental goal of these emergent elements of rationality is to provide collective orientations that do not conflict with personal aspirations and agency (Habermas 1968, 1970, 1989). The new form of universal rationality is communicative rationality. It embraces Weber’s two forms of rationality, instrumental and value rationality, but is not identical to them. Communicative rationality is the result of argumentation on the basis of claims according to rules that can be specified for different types of statements (speech acts, as Habermas calls them) or discourses (e.g., therapeutic discourse or normative discourse). The exchange of arguments can produce this overarching rationality in a discourse setting in which arguments count, irrespective of their sources or the power of the actors involved. Communicative rationality allows actors to reach consensus on the basis of mutual understanding of facts, values, experiences, and normative assumptions. Critical theory’s third important distinction is that it “is designed with practical intention: the self-emancipation of men [sic] from the constraints of unnecessary domination in all its forms” (McCarthy 1973: xviii). Habermas, for example, is concerned not only with developing a categorical picture of society—that is, a clear picture of how society functions—but also with a normative palette for creating a better society. He believes, in a continuation of Enlightenment thought, that this can best be achieved by rational means. As a result, critical theory provides guidelines for dealing with risk debates but not with risks per se. This aspect of his theory, derived from Habermas’s innovative conceptual approach to evolution—an abstract conceptualization of the transformation of normative structures, not a description of the patterning of historical sequences—lays out the principles of social organization for advanced modernity. Thus, while risk may be interpreted differently depending on the perspective of each actor, this does not preclude the possibility that all actors can enter into a rational discourse about risk (Habermas 1971). Such discourse takes place not in an arena for resolving conflicts about competing claims (as, for example, is practiced in conflict resolution models based on game theory), but in an arena for the establishment of commonly agreeable social norms or values—in Dewey’s public sphere (Renn et al. 1993; Tuler and Webler 1995; Webler 1999). Participants do not seek to maximize or optimize outcomes, such as profits, on the basis of their self-interest alone.

1. This stems from critical theory’s fundamental project: to develop a theory of the contemporary epoch that is guided by the goal of realizing a truly rational society and of realizing a genuine freedom.
Instead, all participants voluntarily agree to accept the quest for common principles for evaluating value claims and agree to comply with these principles via discourse. The basis of participation is the belief that such principles are intuitively valid, resonate with individuals as part of their social heritage, and prove to be convincing even in situations where self-interests might be violated. Critical theory, having developed out of a branch of revisionist, neo-Marxist political theory, continues with its aim toward transformative sociopolitical action by explaining how the emancipation of all oppressed people and groups can be achieved at all levels—that is, the political, social, and psychological levels (Forester 1985; Habermas 1975). In both extending and reorienting critical theory, Habermas assumes that people are rational, in the universal and substantive sense, and that they possess the uniquely human quality of communicative rationality. Citizens are quite capable of the rational assessment of their political worlds—and, by extension, of their risk worlds. But Habermas argues that the present capitalist economic system has commodified an increasing number of the lifeworld’s social relationships while the political system inherently suffers from a chronic shortage of public legitimacy, as the political system must take actions more in service to the power and monetary distributions
in society than in service to considerations of fairness or a rational balance between social benefits and risks (Habermas 1975). What was made clear in the twentieth century is that risks have emerged as dominating phenomena that demand political intervention and management. Decisions by the political system—based as they are on the exercise of power rather than, for example, a fairness doctrine—perpetuate an inequitable distribution of risks. The only viable solution to overcome this imbalance is to create a forum, a public sphere, for open discourse where all actors have the opportunity to argue their interests and resolve their conflicts in an equitable and rational manner. The process of discourse must be fair, transparent, and truthful (Webler 1995). Habermas’s (1975) critical theory orientation toward protecting and engaging all citizens shapes his HCAT, which is sometimes called the theory of communicative competence. Developed further by others (Apel 1992; Benhabib 1992; Brulle 1992, 2002; Renn and Webler 1998; Webler 1995), HCAT provides a blueprint for the self-emancipation of humans within advanced modernity. Habermas’s theory relies on a two-tier model of society. The core idea here is that the social world (society with a big “S”) consists of two separate domains: systems and the lifeworld (Lebenswelt).2 Two systems dominate capitalist societies in advanced nation states: the economy and the political system. The former’s medium of operation is purposive-rational action, and the latter’s medium of operation is power. The lifeworld comprises three separate spheres: culture, society (with a small “s”), and person. Its media of operation are language, interaction, norms, and public opinion. The dynamics of Society (its dialectics) lie in the interaction between these two domains (systems and lifeworld) and between economic and political systems and culture, strategic rationality and social rationality—each conditioning and limiting the other. 
The key premise that organizes our interpretation of Habermas is that his meta-framework could accommodate and should include, but was unmindful of, advanced modernity’s third key system: risk assessment and governance. Thus, Habermas adopts parts of Luhmann while rejecting other parts. He adopts the idea of system as the frame for explaining the structures and historical processes of Society. However, he rejects the idea that systems are coterminous with Society, that systems explain all of social life, and that systems have primacy over individuals. Instead, Society consists not only of systems but also of the lifeworld, and actors pass back and forth in their engagement with each domain, according to Habermas (1970), as economic activity or political action in the first domain and as interaction in the second. Furthermore, unlike Luhmann, Habermas does not see rationality in systems alone; nor does he dismiss individual action as irrelevant, irrational, and inefficacious. Instead, the individual is granted a very active role in shaping not only the lifeworld, but also the purposive rationality of systems. Thus, in contrast to Luhmann’s autopoietic systems theory, in which new norms or values to assess and evaluate risks are part of an evolutionary process remote from any individual’s voluntary influence, Habermas (1971) believes in the integrative potential of individuals engaging in a free and open discourse.

2. Lebenswelt, meaning the world of immediate, unreflected experience, is the foundational concept of phenomenology and can be traced to Edmund Husserl. Habermas (1970) traces the seeds of the distinction between systems and Lebenswelt to Aldous Huxley, Max Weber, the American pragmatist philosopher John Dewey, and his fellow critical theorist Herbert Marcuse. It is frequently identified with the public sphere, or as civil society by political theorists.

Core Elements of HCAT

The basic premise of HCAT is that people are capable of coming to a rationally motivated agreement (i.e., one that is free of coercion of any kind) if they are provided with procedural norms and optimal discourse space. Communicative acts are inherently social since they engage two or more speakers and listeners in a social relationship and are, when conducted in a proper discourse setting, fully dialogical—that is, all speakers are tuned in to one another. This theoretical setting, where actors can openly exchange ideas and arguments and critically reflect upon the input by others, was originally described by Habermas as the “ideal speech situation,” but is now referred to as “communicative competence” (Habermas 1970) and “unconstrained discourse conditions” (Habermas 1991b; see also the critical remarks in Warren 1993).3 The second element of HCAT is the medium for accomplishing communicative rationality. Here Habermas adopts speech-act theory as the template to develop the medium. Because the lifeworld is universal, transcending cultures, its foundation must also rest on universal principles. This is the core reason Habermas locates social rationality in language—a universal medium with universal rules of grammar, literally one of the defining features of what it means to be human. The speech act is communication’s elementary building block. Despite its expression in widely divergent ways, the speech act is conceptualized as the basic unit of language and of human action, is the key to understanding culture, and is the vehicle for expressing the subjective meanings individuals hold about themselves and the world. In every language, speech acts follow rules.
Because all speakers share the rules and engage in everyday communicative action, language is a universal skill and therefore, Habermas argues, the most promising vehicle for emancipatory communication.4 Rationality for Habermas does not reside in the reflexive social actor (as it does for Beck and Giddens); nor does it emerge from system logic (as it does for Luhmann). Instead, it crystallizes through open, interpersonal communicative discourse. The exchange of arguments can produce an overarching rationality in a discourse setting where arguments count, regardless of the power positions of the actors involved. The discourse setting provides a cultural context in which political issues, including ethical and normative ones, can be openly discussed—where discourse, to use Habermas’s terminology, takes place alongside factual, normative, or expressive claims. To be sure, factual evidence—scientific and other forms—is a central element of this process. Habermas believes that agreement reached in an unconstrained setting is free of the power considerations of political rationality and of the utility considerations of economic rationality. It therefore is free of all power differences. As a result, discourse is also free of the distortions associated with power and is therefore socially rational. This conveys Habermas’s absolute trust in human social rationality. The third element of HCAT is participatory pragmatism. Its goal is to serve the larger critical theory project of the self-emancipation of humans. Habermas, as already noted, is concerned not only with developing an evidentiary-based picture of society but also with a normative palette for creating a better society. He believes that this can best be achieved by developing venues for open communication that will produce socially rational decisions that are widely accepted. As a result of its broad meta-theoretical orientation, HCAT implicitly provides guidelines for dealing not with risks per se but with orienting perspectives on risk and risk governance. In short, it addresses the key piece missing from the frameworks of Beck, Giddens, and Luhmann. As a perceptual reality, risk may be interpreted differently depending on the perspective of each actor. But this does not preclude the possibility that all actors can enter into a rational discourse about risk and come to agreement about societal management choices (Habermas 1971, 1984, 1987).

3. Habermas (personal communication, June 3, 1996) regrets ever having used the term “ideal speech” and wishes he could retract it from his earlier writings, substituting the idea of a “thought experiment.”
4. Habermas shares with Noam Chomsky the supposition that all languages are structured in the same basic way, even if the sounds and rules differ from language to language.
Such discourse takes place, as noted above, not where dialogic techniques are already in place, but where commonly agreed on social norms or values are established (Renn et al. 1993; Tuler and Webler 1995; Webler 1999). Discourse participants do not seek to maximize or optimize outcomes solely on the basis of self-interest. Instead, all participants voluntarily agree to accept the rules of discourse with the goal of arriving at common principles for establishing and evaluating value claims and for complying with these principles via discourse. The basis of participation is the belief that such principles resonate with individuals’ lifeworld experience and have the capability to transcend situations in which personal interests might be violated. Building on the same foundation and the same process of rationalization that produced the Age of Enlightenment and the scientific revolution, Habermas has thus taken critical theory to a new locus by providing a blueprint for making progress toward the goals of the Enlightenment project of modernity. The normative foundation of all versions of critical theory is unshaken: people must be emancipated from the restrictions of constraining political organization if society is going to continue along and fulfill the evolutionary path that
began with the Enlightenment’s ideals. The good life and the liberated society are possible only if people are free to fully develop their rationalities.

Disenchantment of the Lifeworld

For Habermas, both domains (systems and lifeworld) are conceptualized, after Weber, to be disenchanted—rationalized. But there is a clear and distinct difference in the premises and action directives of each rationality. A strategic calculus dominates the systems domain. It is a calculus of instrumental rationality and control, a calculus based on the idea that actors seek to realize stated objectives by means of purposive action based on considered alternatives. Carlo Jaeger and his colleagues (2001) explicate the logic and history of this form of rationality and label it rational action as paradigm (RAP), which we addressed in Chapter 3. Under RAP, objectives and valued goals are projected and evaluated on the basis of some instrumentally rational calculus, the object being to maximize or optimize utility or to minimize some constraint or risk. Therefore, RAP not only subsumes the separate rationalities of economics and politics, the dominant systems of HCAT’s modernity, but can also accommodate emergent rationalities, such as risk optimization. In stark contrast, the lifeworld is dominated by a preoccupation with spontaneity, sociability, social bonds, and the development of the individual—important social values outside the calculus of individual gain. Indeed, its desiderata contradict the calculus of gain. The lifeworld requires form and continuity, which are the very requirements satisfied by culture—norms, tradition, and custom—not by repeated, intentional calculation. It is here that we underscore the key point of our argument. What Habermas missed in ignoring risk, our central point, is that risk assessment and management around the globe emerged to become another system based on the rational calculus of RAP. This evolution of the risk society has introduced a new tension between the systems world of calculated rationality and the lifeworld.
While the lifeworld has always embedded risk (a necessary condition of living at all), with the emergence of the risk society it faces the further encroachment of system rationality. Perhaps no better example of this exists than in the practice of placing economic value on a human life, such as the U.S. Environmental Protection Agency’s value of $6.1 million (see Ackerman and Heinzerling 2004).

Evolutionary Tension

Habermas’s distinction between systems and lifeworld is the logical outcome of his evolutionary orientation that argues that human Societies developed along two independent trajectories: a material one and a moral one (Habermas 1991a). The core idea, around which Habermas (1975) erects a wide variety of theoretical and moral issues, is the proposition that the lifeworld in late or advanced capitalism (again, advanced modernity in our terms) has been increasingly colonized. Colonization means inappropriately extending the boundary of one domain into other domains that are then ruled by the colonizer, such as extending economic calculation to the lifeworld. For example, a romantic couple would attract the lifted brows of surprise and opprobrium if they explained their romantic attachment as the result of a cost-benefit calculation or the outcome of market conditions. The colonizer in the late capitalism of advanced modernity is none other than the instrumental rationality of RAP. And as colonizer, RAP has not only entered the lifeworld but has also spread throughout it. Although it is well suited to economic and other instrumentally rational systems, RAP is an inappropriate and dysfunctional rationality for the lifeworld. As noted above, technological change plays a pivotal role in globalization and in the transition to advanced modernity. With the rapid growth in technology, a system element that represents one of the most embedded expressions of RAP—a “technocratic consciousness,” in Habermas’s (1975) terms—is extended to the lifeworld. Accompanying this consciousness, as theorized by Beck and Giddens but not Habermas, is the consciousness of a risk society. Habermas, following Herbert Marcuse (1964), understands the technocratic colonization of the lifeworld, describing it as the one-sided development of modernity. But he is puzzlingly unmindful of the ineluctable free rider in the technological vehicle: risk. Transported to the lifeworld, RAP and its technocratic consciousness trump normative standards with instrumental singularity. Normative standards apply not only to discourse participants but also to Society as a whole—or, at least, to institutions or large segments of the population. The extension of these standards to risk is straightforward.
Normative standards in the domain of technological and ecological risk include, for example, pollution restrictions, distributional impacts, exposure limits, and performance standards for technology. Norms and regulations apply to all potential emitters or users regardless of whether they were represented in the discourse. Technological consciousness, and its medium of RAP, it should be clear, represents a genuine impediment to spontaneous communication. It impedes the recognition or creation of shared cultural meanings. These considerations lead to a core challenge for Habermas (1970: 57), who writes, “Our problem can then be stated as one of the relation of technology and democracy: how can the power of technical control be brought within the range of consensus of acting and transacting citizens?”5 The logical extension is, “How can risk be brought under such control?” The arenas for staging unconstrained discourse conditions may be local and episodic, a response to local political issues—such as the local siting of a waste incinerator—that attracts attention or concern. Or the local staging may be due to the engagement of activists or sympathizers from a broader social movement, such as radical ecologists. Such broader social movements, of course, are often the outgrowth of successful local political action. Indeed, Habermas is especially sympathetic to grassroots actions because they are distanced from established institutions and emerge amid reflection and discussion about underlying values. Irrespective of the causal ordering between localized discourse settings and social movements, Habermas (1984, 1987) looks quite favorably on such movements—insofar as they serve to improve people’s communicative competencies for rational discourse. For Habermas (1981), “new social movements” (such as environmentalism, feminism, and post-colonialism), because they are premised on the ethical principles of undistorted rational communication, offer the best hope for a rational democracy. Habermas’s (1987) core idea, developed above, provides a theoretical basis for the worldviews and political actions of these movements. The movements emerge from the dialectical tension between system and lifeworld and are motivated toward resisting the colonization of the lifeworld by RAP. These new social movements distance themselves from traditional social movements in their disdain of class-based, economic motivations, emphasizing instead principles of universalism, democratic process, and opposition to bureaucracy. In short, moral values (not RAP or power) and non-strategic forms of motivation lie at the heart of these movements.

5. Risk, as frequently noted, is not explicitly stated here or elsewhere by Habermas, but this proposition would continue to serve HCAT if the word “risk” was substituted for “technology.”

Inadvertent Insight?

An immense achievement of Habermas’s formulation of communicative competence is that, despite his ultimate intention to establish a moral and workable framework for the self-emancipation of humans via direct action, HCAT provides a theoretical answer—perhaps unwittingly—to three recurring paradoxes in the risk field. The three paradoxes have dogged the risk assessment community since its institutional crystallization in the past three decades. The first, the contradiction between “objective” conditions and “subjective” feelings, is addressed by Luhmann’s distinction between danger and risk. Nevertheless, Luhmann can only lay out the problem, not address it effectively. In contrast, neither Beck nor Giddens even recognizes the three paradoxes. Hence, to date, each paradox, where recognized, has been explained only ad hoc, if at all, and there is no theoretical framing that addresses all three paradoxes simultaneously. Habermas doubtless would trace these three paradoxes to the continuing tension that arises as instrumental rationality (the principle for assessing technological and environmental risk) and its calculating consciousness colonize the lifeworld. First, Habermas provides a theoretical, agency-preserving understanding of the asymmetry between the improvement in the objective conditions of life and people’s belief that the world is getting much riskier. This is explained by the tension between the systems rationality of RAP (with its objective numbers showing a steady increase in life expectancy) and the lifeworld rationality that is more concerned with other threats to life and to the quality of life and cultural meaning. In the lifeworld, people worry not about statistical averages but about the scale and types of death (e.g., due to a catastrophe versus a chronic, prolonged illness), the existential threat to their ontological security, the buildup of societal vulnerabilities, and the increase in experienced inequities in the distribution of risks. Second, HCAT provides an explanation for the incommensurability between low-probability, high-consequence events and high-probability, low-consequence events. People perceive these two forms of risk asymmetrically. They are almost always more concerned with low-probability, high-consequence risks (such as airplane crashes, in which few people die each year) than with high-probability, low-consequence risks (such as automobile crashes, in which many people die each year) (Renn in press; Walker and Covello 1984). According to RAP calculations, the expected value (probability multiplied by consequence) of the low-probability, high-consequence risk is much lower than that of the high-probability, low-consequence risk. The response that people are more concerned with the former type of risk confounds the instrumental rationality of RAP, but it is perfectly understandable from a lifeworld rationality. Instrumental rationality emphasizes the probability of untoward outcomes and the total number of potential casualties, whereas lifeworld rationality emphasizes the means by which those casualties occur, the deep images of disaster they conjure, and their impact on the social order of things. Put simply, in the lifeworld, unlike in the RAP world, not all deaths are equal. Furthermore, from an administrative standpoint, the paradox also highlights the dilemma of whether to place governance emphasis on the former (an application of instrumental rationality) or the latter (a lifeworld rationality).
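The arithmetic behind this second paradox can be sketched in a few lines. The figures and the function below are purely hypothetical, chosen by us only to illustrate how a RAP-style expected-value calculus ranks the two kinds of risk; they are not actual accident statistics.

```python
# Minimal sketch of the RAP expected-value calculus; all figures are
# hypothetical and illustrative only.

def expected_fatalities(events_per_year: float, deaths_per_event: float) -> float:
    """RAP-style expected value: frequency of the event times its consequence."""
    return events_per_year * deaths_per_event

# Low-probability, high-consequence risk (e.g., a rare catastrophic crash):
rare_catastrophe = expected_fatalities(events_per_year=0.01, deaths_per_event=250)

# High-probability, low-consequence risk (e.g., frequent small accidents):
frequent_mishaps = expected_fatalities(events_per_year=2_000, deaths_per_event=1.2)

# Under RAP, the frequent low-consequence risk dominates by orders of magnitude,
# even though lifeworld rationality weights the rare catastrophe far more heavily.
print(rare_catastrophe)   # 2.5 expected deaths per year
print(frequent_mishaps)   # 2400.0 expected deaths per year
```

The point of the sketch is simply that the RAP ranking follows from multiplication alone, while the lifeworld ranking depends on qualities (catastrophic potential, dread, inequity) that the product of two numbers cannot capture.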
The tension explicated by Habermas between the two rationalities, system and lifeworld, also explains a third, repeated paradox. In certain risk decisions, such as the siting of waste facilities (including high-level nuclear waste), empathy for future generations (intergenerational welfare) is salient and of deep concern to citizens (cf. Norgaard 1994). Again, this contradicts the calculus of RAP that would place the locus of concern on goals of the individual decision maker and on the immediate time horizon, not on the long-term interests of the unborn.

Conclusion

A key characteristic of advanced modern societies and of the process of globalization that interconnects these societies is risk. Risk occupies a central presence in advanced modernity, and its ubiquity is evident with widespread concerns about a range of hazards, from the dramatic (terrorist attacks) and the complex (nuclear power) to the simple and rapidly expanding (cellular telephones). Social theory holds promise not only to describe and explain fundamental social processes such as these, but also to outline the consequential dislocations and paradoxes they engender while pointing to the pressures on the political system and the resources available for tackling them. Even more important, theory can specify the grand management challenges that must be met to ensure the sustainability and perpetuity of societies. Habermas’s two-step analysis, from a diagnosis of RAP’s growing infringement on the lifeworld to the recommendation for renewed rational discourse, rests on his fundamental faith in language and its universality as a medium for rational interaction. Hence, he theoretically reintegrates the social actor not only in Beck’s context of subpolitics but also in a context of active, rational, and normative content. Furthermore, Habermas identifies the broad context (the lifeworld) and the medium (open communication) for rational discourse and action. The net result is a meaningful framework both for better understanding the world of advanced modernity and for rationally addressing its dominant dislocation—the grand challenge of developing effective governance systems for global risks. As with the risk theories of Beck, Giddens, and Luhmann, there is little doubt that HCAT is incomplete with respect to a thorough sociological analysis of risk.6 Whatever the reasonableness of Habermas’s communicative theory, it offers little guidance for explaining mechanisms of selection for public participation, for introducing and interpreting competing pieces of evidence, for aggregating competing preferences, or for selecting the specific types of arenas where discourse should take place. Because of the episodic nature of social movements, discourse on a large scale is consequently intermittent and unpredictable, not a regular process with the capacity for addressing the growing number of challenges to risk governance. Nevertheless, Habermas does establish the foundation to guide these details of action—sufficiently so that there are a number of efforts to establish the procedures and arenas for his prescribed communicative discourse (e.g., Renn 1999, 2004, 2008c; Renn et al. 1993; Webler 1999).
While still experimental and yet to attain all of the conditions outlined by Habermas, these efforts are guided by, consistent with, and revealing of Habermas’s evolutionary framework.

6. Taken together, all four theories reviewed offer no prescription for superseding global constraints, such as tribalism, ethnic conflict, post-colonial divisions, or, possibly, transnational conflicts. These omissions are either potentially fatal to all four or signal the need to specify the scope conditions (e.g., only societies experienced with democratic processes) of each theory.

III
Risk Governance
Links between Theory and Strategy

7
The Emergence of Systemic Risks

If a man will begin with certainties he shall end in doubts, but if he will be content to begin with doubts, he shall end in certainties.
—Sir Francis Bacon, Of the Proficience and Advancement of Learning, Divine and Humane (1893)

As we pointed out in Chapter 2, contemporary risk governance is challenged with a new category of risks: systemic risks. The most obvious example of such risks was the world financial crisis of 2008, in which the entire global financial system nearly collapsed. It all began with a crisis of subprime mortgage delinquencies. Early warnings indicated that the market was inflating and that the expectations of ever-rising real estate values were based on weak assumptions. Steven Schwarcz (2008: 78) provided one warning on the verge of the crisis: “the recent subprime mortgage meltdown is undermining financial market stability and has the potential to cause a true systemic breakdown, collapsing the world’s financial system like a row of dominoes.”

George Kaufman and Kenneth Scott (2003) provide a widely cited definition of systemic risk. While they define systemic risks in the context of financial systems, their definition is robust enough to accommodate much broader systems, such as the global climate: “Systemic risk refers to the risk or probability of breakdowns in an entire system, as opposed to breakdowns in individual parts or components, and is evidenced by co-movements (correlation) among most or all parts” (Kaufman and Scott 2003: 372). It is the totality of the threat—the probability that the entire system can collapse—that distinguishes systemic from other types of risk.

The term “systemic risk” has been used not only for characterizing financial risks and their repercussions on the world economy but also for health and environmental risks (Brigg 2008). The main features of systemic risks are ripple effects beyond the domain in which the risks originally appear and the threat of a multiple breakdown of important or critical services to society (De Bandt and Hartmann 2000). The main problem is that it is often difficult to predict when a system

will suffer a breakdown or collapse. Threats to entire systems, such as climate change, may be hidden in small, incremental effects that provide no hint about when thresholds have been reached. Or a collapse may occur through a domino effect, in which a small glitch affects multiple elements within a system, or even multiple systems in parallel, thereby amplifying the overall risk (Renn et al. 2002).

The recent emergence of systemic risks prompts the question “What are the main drivers of systemic risks?” The answer is short but complex. Key drivers of systemic risks are major structural changes in society, including

• Increases in population and population density.
• Increased consumption threatening the climate and world resource system (Rosa, York, and Dietz 2004).
• Increased encroachment onto hazard-prone land for residential and productive uses (Rundle, Turcotte, and Klein 1996).
• Increased interdependence among and between technical, social, and cultural hazards, leading to multiple and often nonlinear interactions that make predictions particularly difficult (Klinke and Renn 2000).

Interacting with these drivers are changes in the social definition of what is regarded as detrimental or hazardous (Freudenburg 1993), accelerated by the growing diversity of lifestyles and subcultures within and across societies (Sklair 1994). Also, larger populations increasingly concentrate in geographic areas exposed to more natural hazards and technological risks. Forty of the fifty fastest-growing urban centers in the world are located in earthquake-endangered areas (Rundle, Turcotte, and Klein 1996). Figure 7.1 displays the trend in financial costs for major natural catastrophes between 1950 and 2010.
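The domino mechanism just described, a small glitch propagating through interdependent elements until the whole system fails, can be made concrete with a toy simulation. The Python sketch below is our own illustration, not a model from the risk literature; the network, the `buffer` parameter, and all numbers are invented for demonstration.

```python
# Toy "domino" cascade in a system of interdependent units (banks,
# plants, farms). Each unit depends on a few others and fails once the
# share of its failed dependencies exceeds its buffer. All parameters
# are illustrative, not calibrated to any real system.
import random

def cascade(n_units=50, n_links=4, buffer=0.30, seed=1):
    """Fraction of units that have failed once the cascade settles."""
    rng = random.Random(seed)
    # each unit depends on n_links randomly chosen other units
    deps = {i: rng.sample([j for j in range(n_units) if j != i], n_links)
            for i in range(n_units)}
    failed = {0}                      # one small initial glitch
    changed = True
    while changed:                    # propagate until no new failures
        changed = False
        for i in range(n_units):
            if i in failed:
                continue
            exposure = sum(d in failed for d in deps[i]) / n_links
            if exposure > buffer:     # buffer exceeded -> unit fails
                failed.add(i)
                changed = True
    return len(failed) / n_units

# thin buffers let one local glitch spread; generous buffers contain it
for b in (0.10, 0.30, 0.90):
    print(f"buffer={b:.2f}  failed share={cascade(buffer=b):.2f}")
```

In runs with thin buffers the failure tends to become system-wide, while generous buffers keep it local; that qualitative contrast is the signature of Kaufman and Scott's co-movement criterion, since in a collapse most or all parts fail together.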
The geographic shift above has reshaped a variety of systemic risks, creating a dramatic increase in financial losses and loss of human life, as well as a serious erosion of natural capital and services due to natural disasters over the past three decades, as seen in Figure 7.1. The estimated number of fatalities caused by the ten largest natural disasters from 1980 to 2010 is 1,089,570, according to the German insurance company Munich Re (2011; cf. Organization for Economic Cooperation and Development 2003). Furthermore, anthropogenic climate change and other human interventions into geochemical cycles are expected to increase the intensity of hazards in the near future (Intergovernmental Panel on Climate Change 2001).

[Figure 7.1  Great Natural Catastrophes Worldwide, 1950–2010: Overall and Insured Losses with Trends, in U.S.$ bn at 2010 values (Source: Munich Re 2011; reprinted with permission)]

Traditional risks have obvious negative physical effects. But those effects are typically bounded. A fire, for example, may destroy a school, which could lead to the direct loss of the facility and to the interruption of the affected children’s education. However, in an age in which fires are prevented from consuming entire cities, the impact of almost any blaze is likely to be limited. When fire breaks out at a school, safety equipment, sprinklers, and routine fire drills (some of the basic tools of conventional risk management) are likely to be effective. With appropriate safeguards in place, the odds are minimal that lives will be lost or even that anyone will suffer serious physical harm. What is more, the economic cost is almost certain to be limited by insurance claims and contingency budgets, while disaster planning probably means that the lives of teachers and students are disrupted for no more than a few days. Systemic risks are vastly different, even incomparable.

From another angle, systemic risk denotes the embeddedness of any risk to human health and the environment in a larger context of social, technological, financial, and economic risks and opportunities (Renn 2008c). Systemic risks sit at the crossroads between natural events; economic, social, and technological developments; and policy-driven actions, at both the domestic and the international level. Furthermore, systemic risks contain three recurring features that make them difficult to understand and to govern—complexity, uncertainty, and ambiguity (Organization for Economic Cooperation and Development 2003; Renn et al. 2002: 11)—which we will address more thoroughly in Chapter 8.

As we have pointed out, the key defining feature of a systemic risk is a threat not just to the individual units that make up a system but to the entire system or its processes. An example of a systemic technological risk is the “normal accident” analyzed by Charles Perrow (1999), in which the unanticipated interactions of a technological system’s components, its individual units, can bring the entire system down. Perrow’s paradigmatic example is the nuclear power plant.

Another key characteristic that sets systemic risks apart from conventional risks is that their negative physical effects (sometimes immediate and obvious, but often subtle and latent) have the potential to trigger severe ripple effects outside the domain in which the risk is located (Organization for Economic Cooperation and Development 2003; Renn 2008c: 137ff., in press). When a systemic

risk becomes a calamity, the resulting ripple effects can cause a dramatic sequence of secondary and tertiary spinoff effects (Kasperson et al. 2003). They may be felt in a wide range of seemingly divergent social systems, from the economy to the health system, inflicting harm and damage in domains far beyond their own. A commercial sector, for example, may suffer significant losses as a result of a systemic risk, as we witnessed in the financial crisis in the aftermath of the collapse of Lehman Brothers in 2008. Even fairly healthy financial institutions were negatively affected, and in the end, taxpayers had to pay the bill for the reckless behavior of a few.

Another example is the bovine spongiform encephalopathy (BSE) debacle in the United Kingdom, which affected not only the farming industry but also the animal feed industry, the national economy, public health procedures, and politics itself (see Wynne and Dressel 2001; Renn and Keil 2009: 350). People refused to eat British beef, regardless of the tangible evidence that showed little danger to their health or safety. The official inquiry of the U.K. government, summarized in the so-called Phillips report, stated, “The Government took measures to address both hazards [mad cow disease and Creutzfeldt-Jakob disease]. They were sensible measures, but they were not always timely nor adequately implemented and enforced” (Philips, Bridgeman, and Ferguson-Smith 2000: xvii–xviii).

A further, telling example was the massive outbreak of foot and mouth disease, which infected entire herds of U.K. sheep in 2001. The epidemic had a devastating impact on many farms, both commercial and family, across the length and breadth of the British Isles (Anderson 2002). However, in contrast to a similar outbreak in the 1960s, the impact of foot and mouth disease was felt far outside the agricultural community because of ripple effects.
The replacement of local abattoirs with central facilities had transformed a loosely coupled system into a tightly coupled one, which meant the disease spread with lightning speed (Her Majesty’s Stationery Office 2001). Because of the disease’s aggressive ability to infect, the public was discouraged from traveling in rural England. Restrictions went as far as prohibiting people in rural communities from using footpaths that had been traversed for centuries. Citizens avoided journeys to the U.K. countryside. Foreign visitors did not go to Britain at all. The travel and hospitality industry suffered enormously as a result, as tourist accommodations stood empty at the height of the holiday season, transatlantic flights crossed the ocean at half capacity, and tour operators went into bankruptcy (English Tourism Council 2002). In the modern world, foot and mouth disease rippled to produce immense secondary effects (Sheehy et al. 2002).

The outbreak gave rise to another insidious and all-too-common ripple effect: the amplification of risk (Renn et al. 1992). There was widespread loss of public confidence and trust in institutions that have a responsibility, implied or explicit, for risk management. Even as the outbreak was being brought under control, the handling of the crisis by the U.K. government—particularly the efficacy of the Department for Environment, Food, and Rural Affairs—was being called into question. Local councils came under fire for unnecessary closure of trails and footpaths, while the Department for Culture, Media, and Sport was

criticized for doing too little to help the travel sector or to persuade potential visitors that Britain was safe. The entire incident cast a black shadow over the U.K. government’s risk management capacity. That shadow remains to this day, serving to illuminate starkly not only the challenge of managing amplified ripple effects effectively, but also the dangers of failing to do so (Anderson 2002).

As the foot and mouth disease example illustrates, the ripple effects of systemic risks go well beyond an amplification of the physical impact of conventional risks on people, property, or the environment (Burns et al. 1993; Kasperson et al. 1988). Likewise, management of these risks involves much more than simply mitigating negative physical outcomes. Instead, the systemic risk perspective on ripple effects focuses on the interactions among different risk domains and different systems. It is concerned with how risk outcomes may move across boundaries, including the intangible boundaries between domains, across cultures, and across economies, and the political boundaries that define and divide nations. Systemic risks are frequently transboundary risks that cannot be contained within a political territory. They often have a simultaneous negative impact on many territories and countries.

Another crucial feature that sets systemic risks apart from conventional ones, as already noted, is the trio of knowledge and governance challenges: complexity, uncertainty, and ambiguity (Klinke and Renn 2002; Renn 2008c; Renn, Klinke, and van Asselt 2011). These multiple problem characteristics arise partly because of the risks’ potentially wide-ranging transdimensional and transboundary effects, themselves the product of today’s more complex, more interconnected world.
The three characteristics are at the heart of the systemic risk challenge: if a risk can be grasped and simplified, if any uncertainties about its nature or impact can be eliminated, and if the wider public can be assured that the risk is benign or controlled, then the risk would cease to be a systemic risk. It would move into the domain of conventional risks. Unfortunately, it is rarely possible to achieve these goals, for complexity, uncertainty, and ambiguity are often deeply embedded within systemic risks.

Systemic risks pose additional challenges because they are not amenable to the reductionism of the standard risk assessment model. They require a more holistic approach to hazard identification, risk assessment, and risk management, because systemic risks are complex: it is difficult to trace the connections between causes and effects. Instead, the analysis must focus on interdependencies, ripple and spillover effects, and other nonlinear dynamics that initiate effects cascading between otherwise unrelated risk domains (Horlick-Jones and Sime 2004).

Governing systemic risks presents specific and unique challenges, which are magnified by the reality that systemic risks vary considerably across and within systems; no two are exactly alike. While each shares the features common to the definition of systemic risk, the characteristics of individual risks within a domain vary dramatically. Since the risks are inherently different, they require fundamentally different governance approaches (Organization for Economic Cooperation and Development 2003; van Asselt and Renn 2011).

A critical component of the effective management of systemic risks is the simple realization that the risk manager requires a different set of decision-making tools because of the inherent problems of complexity, uncertainty, and ambiguity. In practice, conventional approaches should be abandoned.

In Chapter 9, we describe a new approach for addressing systemic risks using, as a basic reference, the risk governance framework proposed by the International Risk Governance Council in Geneva (International Risk Governance Council 2005, 2007). This framework provides guidance for developing comprehensive assessment and management strategies to cope with systemic risk. The framework integrates scientific, economic, social, and cultural aspects and includes a disciplined scheme for the engagement of stakeholders. It introduces three decision-making strategies to fit different types of risks. The strategies—probability-based, resilience-based, and discourse-based—correspond to the three problem characteristics (complexity, uncertainty, and ambiguity). The scheme incorporates different concepts to complement classic decision-making phases, such as selecting objectives, assessing and handling data, and finding the most appropriate procedure for balancing pros and cons (Renn 2008c). A crucial element of this governance approach is the integration into the regulatory framework of analytic-deliberative processes, a term introduced to the risk community by Paul Stern and Harvey Fineberg (1996) and now widely adopted in the democratic governance of risks.

The robustness of the governance framework is consistently tested by an increasingly multicultural world.
When a new virus—such as severe acute respiratory syndrome (SARS) or the avian flu—spreads quickly around the world by hitching a ride with travelers, when protesters in one country oppose the development of nuclear generation facilities in another, and when sulfur dioxide emitted in one country kills trees in another, the ability of systemic risk to cross cultural as well as political boundaries is clearly put to the test.

This adds complexity to the governance of systemic risks, since significant cultural and political differences, as well as similarities, cloud the risk perception mechanisms and processes that prevail within cultures. The cloud thickens for cross-cultural, transboundary risks. These differences shape variations in risk perception. Individuals in divergent cultures may develop very different ideas about what is a risk, what is not, and what to do (Boholm 1998). This culturally incubated divergence in perceptions creates an important issue for the governance of systemic risks, which have no more respect for cultural boundaries than they do for political, social, or economic ones.

Cross-national studies show that risk perception is governed both by primary factors, the quasi-universal evolutionary traits inherent in all of us, and by secondary cultural factors, the idiosyncratic, culturally defined characteristics that vary from culture to culture (Hofstede 2001; Rohrmann and Renn 2000). Dread, catastrophic potential, and familiarity are core drivers of the primary universal mechanism of risk perception, while historical events; social and cultural experiences, such as trust in institutions; personal and social value commitments; and the socioeconomic status of an individual are key drivers

of the secondary mechanisms (Rosa, Tuler et al. 2010). Governance of systemic risks requires a special awareness of these perception mechanisms and processes and of how they will affect responses to risk and risk management decisions. This awareness is critical to effective governance.

As the modern world savors its advances and luxuries, it grows ever more fearful of the potential threats that accompany them. Systemic risks are now a troubling addition to the already sizable repertoire of risks. They are real; they appear time and again, as headlines show; they present intractable challenges; and they demand a new form of governance. Well-informed action must race to keep up with the breathtaking progress that characterizes the brave new world we live in, and with the new dangers, both obvious and hidden, within it. Addressing systemic risks effectively calls for direct action on the key social and political features that define and activate systemic risks. It requires a systematic process for addressing the underlying dimensions of risk: complexity, uncertainty, and ambiguity. Such a process is developed in the chapter that follows.

8
The Three Companions of Risk
Complexity, Uncertainty, and Ambiguity

Scientists tend to ignore academic philosophy as an empty pursuit. Surely, any intelligent person can think straight by intuition. . . . I deplore the unwillingness of scientists to explore seriously the logical structure of arguments. . . . Many great theories are held together by chains of dubious metaphor and analogy.
—Stephen Jay Gould, Ever since Darwin (1977)

The Three Major Challenges of Risk

As pointed out in Chapter 7, three major characteristics—complexity, uncertainty, and ambiguity—are inherent in all decisions in which risks play a key role. The three challenges are particularly relevant for new threats, especially systemic risks (Klinke and Renn 2002, 2010; Organization for Economic Cooperation and Development 2003). These characteristics confound conventional risk management, as demonstrated in the U.K. bovine spongiform encephalopathy (BSE) crisis of the 1990s (Wynne and Dressel 2001). In the BSE case, complexity, uncertainty, and ambiguity converged to transform a comparatively simple issue of public health and farm policy into a complex political and social challenge with repercussions in far-reaching circles, including international trade, social welfare, and government credibility (Cummings 2010: 57ff.).

The first task in characterizing this risk was to confirm that a new variant of Creutzfeldt-Jakob disease was the cause of the fatal human illness, which eventually was traced to the transmission of BSE to people through the consumption of beef infected with the disease. It also was necessary to determine the cause of BSE in cattle so the spread of the devastating ailment could be halted. In this case, scientific uncertainty was enhanced by the complexity of the issue, and both uncertainty and complexity created the opening for the high degree of ambiguity that led to social and political controversy and high social amplification of the risks to humans and farm animals.

Uncertainty is a state of indeterminacy between cause and effect. For BSE, commonly known as

“mad cow disease,” the indeterminacy became clear when early empirical evidence was insufficient for scientists to establish causal relationships between the disease and human illness. Mad cow disease was a certainty, but its complex connection to Creutzfeldt-Jakob disease was uncertain, creating an ambiguous context for decision making. People in the United Kingdom were reluctant to give up their steaks and Sunday roasts while the uncertainty lingered. To highlight his confidence in British beef, former U.K. Agriculture Minister John Gummer went as far as to force his daughter to eat a hamburger on television, even as causal uncertainty remained in the scientific community. And farmers, understandably, were reluctant to slaughter their herds. Despite this public confidence, concern lingered for some years among U.K. authorities, as did suspicion of British beef in other countries. Years after BSE was brought under control, questions continued in Parliament, and concerned trading partners continued to refuse imports of U.K. beef.

The compound effect of complexity, uncertainty, and ambiguity is a risk governance problem that at first glance may appear to present a Gordian knot of risk. Untangling the divergent challenges inherent in systemic risk often appears insurmountable. Again, scientific uncertainty lays the groundwork for these challenges.

The potential dangers of electromagnetic fields (EMFs) provide another clear illustration. Multiple studies have failed to show any statistically significant harmful health effects arising from the radiation that seeps from mobile telephones into the brains of their users (World Health Organization 2007, 2011). However, other studies show benign effects, while still others claim to show a definite link between the use of mobile phones and the development of fatal brain tumors (Baan et al. 2011; International Agency for Research on Cancer 2011).
The scientific uncertainty over cause and effect leaves the scientific community with a troubling ambiguity while complicating the governance and management challenge for decision makers (MacGregor and Morgan 1994; Ruddat and Sautter 2005). In many ways, the mobile phone issue parallels the early development of the BSE case. If society hopes to cope with systemic risks such as BSE and EMFs, it must come to grips with the challenges posed by these three characteristics of systemic risks.

What Are the Three Challenges?

Complexity

Complexity refers to a type of uncertainty in which it is difficult to identify and quantify causal links between a multitude of potential causes and specific adverse effects (Renn 2008c: 75–76). The complexity stems from interactive effects among causes (synergisms and antagonisms, positive and negative feedback loops), long delay periods between cause and effect, inter-unit (individuals or collectivities) variation, intervening variables, and other dynamic disturbances. These include multiple possible causes occurring together (National Research Council 1996). Links do not follow the neat chain of events between a

cause and an effect in a well-defined functional relationship (as, for example, in car accidents or in overdoses of pharmaceutical products). Applying such simple statistical models to systemic risks burlesques risk assessment. Where complexity camouflages the pathways of causality, sophisticated models of probabilistic inference are required (Renn and Walker 2008), as are other tools of analysis (Rosa et al. 2011). It is precisely these complexities that, in some circumstances, make sophisticated scientific investigations necessary, since the cause-effect relationships are neither obvious nor directly observable. In other circumstances, where the scientific findings remain obdurately ambiguous, the available science becomes but one input among other analytic tools.

A controversial example is the effect known as Gulf War syndrome. William J. Rea of the Environmental Health Center in Dallas (quoted in Gilbert 1995: 3) ascribes the syndrome to the combined effect on soldiers of their exposure to toxins from a variety of sources:

There were the oil fires. There were injections of different kinds. There were pesticides. Not just in the uniforms. In the tents. There was nerve gas. There was water containing plastics. The jugs were heated up in the desert heat, facilitating the infusion of plastic into the water. . . . The food, the K-rations, too, contributed because the food sat in plastic for a long time (in the desert heat). There were the diet sodas which were also heated in the desert. The guys worked with trucks and tanks and got a lot of exhaust fumes. The showers used water which was contaminated from the tankers which brought the water in. Also, they painted camouflage on equipment with toxic paints. Additionally, there was lots of mold there. And there were chemicals from Iraqi bodies which some troops handled. Maybe the bodies gave off fumes from the oil fires. There were scuds [missiles], which possibly gave off nerve gas. The portable generators gave off a lot of fumes and the sailors slept next to fuel tanks. Desert sand, too, can cut the lungs.

Long delay periods between cause and effect (as with latent diseases) are another source of complexity in the risk analysis process. Most cancers follow this pattern. For example, asbestos was widely believed to be a relatively benign substance until the 1930s and 1940s, in part because lung cancer and the fatal asbestos-related cancer mesothelioma began to appear only decades after the initial widespread exposure of hapless workers to massive daily doses of the mineral. However, by the time the dangers of asbestos were widely known, the substance was ubiquitous, and it took fifty more years for the risk to be controlled. As Jorma Rantanen (1997: 2; cf. Rantanen 2008) of the Finnish Institute of Occupational Health states, the suggestion of a global ban came—astoundingly enough—eighty-five years after the first sign of trouble:

Asbestos has been mined and used at least for 2500 years. The earliest reports of chronic health hazards were given 90 years ago, and the systematic epidemiological studies on lung cancer and mesothelioma were published in the 1930s and 1940s. These early findings did not affect the asbestos production or use; the first system-wide actions for reduction of the use were initiated about 30 years ago in most countries and implemented in full scale as late as in the 1970s and 1980s. This tells about a remarkable delay (30–50 years) between the early research findings of adverse health effect of a substance and an effective regulatory and control policy. Although fibrogenic properties were recognized already in 1902, it took 85 years of this century before the International Labour Organisation was able to agree on International Asbestos Convention No. 162 and Recommendation No. 172 in 1986.

Variations among individuals, intervening variables, and many other complications also may add complexity to a risk analysis. When emergent risks are examined, the cause-effect relationships are rarely obvious or directly observable. Such complexities make sophisticated scientific investigation a necessary and fundamental part of the assessment of systemic risk, including systemic assessment procedures and the incorporation of new mathematical tools such as nonlinear regression, portfolio analysis, and fuzzy set theory (Aven 2003; Aven and Renn 2010). However, even the most sophisticated mathematical tools will not solve all puzzles of complexity. At the beginning of an analysis there are always human assumptions about causal relationships, which are tested against competing hypotheses. If these assumptions neglect a cause or major impact factor, or fail to recognize relevant intervening variables, the models will fail to illuminate the complex structures of causes, intervening agents, and effects. That failure can only lead to ineffective governance.
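The danger of neglecting a cause can be illustrated numerically. The following sketch is our own construction, with invented data: a hidden factor `z` drives both a suspected cause `x` and an observed effect `y`, so a model that omits `z` sees a strong correlation and may mistake it for a causal link.

```python
# Illustrative only: a hidden common cause z makes x and y move
# together even though x has no effect on y at all. A model that
# neglects z would misread the correlation as causal.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
z = [rng.gauss(0, 1) for _ in range(5000)]        # hidden common cause
x = [zi + rng.gauss(0, 0.5) for zi in z]          # "suspected cause"
y = [zi + rng.gauss(0, 0.5) for zi in z]          # observed effect

print("naive r(x, y):", round(pearson(x, y), 2))  # spuriously strong

# adjusting for the hidden factor (here crudely, by correlating the
# residuals after removing z) makes the apparent link vanish
rx = [xi - zi for xi, zi in zip(x, z)]
ry = [yi - zi for yi, zi in zip(y, z)]
print("r(x, y | z):", round(pearson(rx, ry), 2))  # near zero
```

Once the hidden factor is accounted for, the apparent link essentially disappears; a model whose assumptions omit `z` could never discover this, which is the failure mode described above.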

Uncertainty

Uncertainty can be defined as a state of indeterminacy between cause and effect. The indeterminacy can be traced to insufficient empirical evidence, to inadequate empirical indicators clearly representing the cause or the effect, to complexity (too many causal links), to the observation of correlations that are spurious (e.g., the use of vitamin C to prevent colds), to the delayed effects described above, or to a failure to identify all factors in a multi-causal process. By scientific uncertainty, we mean that scientific knowledge is limited (little recognition of the risk or little data) or even absent (no information or data). Such uncertainty makes it difficult to assess exactly the probability and possible outcomes of undesired effects (cf. Aven and Renn 2009a; Aven, Renn, and Rosa 2011; Filar and Haurie 2010). Uncertainty here often arises from an incomplete or inadequate reduction of complexity in modeling cause-effect chains (cf. Marti, Ermoliev, and Makowski 2010). It is essential to acknowledge in the context of risk assessment that human knowledge—a point carefully developed in Chapter 1—is always incomplete and selective and thus contingent on uncertain presuppositions, assertions, and predictions (Funtowicz and Ravetz 1992; Renn 2008c: 75ff.).

It is obvious that probabilities themselves represent only an approximation of the likelihood of uncertain events; the margin of error, therefore, is a source of uncertainty. But probabilistic predictions are complicated by additional uncertainty factors, which prudence demands be included in risk governance and management procedures. For the risk analyst, the question is which factors to include. Alas, because the relevant scientific and mathematical literature offers no established classification of uncertainty, the question remains unresolved. As a consequence, uncertainty is defined using different terms and descriptions, such as incertitude, variability, indeterminacy, lack of knowledge, and even ignorance. A more systematic understanding of uncertainty clearly would be useful. Deconstructing the component elements of uncertainty allows us to identify the attributes that make risk uncertain (Renn 2008c; Renn, Klinke, and van Asselt 2011; Stirling 2008; van Asselt 2000):

• Variability is the observed or predicted dissimilarity of responses to an identical stimulus among individual targets within a relevant population, such as humans, animals, plants, or landscapes. In risk management, safety factors are often used to cover this variability.

• Random and systematic measurement errors include the imprecision or imperfection of measurement. The problem is inherent in drawing inferences from small statistical samples, in the application to humans of extrapolations from animal data, in bio-surveys or other experimental results, in the limited controls of epidemiological studies, and in the uncertainties of modeling, including the choice of functional relationships for interpolating from large to small doses. All of these sources of uncertainty are conventionally expressed through statistical confidence intervals.
• Incertitude results when cause and effect are expected to be related, but the relationship between the variables is randomly determined. This might be due to truly non-causal or non-repeatable random events or to badly understood nonlinear, chaotic relationships.

• System boundaries must be imposed to make the task of determining cause and effect tractable. This leads to the necessity of focusing on specific agents, pathways, and effects to model causal connections. What happened with chlorofluorocarbons is a good example. Chlorofluorocarbons were developed and produced because, as stable chemicals under atmospheric conditions that did not react with any other chemical, they were thought to be an ideal gas for cooling devices and spray cans. The effects of these gases in reducing the protective ozone layer under high ultraviolet radiation in the stratosphere were excluded in the original analyses. Nevertheless, all of these effects were known in principle to science at the time of these analyses (Schellnhuber 1999). For outsiders, such failures to acknowledge negative effects sometimes evoke images of conspiracy—which,
of course, is always a possibility. But far more often, the demands of modeling are bound to create system boundaries, which means that no one can assess all effects of one agent over a complex web of interrelated factors. Nor with necessary system boundaries can one model the synergistic effects of all the chemicals to which a human might be simultaneously exposed.

• Lack of knowledge can result from total ignorance, from unknown unknowns (which leads to the exclusion of possible causes from external influences), from the impossibility of measuring some possible causes, or from other factors.

These different and diverse component elements have one feature in common: the uncertainty they yield reduces the strength of confidence that a risk assessor can ascribe to the proposed cause-effect chain under consideration (Morgan and Henrion 1990). Put another way, uncertainty increases when complexity cannot be resolved by scientific methods. Even simple relationships may be associated with high uncertainty. When any of the attributes of uncertainty described above are present, they can seriously hamper analysis. If uncertainty plays a large role in an assessment, the risk-based approach to analysis is potentially misleading and counterproductive (Horlick-Jones and Sime 2004). Clearly, judging the relative severity of risks based on weak or nonexistent evidence or evidence too complex to interpret does not make much sense. Under such circumstances, strategies are required to improve the resilience of the risk governance system itself. This resilience approach is the basis of much of current European environmental and health protection legislation and regulation (Berkhout et al. 2010). Phthalates (esters of phthalic acid) present a prime example.
The Phthalate Information Center is a project of the Phthalate Esters Panel of the American Chemistry Council, a body that describes itself as “composed of all major manufacturers and some users of the primary phthalate esters in commerce in the U.S.” The center’s website highlights the uncertainty challenge related to the assessment of the safety or danger of phthalates, a ubiquitous class of chemical compounds that are used in the manufacturing of many plastic products to make them strong and flexible. Here is an excerpt from the website: “Studies showed that rats and mice fed very high doses of DINP [diisononyl phthalate, a phthalate commonly used in toys] developed liver and/or kidney tumors after exposure over a lifetime. However, there is growing consensus that these effects are not relevant to humans. Concerns have been expressed about possible developmental effects as well as possible chronic liver and kidney toxicity, but these effects have also only been associated with very high doses and, like the tumors, are probably not relevant to humans” (Phthalate Esters Panel 2003). Despite the organization’s obvious bias in favor of the continued use of phthalates, the language on its website shows how the component elements of uncertainty have contributed to the Phthalate Esters Panel’s unwillingness to make categorical claims about the phthalate DINP. The science is inconclusive;
therefore, analysis of the risk is an enormous challenge, as the Phthalate Information Center website further illustrates: "Risk assessments on a number of phthalates, including DINP, are in varying stages of completion by the European Union (EU). The final draft DINP assessment makes no requirement for risk reduction measures relating to the uses of DINP, including toys. The EU is currently considering maximum allowed migration limits on soft PVC toys to guarantee their safety. These limits would replace a temporary ban the EU placed on phthalates in toys intended for the mouths of children under 3 years of age. . . . In spite of the growing evidence to support the safety of DINP, the Japanese government has imposed a ban on the use of phthalates in objects intended for the mouths of young children. . . . The ban will go into effect in June 2003" (Phthalate Esters Panel 2003).
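The statistical confidence intervals mentioned among the components of uncertainty above can be illustrated with a minimal sketch; the repeated measurement values below are invented for illustration only:

```python
# Illustrative sketch: how random measurement error is conventionally
# summarized as a statistical confidence interval. The repeated
# measurements below (a hypothetical contaminant dose in mg/kg) are
# invented for illustration.
import statistics

measurements = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.9]

n = len(measurements)
mean = statistics.mean(measurements)
sem = statistics.stdev(measurements) / n ** 0.5  # standard error of the mean

# Approximate 95% interval using the normal critical value 1.96; for a
# sample this small, a t-distribution critical value would widen it.
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.2f}, approximate 95% CI = ({low:.2f}, {high:.2f})")
```

The narrower the interval, the stronger the confidence a risk assessor can place in the estimate; the component elements listed above all widen it.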

Ambiguity

Ambiguity denotes the variability of legitimate interpretations and normative implications based on identical observations, data assessments, or other types of evidence, however meager (International Risk Governance Council 2005; Renn and Walker 2008). These multiple interpretations, as well as different values, are one source of ambiguity. Another may arise over the category of risk being assessed. Infinitesimal risks are typically acceptable, so additional regulatory efforts are considered unnecessary. Other risks are tolerable: they are taken on because they are outweighed by benefits (Bandle 2007), but they require additional regulatory efforts for their reduction or to cope with their consequences. Under conditions of ambiguity, decision makers often become ambivalent because they can find justification for both action and inaction. This means there are different legitimate viewpoints from which to evaluate potential adverse effects and to judge whether these effects are acceptable or even tolerable. Those who are in the business of managing or even communicating risks perceive and respond to risks according to their own mental constructs, images, and values, yielding multiple but meaningful and legitimate interpretations of risk assessment outcomes (Keeney 1992). As a result, whether risks are acceptable, tolerable, or intolerable ultimately may be in the eye of the beholder. And the choice of category—acceptable, tolerable, or intolerable—and of response can be subject to considerable debate and intense controversy. Thus, another source of ambiguity comes from different interpretations of the situation in which the risk is located or from divergent and contested perspectives on the justification, severity, or wider meanings associated with a perceived risk (cf. Jasanoff 2004; Stirling 2003). As a result, views differ on the ways to assess and appraise the risks, as well as on the relevance, meaning, and implications of
available risk information and on which governance measures and actions should be considered. As should be obvious, such conditions place great obstacles in the way of consensus among assessors or decision makers. We will return to this issue in Chapter 10 when we discuss the potential for consensus in the light of discourse theory.
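The three-way distinction above (acceptable, tolerable, intolerable) can be sketched as a simple classification, often drawn as a "traffic light" diagram in the risk governance literature. The two thresholds in this sketch are illustrative placeholders that, in practice, a deliberative process would have to supply:

```python
# Sketch of the acceptable / tolerable / intolerable categorization
# described above. The two thresholds are illustrative inputs, not
# fixed values; setting them is itself a normative judgment.

def categorize(risk_level: float,
               acceptability_threshold: float,
               tolerability_threshold: float) -> str:
    """Classify a one-dimensional risk level into the three categories.

    Below the acceptability threshold, no further regulation is needed;
    between the thresholds, the risk is tolerable only with additional
    reduction measures; above the tolerability threshold, it is
    intolerable.
    """
    if risk_level <= acceptability_threshold:
        return "acceptable"
    if risk_level <= tolerability_threshold:
        return "tolerable (requires reduction measures)"
    return "intolerable"

print(categorize(0.2, 1.0, 10.0))   # acceptable
print(categorize(5.0, 1.0, 10.0))   # tolerable (requires reduction measures)
print(categorize(50.0, 1.0, 10.0))  # intolerable
```

The point of the sketch is precisely its inadequacy under ambiguity: the thresholds, and even the one-dimensional "risk level," are contested rather than given.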

Room for Interpretation

A condition of ambiguity emerges where the problem lies in interpreting evidence or in trying to agree on the appropriate values, priorities, assumptions, or boundaries to be applied to the definition of possible outcomes. Hence, the term ambiguity can be refined. As implied above, it consists of two complements: interpretive ambiguity and normative ambiguity. Interpretive ambiguity arises when an identical assessment result admits different interpretations (e.g., whether a given risk has adverse or non-adverse effects). Normative ambiguity arises when there are different interpretations of what is regarded as tolerable (e.g., interpretations are tied legitimately to ethics, to quality-of-life considerations, or to the equitable distribution of risks and benefits). What does it mean, for example, if neuronal activities in the human brain are intensified when subjects are exposed to electromagnetic radiation during mobile phone use? Does the consistent correlation between the two simply indicate increased attention by subjects, or is it a sign of an eventual connection to a brain tumor or other medical problem? What if a causal connection can be unequivocally established, but the incidence of harm is very small? Should there be advisories or regulatory guidelines on the use of mobile phones?

Most scientific disputes in the fields of risk analysis and management do not arise from differences in methodology, measurements, or dose-response functions. Rather, they arise from normative ambiguity, from disagreements about what the findings mean for human health and environmental protection (Harrison and Hoberg 1994). Emission data are hardly disputed. What most experts debate is whether the emission of a given substance in a certain volume with a certain level of exposure constitutes a serious threat to the environment or to human health.
Should the role of risk regulation be limited to the avoidance of significant health effects, or should it be expanded to eliminate any measurable effect that could cause some damage to health, albeit still unknown? The first option has been advocated by a camp that asks for clear evidence of harm before a risk is regulated; the second option (often called the precautionary approach) has been advocated by a camp that recommends the use of regulatory instruments if there is suspicion of a causal link but no conclusive evidence (Wiener 2011). Controversy over electromagnetic fields (EMFs) is an ongoing example. Humans are exposed to EMFs not only from mobile phones, but also from power lines, home wiring, airport and military radar, substations, transformers, computers, and appliances. One need only read back issues of publications such as EMF Health Report to see that consensus has not yet been reached in the scientific
community about how to interpret the various and extensive studies over the risks of EMFs (World Health Organization 2011). As Robert Goldberg (1998: 1) wrote in the report of the U.S. EMF Research and Public Information Dissemination (RAPID) Working Group: "EMF hazard research is an area where it is possible to come down on either side of the issue depending on small differences in interpretation, opinions on the need for caution as a matter of public policy, and the evaluation criteria applied. The popular press seeks a yes or no answer—they interpreted the recent [RAPID] evaluation as an expert decision that EMF is a hazard, while they interpreted the National Academy of Sciences report as an expert decision that EMF is harmless. Both are incorrect interpretations and, in my view, the underlying understanding of the research behind the analyses is actually very similar."

Variability of interpretation, however, is not restricted to expert dissent. Laypeople's perception of risk often differs from expert judgments because of their emphasis on qualitative risk characteristics such as familiarity, dread, personal or institutional control, assignment of blame, and equity. Moreover, in contemporary pluralist societies, the diversity of risk perspectives within and between social groups generally can be traced to divergent value preferences, variations in interests, and very few, if any, universally applicable moral principles—even more so if risk problems are complex and uncertain. The resulting normative ambiguity may be based primarily on conflicting moral perspectives or on conflicting views of what constitutes a fair distribution of risks and benefits (Kasperson 1986). This normative ambiguity may lead to social mobilization and conflict. A good example of such conflict is public resistance to the location of base stations or towers of mobile wireless telecommunications.
Community groups protesting the building of these base stations in their vicinity usually do not go further to demand that mobile phones be abolished. But when siting decisions are made about facility locations, oppositional voices can be heard (Ruddat et al. 2010). Residents protest against being involuntarily and, compared with their fellow citizens, unfairly exposed to the potential risks of the new towers. The challenge of these risks is exacerbated by the characteristics of uncertainty, complexity, and ambiguity—for several reasons. First, the causal chain is highly complex and encompasses a range of consequences. Negative consequences can range from loss of human life and deleterious health effects to financial losses and damaged ecosystems. Second, many of these risks are characterized by a high level of uncertainty, sometimes even indeterminacy, because they are based on nonlinear relationships. Third, the consequences of risk manifestation are ambiguous, resulting in different evaluations that depend on the cultural background, social position, and economic well-being of concerned citizens.
Collective Efforts for Assessing and Managing Risks

Given this variety of viewpoints, it becomes obvious that deep uncertainty, complexity, and ambiguity point to different reasons that many risks defy simple concepts of causation. Our ability to understand risk ranges from the putatively certain, simple, and clear to the totally uncertain, complex, and ambiguous (Rosa 2003, 2010). In a reversal of how conventional risk characterization and management is conducted, it might make more sense to treat simple risks not as the exemplary case but as a special case in which uncertainty, complexity, and ambiguity are low (Renn 2008c: 177ff.). Recognizing the degree to which uncertainty, complexity, and ambiguity are present in any risk context should be a first step in risk assessment, governance, and management. For ease of explanation, we have juxtaposed the challenges posed by the three systemic risk characteristics. That should not lead to the interpretation, however, that each risk problem can be sorted into three neatly isolated boxes. Uncertainty, complexity, and ambiguity are closely intertwined.

Dealing with risk is a dynamic governance process of continuous and gradual learning, encompassing a broad array of structural and procedural means and mechanisms by which politics and society can collectively govern relevant risk problems (cf. Brooks and Adger 2005). The adaptive and integrative quality of the process requires the capacity to learn from previous and similar risk-handling experiences to cope with current risk problems and to apply these lessons to future potential risk problems and surprises. The essential lesson from previous risk experiences is that the need for an integration of analysis and deliberation (Stern and Fineberg 1996) lies at the heart of risk debates in the modern world. Risk governance and management agencies all over the world have been searching for decades for a better strategy to address systemic risks.
The evolving concept of an analytic-deliberative process is one of the few attempts to develop integrative risk governance that is based on both analytic rigor and deliberation for assessing and managing risks (Chess, Dietz, and Shannon 1998; Hagendijk and Irwin 2006; National Research Council 2008; Renn 2008c: 284ff.; Stern and Fineberg 1996; Tuler and Webler 1999). We address this approach in greater depth in Chapter 10.

Systemic Risks

The combination of analysis and deliberation promises to be particularly well suited to dealing with systemic risk problems. Systemic risks, as stated in Chapter 7, are those risks where there is a probability that an entire system—whether financial, climatic, technological, or ecological—will collapse. This contrasts with the risk associated with any element, individual entity, or component of a system. Systemic risks demand scientific expertise, structured thinking, and broad deliberations. Profound scientific knowledge is required because of the
typical uncertainty, complexity, and ambiguity inherent in systemic risks. This is the work of scientists and risk professionals who are recognized as competent authorities in their respective risk fields. But they do not work in isolation, as in the past. Instead, the analyses are partly the product of broad deliberation among scientists and stakeholders over the questions science needs to address. The systematic search for the state of the art in the evidence needed for risk assessment leads to a knowledge base that provides the data for further deliberation (Yankelovich 1991). At the same time, however, the process of deliberation should also transform the scientific discourse and lead the discussion toward classifying knowledge claims, characterizing uncertainties, exploring the range of alternative explanations, and acknowledging the limits of systematic knowledge for many facets of a given risk problem (Renn 2010).

Coping with systemic risks requires deliberation in three phases. First, deliberative processes are needed to define the types, role, and relevance of the different forms of knowledge for making informed choices. Second, deliberation is needed to develop the most appropriate procedure for dealing with the unique composition of complexity, uncertainty, and ambiguity. And third, deliberation needs to address the wider concerns of the affected public, especially if the risks are associated with high ambiguity.

Understanding Risk: Informing Decisions in a Democratic Society, the U.S. National Research Council's report on characterizing risk (Stern and Fineberg 1996), has become the standard reference for analytic-deliberative processes. It underscores the need not only to get the science right, but equally to get the right science. The deliberative process should inform scientists and other experts about how professional conventions, sometimes doubtful assumptions, incomplete or conflicting data, and simplified models shape their thinking.
This self-reflection about the preconditions of scientific inquiry is a good starting point for early deliberation among experts and non-experts (Jasanoff 2004). The more the respective expert communities learn to use deliberative techniques for sorting out the basis and strength of knowledge claims, the more effective the integration of scientific results into decision-making deliberation will be.

Grand Systemic Risks: Disruptions to Ecological Systems

The most comprehensive systems to have evolved are the Earth and related systems that sustain all life on the planet. These systems are characterized by an extremely low understanding of their probability of risk occurrence and an extremely low understanding of the outcome if the risk is realized. Precise estimates of the probability of system collapse or precise delineation of system outcomes are beyond the reach of science, although progress is being made in both areas. Broad assessments have permitted science to ask questions about the pace, scale, and spread of environmental threats around the world—the risks to the Earth system. One germinal assessment of grand-scale consequences was undertaken by Peter Vitousek and his colleagues (1997), who carefully examined the scale of ecosystem impacts around the world by examining the dominant influence of human actions in producing those impacts. While this science still suffers the precision limitations noted above, it does provide sufficiently clear contours of the problem to understand its magnitude. Hence, about the accuracy of their estimates of scale, Vitousek and his colleagues (1997: 495) write, "The numbers have large uncertainties, but the fact that they are large is not at all uncertain." Figure 8.1 provides summary data of these findings.

Figure 8.1  Human Dominance or Alteration of Several Major Components of the Earth System (Source: Vitousek et al. 1997: 495; reproduced with permission)
Note: Human dominance or alteration is expressed (from left to right) as percentage of the land surface transformed; percentage of the current atmospheric CO2 concentration that results from human action; percentage of accessible surface fresh water used; percentage of terrestrial nitrogen fixation that is caused by humans; percentage of plant species in Canada that humanity has introduced from elsewhere; percentage of bird species on Earth that have become extinct in the past two millennia, almost all of them as a consequence of human activity; and percentage of major marine fisheries that are fully exploited, overexploited, or depleted.

Vitousek and his colleagues (1997: 498) summarize their assessment of global human impacts this way: "the rates, scales, kinds, and combinations of changes occurring now are fundamentally different from those at any other time in history: we are changing the Earth more rapidly than we are understanding it."

The "boundary estimates" studies led by the Stockholm Resilience Centre and Stockholm Environmental Institute, with the cooperation of an international team of leading scholars in the field (Rockström et al. 2009), are a more recent attempt to provide a comprehensive assessment of global, systemic ecological risks. The method involves developing a framework of "planetary boundaries"—that is, safe operating spaces for humanity within the Earth system and within its biophysical and system processes. Put another way, the safety level or risk level of any boundary is an estimate of how far away or how close the boundary is to a tipping threshold. The method defines and examines nine processes believed to define planetary boundaries: climate change, rate

of biodiversity loss (terrestrial and marine), interference with the nitrogen and phosphorus cycles, stratospheric ozone depletion, ocean acidification, global freshwater use, change in land use, chemical pollution, and atmospheric aerosol loading. Figure 8.2 summarizes the result of their assessment.

Figure 8.2  Results of an Assessment of Global, Systemic Ecological Risks Conducted within a Framework of Planetary Boundaries (Source: Adapted from Rockström et al. 2009: 473; reprinted with permission)

• Climate change.* Parameters: (i) atmospheric CO2 concentration (parts per million by volume): proposed boundary 350, current status 387, pre-industrial value 280; (ii) change in radiative forcing (watts per m2): proposed boundary 1, current status 1.5, pre-industrial value 0.
• Rate of biodiversity loss.* Parameter: extinction rate (number of species per million species per year): proposed boundary 10, current status >100, pre-industrial value 0.1–1.
• Nitrogen cycle (part of a boundary with the phosphorus cycle).* Parameter: amount of N2 removed from the atmosphere for human use (millions of tonnes per year): proposed boundary 35, current status 121, pre-industrial value 0.
• Phosphorus cycle (part of a boundary with the nitrogen cycle). Parameter: quantity of P flowing into the oceans (millions of tonnes per year): proposed boundary 11, current status 8.5–9.5, pre-industrial value ~1.
• Stratospheric ozone depletion. Parameter: concentration of ozone (Dobson units): proposed boundary 276, current status 283, pre-industrial value 290.
• Ocean acidification. Parameter: global mean saturation state of aragonite in surface seawater: proposed boundary 2.75, current status 2.90, pre-industrial value 3.44.
• Global freshwater use. Parameter: consumption of fresh water by humans (km3 per year): proposed boundary 4,000, current status 2,600, pre-industrial value 415.
• Change in land use. Parameter: percentage of global land cover converted to cropland: proposed boundary 15, current status 11.7, pre-industrial value low.
• Atmospheric aerosol loading. Parameter: overall particulate concentration in the atmosphere, on a regional basis: boundary to be determined.
• Chemical pollution. Parameter: for example, amount emitted to—or concentration of persistent organic pollutants, plastics, endocrine disrupters, heavy metals, and nuclear waste in—the global environment, or the effects on ecosystem functioning: boundary to be determined.

Note: An asterisk (*) marks processes for which boundaries have been crossed (shown as darker gray shading in the original).

What is made clear by these estimates is that three key boundaries—climate change, rate of biodiversity loss, and interference with the nitrogen cycle—have already been crossed. In addition, the Earth system is approaching dangerous tipping points in ocean acidification, the amount of phosphorus flowing into oceans, and the concentration of ozone. To rephrase the telling observation of Vitousek and his colleagues (1997), these estimates contain large uncertainties, but the risk they pose to the most comprehensive system—the global ecosystem—is undeniably large. It is this magnitude of systemic risk that most challenges the era of advanced modernity to develop effective systems of governance.
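As a rough illustration, the quantified boundaries in Figure 8.2 can be compared against their current status to see which have been crossed. The values come from the figure itself; the "falling" flag is our own device for the two processes whose safe side lies above the boundary:

```python
# Rough illustration: comparing the current status of each quantified
# Earth-system process in Figure 8.2 against its proposed boundary.
# Values come from the figure; the "falling" flag marks the two
# processes (ozone concentration, aragonite saturation) where crossing
# means falling BELOW the boundary rather than rising above it.

BOUNDARIES = {
    # process: (proposed boundary, current status, falling)
    "Climate change (CO2 ppm)":      (350.0, 387.0, False),
    "Rate of biodiversity loss":     (10.0, 100.0, False),  # current status is ">100"
    "Nitrogen cycle":                (35.0, 121.0, False),
    "Phosphorus cycle":              (11.0, 9.0, False),    # current status is 8.5-9.5
    "Stratospheric ozone depletion": (276.0, 283.0, True),
    "Ocean acidification":           (2.75, 2.90, True),
    "Global freshwater use":         (4000.0, 2600.0, False),
    "Change in land use":            (15.0, 11.7, False),
}

def crossed(boundary: float, current: float, falling: bool) -> bool:
    """A boundary is crossed when the current status lies on the unsafe side."""
    return current < boundary if falling else current > boundary

transgressed = [name for name, (b, c, f) in BOUNDARIES.items() if crossed(b, c, f)]
print(transgressed)
```

Running the comparison recovers exactly the three crossed boundaries named in the text: climate change, rate of biodiversity loss, and the nitrogen cycle.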

Five Types of Deliberative Discourse

There is no silver bullet for governing systemic risks. But five different types of deliberative discourse can help risk managers deal with systemic risks: design discourse, instrumental discourse, reflective discourse, epistemic or cognitive discourse, and, for the most complex scenarios, participatory discourse (Klinke and Renn 2002; Renn and Sellke 2011). The design discourse—which we discuss last—takes place first, as a framing of which other discourses will contribute to the process and when. These different types of discourses can be aligned with the three major challenges of risk: complexity, uncertainty, and ambiguity (Renn 2008b).

Instrumental discourse addresses a simple or linear risk problem that is not complex, uncertain, or ambiguous. For judgments about simple risk problems, a sophisticated approach involving all potentially affected parties is not necessary. Most actors would not seek to participate, because the expected results are either trivial or more or less obvious. Instrumental discourse should take place among agency staff, directly affected groups (such as product or activity providers and immediately exposed individuals), and, as advisable, enforcement personnel. It should be recognized, however, that risks that appear simple often turn out to be more complex, uncertain, or ambiguous than originally assessed. It is therefore essential to revisit these risks regularly and monitor the outcomes carefully to determine whether there is a need to shift to another type of discourse.

Characterizing risks, evaluating them, and designing options for risk reduction pose special challenges in situations of high uncertainty, where estimates of the occurrence of a risk or of its consequences are imprecise. How can one judge the severity of a situation when the potential damage and its probability are unknown or highly uncertain?
When risks are shrouded with high levels of uncertainty, scientific input is only the first step of a more complex risk evaluation procedure. It remains essential to compile all of the relevant
data and the various arguments supporting differing scientific positions. Silvio Funtowicz and Jerome Ravetz (1990) proposed the numeral, unit, spread, assessment, and pedigree (NUSAP) scheme to express different degrees of uncertainty in risk assessment (van der Sluijs et al. 2005). This notational system addresses a range of factors associated with incertitude, including the pedigree of the theoretical framework from which the results are derived. Funtowicz and Ravetz are convinced that experts familiar with the field under investigation are able to classify the state of knowledge according to their pedigree system. The pedigree category sets out to exhibit the "deepest sort of uncertainty," which, Funtowicz and Ravetz (1990: 52) say, "border[s] with ignorance (not expressed statistically)."

Uncertainty requires an essential broadening of the risk debate. Information about the range of troubling uncertainties needs to be collected and brought into the entire deliberative process, not just into deliberation among experts. Stakeholders, public interest groups, and affected publics are brought into the deliberation as well. A key objective is to find the right balance between too little and too much precaution (International Risk Governance Council 2005). This requires a subjective, normative judgment—something that science can rarely provide. Judgments based on economic considerations likewise have limited value, since the stakes are uncertain, vary depending on the point of view of the various stakeholders, and are likely to involve unfair and unacceptable distributions. Under these circumstances, risk managers are well advised to include as many principal stakeholders in the evaluation process as reasonable. Participants should be asked to find a consensus on the extra margin of safety in which they would be willing to invest in exchange for avoiding potentially catastrophic consequences. This is a balancing act.
If too much protection is sought, innovations may be prevented or stalled. If too little protection is sought, society may experience untoward consequences and unwanted surprises. Such questions can be approached through reflective discourse, whose goal is the clarification of knowledge. It is similar to the process of true cognitive understanding through reason, involving the assessment of tradeoffs between the competing extremes of overprotection and underprotection from risk. Exercises in reflective discourse are mainly appropriate when answering uncertainties about risk-averse or risk-seeking approaches to innovation. Such discussions provide answers to the question of how much uncertainty is acceptable in exchange for the potential benefits of some future opportunity. Deliberators will attempt to decide whether taking the risk is worth it, in view of the potential benefit. Here the classic and intractable question—How safe is safe enough?—is replaced by the question “How much uncertainty and ignorance are the main actors willing to accept in exchange for some present or future benefit?” Policymakers, representatives of major stakeholder groups, scientists, and affected citizens must take part in conversations of this type. Political or economic advisory committees, charged with proposing or evaluating political options, may be useful in the capacity of facilitator or adviser. These
committees can guide the deliberations of core stakeholder groups in addressing the question. At a more operational level, value-tree analysis and other decision-aiding tools, such as multi-criteria analysis, also have proved effective for structuring this type of deliberation (Renn 2008c: 332ff.). These analytic techniques aim to exhibit and systematize normative and evaluative statements. They serve to make the normative structure of dissent and conflict more transparent so that all deliberators have a sense of the values driving each other's views. The systematization also has the intention of promoting agreement over the normative fundamentals in the formation of opinion. There are even more fundamental instruments and procedures for stakeholder participation. These include round tables, consensus conferences, mediation (if conflicts have arisen), negotiated rulemaking, and similar processes (Amy 1983; Rowe and Frewer 2000; Webler et al. 1991).

The proper handling of complexity in risk characterization and risk governance requires transparency over the subjective judgments and the inclusion of the knowledge elements that have shaped both sides of cost-benefit calculations. Resolving complexity necessitates a discursive procedure during the appraisal phase with a direct link to the tolerability and acceptability judgment of later deliberation. Input for handling complexity could be provided by epistemic discourse aimed at finding the best estimates for characterizing the risks under consideration. This discourse should engage scientists with different positions and include the participation of other experts and knowledge carriers. The participants may come from academia, from government, from industry, or from civil society, but their legitimacy to participate rests on their claim to bring new or additional knowledge to the deliberation. The goal is to resolve conflicts over available evidence.
The objective of deliberation here is to isolate the most adequate description or explanation of a phenomenon, not to resolve its challenges. For example, the deliberation should attempt to answer questions such as, "What physical effects are likely to be caused by the emission of specific substances?" "How many people are exposed to the effects?" and "Are there higher effects for some groups than for others?" In addition, epistemic discourse is expected to help identify and quantify amplifying or ripple effects of physical risks, effects that ripple out to other areas of social and economic life. Such discourse also may work as an early warning device by drawing attention to the potential crosscutting effects that are typical of systemic risks. In addition, an effective epistemic discourse can reveal hidden ambiguities and greater uncertainty than previously anticipated by regulators. Both biophysical scientists and social scientists should be engaged in this discourse, because the criteria for risk evaluation include aspects of perception and social mobilization, and it would be useful to predict future controversies and risk debates based on the cumulative knowledge of past social experiences. Such anticipatory knowledge will reduce the element of surprise if (and when) these controversies emerge. Delphi surveys and meta-analytical workshops are appropriate instruments for conducting such epistemic discourses (Schulz and Renn 2009; Webler et al. 1991). These are procedures that systematically ascertain existing expert knowledge while distinguishing between areas of dissent and consent (Gregory, McDaniels, and Fields 2001; Webler 1999).

When major ambiguities are associated with a risk problem, it is not enough that risk regulators are open to public concerns and willing to address the issues that many people want them to address. Under conditions of high ambiguity, the process of risk evaluation needs to be open to public input and new forms of deliberation. This starts with revisiting the original question: was the issue properly framed in the first place? Is it really a risk problem, or is it, in fact, an issue of past abuses, of current lifestyle, or of future vision? It also asks whether there are ethical or normative standards against which to weigh options. The aim is to find consensus on the dimensions of ambiguity and to compare risks and benefits as one way of balancing the pros against the cons. High levels of ambiguity require the most inclusive strategy for participation, engaging not only directly affected groups but also indirectly affected groups—including, when possible, unengaged citizens. Resolving ambiguities in risk debates requires a participatory discourse, a platform where competing arguments, beliefs, and values are openly discussed. The opportunity for resolving these conflicting expectations lies in identifying common values; defining options that allow people to live according to their own vision of a "good life" without compromising the visions of others; finding equitable and just distribution rules for common resources; and activating institutional means for reaching common welfare so that all, instead of a few, can reap the collective benefits. Otherwise, the classic commons dilemma emerges.
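A Delphi survey of the kind cited above proceeds in rounds, pooling expert estimates and feeding the spread back to the panel until dissent narrows or stabilizes. The following sketch of one aggregation step is our own illustration: the convergence criterion and all numbers are hypothetical, not taken from the text.

```python
# One aggregation step of a Delphi-style expert survey: pool the panel's
# estimates and flag convergence (consent) versus disagreement (dissent).
# The convergence criterion and all numbers are hypothetical illustrations.
from statistics import quantiles

def delphi_summary(estimates, consent_ratio=0.25):
    """Summarize a round of expert estimates.

    Consent is declared when the interquartile range is small relative
    to the median (an arbitrary illustrative criterion).
    """
    q1, med, q3 = quantiles(estimates, n=4)  # quartile cut points
    iqr = q3 - q1
    return {"median": med, "iqr": iqr, "consent": iqr <= consent_ratio * abs(med)}

# Hypothetical panel estimating, say, excess cases per 100,000 exposed
round_one = [12, 30, 8, 25, 15, 40, 10]   # first round: wide spread
round_two = [14, 15, 14, 16, 15, 17, 14]  # after feedback: narrowed

print(delphi_summary(round_one)["consent"])  # False: still dissent
print(delphi_summary(round_two)["consent"])  # True: consent reached
```

In an actual Delphi procedure the areas flagged as dissent would be returned to the panel with the anonymized distribution of answers, which is how the method separates genuine disagreement from mere lack of information.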
Both established mechanisms of legal decision making and novel procedures, such as citizens' panels and direct citizen participation, are included in this deliberation stage. Participatory discourses are chiefly appropriate as a means of searching for solutions that are compatible with the interests and values of affected people and for resolving their conflicts. This type of discourse involves weighing the various criteria and interpreting the results concerning (1) issues of fairness and environmental justice; (2) visions of future technological developments and societal change; and (3) preferences about desirable lifestyles and community life. Techniques for this kind of discourse include randomly selected citizens' panels or juries, voluntary advisory groups, consensus conferences, and other participatory techniques aimed at resolving ambiguities and value conflicts (Dienel 1989; Durant and Joss 1995; Fiorino 1990; Renn 2008c: 332ff.; Schneider, Oppermann, and Renn 1998).

Categorizing risks according to the quality and nature of available information may, of course, be contested among the participants in deliberations. Who decides whether a risk issue can be categorized as simple, complex, uncertain, or ambiguous? It is possible that a consensus may not be reached on the category of a given risk. In such cases, a detailed (worst-case) analysis of possibilities, comprising monitoring and surveillance, may be the only achievable compromise. Examples might include reversible removal of risk sources, timely detection of adverse effects, and the strength of surveillance systems. The best means for dealing with such conflicts, however, is to provide for broad involvement when assigning the different risks to these four categories. Assigning risks to the four categories needs to be done before the risk assessment procedures begin. Over the course of further analysis and deliberation, the categorization may change as newly collected data or information reveal that some risks were placed in the wrong category. Yet the risk governance system proposed in Chapter 9 builds on the need to classify risks at the beginning and allocate them to different routes of appraisal, characterization, evaluation, and management. Hence, the initial screening needs to be well founded.

It seems prudent under the circumstances to convene a screening board to perform this challenging task (Dreyer and Renn 2009). This board should include members of the risk assessment team, risk managers, key institutional stakeholders (such as industry, NGOs, and representatives of related regulatory or governmental agencies), and representatives of affected publics. It is here that design discourse comes into play. This discourse is aimed at properly categorizing different risks, selecting the appropriate risk assessment policy, defining priorities in handling risks, organizing the appropriate involvement procedures, and specifying the conditions under which further steps in the analytic-deliberation process will be conducted. Figure 8.3 summarizes the different requirements for participation and stakeholder involvement for the four risk challenges and for the design discourse. As is the case with all classification schemes, this one oversimplifies the picture of the involvement process.
Indeed, it has been criticized for being too rigid in linking risk characteristics (complexity, uncertainty, and ambiguity) to specific forms of discourse and dialogue (van Asselt 2005). The generic distinctions shown in the figure might be refined by distinguishing between participatory processes based on risk-agent versus risk-absorbing issues. Despite these possibilities for elaboration, the purpose of this scheme is to provide a general orientation and delineate a generic distinction between ideal cases rather than to offer a strict recipe for participation.

It is clear that the four basic types of discourse are not stand-alone categories. They need to be combined, or even integrated, in dealing with systemic risks. As our discussion above illustrates, it is essential to distinguish the type of discourse that is needed for each of the challenging characteristics of systemic risks (uncertainty, complexity, and ambiguity) and for each phase of the deliberation process. Proper categorization prevents mismatches, such as attempting to determine the right method for extrapolating animal data to humans via a participatory discourse. Similarly, epistemic discourse is inappropriate for settling value conflicts. Furthermore, it seems advisable to treat uncertainty, complexity, and ambiguity in separate discourse activities, since each requires a different form of discourse to be resolved, and often different participants as well.

Figure 8.3  The Risk Management Escalator and Stakeholder Involvement

[Figure: an escalator linking each type of risk problem to the required type of discourse, the actors involved, the type of conflict, and the remedy.]

Simple risk problem. Type of discourse: instrumental. Actors: agency staff. Type of conflict: cognitive. Remedy: statistical risk analysis.

Complexity-induced risk problem. Type of discourse: epistemic. Actors: agency staff; external experts. Type of conflict: cognitive. Remedy: probabilistic risk modeling.

Uncertainty-induced risk problem. Type of discourse: reflective. Actors: agency staff; external experts; stakeholders (industry, directly affected groups). Types of conflict: cognitive; evaluative. Remedy: risk balancing necessary + probabilistic risk modeling.

Ambiguity-induced risk problem. Type of discourse: participatory. Actors: agency staff; external experts; stakeholders (industry, directly affected groups); general public. Types of conflict: cognitive; evaluative; normative. Remedy: risk tradeoff analysis and deliberation necessary + risk balancing + probabilistic risk modeling.

Design discourse. Function: allocation of risks to one route or to several of the four routes. Participants: a team of risk and concern assessors, risk managers, stakeholders, and representatives of risk-handling agencies.
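Read as a decision aid, the escalator in Figure 8.3 amounts to a lookup from a screened risk category to the required discourse, participants, conflicts, and remedy. The sketch below transcribes that mapping into Python; the data structure and function names are our own, and the labels are shortened for code purposes.

```python
# The risk management escalator (Figure 8.3) rendered as a lookup table.
# Entries follow the figure; the code structure itself is illustrative.

ESCALATOR = {
    "simple": {
        "discourse": "instrumental",
        "actors": ["agency staff"],
        "conflicts": ["cognitive"],
        "remedy": "statistical risk analysis",
    },
    "complexity": {
        "discourse": "epistemic",
        "actors": ["agency staff", "external experts"],
        "conflicts": ["cognitive"],
        "remedy": "probabilistic risk modeling",
    },
    "uncertainty": {
        "discourse": "reflective",
        "actors": ["agency staff", "external experts",
                   "stakeholders (industry, directly affected groups)"],
        "conflicts": ["cognitive", "evaluative"],
        "remedy": "risk balancing + probabilistic risk modeling",
    },
    "ambiguity": {
        "discourse": "participatory",
        "actors": ["agency staff", "external experts",
                   "stakeholders (industry, directly affected groups)",
                   "general public"],
        "conflicts": ["cognitive", "evaluative", "normative"],
        "remedy": "risk tradeoff analysis and deliberation "
                  "+ risk balancing + probabilistic risk modeling",
    },
}

def required_discourse(category):
    """Return the discourse requirements for a screened risk category."""
    return ESCALATOR[category]

print(required_discourse("ambiguity")["discourse"])  # participatory
```

The lookup makes the chapter's point concrete: each step up the escalator widens the circle of actors and deepens the type of conflict the discourse must handle.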

How this analytic-deliberative model can be theoretically grounded and practically designed is the subject of Chapter 10. The aim of this chapter was to provide the foundation for understanding the special challenges that systemic risks impose on modern societies. The experts and authorities in our societies charged with risk analysis, governance, and management have yet to recognize, let alone come to grips with, the challenges of these new risks. And this lacuna is not because these risks are in the realm of unknown unknowns, since their existence is manifested throughout our globalizing, technological world. A clear need exists for a holistic and systemic framework that can guide risk governance and risk managers as they characterize, assess, and evaluate systemic risks. Without such a framework, those who are responsible for addressing systemic risks cannot be expected to know the full scope of the challenge. Furthermore, once risks are distilled down to their fundamentals, it is essential to clarify the types of procedures necessary for obtaining effective, efficient, and politically feasible decisions about them. These are challenges that must be met. In the next chapter, we introduce a framework for governing these new risks, one that aims to address the issues of uncertainty, complexity, and ambiguity far more appropriately than traditional risk governance and management practices.

9
Risk Governance: A Synthesis

We are more inclined to engage in a risk if we believe that the odds of a truly catastrophic outcome are negligible.
—Robert Meyer, "Why We Still Fail to Learn from Disasters" (2010)

Beyond Government: The Need for Comprehensive Governance

Risk governance, as we defined it in the Introduction, is a broad rubric referring to a complex of coordinating, steering, and regulatory processes conducted for collective decision making involving uncertainty. Risk sets this collection of processes in motion whenever it affects multiple people, collectivities, or institutions. Governance comprises both the institutional structure (formal and informal) and the policy process that guide and restrain collective activities of a group, society, or international community. Its aim is to regulate, reduce, or control risk problems.

This chapter addresses each stage of the risk governance process: pre-assessment, interdisciplinary risk estimation, characterization and evaluation, management, and communication/participation. Furthermore, we explicate the design of risk communication and participation to address the challenges raised by the three risk characteristics of complexity, uncertainty, and ambiguity. Finally, this chapter concludes with basic lessons for risk governance.

Before beginning those tasks, we identify a fundamental change in the actors now participating in governance. In recent decades, the handling of collectively relevant risk problems has shifted from traditional state-centric approaches, with hierarchically organized governmental agencies as the dominant locus of power, to multilevel governance systems, where the political authority for handling risk problems is distributed among separately constituted public bodies (cf. Lidskog 2008; Lidskog et al. 2011; Rosenau 1992; Wolf 2002). These bodies are characterized by overlapping jurisdictions that do not match the traditional hierarchical order of state-centric systems (cf. Hooghe and Marks 2003; Skelcher 2005). They consist of multi-actor alliances that include not only traditional actors such as the executive, legislative, and judicial branches of government, but also socially relevant actors from civil society. Prominent among those actors are industry, science, and nongovernmental organizations.

The result of the governance shift is an increasingly multilayered and diversified sociopolitical landscape, populated by a multitude of actors whose perceptions and evaluations draw on a diversity of knowledge and evidence claims, value commitments, and political interests. Their goal, of course, is to influence processes of risk analysis, decision making, and risk management (Irwin 2008; Jasanoff 2004).

Institutional diversity can offer considerable advantages when complex, uncertain, and ambiguous risk problems need to be addressed. First, risk problems that vary in scope can be managed across different institutions or at different levels. Second, an inherent degree of overlap and redundancy makes nonhierarchical adaptive and integrative risk governance systems more resilient and therefore less vulnerable. Third, the larger number of actors facilitates experimentation and learning (Klinke and Renn 2012; Renn 2008c: 177ff.; Renn, Klinke, and van Asselt 2011). There are also disadvantages. They include the possible commodification of risk, the fragmentation of the risk governance process, more costly collective risk decision making, the potential loss of democratic accountability, and paralysis by analysis. The paralysis shows itself in an inability to make decisions due to unresolved cognitive and normative conflicts and a lack of accountability vis-à-vis multiple responsibilities and duties (Charnley 2000; Garrelts and Lange 2011; Lyall and Tait 2004).
Thus, understanding the dynamics, structures, and functionality of risk governance processes requires a general and comprehensive understanding of procedural mechanisms and structural configurations. The standard model of risk analysis, consisting of three components—risk assessment (including identification), management, and communication—is too narrowly focused on private or public regulatory bodies. It fails to consider the full range of governance actors engaged in processes for governing risk. Furthermore, it ignores stakeholder and public involvement as a core feature in the stage of communication and deliberation, another key element of governance. Despite recent attempts to develop new models and frameworks of risk governance, there is still a need to link these conceptual frameworks to actual case studies. Such a link allows us to test past experiences and explore their usefulness for designing more informed and robust risk management strategies (Renn and Walker 2008). As Timo Assmuth (2011: 167), a senior researcher at the Finnish Environment Institute, concludes, "With complex risk and risk-benefit issues such as those of Baltic Sea fish [that are threatened not only by overfishing, but also by toxic contaminants], a narrow and rigid assessment and management approach based on illusory certainty and on a sectorized and top-down governance and deliberation style needs to be complemented by a broader, more flexible and evolutionary approach."


From Government to Governance

The term government typically refers to a civil body defined as a sovereign state, whether of an international body, a nation, a jurisdiction within a nation, or a local domain. The governing structure of the modern nation-state is the most obvious example of government. While "governing" is the province of governments, the purview of governance is, after John Dewey, the public sphere.1 The public sphere is much broader than government, consisting as it does of a set of processes conducted by a wide variety of social actors. Governance choices in modern societies are generally conceptualized as a mutual interplay among governmental institutions, economic forces, and civil-society interests (mediated, e.g., by NGOs). Generally, governance embodies a non-hierarchically organized structure in which there is no superordinate authority. It consists of government and nongovernment actors bringing about policies that are binding on all participants (cf. Lidskog 2008; Rosenau 1992; Wolf 2002). In this perspective, nongovernmental actors play an increasingly relevant role due to their decisive advantages in flexibility and in gaining information or resources compared with governmental agencies (Kern and Bulkeley 2009).

As noted in the Introduction, the concept of governance came into fashion again in the 1980s in circles concerned with international development. It was soon adopted in other domains. During the past decade, the term has experienced tremendous popularity in the fields of, among others, international relations, various policy sciences (including subfields referred to as European studies and comparative political science), environmental studies, and risk research. The idea of governance thus has been dusted off and reintroduced to enlarge the perspective on policy, politics, and polity by acknowledging the key role of other actors in managing and organizing societal and political solutions.
The revitalization of the term governance and its growth in use is best understood as a response to new challenges, such as globalization, increased international cooperation (e.g., the European Union), technological changes (e.g., international communication), the increased engagement of citizens, the rise of NGOs, and the changing role of the private sector. Augmenting these societal challenges are the increased complexity of policy issues and the resulting difficulty of implementing decisions with confidence and legitimacy (Pierre and Peters 2000; Walls et al. 2005).

Many classical theories of regulation presume a hierarchical orientation in which government is the central actor wielding the medium of power. In economic policy theories, the central actor is the market, whose medium is money. Both theoretical orientations are focused on a dominant actor that exercises power and control. The governance perspective views things very differently. It generates and implements collectively binding policy solutions in a complex context of multi-actor networks and processes. Power is distributed among the variety of actors in these networks. Added to government and markets are new civic actors, such as expert groups and NGOs, as well as ad hoc coalitions of citizens drawn from unclear ranks of supporters, posing the difficulty of determining whom they represent. Governance also includes the role of nonelected actors, such as civil servants, scientific and policy experts, think tanks, and a broad range of committees active in various ways in policy processes. The governance perspective thus draws attention to the diversity of actors, the diversity of their roles, the diversity in their logics of action, the manifold relationships among them, and all kinds of dynamic networks emerging from these relationships. Scholars who subscribe to the governance perspective examine social networks and the roles of the various actors in these dynamics as a way to understand policy development and political decision making.

Some authors differentiate between horizontal and vertical governance (Benz and Eberlein 1999; Lyall and Tait 2004). The horizontal level includes all actors within a defined geographical, functional, or political-administrative segment—for example, a governmental agency, stakeholder groups, industry, and a science association at the national level. The vertical level describes the links between these horizontal levels (such as the institutional relationships between the local, regional, state, and international levels). When various levels are involved, which is often the case, the notion of multi-level governance is advanced. In such a context, "government" is no longer a single entity (Rauschmayer et al. 2009).

1. The possible range of governance often has been provocatively termed governance by government, governance with government, and governance without government (Rosenau 1992, 1995), which emphasizes the decreasing role of the nation-state.

We should note that governance includes both a descriptive component and a normative component. In a descriptive use of the term, it refers to the complex web of manifold interactions between heterogeneous actors in a particular policy domain.
It also refers to the resultant decision or policy. Governance is then an observation and description of the approach to characterizing the scale and scope of problem solving. Here is a descriptive definition of governance: "structures and processes for collective decision-making involving governmental and non-governmental actors" (Nye and Donahue 2000: 12). Normatively, governance refers to the model or framework of how we ought to organize or manage collective decisions. The highly influential white paper of the European Commission on Governance (2001) propagates such a normative perspective on "good governance." In that white paper, a direct response to the BSE (mad cow disease) crisis, governance is presented as a prescriptive model, where transparency, stakeholder participation, accountability, and policy coherence are adopted as key principles.

With that general background, we can now focus on risk governance more specifically. It involves the translation of the substance and core principles of governance to the context of risk and problem solving (International Risk Governance Council 2005; Klinke and Renn 2012; Renn 2008c; Renn and Walker 2008; Renn, Klinke, and van Asselt 2011). It refers to a body of scholarly ideas about how to deal with challenging public risks. These ideas have been informed by forty years of interdisciplinary research drawing from sociological, psychological, and political science research on risk, from science and technology studies, and from research by policy scientists and legal scholars.2 This considerable body of knowledge provides a convincing, theoretically demanding, and empirically sound basis for critiquing the standard risk analysis model that consists of two dimensions: the probability of occurrence and the consequences if a risk is realized. Many risks cannot be calculated on the basis of quantitative probabilities and effects alone, as we demonstrated in Chapters 7–8. Risks too often are multidimensional. Too often they have qualitative features that are not easily amenable to quantification. Too often they exist in contexts that are ambiguous or intractably complex. In short, they have all the characteristics of systemic risks. This means that regulatory models that build on the standard model not only are inadequate but also constitute an obstacle to dealing with risk responsibly.

These limitations in standard risk analysis are the reason for the recent adoption of a governance framework to understand and manage risk. Risk governance does accommodate the qualitative features of risk. But it also comprehends the various ways in which many actors—individuals and institutions, public and private—deal with risks surrounded by deep uncertainty, complexity, or ambiguity.3 It includes formal institutions and regimes and informal arrangements. It refers to the totality of actors, rules, conventions, processes, and mechanisms concerned with how relevant risk information is collected, analyzed, and communicated and how regulatory decisions are taken (International Risk Governance Council 2005, 2007; van Asselt 2007). However, risk governance is more than just descriptive shorthand for a complex, interacting network in which collectively binding decisions are taken to deal with a particular set of societal issues.
The aim is that risk governance—similar to governance in general—provides a conceptual as well as a normative basis for how to deal responsibly with uncertain, complex, or ambiguous risks in particular (van Asselt and Renn 2011).
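The standard two-dimensional model criticized above collapses a risk into two numbers multiplied together. A worked illustration in Python of why critics find this framing too thin (the probabilities and consequence values are hypothetical):

```python
# Expected loss under the standard two-dimensional risk model:
# risk = probability of occurrence x magnitude of consequence.
# All probabilities and consequence values are hypothetical.
import math

def expected_loss(probability, consequence):
    """Risk as probability times consequence, per the standard model."""
    return probability * consequence

# A frequent small-loss risk and a rare catastrophic-loss risk
frequent_small = expected_loss(0.1, 1_000)      # e.g., routine accidents
rare_large = expected_loss(0.0001, 1_000_000)   # e.g., a rare plant failure

# The two-number model ranks both risks as equivalent, although their
# uncertainty, complexity, and ambiguity may differ radically.
print(math.isclose(frequent_small, rare_large))  # True
```

The calculation works fine for simple, actuarially well-documented risks; the governance critique is that it treats the two risks above as interchangeable even when one is surrounded by deep uncertainty or ambiguity and the other is not.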

From Simple to Systemic Risks

At the core of risk governance is the recognition that risk comes in many flavors. In 1995, the Dutch Health Council phrased it this way: "not all risks are equal" (quoted in van Asselt and Renn 2011: 440). The modern concept of risk can be traced to the seventeenth century in the work of such mathematical luminaries as Blaise Pascal, Pierre de Fermat, and Christiaan Huygens, and to the emergence of the word probability (Hacking 1975). But with the exception of the insurance industry, the application of risk to decisions about investments, technology, and the environment is a child of the twentieth century (Knight 1921; Starr 1969). The predominant formulation adopted for application has been to treat risk in terms of probability and effects, of dose and response, or of agent and consequences. This dominant framing of risk underlies what has been called the technocratic, decisionistic, and economic models of risk assessment and management (cf. Löfstedt 1997; Millstone et al. 2004; Renn 2008c: 10). This framing of risk, like all models of reality, abbreviates the many facets of risk into a simple two-dimensional cause-and-consequence or dose-and-response model. This is not much of a problem for simple risks, such as annual auto fatalities, where considerable actuarial data are available to estimate probabilities. The cause of the risk is generally well known; the potential negative consequences are fairly obvious; and the uncertainty is low, so there is little ambiguity about the interpretation of the risk. Simple risks are ostensible and recurrent and therefore grounded in ontological reality. They follow long-term frequency distributions, meaning they are less affected by current or future disturbances. As a consequence, a whole toolbox of statistics is available, and its application is meaningful. Examples include a wide variety of accidents, such as those involving automobiles and airplanes, and regularly recurring natural events, such as seasonal flooding.

But many risks are not simple and cannot be calculated as a function of probability and effects. This alternative view of risk, shared by an increasing group of risk scholars, explicitly challenges the idea of risk inherited from scholars such as Frank Knight (1921), in which risk is restricted to numerically defined probability distributions (Aven and Renn 2009a).

2. See the review of literature in van Asselt and Renn 2011.
3. Ambiguity is a condition in which the presence of a risk is unclear, its scope is ill-defined, and it is not clear whether the risk is serious enough to characterize and manage. Ambiguity also refers to the plurality of legitimate viewpoints for evaluating decision outcomes and justifying judgments about their tolerability and acceptability. So ambiguity refers to a context of vagueness and the existence of multiple values and perspectives.
As we pointed out earlier, many risks have qualitative features that are not easily captured in the standard two-dimensional risk model. For example, the aesthetic and functional loss associated with the extinction of polar bears would be difficult to fit into that model. Risk governance highlights the importance of recognizing the key roles that uncertainty,4 complexity, and ambiguity play in many societal risks. Ironically, it is a consistent practice in most cases to assess and manage risks with those characteristics as if they were simple and amenable to the standard risk model. The conventional assessment and management routines do not do justice to the confounding characteristics of such risks. This practice leads to a whole range of difficulties, among them unjustified amplification or irresponsible attenuation of the risk, sustained controversy, deadlocks, legitimacy problems, unintelligible decision making, trade conflicts, and border conflicts. The main message from Chapters 7–8 is that we urgently need to develop better conceptual and operational approaches to understand and characterize, let alone manage, non-simple risks.

4. Some like-minded authors prefer to reconceptualize risk in a way that renders the addition of "uncertain" superfluous. For example, Terje Aven and Ortwin Renn (2009a: 2) suggest that risk be redefined as a reference to "uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value" (see also Rosa 2003, 2010). We agree with such definitions and use them, as well.

Lessons for Risk Governance

Risk governance is a paradigm, an orienting perspective of highly contextualized practices for understanding and managing risks. Hence, it is not a model in the strict sense of the word. However, it does incorporate the results of model applications where appropriate. The aims of risk governance are more fundamental. Risk governance aims for a paradigm shift from the conventional practices in the field. It also seeks to orient risk professionals to a broader concept of risk (van Asselt and Renn 2011). It is a dynamic, adaptive decision-making process of continuous and gradual learning and adjustment that permits prudent handling of complexity, scientific uncertainty, or sociopolitical ambiguity.

Adaptive and integrative capacity in risk governance processes encompasses a broad array of structural and procedural means and mechanisms by which politics and society can handle collectively relevant risk problems. In practical terms, adaptive and integrative capacity is the ability to design and incorporate the necessary steps in a risk governance process that allow risk managers to reduce, mitigate, or control the occurrence of harmful outcomes resulting from collectively relevant risk problems in an effective, efficient, and fair manner (Brooks and Adger 2005). The adaptive and integrative quality of the process requires the capacity to learn from previous and similar risk-handling experiences to inform current risk problems, as well as the capacity to apply these lessons adaptively to cope with future potential risk problems and surprises.

A significant institutional development for the promotion of risk governance was the founding of the International Risk Governance Council (IRGC) in 2002.
The IRGC (2005: 5) is an independent organization whose "work includes developing concepts of risk governance, anticipating major risk issues, and providing risk governance policy recommendations for key decision makers." In 2005, it suggested a process framework of risk governance (International Risk Governance Council 2005; Renn 2008c; Renn and Walker 2008). The framework structures the risk governance process into four phases: (1) pre-assessment; (2) appraisal; (3) characterization and evaluation; and (4) risk management (see Figure 9.1). Communication and stakeholder involvement were conceptualized as constant companions to all four phases of the risk governance cycle. The framework suggests a phase-by-phase process beginning with pre-assessment and ending with risk management. Each phase is further subdivided into functional components that need to be included to complete each step. Furthermore, there is a strict separation between knowledge acquisition and decision making and between physical and non-physical impacts (a distinction between risk assessment and so-called concern assessment—that is, a scientific investigation of people's perceptions and concerns associated with the respective risk).

Figure 9.1  The IRGC Risk Governance Process (Source: Adapted from International Risk Governance Council 2005: 65)

[Figure 9.1 depicts the risk governance cycle with communication at its center, divided into an assessment sphere (generation of knowledge) and a management sphere (decision on and implementation of actions). Pre-assessment covers problem framing, early warning, screening, and determination of scientific conventions. Risk appraisal comprises risk assessment (hazard identification and estimation; exposure and vulnerability assessment; risk estimation) and concern assessment (risk perceptions; social concerns; socioeconomic impacts). Tolerability and acceptability judgment comprises risk characterization (risk profile; judgment of the seriousness of risk; conclusions and risk reduction options) and risk evaluation (judging the tolerability and acceptability; need for risk reduction measures). Risk management comprises decision making (option identification and generation; option assessment; option evaluation and selection) and implementation (option realization; monitoring and control; feedback from risk management).]

Figure 9.2  A Modified Risk Governance Framework (Source: Adapted from Renn and Klinke 2012: 58)

[Figure 9.2 shows a governance institution cycling through pre-estimation, interdisciplinary estimation, characterization, evaluation, management, and monitoring and control, supported by human resources, social capital, financial and technical resources, and institutional means.]

Andreas Klinke and Ortwin Renn (2012) have proposed some alterations to the IRGC risk governance model, because it appears too rigid to be applied to systemic risks. They recognized the need for a comprehensive risk governance model with additional adaptive and integrative capacity. The modified framework suggested by Klinke and Renn (2012) consists of the following interrelated activities: pre-estimation, interdisciplinary risk estimation, risk characterization, risk evaluation, and risk management. This requires the capacity of risk governance institutions to use resources effectively (see Figure 9.2). Appropriate resources include institutional and financial means as well as social capital (e.g., strong institutional mechanisms and configurations, transparent decision making, allocation of decision-making authority, formal and informal networks that promote collective risk handling, and education), technical resources (e.g., databases, computer software and hardware, research facilities), and human resources (e.g., skills, knowledge, expertise, epistemic communities). Hence, the adequate involvement of experts, stakeholders, and the public in the risk governance process is crucial for producing and conveying adaptive and integrative capacity in risk governance institutions (cf. Pelling et al. 2008; Stirling 2008). Since the social acceptance of any risk governance response to risk problems associated with complexity, uncertainty, or ambiguity is critical, risk handling and response strategies need to be flexible, and risk governance approaches need to be iterative and inclusionary.


Pre-estimation

Risk, while an ontologically real phenomenon, is understood only via mental constructions that result from how people perceive uncertain phenomena. Those perceptions, interpretations, and responses are shaped by social, political, economic, and cultural contexts (cf. International Risk Governance Council 2005; Luhmann 1993; Organization for Economic Cooperation and Development 2003). At the same time, those mental constructions are informed by experience and knowledge about events and developments in the past that were connected with real consequences. That the understanding of risk is a social construct with real consequences is contingent on the presumption that human agency can prevent harm. Our understanding of risk as a construct has major implications for how risk is considered. While risks have an ontological status, understanding them is always a matter of selection and interpretation. What counts as a risk to one party may be destiny explained by religion to a second party, or even an opportunity to a third party. Although societies over time have gained experience and collective knowledge of the potential effects of events and activities, one can neither anticipate all potential scenarios nor worry about all of the many potential consequences of a proposed activity or an expected event. At the same time, it is impossible to include all possible options for intervention. Therefore, societies always have been and always will be selective in what they choose to consider and what they choose to ignore. Pre-estimation, therefore, involves screening to winnow risk candidates from a large array of actions and problems. Here it is important to explore what political and societal actors (e.g., governments, companies, epistemic communities, and NGOs) as well as citizens identify as risks.
Equally important is to discover what types of problems they identify and how they frame them in terms of risk and in terms of uncertainty, complexity, and ambiguity. This step is referred to as framing: the process by which political and societal actors rely on schemes of selection and interpretation to understand and respond to those phenomena that are relevant risk topics (Kahneman and Tversky 2000; Nelson et al. 1997; Reese 2007). According to Robert Entman (1993: 52), "To frame is to select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described." Perceptions and interpretations of risk depend on frames of reference (Daft and Weick 1984). Framing implies that pre-estimation requires a multi-actor and multi-objective governance structure. Governmental authorities (national, supranational, and international agencies), risk producers, opportunity takers (e.g., industry), those affected by risks and benefits (e.g., consumer organizations, local communities, and environmental groups), and interested parties (e.g., the media or experts) are all engaged. They will often debate the appropriate frame for conceptualizing the problem. What counts as risk may vary greatly among these actor groups.


Interdisciplinary Risk Estimation

For political and societal actors to arrive at reasonable decisions about risks in the public interest, it is not enough to consider only the results of risk assessments, scientific or otherwise. To understand the concerns of affected people and various stakeholders, information about their risk perceptions and their concerns about the direct consequences should the risk be realized is essential and should be taken into account by risk managers. Interdisciplinary risk estimation consists of a systematic assessment not only of the risks to human health and the environment but also of related concerns, as well as social and economic implications (cf. International Risk Governance Council 2005; Renn and Walker 2008). The interdisciplinary risk estimation process should be informed by scientific analyses. Yet in contrast to traditional risk regulation models, the scientific process includes the biophysical sciences as well as the social and economic sciences. Interdisciplinary risk estimation comprises two activities. The first is risk assessment, which produces the best estimate of the likelihood and the physical harm that a risk source may induce. The second is concern assessment: identifying and analyzing the issues that individuals or society as a whole link to a certain risk. For this purpose, the research repertoire of the social sciences, such as survey methods, focus groups, econometric analysis, macroeconomic modeling, and structured hearings with stakeholders, may be used. In 2003, the Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen (German Advisory Council on Global Environmental Change; WBGU) offered a set of criteria to characterize risks that go beyond the classic components of probability and extent of damage. The WBGU isolated and validated eight measurable risk criteria through a rigorous process of interactive surveying.
Experts from the biophysical sciences and the social sciences were asked to characterize risks based on the dimensions that they would use to substantiate a judgment on tolerance of risk. Their input was subjected, through discussion sessions, to a comparative analysis. To identify the eight definitive criteria, the WBGU distilled the experts' observations down to those that appeared most influential in the characterization of different types of risk. In addition, alongside the expert surveys the WBGU performed a meta-analysis of the major insights gleaned from existing studies of risk perception and evaluated the risk management approaches adopted by countries including the United Kingdom, Denmark, the Netherlands, and Switzerland. The WBGU's long exercise of deliberation and investigation pinpointed the following eight physical criteria for the evaluation of systemic risks:

1. Extent of damage, or the adverse effects arising from a risk—measured in natural units such as deaths, injuries, or production losses.
2. Probability of occurrence, an estimate of the relative frequency of a discrete or continuous loss function that could arise from the manifestation of a risk.


3. Incertitude, an overall indicator of the degree of remaining uncertainties inherent in a given risk estimate.
4. Ubiquity, which defines the geographic spread of potential damages and considers the potential for damage to span generations.
5. Persistence, which defines the duration of potential damages, also considering potential impact across the generations.
6. Reversibility, or the possibility of restoring the situation, after the event, to the conditions that existed before the damage occurred (e.g., reforestation, the cleaning of water).
7. Delay effect, which characterizes the possible extended latency between the initial event and the actual impact of the damage it caused. The latency itself may be of a physical, chemical, or biological nature.
8. Potential for mobilization, understood as violations of individual, social, or cultural interests and values that generate social conflicts and psychological reactions among individuals or groups of people who feel that the consequences of the risk have been inflicted on them personally. Feelings of violation may also result from perceived inequities in the distribution of costs and benefits.

Interestingly, the final criterion—mobilization—was the only one aimed at describing public response (whether it is acceptance or outrage) that found favor among all the experts consulted by the WBGU. Subsequently, a research team at the Center of Technology Assessment in Baden-Württemberg concluded that a broader understanding of this important criterion for the evaluation of systemic risks is needed. The team unfolded the compacted mobilization index and divided it into four major, identifiable elements that drive the potential for mobilization (Klinke and Renn 2002):

1. The inequity and injustice associated with the distribution of risks and benefits over time, space, and social status.
2. The psychological stress and discomfort associated with the risk or the risk source, as measured by psychometric scales.
3. The potential for social conflict and mobilization, which translates into the degree of political or public pressure on agencies with responsibility for risk regulation.
4. The likely spillover effects that may occur when highly symbolic losses have repercussions in seemingly unconnected areas, such as financial markets or the loss of credibility in management institutions.5

5. These spillover effects have been the main target of the social amplification of risk framework. This theory was developed by a research team at Clark University in the late 1980s (e.g., Kasperson et al. 1988) and has been shown to be effective across a range of applications (Rosa 2003).
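To make the WBGU characterization concrete, the eight criteria above can be represented as a simple risk profile structure. This is an illustrative sketch only: the 0–3 ordinal scale, the field names, and the example values are hypothetical and are not part of the WBGU methodology.

```python
from dataclasses import dataclass

@dataclass
class WBGURiskProfile:
    """A risk scored on the eight WBGU criteria.

    Hypothetical 0-3 ordinal scale: 0 = negligible, 1 = low,
    2 = medium, 3 = high.
    """
    extent_of_damage: int   # adverse effects in natural units (scaled)
    probability: int        # relative frequency of the loss function
    incertitude: int        # remaining uncertainty in the risk estimate
    ubiquity: int           # geographic spread of potential damage
    persistence: int        # duration of potential damage
    reversibility: int      # 3 = restoration effectively impossible
    delay_effect: int       # latency between initial event and impact
    mobilization: int       # potential for social conflict and outrage

    def flags(self, threshold: int = 2) -> list[str]:
        """Return criteria at or above the threshold, usable as screening flags."""
        return [name for name, score in vars(self).items() if score >= threshold]

# Hypothetical profile for a persistent, globally dispersed pollutant.
pollutant = WBGURiskProfile(2, 1, 2, 3, 3, 3, 2, 1)
print(pollutant.flags(threshold=3))  # -> ['ubiquity', 'persistence', 'reversibility']
```

A profile of this kind supports the comparative screening the WBGU performed without collapsing the criteria into a single probability-times-damage number.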


The four social criteria can be used by the risk analyst to assess the extra effect that the manifestation of a risk—or even its mere presence—may have on psychological or social responses among subject groups. They can be used to gauge how much such social side effects are likely to stretch beyond the expected impact ascertained through consideration of the risk in the context of the other seven physical criteria. A similar decomposition has been proposed by the United Kingdom's Environment Agency (1998). For example, in the report A Strategic Approach to the Consideration of "Environmental Harm" (Pollard et al. 2000), Environment Agency researchers propose two main criteria for the assessment of mobilization potential, each of which is subdivided into three further criteria: anxiety, divided into dread, unfamiliarity, and notoriety; and discontent, divided into unfairness, imposition, and distrust. In a recent draft document, the U.K. Treasury Department (Her Majesty's Treasury 2004) recommended a risk classification that includes hazard characteristics, the traditional risk assessment variables such as probability and extent of harm, indicators on public perception, and the assessment of social concerns. The document offers a tool for evaluating public concerns against six factors that are centered on the hazards leading to a risk, the risk's effects, and its management:

1. Perception of familiarity and experience with the hazard.
2. Understanding of the nature of the hazard and its potential effects.
3. Repercussions of the risk's effects on intergenerational, intragenerational, or social equity.
4. Perception of fear and dread in relation to a risk's effect.
5. Perception of personal or institutional control over the management of a risk.
6. Degree of trust in risk management organizations.
The methodology for including social criteria in the formal risk evaluation process needs further refinement, since it is an area of risk analysis that remains in its infancy. Fortunately, risk governance bodies and management agencies around the world are developing different approaches to an extended evaluation process. Their work is vital to the ultimate success of risk governance in the modern world, because gaining a better understanding of the social and psychological criteria that affect human reactions to risk is a crucial requirement for the characterization and evaluation of systemic risk.

Risk Evaluation

A heavily disputed task in the risk governance process concerns the procedure for evaluating the societal acceptability or tolerability of a risk. In classical approaches, risks are ranked and prioritized based on a combination of probability (how likely is it that the risk will be realized?) and impact (what are the consequences if the risk does occur?) (Klinke and Renn 2002; Klinke and Renn 2012; Renn 2008c: 149ff.). However, as described above, in situations of uncertainty, complexity, and ambiguity, risks cannot be treated only in terms of likelihood (probability) and (quantifiable) impacts. This standard two-dimensional model ignores many important features of risk. Values and issues such as reversibility, persistence, ubiquity, equity, catastrophic potential, controllability, and voluntariness need to be integrated into risk evaluation. Furthermore, risk-related decision making is neither about risks alone nor usually about a single risk. Evaluation requires risk-benefit evaluations and risk-risk tradeoffs. So by definition, risk evaluation is multidimensional. To evaluate risks, the first step is to characterize them on all of the dimensions that matter to the affected populations. Once the risks are characterized in a multidimensional profile, their acceptability can be assessed. Furthermore, there are competing, legitimate viewpoints over evaluations, over whether there are or could be adverse effects, and if there are, whether these risks are tolerable or even acceptable. As noted above, drawing the lines between "acceptable," "tolerable," and "intolerable" risks is one of the most controversial and challenging tasks in the risk governance process. The U.K. Health and Safety Executive developed a procedure for chemical risks based on risk-risk comparisons (Löfstedt 1997). Some Swiss cantons, such as Basel-Landschaft, experimented with roundtables consisting of industry, administrators, county officials, environmentalists, and neighborhood groups. As a means for reaching consensus, two demarcation lines were drawn: one between acceptable and tolerable risks and one between tolerable and intolerable risks.
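The classical two-dimensional ranking, together with the two demarcation lines of such a traffic-light scheme, can be sketched in a few lines. The thresholds below are hypothetical illustrations, not values from the Swiss roundtables or the U.K. Health and Safety Executive:

```python
# Classical two-dimensional evaluation: expected harm = probability x impact,
# partitioned into three zones by two demarcation lines. The threshold values
# are hypothetical and serve only to illustrate the scheme.
def evaluate(probability: float, impact: float,
             acceptable_below: float = 0.1,
             tolerable_below: float = 1.0) -> str:
    """Classify a risk as acceptable, tolerable, or intolerable."""
    expected_harm = probability * impact
    if expected_harm < acceptable_below:
        return "acceptable"    # no further management needed
    if expected_harm < tolerable_below:
        return "tolerable"     # worth taking, but reduction measures required
    return "intolerable"       # ban or phase out the risk-creating activity

print(evaluate(0.01, 5.0))   # -> acceptable
print(evaluate(0.05, 10.0))  # -> tolerable
print(evaluate(0.9, 20.0))   # -> intolerable
```

The sketch also makes the criticism above concrete: reversibility, ubiquity, equity, and the other dimensions discussed in this chapter simply have no place in this calculation.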
Irrespective of the means selected to support this task, the judgment on acceptability or tolerability is contingent on making use of a variety of different knowledge sources. In other words, it requires taking the interdisciplinary risk estimation seriously. Risk evaluations, an epistemological issue, generally rely on causal and principled presuppositions as well as worldviews (cf. Goldstein and Keohane 1993). Causal beliefs refer to the scientific evidence from risk assessment on whether, how, and to what extent the risk might cause harm. This dimension emphasizes cause-effect relations and provides guidance for which strategy is most appropriate to meet the goal of risk avoidance, risk reduction, or adaptation. But risks typically embed normative issues, too. Looming below all risks is the question of how safe is safe enough, implying a normative or moral judgment about the acceptability of risk and the tolerable burden that risk producers can impose on others. The results of the assessment can provide hints about what kinds of mental images are present and which moral judgments guide people's perceptions and choices. Of particular importance is the perception of just or unjust distribution of risks and benefits. How these moral judgments are made and justified depends to a large degree on social position, cultural values, and worldviews. The judgments also depend on shared ontological convictions (beliefs about the state of the world) and ethical convictions (beliefs about how the world should be). This collection of forces influences thinking and evaluation strategies. The selection of strategies for risk management therefore is understandable only within the context of broader worldviews. Hence, society can never derive acceptability or tolerability from the assessment evidence alone. Facts do not speak for themselves. Furthermore, the evidence is essential not only to reflect the degrees of belief about the state of the world regarding a particular risk but also to know whether a value might be violated or not (and to what degree). In sum, risk evaluation involves the deliberative effort to characterize risks in terms of acceptability and tolerability in a context of uncertainty, ambiguity, and complexity. Such contexts often imply that neither the risks nor the benefits can be clearly identified. Multiple dimensions and multiple values are always in play and must be considered. Finally, risk evaluations may shift over time. But the harsh reality is this: notwithstanding uncertainty, complexity, and ambiguity, decisions must be made. It may well be possible at a certain point in time to agree whether risks are acceptable, tolerable, or intolerable. When the tolerability or acceptability of risks is heavily contested, that, too, is highly relevant input to the decision-making process.

Risk Management

Risk management starts with a review of the output generated in the previous phases of interdisciplinary risk estimation, characterization, and risk evaluation. If the risk is acceptable, no further management is needed. Tolerable risks are those for which the benefits are judged to be worth the risk but risk reduction measures are necessary. If risks are classified as tolerable, risk management needs to design and implement actions that either render these risks acceptable or sustain their tolerability in the long run by introducing risk reduction strategies, mitigation strategies, or strategies aimed at increasing societal resilience at the appropriate level. If the risk is considered intolerable, notwithstanding the benefits, risk management should be focused on banning or phasing out the activity creating the risk. If that is not possible, management should be devoted to eliminating or mitigating the risk in other ways or to increasing societal resilience. If the risk is contested, risk management can aim at finding ways to create consensus. If that is impossible or highly unlikely, the goal would be to design actions that increase tolerability among the parties most concerned or to stimulate alternative courses of action. A variety of ways exist to design the process of identifying risk management options in contexts of uncertainty, complexity, and ambiguity (Klinke and Renn 2002; Renn 2008c: 173ff.). In those situations, routine risk management within risk assessment agencies and regulatory institutions is inappropriate, since the risk problems are not sufficiently known or are contested. Klinke and Renn (2002) have developed a decision tree for the various combinations of these contextual factors. In a case where complexity is dominant and uncertainty and ambiguity are low, the challenge is to invite experts to deliberate with risk managers to understand the complexity. Flood risk management may be a good example of this.
Although the occurrence of certain types of floods follows a random pattern, one can address vulnerability and design emergency management actions well in advance. The major challenge is to determine the limit to which one is willing to invest in resilience. And once the complexity is well understood, it is a question of political will to implement the desired level of protection. Heiko Garrelts and Hellmuth Lange (2011: 207–208), for example, emphasize the need for state decisiveness in such cases: "for all the indispensability of participatory approaches—for reasons of integrating citizens' expertise, for reasons of the additional need for legitimacy in face of existing future uncertainty—it is the state that remains the institutional guarantor for ensuring that problems can be addressed from diverging perspectives. The ability of state agencies to intervene with sanctions and directives addresses the question of ultimate responsibility, which is all too often overlooked by participation oriented approaches." In cases of high complexity, low uncertainty, and low ambiguity, Garrelts and Lange (2011) suggest a reversal from governance to government. We do not argue that this conclusion holds for all risk management situations. It is conceivable that the strategy is inapplicable to situations, such as flooding, where the effects of climate change complicate the matter or where societal actors resist particular flood risk management options, such as higher dikes or the dismantling of settlements in flood plains, for aesthetic or cultural reasons (cf. Wisner et al. 2004). The second node in the Klinke and Renn (2002) decision tree concerns risk problems that are characterized by high uncertainty but low ambiguity. They argue that expanded knowledge acquisition may help to reduce uncertainty.
If uncertainty cannot be reduced by additional knowledge (or can be reduced only in the long run), however, they argue for what they call "precaution-based risk management." Precaution-based risk management explores a variety of options: containment, diversification, monitoring, and substitution. The focal point here is to find an adequate and fair balance between over-cautiousness and insufficient caution. This argues for a reflective process involving stakeholders to ponder concerns, economic budgeting, and social evaluations. For risk problems that are highly ambiguous (regardless of whether they are low or high on uncertainty and complexity), the Klinke and Renn (2002) decision tree recommends "discourse-based management." Discourse management requires a participatory process involving stakeholders, especially the affected public. The aim of such a process is to produce a collective understanding among all stakeholders and the affected public about how to interpret the situation and how to design procedures for collectively justifying binding decisions on acceptability and tolerability that are considered legitimate. In such situations, the task of risk managers is to create conditions under which those who believe that the risk is worth taking and those who believe otherwise are willing to respect each other's views and to construct and create strategies acceptable to the various stakeholders and interests. But deliberation is not a guarantee of a smooth risk management process. Rolf Lidskog, Ylva Uggla, and Linda Soneryd (2011) argue that complexity and ambiguity are grounds for continuous conflict that is difficult, if not impossible, to resolve. The reduction of complexity simultaneously implies reducing the number of actors counted as relevant or legitimate participants. The resolution of ambiguity requires broad representation of all actors involved in the case, so it is difficult to find the perfect, or perhaps even optimal, path between functionality and inclusiveness. In any case, our response to this inherent conflict is to invest in structuring an effective and efficient process of inclusion (whom to include) and closure (what counts as evidence and the adopted decision-making rules) (Aven and Renn 2010: 181ff.; Renn 2008c: 284ff.). In sum, neither the characterization of the systemic risk at hand (uncertain, complex, or ambiguous) nor the contingent evaluation of the risk (acceptable, tolerable, intolerable, disputed) results in a simple typology for risk management. Nevertheless, decisions must be made, for even non-decisions about risk are decisions—often with far-reaching consequences. Characterizations and evaluations of systemic risks do provide some guidance for risk management: how to design a process that appears sensible, that prioritizes risks, and that delineates reasonable options for different contexts. It is clear that the traditional risk management style, resting on a two-dimensional assessment model, is not just inadequate for addressing systemic risks; it may also exacerbate the governance challenge by fueling societal controversies over risk.
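The contextual logic of the Klinke and Renn (2002) decision tree discussed above can be summarized in a short dispatch function. This is an illustrative reduction of their tree, and the branch order (ambiguity before uncertainty before complexity) is an assumption of the sketch, not a claim about the original:

```python
def management_discourse(complexity: str, uncertainty: str, ambiguity: str) -> str:
    """Map risk-problem characteristics ('low'/'high') to a management approach."""
    if ambiguity == "high":
        # Contested interpretations and values call for discourse-based management.
        return "discourse-based: participatory process with stakeholders and the affected public"
    if uncertainty == "high":
        # Knowledge gaps that cannot be closed soon call for precaution.
        return "precaution-based: containment, diversification, monitoring, substitution"
    if complexity == "high":
        # Intricate but resolvable cause-effect chains call for expert deliberation.
        return "complexity-based: experts deliberating with risk managers"
    return "routine: standard risk management by agencies and regulators"

# Flood risk as characterized above: complexity dominant, uncertainty and ambiguity low.
print(management_discourse("high", "low", "low"))
```

The dispatch makes visible why ambiguity dominates in this reading: once values are contested, neither more expertise nor more precaution can substitute for a participatory discourse.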

Risk Communication and Participation

Effective communication across all relevant interests is one of the key challenges in risk governance. It is not a distinct stage (in contrast to how it is often treated in the risk literature) but central to the entire governance process. Positively framed, communication is at the core of any successful risk governance activity. Negatively framed, a lack of communication destroys risk governance. Early on, risk communication was predicated on the view that disagreements between experts and citizens over risks were due to citizens' lack of accurate knowledge. The solution was sought in the so-called deficit model of communication and engagement: the gap in knowledge between experts and laypeople was to be closed through education and persuasion of the deficient public (Fischhoff 1995). Implied in this solution was the belief that an educated public would perceive and evaluate risks the same way as experts. However, this deficit model has been subjected to considerable criticism. For one thing, increased knowledge often elevated citizens' concerns about risk, creating an even greater divergence between citizens and experts. For another, as Nick Pidgeon and his colleagues (2005: 467) aptly phrased it, "One of the most consistent messages to have arisen from social science research into risk over the past 30 years is that risk communication . . . needs to accommodate far more than a simple one-way transfer of information. . . . [T]he mere provision of 'expert' information is unlikely to address public and stakeholder concerns or resolve any underlying societal issues." Third, research on risk controversies has demonstrated that, in general, the public does not always misunderstand science. Furthermore, experts and governments may also misunderstand public perceptions (Horlick-Jones 1998; Irwin and Wynne 1996). The important point to emphasize is that risk communication and trust are delicately interconnected processes. A large volume of literature demonstrates the connection between trust in the institutions managing risks and citizens' perceptions of the seriousness of risks (Earle and Cvetkovich 1996; Luhmann 1980; Poortinga and Pidgeon 2003; Whitfield et al. 2009). Communication breakdowns can easily damage trust. However, communication strategies that misjudge the context of communication, in terms of the level of and reasons for distrust, may boomerang, resulting in increased distrust (Löfstedt 2005). Communication strategies proliferate. We define communication as meaningful interactions in which knowledge, experiences, interpretations, concerns, and perspectives are exchanged (cf. Löfstedt 2003). Communication in the context of risk governance refers to exchanges among policymakers, experts, stakeholders, and affected publics. The aim of communication is to provide a better basis for responsible governing of uncertain, complex, or ambiguous risks; it also aims to enhance trust and social support. Depending on the nature of the risks and the context of governing choices, communication will serve various purposes. It might serve to share information about the risks and possible ways to handle them. It might support building and sustaining trust among various actors so that particular arrangements or risk management measures become acceptable. It might result in actually engaging people in risk-related decisions, through which they gain ownership of the problem. However, communication in the context of risk governance is not simple. It is not just a matter of having accurate assessments of risks. It is not just a matter of bringing people together. It is not just a matter of effective communication. It requires all of these features and more.
Also required is a set of procedures for facilitating the discourse among various actors from different backgrounds so they can interact meaningfully in the face of uncertainty, complexity, or ambiguity. Meaningful communication features multiple actors. The highly influential U.S. National Research Council report (Stern and Fineberg 1996) is an important milestone in the recognition of the need for risk decision making to be an inclusive multi-actor process. It also was a germinal precursor to the idea of risk governance, with its emphasis on the coordination of risk knowledge and expertise with citizens' and other stakeholders' priorities. Scholars subscribing to the idea of risk governance share its procedural and normative position that the involvement of interested and affected parties in collective decision making about risk is needed (see, e.g., De Marchi 2003; Irwin 2008; Jasanoff 2004; Rosa and Clark 1999; Stirling 2007). One key challenge to risk governance is that of inclusion: which stakeholders and publics should be included in governance deliberations? The inclusion challenge has deep implications. Contrary to the conventional paradigm, in which risk topics are usually identified by experts, in the analytic-deliberative process underpinning risk governance, public values and social concerns are key agents for identifying and prioritizing risk topics. Inclusion means more than simply including relevant actors. That is the outmoded practice of "public hearings," where relevant actors are accorded a fairly passive role. Inclusion means that actors play a key role in framing (or pre-assessing) the risk (International Risk Governance Council 2005; Renn and Schweizer 2009; see also Roca, Gamboa, and Tàbara 2008). Inclusion should be open to input from civil society and adaptive at the same time (Stirling 2004). Crucial questions in this respect are: Who is included? What is included? What are the scope and mandate of the process? (see also Renn and Schweizer 2009). Inclusion can take many different forms: roundtables, open forums, negotiated rulemaking exercises, mediation, or mixed advisory committees consisting of scientists and stakeholders (Renn 2008c: 332ff.; Rowe and Frewer 2000; Stoll-Kleemann and Welp 2006). Due to a lack of agreement on method, social learning promoted by structured and moderated deliberations is required to find out what level and type of inclusion is appropriate in the particular context and for the type of risk involved. The methods that are available have contrasting strengths and weaknesses (Pidgeon et al. 2005). A focus on inclusion is defended on several grounds (cf. Roca, Gamboa, and Tàbara 2008). First, we argue that in view of uncertainty, complexity, or ambiguity, there is a need to explore various sources of information and to incorporate various perspectives. It is important to know what the various actors label as risk problems and which most concern them. Here inclusion is interpreted as a means to an end—a procedure for integrating all relevant knowledge and for including all relevant concerns. Second, from a democratic perspective, actors affected by the risks or the ways in which the risks are governed have a legitimate right to participate in deciding about those risks.
Here inclusion is interpreted not just as a means but also as an end in itself. At the same time, inclusion is a means to agree on principles and rules that should be respected in the processes and structures of collective decision making. Third, the more actors who are involved in weighing the heterogeneous pros and cons of risks, the more socially robust the outcome will be. When uncertainty, complexity, or ambiguity reigns, there is no simple decision rule. In this view, inclusion is also a way to organize checks and balances between various interest and value groups in a plural society. Hence, inclusion is intended to support the coproduction of risk knowledge, the coordination of risk evaluation, and the design of risk management. Social learning is also required here. And it is not simply a matter of degree, where more inclusion equals more learning and therefore better risk governance. The degree and type of inclusion may vary depending on the phase of the governance process and the risk context. In each phase and context, careful thought is needed about the kind and degree of inclusion necessary. Hence, differentiation is not the exception; it is the rule. The task of inclusion is to organize productive and meaningful communication among a range of actors who have divergent interests but complementary roles. The cumulative empirical analyses suggest that providing a platform


for the inclusion of a variety of stakeholders—to deliberate over their concerns and exchange arguments—can help to de-escalate conflicts and legitimize final decisions. Nevertheless, however careful the establishment of the platform and the decision rule about inclusion, there will always be some disenfranchised or disappointed actors in society (Beierle and Cayford 2002; National Research Council 2008).

Conclusion

In this chapter, we have explored the genesis and analytical scope of risk governance. We have argued for a broader, paradigmatic turn from government to governance. We argued that in the context of risk, the idea of governance is conceptualized as both a descriptive and a normative activity: as a description of how decisions are made and as a normative model for improving structures and processes of risk decisions. Risk governance draws attention to the fact that many risks are not simple; they cannot all be calculated as a function of probability and effect or consequence. Many risks embed complex tradeoffs of costs and benefits. Systemic risks are characterized by their complexity, uncertainty, and ambiguity. Risk governance underscores the need to ensure that societal choices and decisions adequately address these complicating features. However, conventional risk characterization typically treats, assesses, and manages such risks as if they were simple. This practice has led to many failures to deal adequately with risks such as genetic engineering, nuclear energy, the global financial crisis, global warming, and cyber-terrorism, demonstrating an urgent need to develop alternative concepts and approaches to deal with uncertain, complex, or ambiguous risks. We have argued that a risk governance framework is the promising alternative. We have also outlined the idea of adaptive and integrative risk governance, organizing governance into five phases: pre-estimation, interdisciplinary risk estimation, risk characterization, risk evaluation, and risk management. Each phase entails communication processes and public involvement. We also outlined the challenges that confront each phase. Our analysis demonstrates that risk governance is not simply an opportunistic buzzword but a disciplined argument for a paradigm shift. Paradigms and reforms shift not just in the abstract but also in practice. Such fundamental transitions are not easy.
Yet we hope that, by combining the insights of risk theory with an argument for governance, this volume helps to stimulate and facilitate that shift. We also hope it will nudge a change in risk practice. To lean on an old saw, the proof of any pudding is in the eating. In this chapter, we have laid out the recipe for a new pudding: a system of governance that both reflects underlying changes in how societies are attempting to manage risks and supersedes the government approach of the past. The eating can only come when that recipe is put to use. In Chapter 10, we describe an analytic-deliberative procedure for accomplishing just that.

10

An Analytic-Deliberative Process
A Proposal for Better Risk Governance

Most of the luxuries, and many of the so-called comforts of life, are not only not indispensable, but positive hindrances to the elevation of mankind.
—Henry David Thoreau, Walden; Or, Life in the Woods (1854)

A Way Out of the Risk Dilemma

After reviewing the main sociological analyses of risk and extending our perspective to the whole governance process, we find that many questions remain unanswered. How can one get priorities right in the politics of risk? Who should decide where limited money and resources should go? How can we establish a constructive discourse on risk management? How can modern societies cope with systemic risks on a global level? How should the old divide between the social sciences and the biophysical sciences in understanding and managing risk be overcome?
The obvious answer is that we need more cooperation and integration. We cannot allocate money to all risks; we need to set priorities. So if the experts pronounce in favor of noise protection and choose to ignore electromagnetic waves, while lay opinion perceives a greater risk in electromagnetic waves, the compromise of "something for everybody" becomes an impossibility. New strategies for governance must be found. Binding strategies will be possible only once we overcome the old assumption that laypeople and experts are always in opposition. If everything is based on an average of lay and expert findings—something that some social sciences tend to do—we will, indeed, find a deep rift between these two groups. But this rift obscures the fact that in expert circles, and within the great mass of laypeople, there is an enormous variety of opinions and assessments. Every lay perception has its opposite; every expert has a counter-expert. Different assessments arise depending on the makeup of a particular group of experts, even if the final decision is made solely by experts. Varying levels of knowledge are in competition with each other, and determining which among the competing claims represents the truth ultimately is not feasible. It is thus impossible to expect an unequivocal expert answer to an urgent question of risk, even if we are prepared to use sound science as a guideline for general risk policies. At the same time, it is equally unhelpful to base the determination of priorities in the politics of risk on a generalized lay perception. The differences in risk perception among laypeople are as extensive as those among experts, and we find the same problem in deciding which lay opinion will be the dominant one when it comes to judging risk. When people talk about risk, they are driven by personal or professional interest. If truth is replaced by interest, however, bargaining power is going to determine what is regarded as truth. This kind of replacement is a breeding ground for fundamentalism, with one side wanting to abolish any possible risk to the environment (jeopardizing the economy in this endeavor), while the other side wants to redefine risk as opportunity without taking ecological issues into account at all. Between these two extreme positions there is hardly any room for compromise other than the strategic kind, where the argument ends up with a philosophy of "You give me your risk and I will give you mine."
What is needed to establish the necessary priorities? How can we break this deadlock in determining the rationality of the politics of risk? Is it possible to integrate lay and expert findings? Can we even legitimize the politics of risk? Here are some reflections that outline our normative assumptions about risk governance in an increasingly complex environment.
We must abandon postmodern ideas about knowledge as a random social construct with no predominant criteria for truth or quality. The reality is that people suffer and die as a result of both correct and false information (Rosa 1998a).
It is particularly important to be quite clear about the limits of legitimate information in a situation of systemic risks, where environmental and technological decisions have far-reaching consequences and where our knowledge is severely limited, particularly regarding secondary and tertiary impacts. It is precisely the fuzziness of systemic risks that demands that we set clear boundaries between what scientific evidence can support and what appears to be nonsense or absurd. If we have no clear boundaries, there will be room for pseudoscientific legitimization of practically any fear of risk, no matter how far-fetched. We now have a number of methods and techniques at our disposal, such as the meta-analysis or the Delphi survey method, which allow us a fair overview of legitimate knowledge without needing to resort to public opinion with overall power of legislation (Gregory 2004; Webler et al. 1991). The scientific establishment itself has to limit the range of legitimate knowledge because it is bound by scientific rigor and has access to appropriate conflict-resolution procedures, and thus is equipped to resolve competing claims to truth. Expert opinion and lay perception need to be perceived as complementing, rather than competing with, each other. Practitioners who have designed public participation exercises on risk acceptance constantly report that lay participants insist on getting the best expert judgments before making decisions on their own (Lynn 1987; McDaniels 1998; Renn 2004; Wondelleck, Manring, and


Crowfoot 1996). They ask to learn more about the range of expert assessments and their professional evaluations. Once these questions are answered, participants address the political problem of how to deal with the remaining risks and uncertainties that cannot be resolved by scientific scrutiny. Acceptability cannot be set apart from expertise, but the best expertise is not sufficient to make prudent judgments about acceptability. The very essence of responsible action is to make viable and morally justified decisions in the face of uncertainty, based on a range of expert assessments and evaluated according to personal values, preferences, and interests. Scientific assessments have to be embedded into the context of criteria of acceptable risk, fair risk distribution, and an adequate extent of precautionary measures. These criteria most precisely reflect the main points of lay perception. For a rational politics of risk, it is therefore imperative to collect both ethically justifiable evaluation criteria and standards and the best available systematic knowledge that informs us about the performance of each risk source or risk-reduction option on the self-chosen criteria (McDaniels 1996). Ultimately, decisions about acceptable risks need to be based on a subjective mix of factual evidence, attitudes toward uncertainties, and moral standards (Kasperson and Kasperson 1983). An educated and equitable decision is possible only in the context of these three elements. This is what makes the polarization of the two camps, with experts brandishing rationality on one side and counter-experts claiming the moral high ground on the other side, particularly damaging. Risk governance is intrinsically bound up with a combination of systematic knowledge and moral assessment of the expected consequences. There are no unequivocal logical, factual, or normative guidelines on the question of the acceptability of nuclear technology, genetic engineering, or waste disposal by incineration.
A discourse without a systematic scientific basis is nothing but an empty vessel, while a discourse that disregards the moral aspects of available options will, by contrast, aid and abet amoral actions. There are, at present, three different types of risk discourse in our society. The first type is subject to the strategy of fear and is rooted in the tradition of doomsday scenarios (Taylor 1970), which conjures up a bleak vision of a technology that is anything but safe, threatening breakdowns, setbacks, and, ultimately, catastrophe. In this communication of fear, identity is forged by doom mongers who outdo one another in conjuring up doomsday scenarios, raising a collective self-pity to the status of a postmodern philosophy (Keen 2008; Postman 1993). A discourse of this type has a paralyzing effect on society and restricts activities that encourage the generation of information, as well as debates on ethics. Directly opposed to this is the discourse of risk acceptance. Here, all danger is dismissed as exaggerated; it is a figment of people's imaginations, according to this argument, and there is no limit to the opportunities provided by technology. This type of discourse could be called the communication of opportunity: the dominant idea is to ignore objective limits to actions and to see any risk and ambivalence as a challenge. Subjects such as ecological crises, the limits of the welfare state, the struggle of the individual to find meaning, and the moral maze—to name but a few—are seen by this philosophy of opportunity as nothing but superficial red herrings that offer potential for technical and political action. Believers in this kind of thinking require nothing but optimism to roll up their sleeves and get on with it. But this discourse ends in permanent self-delusion, often blaming scapegoats who have "blocked social progress" with their pessimism and their technophobia.
Neither the fear type nor the opportunity type of communication can do justice to the complex issues of the politics of risk, particularly systemic risks. We are convinced that the only sensible approach is the design discourse—that is, a communication of ambivalence, which allows for the creative perception of real opportunities based on knowledge of inherent boundaries and risks. A healthy fear of the uncertain and acknowledgment of the boundaries of creative design, on the one hand, and active power fueled by positive concepts of the future, as well as an awareness of the necessary technical and organizational backup, on the other hand, will help technological activity (in this situation of uncertainty) to achieve a balance between innovation and precaution. Our society is in need of more creative discourse as the only appropriate way to work through the problems and opportunities of future technological development, enabling us to deal wisely with inherent contradictions and uncertainties. We are still far away from having such design discourses embedded within our political culture. The problem is not the lack of a culture of debate; rather, it is linking the results of debate with official politics and winning public approval for them. The next subsections will outline a discourse strategy based on the seminal work of Jürgen Habermas that we believe is well suited to enlighten risk governance in the twenty-first century.

Expertise and Deliberation in Risk Governance

Elements of Decision Making in Pluralist Societies

At the foundation of any society is the need for effectiveness, efficiency, legitimacy, and social cohesion.1 Effectiveness refers to the need of societies to have a certain degree of confidence that human activities and actions will actually result in the consequences that the actors intended when performing them.

1. This analysis of societal functions and systems is based on the functional school of sociology and political science. Early representatives of this school who are relevant for the arguments presented here are David Easton, Theodore Lowi, Talcott Parsons, and Edward Shils (see Easton 1965; Lowi 1964; Parsons 1951, 1963, 1967; Parsons and Shils 1951). This school of thought has been heavily criticized for its stationary and equilibrium-based assumptions (see Coser 1956). Modern functionalists have responded to this criticism by adding agency, a dynamic and change-inducing component, to the theoretical framework (Alexander 1985; Alexander and Colomy 1990). Our approach has been inspired by the (neo)functional school of Bielefeld in Germany (Luhmann 1982, 1993; Münch 1982, 1996; Willke 1995). A similar approach for participatory technology assessment was pursued in Joss 2005.


Efficiency describes the degree to which scarce resources are used to reach the intended goal. The more resources that are invested to reach a given objective, the less efficient is the activity in question. Legitimacy is a composite term that denotes, first, the normative right of a decision-making body to impose a decision even on those who were not part of the decision-making process (issuing collectively binding decisions), and second, the factual acceptance of this right by those who might be affected by the decision (Renn 2008c: 286ff.; Suchman 1995). As a result, it includes an objective normative element, such as legality or due process, and a subjective judgment, such as the perception of acceptability (Luhmann 1983). Finally, social cohesion covers the need for social integration and collective identity despite plural values and lifestyles (Parsons 1971). Within the macro-organization of modern societies, these four foundations are predominantly handled by different societal systems: economy, science (expertise), politics (including legal systems), and the social sphere (Münch 1982; Parsons 1967; Renn 1992, 2008c). In the recent literature on governance, the political system is often associated with the rationale of hierarchical and bureaucratic reasoning; the economic system with monetary incentives and individual rewards; and the social sphere with the deregulated interactions of groups within the framework of a civil society.2 Another way to phrase these differences is by distinguishing among competition (market system), hierarchy (political system), and cooperation (sociocultural system).
Scientific input into these models is seen as an integral part of politics in the form of scientific advisory committees or the civil sector in the form of independent institutions of knowledge generation (Alexander 1993; Held 1995).3 In the field of decision making about risks, we prefer to operate with four separate subsystems, since scientific expertise is crucially important to risk decisions and cannot be subsumed under the other three systems (cf. Joss 2005: 198ff.). Since science is part of a society's cultural system, our proposed classification is also compatible with the classic division of society into four subsystems (politics, economics, culture, and social structure), which has been suggested by the functional school in sociology (Münch 1982; Parsons 1951). The picture of society is, of course, more complex than the division into four systems suggests. Many sociologists relate to the concept of "embeddedness" when describing the relationships among the four systems (Granovetter 1992; Scott and Meyer 1991). Each system is embedded in the other systems and mirrors the structure and functionality of the other systems in subsystems of their own. To make our argument, however, the simple version of four analytically distinct systems is sufficient.
Each of the four systems is characterized by several governance processes and structures adapted to the system properties and functions in question. The four systems and their most important structural characteristics are shown in Figure 10.1.

2. Overviews are provided in Ash 1997; Seligman 1991; Shils 1991. We adopt the definition of civil society put forward by Alexander (1993: 797): "the realm of interaction, institutions and solidarity that sustains public life of societies outside the worlds of economy and state."
3. For more on handling specifically environmental risks, see Hajer 1995.

[Figure 10.1 Four Central Systems of Society. The figure shows four panels: the economic system (cost–benefit balancing: property rights, private contracts, compensation for external effects); the expert system (knowledge and competence: test of truth claims, instrumental knowledge, enlightenment); the social system (values and fairness: mutual understanding, social values, lifestyles); and the political system (due process, liability laws, legal statutes, voting). Connecting labels in the figure include mediation, efficiency, expert advisory panels, legitimization, and participation. Source: Adapted from Renn 2008c: 287; based on Joss 2005; Münch 1982.]

What findings can be inferred from a comparison of these four systems? In the market system, decisions are based on the cost-benefit balance established on the basis of individual preferences, property rights, and individual willingness to pay. The conflict resolution mechanisms relate to civil law regulating contractual commitments, Pareto optimality (each transaction should make at least one party better off without harming third parties), and the application of the Kaldor–Hicks criterion (if a third party is harmed by a transaction, this party should receive financial or in-kind compensation to such an


extent that the utility gained through the compensation is at least equivalent to the disutility experienced through the transaction) (Scitovsky 1941). The third party should hence be at least indifferent between the situation before and after the transaction.4 Additional instruments for dealing with conflicts are (shadow) price setting, the transfer of rights of ownership for public or nonrival goods, and financial compensation (damages and insurance) to individuals whose utilities have been reduced by the activities of others. The main goal here is to be efficient. In politics, decisions are made on the basis of institutionalized procedures of decision making and norm control (within the framework of a given political culture and system of government). The conflict resolution mechanism in this sector rests on due process and procedural rules that ideally reflect a consensus of the entire population. In democratic societies, the division into legislative, executive, and judicial branches; defined voting procedures; and a structured process of checks and balances lie at the heart of the institutional arrangements for collective decision making. Votes in a parliament are as much a part of this governance model as is the challenging of decisions before a court. The goal here is to achieve legitimacy. Science has at its disposal methodological rules for generating, challenging, and testing knowledge claims, with the help of which one can assess decision options according to their likely consequences and side effects. If knowledge claims are contested and conflicts arise about the validity of the various claims, scientific communities make use of a wide variety of knowledge-based decision methods, such as methodological review or re-tests, meta-analysis, consensus conferences, Delphi, or (most relevant in this arena) peer review to resolve the conflicts and test the explanatory or predictive power of the truth claims.
These insights help policymakers understand phenomena and be effective in designing policies. Finally, in the social system, there is a communicative exchange of interests, preferences, and arguments, which helps all actors come to a solution. Conflicts within the social system are normally resolved by finding favorable arrangements for all parties involved, using empathy as a guide to explore mutually acceptable solutions, referring to mutually shared beliefs, convictions, or values, or relying on social status to justify one's authority. These mechanisms create social and cultural cohesion.
Socially relevant problems are rarely dealt with within the limits of one single system logic. Instead, they go through interrelated procedures, either sequentially or in parallel. For example, the political system can decide on a specific goal or target by parliamentary vote (e.g., a limit on automobile emissions) and then leave it to the market to implement this decision (such as organizing an auction to sell emission rights to all potential emitters). Or a governmental decree is reviewed by an expert panel or a citizen advisory committee. Of particular interest are decision-making processes that combine the logic of two or more systems (Renn 2008c: 289). The settlement of conflicts with the method of mediation or negotiated rulemaking can, for example, be interpreted as a fusion of economic and social rationale. The cooperation between experts and political representatives in joint advisory committees (i.e., the experts provide background knowledge, while politicians highlight preferences for making the appropriate choices) represents a combination of knowledge-oriented elements and political governance. Classic hearings are combinations of expert knowledge, political resolutions, and the inclusion of citizens in this process.5

4. In economic theory, the transaction is justified if the sum of the compensation is lower than the surplus that the parties could gain as a result of the planned transaction. However, the compensation does not need to be paid to the third party.
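The Pareto and Kaldor–Hicks tests invoked above for the market system can be made concrete with a small sketch. The code is our illustration, not part of the original framework; the utility figures are hypothetical, and each party's welfare is reduced to a single number.

```python
# Illustrative sketch of the Pareto and Kaldor-Hicks tests discussed above.
# "Utility" is simplified to one number per affected party; all figures
# are hypothetical.

def pareto_improvement(before, after):
    """True if no party is worse off and at least one party is better off."""
    pairs = list(zip(before, after))
    return all(a >= b for b, a in pairs) and any(a > b for b, a in pairs)

def kaldor_hicks_improvement(before, after):
    """True if total gains exceed total losses, so that winners could in
    principle compensate losers (the compensation need not be paid)."""
    return sum(after) > sum(before)

# A transaction between parties A and B that burdens third party C:
before = [10, 10, 10]   # utilities of A, B, C before the transaction
after = [18, 14, 7]     # A and B gain; C loses 3

print(pareto_improvement(before, after))        # False: C is harmed
print(kaldor_hicks_improvement(before, after))  # True: total surplus rises by 9

# If A pays C a compensation of 3 out of the surplus, C is indifferent
# and the outcome passes the Pareto test against the status quo:
compensated = [15, 14, 10]
print(pareto_improvement(before, compensated))  # True
```

The gap between the two tests mirrors the footnote above: the Kaldor–Hicks criterion only requires that compensation be possible, not that it actually be paid.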

Application to Risk Governance

Risk decisions are routinely made within all four systems. Managers may decide on a risky strategy to market a product; consumers may decide to eat beef in spite of BSE warnings; scientists may find out that a specific concentration of a new substance may increase the probability of contracting a specific disease; and political bodies may require motorists to wear helmets when driving on public streets. However, almost all risk decisions require input from more than one system. This is particularly true for risk management decisions that impose restrictions on one part of the population to protect other parts or, conversely, to allow some parts to impose risks on other parts (Kasperson and Kasperson 1983; Kunreuther and Slovic 1996; Linnerooth-Bayer and Fitzgerald 1996; Vari 1989). In these cases, legitimate decision making requires proof of the following:

• Alternative actions are less cost effective (efficiency).
• The required course of action (or non-action) would result in anticipated positive results (effectiveness).
• The actions are in line with due process and democratic procedures (legitimacy).
• These actions are in line with public preferences and values and are accepted even by those who disagree with the decision (reflection of social preferences and values).

When contemplating the acceptability of a risk, one needs to be informed about the likely consequences of each decision option, the opportunity cost for choosing one option over the other, and the potential violations of interests and values connected to each decision option (Gregory, Lichtenstein, and Slovic 1993; Vlek and Cvetkovich 1989). This is basically true for any far-reaching

5. For comparisons with these models, see also the three political advisory models in Habermas 1968 (which preceded his development of the theory of communicative rationality); the steering models of power, money, and knowledge in Willke 1995; and more detailed descriptions in Renn 1999.
political decision. Decision making on risk, however, includes the three key elements already mentioned in Chapters 9–10: complexity, uncertainty, and ambiguity. Meeting these three challenges, risk managers need adequate and accurate expertise, acceptable criteria for conducting cost-benefit tradeoffs under uncertainty, legitimate procedures and due process for dealing with conflicting values, and adequate methods to incorporate public concerns. These demands imply input from the scientific, economic, social, and political systems. How processes can be designed and structured to manage and organize these inputs from different societal sectors into policymaking is the main subject of the next section.

Basic Concepts of Participatory and Deliberative Models

The Need for Deliberation

Decisions and policies that include complex, uncertain, and even ambiguous outcomes need input from all four sectors of society. In theory, a benevolent dictator or an enlightened risk management agency could provide all of this input without relying on external feedback. In pluralist and democratic societies, however, the division of functional systems into semiautonomous (autopoietic, according to Luhmann) entities has produced a self-image of society in which each system insists on generating and providing the functional input to collective decision making for which it is specialized (Bailey 1994: 285–322; Jaeger et al. 2001: 193ff.; Luhmann 1989). In addition, the other systems usually expect that each system provides its specific contribution to collective decision making (Fiorino 1989b). Experts expect to be heard and considered when risk managers make factual claims about potential outcomes of risky technology or activities. Representatives of industry and the unions demand to be included when tradeoffs with financial implications for them are made. Communities and affected citizens would like to be consulted if their backyards are being altered by locating risky facilities there. Not only do system representatives ask to be included; it also seems wise from a purely functional viewpoint to use the collective pool of experience and problem-solving capacity of each of the systems rather than reconstruct a mirror image of all societal systems in each decision-making agency (although some of this may be necessary, as well, to process all of the input from outside sources). Normative democratic theory would also suggest that those who are affected by public decisions should have a right to be involved or, at least, consulted (Kasperson 1986; Rosenbaum 1978; Sclove 1995). Furthermore, as Simon Joss (2005) has argued, the development toward multilevel governance in the political sphere, the emergence of globalized markets in the economic sphere, the increase of pluralism and individualism in the social sphere, and the growing influence of uncertainties and ambiguities in the scientific sphere have all contributed to a decline of legitimization power within each sphere and promoted a development toward closer integration of these spheres in terms of integrated decision making (cf. Welp and Stoll-Kleemann 2006).
Based on these considerations, risk management agencies have established new routines for outreach as a means of tapping into the experience of other systems. Numerous scientific advisory councils, stakeholder roundtables, citizens' advisory groups, and other forms of public outreach give ample testimony of the practice of risk management agencies all over the world in improving their basis for legitimizing decisions and practices (Rowe and Frewer 2000). The different forms of cooperation between risk governance institutions and various constituencies of society have been described, analyzed, and reviewed under different conceptual headings.6 These reviews unanimously emphasize the need for a combination of methods that distinguish input from science, stakeholders, and the general public. The evolving concept of analytic-deliberative processes is one of the promising suggestions for developing an integrative approach to inclusive risk governance based on the inclusion of experts, stakeholders, and the general public (Chess, Dietz, and Shannon 1998; National Research Council 2008; Stern and Fineberg 1996; Sweeney 2004; Tuler and Webler 1999; Webler, Tuler, and Krueger 2001).

The First Component: The Role of Scientific Analysis

The first element of analytic-deliberative processes refers to the inclusion of systematic and reproducible knowledge in risk management. There is little debate in the literature about the inclusion of external expertise being essential as a major resource for obtaining and utilizing risk knowledge (Bohnenblust and Slovic 1998; Horlick-Jones, Rowe, and Walls 2007; Stern and Fineberg 1996). A fierce debate has evolved, however, on the status of scientific expertise for representing all or most of the relevant knowledge. This debate includes two related controversies: the first deals with the problem of objectivity and realism; the second deals with the role of anecdotal and experiential knowledge that nonexperts have accumulated over time.7 Depending on which side one stands on in this debate, scientific evidence is regarded either as one input to fact-finding among others or as the central (or even only) legitimate input for providing and resolving knowledge claims.

There is agreement, however, among all camps in this debate that systematic knowledge is needed and that it should be generated and evaluated according to the established rules or conventions of the respective discipline. Methodological rigor that aims to accomplish a high degree of validity, reliability, and relevance remains the most important yardstick for judging the quality of scientific insights. Most constructivists do not question the need for methodological rules, but they are skeptical about whether the results of scientific enquiries represent objective or (even more so) unambiguous descriptions of reality (Jasanoff 2004; Knorr-Cetina 1981; Latour 1987; Latour and Woolgar 1979). They instead see scientific results as products of specific processes or routines that an elite group of knowledge producers has framed as "objective" and "real." In "reality," these products are determined by the availability of research routines and instruments, prior knowledge and judgments, and social interests (Jasanoff 1996; van den Hove 2007; Wynne 1992).

With respect to the analysis of analytic-deliberative processes, the question of whether the analytic part of the discourse follows a constructivist or a realist perspective matters only to the degree that scientific input is used as a knowledge base or as a final arbiter for reconciling knowledge conflicts. The analytic process in itself follows more or less identical rules that are independent of the philosophical stance on realism. A knowledge discourse deals with different, sometimes competing, claims that obtain validity only through a compatibility check with acknowledged procedures of data collection and interpretation, a proof of theoretical compatibility and conclusiveness, and the provision of intersubjective opportunities for reproduction (Shrader-Frechette 1991: 46ff.).

6. Within the tradition of the sociology of science, the interplay between experts and policymakers has been a popular topic of scholarly work (Coppock 1985; De Marchi and Ravetz 1999; Funtowicz and Ravetz 1990; Jasanoff 2004; Majone 1989; Nelkin 1977; Nowotny, Scott, and Gibbons 2001; Rip 1985; Shrader-Frechette 1991; van den Daele 1992; von Schomberg 1992; Weingart 1999). Within the tradition of political science and institutional analysis, the focus has been on new governance processes focusing on the role of stakeholders and organized groups in the risk arena (Dahl 1994; Eder 1992; Hilgartner and Bosk 1988; Kitschelt 1986; Münch 1996; Olson 1984; Sutter 2004). Analysts of public participation and involvement have investigated the different approaches of risk managers to have representatives of organized and non-organized interests in society participate in decision making (Creighton, Dunning, and Delli Priscoli 1998; Fiorino 1990; Kweit and Kweit 1987; Laird 1993; Renn 2004; Renn and Schweizer 2009).
7. See the reviews in Bradbury 1989; Clarke and Short 1993: 379ff.; Horlick-Jones 2007; Rosa 1998a; Shrader-Frechette 1991; Yearley 2000.
Obviously, many research results do not reach the maturity of proven facts, but even intermediary products of knowledge, ranging from plain hypotheses via plausible deductions to empirically proven relationships, strive for further perfection (e.g., the pedigree scheme of Funtowicz and Ravetz 1990). However, even the most ardent proponent of a realist perspective will admit that, often, only intermediary types of knowledge are available when it comes to assessing and evaluating risks (Starr and Whipple 1980). What does this mean for analytic-deliberative processes? The following four points show the importance of knowledge for risk management but also make clear that choosing the right management options requires more than looking at the scientific evidence alone.

First, scientific input is essential for risk decision making. The degree to which the results of scientific inquiry are taken as ultimate evidence to judge the appropriateness and validity of competing knowledge claims is contested in the literature and should therefore be one of the discussion points during deliberation. The answer to this question may depend on context and the maturity of scientific knowledge in the respective risk area. (A similar assessment is provided in Horlick-Jones, Rowe, and Walls 2007.) For example, if the issue is the effect of a specific toxic substance on human health, anecdotal evidence may serve as a heuristic tool for further inquiry, but there is hardly any reason to replace toxicological and epidemiological investigations with intuitions from the general public. If the issue is the siting of an incinerator, anecdotal and local knowledge about sensitive ecosystems or traffic flows may be more relevant than systematic knowledge about these impacts in general (Renn 2010).

Second, the resolution of competing claims of scientific knowledge should be governed by the established rules within the respective discipline. These rules may not be perfect and may even be contested within the community. Yet they are usually superior to alternatives (Harrison and Hoberg 1994: 49ff.; Shrader-Frechette 1991: 190ff.; van den Daele 1992).

Third, many problems and decision options require systematic knowledge that is unavailable, still in its infancy, or in an intermediary status. Analytic procedures are demanded as a means of assessing the relative validity of each of the intermediary knowledge claims, showing their underlying assumptions and problems, and demarcating the limits of reasonable knowledge (i.e., identifying the range of those claims that are still compatible with the state of the art in this knowledge domain) (Renn 1992; Science Advisory Board 2001: 6).

Fourth, knowledge claims can be systematic and scientific, as well as experiential (based on long-term familiarity with the risk cause, the risk agent, or the risk-absorbing system); local (referring to one's own experience with the local conditions); or derived from folklore wisdom, including common sense (Renn 2010). All of these forms of knowledge have a legitimate place in analytic-deliberative processes. How they are used depends on the context and the type of knowledge required for the issue under question (Wynne 1989).
For example, if a hazardous waste incinerator is going to be sited, systematic knowledge is needed to understand the dose-response relationships between the flue gas and potential health or environmental damage; experiential knowledge is needed for assessing the reliability of the control technology or the sincerity of the plant’s operator to implement all of the required control facilities; local knowledge may be helpful to consider special pathways of diffusion of gases or special routes of exposure; and common sense or folklore wisdom may assist decision makers in ordering and prioritizing multiple knowledge claims.

The Second Component: Deliberation

The term deliberation refers to the style and procedure of decision making without specifying the participants who are invited to deliberate (Chambers 2003; Stern and Fineberg 1996). For a discussion to be called deliberative, it must rely on mutual exchange of arguments and reflections rather than on decision making based on the status of the participants, sublime strategies of persuasion, or sociopolitical pressure. Deliberative processes should include a debate about the relative weight of each argument and a transparent procedure for balancing pros and cons (Habermas 1989; Tuler and Webler 1999; Webler 1995). Deliberation is foremost a style of exchanging arguments and coming to an agreement on the validity of statements and inferences. Using a deliberative format does not necessarily include the demand for stakeholder or public involvement. Deliberation can be organized in closed circles (such as conferences of Catholic bishops, where the term has, indeed, been used since the Council of Nicaea), as well as in public forums.

Following our arguments, however, complex, uncertain, and ambiguous risk decisions require contributions from scientists, policymakers, stakeholders, and affected publics, and a procedure is required that guarantees both the inclusion of different constituencies outside the risk management institutions and the assurance of a deliberative style within the process of decision making. We use the term deliberative democracy to refer to the combination of deliberation and third-party involvement (see also Bohman 1997; Chambers 2003; Cohen 1997; Warren 2002).

In terms of risk governance, deliberation is required for three major tasks. First, deliberative processes are needed to define the role and relevance of the different forms of knowledge for making informed choices. Second, deliberation is needed to find the most appropriate way to deal with uncertainty and to set efficient and fair tradeoffs between potential overprotection and underprotection (handling uncertainty). Third, deliberation is needed to address the wider concerns of the affected groups and the public at large, particularly if the risks are associated with high ambiguity.

Why do we expect that deliberative processes are better suited to deal with these three challenges than traditional social-science techniques, such as data from surveys among the relevant constituents, focus group results, or other participatory instruments to collect systematic feedback from the public? First, deliberation can produce common understanding of the issues or the problems based on the joint learning experience of the participants with regard to systematic and anecdotal knowledge (Pidgeon 1997; Webler et al. 1995).
Furthermore, it may produce a common understanding of each party’s position and argumentation (rationale of arguing) and thus assist in a mental reconstruction of each actor’s argumentation (Habermas 1970; Tuler 1996; Warren 1993). The main drive in gaining mutual understanding is empathy. Habermas’s theory of communicative action (HCAT) provides further insights about how to mobilize empathy and how to use the mechanisms of empathy and normative reasoning to explore and generate common moral grounds (Webler 1995). Second, deliberation can produce new options for action and solutions to a problem. This creative process can be mobilized either by finding win-win solutions or by discovering identical moral grounds on which new options can grow (Fisher, Ury, and Patton 1981; Webler 1995, 1999). It has the potential to show and document the full scope of ambiguity associated with risk problems and helps to make a society aware of the options, interpretations, and potential actions connected with the issue under investigation (De Marchi and Ravetz 1999). Each position within a deliberative discourse can survive the cross fire of arguments and counterarguments only if it demonstrates internal consistency, compatibility with the legitimate range of knowledge claims, and correspondence with the widely accepted norms and values of society. Deliberation clarifies the problem, makes people aware of framing effects, and determines the limits of what could be called reasonable within the plurality of interpretations (Skillington 1997).


Third, deliberation can produce common agreements. The minimal agreement may be a consensus about dissent (Raiffa 1994; Renn and Webler 1998: 64). If all arguments are exchanged, participants know why they disagree. They may not be convinced that the arguments of the other side are true or morally strong enough to change their position, but they may understand the reasons that the opponents came to their conclusions. At the end, the deliberative process produces several consistent and—in their own domain—optimized positions that can be offered as package options to legal decision makers or the public. Once these options have been subjected to public discourse and debate, political bodies such as agencies or parliaments can make the final selection in accordance with legitimate rules and institutional arrangements, such as a majority vote or executive order. Final selections also can be performed by popular vote or referendum. In addition, deliberation creates “second-order” effects on individuals and society by providing insights into the fabric of political processes and by creating confidence in one’s own agency to become an active participant in the political arena (Dryzek 1994). By participating, individuals can enhance their capacity to raise their voice in future issues and become empowered to play their role as active citizens in the various political arenas. Fourth, deliberation may result in consensus. Often, deliberative processes are used synonymously with consensus-seeking activities (Coglianese 1997). This is a major misunderstanding. Consensus is a possible outcome of deliberation but not a mandatory requirement (cf. van den Hove 2007). If participants find a new option that they all value more than the option they preferred when entering the deliberation, a “true” consensus is reached (Habermas 1971). But it is clear that finding such a consensus is the exception rather than the rule. 
Consensus is either based on a win-win solution (see, e.g., Waldo 1987) or a solution that serves the "common good" and each participant's interests and values better than any other solution (see, e.g., Webler, Tuler, and Krueger 2001). The requirement of a tolerated consensus is less stringent. Such a consensus rests on the recognition that the selected decision option might serve the common good best but at the expense of some interest violations or additional costs. Here participants might agree to a common decision outcome while holding their nose. In this situation, people who might be worse off than before, but who recognize the moral superiority of the solution, can abstain from using their power of veto without approving the solution. In our empirical work, deliberation often has led to a tolerated consensus solution, particularly in siting conflicts.8

Consensus and tolerated consensus should be distinguished from compromise. A compromise is a product of bargaining, with each side gradually reducing its claim to the opposing party until they reach an agreement (Raiffa 1994). All parties involved would rather choose the option they preferred before starting deliberations, but because they cannot find a win-win situation or a morally superior alternative, they look for a solution that they can "live with," well aware of the fact that it is the second- or third-best solution for them. Compromising on an issue relies on full representation of all vested interests.

8. One example is provided in Schneider, Oppermann, and Renn 1998.

The Link between Analysis and Deliberation: The Decision Analytic Approach

One of the most difficult problems in the design and implementation of analytic-deliberative processes is the integration of analytic input within deliberation and the introduction of deliberative methods to the analytic component. The deliberative process and the process of analytic knowledge generation and processing must be integrated. This task can be facilitated by using decision analytic tools (Gregory 2004; McDaniels 1998; National Research Council 2005).9 Decision theory provides a logical framework that distinguishes action alternatives or options, consequences, likelihood of consequences, and value of consequences, where the valuation can be over multiple attributes that are weighted—based on tradeoffs in multi-attribute utility analysis (Edwards 1977). A sequence of decisions and consequences may be considered, and use of mathematical models for predicting the risks of certain events or activities, as well as the consequences of regulatory options, may or may not be part of the process (Arvai, Gregory, and McDaniels 2001; Humphreys 1977).

The structuring potential of decision analysis has been used in many participatory processes (Arvai, Gregory, and McDaniels 2001; Rauschmayer and Wittmer 2006). It helps the facilitator of such processes to focus on one element during the deliberation; to sort out the central from the peripheral arguments; to provide a consistent reference structure for ordering arguments and observations; and to synthesize multiple impressions, observations, and arguments into a coherent framework. The structuring power of decision analysis has often been used without expanding the analysis into quantitative modeling. The second potential—agenda setting and sequencing—also is frequently applied in participatory settings.
It often makes sense to start with problem definition and then develop the criteria for evaluation, generate options, assess the consequences of options, assign probabilities to each outcome, weigh criteria relative to one another, and synthesize all of the assessments into a priority list of options (see Figure 10.2).

Figure 10.2 Steps in a Decision-Analytic Approach (Source: Adapted from Jaeger et al. 2001: 51)

1. Structure the problem and specify goals for the situation ahead. Example: Problem: municipal solid waste disposal. Priorities: reduce waste generation, encourage voluntary reuse and recycling, and mandate recycling, incineration, and landfills.
2. Extract appropriate value dimensions from stated goals. Example: ensure equity of risk exposure, compensation, and cost effectiveness; minimize impacts.
3. Define criteria and indicators for each value dimension (or attribute). Example: provide meta-criteria on health risks, environmental risks, and social and cultural risks.
4. Assign relative weights to each value dimension. Example: health risk = 40%; environmental risk = 35%; cost = 25%.
5. Describe alternatives or options that seem to meet the criteria. Example: Option A: regional recycling centers and an expanded landfill; Option B: a new landfill in community X.
6. Measure how well each decision option performs on the criteria. Example: conduct geological borings and probabilistic risk assessments, collect statistical data, and elicit citizens' responses.
7. Assess probabilities of each possible outcome. Example: Option A: health risk = .11, eco-risk = .21, cost = .82; Option B: health risk = .34, eco-risk = .75, cost = .20.
8. Sum each decision option's utility on each criterion, then multiply by the weight of each value dimension. Example: Option A: 32; Option B: 45.
9. Conduct a sensitivity analysis to incorporate changes in the criteria composition, outcome generation, assignment of weights, and probability assessments. Example: Option A: worst case = 28 (out of 100), medium = 32, optimistic = 41; Option B: worst case = 22, medium = 45, optimistic = 50.

The third potential—quantifying consequences, probabilities, and relative weights and calculating expected utilities—is more controversial than the other two. Whether the deliberative process should include a numerical analysis of utilities or engage the participants in a quantitative elicitation process often is contested among participation practitioners (Gregory, McDaniels, and Fields 2001). One side claims that quantifying helps participants to be more precise about their judgments and to be aware of the often painful tradeoffs they are forced to make. In addition, quantification can make judgments more transparent to outside observers. The other side claims that quantification restricts the participants to the logic of numbers and reduces the complexity of argumentation into calculation, like mere tradeoff games. Many philosophers argue that quantification supports the illusion that any value can be traded off against another and that complex problems can be inappropriately reduced to simple linear combinations of utilities (Shrader-Frechette 1991).

One compromise between the two camps may be to have participants go through the quantification exercise as a means of helping them clarify their thoughts and preferences but make the final decisions on the basis of holistic judgments (Hostmann et al. 2005; Renn 2004). In this application of decision analytic procedures, the numerical results (i.e., for each option, the sum of the utilities of each dimension multiplied by the weight of each dimension) of the decision process are used not as an expression of the final judgment of the participant but as a structuring aid to improve the participants' holistic intuitive judgment. By pointing out potential discrepancies between the numerical model and the holistic judgments, the participants are forced to reflect on their opinions and search for potential hidden motives or values that might explain the discrepancy.

Within the decision analytic model, input from external knowledge sources is required when assessing the consequences of each decision option and assigning probabilities to each option. In an ideal world, this task could be performed by experts and then fed into the process. In reality, however, this process is more sophisticated and intertwined (Hyman and Stiftel 1988).

9. The following text on the potential use of decision analytic tools for deliberative processes is based partly on an unpublished paper prepared for the Panel on Public Participation in Environmental Assessment and Decision Making of the U.S. National Academy of Sciences (North and Renn 2005).
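The weighted-sum aggregation described above (steps 4, 7, and 8 of Figure 10.2) can be sketched in a few lines of Python. The weights and per-criterion values are taken from the figure; reading those per-criterion values as utilities on a common 0 to 1 scale (so that the weighted totals land on a 0 to 100 scale) is our interpretation of the example, not a prescription from decision theory:

```python
# Weighted multi-attribute aggregation (Figure 10.2, steps 4, 7, and 8).
# Weights and per-criterion values come from the figure; treating the
# per-criterion values as 0-1 utilities is an assumption for illustration.

WEIGHTS = {"health": 40, "eco": 35, "cost": 25}  # step 4: relative weights

OPTIONS = {
    "A": {"health": 0.11, "eco": 0.21, "cost": 0.82},  # recycling centers + expanded landfill
    "B": {"health": 0.34, "eco": 0.75, "cost": 0.20},  # new landfill in community X
}

def aggregate(utilities, weights):
    """Step 8: sum each option's utility on each criterion times that criterion's weight."""
    return sum(utilities[criterion] * weight for criterion, weight in weights.items())

for name, utilities in OPTIONS.items():
    print(f"Option {name}: {aggregate(utilities, WEIGHTS):.0f}")
```

Run as is, this reproduces the figure's totals of roughly 32 for Option A and 45 for Option B. A sensitivity analysis in the sense of step 9 then amounts to re-running `aggregate` with perturbed weights or per-criterion values and checking whether the ranking of the options changes.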
Often, scientific claims are disputed, experts may voice divergent (even contradictory) advice, the legitimate role of other knowledge (experiential, local, common sense, and folklore wisdom) is not clear, or the boundaries between facts and values have become fuzzy (De Marchi and Ravetz 1999; Funtowicz and Ravetz 1990; Horlick-Jones, Rowe, and Walls 2007; Laudan 1996; Liberatore and Funtowicz 2003; van der Sluijs et al. 2005). It is essential to acknowledge, in the context of risk, that human knowledge is always incomplete and selective and thus contingent on uncertain assumptions, assertions, and predictions.

In this difficult situation, different approaches could be used. The classic approach would be to consult highly esteemed scientific journals and conduct peer reviews from independent scientists. These checks may help managers or organizers of deliberative processes to determine the evidence considered acceptable. Another, more innovative approach is the joint fact-finding mission in which experts and the participants in the deliberation (stakeholders and representatives, not the public) try to sort out the knowledge claims and define what is relevant and valid for the case in question (Adler et al. 2000: 21ff.; Meister and Oldenburg 2008: 38). A third approach is to conduct a workshop with experts representing the entire range of opinions and assessments regarding the risk in question and to ask them to classify the systematic knowledge in terms of validity, accuracy, authenticity, and reliability (Adler et al. 2000: 18).10 A fourth possibility would be to involve the experts in a separate consensus-building process, such as a Delphi or a group Delphi, and to feed the results to the deliberating body (Schulz and Renn 2009; Webler et al. 1991).

Another point of consideration is the issue of presenting scientific results to the deliberating body. Participants without scientific training may feel at a disadvantage when the information is being presented to them in esoteric, scientific terminology and by extensive use of models, mathematics, and statistics. The point here is not that one should simplify to be comprehensible to a lay audience (which may be important for communication with stakeholders, but not for an environmental decision-making process that uses mathematical logic as an appropriate means of dealing with great scientific complexity). The main argument is, rather, that the assumptions and conditions that may constrain the validity and applicability of the models should not remain hidden behind the image of exact figures and algorithms (Klein 1997). These assumptions and conditions should be made explicit and be subject to questioning and challenging by the participants. It may be advisable for those responsible for the deliberative process to work closely with scientists and analysts so that information is made available in a format that guarantees that all assumptions, underlying values and norms, and scientific conventions (such as using the ninety-fifth percentile for significance testing) become transparent and open to criticism. The empirical analysis by Thomas Beierle and Jerry Cayford (2002) underlines that the quality of the process is enhanced if there is a major effort to make the expertise available to all participants in a form that does not compromise its accuracy but is comprehensible to a non-expert in the field.
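As an illustration of the fourth approach, the group Delphi classification described in note 10 (experts sorting each knowledge claim into certain, probable, possible, unknown, or absurd) can be tallied with a few lines of code. Only the five category names come from the text; the panel votes below are invented for illustration:

```python
# Tally a group Delphi classification of one knowledge claim into the five
# categories named in note 10. The votes themselves are hypothetical.
from collections import Counter

CATEGORIES = ("certain", "probable", "possible", "unknown", "absurd")

def tally(votes):
    """Return how many experts placed the claim in each category."""
    unrecognized = set(votes) - set(CATEGORIES)
    if unrecognized:
        raise ValueError(f"unrecognized categories: {unrecognized}")
    counts = Counter(votes)
    return {category: counts.get(category, 0) for category in CATEGORIES}

# Hypothetical six-expert panel judging one dose-response claim:
votes = ["probable", "probable", "certain", "possible", "probable", "unknown"]
print(tally(votes))
```

A tally like this makes the spread of expert judgment visible to lay participants at a glance, which is the catalyzing function the note attributes to the classification.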
It is less advantageous to have each stakeholder select his or her own expert and then have the experts conduct a fight against each other in front of the deliberating body (Science Advisory Board 2001: 9).

Most decision analysts agree that applying the concepts from decision analysis requires specific tools that help participants to use the decision-analytic framework most productively. This is true both for eliciting values on the consequences and for organizing information on issues of complexity, uncertainty, and ambiguity in predicting the consequences for the various options under consideration (Keeney 1988, 1992). The tools needed on the informational aspects involve a set of quality criteria (Science Advisory Board 2001: 6):

• Have all evidence claims been fairly and accurately tested against commonly agreed standards of validation (methodological rigor)?
• Has all the relevant evidence in accordance with the state-of-the-art knowledge been collected and processed (comprehensiveness and representativeness)?
• Were systematic, experiential, and local knowledge and expertise adequately included and processed (incorporation of all relevant knowledge claims)?
• Have all conflicts about evidence been resolved using accepted means of validation, testing, and methodology approved of by the respective scientific or knowledge communities?
• Were all normative judgments inherent in evidence claims made explicit and thoroughly explained? Were normative statements deduced from accepted ethical principles, legally prescribed norms, or accepted conventions within the knowledge community?

Every type of knowledge has standards of quality that can be examined, debated, and tested. Thus, the issues of what is to be examined, how it is to be examined, who is to examine it, and when it is to be examined are negotiable during the course of the deliberation (Adler et al. 2000: 18). Yet the methods and procedures of examination need to be taken from the accepted arsenal of validation and reliability testing methods. After carefully considering the evidence, the Science Advisory Board of the U.S. Environmental Protection Agency (2001: 8) concluded that "adequate treatment of science is possible in stakeholder processes, but typically only if substantial financial resources, adequate time and high-quality staff are available from the outset to allow the necessary deliberation and provide the necessary support on an iterative basis throughout ongoing interaction with the stakeholders."

10. In several practical applications, we have asked experts in a group Delphi to make a distinction among certain, probable, possible, unknown, and absurd when judging systematic knowledge claims. This simple classification has been quite successful in catalyzing the debate among laypeople involved in a deliberative setting.

The second aspect of introducing deliberative methods into the analytic part of the process is equally important. The U.S. National Research Council report on characterizing risk postulated the importance of getting the right science and getting the science right (Stern and Fineberg 1996).
Experts need to acknowledge that they act on the basis of cultural expectations (paradigms), professional conventions, sometimes doubtful assumptions, incomplete or conflicting data, and simplified models. This self-reflection about the preconditions of scientific inquiry is a good starting point for a deliberation among experts and later between experts and non-experts (Jasanoff 2004; Liberatore and Funtowicz 2003). The more the respective expert communities learn to use deliberative techniques for sorting out knowledge claims, the more effective is the transfer of results of these discussions to non-expert communities.

Process Requirements for Deliberative Processes

The discussion so far has focused on the potential of analytic-deliberative processes, their advantages and disadvantages, and the interface between the analytic and the deliberative process. This section deals with the internal structure of deliberation. There is a need for an internal structure that facilitates common understanding, rational problem solving, and fair and balanced treatment of arguments. The success or failure of a discourse depends on many factors. Among the most influential are the following (Renn 2008c: 318ff.; Renn and Webler 1998: 57ff.):

• A clear mandate for the participants of the deliberation. Models of deliberative democracy require a clear and unambiguous mandate of what the deliberation process should produce or deliver (Armour 1995). Since deliberations are most often informal instruments, there should be a clear understanding that the results of such a process cannot claim any legally binding validity (unless the process is part of a legal process, such as arbitration). All of the participants, however, should begin the process with a clear statement that specifies their obligations or promises of voluntary compliance once an agreement has been reached.
• Openness regarding results. A deliberative process will never accomplish its goal if the decision already has been made (officially or secretly) and the purpose of the communication effort is to "sell" this decision to the other parties. Individuals have a good sense of whether a decision maker is really interested in their point of view or if the process is meant to pacify potential protesters (Fiorino 1989b).
• A clear understanding of the options and permissible outcomes of such a process. The world cannot be reinvented by a single involvement process; nor can historically made decisions be deliberately reversed. All participants should be clearly informed of the ranges and limits of the decision options that are open for discussion and implementation (Leach 2005: 47; Yosie and Herbst 1998). If, for example, the technology is already in existence, the discourse can focus only on issues such as emission control, monitoring, emergency management, or compensation. But the range of permissible options should be wide enough to provide a real choice situation to the participants (Kasperson 1986).
• A predefined timetable. It is necessary to allocate sufficient time for all the deliberations, but a clear schedule, including deadlines, is required to make the process effective and product oriented (Science Advisory Board 2001).

• A well-developed methodology for eliciting values, preferences, and priorities. The need for efficiency in risk governance demands a logically sound and economical way to summarize individual preferences and integrate them within a group decision (agreement on dissent, majority and minority positions, tolerated consensus, true consensus, or compromise). Formal procedures, such as multi-objective or multi-attribute utility analysis, could serve as tools for reaching agreements (Chen and Mathes 1989; Hostmann et al. 2005; McDaniels 1996; von Winterfeldt 1987; von Winterfeldt and Edwards 1986). We have used
multi-attribute utility procedures in most of our deliberative processes (Renn 1999).

• Equal position of all parties. A deliberative process needs a public sphere with a climate of a “powerless” environment (Habermas 1971, 1991a, 1991b; Webler 1995). This does not mean that every party has the same right to intervene or claim a legal obligation to be involved in the political decision-making process. However, the internal discourse rules have to be strictly egalitarian; every participant must have the same status in the group and the same rights to speak, make proposals, or evaluate options (Kemp 1985). Two requirements must be met. First, the decision about the procedure and the agenda must rely on consensus; every party needs to agree. Second, the rules adopted for the discourse need to be binding for all members, and no party can be allowed to claim any privileged status or decision power. The external validity of the discourse results, however, is subject to all legal and political rules that are in effect for the topic in question.

• Neutrality of the facilitator. The person who facilitates such a process should be neutral in his or her position on the risk issue and respected and authorized by all participants (Bacow and Wheeler 1984; Baughman 1995; Moore 1986). Any attempt to restrict the maneuverability of the facilitator, moderator, or mediator should be strictly avoided.

• A mutual understanding of how the results of the process will be integrated within the decision-making process of the regulatory agency. As a pre-decisional tool, the recommendations cannot, in most cases, serve as binding decisions. They should instead be regarded as a consultative report similar to the technical recommendations provided by scientific consultants to the legitimate public authorities (National Research Council 2008). Official decision makers must acknowledge and process the reports by the deliberative bodies, but they are not obliged to follow their advice.
However, the process will fail its purpose if deviations from the recommendations are neither explained nor justified to the panelists.

A second set of internal requirements about the expected behavior of the participants is necessary to facilitate agreement or, at least, a productive discussion. Among these requirements are the following:

• Willingness to learn. All parties must be ready to learn from each other. This does not necessarily imply that they have to be willing to change their preferences or attitudes (Daniels and Walker 1996; Pidgeon 1997). Conflicts can be reconciled on the basis that parties accept others’ positions as a legitimate claim without giving up their own point of view. Learning in this sense entails:
• Recognition of various forms of rationality in decision-making (Habermas 1989; Luhmann 1983; Lynn 1986);
• Recognition of different forms of knowledge, whether it is systematic (Rosa 1998a), experiential (National Research Council 2008), local (Wynne 1996), or folklore wisdom (Habermas 1971; Renn 2010);
• Willingness to subject oneself to the rules of argumentative disputes (i.e., provide factual evidence for claims); obey the rules of logic for drawing inferences; disclose one’s own values and preferences vis-à-vis potential outcomes of decision options; and so on (Webler 1995).

• Resolution of allegedly irrational responses. Reflective and participatory discourses frequently demonstrate a conflict between two contrasting modes of evidence. The public refers to anecdotal and personal evidence, mixed with emotional reactions, whereas the professionals employ their systematic and generalized evidence based on abstract knowledge (Adler et al. 2000: 18; Dietz and Rycroft 1987). A dialogue between these two modes of collecting evidence is rarely accomplished, because experts regard the personal evidence as a typical response of irrationality. The public representatives often perceive the experts as uncompassionate technocrats who know the statistics but could not care less about a single life lost. This conflict can be resolved only if both parties are willing to accept the rationale of the other party’s position and to understand, and maybe even empathize with, the other party’s view (Tuler 1996). If over the duration of discourse some familiarity with the process and mutual trust among the participants have been established, role playing can facilitate that understanding. Resolving alleged irrationalities means discovering the hidden rationality in the argument of the other party.

• Non-moralization of positions and parties. The individuals involved in a deliberative process should agree in advance to refrain from moralizing (Bacow and Wheeler 1984).
Moral judgments on positions or persons impede compromise. As soon as parties start to moralize, they cannot make tradeoffs between their allegedly moral position and the other parties’ “immoral” position without losing face (Renn 2004). A second undesired result of moralizing is the violation of the equality principle. Nobody will assign equal status to a party who is allegedly morally inferior. Finally, moralizing masks deficits of knowledge and arguments. Even if somebody knows nothing about a subject or has only weak arguments to support his or her position, assigning blame to other actors and making the issue a moral one can help win points. The absence of moralizing does not mean refraining from using ethical arguments, such as “This solution does not seem
fair to future generations” or “We should conserve this ecosystem for its own sake.” Ethical arguments are essential for resolving risk disputes.
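The formal aggregation procedures named in the first set of requirements above, such as multi-attribute utility analysis, can be sketched in a few lines of code. The following is a minimal illustration of a weighted additive utility model, not a tool the authors describe; the criteria, participant weights, and option scores are all hypothetical.

```python
# Illustrative sketch of weighted additive multi-attribute utility
# aggregation, one way to summarize individual preferences and
# integrate them within a group decision. All values are hypothetical.

def option_utility(scores, weights):
    """Weighted additive utility of one option for one participant.

    scores and weights are dicts keyed by criterion; each participant's
    weights are assumed to sum to 1, and scores are normalized to 0-1.
    """
    return sum(weights[c] * scores[c] for c in weights)

# Two hypothetical participants weigh three criteria differently.
weights = {
    "resident": {"health": 0.6, "cost": 0.1, "ecology": 0.3},
    "operator": {"health": 0.3, "cost": 0.5, "ecology": 0.2},
}

# Hypothetical 0-1 scores of two risk-management options per criterion.
scores = {
    "strict_emission_control": {"health": 0.9, "cost": 0.4, "ecology": 0.8},
    "enhanced_monitoring": {"health": 0.6, "cost": 0.8, "ecology": 0.5},
}

# Each participant's utility for each option, then a simple group mean.
for option, option_scores in scores.items():
    utilities = {p: option_utility(option_scores, w) for p, w in weights.items()}
    group_mean = sum(utilities.values()) / len(utilities)
    print(option, utilities, round(group_mean, 3))
```

In an actual deliberation, the weights and scores would be elicited from the participants themselves, and the aggregation rule (simple mean, majority position, or documented dissent) would itself be subject to the group's prior agreement, as the text emphasizes.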

Conclusion

The objective of this chapter was to address and discuss the need and potential to integrate analysis and deliberation; to introduce different concepts of stakeholder and public involvement; and to characterize the main features of, and conditions for, a successful implementation of an analytic-deliberative process. Organizing and structuring such a process goes beyond the well-meant intention of having the public involved in risk decision making. The mere desire to initiate a two-way communication process and the willingness to listen to public concerns are not sufficient (Hadden 1989; Lynn 1990). Discursive processes need a structure that ensures the integration of technical expertise, regulatory requirements, and public values. Decisions on risk must reflect effective regulation, efficient use of resources, legitimate means of action, and social acceptability. The combination of analytic rigor and deliberative democracy is the best medicine for dealing with systemic risks.

Inputs for both the analytic and the deliberative element of decision making can be provided by the different systems of society: efficiency by economic markets; knowledge about effectiveness by scientists and experts; legitimacy by the political institutions; and reflection of values and preferences by including social actors. The objective is to find an organizational structure so that each system contributes to the deliberation process the type of expertise and knowledge that claim legitimacy within a rational decision-making procedure (von Schomberg 1995). It does not make sense to replace technical expertise with vague public perceptions; nor is it justified to have the experts insert their own value judgments into what ought to be a democratic process.
Recently, there has been much concern in the professional risk community that opening the risk management arena to public input would lead to a dismissal of factual knowledge and to inefficient spending of public money (Cross 1992, 1998; Okrent 1998; cf. Chapter 2). In our opinion, these concerns are not warranted if one looks at actual experiences with discursive models of participation. There are only a few voices that wish to restrict scientific input to risk governance. The role of scientific analysis in risk governance should not be weakened but, rather, strengthened. Profound scientific knowledge is required in risk governance, especially with regard to dealing with complexity, uncertainty, and ambiguity. This knowledge has to be assessed and collected by scientists and risk professionals who are recognized as competent authorities in the respective risk field. The systematic search for the “state of the art” in risk assessment leads to a knowledge base that provides the data for deliberation (Yankelovich 1991). At the same time, however, the style of deliberation also should transform the scientific discourse and lead the discussion toward classifying knowledge claims, characterizing uncertainties, exploring the range of
alternative explanations, and acknowledging the limits of systematic knowledge in many risk arenas. Placing emphasis on the analytic part of the process does not contradict the deliberative character of the whole decision-making process. Although systematic evaluations of analytic-deliberative processes are largely missing, and empirical data about the success or failure of such processes are still not conclusive, most reviewers agree that ignorance and misperceptions are not the major problems in participatory settings (Beierle and Cayford 2002; Bingham 1984; Creighton 1991; Rowe, Marsh, and Frewer 2004). On the contrary, participants from the lay public not only were willing to accept, but actually demanded, that the best technical estimate of the risks under discussion be employed for the decision-making process (Burns and Überhorst 1988; Renn 1999). These participants also insisted, however, that other dimensions apart from expected values enter the deliberation process. Once the potential contributions of the expert communities, the stakeholder groups, and members of the affected public had been recognized and acknowledged in such settings, a process of mutual understanding and constructive decision making started to unfold. Such a discursive process may not always lead to the desired results, but the experiences so far justify a fairly optimistic outlook. The main lesson from these experiences has been that scientific expertise, rational decision making, and public values can be reconciled if a serious attempt is made to integrate them. The transformation of the risk arena into a cooperative risk discourse seems to be an essential and, ultimately, inevitable step in improving risk policies and risk management.

Conclusion
Risk Governance as a Catalyst for Social Theory and Praxis

The task is not so much to see what no one has yet seen, but to think what no one thought about that which everyone sees.
—Arthur Schopenhauer, The World As Will and Representation (1818)

The reality that risk has always been and will continue forever to be a universal feature of the human condition is nearly unassailable. From threats by saber-toothed tigers to global warming, there is deep uncertainty about the things that humans value. The world of advanced modernity has elevated risk to a position at the core of that condition. The pace of risk creation, the magnitude of potential consequences, and their spread distinguish the present era from all predecessors. Globalization has served to interconnect societies via a common concern with risk and by creating new suites of systemic risk. Risk holds a central presence in this era of advanced modernity, and its ubiquity is evident with widespread concern about a range of hazards, from the dramatic (terrorist attacks) and the complex (nuclear power) to the simple and rapidly expanding (cellular telephones). Social theory can promise not only to describe and explain fundamental social processes producing risks such as these, but also to outline the consequential dislocations and paradoxes they engender. At the same time, theory can point to potential solutions or acceptable procedures that can help societies govern risks in both the short term and the long term and develop a critical momentum that creates the resources and processes for taking collective actions necessary to create better living conditions and opportunities for all.

In this book, we leaned on the thinking of key European scholars to describe the contours—and increase our understanding—of this world risk society. A primary aim of this book was to capture the essence of their writings,
critically evaluate their logic and theoretical implications, and identify key gaps that need to be filled. We distilled and summarized the works of Ulrich Beck, Anthony Giddens, Niklas Luhmann, and Jürgen Habermas and attempted to work through the logic of their theories as they bear on risk. There are clearly a number of key differences in the theoretical frames of Beck, Giddens, and Luhmann, and there is little doubt that the fundaments distinguishing the separate frames are worthy of explication and critique. But the frames are also replete with similarities that can inform the risk governance challenges of advanced modernity. These similarities are found in their common foundation for describing the current era as a categorical break with the past, in their situating of risk as the pivotal feature of this era of advanced modernity, and in their suggestions for institutional reforms to govern societal risks more effectively. The common foundation describing advanced modernity itself emerges with the complementary answers that the separate theoretical frames give to the three crucial questions that guided our treatment of risk in Part II:

1. What social order emerges in advanced modern societies where technological and ecological risks are virtually everywhere?
2. What is our knowledge about these risks?
3. How can societies develop the institutional and political means for governing risk effectively?

To the first question, Beck, Giddens, and Luhmann answer that there is a clear discontinuity between contemporary society and its predecessors that is due to risks produced by humans and human systems—internal or manufactured risks, in Luhmann’s terminology. Contemporary technology is larger, more complex, and much higher in risk potential—with the capacity to affect far greater numbers of people, the things they value, and the areas they occupy.
Furthermore, that risks are global and impossible to avoid has markedly increased the vulnerability of social systems everywhere. These infrastructural changes, it is further argued by Beck, Giddens, and Luhmann, have reshaped the social order. Technological, ecological, and terrorist catastrophes are increasingly transcending geographical, political, and economic boundaries. Social actors thus are stripped of their traditional social and political identities and are becoming individualized. Actors find themselves in a risk-laden world stripped of its traditional social filters, forced by their individualized status to make an increasing number of difficult risk decisions. Where do they look to inform their decisions? They look to conventional science, the keeper of risk knowledge. But they do so in a way that significantly departs from the past. The individualized social actor is increasingly more reflexive, recognizing science’s paradoxical role—as simultaneously creator of and assessor of risks. The lion’s share of new risks comes from technological innovations that are the handiwork of the scientific enterprise. Hence, the individual no longer accepts scientific assessment
uncritically but combines scientific knowledge with experiential knowledge, with political knowledge, and with the perceived trustworthiness of science to reflexively arrive at judgments about risks. Hence, our risk knowledge, in answer to the second question, no longer can be traced to science alone; it must be traced to an amalgam of actors and institutions, as well as to the outcomes of exercising individual reflexivity in terms of making intuitive sense of conflicting knowledge claims and evaluation criteria.

This categorically changed state of the world raises the third question about how societies can govern these pervasive and highly consequential risks. Again, the separate frames converge on the same conclusion: that we need to develop new institutional mechanisms that extend the arena of decision making. Because contemporary risks embed not just probabilities and other analytic elements, but also a multiplicity of social, political, and psychological factors, their proper governance requires the participation of a broad range of interests. Risk governance in the risk society (at least for Beck and Giddens) means the radicalization of democracy, or what Giddens (2000a) refers to as “democratizing democracy.” And Luhmann, despite his skepticism about a democratic solution, sees value in “subpolitical deliberation”—that is, collective policymaking by civil society actors in competition with established governmental routines. Thus, the common prescription of these separate frames is a broadening of deliberations about collective risks, an enlargement of Dewey’s public sphere for risk decision making.

The understandings of the contemporary world by these theorists converge on a core idea: that the development of technology and its concomitant manufactured uncertainty in the late stages of modernity (what we have termed advanced modernity) has created a Sisyphean world.
The juggernaut of modernity, which demands an infinite process of pushing upward and faster, conflicts with premodern traditions such as traditional tribal, religious, and ethnic commitments. It also increasingly threatens to bring on ecological and other crises. Thus, while the juggernaut pushes materiality upward, it simultaneously is pushed down in the other direction by the weight of risks that grow so rapidly in number, consequence, spread, and catastrophic potential as to overwhelm both social actors and the societal institutions responsible for their control and management. Yet this increase in catastrophic potential and genuine uncertainty is accompanied by an unprecedented improvement in individual life expectancy, accident prevention (fewer and fewer people suffer serious accidents in their lifetime), and health conditions, at least in the more affluent nations of the world (Freudenburg 1988). The advanced modern social actor is therefore embedded in a paradox: on one side is the promise to be less affected by risks to life and health than the average individual in all of human history, and on the other side is the threat of uncertain but potentially catastrophic events that are produced by the same forces that make individual life so comfortable and safe. The irony is that this is not just a reflection of real versus perceived risks, as many technical experts would have us believe. Such semantic gerrymandering does not get at the core.
The irony is, instead, a representation of real changes and threats that provide paradoxes in how we think about ourselves. Humans now are engulfed in a culture of contradictions that are based not on misperceptions (which would merely imply solution via better risk communication and education) but on the strategy of what signals they choose to observe and to what signals they assign priority. Both sides of the coin are true: there is tremendous risk associated with modern and advanced modern development of technology, institutions, and governance processes in a globalized world and, at the same time, there has never been less actual harm experienced by average individuals in most countries of the world.

A word of caution is needed here. There are, of course, exceptions within these general patterns and processes. Exceptions include the poorest nations on Earth, the societies with non-functioning governance structures, and those regions experiencing the resurrection of premodern conflicts around religion and race. Far from being insignificant, they do, however, represent a world that is governed by the conflicts and structures of the past: feudal claims over power, preservation of traditional privileges, and immobile social structures that prevent any change for the better. With the advent of modern technology, efficient administration, due process, and capitalist investment structures, these premodern conditions have steadily been transformed into our advanced modern structure characterized by the schism between personal well-being and collective threats. It may be the case, as some argue, that this advanced modern structure is dependent on maintaining—or even worse—perpetuating premodern conditions in other parts of the world. Whether this proposition is true is a critical question that needs more attention but is outside the scope of this book.
Our focus is, and has been throughout the book, on those societies that have gone successfully through the process of modernization into advanced modernity and face the many challenges that the authors we review describe and analyze in their work. This recognition of the dual faces of risk in contemporary societies has underscored the importance of risk as a sensitizing concept, thereby re-specifying and deepening our understanding of the driving factors of advanced modernity. It also has opened avenues for enhancing our understanding of the process of globalization, for sketching out the emergent politics of global citizens, for disciplined speculation on the direction that late modernity will take, and for conceptualizing alternative paths to the next stages of history.

Since the risk society is inevitably riddled with these structural paradoxes and moral dilemmas, since it is also riddled with systemic risks that threaten large-scale institutions, and since the volume and pace of risk creation may overwhelm our capacities, the question of how to not just manage risks but to govern them becomes more complex than dealing with routine policy issues. If we raise the simple question “What is the route out of this undesirable state of affairs?” the common answer by Beck, Giddens, and Luhmann is, “There is none. This is the fait accompli of advanced modernity.” This is the inescapable state of the human condition at this time. The most we can hope to do is to monitor and
mitigate the worst effects. For Luhmann this presumably would be one of the results of the self-referential examination of the environmental consequences of system functioning. Giddens and Beck, providing for a more active role for institutions, agree that the vehicle for traveling this route can only be a reconstituted form of democracy—certainly not any version of socialism. They also see a crucial role to be played by social movements. But Giddens and Beck are far more successful at theorizing the nature of the risk society and developing a perspective than at explicating the means for addressing its dislocations and moral issues, other than to adumbrate a fairly vague vision around a revitalized form of democracy. The contours of such a revitalization have yet to crystallize.

For this reason, we turned to Habermas to address our third question—the one about the societal governance or management of risks. As we explained in the preface, the second major focus of this book is on effective and democratic structures and processes of governing risk where judgments about the uncertain consequences of actions and events are central to public decisions that need to be made. Despite his more general theoretical frame, Habermas offers—ironically—a greater degree of specificity in developing a blueprint for a more rational basis of collective choice. By their insistence on the late modern process of individualization, Beck and Giddens send the social actor adrift into a free-floating, reflexive existence. Luhmann’s systems have no place for the individual at all; individuals are biological systems, psychological systems, or carriers of social systems. Habermas’s two-step analysis, from a diagnosis of the growing infringement of systems on the lifeworld by a narrow understanding of instrumental rationality to the recommendation for renewed rational discourse, rests on his fundamental faith in language and its universality as a medium for rational interaction.
His answer to the moral crisis of philosophy is not to reconstruct or reveal the (existing but deeply covered) rational solution(s) to pressing moral dilemmas but to construct these solutions creatively through discourse. Language provides the means through which we can construct truth, morality, and authenticity by actively engaging in the exchange of speech acts, which helps to create objectivity, not to discover it. His philosophical contribution to the debate about realism and constructivism, which we addressed in detail in Chapter 1, is the mediator role of language and interaction (among social actors as well as between them and signals from the natural world). Discourse provides a bridge between the real (independently existing, ontological) world and its mental representations (our epistemological constructions of it). It provides the link between an uncertain state of the world, risk, and our perception and interpretation of that state. Hence, Habermas theoretically reintegrates the communicative actor into the process of creating mutually recognized realities. The net result of this “true” discourse is a meaningful suggestion both for understanding the advanced modern world and for rationally addressing its dislocations.

In Chapters 9–10, we outlined some implications for a process that promises to provide a more effective and democratically legitimized governance structure for establishing such “true” discourses in a real-world setting. Given
the challenge that risk is compatible with a multitude of competing frames and representations that respond to different but basically real aspects of the advanced modern conditions, the solution to risk problems and their governance will not be found in constructed unidimensional scales of relative harm, as is done in conventional risk assessments. Such scales and other tools can only be a part of a more comprehensive approach, albeit an important one. The various stakeholders and their various frames need to be equally represented in a discursive environment in which speech acts are freely exchanged, validity claims are negotiated and enacted, and a variety of solutions are verbally or experimentally tested. This process enables societies to learn and adapt to complex situations. Our treatment of risk governance and the incorporation of complexity, uncertainty, and ambiguity into risk decision making has been enlightened by the basic insight of Habermasian theory: that a truthful, true, and moral solution to a problem needs to reflect plural conceptual frames of the problem, on the one hand, and on the other, it requires a creative act of mutual interactions among all those affected by the outcomes of the governance choices over each risk issue.1

The risk theories of Beck, Giddens, and Luhmann, there is little doubt, remain incomplete—especially in their prescriptions for rebalancing the disequilibria and dislocations of late modernity. Nevertheless, with their focus on risk and with their analyses of institutional responses to risk they provide a sensitizing perspective and a theoretical vocabulary for understanding the trajectory of late modernity. Likewise, Habermas’s theory of communicative action is incomplete.2 However, taken together, all four perspectives are incomplete in distinctly different ways.
Beck and Giddens provide a needed focus on risk and illuminate the dislocations and paradoxes of risk, but they can, at best, only adumbrate what can be done about this state of affairs: dialogic or radicalized democracy. They are thus incomplete in prescribing next steps. By placing risk in a systems framework, Luhmann is able to unpack some of the dynamics of system dysfunctions produced by the tension between the social constructions of risk versus danger. However, lost in this process is not only the individual, but also individual agency, with the consequence that he offers no direction for developing governance procedures for addressing the dysfunctions. Luhmann, like Weber, recognizes the “iron cage” that results from increasing rationalization and bureaucratization of the lifeworld. Rather than seeking to emancipate us from it, he seals the cage ever tighter with additional systemic locks that make escape all but impossible.

1. How this insight is translated in practical participation processes is further developed in Renn (2008c); Webler (1995).
2. As briefly mentioned, the four theories reviewed offer no prescription for superseding contemporary global constraints, such as tribalism, ethnic conflict, post-colonial divisions, or, possibly, transnational conflicts. These omissions are either potentially fatal to the logic of all four or signal the need to specify the scope conditions (e.g., only societies experienced with democratic processes) of each theory.
Habermas’s theory is incomplete for failing to see the pivotal role played by risks in late modernity. Although he is cognizant of the importance of technology and of a “technocratic consciousness,” it is remarkable that he seldom mentions risk at all, and when he does, it is typically as a connective concept, not a fundamental one to be elaborated. Had he extended his vision of technocratic systems to include risk, he might have argued, in his quest to complete Weber’s modernity project, that formal risk analysis is the exemplar of Weber’s rationalization theme of world history. In particular, he might have argued that the formal risk assessment techniques based on one concept of reality are an additional assault on the lifeworld by a third system: the system of a formalized assessment of technology and its effects. Nevertheless, Habermas does go beyond Beck, Giddens, and Luhmann in providing direct guidance, via communicative discourse, for the democratization of decision making.3 That he offers guidance in general terms may only reflect his deep, obdurate commitment to the emancipation of all social actors within a radically democratized context by leaving the exact form and content of communication and social action to the participants themselves. To explicate how actors should decide would compromise the effectiveness of the public sphere—its setting, freedom of decision making, and autonomy of the participants. More important, it would violate his own insight that rationality is not inherent in humans but a product of interaction and discourse. The great achievement of Habermas’s formulation of communicative competence—HCAT—is that despite his principal intention to establish a moral and workable framework for action, the formulation provides a (probably unintended) theoretical answer to the recurring paradoxes of risk in advanced modern societies. 
He provides a theoretical, agency-preserving understanding of the paradox of improved individual risk in the context of increased societal risks described throughout the book, and of the puzzling asymmetry between the improvement in the objective conditions of life and people’s belief that the world is getting much riskier. This is explained by the tension between the systems rationality of individual living conditions (with its objectively improved numbers in health and longevity) and the lifeworld rationality that sees costs or other threats to the quality of life, such as the increasing uncertainty about one’s own life, the existential threat to ontological security, the buildup of societal vulnerabilities, and the increase in experienced inequities in the distribution of risks. These are not imagined problems or mere perceptions of an affluent people whose expectations have gone far beyond reality. They mark measurable threats to modern societies and can become quite real and painful, as the recent financial crisis has shown. Whether advanced modernity is generating risks at a rate that outpaces society’s capacity for assessing, controlling, and governing them, and whether

3. Recent years have witnessed a sea change toward democratization in national policy on technological and environmental risks (National Research Council 2003, 2004). However, this effort is entirely ad hoc and undisciplined by a theoretical frame.

202

Conclusion

these dislocations constitute the advanced modern crisis is doubtless a central project of advanced modernity. The depth and the complexity of the challenge remain to be more thoroughly determined. The works evaluated here, while not providing that final determination, have raised, refined, and elaborated these central questions of advanced modernity. In doing so, they have charted new directions not only in social theory but also in the way we can comprehend certain vast and fundamental changes of advanced modernity. They have outlined the contours of the “risk society” and have pointed a beacon to guide our inquiry.

References

Ackerman, Frank, and Lisa Heinzerling. 2004. Priceless. New York: New Press.
Adam, Barbara. 1998. Timescapes of Modernity: The Environment and Invisible Hazards. London: Routledge.
Adam, Barbara, Ulrich Beck, and Joost van Loon, eds. 2000. The Risk Society and Beyond: Critical Issues for Social Theory. London: Sage.
Adams, Matthew. 2003. “The Reflexive Self and Culture.” British Journal of Sociology 54:221–238.
Adler, Peter, Robert Barrett, Martha Bean, Juliana Birkhoff, Connie Ozawa, and Emily Rudin. 2000. Managing Scientific and Technical Information in Environmental Cases: Principles and Practices for Mediators and Facilitators. Washington, DC: RESOLVE, U.S. Institute for Environmental Conflict Resolution, and Western Justice Center Foundation.
Alario, Margarita, and William Freudenburg. 2003. “The Paradoxes of Modernity: Scientific Advances, Environmental Problems, and Risks to the Social Fabric.” Sociological Forum 18:193–214.
Albrow, Martin. 1996. The Global Age: State and Society beyond Modernity. Cambridge: Polity.
Alexander, Jeffrey C., ed. 1985. Neofunctionalism. Beverly Hills, CA: Sage.
———. 1993. “The Return of Civil Society.” Contemporary Sociology 22:797–803.
———. 1996. “Critical Reflections on ‘Reflexive Modernization.’” Theory, Culture, and Society 13:133–138.
Alexander, Jeffrey C., and Paul Colomy. 1990. “Neofunctionalism: Reconstructing a Theoretical Tradition.” In Frontiers of Social Theory: The New Syntheses, ed. George Ritzer, 33–67. New York: Columbia University Press.
Alexander, Jeffrey C., and Philip Smith. 1996. “Social Science and Salvation: Risk Society as Mythical Discourse.” Zeitschrift für Soziologie 25:251–262.
Allan, Stuart, Barbara Adam, and Cynthia Carter. 2000. Environmental Risks and the Media. London: Routledge.
Allen, John, and Nick Henry. 1997. “Ulrich Beck’s Risk Society at Work: Labour and Employment in the Contract Service Industries.” Transactions of the Institute of British Geographers (new series) 22:180–196.


Amoore, Louise, and Marieke de Goede. 2008. Risk and the War on Terror. New York: Routledge.
Amy, Douglas J. 1983. “Environmental Mediation: An Alternative Approach to Policy Stalemates.” Policy Sciences 15:345–365.
Andersen, Svein S., and Tom R. Burns. 1995. “The European Community and the Erosion of Parliamentary Democracy.” In European Union—How Democratic Is It? ed. Svein S. Andersen and Kjell A. Eliassen, 227–252. London: Sage.
Anderson, Benedict. 1991. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London: Verso.
Anderson, Iain. 2002. Foot and Mouth Disease 2001: Lessons to Be Learned Inquiry Report. London: Stationery Office.
Apel, Karl-Otto. 1992. “Diskursethik vor der Problematik von Recht und Politik.” In Zur Anwendung der Diskursethik in Politik, Recht und Wissenschaft, ed. Karl-Otto Apel and Matthias Kettner, 29–61. Frankfurt am Main: Suhrkamp.
Armour, Audrey. 1995. “The Citizen’s Jury Model of Public Participation.” In Fairness and Competence in Citizen Participation: Evaluating New Models for Environmental Discourse, ed. Ortwin Renn, Thomas Webler, and Peter Wiedemann, 175–188. Dordrecht: Kluwer.
Arnoldi, Jakob. 2009. Risk: An Introduction. Cambridge: Polity.
Arvai, Joseph, Robin Gregory, and Timothy McDaniels. 2001. “Testing a Structured Decision Approach: Value Focused Thinking for Deliberative Risk Communication.” Risk Analysis 21:1065–1076.
Ash, Amin, ed. 1997. Beyond Market and Hierarchy: Interactive Governance and Social Complexity. Cheltenham: Elgar.
Assmuth, Timo. 2011. “Policy and Science Implications of the Framing and Qualities of Uncertainty in Risks: Toxic and Beneficial Fish from the Baltic Sea.” AMBIO 40(2): 158–169.
Atkinson, Will. 2007a. “Anthony Giddens as Adversary of Class Analysis.” Sociology 41:533–549.
———. 2007b. “Beck, Individualization, and the Death of Class: A Critique.” British Journal of Sociology 58(3): 349–366.
Atran, Scott. 2010. Talking to the Enemy: Faith, Brotherhood, and the (Un)making of Terrorists. New York: HarperCollins.
Aven, Terje. 2003. Foundations of Risk Analysis: A Knowledge and Decision-Oriented Perspective. Chichester: Wiley.
Aven, Terje, and Ortwin Renn. 2009a. “On Risk Defined as an Event Where the Outcome Is Uncertain.” Journal of Risk Research 12:1–11.
———. 2009b. “The Role of Quantitative Risk Assessments for Characterizing Risk and Uncertainty and Delineating Appropriate Risk Management Options, with Special Emphasis on Terrorism.” Risk Analysis 29:587–600.
———. 2010. Risk Management and Governance: Concepts, Guidelines, and Applications. Berlin: Springer.
Aven, Terje, Ortwin Renn, and Eugene A. Rosa. 2011. “On the Ontological Status of the Concept of Risk.” Safety Science 49:1074–1079.
Baan, Robert, Yann Grosse, Béatrice Lauby-Secretan, Fatiha El Ghissassi, Véronique Bouvard, Lamia Benbrahim-Tallaa, Neela Guha, Farhad Islami, Laurent Galichet, and Kurt Straif. 2011. “Carcinogenicity of Radiofrequency Electromagnetic Fields.” Lancet Oncology 12(7): 624–626.
Bacon, Sir Francis. 1893. Of the Proficience and Advancement of Learning, Divine and Humane, ed. David Price (from the 1605 edition). London: Cassell.
Bacow, Lawrence, and Michael Wheeler. 1984. Environmental Dispute Resolution. New York: Plenum.
Bagguley, Paul. 1999. “Beyond Emancipation? The Reflexivity of Social Movements.” In Theorizing Modernity: Reflexivity, Environment and Identity in Giddens’ Social Theory, ed. Martin O’Brien, Sue Penna, and Colin Hay, 65–82. London: Addison Wesley Longman.


Bailey, Kenneth. 1994. Sociology and the New Systems Theory. Albany: State University of New York Press.
Bandle, Tony. 2007. “Tolerability of Risk: The Regulator’s Story.” In The Tolerability of Risk: A New Framework for Risk Management, ed. Frédéric Boulder, David Slavin, and Ragnar Löfstedt, 93–104. London: Earthscan.
Bankoff, Greg, Georg Frerks, and Dorothea Hilhorst. 2004. Mapping Vulnerability: Disasters, Development, and People. London: Earthscan.
Barke, Richard P., and Hank C. Jenkins-Smith. 1993. “Politics and Scientific Expertise: Scientists, Risk Perception, and Nuclear Waste Policy.” Risk Communication 13:425–439.
Barnes, Barry, and David Bloor. 1982. “Relativism, Rationalism, and the Sociology of Knowledge.” In Rationality and Relativity, ed. Martin Hollis and Steven Lukes, 21–47. Cambridge, MA: MIT Press.
Bastide, Sophi, Jean-Paul Moatti, Jean-Pierre Pages, and Francis Fagnani. 1989. “Risk Perception and the Social Acceptability of Technologies: The French Case.” Risk Analysis 9:215–223.
Baughman, Mike. 1995. “Mediation.” In Fairness and Competence in Citizen Participation: Evaluating New Models for Environmental Discourse, ed. Ortwin Renn, Thomas Webler, and Peter Wiedemann, 253–266. Dordrecht: Kluwer.
Bauman, Zygmunt. 1992. “The Solution as Problem.” Times Higher Education Supplement, 1045 (November 13), 25.
Beck, Ulrich. 1990. “On the Way toward an Industrial Society of Risk?” International Journal of Political Economy 20(1): 51–69.
———. 1992a. “From Industrial Society to the Risk Society: Questions of Survival, Social Structure, and Ecological Enlightenment.” Theory, Culture, and Society 9:97–123.
———. 1992b. “How Modern Is Modern Society?” Theory, Culture, and Society 9:163–169.
———. 1992c. Risk Society: Towards a New Modernity (1986). London: Sage.
———. 1994a. Ecological Enlightenment: Essays on the Politics of the Risk Society (1991). New York: Humanities Press.
———. 1994b. “The Reinvention of Politics.” In Reflexive Modernization: Politics, Traditions, and Aesthetics in the Modern Social Order, ed. Ulrich Beck, Anthony Giddens, and Scott Lash, 1–55. Stanford, CA: Stanford University Press.
———. 1995. Ecological Politics in an Age of Risk (1988). Cambridge: Polity.
———. 1996a. “Risk Society and the Provident State.” In Risk, Environment, and Modernity: Towards a New Ecology, ed. Scott Lash, Bronislaw Szerszynski, and Brian Wynne, 27–43. London: Sage.
———. 1996b. “World Risk Society as Cosmopolitan Society? Ecological Questions in a Framework of Manufactured Uncertainties.” Theory, Culture, and Society 13(4): 1–32.
———. 1997a. The Reinvention of Politics: Rethinking Modernity in the Global Social Order (1993). Cambridge: Polity.
———. 1997b. “Subpolitics: Ecology and the Disintegration of Institutional Power.” Organization and Environment 10:52–65.
———. 1998. Democracy without Enemies. Malden, MA: Polity.
———. 1999. World Risk Society. Cambridge: Polity.
———. 2000a. “The Cosmopolitan Perspective: Sociology of the Second Age of Modernity.” British Journal of Sociology 51:79–105.
———. 2000b. What Is Globalization? Cambridge: Polity.
———. 2000c. “Risk Society Revisited: Theory, Politics and Research Programmes.” In The Risk Society and Beyond: Critical Issues for Social Theory, ed. Barbara Adam, Ulrich Beck, and Joost van Loon, 211–229. London: Sage.
———. 2002. “The Terrorist Threat: World Risk Society Revisited.” Theory, Culture, and Society 19(4): 39–55.
———. 2003. “The Silence of Words: On Terror and War.” Security Dialogue 34(3): 255–267.
———. 2005. Power in the Global Age (2002). Cambridge: Polity.


———. 2006a. Cosmopolitan Vision. Cambridge: Polity.
———. 2006b. “Living in the World Risk Society: A Hobhouse Memorial Public Lecture Given on Wednesday, 15 February 2006, at the London School of Economics.” Economy and Society 35(3): 329–345.
———. 2007. “The Cosmopolitan Condition: Why Methodological Nationalism Fails.” Theory, Culture, and Society 24:286–290.
———. 2008a. “Climate Change and Globalisation Are Reinforcing Global Inequalities: High Time for a New Social Democratic Era.” Globalizations 5(1): 78–80.
———. 2008b. “World at Risk: The New Task of Critical Theory.” Development and Society 37(1): 1–21.
———. 2009a. “Critical Theory of World Risk Society: A Cosmopolitan Vision.” Constellations 16(1): 3–22.
———. 2009b. World at Risk (2007). Cambridge: Polity.
———. 2009c. “World Risk Society as Cosmopolitan Society: Ecological Questions in a Framework of Manufactured Uncertainties.” In Human Footprints on the Global Environment: Threats to Sustainability, ed. Eugene A. Rosa, Andreas Diekmann, Thomas Dietz, and Carlo C. Jaeger, 47–82. Cambridge, MA: MIT Press.
———. 2010a. “Climate for Change, or How to Create a Green Modernity?” Theory, Culture, and Society 27:254–266.
———. 2010b. “Remapping Social Inequalities in an Age of Climate Change: For a Cosmopolitan Renewal of Sociology.” Global Networks 10:165–181.
———. 2010c. “World Risk Society as Cosmopolitan Society: Ecological Questions in a Framework of Manufactured Uncertainties.” In Human Footprints on the Global Environment: Threats to Sustainability, ed. Eugene A. Rosa, Andreas Diekmann, Thomas Dietz, and Carlo Jaeger, 47–82. Cambridge, MA: MIT Press.
———. 2011. “Cosmopolitanism as Imagined Communities of Global Risk.” American Behavioral Scientist 55:1346–1361.
———. 2013a. “Why ‘Class’ Is Too Soft a Category to Capture the Explosiveness of Social Inequality at the Beginning of the 21st Century.” British Journal of Sociology 64:63–74.
———. 2013b. German Europe. Cambridge: Polity.
Beck, Ulrich, Wolfgang Bonss, and Christoph Lau. 2003. “The Theory of Reflexive Modernization: Problematic, Hypotheses, and Research Programme.” Theory, Culture, and Society 20(2): 1–33.
Beck, Ulrich, Anthony Giddens, and Scott Lash, eds. 1994. Reflexive Modernization: Politics, Traditions, and Aesthetics in the Modern Social Order. Stanford, CA: Stanford University Press.
Beck, Ulrich, and Edgar Grande. 2007a. Cosmopolitan Europe. Cambridge: Polity.
———. 2007b. “Cosmopolitanism: Europe’s Way Out of Crisis.” European Journal of Social Theory 10(1): 67–85.
———. 2010. “Varieties of Second Modernity: The Cosmopolitan Turn in Social and Political Theory and Research.” British Journal of Sociology 61:409–443.
Beck, Ulrich, and Christoph Lau. 2005. “Second Modernity as a Research Agenda: Theoretical and Empirical Explorations in the ‘Meta-change’ of Modern Society.” British Journal of Sociology 56:525–557.
Beck, Ulrich, and Natan Sznaider. 2006. “Unpacking Cosmopolitanism in the Social Sciences: A Research Agenda.” British Journal of Sociology 57:1–23.
———. 2011. “The Self-Limitation of Modernity: The Theory of Reflexive Taboos.” Theory and Society 40:417–436.
Beck, Ulrich, and Johannes Willms. 2004. Conversations with Ulrich Beck. Cambridge: Polity.
Beierle, Thomas C., and Jerry Cayford. 2002. Democracy in Practice: Public Participation in Environmental Decisions. Washington, DC: Resources for the Future.
Bella, David A., Charles D. Mosher, and Steven N. Calvo. 1988. “Technocracy and Trust: Nuclear Waste Controversy.” Journal of Professional Issues in Engineering 114:27–39.


Benhabib, Seyla. 1992. “Autonomy, Modernity, and Community.” In Cultural-Political Interventions in the Unfinished Project of Enlightenment, ed. Axel Honneth, Thomas McCarthy, Claus Offe, and Albrecht Wellmer, 39–61. Cambridge, MA: MIT Press.
Benz, Arthur, and Burkhard Eberlein. 1999. “The Europeanization of Regional Policies: Patterns of Multi-Level Governance.” Journal of European Public Policy 6(2): 329–348.
Berkhout, Frans, Constanze Haug, Tim Rayner, Harro van Asselt, Roger Hildingsson, Dave Huiteme, Andrew Jordan, Suvi Monni, and Johannes Stripple. 2010. “How Do Climate Policies Work? Confronting Governance Dilemmas in the European Union.” In Making Climate Change Work for Us: European Perspectives on Adaptation and Mitigation Strategies, ed. Mike Hulme and Henry Neufeldt, 137–163. Cambridge: Cambridge University Press.
Bingham, Gail. 1984. Resolving Environmental Disputes: A Decade of Experience. Washington, DC: Conservation Foundation.
Blank, Yishai. 2011. “Introduction.” In Ulrich Beck, Cosmopolitanism: A Critical Theory for the 21st Century. Tel Aviv: Hakibbutz Hameuchad.
Bloch, Ernst. 1995. The Principle of Hope (1954). Cambridge, MA: MIT Press.
Blok, Anders. 2012. “Greening Cosmopolitan Urbanism? On the Transnational Mobility of Low-Carbon Formats in Northern European and East Asian Cities.” Environment and Planning A 44(10): 2327–2343.
Boghossian, Paul. 2006. Fear of Knowledge: Against Relativism and Constructivism. New York: Oxford University Press.
Bohman, James. 1997. “Deliberative Democracy and Effective Social Freedom: Capabilities, Resources, and Opportunities.” In Deliberative Democracy: Essays on Reason and Politics, ed. James Bohman and William Rehg, 321–348. Cambridge, MA: MIT Press.
Bohnenblust, Hans, and Paul Slovic. 1998. “Integrating Technical Analysis and Public Values in Risk Based Decision Making.” Reliability Engineering and System Safety 59:151–159.
Boholm, Åsa. 1998. “Comparative Studies of Risk Perception: A Review of Twenty Years of Research.” Journal of Risk Research 1(2): 135–163.
Bourdieu, Pierre. 1984. Distinction: A Social Critique of the Judgement of Taste. Cambridge, MA: Harvard University Press.
Boyne, Roy. 2003. Risk. Buckingham: Open University Press.
Bradbury, Judith A. 1989. “The Policy Implications of Differing Concepts of Risk.” Science, Technology, and Human Values 14(4): 380–399.
Breyer, Stephen. 1992. Breaking the Vicious Circle: Towards Effective Risk Regulation. Cambridge, MA: Harvard University Press.
Brigg, David J. 2008. “A Framework for Integrated Environmental Health Impact Assessment of Systemic Risks: A Review.” Environmental Health 7:61–78.
Brion, Denis. 1988. “An Essay on LULU, NIMBY, and the Problem of Distributive Justice.” Environmental Affairs 15:437–503.
Brooks, Nick, and Neil W. Adger. 2005. “Assessing and Enhancing Adaptive Capacity.” In Adaptation Policy Frameworks for Climate Change: Developing Strategies, Policies, and Measures, ed. Bo Lim and Erika Spanger-Siegfried, 165–182. Cambridge: Cambridge University Press.
Brulle, Robert. 1992. “Jürgen Habermas.” Human Ecology Bulletin 8:29–40.
———. 2002. “Habermas and Green Political Thought.” Environmental Politics 11:1–20.
Bulkeley, Harriet. 2001. “Governing Climate Change: The Politics of Risk Society?” Transactions of the Institute of British Geographers 26:430–447.
Bullard, Robert D. 1990. Dumping in Dixie. Boulder, CO: Westview.
Burawoy, Michael. 2005. “For Public Sociology.” American Sociological Review 70:4–28.
Burns, Tom R., and Thomas Dietz. 1992. “Cultural Evolution: Social Rule Systems, Selection and Agency.” International Sociology 7:259–283.
Burns, Tom R., and Christian Stöhr. 2011. “The Architecture and Transformation of Governance Systems: Power, Knowledge, and Conflict.” Human Systems Management 30:173–195.


Burns, Tom R., and Reinhard Überhorst. 1988. Creative Democracy: Systematic Conflict Resolution and Policymaking in a World of High Science and Technology. New York: Praeger.
Burns, William J., Peter Slovic, Richard E. Kasperson, Jeanne X. Kasperson, and Srinivas Emani. 1993. “Incorporating Structural Models into Research on the Social Amplification of Risk: Implications for Theory Construction and Decision Making.” Risk Analysis 13:611–623.
Burt, Ronald S. 1982. Towards a Structural Theory of Action: Network Models of Social Structure, Perception and Action. New York: Academic Press.
Calhoun, Craig. 2010. “Beck, Asia and Second Modernity.” British Journal of Sociology 61:597–619.
Camerer, Colin F. 2003. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ: Princeton University Press.
Campbell, Scott, and Greg Currie. 2006. “Against Beck: In Defense of Risk Analysis.” Philosophy of the Social Sciences 36(2): 149–172.
Catton, William R. 1980. Overshoot: The Ecological Basis of Evolutionary Change. Urbana-Champaign: University of Illinois Press.
Catton, William R., and Riley E. Dunlap. 1978. “Environmental Sociology.” American Sociologist 13:41–49.
Chambers, Simone. 2003. “Deliberative Democratic Theory.” Annual Review of Political Science 6:307–326.
Chang, Kyung-Sup. 2010. “The Second Modern Condition? Compressed Modernity as Internalized Reflexive Cosmopolitization.” British Journal of Sociology 61:444–464.
Charnley, Gail. 2000. Democratic Science: Enhancing the Role of Science in Stakeholder-based Risk Management Decision-Making. Washington, DC: Health Risk Strategies.
Chen, Kan, and J. C. Mathes. 1989. “Value-Oriented Social Decision Analysis: A Communication Tool for Public Decision Making on Technological Projects.” In Social Decision Methodology for Technological Projects, ed. Charles Vlek and George Cvetkovich, 111–132. Dordrecht: Kluwer.
Chess, Caron, Thomas Dietz, and Margaret Shannon. 1998. “Who Should Deliberate When?” Human Ecology Review 5(1): 60–68.
Clarke, Lee. 1989. Acceptable Risk? Making Decisions in a Toxic Environment. Berkeley: University of California Press.
———. 2006. Worst Cases. Chicago: University of Chicago Press.
Clarke, Lee, and James Short. 1993. “Social Organization and Risk: Some Current Controversies.” Annual Review of Sociology 19:375–399.
Clarke, Simon. 2006. From Enlightenment to Risk: Social Theory and Contemporary Society. New York: Palgrave Macmillan.
Coglianese, Cary. 1997. “Assessing Consensus: The Promise and Performance of Negotiated Rule-Making.” Duke Law Journal 46:1255–1333.
Coglianese, Cary, and David Lazer. 2003. “Management-Based Regulation: Prescribing Private Management to Achieve Public Goals.” Law and Society 37:691–730.
Cohen, Joshua. 1997. “Procedure and Substance in Deliberative Democracy.” In Deliberative Democracy: Essays on Reason and Politics, ed. James Bohman and William Rehg, 407–437. Cambridge, MA: MIT Press.
Cohen, Maurice J. 1999. “Science and Society in Historical Perspective: Implications for Social Theories of Risk.” Environmental Values 8(2): 153–176.
Cole, Stephen. 1992. Making Science: Between Nature and Society. Cambridge, MA: Harvard University Press.
Collins, Harry. 2004. Gravity’s Shadow: The Search for Gravitational Waves. Chicago: University of Chicago Press.
———. 2009. “We Cannot Live by Skepticism Alone.” Nature 458:30–31.
Collins, Harry, and Robert Evans. 2007. Rethinking Expertise. Chicago: University of Chicago Press.


Coppock, Rob. 1985. “Interactions between Scientists and Public Officials: A Comparison of the Use of Science in Regulatory Programs in the United States and West Germany.” Policy Sciences 18:371–390.
Coser, Lewis A. 1956. The Functions of Social Conflict. New York: Free Press.
Cottle, Simon. 1998. “Ulrich Beck, ‘Risk Society,’ and the Media: A Catastrophic View?” European Journal of Communication 13(1): 5–32.
———. 2009. Global Crisis Reporting: Journalism in the Global Age. Maidenhead: Open University Press.
Covello, Vincent. 1983. “The Perception of Technological Risks: A Literature Review.” Technological Forecasting and Social Change 23:285–297.
Creighton, James. 1991. “A Comparison of Successful and Unsuccessful Public Involvement: A Practitioner’s Viewpoint.” In Risk Analysis: Prospects and Opportunities, ed. Constantine, 135–141. New York: Plenum.
Creighton, James, Mark Dunning, and Jerome Delli Priscoli, eds. 1998. Public Involvement and Dispute Resolution: A Reader on the Second Decade of Experience at the Institute of Water Resources. Fort Belvoir, VA: Institute of Water Resources, U.S. Army Corps of Engineers.
Cross, Frank B. 1992. “The Risk of Reliance on Perceived Risk.” Risk: Issues in Health and Safety 3:59–70.
———. 1998. “Facts and Values in Risk Assessment.” Reliability Engineering and Systems Safety 59:27–45.
Cummings, Louise. 2010. Rethinking the BSE Crisis: A Study of the Scientific Reasoning under Uncertainty. New York: Springer.
Curran, Dean. 2013. “Risk Society and the Distribution of Bads: Theorizing Class in the Risk Society.” British Journal of Sociology 64(1): 44–62.
Daft, Richard L., and Karl E. Weick. 1984. “Toward a Model of Organizations as Interpretation Systems.” Academy of Management Review 9(2): 284–295.
Dahl, Robert A. 1994. “A Democratic Dilemma: System Effectiveness versus Citizen Participation.” Political Science Quarterly 109(1): 23–34.
Dake, Karl. 1991. “Orienting Dispositions in the Perceptions of Risk: An Analysis of Contemporary Worldviews and Cultural Biases.” Journal of Cross-Cultural Psychology 22:61–82.
Daniels, Steven, and Gregg Walker. 1996. “Collaborative Learning: Improving Public Deliberation in Ecosystem-based Management.” Environmental Impact Assessment Review 16:71–102.
Dawes, Robyn M. 1988. Rational Choice in an Uncertain World. New York: Harcourt Brace Jovanovich.
De Bandt, Olivier, and Philipp Hartmann. 2000. “Systemic Risk: A Survey.” Working Paper Series 35, European Central Bank, Frankfurt am Main.
Delanty, Gerard. 2009. The Cosmopolitan Imagination: The Renewal of Critical Social Theory. Cambridge: Cambridge University Press.
De Marchi, Bruna. 2003. “Public Participation and Risk Governance.” Science and Public Policy 30(3): 171–176.
De Marchi, Bruna, and Jerome R. Ravetz. 1999. “Risk Management and Governance: A Post-Normal Science Approach.” Futures 31:743–757.
Det Norske Veritas. 2010. Key Aspects of an Effective U.S. Offshore Safety Regime. White paper, July 22, Oslo.
Diamond, Jared. 2005. Collapse: How Societies Choose to Fail or Succeed. New York: Penguin Books.
Dickens, Peter. 1992. Society and Nature. Philadelphia: Temple University Press.
Dienel, Peter C. 1989. “Contributing to Social Decision Methodology: Citizen Reports on Technological Projects.” In Social Decision Methodology for Technological Projects, ed. Charles Vlek and George Cvetkovich, 133–151. Dordrecht: Kluwer.
Dietz, Thomas M., and Robert W. Rycroft. 1987. The Risk Professionals. New York: Russell Sage Foundation.


Dietz, Thomas M., Paul C. Stern, and Robert W. Rycroft. 1989. “Definitions of Conflict and the Legitimation of Resources: The Case of Environmental Risk.” Sociological Forum 4:47–69.
Douglas, Mary. 1986. How Institutions Think. Syracuse, NY: Syracuse University Press.
———. 1994. Risk and Blame: Essays in Cultural Theory. New York: Routledge.
Douglas, Mary, and Aaron Wildavsky. 1982. Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers. Berkeley: University of California Press.
Dreyer, Marion, and Ortwin Renn. 2009. “A Structured Approach to Participation.” In Food Safety Governance: Integrating Science, Precaution, and Public Involvement, ed. Marion Dreyer and Ortwin Renn, 11–120. Heidelberg: Springer.
Drottz-Sjöberg, Britt-Marie. 1991. “Perception of Risk: Studies of Risk Attitudes, Perceptions, and Definitions.” Ph.D. diss., Stockholm School of Economics.
Dryzek, John. 1994. Discursive Democracy: Politics, Policy, and Political Science, 2nd ed. Cambridge: Cambridge University Press.
Dunlap, Riley E., Michael E. Kraft, and Eugene A. Rosa. 1993. Public Reactions to Nuclear Waste: Citizens’ Views of Repository Siting. Durham, NC: Duke University Press.
Durant, John, and Simon Joss. 1995. Public Participation in Science. London: Science Museum.
Durkheim, Émile. 1982. The Rules of the Sociological Method, ed. Steven Lukes, trans. W. D. Halls. New York: Free Press.
Earle, Timothy C., and George Cvetkovich. 1996. Social Trust toward a Cosmopolitan Society. Westport, CT: Praeger.
Easton, David. 1965. A Framework for Political Analysis. Englewood Cliffs, NJ: Prentice-Hall.
Eberlein, Burkard, and Edgar Grande. 2005. “Reconstituting Political Authority in Europe: Transnational Regulatory Networks and the Informalization of Governance in the European Union.” In Complex Sovereignty: Reconstituting Political Authority in the 21st Century, ed. Edgar Grande and Louis W. Pauly, 146–167. Toronto: University of Toronto Press.
Edelstein, Michael R. 1988. Contaminated Communities: The Social and Psychological Impacts of Residential Toxic Exposure. Boulder, CO: Westview.
———. 2000. “‘Outsiders Just Don’t Understand’: Personalization of Risk and the Boundary between Modernity and Postmodernity.” In Risk in the Modern Age: Social Theory, Science, and Environmental Decision-Making, ed. Maurie J. Cohen, 123–142. New York: St. Martin’s Press.
Eder, Klaus. 1992. “Politics and Culture: On the Sociocultural Analysis of Political Participation.” In Cultural-Political Interventions in the Unfinished Project of Enlightenment, ed. Axel Honneth, Thomas McCarthy, Claus Offe, and Albrecht Wellmer, 95–120. Cambridge, MA: MIT Press.
———. 1996. The Social Construction of Nature. London: Sage.
Edgell, Stephen. 2012. The Sociology of Work: Continuity and Change in Paid and Unpaid Work. Los Angeles: Sage.
Edwards, Ward. 1977. “How to Use Multiattribute Utility Measurement for Social Decision Making.” Systems, Man, and Cybernetics, IEEE Transactions 7(5): 326–340.
Ekinsmyth, Carol. 1999. “Professional Workers in a Risk Society.” Transactions of the Institute of British Geographers 24:353–366.
Elliott, Anthony. 2002. “Beck’s Sociology of Risk: A Critical Assessment.” Sociology 36:293–315.
Emerson, Ralph Waldo. 1909. Journals of Ralph Waldo Emerson, with Annotations—1841–1844, vol. 6. Boston: Houghton Mifflin.
Englander, Tibor, Klara Farago, Paul Slovic, and Baruch Fischhoff. 1986. “A Comparative Analysis of Risk Perception in Hungary and the United States.” Social Behavior 1:55–66.
English Tourism Council. 2002. Attitudinal Survey Key Findings. Available at http://www.englishtourism.org.uk. Accessed June 30, 2013.
Entman, Robert M. 1993. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication 43(4): 51–58.
Environment Agency. 1998. Strategic Risk Assessment: Further Developments and Trials. Research and Development report no. E70. London: Environment Agency.


Ericson, Richard V. 2007. Crime in an Insecure World. Cambridge: Polity.
Ericson, Richard V., and Aaron Doyle. 2004. “Catastrophe Risk, Insurance, and Terrorism.” Economy and Society 33:135–173.
Ericson, Richard V., Aaron Doyle, and Dean Barry. 2003. Insurance as Governance. Toronto: University of Toronto Press.
Ericson, Richard V., and Kevin D. Haggerty. 1997. Policing the Risk Society. Toronto: University of Toronto Press.
Erikson, Kai. 1994. A New Species of Trouble: Explorations in Disaster, Trauma, and Community. New York: W. W. Norton.
Esser, Hartmut. 1990. “‘Habits,’ ‘Frames,’ und ‘Rational Choice.’” Zeitschrift für Soziologie 19:231–247.
———. 1991. Alltagshandeln und Verstehen. Zum Verhältnis von erklärender und verstehender Soziologie am Beispiel von Alfred Schuetz und “Rational Choice.” Tübingen: Mohr Siebeck.
———. 1993. “Response Set: Habit, Frame, or Rational Choice?” In New Directions in Attitude Measurement, ed. Dagmar Krebs and Peter Schmidt, 293–314. Berlin: Walter de Gruyter.
Etzioni, Amitai. 1991. A Responsive Society: Collected Essays on Guiding Deliberate Social Change. San Francisco: Jossey-Bass.
———. 1993. The Spirit of Community: Rights, Responsibilities, and the Communitarian Agenda. New York: Crown.
European Commission on Governance. 2001. European Governance: A White Paper. Report no. 428. Brussels: European Commission.
Ewald, François. 1986. L’état providence. Paris: Grasset and Fasquelle.
Ezell, Barry C. 2007. “Infrastructure Vulnerability Assessment Model (I-VAM).” Risk Analysis 27(3): 571–583.
Ferry, Luc, and Alain Renaut. 1990. French Philosophy of the Sixties. Amherst: University of Massachusetts Press.
Filar, Jerzy A., and Alain Haurie. 2010. Uncertainty and Environmental Decision Making. Heidelberg: Springer.
Fiorino, Daniel J. 1989. “Technical and Democratic Values in Risk Analysis.” Risk Analysis 9(3): 293–299.
———. 1990. “Citizen Participation and Environmental Risk: A Survey of Institutional Mechanisms.” Science, Technology, and Human Values 15(2): 226–243.
Fischer, Frank. 1998. “Ulrich Beck and the Politics of the Risk Society: The Environmental Threat as Institutional Crisis.” Organization and Environment 11:111–115.
Fischhoff, Baruch. 1985. “Managing Risk Perceptions.” Issues in Science and Technology 2(1): 83–96.
———. 1995. “Risk Perception and Communication Unplugged: Twenty Years of Process.” Risk Analysis 15(2): 137–145.
———. 1996. “Public Values in Risk Research.” In Annals of the American Academy of Political and Social Science, Special Issue: Challenges in Risk Assessment and Risk Management, ed. Howard Kunreuther and Paul Slovic, 75–84. Thousand Oaks, CA: Sage.
Fischhoff, Baruch, Bernard Goitein, and Zur Shapiro. 1982. “The Experienced Utility of Expected Utility Approaches.” In Expectations and Actions: Expectancy-Value Models in Psychology, ed. Norman T. Feather, 315–340. Hillsdale, NJ: Lawrence Erlbaum.
Fischhoff, Baruch, Paul Slovic, Sarah Lichtenstein, Stephen Read, and Barbara Combs. 1978. “How Safe Is Safe Enough? A Psychometric Study of Attitudes toward Technological Risks and Benefits.” Policy Sciences 9:127–152.
Fisher, Roger, William Ury, and Bruce Patton. 1981. Getting to Yes: Negotiating Agreement without Giving In. New York: Penguin Books.
Fitchett, James A., and Pierre McDonagh. 2000. “A Citizen’s Critique of Relationship Marketing in Risk Society.” Journal of Strategic Marketing 8:209–222.
Flynn, James, William Burns, C. K. Mertz, and Paul Slovic. 1992. “Trust as a Determinant of Opposition to a High-Level Radioactive Waste Repository: Analysis of a Structural Model.” Risk Analysis 12:417–429.


Forester, John, ed. 1985. Critical Theory and Public Life. Cambridge, MA: MIT Press. Foucault, Michel. 1980. Power/Knowledge: Selected Interviews and Other Writings, 1972–1977. New York: Vintage. Franklin, Jane, ed. 1998. The Politics of Risk Society. Cambridge: Polity. Freudenburg, William R. 1988. “Perceived Risk, Real Risk: Social Science and the Art of Probabilitistic Risk Assessment.” Science 242(4875): 44–49. ———. 1993. “Risk and ‘Recreancy’: Weber, the Division of Labor, and the Rationality of Risk Perceptions.” Social Forces 71:909–932. ———. 2000. “Social Constructions and Social Constrictions: Toward Analyzing the Social Construction of ‘The Naturalized’ as well as ‘The Natural.’” In Environment and Global Modernity, ed. Gert Spaargaren, Arthur P. J. Mol, and Frederick H. Buttel, 103–119. London: Sage. Freudenburg, William R., and Eugene A. Rosa. 1984. Public Reactions to Nuclear Power: Are There Critical Masses? Boulder, CO: Westview.
Frey, Bruno S. 2006. “Institutions Shape Preferences: The Approach of Psychology and Economics.” In Evolution and Design of Institutions, ed. Christian Schubert and Georg von Wangenheim, 11–24. London: Routledge.
Fuchs, Stephan. 2001. Against Essentialism. Cambridge, MA: Harvard University Press.
Funtowicz, Silvio, and Jerome R. Ravetz. 1985. “Three Types of Risk Assessment.” In Risk Analysis in the Private Sector, ed. Chris G. Whipple and Vincent T. Covello, 217–231. New York: Plenum.
———. 1990. Uncertainty and Quality in Science for Policy. Dordrecht: Kluwer.
———. 1992. “Three Types of Risk Assessment and the Emergence of Post-Normal Science.” In Social Theories of Risk, ed. Sheldon Krimsky and Dominic Golding, 251–273. London: Praeger.
———. 1993. “Science for a Post-Normal Age.” Futures 25:739–752.
Garrelts, Heiko, and Hellmuth Lange. 2011. “Path Dependencies and Path Change in Complex Fields of Action: Climate Adaptation Policies in Germany and the Realm of Flood Risks.” AMBIO 40(2): 200–209.
Geuss, Raymond. 1981. The Idea of a Critical Theory: Habermas and the Frankfurt School. Cambridge: Cambridge University Press.
Giddens, Anthony. 1974. New Rules of Sociological Method. London: Hutchinson.
———. 1979. Central Problems in Social Theory: Action, Structure, and Contradiction in Social Analysis. Berkeley: University of California Press.
———. 1984. The Constitution of Society: Outline of the Theory of Structuration. Berkeley: University of California Press.
———. 1985. The Nation-State and Violence. Cambridge: Polity.
———. 1987. Social Theory and Modern Sociology. Stanford, CA: Stanford University Press.
———. 1990. The Consequences of Modernity. Oxford: Polity.
———. 1991. Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford, CA: Stanford University Press.
———. 1994a. “Living in a Post-Traditional Society.” In Reflexive Modernization: Politics, Tradition, and Aesthetics in the Modern Order, ed. Ulrich Beck, Anthony Giddens, and Scott Lash, 56–109. Stanford, CA: Stanford University Press.
———. 1994b. Beyond Left and Right: The Future of Radical Politics. Stanford, CA: Stanford University Press.
———. 1998. The Third Way: The Renewal of Social Democracy. Cambridge: Polity.
———. 2000a. Runaway World (1999). New York: Routledge.
———. 2000b. The Third Way and Its Critics. Cambridge: Polity.
———. 2001. The Global Third Way Debate. Cambridge: Polity.
———. 2003a. “An Interview with Anthony Giddens.” Journal of Consumer Culture 3(3): 387–399.
———, ed. 2003b. The Progressive Manifesto: New Ideas for the Centre-Left. Cambridge: Polity.


———. 2003c. Runaway World: How Globalization Is Reshaping Our Lives, 2nd ed. New York: Routledge.
———. 2009a. The Politics of Climate Change. Cambridge: Polity.
———. 2009b. “Recession, Climate Change and the Return of Planning.” New Perspectives Quarterly 26(2): 51–53.
Gigerenzer, Gerd. 2000. Adaptive Thinking: Rationality in the Real World. Oxford: Oxford University Press.
Gigerenzer, Gerd, and Reinhard Selten. 2001. “Rethinking Rationality.” In Bounded Rationality: The Adaptive Toolbox, Dahlem Workshop Report, ed. Gerd Gigerenzer and Reinhard Selten, 1–12. Cambridge, MA: MIT Press.
Gilbert, Claire W. 1995. “The Plural Causes of the Gulf War Syndrome: Interview by William J. Rea.” Blazing Tattles, August 5, 1995: 7.
Gilroy, Paul. 2010. “Planetarity and Cosmopolitics.” British Journal of Sociology 61:620–626.
Ginges, Jeremy, Scott Atran, Douglas Medin, and Khalil Shikaki. 2007. “Sacred Bounds on Rational Resolution of Violent Political Conflict.” Proceedings of the National Academy of Sciences 104:7357–7360.
Glock, Hans-Johann. 1999. “Words and Things.” Prospect 40:20–26.
Goldberg, Robert B. 1998. “The EMF RAPID Working Group Report Provides Comprehensive Assessment of Power-Frequency EMF Hazards.” EMF Health Report 6(4): 1–3.
Goldblatt, David. 1996. Social Theory and the Environment. Boulder, CO: Westview.
Goldstein, Judith, and Robert O. Keohane. 1993. “Ideas and Foreign Policy: An Analytical Framework.” In Ideas and Foreign Policy: Beliefs, Institutions, and Political Change, ed. Judith Goldstein and Robert O. Keohane, 3–30. Ithaca, NY: Cornell University Press.
Goldthorpe, John H. 2002. “Globalization and Social Class.” West European Politics 25(3): 1–28.
Goszczynska, Maryla, Tadeusz Tyszka, and Paul Slovic. 1991. “Risk Perception in Poland: A Comparison with Three Other Countries.” Journal of Behavioral Decision Making 4(3): 179–193.
Gould, Leroy C., Gerald T. Gardner, Donald R. DeLuca, Adrian R. Tieman, Leonard W. Doob, and Jan A. J. Stolwijk. 1988. Perceptions of Technological Risk and Benefits. New York: Russell Sage Foundation.
Gould, Stephen Jay. 1977. Ever since Darwin: Reflections in Natural History. New York: W. W. Norton.
Graham, John D. 1996. “The Biases of Public Perception.” Lecture at the SRA–Europe Meeting at Guildford, Harvard Risk Center, Cambridge, MA.
Graham, John D., and Jonathan B. Wiener. 1995. Risk versus Risk. Cambridge, MA: Harvard University Press.
Gramsci, Antonio. 1971. Selections from the Prison Notebooks, ed. and trans. Quentin Hoare and Geoffrey Nowell-Smith. London: Lawrence and Wishart.
Grande, Edgar, and Louis W. Pauly, eds. 2005. Complex Sovereignty: Reconstituting Political Authority in the 21st Century. Toronto: University of Toronto Press.
Grande, Edgar, and Bernhard Zangl. 2011. “Varieties of Preventive Governance in World Risk Society.” Paper presented at the annual meeting of the International Studies Association, Montreal, March 16. Available at http://citation.allacademic.com/meta/p500756_index.html. Accessed June 30, 2013.
Granovetter, Mark. 1992. “Economic Action and Social Structure: The Problem of Embeddedness.” In The Sociology of Economic Life, ed. Mark Granovetter and Richard Swedberg, 53–81. Boulder, CO: Westview.
Green, Donald P., and Ian Shapiro. 1994. The Pathologies of Rational Choice Theory: A Critique of Applications in Political Science. New Haven, CT: Yale University Press.
Gregory, Robin. 2004. “Valuing Risk Management Choices.” In Risk Analysis and Society: An Interdisciplinary Characterization of the Field, ed. Timothy McDaniels and Mitchell J. Small, 213–250. New York: Cambridge University Press.


Gregory, Robin, Sarah Lichtenstein, and Paul Slovic. 1993. “Valuing Environmental Resources: A Constructive Approach.” Journal of Risk and Uncertainty 7:177–197.
Gregory, Robin, Tim McDaniels, and Daryl Fields. 2001. “Decision Aiding, Not Dispute Resolution: A New Perspective for Environmental Negotiation.” Journal of Policy Analysis and Management 20(3): 415–432.
Habermas, Jürgen. 1968. Technik und Wissenschaft als “Ideologie.” Frankfurt am Main: Suhrkamp.
———. 1970. Toward a Rational Society (1968), trans. Jeremy Shapiro. Boston: Beacon.
———. 1971. Knowledge and Human Interests, trans. Jeremy Shapiro. Boston: Beacon.
———. 1975. Legitimation Crisis (1973), trans. Thomas McCarthy. Boston: Beacon.
———. 1981. “New Social Movements.” Telos 49:33–37.
———. 1984. Theory of Communicative Action: Volume 1: Reason and the Rationalization of Society, trans. Thomas McCarthy. Boston: Beacon.
———. 1987. Theory of Communicative Action: Volume 2: System and Lifeworld, trans. Thomas McCarthy. Boston: Beacon.
———. 1989. “The Public Sphere.” In Critical Theory and Society, ed. Stephen E. Bronner and Douglas Kellner, trans. Sarah Lennox and Frank Lennox, 136–142. London: Routledge.
———. 1991a. Communication and the Evolution of Society (1976), trans. Thomas McCarthy. Cambridge: Polity.
———. 1991b. Moral Consciousness and Communicative Action, 2nd ed., trans. Christian Lenhardt and Sherry Weber Nicholson. Cambridge, MA: MIT Press.
Habermas, Jürgen, and Niklas Luhmann. 1971. Theorie der Gesellschaft oder Sozialtechnologie. Was leistet die Systemforschung? Frankfurt am Main: Suhrkamp.
Hacking, Ian. 1975. The Emergence of Probability. Cambridge: Cambridge University Press.
Hadden, Susan. 1989. A Citizen’s Right-to-Know: Risk Communication and Public Policy. Boulder, CO: Westview.
Hagendijk, Rob, and Alan Irwin. 2006. “Public Deliberation and Governance: Engaging with Science and Technology in Contemporary Europe.” Minerva 44:167–184.
Hajer, Maarten. 1995. The Politics of Environmental Discourse. Oxford: Oxford University Press.
———. 1997. The Politics of Environmental Discourse: Ecological Modernization and the Policy Process. Oxford: Oxford University Press.
———. 2009. Authoritative Governance: Policy-making in the Age of Mediatization. Oxford: Oxford University Press.
Halachmi, Arie. 2005. “Governance and Risk Management: Challenges and Public Productivity.” International Journal of Public Sector Management 18(4): 300–317.
Hampel, Jürgen, Andreas Klinke, and Ortwin Renn. 2000. “Beyond ‘Red’ Hope and ‘Green’ Distrust: Public Perception of Genetic Engineering in Germany.” Politeia 16(60): 68–82.
Han, Sang-Jin, and Young-Hee Shim. 2010. “Redefining Second Modernity for East Asia: A Critical Assessment.” British Journal of Sociology 61:465–488.
Hardin, Garrett. 1968. “The Tragedy of the Commons.” Science 162:1243–1248.
Harrison, Kathryn, and George Hoberg. 1994. Risk, Science, and Politics. Montreal: McGill-Queen’s University Press.
Hayek, Friedrich A. 1948. Individualism and Economic Order. Chicago: University of Chicago Press.
Hayles, N. Katherine. 1995. “Searching for Common Ground.” In Reinventing Nature? Responses to Postmodern Deconstruction, ed. Michael Soulé and Gary Lease, 47–63. Washington, DC: Island Press.
Hedstrom, Peter. 2005. Dissecting the Social: On the Principles of Analytical Sociology. Cambridge: Cambridge University Press.
Heimer, Carol. 1988. “Social Structure, Psychology, and the Estimation of Risk.” Annual Review of Sociology 14:491–519.
Held, David. 1995. Democracy and the Global Order: From the Modern State to Cosmopolitan Governance. Cambridge: Polity.


Heng, Yee-Kuang. 2006. “The ‘Transformation of War Debate’: Through the Looking Glass of Ulrich Beck’s World Risk Society.” International Relations 20:69–91.
Her Majesty’s Stationery Office. 2001. Pre-Budget Report 2001: Measuring the Macroeconomic Impact of Foot and Mouth Disease. Her Majesty’s Treasury, CM 5318. London: Stationery Office.
Her Majesty’s Treasury. 2004. The Risk Programme: Improving Government’s Risk Handling. London: Her Majesty’s Treasury.
Hess, David J. 1997. Science Studies. New York: New York University Press.
Hilgartner, Stephen, and Charles L. Bosk. 1988. “The Rise and Fall of Social Problems: A Public Arenas Model.” American Journal of Sociology 94:53–78.
Hirano, Masashi, Taisuke Yonomoto, Masahiro Ishigaki, Norio Watanabe, Yu Maruyama, Yasuteru Sibamoto, Tadashi Watanabe, and Kiyofumi Moriyama. 2012. “Insights from Review and Analysis of the Fukushima Dai-ichi Accident.” Journal of Nuclear Science and Technology 49(1): 1–17.
Hodgson, Geoffrey. 1986. “Behind Methodological Individualism.” Cambridge Journal of Economics 10:211–224.
Hofstede, Geert. 2001. Culture’s Consequences, 2nd ed. Thousand Oaks, CA: Sage.
Hood, Christopher, Henry Rothstein, Robert Baldwin, Judith Rees, and Michael Spackman. 1999. “Where Risk Society Meets the Regulatory State: Exploring Variations in Risk Regulation Regimes.” Risk Management 1:21–34.
Hooghe, Liesbet, and Gary Marks. 2003. “Unraveling the Central State, But How? Types of Multi-level Governance.” American Political Science Review 97(2): 233–243.
Horkheimer, Max. 1982. Critical Theory. New York: Seabury.
Horkheimer, Max, and Theodor W. Adorno. 1972. Dialectic of Enlightenment. New York: Seabury.
Horlick-Jones, Tom. 1998. “Meaning and Contextualization in Risk Assessment.” Reliability Engineering and System Safety 59:79–89.
———. 2007. “On the Signature of New Technologies: Materiality, Sociality, and Practical Reasoning.” In Risk and the Public Acceptance of New Technologies, ed. Rob Flynn and Paul Bellaby, 41–65. Basingstoke: Palgrave.
Horlick-Jones, Tom, Gene Rowe, and John Walls. 2007. “Citizen Engagement Processes as Information Systems: The Role of Knowledge and the Concept of Translation Quality.” Public Understanding of Science 16:259–278.
Horlick-Jones, Tom, and Jonathan Sime. 2004. “Living on the Border: Knowledge, Risk and Transdisciplinarity.” Futures 36:441–456.
Hostmann, Markus, Thomas Bernauer, Hans-Joachim Mosler, Peter Reichert, and Bernhard Truffer. 2005. “Multi-Attribute Value Theory as a Framework for Conflict Resolution in River Rehabilitation.” Journal of Multi-Criteria Decision Analysis 13:91–102.
Hudson, Barbara. 2003. Justice in the Risk Society: Challenging and Re-affirming Justice in Late Modernity. London: Sage.
Hughes, Thomas P. 2004. American Genesis: A Century of Invention and Technological Enthusiasm, 1870–1970. New York: Penguin Books.
Hulme, Mike. 2010. “Cosmopolitan Climates: Hybridity, Foresight and Meaning.” Theory, Culture, and Society 27:267–276.
Humphreys, Patrick. 1977. “Application of Multi-Attribute Utility Theory.” In Decision Making and Change in Human Affairs, ed. Helmut Jungermann and Gerard de Zeeuw, 165–205. Dordrecht: Reidel.
Hutter, Bridget, and Michael Power, eds. 2005. Organizational Encounters with Risk. Cambridge: Cambridge University Press.
Hyman, Eric, and Bruce Stiftel. 1988. Combining Facts and Values in Environmental Impact Assessment. Boulder, CO: Westview.
Intergovernmental Panel on Climate Change. 2001. Climate Change 2001: The Scientific Basis. Contribution of the IPCC Working Group I to the Third Assessment Report. New York: IPCC.


International Agency for Research on Cancer. 2011. “IARC Classifies Radiofrequency Electromagnetic Fields as Possibly Carcinogenic to Humans.” Available at http://www.iarc.fr/en/media-centre/pr/2011/pdfs/pr208_E.pdf. Accessed June 8, 2011.
International Atomic Energy Agency. 2006. Environmental Consequences of the Chernobyl Accident and Their Remediation: Twenty Years of Experience. Report of the Chernobyl Forum Expert Group “Environment.” Vienna: IAEA.
International Federation of the Red Cross and Red Crescent Societies. 2000. World Disasters Report 2000. Geneva: IFRC.
International Risk Governance Council. 2005. Risk Governance: Towards an Integrative Approach. Geneva: IRGC.
———. 2007. An Introduction to the IRGC Risk Governance Framework. Policy brief. Geneva: IRGC.
Irwin, Alan. 2008. “STS Perspectives on Scientific Governance.” In The Handbook of Science and Technology Studies, ed. Edward Hackett, Olga Amsterdamska, Michael Lynch, and Judy Wajcman, 583–608. Cambridge, MA: MIT Press.
Irwin, Alan, and Brian Wynne, eds. 1996. Misunderstanding Science? The Public Reconstruction of Science and Technology. Cambridge: Cambridge University Press.
Jaeger, Carlo, Ortwin Renn, Eugene A. Rosa, and Thomas Webler. 2001. Risk, Uncertainty, and Rational Action. London: Earthscan.
Jaeger, Carlo, Ralf Schüle, and Bernd Kasemir. 1999. “Focus Groups in Integrated Assessment: A Micro-cosmos for Reflexive Modernization.” Innovation 12:195–219.
Jarvis, Darryl S. L. 2007. “Risk, Globalisation and the State: A Critical Appraisal of Ulrich Beck and the World Risk Society Thesis.” Global Society 21:23–46.
Jasanoff, Sheila. 1996. “Beyond Epistemology: Relativism and Engagement in the Politics of Science.” Social Studies of Science 26(2): 393–418.
———. 1999. “The Songlines of Risk.” Environmental Values 8(2): 135–152.
———. 2004. “Ordering Knowledge, Ordering Society.” In States of Knowledge: The Co-production of Science and Social Order, ed. Sheila Jasanoff, 13–45. London: Routledge.
Jefferson, Thomas. 1994. Thomas Jefferson: Writings, ed. Merrill D. Peterson. New York: Library of America.
Joss, Simon. 2005. “Lost in Translation? Challenges for Participatory Governance of Science and Technology.” In Wozu Experten? Ambivalenzen der Beziehung von Wissenschaft und Politik, ed. Alexander Bogner and Helge Torgersen, 197–219. Wiesbaden: VS Verlag für Sozialwissenschaften.
Kahneman, Daniel, Paul Slovic, and Amos Tversky, eds. 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Kahneman, Daniel, and Amos Tversky. 1979. “Prospect Theory: An Analysis of Decision under Risk.” Econometrica 47:263–291.
———. 1982. “The Psychology of Preferences.” Scientific American 246:160–171.
———, eds. 2000. Choices, Values, and Frames. Cambridge: Cambridge University Press.
Kaplan, Stanley, and B. John Garrick. 1981. “On the Quantitative Definition of Risk.” Risk Analysis 1:11–27.
Karpowicz-Lazreg, Cécilia, and Etienne Mullet. 1993. “Societal Risk as Seen by the French Public.” Risk Analysis 9:401–405.
Kasperson, Jeanne X., Roger E. Kasperson, Nick Pidgeon, and Paul Slovic. 2003. “The Social Amplification of Risk: Assessing Fifteen Years of Research and Theory.” In The Social Amplification of Risk, ed. Nick Pidgeon, Roger E. Kasperson, and Paul Slovic, 13–46. Cambridge: Cambridge University Press.
Kasperson, Roger E. 1986. “Six Propositions for Public Participation and Their Relevance for Risk Communication.” Risk Analysis 6(3): 275–281.
———. 1992. “The Social Amplification of Risk: Progress in Developing an Integrative Framework of Risk.” In Social Theories of Risk, ed. Sheldon Krimsky and Dominic Golding, 153–178. Westport, CT: Praeger.


Kasperson, Roger E., Dominic Golding, and Jeanne X. Kasperson. 1999. “Risk, Trust, and Democratic Theory.” In Social Trust and the Management of Risk, ed. George Cvetkovich and Ragnar E. Löfstedt, 22–41. London: Earthscan.
Kasperson, Roger E., and Jeanne X. Kasperson. 1983. “Determining the Acceptability of Risk: Ethical and Policy Issues.” In Assessment and Perception of Risk to Human Health, ed. J. T. Rogers and David V. Bates, 135–155. Ottawa: Royal Society of Canada.
Kasperson, Roger E., Ortwin Renn, Paul Slovic, Halina S. Brown, Jacque Emel, Robert Goble, Jeanne X. Kasperson, and Samuel Ratick. 1988. “The Social Amplification of Risk: A Conceptual Framework.” Risk Analysis 8:177–187.
Kaufman, George G., and Kenneth E. Scott. 2003. “What Is Systemic Risk, and Do Bank Regulators Retard or Contribute to It?” Independent Review 7(3): 371–391.
Keen, Andrew. 2008. The Cult of the Amateur: How Blogs, MySpace, YouTube and the Rest of Today’s User-generated Media Are Killing Our Culture and Economy. London: Nicholas Brealey.
Keeney, Ralph. 1988. “Structuring Objectives for Problems of Public Interest.” Operations Research 36:396–405.
———. 1992. Value-Focused Thinking: A Path to Creative Decision Making. Cambridge, MA: Harvard University Press.
Kemp, Ray. 1985. “Planning, Political Hearings, and the Politics of Discourse.” In Critical Theory and Public Life, ed. John Forester, 177–201. Cambridge, MA: MIT Press.
Kemp, Ray, and Tamsin Greulich. 2004. Communication, Consultation, Community: MCF Site Deployment Consultation Handbook. Melbourne: Mobile Carriers Forum.
Kemshall, Hazel. 2003. Understanding Risk in Criminal Justice. Buckingham: Open University Press.
Keown, Charles F. 1989. “Risk Perceptions of Hong Kongese versus Americans.” Risk Analysis 9:401–405.
Kern, Kristine, and Harriet Bulkeley. 2009. “Cities, Europeanization and Multi-level Governance: Governing Climate Change through Transnational Municipal Networks.” Journal of Common Market Studies 3:309–332.
Kitschelt, Herbert. 1986. “New Social Movements in West Germany and the United States.” Political Power and Social Theory 5:286–324.
Kjaer, Anne Mette. 2004. Governance (Key Concepts). Malden, MA: Polity.
Klein, Gary. 1997. “Developing Expertise in Decision Making.” Thinking and Reasoning 3(4): 337–352.
Klein, Naomi. 2007. The Shock Doctrine: The Rise of Disaster Capitalism. New York: Metropolitan Books.
Kleinhesselink, Randall, and Eugene A. Rosa. 1991. “Cognitive Representation of Risk Perceptions: A Comparison of Japan and the United States.” Journal of Cross-Cultural Psychology 22:11–28.
Klinke, Andreas, and Ortwin Renn. 2000. “Managing Natural Disasters.” In Foresight and Precaution, ed. M. P. Cottam, D. W. Harvey, R. P. Pape, and Joyce Tait, 83–87. Rotterdam: Balkema.
———. 2002. “A New Approach to Risk Evaluation and Management: Risk-Based, Precaution-Based and Discourse-Based Management.” Risk Analysis 22(6): 1071–1094.
———. 2010. “Risk Governance: Contemporary and Future Challenges.” In Regulating Chemical Risks: European and Global Perspectives, ed. John Eriksson, Michael Gilek, and Christina Rudén, 9–28. Berlin: Springer.
———. 2012. “Adaptive and Integrative Governance on Risk and Uncertainty.” Journal of Risk Research 15(3): 273–292.
Knight, Andrew J., and Rex Warland. 2005. “Determinants of Food Safety Risks: A Multidisciplinary Approach.” Rural Sociology 70:253–275.
Knight, Frank H. 1921. Risk, Uncertainty, and Profit. Boston: Houghton Mifflin.
Knight, Kyle, and Eugene A. Rosa. 2011. “The Environmental Efficiency of Well-Being: A Cross-National Analysis.” Social Science Research 40:931–949.


Knorr-Cetina, Karin D. 1981. The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science. Oxford: Pergamon.
Kuhn, Thomas. 1970. The Structure of Scientific Revolutions (1962), 2nd ed. Chicago: University of Chicago Press.
Kunreuther, Howard, and Paul Slovic. 1996. “Science, Values, and Risk.” In Annals of the American Academy of Political and Social Science: Challenges in Risk Assessment and Risk Management, ed. Howard Kunreuther and Paul Slovic, 116–125. Thousand Oaks, CA: Sage.
Kunsch, B. 1998. “Risk Management in Practice.” In Risk Perception, Risk Communication, and Its Application to EMF Exposure, ICNIRP report no. 5, ed. International Commission on Non-ionizing Radiation Protection, 327–342. Vienna: ICNIRP.
Kweit, Mary G., and Robert W. Kweit. 1987. “The Politics of Policy Analysis: The Role of Citizen Participation in Analytic Decision Making.” In Citizen Participation in Public Decision Making, ed. Jack DeSario and Stuart Langton, 19–38. Westport, CT: Greenwood.
Laird, Frank. 1993. “Participatory Analysis: Democracy and Technological Decision Making.” Science, Technology, and Human Values 18(3): 341–361.
Lash, Scott. 2000. “Risk Culture.” In The Risk Society and Beyond: Critical Issues for Social Theory, ed. Barbara Adam, Ulrich Beck, and Joost van Loon, 47–62. London: Sage.
Lash, Scott, and John Urry. 1994. Economies of Signs and Space. London: Sage.
Latour, Bruno. 1987. Science in Action. Milton Keynes: Open University Press.
———. 2003. “Is Re-modernization Occurring—And If So, How to Prove It?” Theory, Culture and Society 20:35–48.
———. 2004. Politics of Nature: How to Bring the Sciences into Democracy. Cambridge, MA: Harvard University Press.
Latour, Bruno, and Steve Woolgar. 1979. Laboratory Life: The Social Construction of Scientific Facts. London: Sage.
Laudan, Larry. 1996. “The Pseudo-Science of Science? The Demise of the Demarcation Problem.” In Beyond Positivism and Relativism: Theory, Method and Evidence, ed. Larry Laudan, 166–192. Boulder, CO: Westview.
Leach, William D. 2005. “Public Involvement in USDA Forest Service Policymaking: A Literature Review.” Journal of Forestry 104(1): 43–49.
Lee, Terence R. 1998. “The Perception of Risks: An Overview of Research and Theory.” In Risk Perception, Risk Communication and Its Application to EMF Exposure, ICNIRP report no. 5, ed. International Commission on Non-ionizing Radiation Protection, 77–101. Vienna: ICNIRP.
Leontief, Wassily. 1983. “Academic Economics Continued.” Science 219:904.
Lewandowski, Joseph D. 2003. “Disembedded Democracy? Globalization and the ‘Third Way.’” European Journal of Social Theory 6(1): 115–131.
Liberatore, Angela, and Silvio Funtowicz. 2003. “Democratizing Expertise, Expertising Democracy: What Does This Mean, and Why Bother?” Science and Public Policy 30(3): 146–150.
Lidskog, Rolf. 2008. “Scientised Citizens and Democratised Science: Re-assessing the Expert-lay Divide.” Journal of Risk Research 11(1): 69–86.
Lidskog, Rolf, Ylva Uggla, and Linda Soneryd. 2011. “Making Transboundary Risks Governable: Reducing Complexity, Constructing Spatial Identity and Ascribing Capabilities.” AMBIO 40(2): 111–120.
Lindenberg, Siegwart. 1985. “An Assessment of the New Political Economy: Its Potential for the Social Sciences and for Sociology in Particular.” Sociological Theory 3:99–114.
Linnerooth-Bayer, Joanne, and Kevin B. Fitzgerald. 1996. “Conflicting Views on Fair Siting Processes: Evidence from Austria and the US.” Risk: Health, Safety and Environment 7(2): 119–134.
Linnerooth-Bayer, Joanne, Ragnar E. Löfstedt, and Gunnar Sjöstedt, eds. 2001. Transboundary Risk Management. London: Earthscan.
Löfstedt, Ragnar. 1997. Risk Evaluation in the United Kingdom: Legal Requirements, Conceptual Foundations, and Practical Experiences with Special Emphasis on Energy Systems. Working paper no. 92. Stuttgart: Center of Technology Assessment.


———. 2003. “Science Communication and the Swedish Acrylamide ‘Alarm.’” Journal of Health Communication 8:407–432.
———. 2005. Risk Management in Post-Trust Societies. London: Palgrave Macmillan.
Loon, Joost van. 2002. Risk and Technological Culture: Towards a Sociology of Virulence. London: Routledge.
Lowi, Theodore J. 1964. “Four Systems of Policy, Politics, and Choice.” Public Administration Review 32:298–310.
Luhmann, Niklas. 1980. Trust and Power. New York: Wiley.
———. 1982. “The World Society as a Social System.” International Journal of General Systems 8:131–138.
———. 1983. Legitimation durch Verfahren. Frankfurt am Main: Suhrkamp.
———. 1984. Soziale Systeme. Frankfurt am Main: Suhrkamp.
———. 1986. “The Autopoiesis of Social Systems.” In Sociocybernetic Paradoxes: Observation, Control and Evolution of Self-Steering Systems, ed. R. Felix Geyer and Johannes van der Zouwen, 172–192. London: Sage.
———. 1989. Ökologische Kommunikation/Ecological Communication (1986), trans. John Bednarz Jr. Cambridge: Polity.
———. 1990. “Technology, Environment, and Social Risk: A Systems Perspective.” Industrial Crisis Quarterly 4:231.
———. 1993. Risk: A Sociological Theory (1989), trans. Rhodes Barrett. New York: Aldine de Gruyter.
———. 1995. Social Systems. Stanford, CA: Stanford University Press.
Lupton, Deborah. 1999. Risk. London: Routledge.
Lyall, Catherine, and Joyce Tait. 2004. “Shifting Policy Debates and the Implications for Governance.” In New Modes of Governance: Developing an Integrated Policy Approach to Science, ed. Catherine Lyall and Joyce Tait, 3–17. Aldershot: Ashgate.
Lynn, Frances M. 1986. “The Interplay of Science and Values in Assessing and Regulating Environmental Risks.” Science, Technology and Human Values 11(2): 40–50.
———. 1987. “Citizen Involvement in Hazardous Waste Sites: Two North Carolina Success Stories.” Environmental Impact Assessment Review 7:347–361.
———. 1990. “Public Participation in Risk Management Decisions: The Right to Define, the Right to Know, and the Right to Act.” Risk Issues in Health and Safety 1:95–101.
MacGregor, Donald G., and M. Granger Morgan. 1994. “Perception of Risks from Electromagnetic Fields: A Psychometric Evaluation of a Risk Communication Approach.” Risk Analysis 14(5): 815–828.
Machlis, Gary, and Eugene A. Rosa. 1990. “Desired Risk: Broadening the Social Amplification of Risk Framework.” Risk Analysis 10:161–168.
Majone, Giandomenico. 1989. Evidence, Argument, and Persuasion in the Policy Process. New Haven, CT: Yale University Press.
Marcuse, Herbert. 1964. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon.
Marshall, Brent K. 1999. “Globalization, Environmental Degradation, and Ulrich Beck’s Risk Society.” Environmental Values 8(2): 253–275.
Marti, Kurt, Yuri Ermoliev, and Marek Makowski. 2010. Coping with Uncertainty: Robust Solutions. Berlin: Springer.
Mason, Michael. 2005. The New Accountability: Environmental Responsibility across Borders. London: Earthscan.
Matten, Dirk. 2004. “Editorial: The Risk Society Thesis in Environmental Politics and Management—A Global Perspective.” Journal of Risk Research 7:371–376.
McCarthy, T. A. 1973. “A Theory of Communicative Competence.” Philosophy of the Social Sciences 3:135–156.
McDaniels, Timothy. 1996. “The Structured Value Referendum: Eliciting Preferences for Environmental Policy Alternatives.” Journal of Policy Analysis and Management 15(2): 227–251.


———. 1998. “Ten Propositions for Untangling Descriptive and Prescriptive Lessons in Risk Perception Findings.” Reliability Engineering and System Safety 59:129–134.
McNeil, B., S. Pauker, H. Sox Jr., and A. Tversky. 1982. “On the Elicitation of Preferences for Alternatives.” New England Journal of Medicine 306:1259–1262.
Meadows, Donella H., Dennis L. Meadows, and Jorgen Randers. 1972. The Limits to Growth. Stuttgart: DVA.
———. 1992. The New Limits of Growth—The Situation of Mankind: A Threat and a Chance for the Future. Stuttgart: DVA.
Meister, Hans-Peter, and Felix Oldenburg. 2008. Beteiligung—Ein Programm für Politik, Wirtschaft und Gesellschaft. Heidelberg: Physica.
Menand, Louis. 2001. The Metaphysical Club. New York: Farrar, Straus, and Giroux.
Meyer, Robert. 2010. “Why We Still Fail to Learn from Disasters.” In The Irrational Economist: Making Decisions in a Dangerous World, ed. Erwann Michel-Kerjan and Paul Slovic, 124–131. New York: Perseus Books.
Millennium Ecosystem Assessment. 2005. Ecosystems and Human Well-Being: Synthesis. Washington, DC: Island.
Miller, Greg. 2006. “Psychology: Tough Decisions? Don’t Sweat It.” Science 311:935.
Mills, C. Wright. 1959. The Sociological Imagination. New York: Oxford University Press.
Millstone, Erik, Patrick van Zwanenberg, Claire Marris, Les Levidow, and Helge Torgersen. 2004. Science in Trade Disputes Related to Potential Risks: Comparative Case Studies. Seville: Institute for Prospective Technological Studies.
Mol, Arthur P. J., and Gert Spaargaren. 1993. “Environment, Modernity, and the Risk-Society: The Apocalyptic Horizon of Environmental Reform.” International Sociology 8:431–459.
Moore, Christopher. 1986. The Mediation Process: Practical Strategies for Resolving Conflict. San Francisco: Jossey-Bass.
Morgan, Millett G., and Max Henrion. 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press.
Münch, Richard. 1982. Basale Soziologie. Soziologie der Politik. Opladen: Westdeutscher Verlag.
———. 1996. Risikopolitik. Frankfurt am Main: Suhrkamp.
Munich RE. 2011. Significant Natural Catastrophes 1980–2011. NatCat Service. Available at http://www.munichre.com/en/reinsurance/business/non.life/georisks/natcatservice/default.aspx. Accessed March 28, 2012.
Mythen, Gabe. 2004. Ulrich Beck: A Critical Introduction to the Risk Society. London: Pluto.
———. 2005a. “From Goods to Bads? Revisiting the Political Economy of Risk.” Sociological Research Online 10(3). Available at http://www.socresonline.org.uk/10/3/mythen.html. Accessed May 3, 2007.
———. 2005b. “Employment, Individualization, and Insecurity: Rethinking the Risk Society Perspective.” Sociological Review 53:129–149.
National Research Council. 1996. Understanding Risk. Washington, DC: National Academies Press.
———. 2003. One Step at a Time: The Staged Development of Geological Repositories for High-Level Radioactive Waste. Washington, DC: National Academy Press.
———. 2004. Implementing Climate and Global Change Research: A Review of the Final U.S. Climate Change Science Program Strategic Plan. Report of the Committee to Review the U.S. Climate Change Science Program Strategic Plan. Washington, DC: National Academy Press.
———. 2005. Decision Making for the Environment: Social and Behavioral Science Research Priorities, ed. Garry D. Brewer and Paul C. Stern. Panel on Social and Behavioral Science Research Priorities for Environmental Decision Making. Washington, DC: National Academy Press.
———. 2008. Public Participation in Environmental Assessment and Decision Making. Washington, DC: National Academy Press.
Nelkin, Dorothy. 1977. Technological Decisions and Democracy. Beverly Hills, CA: Sage.

References


Nelson, Thomas E., Zoe M. Oxley, and Rosalee A. Clawson. 1997. “Toward a Psychology of Framing Effects.” Political Behavior 19(3): 221–246.
Norgaard, Richard. 1994. Development Betrayed: The End of Progress and the Revisioning of the Future. London: Routledge.
North, D. Warner, and Ortwin Renn. 2005. Decision Analytic Tools and Participatory Decision Processes. Paper prepared for the Panel on Public Participation in Environmental Assessment and Decision Making. Washington, DC: National Research Council.
Nowotny, Helga, Peter Scott, and Michael Gibbons. 2001. Re-thinking Science: Knowledge and the Public in an Age of Uncertainty. Cambridge: Polity.
Nye, Joseph S., and John D. Donahue, eds. 2000. Governance in a Globalising World. Washington, DC: Brookings Institution.
O’Brien, Martin. 1999. “Theorizing Modernity.” In Theorizing Modernity: Reflexivity, Environment and Identity in Giddens’ Social Theory, ed. Martin O’Brien, Sue Penna, and Colin Hay, 17–38. London: Addison Wesley Longman.
Ogiwara, Noriko. 1993. Dragon Sword and Wind Child, trans. Cathy Hirano. New York: Farrar, Straus, and Giroux.
Okrent, David. 1998. “Risk Perception and Risk Management: On Knowledge, Resource Allocation and Equity.” Reliability Engineering and Systems Safety 59:17–25.
Olson, Mancur. 1971. The Logic of Collective Action: Public Goods and the Theory of Groups (1965), rev. ed. Cambridge, MA: Harvard University Press.
———. 1984. Participatory Pluralism: Political Participation and Influence in the United States and Sweden. Chicago: Nelson-Hall.
Ong, Aihwa. 2004. “Assembling around SARS: Technology, Body, Heat, and Political Fever in Risk Society.” In Ulrich Becks kosmopolitisches Projekt. Auf den Weg in eine andere Soziologie, ed. Angelika Poferl and Natan Sznaider, 81–89. Baden-Baden: Nomos.
Opp, Karl-Dieter. 1989. The Rationality of Political Protest: A Comparative Analysis of Rational Choice Theory. Boulder, CO: Westview.
Organization for Economic Cooperation and Development. 2003.
Emerging Systemic Risks: Final Report to the OECD Futures Project. Paris: OECD.
Otway, Harry, and Kerry Thomas. 1982. “Reflections on Risk Perception and Policy.” Risk Analysis 2:69–82.
Otway, Harry, and Detlof von Winterfeldt. 1982. “Beyond Acceptable Risk: On the Social Acceptability of Technologies.” Policy Sciences 14(3): 247–256.
Parsons, Talcott. 1951. The Social System. Glencoe, IL: Free Press.
———. 1963. “On the Concept of Political Power.” Proceedings of the American Philosophical Society 17:352–403.
———. 1967. Sociological Theory and Modern Society. New York: Free Press.
———. 1971. The System of Modern Societies. Englewood Cliffs, NJ: Prentice-Hall.
Parsons, Talcott, and Edward A. Shils, eds. 1951. Toward a General Theory of Action. Cambridge: Cambridge University Press.
Pelling, Mark, Chris High, John Dearing, and Denis Smith. 2008. “Shadow Spaces for Social Learning: A Relational Understanding of Adaptive Capacity to Climate Change within Organisations.” Environment and Planning 40:867–884.
Pellizzoni, Luigi. 1999. “Reflexive Modernization and Beyond: Knowledge and Value in the Politics of Environment and Technology.” Theory, Culture, and Society 16(4): 99–125.
Pellow, David N. 2000. “Environmental Inequality Formation.” American Behavioral Scientist 43:581–601.
Perrow, Charles. 1999. Normal Accidents: Living with High-Risk Technologies (1984), updated ed. Princeton, NJ: Princeton University Press.
Peters, Bernhard. 2005. “Public Discourse, Identity, and the Problem of Democratic Legitimacy.” In Making the European Polity: Reflexive Integration in the EU, ed. Erik Oddvar Eriksen, 84–123. London: Routledge.



Phillips, Lord of Worth Matravers, June Bridgeman, and Malcolm Ferguson-Smith. 2000. The BSE Inquiry, Volume 1: Findings and Conclusion. London: Stationery Office.
Phthalate Esters Panel, American Chemistry Council. 2003. “Phthalates and Children’s Toys.” Available at http://www.phthalates.org. Accessed April 11, 2003.
Picou, J. Steven, and Duane A. Gill. 2000. “The Exxon Valdez Disaster as Localized Environmental Catastrophe: Dissimilarities to Risk Society Theory.” In Risk in the Modern Age: Social Theory, Science, and Environmental Decision-Making, ed. Maurie J. Cohen, 143–170. New York: St. Martin’s Press.
Pidgeon, Nick F. 1997. “The Limits to Safety? Culture, Politics, Learning and Manmade Disasters.” Journal of Contingencies and Crisis Management 5(1): 1–14.
Pidgeon, Nick F., Wouter Poortinga, Gene Rowe, Tom Horlick-Jones, John Walls, and Tim O’Riordan. 2005. “Using Surveys in Public Participation Processes for Risk Decision Making: The Case of the 2003 British GM Nation Public Debate.” Risk Analysis 25(2): 467–479.
Pierre, Jon, and B. Guy Peters. 2000. Governance, Politics and the State. London: Macmillan.
Pollard, Simon J. T., Raquel Duarte Davidson, Roger Yearsley, Clare Twigger-Ross, John Fisher, R. Willows, and James Irwin. 2000. A Strategic Approach to the Consideration of “Environmental Harm.” Bristol: Environment Agency.
Poortinga, Wouter, and Nick F. Pidgeon. 2003. “Exploring the Dimensionality of Trust in Risk Regulation.” Risk Analysis 23:961–972.
Popper, Karl. 1959. The Logic of Scientific Discovery. New York: Basic Books.
Postman, Neil. 1993. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.
Prior, Markus, and Arthur Lupia. 2008. “Money, Time, and Political Knowledge: Distinguishing Quick Recall and Political Learning Skills.” American Journal of Political Science 52:169–183.
Raiffa, Howard. 1994. The Art and Science of Negotiation, 12th ed. Cambridge: Cambridge University Press.
Rantanen, Jorma. 1997.
“Global Asbestos Epidemic: Is It Over?” Proceedings of an Expert Meeting on Asbestos, Asbestosis, and Cancer. People and Work Research report no. 14. Helsinki: Finnish Institute of Occupational Health.
———. 2008. “Challenges to Global Governance in the Changing World of Work.” In Risks in Modern Society, ed. Hans-Jürgen Bischoff, 17–60. Heidelberg: Springer.
Rasmussen, Mikkel Vedby. 2006. The Risk Society at War: Terror, Technology and Strategy in the Twenty-First Century. Cambridge: Cambridge University Press.
Rauschmayer, Felix, Jouni Paavola, and Heidi Wittmer. 2009. “European Governance of Natural Resources and Participation in a Multi-Level Context: An Editorial.” Environmental Policy and Governance 19(3): 141–147.
Rauschmayer, Felix, and Heidi Wittmer. 2006. “Evaluating Deliberative and Analytical Methods for the Resolution of Environmental Conflicts.” Land Use Policy 23:108–122.
Rayner, Steve. 1984. “Disagreeing about Risk: The Institutional Cultures of Risk Management and Planning for Future Generations.” In Risk Analysis, Institutions, and Public Policy, ed. Susan G. Hadden, 150–178. Port Washington, NY: Associated Faculty Press.
———. 1987. “Risk and Relativism in Science for Policy.” In The Social and Cultural Construction of Risk, ed. Branden B. Johnson and Vincent T. Covello, 5–23. Dordrecht: Reidel.
———. 1992. “Cultural Theory and Risk Analysis.” In Social Theories of Risk, ed. Sheldon Krimsky and Dominic Golding, 83–115. Westport, CT: Praeger.
Reese, Stephen D. 2007. “The Framing Project: A Bridging Model for Media Research Revisited.” Journal of Communication 57:148–154.
Renn, Ortwin. 1992. “Concepts of Risk: A Classification.” In Social Theories of Risk, ed. Sheldon Krimsky and Dominic Golding, 53–79. Westport, CT: Praeger.
———. 1999. “A Model for an Analytic-Deliberative Process in Risk Management.” Environmental Science and Technology 33:3049–3055.
———. 2004.
“The Challenge of Integrating Deliberation and Expertise: Participation and Discourse in Risk Management.” In Risk Analysis and Society: An Interdisciplinary Characterization of the Field, ed. Timothy L. McDaniels and Mitchell J. Small, 289–366. Cambridge: Cambridge University Press.
———. 2006. “Risk Communication Insights and Experiences in the EMF Debate.” In Mobile EMF and Communication: International Perspectives, ed. Masatoshi Nishizawa, 87–101. Tokyo: National Institute for Earth Science and Disaster.
———. 2008a. “Concepts of Risk: An Interdisciplinary Review, Part 1: Disciplinary Risk Concepts.” GAIA 17(1): 50–66.
———. 2008b. “Concepts of Risk: An Interdisciplinary Review, Part 2: Integrative Approaches.” GAIA 17(2): 196–204.
———. 2008c. Risk Governance: Coping with Uncertainty in a Complex World. London: Earthscan.
———. 2010. “The Contribution of Different Types of Knowledge towards Understanding, Sharing, and Communicating Risk Concepts.” Catalan Journal of Communication and Cultural Studies 2(2): 177–195.
———. In press. Das Risikoparadox: Warum wir uns vor dem Falschen Fürchten. Munich: Fischer.
Renn, Ortwin, William Burns, Roger E. Kasperson, Jeanne X. Kasperson, and Paul Slovic. 1992. “The Social Amplification of Risk: Theoretical Foundations and Empirical Application.” Journal of Social Issues 48:137–160.
Renn, Ortwin, Marion Dreyer, Andreas Klinke, and Christine Losert. 2002. Systemic Risks: A New Challenge for Risk Management. Paris: OECD.
Renn, Ortwin, and Alexander Jaeger. 2010. “Risk, Liability, and Insurance: Ein Überblick aus Soziologischer Perspektive.” In Risk, Liability, and Insurance. Schwer objektivierbare Krankheiten, Einfluss auf die Versicherungswirtschaft, ed. Munich Re, 7–31. Munich: Munich Re.
Renn, Ortwin, Carlo C. Jaeger, Eugene A. Rosa, and Thomas Webler. 1999. “The Rational Actor Paradigm in Risk Theories: Analysis and Critique.” In Risk in the Modern Age: Social Theory, Science and Environmental Decision-Making, ed. Maurie J. Cohen, 35–61. London: Macmillan.
Renn, Ortwin, and Florian Keil. 2009. “What Is Systemic about Systemic Risks?” GAIA 18(2): 97–99.
Renn, Ortwin, and Andreas Klinke. 2012. “Complexity, Uncertainty and Ambiguity in Inclusive Risk Governance.” In Risk and Social Theory in Environmental Management, ed. Thomas G. Measham and Stewart Lockie, 53–70. Collingwood: CSIRO.
Renn, Ortwin, Andreas Klinke, and Marjolein van Asselt. 2011. “Coping with Complexity, Uncertainty and Ambiguity in Risk Governance: A Synthesis.” AMBIO 40(2): 231–246.
Renn, Ortwin, and Pia Schweizer. 2009. “Inclusive Risk Governance: Concepts and Application to Environmental Policy Making.” Environmental Policy and Governance 19:174–185.
Renn, Ortwin, and Piet Sellke. 2011. “Risk, Society, and Policy Making: Risk Governance in a Complex World.” International Journal of Performability Engineering 7(4): 349–366.
Renn, Ortwin, and Katy Walker. 2008. Global Risk Governance: Concept and Practice Using the IRGC Framework. Berlin: Springer.
Renn, Ortwin, and Thomas Webler. 1998. “Der kooperative Diskurs—Theoretische Grundlagen, Anforderungen, Möglichkeiten.” In Abfallpolitik im kooperativen Diskurs, ed. Ortwin Renn, Hans Kastenholz, Patrick Schild, and Urs Wilhelm, 3–103. Zurich: Hochschulverlag.
Renn, Ortwin, Thomas Webler, Horst Rakel, Peter C. Dienel, and Branden Johnson. 1993. “Public Participation in Decision Making.” Policy Sciences 26:189–214.
Renn, Ortwin, Thomas Webler, and Peter Wiedemann, eds. 1995. Fairness and Competence in Citizen Participation. Dordrecht: Kluwer.
Rip, Arie. 1985. “Experts in Public Arenas.” In Regulating Industrial Risks, ed. Harry Otway and Malcolm Peltu, 94–110. London: Butterworth.
Roberts, Steven. 2010. “Misrepresenting ‘Choice Biographies’? A Reply to Woodman.” Journal of Youth Studies 13:137–149.



Roca, Elisabet, Gonzalo Gamboa, and J. David Tàbara. 2008. “Assessing the Multidimensionality of Coastal Erosion Risks: Public Participation and Multicriteria Analysis in a Mediterranean Coastal System.” Risk Analysis 28(2): 399–412.
Rockström, Johan, Will Steffen, Kevin Noone, Åsa Persson, F. Stuart Chapin III, Eric Lambin, Timothy M. Lenton, Marten Scheffer, Carl Folke, Hans Joachim Schellnhuber, Björn Nykvist, Cynthia A. de Wit, Terry Hughes, Sander van der Leeuw, Henning Rodhe, Sverker Sörlin, Peter K. Snyder, Robert Costanza, Uno Svedin, Malin Falkenmark, Louise Karlberg, Robert W. Corell, Victoria J. Fabry, James Hansen, Brian Walker, Diana Liverman, Katherine Richardson, Paul Crutzen, and Jonathan Foley. 2009. “A Safe Operating Space for Humanity.” Nature 461:472–475.
Rohrmann, Bernd, and Ortwin Renn. 2000. “Risk Perception Research: An Introduction.” In Cross-Cultural Risk Perception: A Survey of Empirical Studies, ed. Ortwin Renn and Bernd Rohrmann, 11–54. Dordrecht: Kluwer.
Rosa, Eugene A. 1988. “NAMBY PAMBY and NIMBY PIMBY: Public Issues in the Siting of Hazardous Waste Facilities.” Forum for Applied Research and Public Policy 3:114–123.
———. 1998a. “Metatheoretical Foundations for Post-Normal Risk.” Journal of Risk Research 1(1): 15–44.
———. 1998b. “Comments on Commentary by Ravetz and Funtowicz: ‘Old-Fashioned Hypertext.’” Journal of Risk Research 1:111–115.
———. 2003. “The Logical Structure of the Social Amplification of Risk Framework (SARF): Metatheoretical Foundations and Policy Implications.” In The Social Amplification of Risk, ed. Nick Pidgeon, Roger E. Kasperson, and Paul Slovic, 47–79. Cambridge: Cambridge University Press.
———. 2008. “White, Black, and Gray: Critical Dialogue with the International Risk Governance Council’s Framework for Risk Governance.” In Global Risk Governance: Concept and Practice Using the IRGC Framework, ed. Ortwin Renn and Katherine D. Walker, 101–118. Dordrecht: Springer.
———. 2010.
“The Logical Status of Risk: To Burnish or Dull.” Journal of Risk Research 13:239–253.
Rosa, Eugene A., and Donald L. Clark. 1999. “Historical Routes to Technological Gridlock: Nuclear Technologies as Prototypical Vehicle.” Research in Social Problems and Public Policy 7:21–57.
Rosa, Eugene A., and Lee Clarke. 2012. “Collective Hunch? Risk as the Real and the Elusive.” Journal of Environmental Studies and Science 2:39–52.
Rosa, Eugene A., Andreas Diekmann, Thomas Dietz, and Carlo C. Jaeger. 2010. Human Footprints on the Global Environment: Threats to Sustainability. Cambridge: Cambridge University Press.
Rosa, Eugene A., Thomas Dietz, Richard H. Moss, Scott Atran, and Susanne Moser. 2012. “Managing the Risks of Climate Change and Terrorism.” Solutions 3(2): 59–65.
Rosa, Eugene A., Aaron M. McCright, and Ortwin Renn. 2010. “Jürgen Habermas e la società del rischio. Incontro di navi transito.” Quaderni di Teoria Sociale 10:55–82.
Rosa, Eugene A., Seth P. Tuler, Baruch Fischhoff, Thomas Webler, Sharon M. Friedman, Richard E. Sclove, Kristin Shrader-Frechette, Mary R. English, Roger E. Kasperson, Robert L. Goble, Thomas M. Leschine, William Freudenburg, Caron Chess, Charles Perrow, Kai Erikson, and James F. Short. 2010. “Nuclear Waste: Knowledge Waste?” Science 329:762–763.
Rosa, Eugene A., Richard F. York, and Thomas Dietz. 2004. “Tracking the Anthropogenic Drivers of Ecological Impacts.” AMBIO 33:509–512.
Rosenau, James N. 1992. “Governance, Order, and Change in World Politics.” In Governance without Government: Order and Change in World Politics, ed. James N. Rosenau and Ernst-Otto Czempiel, 1–29. Cambridge: Cambridge University Press.
———. 1995. “Governance in the 21st Century.” Global Governance 1(1): 13–43.
Rosenbaum, Nelson. 1978. “Citizen Participation and Democratic Theory.” In Citizen Participation in America, ed. Stuart Langton, 43–54. Lexington, MA: Lexington Books.



Rowe, Gene, and Lynn J. Frewer. 2000. “Public Participation Methods: A Framework for Evaluation.” Science, Technology and Human Values 25(1): 3–29.
Rowe, Gene, Ray Marsh, and Lynn J. Frewer. 2004. “Evaluation of a Deliberative Conference.” Science, Technology, and Human Values 29(1): 88–121.
Rowe, William D. 1977. An Anatomy of Risk. Somerset, NJ: John Wiley and Sons.
Royal Swedish Academy of Sciences. 2009. Economic Governance. Stockholm: Royal Swedish Academy of Sciences.
Ruddat, Michael, and Alexander Sautter. 2005. Wahrnehmung und Bewertung des Mobilfunks. Eine empirische Bestandsaufnahme von Studien im Rahmen des Forschungsprojektes Untersuchung der Kenntnis und Wirkung von Informationsmaßnahmen im Bereich Mobilfunk und Ermittlung weiterer Ansatzpunkte zur Verbesserung der Information verschiedener Bevölkerungsgruppen. Stuttgart: Dialogik.
Ruddat, Michael, Alexander Sautter, Ortwin Renn, Uwe Pfenning, and Frank Ulmer. 2010. “Communication about a Communication Technology.” Risk Research 13(3): 261–278.
Rumsfeld, Donald H. 2002. “Press Briefing at the U.S. Department of Defense.” Available at http://www.defenselink.mil/transcripts/2002/+02122002_+212sdv2.html. Accessed June 27, 2004.
Rundle, John B., Donald L. Turcotte, and William Klein, eds. 1996. Reduction and Predictability of Natural Disasters. Boulder, CO: Westview Press.
Saint Augustine of Hippo. 1963. The Confessions of Saint Augustine, trans. Rex Warner. New York: Penguin Books.
Sand, Peter. 2000. “The Precautionary Principle: A European Perspective.” Human and Ecological Risk Assessment 6(3): 445–458.
Sandman, Peter M. 1988. “Hazard versus Outrage: Conceptual Frame for Describing Public Perception of Risk.” In Risk Communication, ed. Helmut Jungermann, Roger E. Kasperson, and Peter Wiedemann, 163–168. Jülich: Research Center.
Schellnhuber, Hans-Joachim. 1999. “‘Earth System’ Analysis and the Second Copernican Revolution.” Nature 402:C19–C23.
Schnaiberg, Allan. 1980.
The Environment: From Surplus to Scarcity. New York: Oxford University Press.
Schneider, Elke, Bettina Oppermann, and Ortwin Renn. 1998. “Implementing Structured Participation for Regional Level Waste Management Planning.” Risk: Health, Safety and Environment 9:379–395.
Schopenhauer, Arthur. 1918. The World as Will and Representation. Cambridge: Cambridge University Press.
Schulz, Marlen, and Ortwin Renn. 2009. “Methodik des Delphis. Die Fragebogenkonstruktion.” In Das Gruppendelphi. Konzept und Fragebogenkonstruktion, ed. Marlen Schulz and Ortwin Renn, 23–45. Wiesbaden: VS Verlag für Sozialwissenschaften.
Schwarcz, Steven L. 2008. “Systemic Risk and the Subprime Mortgage Crisis.” Southern Methodist University Law Review 61(2): 44–56.
Science Advisory Board, U.S. Environmental Protection Agency. 2001. Improved Science-Based Environmental Stakeholder Processes. Report no. EPA-SAB-EC-COM-01-006. Washington, DC: Science Advisory Board.
Scitovsky, Tibor. 1941. “A Note on Welfare Propositions in Economics.” Review of Economic Studies 9(1): 77–88.
Sclove, Richard. 1995. Democracy and Technology. New York: Guilford.
Scott, John. 2002. “Social Class and Stratification in Late Modernity.” Acta Sociologica 45(1): 23–35.
Scott, Richard, and John Meyer. 1991. “The Organization of Societal Sectors: Proposition and Early Evidence.” In The New Institutionalism in Organizational Analysis, ed. Walter Powell and Paul J. DiMaggio, 108–140. Chicago: University of Chicago Press.
Searle, John R. 1995. The Construction of Social Reality. New York: Free Press.
———. 1998. Mind, Language, and Society: Philosophy in the Real World. New York: Basic Books.
Seligman, Adam B. 1991. The Idea of Civil Society. New York: Free Press.



Sen, Amartya K. 1977. “Rational Fools: A Critique of the Behavioral Foundations of Economic Theory.” Philosophy and Public Affairs 6(4): 317–344.
Shearing, Clifford, and Les Johnston. 2005. “Justice in the Risk Society.” Australian and New Zealand Journal of Criminology 38:25–38.
Sheehy, Noel, Judith Wylie, and Gary McKeown. 2002. Quantifying Risk Amplification Processes: A Multi-Level Approach. Contract research report 367/2002, Health and Safety Executive. London: HMSO.
Shils, Edward. 1991. “The Virtues of Civil Society.” Government and Opposition 26(2): 3–20.
Short, James F., Jr. 1984. “The Social Fabric at Risk: Toward the Social Transformation of Risk Analysis.” American Sociological Review 49:711–725.
Short, James F., Jr., and Lee Clarke, eds. 1992. Organizations, Uncertainties, and Risk. Boulder, CO: Westview.
Shrader-Frechette, Kristin S. 1991. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley: University of California Press.
Shrivastava, Paul. 1995. “Ecocentric Management for a Risk Society.” Academy of Management Review 20:118–137.
Simon, Herbert A. 1976. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations. New York: Basic Books.
Simon, Herbert A., and James G. March. 1958. Organizations. New York: Wiley.
Sjöberg, Lennart, Bengt Jansson, Jean Brenot, Lynn Frewer, Anna Prades, and Arnfinn Tönnessen. 2000. Risk Perception in Commemoration of Chernobyl: A Cross-National Study. Stockholm: Center for Risk Research, Stockholm School of Economics.
Skelcher, Chris. 2005. “Jurisdictional Integrity, Polycentrism, and the Design of Democratic Governance.” Governance 18(1): 89–110.
Skillington, Tracey. 1997. “Politics and the Struggle to Define.” British Journal of Sociology 48:493–513.
Sklair, Leslie. 1994. “Global Sociology and Global Environmental Change.” In Social Theory and the Global Environment, ed. Michael Redclift and Ted Benton, 205–227. London: Routledge.
Slovic, Paul. 1987.
“Perception of Risk.” Science 236(4799): 280–285.
———. 1999. “Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield.” Risk Analysis 19:689–701.
———, ed. 2001. The Perception of Risk. London: Earthscan.
Slovic, Paul, Baruch Fischhoff, and Sarah Lichtenstein. 1981a. “Perceived Risk: Psychological Factors and Social Implications.” In Proceedings of the Royal Society, ed. Royal Society, 17–34. London: Royal Society.
———. 1981b. “Facts and Fears: Understanding Perceived Risk.” In Societal Risk Assessment: How Safe Is Safe Enough? ed. Richard Schwing and Walter Albers, 181–216. New York: Plenum.
Sokal, Alan. 1996. “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity.” Social Text 46–47:217–252.
Sørensen, Mads P., and Allan Christiansen. 2012. Ulrich Beck: An Introduction to the Theory of Second Modernity and the Risk Society. London: Routledge.
Spaargaren, Gert, and Arthur P. J. Mol. 1992. “Sociology, Environment, and Modernity: Ecological Modernization as a Theory of Social Change.” Society and Natural Resources 5(4): 323–344.
Starr, Chauncey. 1969. “Social Benefits versus Social Risks.” Science 165:1228–1232.
Starr, Chauncey, and Chris Whipple. 1980. “Risks of Risk Decisions.” Science 208(440): 1114–1119.
Stehr, Nico. 2001. The Fragility of Modern Societies. London: Sage.
Stern, Paul C., and Thomas Dietz. 1994. “The Value Basis for Environmental Concern.” Journal of Social Issues 50:65–84.
Stern, Paul C., Thomas Dietz, Troy Abel, Gregory A. Guagnano, and Linda Kalof. 1998. “A Value-Belief-Norm Theory of Support for Social Movements: The Case of Environmentalism.” Human Ecology Review 6:81–97.



Stern, Paul C., Thomas Dietz, and Linda Kalof. 1993. “Value Orientations, Gender, and Environmental Concern.” Environment and Behavior 25:322–348.
Stern, Paul C., and Harvey V. Fineberg, eds. 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.
Stirling, Andrew. 2003. “Risk, Uncertainty and Precaution: Some Instrumental Implications from the Social Sciences.” In Negotiating Change, ed. Frans Berkhout, Melissa Leach, and Ian Scoones, 33–76. London: Edward Elgar.
———. 2004. “Opening Up or Closing Down: Analysis, Participation and Power in the Social Appraisal of Technology.” In Science, Citizenship and Globalisation, ed. Melissa Leach, Ian Scoones, and Brian Wynne, 218–231. London: Zed.
———. 2007. “Risk Assessment in Science: Towards a More Constructive Policy Debate.” EMBO Reports 8:309–315.
———. 2008. “Pluralism in the Social Appraisal of Technology.” Science, Technology, and Human Values 33(4): 262–294.
Stoll-Kleemann, Susanne, and Martin Welp, eds. 2006. Stakeholder Dialogues in Natural Resources Management: Theory and Practice. Heidelberg: Springer.
Strydom, Piet. 2002. Risk, Environment, and Society: Ongoing Debates, Current Issues, and Future Prospects. Buckingham: Open University Press.
Suchman, Mark. 1995. “Managing Legitimacy: Strategic and Institutional Approaches.” Academy of Management Review 20(3): 571–610.
Sutter, Barbara. 2004. “Governing by Civil Society: Citizenship within a New Social Contract.” In Reflexive Representations: Politics, Hegemony, and Discourse in Global Capitalism, ed. Johannes Angermüller, Dirk Wiemann, and Raj Kollmorgen, 155–168. Münster: LIT.
Sweeney, Robin. 2004. Environmental Risk Decision-Making: Risk Professionals and Their Use of Analytic-Deliberative Processes. Fairfax, VA: George Mason University.
Taylor, Gordon R. 1979. The Doomsday Book: Can the World Survive? New York: World Publishing.
Teigen, Karl H., Wibecke Brun, and Paul Slovic. 1988.
“Societal Risks as Seen by a Norwegian Public.” Journal of Behavioral Decision Making 1:111–130.
Thompson, Michael, Richard J. Ellis, and Aaron Wildavsky. 1990. Cultural Theory. Boulder, CO: Westview.
Thoreau, Henry David. 1854. Walden; Or, Life in the Woods. Boston: Ticknor and Fields.
Tuler, Seth. 1996. “Meanings, Understandings, and Interpersonal Relationships in Environmental Policy Discourse.” PhD diss., Clark University, Worcester, MA.
Tuler, Seth, and Thomas Webler. 1995. “Process Evaluation for Discursive Decision Making in Environmental and Risk Policy.” Human Ecology Review 2:62–74.
———. 1999. “Designing an Analytic Deliberative Process for Environmental Health Policy Making in the U.S. Nuclear Weapons Complex.” Risk: Health, Safety and Environment 10(65): 65–87.
Tulloch, John, and Deborah Lupton. 2003. Risk and Everyday Life. Thousand Oaks, CA: Sage.
Tversky, Amos. 1972. “Elimination by Aspects: A Theory of Choice.” Psychological Review 79:281–299.
Tversky, Amos, and Daniel Kahneman. 1974. “Judgment under Uncertainty: Heuristics and Biases.” Science 185:1124–1131.
———. 1987. “Rational Choice and the Framing of Decisions.” In Rational Choice: The Contrast between Economics and Psychology, ed. Robin M. Hogarth and Melvin W. Reder, 67–84. Chicago: University of Chicago Press.
United Nations. 2009. “2009 Revision of World Urbanization Prospects.” Available at http://esa.un.org/unpd/wup/index.htm. Accessed March 28, 2012.
United Nations Economic Commission for Europe. 2007. Hemispheric Transport of Air Pollution 2007. Air Pollution Studies No. 16. New York: United Nations.
Urry, John. 2004. “Introduction: Thinking Society Anew.” In Conversations with Ulrich Beck, ed. Ulrich Beck and Johannes Willms, 1–10. Cambridge: Polity Press.
———. 2011. Climate Change and Society. Cambridge: Polity.



van Asselt, Marjolein. 2000. Perspectives on Uncertainty and Risk. Dordrecht: Kluwer.
———. 2005. “The Complex Significance of Uncertainty in a Risk Area.” International Journal of Risk Assessment and Management 5:125–158.
———. 2007. “Risk Governance: Over omgaan met onzekerheid en mogelijke toekomsten.” Inaugural lecture, Universiteit Maastricht.
van Asselt, Marjolein, and Ortwin Renn. 2011. “Risk Governance.” Risk Research 1(4): 431–449.
van den Daele, Wolfgang. 1992. “Scientific Evidence and the Regulation of Technical Risks: Twenty Years of Demythologizing the Experts.” In The Culture and Power of Knowledge: Inquiries into Contemporary Societies, ed. Nico Stehr and Richard V. Ericson, 323–340. Berlin: Walter de Gruyter.
van den Hove, Sybille. 2007. “A Rationale for Science-Policy Interface.” Futures 39(7): 3–22.
Van der Sluijs, Jeroen P., Matthieu Craye, Silvio Funtowicz, Penny Kloprogge, Jerry Ravetz, and James Risbey. 2005. “Combining Quantitative and Qualitative Measures of Uncertainty in Model Based Environmental Assessment: The NUSAP System.” Risk Analysis 25(2): 481–492.
Vari, Anna. 1989. “Approaches towards Conflict Resolution in Decision Processes.” In Social Decision Methodology for Technological Projects, ed. Charles Vlek and George Cvetkovich, 74–94. Dordrecht: Kluwer.
Vaughan, Diane. 1996. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: University of Chicago Press.
Vitousek, Peter M., Harold A. Mooney, Jane Lubchenco, and Jerry M. Melillo. 1997. “Human Domination of Earth’s Ecosystems.” Science 277:494–499.
Vlek, Charles, and George Cvetkovich. 1989. “Social Decision Making on Technological Projects: Review of Key Issues and a Recommended Procedure.” In Social Decision Methodology for Technological Projects, ed. Charles Vlek and George Cvetkovich, 297–322. Dordrecht: Kluwer.
von Schomberg, Rene. 1992.
“Argumentation im Kontext wissenschaftlicher Kontroversen.” In Zur Anwendung der Diskursethik in Politik, Recht, Wissenschaft, ed. Karl-Otto Apel and Matthias Kettner, 260–277. Frankfurt am Main: Suhrkamp.
———. 1995. “The Erosion of the Valuespheres: The Ways in Which Society Copes with Scientific, Moral and Ethical Uncertainty.” In Contested Technology: Ethics, Risk and Public Debate, ed. Rene von Schomberg, 13–28. Tilburg: International Centre for Human and Public Affairs.
von Winterfeldt, Detlof. 1987. “Value Tree Analysis: An Introduction and an Application to Offshore Oil Drilling.” In Insuring and Managing Hazardous Risks: From Seveso to Bhopal and Beyond, ed. Paul R. Kleindorfer and Howard C. Kunreuther, 349–385. Heidelberg: Springer.
von Winterfeldt, Detlof, and Ward Edwards. 1986. Decision Analysis in Behavioral Research. Cambridge: Cambridge University Press.
Voß, Jan-Peter, Dierk Bauknecht, and René Kemp. 2006. Reflexive Governance for Sustainable Development. Cheltenham: Edward Elgar.
Waldo, John. 1987. “Win/Win Does Work.” Timber, Fish, Wildlife: A Report from the Northwest Renewable Resources Center 1(1): 7.
Waller, Ray A., and Vincent T. Covello. 1984. Low-Probability High-Consequence Risk Analysis. New York: Plenum.
Walls, John, Tim O’Riordan, Tom Horlick-Jones, and Jörg Niewöhner. 2005. “The Meta-Governance of Risk and New Technologies: GM Crops and Mobile Phones.” Risk Research 8(7–8): 635–661.
Warren, Mark E. 1993. “Can Participatory Democracy Produce Better Selves?” Political Psychology 14:209–234.
———. 2002. “Deliberative Democracy.” In Democratic Theory Today: Challenges for the 21st Century, ed. April Carter and Geoffrey Stokes, 173–202. Albany: State University of New York Press.



Watson, Sean, and Anthony Moran, eds. 2005. Trust, Risk, and Uncertainty. Hampshire: Palgrave Macmillan.
Webb, Stephen A. 2006. Social Work in a Risk Society: Social and Political Perspectives. Hampshire: Palgrave Macmillan.
Webler, Thomas. 1995. “‘Right’ Discourse in Citizen Participation: An Evaluative Yardstick.” In Fairness and Competence in Citizen Participation: Evaluating New Models for Environmental Discourse, ed. Ortwin Renn, Thomas Webler, and Peter Wiedemann, 35–86. Dordrecht: Kluwer.
———. 1999. “The Craft and Theory of Public Participation.” Risk Research 2:55–71.
Webler, Thomas, Debra Levine, Horst Rakel, and Ortwin Renn. 1991. “The Group Delphi: A Novel Attempt at Reducing Uncertainty.” Technological Forecasting and Social Change 39:253–263.
Webler, Thomas, Horst Rakel, Ortwin Renn, and Branden Johnson. 1995. “Eliciting and Classifying Concerns: A Methodological Critique.” Risk Analysis 15(3): 421–436.
Webler, Thomas, Seth Tuler, and Rob Krueger. 2001. “What Is a Good Public Participation Process?” Environmental Management 27:435–450.
Weinberg, Steven. 2001. Facing Up: Science and Its Cultural Adversaries. Cambridge, MA: Harvard University Press.
Weingart, Peter. 1999. “Scientific Expertise and Political Accountability: Paradoxes of Science in Politics.” Science and Public Policy 26:151–161.
Welp, Martin, and Susanne Stoll-Kleemann. 2006. “Integrative Theory of Reflexive Dialogues.” In Stakeholder Dialogues in Natural Resources Management: Theory and Practice, ed. Susanne Stoll-Kleemann and Martin Welp, 43–78. Heidelberg: Springer.
Whitfield, Stephen C., Eugene A. Rosa, Amy Dan, and Thomas Dietz. 2009. “Nuclear Power: Value Orientation and Risk Perception.” Risk Analysis 29:425–437.
Wiener, Jonathan B. 2011. “The Real Pattern of Precaution.” In The Reality of Precaution: Comparing Risk Regulation in the United States and Europe, ed. Jonathan B. Wiener, Michael D. Rogers, James K. Hammitt, and Peter H. Sand, 519–566. London: Earthscan.
Wildavsky, Aaron, and Karl Dake. 1990. “Theories of Risk Perception: Who Fears What and Why?” Daedalus 119(4): 41–60.
Wilkinson, Iain. 2010. Risk, Vulnerability, and Everyday Life. London: Routledge.
Williams, M. J. 2008. “(In)Security Studies, Reflexive Modernization and the Risk Society.” Cooperation and Conflict 43:57–79.
Willke, Helmut. 1995. Systemtheorie III: Steuerungstheorie. Stuttgart: UTB Fischer.
Wilson, Richard, and Edmund A. C. Crouch. 2001. Risk-Benefit Analysis, 2nd ed. Cambridge, MA: Harvard University Press.
Wisner, Ben, Piers Blaikie, Terry Cannon, and Ian Davis. 2004. At Risk: Natural Hazards, People’s Vulnerability and Disasters. London: Routledge.
Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen (German Advisory Council on Global Change). 2000. World in Transition: Strategies for Managing Global Environmental Risks. Annual report. Heidelberg: Springer.
———. 2003. World in Transition: Towards Sustainable Energy Systems. Annual report. Heidelberg: Springer.
Wolf, Klaus Dieter. 2002. “Contextualizing Normative Standards for Legitimate Governance Beyond the State.” In Participatory Governance: Political and Societal Implications, ed. Jürgen R. Grote and Bernard Gbikpi, 30–50. Opladen: Leske and Budrich.
Wondolleck, Julia, Nancy Manring, and James Crowfoot. 1996. “Teetering at the Top of the Ladder: The Experience of Citizen Group Participants in Alternative Dispute Resolution Processes.” Sociological Perspectives 39(2): 249–262.
Woodman, Dan. 2009. “The Mysterious Case of the Pervasive Choice Biography: Ulrich Beck, Structure/Agency, and the Middling State of Theory in the Sociology of Youth.” Journal of Youth Studies 12:243–256.
World Health Organization. 2007. Extremely Low Frequency (ELF) Fields. Environmental Health Criteria 238. Geneva: WHO.

———. 2011. Electromagnetic Fields and Public Health. Fact sheet no. 193. Geneva: WHO.
Wright, George. 1984. Behavioral Decision Theory. Beverly Hills, CA: Sage.
Wynne, Brian. 1989. “Sheepfarming after Chernobyl.” Environment 31:11–15, 33–39.
———. 1992. “Uncertainty and Environmental Learning: Reconceiving Science and Policy in the Preventive Paradigm.” Global Environmental Change 2:111–127.
———. 1996. “May the Sheep Safely Graze? A Reflexive View of the Expert–Lay Knowledge Divide.” In Risk, Environment, and Modernity: Towards a New Ecology, ed. Scott Lash, Bronislaw Szerszynski, and Brian Wynne, 44–83. London: Sage.
———. 2002. “Risk and Environment as Legitimatory Discourses of Technology: Reflexivity Inside Out?” Current Sociology 50:459–477.
Wynne, Brian, and Kerstin Dressel. 2001. “Cultures of Uncertainty: Transboundary Risks and BSE in Europe.” In Transboundary Risk Management, ed. Joanne Linnerooth-Bayer, Ragnar E. Löfstedt, and Gunnar Sjöstedt, 121–154. London: Earthscan.
Yankelovich, Daniel. 1991. Coming to Public Judgment: Making Democracy Work in a Complex World. Syracuse, NY: Syracuse University Press.
Yearley, Steven. 2000. “Making Systematic Sense of Public Discontents with Expert Knowledge: Two Analytic Approaches and a Case Study.” Public Understanding of Science 9:105–122.
York, Richard, Eugene A. Rosa, and Thomas Dietz. 2003. “Footprints on the Earth: The Environmental Consequences of Modernity.” American Sociological Review 68:279–300.
Yosie, Terry F., and Timothy D. Herbst. 1998. Using Stakeholder Processes in Environmental Decision Making. Washington, DC: American Industrial Health Council.
Young, Allan. 1995. The Harmony of Illusions: The Invention of Post-Traumatic Stress Disorder. Princeton, NJ: Princeton University Press.
Zeckhauser, Richard, and W. Kip Viscusi. 1996. “The Risk Management Dilemma.” In Annals of the American Academy of Political and Social Science: Challenges in Risk Assessment and Risk Management, ed. Howard Kunreuther and Paul Slovic, 144–155. Thousand Oaks, CA: Sage.
Zinn, Jens O., ed. 2008. Social Theories of Risk and Uncertainty: An Introduction. Malden, MA: Blackwell.
Zinn, Jens O., and Peter Taylor-Gooby. 2006. “Risk as an Interdisciplinary Research Area.” In Risk in Social Science, ed. Peter Taylor-Gooby and Jens O. Zinn, 20–53. Oxford: Oxford University Press.

Index

Adaptive and integrative risk governance, 9, 139, 151, 156, 158, 169
Advanced modernity, xvii, 2–4, 7–9, 37, 68, 69, 71, 72, 76–81, 84, 89, 91–98, 100, 103, 104, 112–113, 116–119, 120, 143, 195–198, 201, 202
Ambiguity, 9, 125, 127–129, 130–131, 136–140, 143, 146–147, 150, 169, 178, 187, 192, 200
Amplification of risk, 28, 45, 126, 130, 155
Analytic-deliberative process, 9, 128, 139, 140, 167, 170, 179, 180–184, 188, 192, 193
Autopoietic, xv, 98, 101, 102, 106, 108, 114, 178
Beck, Ulrich, viii, xiv, xxiii, xxvii, xxix, 4, 8–9, 29, 35–39, 70–71, 72–84, 95–101, 196–201
Bounded rationality, 55–57
Bovine spongiform encephalopathy (BSE), 91, 126, 130
British Petroleum oil spill, 3, 80, 90
Collective agreement, 19, 22
Collective decision making, 51, 58, 150, 153, 167, 168, 176, 178
Common pool resources, 6, 7
Communication: of ambivalence, 173; of fear, 172; of opportunity, 172
Complexity, 9, 59–60, 106, 125, 127–129, 130–133, 139–140, 143, 147, 150, 169, 178, 187, 192, 200
Concern assessment, 156, 157, 160
Constraint, 1, 2, 17–19, 21, 22, 37, 62–64, 66, 92, 97, 112, 116

Cooperative risk discourse, 193
Cosmopolitan: coalition of actors, xxii; imperative, xiv
Cosmopolitanism, 74, 75, 83, 94
Critical theory, 9, 61, 67, 68, 110–113, 115
Dangers (versus risks), xiv, xx, xxii, 15, 36, 74, 87, 97, 103–106, 108, 127, 129, 131, 132
Deficit model of communication and engagement, 166
Deliberation, 82, 107, 139, 140, 143–148, 151, 160, 165, 167, 168, 173, 178, 180–184, 186, 188, 189, 192, 193, 197
Deliberative democracy, 9, 182, 189, 192
Deliberative dialogue, 107
Deliberative discourse, 143, 182
Democratization of democracy, 93, 94, 99–100
Democratization of risk management, xxix, 9, 82–84, 201
Design discourse, 143, 147, 148, 173
Dialogic democracy, 92, 94
Dialogic resonance, 107, 108
Double contingency, 106
Effectiveness, 52, 108, 173, 177, 185, 192, 201
Efficiency, ix, 40, 42, 69, 104, 173, 174, 177, 189, 192
Electromagnetic field (EMF), 131, 137
Epistemic discourse, 145, 147
Epistemological constructivism, 28, 30, 97

Fact constructivism, 26
Framing, xvi, xxiii, 6, 7, 9, 16, 23, 37, 60, 66, 67, 103, 107, 118, 143, 155, 157, 159, 168, 182
Game theory models, 56
Giddens, Anthony, viii, xiv, xx, xxvii, xxix, 4, 8–9, 29, 37, 64, 70–72, 84–94, 95–101, 196–201
Global governance model, 45
Globalization, xiv, 2, 4, 5, 42, 43, 79, 83, 87, 88, 95, 99, 117, 119, 152, 195, 198
Global social movement, 83
Governance shift, 151
Grounded realism, 31, 32
Habermas, Jürgen, xiv, xxviii, 7–9, 27, 54, 110–120, 173, 181–183, 196, 199–201
Hierarchical epistemology, 27, 29
Horizontal and vertical governance, 153
Humanizing of technology, 92–94
Inclusion, 31, 57, 145, 158, 166–169, 177, 179, 182
Individualization, xvi, 70, 72, 75, 76, 93, 199
Institutional diversity, 151
Institutionalized risk environment, 84, 87–89, 96, 99
Instrumental discourse, 143
Interdisciplinary risk estimation, 9, 150, 158, 160, 163, 164, 169
International Risk Governance Council, 6, 43, 128, 136, 144, 153, 154, 156, 157, 159, 160, 168
Large-scale technology, 33–35
Legitimacy, 37, 45, 60, 74, 90, 99, 107, 112, 145, 152, 155, 165, 173, 174, 176, 177, 192
Life politics, 92–94
Lifeworld, 7, 54, 112–120, 199–201
Logic: of risk, xxii, 79; of war, xxii, xxiii
Luhmann, Niklas, viii, xiv, xxvii, 4, 8–9, 28–29, 35–39, 102–109, 113–115, 178, 196–201
Manufactured uncertainty, xx, 72, 86
Methodological individualism, 51, 62, 63
Methodological nationalism, xxi
Ontological realism, 17–22, 27–29, 96–97
Ontological security, 53, 64, 70, 77, 89, 91, 119, 201
Optimization principle, 53
Ostensibility and repeatability principle, 19, 30–32
Participatory discourse, 143, 146, 147, 191
Phthalates, 135, 136

Planetary boundaries, 141, 142
Pre-assessment, 6, 7, 150, 156, 157
Presupposition, 14–16, 19–21, 23, 26–29, 38, 50–53, 58, 97, 98, 133, 163
Probability, xx, xxvi, 22, 23, 30–32, 36, 38, 52, 87, 88, 91, 98, 106, 119, 123, 128, 133, 139, 140, 143, 154, 155, 160–163, 169, 177, 185
Public participation, 81, 82, 120, 171
Radical constructivism, 105
Rational action framework, 8, 49, 51–53, 66
Reflective discourse, 143, 144
Reflexive modernization theory, 29, 37, 68, 69, 70, 94
Reflexivity, 79, 85, 89, 91–93, 99, 107, 197
Repeatability, 19, 30–32
Resilience approach, 135
Ripple effects, 45, 123, 125–127, 145
Risk: analysis, 6, 28, 29, 38–40, 105, 132, 133, 137, 148, 149, 151, 154, 162, 201; appraisal, 156, 157; assessment, xxvi, 8, 22, 23, 28, 34–38, 41, 45, 67, 72, 77, 90, 102, 113, 116, 118, 127, 132, 133, 136, 139, 140, 144, 147, 151, 155–157, 160, 162–164, 185, 192, 200, 201; characterization, xxvi, xxvii, 4, 9, 23, 24, 27, 32, 68, 139, 145, 157, 158, 169; communication, xxvii, 6, 9, 46, 107, 150, 166, 167, 198; conflicts, xvii, xx, xxiii, 36, 105, 107; conscience, xxvi, xxvii; decisions, xxviii, 40, 77, 81, 119, 169, 174, 177, 182, 196; dilemma, xv, 107, 170; evaluation, 9, 35, 143, 145, 146, 157, 158, 162–164, 168, 169; governance, viii, xv, xxi, xxii, 2, 4–9, 13, 23, 42–46, 57, 67, 82, 100, 105, 107–109, 111, 115, 120, 121, 123, 128, 131, 134–136, 139, 144, 145, 147, 149, 150, 151, 153–160, 162, 163, 166–173, 177, 179, 182, 189, 192, 195–197, 200; knowledge, 60, 71, 77, 89, 90, 103, 105, 167, 168, 179, 196, 197; management, viii, xvi, xxi, xxiii, xxvi, xxix, 9, 23, 34–36, 41, 42, 81, 124, 126, 127, 129, 130, 134, 148, 151, 156–158, 160, 162–170, 177–180, 182, 192, 193; optimization, 116; perception, viii, xxvi, 23, 33, 38, 40, 41, 45, 128, 157, 160, 171; policy, viii, 41, 67; society, xiii, xiv, xvi, xvii, xix, xx, xxi, xxii, xxiii, xxvii, xxxi, 4, 5, 7–9, 35–37, 40, 70–79, 81, 82, 84, 98, 100, 110, 116, 117, 195, 197–199, 202
Scientific analysis, 179, 192
Scientific community, 131
Scientization, 75, 77–79, 91
Semiautonomous systems, 59
Social cohesion, 61, 173, 174
Social communication, 39

Social construction, 14–17, 20–23, 31, 32, 43, 77, 96, 97, 103, 200
Social explosiveness, xvi, xvii, 82
Social learning, 62, 168
Social movements, xvii, 83, 92–94, 98, 99, 118, 120, 199
Social order, 8, 71, 72, 74, 76, 84, 103, 119, 196
Social systems, 4, 58, 63–66, 81, 85, 88, 100, 102–104, 108, 126, 196, 199
Socialization of nature, 88
Stakeholder, 27, 82, 90, 128, 140, 144, 145, 147, 148, 151, 153, 156, 158, 160, 165–169, 179, 181, 186–188, 192, 193, 200
Structuration theory, 64, 97
Subpolitics, 81, 83, 84, 93, 97, 99, 120
Suspension heuristic, 26, 27
Synthetic realism, 31, 32
Systematic knowledge, 140, 172, 180, 181, 186, 193
Systemic risks, 8, 9, 43–45, 104, 123–125, 127–132, 139, 140, 143, 145, 147, 149, 154, 158, 160, 161, 166, 169, 170, 171, 173, 192, 198

Technical analysis, 40
Technical progress, 70
Technocratic consciousness, 117, 201
Technological change, xxv, xxvi, 85, 117, 152
Technological development, 77, 82, 125, 146, 173
Technological innovation, xxvi, 78, 79, 94, 196
Technological risks, xiv, xxvii, 3, 41, 74, 124
Technological systems, xxvii, 33
Theory of communicative action, 7, 59, 109, 110, 182, 200
Tolerability and acceptability judgment, 145, 157
Unawareness, 77, 79–81, 91, 99
Uncertainty, 9, 125, 127–131, 140, 143, 147, 150, 169, 178, 187, 192, 200
World risk society, xiii, xiv, xvi, xvii, xviii, xix, xx, xxi, xxii, xxiii, 74, 81, 195
Worldview change, 2

Eugene A. Rosa (1941–2013) was the Edward R. Meyer Distinguished Professor of Natural Resource and Environmental Policy, Professor of Sociology, Affiliated Professor of Fine Arts, and Faculty Associate in the Center for Environmental Research, Education, and Outreach at Washington State University.

Ortwin Renn is Professor and Chair of Environmental Sociology and Technology Assessment at Stuttgart University (Germany). He directs the Stuttgart Research Center for Interdisciplinary Risk and Innovation Studies at Stuttgart University (ZIRIUS) and the non-profit organization DIALOGIK.

Aaron M. McCright is Associate Professor of Sociology in Lyman Briggs College, the Department of Sociology, and the Environmental Science and Policy Program at Michigan State University.